Winning White Bean Chili

Allow 1 1/2 hours to prepare unless you’re a supper chopper!
Makes about 6 large servings

* 1 large onion, chopped
* 5 cloves garlic, minced
* 2 jalapeno peppers, chopped
* 1 tomatillo pepper, chopped
* 1 1/2 pounds of turkey breast, chopped, or turkey hamburger
* 2 (4 ounce) cans of chopped green chile peppers
* 3 tablespoons ground cumin
* 2 tablespoons ground coriander
* 1 tablespoon dried oregano
* 1 tablespoon dried cilantro
* dash of bay leaves
* 1/4 teaspoon ground cayenne pepper (or to taste)
* 1/4 teaspoon ground white pepper (or to taste)
* 4 cans great northern beans, drained and rinsed
* 2-3 cups of chicken broth (add as needed if the chili is too thick or there isn’t enough fluid)
* shredded Monterey Jack cheese (topper)

Directions for the crock pot

Have the crock pot out. In a medium frying pan, saute the onion until soft and caramelizing. Add the jalapeno, tomatillo, and garlic and saute a few minutes. Transfer to the crock pot. Add a little oil to the pan, add the turkey, cumin, coriander, white pepper, and cayenne, and saute until the turkey turns white. Add the rest of the spices and the canned chile peppers and saute a couple of minutes.

Transfer to the crock pot.

Remove the pan from the burner, take one can of drained and rinsed beans, and add it to the pan. Smash up the beans so they soak up the oils and spices. Add to the crock pot. Add the rest of the beans to the crock pot, then add broth until it's about 1″ below the ingredients. Stir it all around until well mixed.

Cover and heat on low for 8 hours or high for 4 hours.


DevOps Interview

Did my interview Friday. Interesting in general. No real OS questions, but questions on Kubernetes and Docker, and interest in Helm. Then an actual hands-on test: configure Jenkins to upload a file from a GitHub site to Amazon Web Services (AWS).

Git is a revision control tool; GitHub and GitLab are management sites for git repositories. I've used revision control for 21 years but only started poking at git last year.

Jenkins is a deployment tool. You set up a job that pulls code from GitHub/GitLab when something changes and copies it up to a hosting site like a web server. I've used Jenkins for a couple of simple web sites I own.

I have an AWS account but have only poked at it a little. I created a Kubernetes cluster on Google as a learning project, but that's it.

So I spent 90 minutes on Google getting this going, searching for how to upload to AWS from Jenkins, and so on. I had a couple of false starts and at least one "sit back and think a minute" moment. They finally called time at 4pm with the task incomplete. I'm pretty sure I would have gotten it within a few more minutes. They had "copy directory blah up", but the directory was a level down from the workspace directory.
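
For reference, a minimal sketch of what the final Jenkins build step might have looked like, assuming an S3 bucket and the AWS CLI on the build node (the bucket name and subdirectory here are made up):

# Jenkins "Execute shell" build step (hypothetical names)
# The files live one level below the workspace, so point at the subdirectory.
cd "$WORKSPACE/site-content"
aws s3 cp . s3://example-interview-bucket/ --recursive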

I will say that I learned quite a bit from this test 🙂 Even for my personal sites. I used the test as a reason to rework my Jenkins process to exclude .git and to use a shell command instead of a plugin (less complicated).
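
As a sketch of that rework (the host and paths are placeholders, not my real setup), the shell step is basically an rsync that skips the repository metadata:

# Jenkins "Execute shell" build step (placeholder host and paths)
rsync -av --delete --exclude='.git' "$WORKSPACE/" webuser@www.example.com:/var/www/html/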

Probably a no on the job (nothing heard so far), but I took something away and need to do a little more poking.


Bacon Wrapped Chicken

Two chicken breasts
6 slices of bacon

Preheat oven to 400F

Combine:

1 1/2 teaspoon thyme
1/2 teaspoon sugar
Heaping 1/8 teaspoon cayenne
1 teaspoon salt
1/2 teaspoon black pepper

Mix it up and use it as a rub for the chicken.

Fold one slice of bacon in half and place it under each chicken breast. Wrap the other two around the chicken with the ends tucked under the folded bacon on the bottom.

Put the two pieces into a non-stick frying pan over medium heat. You'll flip them periodically to crisp the bacon on both the top and bottom of the chicken. The first cook should set the bottom enough to hold together for the flip.

Once the bacon is crispy/cooked, put the breasts onto a cookie sheet and slide it into the oven. Cook them for approximately 15 minutes, or until an inserted thermometer hits 160F.

The main problem with chicken breasts is that people cook them too long, which is why you test with a thermometer. While the juice from my chicken was a little pink, it was cooked and nice and juicy.

I also cooked some potatoes for about 30 minutes, probably overcooking them a touch but it kept the potatoes moist even for leftovers.


Installing Container Linux on VMware

I have a VMware vCenter configuration at home. Two R710 servers connected as a cluster (VMUG subscription).

I was having a little trouble installing CoreOS (aka Container Linux), in general because most tutorials use Vagrant, VirtualBox, AWS, GCS, or even VMware Workstation. None of them are a problem honestly, but I have a bit more of an involved local setup and there doesn't seem to be much in the way of how-to's for a vCenter configuration like mine.

Anyway, setting up CoreOS took a couple of tries but I got it done. It’s not all that hard honestly.

1. Download the CoreOS iso at https://coreos.com/releases
2. In vCenter, Create a new virtual machine.
3. Configure with 2 CPUs, 2 GB of RAM, and 40 GB of disk. It apparently only needs 8 GB, but updates are applied by snapshot to a second partition, so it's nice to have space.
4. Under the virtual machine settings, VM Options, Boot Options, click the ‘Force BIOS setup’ checkbox.
5. Open a console.
6. Boot the VM.
7. You'll be at a BIOS screen; under the VMRC menu, mount the downloaded CoreOS iso.
8. Under Boot, make sure you're booting to CD first.
9. Save and boot.
10. Once it's up and at a core@localhost prompt, you'll need to create a password hash for the cloud config (openssl prompts for the password and writes the hash into the file):

sudo openssl passwd -1 > cloud-config-file

11. Edit the cloud-config-file like so. The interpreter must find the #cloud-config line first, exactly as written, or it'll fail:

#cloud-config
users:
  - name: 'login-name'
    passwd: 'openssl generated passwd'
    groups:
      - sudo
      - docker
hostname: hostname.domain.name

12. Install CoreOS by running this command (-d is the target disk, -C the release channel, and -c the cloud config file):

sudo coreos-install -d /dev/sda -C stable -c cloud-config-file

13. Under VMRC, unmount the iso.
14. Reboot
15. Log in.
16. The system starts in DHCP mode if you didn't configure networking in the cloud config file. To give it a static IP, create a file in /etc/systemd/network called 00-ens192.network. It can be any file name, but starting with numbers will order the startup if you configure other bits of the system.

[Match]
Name=[interfacename]

[Network]
DNS=[dns]
Address=[ipaddress]
Gateway=[gatewayaddress]
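
As an example, a filled-in version might look like this (the interface name and addresses are just placeholders, not my lab network):

[Match]
Name=ens192

[Network]
DNS=192.168.1.1
Address=192.168.1.50/24
Gateway=192.168.1.1

Then restart networking so it takes effect:

sudo systemctl restart systemd-networkd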

And done


Presenting Homelabs To Potential Employers

A recent posting mentioned documenting your HomeLab on your resume to show you're going above and beyond what your job requires. It shows initiative and interest. I know when we're looking at candidates, we want to find people who have external geeky interests like a HomeLab. In some interviews, people have been asked to diagram out their HomeLab and are dinged if they don't have one or can't document it on the fly.

But an alternative view was also posted: is there a Work/Life balance? Are you taking the time to do other things outside of tech? Is tech your sole hobby?

Personally my main hobby is table top gaming. Board Games, War Games, Card Games, Dice Games, and Role Playing Games. Games. In fact, it's partially what got me into computers. That, and I was a typesetter on a computerized typesetting machine back in 1982 or so. Typesetting a Dungeons & Dragons character sheet got me thinking about it, and then creating a Car Wars vehicle generation program in BASIC got me a job. But I also tour on a Motorcycle, Ski, Snowshoe, and yes, learn new technologies with my HomeLab.

Since the mid-90's, my home environment has consisted of at least a separate Firewall system (generally my previous desktop when I upgraded). Back in the early 00's, I signed up for an external server hosted in Miami where I could put pictures and continue to maintain my Unix knowledge (originally an OpenBSD server, now a CentOS one).

A few years back I was gifted two Dell R710s that were being replaced at work. I've since installed vSphere to have a virtualized environment, pfSense (firewall plus) as the first virtual server, and then a bunch of servers to isolate the things I do for fun and to learn new tech. Recently I signed up for the VMware Users Group (VMUG) and their Advantage program. This gives me access to the vCenter software and the Operations options to let me turn my two ESX servers into a cluster (mimicking work much better). We're moving towards a Private Cloud using vRealize technology, so I have access to that too (not installed… yet).

I have a bunch of different servers: syslog, Spacewalk, Samba, a couple of development servers, MySQL, several web servers, a CI/CD pipeline being staged (Jenkins, Artifactory, Ansible, and GitLab), and I'm in the process of rebuilding my Kubernetes servers (3 Masters/3 Workers).

I've recently been tasked with taking ownership of the Kubernetes development process, in part due to my explorations of Kubernetes at home. I'm moving the server management scripts over to a GitLab server at work that I built, again in part due to me setting it up at home (I had a Revision Control System (RCS) setup at home and at work, and have now moved to git).

It’s helping me get more familiar with the new DevOps ideas plus the CI/CD pipeline tools, Orchestration with Kubernetes, and Microservices with Docker.

The question though is, should this kind of information be better documented on a resume? Should there be a HomeLab section or a Learning section, something that shows your interest, motivation, and desire even if your job doesn’t require you to use these tools?

There’s also the idea that such things be documented in a Cover Letter. You see the position, supply your resume, and then in the Cover, call out your HomeLab. It might be a better place for some positions where you’re looking at a specific job. Over the years though, I’ve seen positions where you supply only your resume to a company database which is then mined when a position opens. Tough to supply a cover for a job you don’t know about.

It’s something to think about.


If A Tree Falls In The Forest

I write scripts to better manage the Unix servers we’re responsible for. Shell scripts, perl scripts, whatever it takes to get the information needed to stay on top of the environment. Be proactive.

Generally scripts are written when a problem is discovered. As we're responsible for the support lab and production servers (not the development or engineering labs), and since only production servers are monitored, there are many scripts that duplicate the functions of monitoring. Unlike production, we don't need to immediately respond to a failed device in the support lab, but we do need to eventually respond. Scripts are then written to handle that.

The monitoring environment has its own unique issues as well. It's a reactive configuration in general. You can configure things to provide warnings, but it can only warn or alert on issues it's aware of. And there's still the issue that monitoring isn't available in the support lab.

I've been programming in one manner or another (BASIC, C, perl, shell, PHP, etc.) since 1982 or so (Timex/Sinclair). One of the things I always tried to understand and correct was compile time warnings and errors. Of course errors needed to be taken care of, but to me, warnings weren't to be ignored. As such, my server management scripts are more like code than 6 or 7 lines of whatever commands are needed to accomplish the task. Variables, comments, white space, indentation, etc. The Inventory program I wrote (Solaris/Apache/MySQL/PHP; the new one is LAMP) looks like this:

Creating list of files in the Code Repository
Blank Lines: 25687
Comments: 10121
Actual Code: 100775
Total Lines: 136946
Number of Scripts: 702

That's 136,000 lines of code, of which about 25% are blank lines or comments. The data gathering scripts are up to 11,000 or so lines in 72 scripts, with another 100 or so scripts that aren't managed via a source code repository.

The process generally consists of data captures on each server which are pulled to a central server, then reports are written and made available. One of the more common methods is a comparison file: a verified file compared to the current file. A difference greater than 0 means something's changed and should be looked at. With some of the data being gathered being pretty fluid, checking for every little issue can be pretty daunting, so it's a comparison. This works pretty well overall, but of course there's setup (every server data file has to be reviewed and confirmed) and regular review of the files vs just an alert.
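
As a minimal sketch of the comparison-file idea (the directory layout and file names here are made up), the report boils down to a diff between the verified baseline and the current capture:

#!/bin/sh
# Compare today's capture against the verified baseline (hypothetical paths).
# Any output means something changed and should be reviewed.
for server in $(cat serverlist.txt); do
  diff "baseline/${server}.txt" "current/${server}.txt" > /dev/null 2>&1 || \
    echo "${server}: data changed, review needed"
done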

Another tool is the log review process. Logs are pulled from each server to a central location, then processed and filtered to remove the noise (the filter file is editable, of course), and the final result is concatenated into a single log file which can then be reviewed for potential issues. In most cases the log entries are inconsequential. Noise. As such, they are added to the filter file, which reduces the final log file size. This can be valuable in situations like the lab where monitoring isn't installed, or where the team doesn't want to get alerted (paged) but does want to be aware.
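
A sketch of that filter step (the paths are placeholders), assuming the pulled logs land under a central directory and the filter file holds one noise pattern per line:

# Drop known noise, then concatenate what's left into the daily review file.
cat /data/logs/*/messages | grep -v -f /data/filters/noise-patterns.txt \
  >> /data/reports/daily-filtered.log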

But there’s a question. Are the scripts actually valuable? Is the time spent reviewing the daily output greater than the time fixing problems should they occur? These things start off as good intentions but over time become a monolith of scripts and data. At what point do you step back and think, “there should be a better way”?

What made me question it was being moved to a new team. In the past, I wrote the scripts to help, but if I'm the only one looking at the output, is it really helping the team? In moving to the new team, I still have access to the tools but I'm hands off. I can see an error that happened 18 months ago that hasn't been corrected. I found a failed drive in the support lab 6 months after it failed. I found a failed system fan that had failed who knows how long ago.

There was an attempt to even use Nagios as a view into the environment but there are so many issues that again, working on them becomes overwhelming.

The newest process is to have a master script check quite a few things on individual servers and present that to admins who log in to work on other tasks. Reviewing that shows over 100,000 checks on 1,200 systems and about 23,000 things that need to be investigated and corrected.
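
A toy sketch of how that might be presented at login (the mechanism and paths here are my own illustration, not the actual script): a profile.d snippet that prints the latest check summary when an admin logs in:

# /etc/profile.d/server-checks.sh -- hypothetical login report hook
SUMMARY=/var/log/server-checks/summary.txt
if [ -r "$SUMMARY" ]; then
  echo "Server check summary (last run: $(date -r "$SUMMARY"))"
  echo "Items needing review: $(grep -c FAIL "$SUMMARY")"
fi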

But is the problem that the scripts aren't well known enough? Did I fail to transition the use of them? Certainly I've passed the knowledge along in email notifications (how the failure was determined) over time, and the scripts are internally documented as well as documented on the script wiki.

If a tree falls in the forest and no one is around, does it make a sound?

If a script reports an error and no one looks at the output, is there a problem?

The question then becomes, how do I transfer control to the team? I’ve never said, “don’t touch” and have always said, “make changes, feel free” but I suspect there’s still hesitation to make changes to the scripts I’ve created.

The second question is more about whether the scripts are useful. Just because I found a use for them doesn't mean the signal to noise ratio is valuable, same as the time to review vs the time to research and resolve.

Finally, if the scripts are useful but the resulting data unwieldy, what's an alternate method of presenting the information that's more useful? The individual server check scripts seem to be a better solution than a centralized master report with 23,000+ lines, but a review shows limited use of the login-time review process.

Is it time for a meeting to review the scripts? Time to see if there is a use, whether it's valuable, and whether it can be trimmed down? Or is there just so much work to manage it that the scripts, while useful, just can't be addressed due to the reduction of staff (3 admins for 1,200 systems)?


Yearly State of the Game Room 2017

And here we are again. The yearly State of the Game Room. With the move to a new place, I got to move all the games into a single room which is a positive thing and lets me display them all rather than jumping between a couple of rooms to locate a game. I also have a lot better lighting in this room making gaming a lot easier.

I've added a good 6 Kallax shelf squares of games over the year. Bunny Kingdom and The Thing are two of the ones we enjoyed the most and the ones we'll be taking to the end of year bash. Another one we played was The Others. It was a pretty interesting game with lots of bits, but it got a bit long a couple of times. I can do two or three hours of a game before I start wanting to end it.

Here's the list of games I can remember adding this year, compiled by looking at the shelves. I'm sure I missed a couple, especially since we moved this year. I do have an inventory system so I can keep track, but I haven't been keeping up on it. I'm working on getting it updated over the next few weeks or so. Maybe I'll have a better list for next year 🙂

* 7th Sea RPG
* 7 Wonders: Duel Pantheon
* Apocalypse Prevention, Inc RPG
* Arkham Horror the Card Game
* Blood Bowl
* Bottom of the 9th
* Bunny Kingdom
* a few Call of Cthulhu books
* Castles of Burgundy Card Game
* Charterstone
* Clank
* Conan RPG
* Conan Board Game
* Dark Souls
* DC Deck Building Multiverse Box
* Dead of Winter Warring Colonies
* Dice Forge
* a few Dungeons & Dragons books
* Elder Sign expansions
* Eldritch Horror expansions
* Evolution Climate
* Exit
* Fallout
* Fate of the Elder Gods
* Feast For Odin
* First Martian
* Five Tribes expansions
* Fragged RPG
* Gaia Project
* Jaipur
* Jump Drive
* Kemet
* Kingdom Builder
* Kingdomino
* Kingsport expansion
* Legendary expansion
* Lovecraft Letters
* Magic Maze
* Mansions of Madness 2nd Edition expansions
* Massive Darkness
* Mountains of Madness
* My Little Pony RPG
* Netrunner packs
* New Angeles
* The Others
* Pandemic Second Season
* Pandemic Iberia
* Pandemic Rising Tide
* Paranoia RPG (Kickstarter)
* a few Pathfinder books
* Queendomino
* a couple of Red Dragon Inn expansions
* a few Savage Worlds books
* Scythe expansions
* Secret Hitler
* a few Shadowrun 5th books
* Sherlock Holmes expansions
* Cities of Splendor
* Talon
* Terraforming Mars
* The Thing
* Ticket to Ride Germany
* Time Stories
* Twilight Imperium
* via Nebula
* Villages of Valeria
* Whistle Stop
* Whitechapel expansion
* Whitehall
* a few X-Wing minis.

After getting things organized and making room for the next year, I have pictures.


‘If You Were Smart, You’d Be In Engineering’

This is a comment I heard a few years back from a manager on the Engineering side of the house. Of course it's stated in a bad way, but the sentiment behind it is that to advance in technology, your next step should be in Engineering (or Development) and you should be working towards that.

As a Systems Admin for many many years, I’ve found that I’m good at the job. Managing systems and providing solutions in Operations to improve the environment. It also plays to my own strengths and weaknesses. Troubleshooting problems, fixing them, and moving on vs spending all my time on a single product as a programmer or engineer. I’ve had discussions with prior managers about advancement both in Engineering and in Management. I’ve even attended Management Training. After discussion, it was felt that I should work with my strengths.

A few years back, I was moved into an Operations Engineer role. The company started implementing a Plan, Build, Run model and I was moved into the Build team. No real reason was given for the choices. The concept was to move a portion of the team away from day to day maintenance duties in order to focus on Business related Product deployments. It's taking too long for the entire team to get Products into the field, what with being on call and dealing with maintenance issues like failing hardware, so let's pull three people away from the team to be dedicated to Business related Projects.

Sadly, there was no realization that the delays were in part due to multiple groups in Operations having their own priorities. Delays were more due to the inability of the groups to fully align on a project. With three business groups, each with their own priorities, the time to deploy really didn't change much and there were fewer people to do the work.

As we moved forward, the company redefined the Build role. Titles changed and we're now Enterprise Systems Engineers. One of the new goals was to move away from server builds and into server automation: build one or two servers for the project in order to create automation, and then turn the rest of the project over to the Run team as a turnkey solution to complete.

In addition, a new project came along that was designed to follow the Continuous Integration/Continuous Delivery (CI/CD) process. The concept is to deploy software using an incremental process vs the current 6 month to 18 month process.

The current process is to gather together all the change requests that identify problems in the product plus the product enhancement requests and spend time creating the upgraded product. It can take months to get this assembled, get code written and tested, fix problems that crop up, get it through QA testing, and then into Operations hands. And as noted before, deployment itself can take several months due to other projects on my plate and tasks on other groups plates.

The CI/CD process means that there's a pipeline between Development and Operations. A flow; picture a stream. A problem is discovered, it's passed back to Development, they fix it, put it into the flow, automated testing is performed to make sure it doesn't break anything, it's put into the production support environment, then production. Rather than taking months to assemble and fix a large release, you have a single minor release and you can identify, fix, and deploy it into production in a matter of minutes. Even better, the Developer or Engineer would be on call for their product. If something happened, they'd be paged along with the teams, could potentially fix it right then, put it into the flow, and let the fix get deployed. Automated testing is key here though, and part of the fix is to fix or add new tests to make sure it doesn't happen again, or even with other products.
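
As a very rough sketch of one pass through that flow, scripted in shell (the repository URL, test script, and target host are all invented for illustration):

#!/bin/sh
# One trip through the pipeline: pull the fix, gate it with tests, deploy it.
set -e
git clone https://gitlab.example.com/team/product.git
cd product
./run-tests.sh                               # automated testing gates the release
rsync -av --exclude='.git' ./ deploy@prod.example.com:/opt/product/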

This process is pretty similar to how a single programmer working on a personal project might manage their program(s). I have a project I've been working on for years that has reached about 140,000 lines of code (actually about 98,000 lines of actual program, with the remaining 42,000 lines being comments or blank lines). I've created a personal pipeline, without automated testing though, and an identified problem can be fixed and deployed in a few minutes. New requests generally take about 30 minutes to complete and push out to the production locations.

This CI/CD oriented project was interesting. Plus it included creating a Kubernetes environment. I’d had someone comment about DevOps several years earlier so I’d been poking about at articles and even reading The Phoenix Project, a book about CI/CD and DevOps. A second project followed using CI/CD and Kubernetes. A part of the CI/CD process was utilizing Ansible and Ansible Tower, a tool I’ve been interested in using for a few years.

As a Systems Administrator, one of our main responsibilities is to automate. If you’re doing a task more than once, automate it. Write a script. I started out hobby programming, then working part time and then full time, back in the early 80’s. Even though I moved into system administration, I’ve been hobby programming and writing programs to help with work ever since. I believe it’s been helpful in my career as a Server Administrator. I have deeper knowledge about how programs work and can write more effective scripts to manage the environment.

In the new Build environment, it’s been even more helpful. It lets me focus even more on learning automation techniques with cloud computing (I’ve created Amazon Web Services and Google Cloud Services accounts) and working towards automating server builds. Years back I started creating “Gold Image” virtual machine templates. I also started in on scripting server installations. The rest of the team has jumped in and we have an even better installation script framework. Lately I’ve been working on a server validation script. For the almost 1,000 systems we manage, the script is reporting on 85,000 successful server checks with errors called out that can be corrected incrementally by the team (a local login report plus a central reporting structure). As part of this, I’ve started creating Ansible Playbooks and added scripts to our Script framework to make changes to systems. Eventually we’ll be able to use this script to create an automation process to make sure servers are built correctly the first time.

It’s been fun. It’s a challenge and sort of a new way of doing things. I say “sort of” because it still falls into the “automate everything” aspect of being a Systems Admin. With Virtual Machines and Docker Containers, we should be able to further automate server builds. With scripts defining how the environment should look, we should be able to create an automatic build process. Plus we should be able to create a configuration management environment as well. A central location defining a server configuration that ensures the server configuration is stable. You make a change on the server and the configuration script changes it back to match the defined configuration. Changes have to be made centrally.
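
A toy sketch of that enforcement idea (the central path and the file being checked are invented for illustration): compare a server file against the centrally defined version and put it back if it drifted:

#!/bin/sh
# Toy configuration-enforcement check (hypothetical central config location).
CENTRAL=/central/configs/ntp.conf
LOCAL=/etc/ntp.conf
if ! cmp -s "$CENTRAL" "$LOCAL"; then
  cp "$CENTRAL" "$LOCAL"          # change it back to match the defined configuration
  logger "config drift corrected: $LOCAL"
fi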

DevOps though. It seems like this is really “Development learns how to do Operations”. “If you were smart, you’d be in Engineering.” Many of the articles I read, many of the posts on the various forums, many of the discussions I’ve followed on Youtube and in person really seem to be focused on Development becoming Operations. There doesn’t seem to be a lot of work on how to make Operations part of the process. It seems to be assumed that a Systems Admin will move into Engineering or Development or will move on to an environment that still uses the old methods of managing servers. Alternately we’ll be more like COBOL programmers. A niche group that gets called in for Y2K like problems.

I like writing code and I'm certainly pursuing this knowledge. I have a couple of ESX servers at home with almost 100 VMs to test Kubernetes/Docker, work on how to automate server deployments, and work on pipeline products like Jenkins and GitLab. I use them for my own sites in an effort to get knowledgeable about such a role, and am starting to use them at work. Not "DevOps", as that's not a position or role; that's a philosophy of working together. A "DevOps Engineer" is more along the lines of an Engineer working on pipeline products: working to get Product automated testing in place, but also working on getting Operations related automated testing in place.

If you’re not taking time to learn this on your own and the company isn’t working on getting you knowledgeable, then you’re going to fall by the wayside.


git Version Control for rcs Users – Setting up gitlab

Next up is to install GitLab. The problem will be the inability to access the 'net from work, so a manual install will need to be done. We'll see how that goes.

GitLab is pretty easy to install; just follow the website. I'm going to build the server, generate an RPM list and a file list, install GitLab, and then run a diff to see what changed. It may or may not help in setting up the GitLab server at work.
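
A sketch of how those before/after lists get captured (the file names are just what I happened to pick):

# Before the install: snapshot the package list and the file system.
rpm -qa | sort > rpm-before.txt
find / -xdev -type f 2>/dev/null | sort > files-before.txt

# ... install gitlab-ce per the website ...

# After the install: snapshot again and diff.
rpm -qa | sort > rpm-after.txt
find / -xdev -type f 2>/dev/null | sort > files-after.txt
diff rpm-before.txt rpm-after.txt
diff files-before.txt files-after.txt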

Comparing the before and after file and RPM listings, there were no differences prior to the install. Doing the installation per the website shows the following packages and dependencies being installed:

Installing:
 gitlab-ce                                   x86_64                      10.1.3-ce.0.el7                       gitlab_gitlab-ce                      353 M
Installing for dependencies:
 audit-libs-python                           x86_64                      2.4.1-5.el7                           jumpstart                              69 k
 checkpolicy                                 x86_64                      2.1.12-6.el7                          jumpstart                             247 k
 libsemanage-python                          x86_64                      2.1.10-18.el7                         jumpstart                              94 k
 policycoreutils-python                      x86_64                      2.2.5-20.el7                          jumpstart                             435 k
 python-IPy                                  noarch                      0.75-6.el7                            jumpstart                              32 k
 setools-libs                                x86_64                      3.3.7-46.el7                          jumpstart                             485 k

For a local installation without access to the ‘net, as long as I install the above dependencies on the server and then the gitlab-ce rpm, I should be good.

Rebuilding here in a sec to test my theory.

Confirmed. I installed the above packages and once done, installed the gitlab rpm as noted on the website:

export EXTERNAL_URL=http://lnmt1cuomgit1.internal.pri
rpm -ivh gitlab-ce-10.1.3-ce.0.el7.x86_64.rpm

Ready to be used…

Jenkins next…


git Version Control for rcs Users – Configuring gitlab and Jenkins Servers

This is intended to document the steps involved in moving from an RCS based environment to a git based environment. In previous posts, I showed the environment I’m currently working in and the progress to changing from using RCS to using git. I duplicated the process in installing code to the target servers so that I can now use git from the command line very much like I used rcs and my scripts. Now I want to move a bit farther into the devops side of things, learning about setting up an easier way for the team to manage the scripts and code, providing a method for external teams to access some of the code, and replacing my sync scripts with Jenkins.

I’ll need to duplicate my environment again, this time with even the same server names. Since I have an ESX host at home, I can set up several servers that effectively match my official environment.

* git server: lnmt1cuomgit1
* Jenkins server: lnmt1cuomjkns1
* Target Tool server: lnmt1cuomtool1
* Target Status server: lnmt1cuomstat1
* Target Inventory server: lnmt1cuominv1
* Worker server: This will be a local VirtualBox system where I can pull and test the scripts and the two websites.

Again, this is partly to help me manage my personal sites better, but also to help with the work effort to get us more focused on Dev type tasks.

The Source server will have GitLab and Jenkins installed; since Jenkins uses port 8080, they both should be able to cohabitate. Update: and after testing, that's a big nope. So, two different servers.

At least for the PHP site (Inventory), I'll need to do a bit of rewriting, as the settings.php file also contains the passwords for the database. Either the settings file needs to be replaced by a table, or the passwords will need to be pulled from an external file outside of the site directory structure. The main bits right now, though, are the scripts the team actually uses. The Inventory has report scripts, data uploads, and of course the front end to the database, which the scripts don't touch.

Environment Specs:

Server           CPUs   RAM (GB)   Disk (GB)   /usr   /opt   /var
lnmt1cuomgit1    2      4          40          4      10     4
lnmt1cuomjkns1   2      4          40          4      8      grow
lnmt1cuomtool1   2      4          50          16     1      16
lnmt1cuomstat1   2      4          40          4      1      grow
lnmt1cuominv1    2      4          40          4      1      grow

On to gitlab…
