Media Bubbles

Back when I was first starting out in life on my own, I read Reader’s Digest. Initially for the jokes, Word Power, and articles that piqued my interest. As I expanded my reading to newspapers (the Washington Post and local papers), I found Reader’s Digest was pretty clearly a Christian-oriented magazine that spun news in ways favorable to Christians and, of course, put negative spin on non-Christian activities. The articles were interesting, but I found the actual News was more accurate. It was News: simple reporting of the facts without apparent spin. I did read the opinion columns on the Editorial page (inside back page of the A section), William Raspberry being one of the more notable columnists. As I got older, I picked up a few other papers now and then: the Washington Times, and USA Today when it started out. I found they also spun News articles, but in ways that didn’t align with my own leanings. I felt the Washington Post reported accurately. Certainly it reported News in a way I accepted.

Internet is next though. Folks now think of Facebook, or maybe MySpace if you’re a bit older. But back when I first got on the ‘net, it was Usenet. Usenet had all sorts of interesting groups. Technical support groups for just about anything you could think of, no matter how obscure. Recreational groups like rec.humor and rec.humor.funny. But others as well, again on just about anything you could think of. Because new groups had to be approved before servers would carry them, the frivolous, fringe, or even illegal topics ended up in the Alternative (alt.*) groups. You could find interesting stuff there, of course, like alt.folklore.urban, but you really had to be thoughtful about where you wandered or you might get into some bad stuff. Note that much of the FAQ from that group turned into Snopes.

Then email. Sure, email had been available for quite some time already, but now we had family members and non-technical friends getting email. Mainly from work, but you had AOL for personal access. The problem here was the silly Urban Folklore stuff folks would pass around to all their friends. “Can you believe this?” I’d check alt.folklore.urban’s FAQ and then Snopes and send links to Snopes off to friends and family. Eventually they either checked Snopes first or realized I wasn’t going to “OMG” the email and pass it along, and dropped me from their distributions 🙂

Back to the News though. I’d get interesting news bits that aligned with my own standards from Usenet. The bulk of it was technical, related to what I do professionally and as a hobby. I remember an old article on Microsoft and how they didn’t have much of a Government Lobby to advance their agenda for their products (Windows). But it was News relevant to my own niche. Oh, I read the Paper, but was getting more and more annoyed with the number of Ads. There’d be a 3/4 column of story with a couple of continuations in columns 2 and 3, but 3/4 or more of the page was Ads. And in the middle of Section A, two or four full-page Ads. At one point I considered getting two copies of the paper and cutting out the Ads just to see how much actual News was in there.

Bit of a side note: a lot of the tech and “Guy Stuff” was advertised or reported on in the Sports section. As I didn’t follow sports, I seldom saw those ads. I did read the Lifestyle section, so I got to see all the less “Guy” Ads.

I do want to point out something relevant to where I’m going here. Way back when there were only a few channels on TV, the FCC enacted the Fairness Doctrine (in 1949). Because access to News was so limited, TV channels had to provide airtime to folks who wanted to discuss controversial issues, forcing those channels to be ‘honest, equitable, and balanced’. In 1987 the FCC, with President Reagan’s backing, eliminated the Doctrine, reasoning that with Cable Television folks could get alternate viewpoints from an alternate channel.

I find this problematic, as folks would actually have to find these channels and would then be in their own little bubble. Mainstream News organizations like ABC, CBS, and NBC didn’t need to present these viewpoints any more. To me, this is the start of the Media Bubble. From a conspiracy viewpoint: Rupert Murdoch bought into Fox in 1985, consolidating channels that had mostly been the alternate local stations in their regions. Could pressure from Rupert Murdoch have influenced the GOP?

Read about The Fairness Doctrine here.

As to Politics and the Media Bubble. In general I was somewhat aware of politics. Not active, and not more than noting various bits as they occurred. I did vote, in general, but didn’t really follow everything going on. As I got older, I started paying more attention, a little at a time. I did think a businessman as president might prove good for the country; Lee Iacocca and Ross Perot were the ones I was thinking of. One vote I really paid attention to was in the mid-90’s: a measure to raise the sales tax from 4% to 4.5%. After reading the news and paying some attention, I voted against it. My reasoning was mainly that the county (Fairfax) wasn’t doing a good job with my existing tax money, so why give them more? When the measure failed, the business folks were surprised.

Since then I’ve gotten more thoughtful in my voting and in understanding issues as they pertain to me and my beliefs. I’m still not keeping up on every little thing; once I’ve voted, I kind of let things run and only pay attention when it’s time to vote again. In addition, I’m reading more online. Digital news, but really only when things pop up on one of my various discussion sites, meaning generally related to my hobbies: motorcycles, gaming, and computers. Occasionally something political would pop up, and of course I watched the Presidential elections.

And over the past 20 years, more folks have gotten online. It’s easier to create websites and post content. You can find a web site for pretty much any conspiracy theory, from Anti-Vaxxers to ChemTrails to, well, whatever feeds your beliefs. GeoCities and then MySpace gave you a single place to post such things. Then sites like reddit, where you can create sub-reddits for just about anything. And of course Social Media pops up. It’s cool to keep in touch with family and friends, but you’re now open to those same weird emails you got from Grandma about Coke dissolving a steak overnight. You can respond with ‘this is nonsense’, but Grandma has 100,000 other friends and now she’s a force to be reckoned with. With Obama getting into office, even sillier things popped up, like the Birther nonsense, or Secret Muslim, or maneuvers in Texas where he’d be taking over the US. Ultimately I just blocked the sites my family and friends posted.

With the recent election, I was paying about the same attention as usual. Reading the same sites and keeping up on the news. I don’t watch Fox News as they’re pretty right leaning and their opinion folks are pretty out there to me.

But.

In the run-up, as Trump became the front runner, I was seeing more false stories show up in my Facebook feed. I posted rebuttals and advisories about reading further into what’s posted, but as we know, false stories feed narratives, feed folks’ beliefs. These things spread over Social Media like wildfire. You can try to prove they’re wrong, but Grandma’s 100,000-strong Granny Force is bigger than you, little peep. I posted queries several times after all the ‘Fake News’ quotes Trump spouted: “What news are you reading or watching?” No response. I worked on expanding my own reading beyond the tidbits I got here and there from the Washington Post, New York Times, CNN, and BBC. Some folks steered me to Al Jazeera, which I was hesitant about but did read a few articles from, and of course NPR, where I’d occasionally read an article, though more oriented to my hobbies.

An interesting Pew Report on news after the election pointed out that the majority of Conservative voters got most of their news from Fox News (40%), with CNN next at 8%. Liberal voters, though, were spread around much more evenly, reading a broad range of news. Folks speak of Media Bubbles, but it appears the biggest bubble is on the Conservative side, unless you assume every other news organization is Liberal.

Anyway, I’ve expanded further. I purchased subscriptions to both The Washington Post and the New York Times and even created an account on Fox where I do read an occasional article. Yes, they’re biased but I can compare their bias with every other news organization and their biases.

Posted in About Carl | Leave a comment

RHCE Test

Well, when I took the test last time I received a 130 score. The total possible is 300 and 210 is passing, so not all that good. Studying was mainly for the extra stuff we don’t do (selinux and firewalld) and stuff we don’t do often, like managing systems through yum (package manager), NFS, and Samba. And you have to break in to the system (reset root’s password) in order to proceed. I was beating my head trying to figure that one out.

Since then I’ve built a replacement firewall for my home environment using firewalld so I’m more familiar with it and at work we’ve started working with Satellite and yum to manage systems so I’m a lot more familiar with that.

Okay, study deeper this time. I have a more robust environment and can set up and use the same sorts of tools that will be used on the exam. I snagged a book to study and did a few blog posts here to document how things actually worked for me. I memorized how to accomplish some tasks in the ‘Red Hat Way’ vs just editing a file to make a change. Installed selinux on all servers and configured firewalld.

Took the test. 193 score. Sooo close.

Observations:

First main observation, after discussing it with Jeanne: I really don’t study for these sorts of things; I’m validating my own knowledge. So testing on things we don’t do will be my blind spot. In this case I did hit the books more over the two weeks prior to the test, but clearly that wasn’t enough.

For example, I set up a kerberos server and client several times in the 2 weeks prior to the test. Had that down without a problem. One of the tasks is to set up a kerberos server. Running kadmin builds the key file, a random hash used to encrypt sessions. It can take minutes to generate as it’s pulling entropy from /dev/urandom and the like. In the test, they simply provided the key file. The problem? Where does the file go??? As I don’t use Kerberos to actually manage users, I really didn’t know where the file belongs. Didn’t even know where to look for the information. I set up the configurations on the server and client, including the NFS kerberos configuration, but had no idea if it would have worked.

There were other odd things that slowed me down. One of the requirements is to set up a bonded/teamed interface. The systems have three interfaces: eth0, eth1, and eth2. eth0 is the main interface, eth1 and eth2 are to be bonded, and the bond should keep working if either interface is down. Standard bonding. I’ve been doing it at work for the past 9 years, including Solaris IPMP. But I’m trying to use the RH7 commands, so nmcli con add, except I added eth0 to the bond. Rats. Use nmcli to reconfigure eth0 and then properly configure bond0 with eth1 and eth2. Unfortunately, and it took minutes for me to figure this out, eth0 wasn’t managed through nmcli. I had to check the other system’s ifcfg-eth0 file to recreate the first system’s ifcfg-eth0 file and move forward. Plus there was some issue with eth2.
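For the record, the nmcli sequence I was aiming for looks roughly like this. A sketch only, using the 7.1-style syntax and placeholder addresses, with eth1 and eth2 as the spare interfaces:

```shell
# Sketch: active-backup bond over eth1/eth2 (addresses are placeholders)
nmcli con add type bond con-name mybond0 ifname bond0 mode active-backup
nmcli con mod mybond0 ipv4.addresses 192.168.1.10/24
nmcli con mod mybond0 ipv4.gateway 192.168.1.1
nmcli con mod mybond0 ipv4.method manual
nmcli con add type bond-slave con-name bond0-eth1 ifname eth1 master bond0
nmcli con add type bond-slave con-name bond0-eth2 ifname eth2 master bond0
nmcli con up mybond0
cat /proc/net/bonding/bond0   # verify both slaves and which one is active
```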

Same thing with IPv6. Change the following IPs on eth0. I know the nmcli commands but not what the actual ifcfg keywords are. Is it IPADDR6 (that didn’t work) or what? Blah!
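For later reference, the ifcfg keyword is IPV6ADDR (with IPV6INIT=yes), and the nmcli property is ipv6.addresses. A sketch with a made-up ULA address:

```shell
# nmcli way (address is a made-up example)
nmcli con mod eth0 ipv6.addresses 'fddb:fe2a:badb:abe::1/64'
nmcli con mod eth0 ipv6.method manual
# or in /etc/sysconfig/network-scripts/ifcfg-eth0:
#   IPV6INIT=yes
#   IPV6ADDR=fddb:fe2a:badb:abe::1/64
```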

I got the iSCSI server set up but couldn’t get the client talking to it. It was using a block device, which I did get working on my home sandbox. Troubleshooting it was a pain, especially when you can’t pop out to google to query some log messages (if there were any).

Heck, I even got the NFS mount working with selinux (semanage fcontext -a -t public_content_rw_t "/shared(/.*)?" off the top of my head; I’ll check the page to be sure 🙂 ).
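The full sequence I used in the sandbox, for reference (same /shared directory as in the task; the package install is only needed if semanage is missing):

```shell
# Sketch: label a share and apply the context
yum install -y policycoreutils-python             # provides semanage
semanage fcontext -a -t public_content_rw_t "/shared(/.*)?"
restorecon -Rv /shared                            # apply the new policy
ls -Zd /shared                                    # verify the context took
```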

Anyway, signed up for another test and I’ll beat on the sandbox again even harder.

Posted in Uncategorized | Leave a comment

Skiing!

I planned out our Valentine’s week vacation starting last February. Scheduled the room, bought two season passes last summer, paid for a semi-private ski lesson for her (a 5-person class vs a group of 20), all out.

Got there Saturday, rented boots for her, and skis and poles for us. Got settled and Sunday morning, went to Winter Park. Took a bit to find her class and I headed up to get my legs back. Been 8 years since I’d been up so it took a couple of runs to get it back. Finally at 3:30 I headed down, picked her up, back to the room and out to dinner.

Personally I’m partial to Chicken Marsala so I ordered Pollo al Marsala. Fettuccini was a bit cheesier than I prefer and virtually no marsala flavor in my opinion. An okay meal but not outstanding. I once gave a $20 tip directly to a cook due to an outstanding Chicken Marsala I had for dinner.

Anyway, 4am rolls around. Up, bathroom, NOW!

Food poisoning. I spent the day Monday either in bed or in the bathroom with dinner making its escape any way it could. By the third time, I was releasing Friday’s breakfast and hoping for death. Jeanne comforted me and made sure I had fluids to stay hydrated. Fruit Juice Gatorade is horrible.

I felt a bit better that night and Tuesday we tentatively headed out for breakfast. I opted for oatmeal and toast and was only able to eat about half and a couple of bites of toast. We did some slight walking but I was pretty weak. But not hungry. All day.

We finally hit a different place for Valentine’s dinner. I had trout to have something mild but only had about half. Still full feeling from breakfast.

Wednesday morning. Now Jeanne is ill. I suspect close association due to us being in the one room while I was sick. I did wash hands and face and brush teeth but floating particles and all. As I was still full feeling, easily out of breath, and weak, I said we should just bail 2 days early. We won’t be well enough to ski but at least we’ll be away from the pest hole.

Took several pauses in getting packed and out. I helped Jeanne out, checked out (and yes, informed the staff -again- about our sickness and the room). Two hour ride home, not too bad. Didn’t need to stop. Washed -all- clothes whether or not we wore them. Crashed Thursday.

I checked in with my health care provider. “Hey, still liquid poo, feel full, ideas?” She said probably an inflamed intestinal tract. Eat bland foods for a few days to reduce irritation and it’ll get better. So bananas, rice, applesauce, tea, and toast. Better today but not fully restored.

So. We skied for a few hours Sunday. Wheee.

On the plus side, Jeanne was super worried about learning to ski. She’s a lot more confident and ready to go again, maybe for a Saturday jaunt in 2 weeks.

But meals next time are at Wendy’s!

Posted in Uncategorized | 2 Comments

RHCE Cheat Sheet

Just the commands ma’am. I can follow the links and read the books but ultimately I just want a cheat sheet to remind me what the actual commands are after all this studying.

Memorize This!

The following bits are the harder to remember, less often used bits. Basically commands with options I tend to forget.
Networking: nmcli con add type team con-name myteam0 ifname team0 config '{"runner": {"name": "loadbalance"}}'
iSCSI: iscsiadm --mode discovery --type sendtargets --portal 192.168.1.53 --discover
iSCSI: iscsiadm --mode node --targetname iqn.2017-02.pri.internal:target --portal 192.168.1.53:3260 --login
HTTP: openssl req -new -x509 -nodes -out /etc/pki/tls/certs/host.internal.pri.crt -keyout /etc/pki/tls/private/host.internal.pri.key -days 365
Kerberos/NFS: mount -t nfs4 -o sec=krb5 enwd1cuomnfss1.internal.pri:/home/tools /mnt
MariaDB: grant all on test.* to user@localhost identified by 'password';

Password Reset 1

At boot kernel screen
‘e’ to edit
At linux16, add rd.break enforcing=0
Ctrl-X to start
At prompt, mount -o remount,rw /sysroot
chroot /sysroot
passwd – change root password
selinux?
restorecon /etc/shadow
touch /.autorelabel works but is slow as it relabels the system
exit,exit

Password Reset 2

At boot kernel menu, ‘e’ to edit
At linux line, remove rhgb and add init=/bin/sh
At shell, /usr/sbin/load_policy -i
At shell, mount -o remount,rw /
At shell, passwd root
At shell, mount -o remount,ro / (flushes memory)
exit, exit

Networking

man nmcli-examples
nmcli con add con-name ens256 ifname ens256 type ethernet ip4 192.168.1.203/24 gw4 192.168.1.1
nmcli con mod my-con-em1 ipv4.dns "192.168.1.1"
nmcli con mod my-con-em1 +ipv4.dns 8.8.8.8
nmcli con mod my-con-em1 ipv6.dns "2001:4860:4860::8888 2001:4860:4860::8844"
nmcli con mod ens256 ipv4.never-default yes
nmcli -p con show ens256

Networking: Bonding

nmcli con show
nmcli con add type bond con-name mybond0 ifname bond0 mode active-backup
7.0: nmcli con mod mybond0 ipv4.addresses "192.168.1.10/24 192.168.1.1"
7.0: nmcli con mod mybond0 ipv4.method manual
7.1: nmcli con mod mybond0 ipv4.addresses 192.168.1.10/24
7.1: nmcli con mod mybond0 ipv4.gateway 192.168.1.1
7.1: nmcli con mod mybond0 ipv4.method manual
nmcli con add type bond-slave con-name bond0-eth0 ifname eth0 master bond0
nmcli con add type bond-slave con-name bond0-eth1 ifname eth1 master bond0
nmcli con up mybond0
nmcli con show
/etc/sysconfig/network-scripts/ifcfg-[bond-interface]

DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
NAME=mybond0
ONBOOT=yes
IPADDR=192.168.1.72
PREFIX=24
GATEWAY=192.168.1.1

/etc/sysconfig/network-scripts/ifcfg-[slave-interface]

NAME=bond0-ens192
DEVICE=ens192
ONBOOT=yes
MASTER=bond0
SLAVE=yes

Networking: Teaming

nmcli con show
nmcli con add type team con-name myteam0 ifname team0 config '{"runner": {"name": "loadbalance"}}'
7.0: nmcli con mod myteam0 ipv4.addresses "192.168.1.10/24 192.168.1.1"
7.0: nmcli con mod myteam0 ipv4.method manual
7.1: nmcli con mod myteam0 ipv4.addresses 192.168.1.10/24
7.1: nmcli con mod myteam0 ipv4.gateway 192.168.1.1
7.1: nmcli con mod myteam0 ipv4.method manual
nmcli con add type team-slave con-name team0-slave0 ifname eth0 master team0
nmcli con add type team-slave con-name team0-slave1 ifname eth1 master team0
nmcli con up myteam0
nmcli con show

Networking: IPv6

ip addr show eno16777984
nmcli con show eno16777984 | grep -i ipv6
nmcli con mod eno16777984 ipv6.addresses 'fddb:fe2a:badb:abe::1/64'
nmcli con mod eno16777984 ipv6.method manual
nmcli con down eno16777984
nmcli con up eno16777984
ip addr show dev eno16777984
/etc/sysconfig/network-scripts/ifcfg-[interface]

IPV6INIT=yes
IPV6ADDR=fddb:fe2a:badb:abe::1/64
IPV6_DEFAULTGW=2001:db8:0:1::1

Networking: IPv6 Troubleshooting

ping6 [ipv6 address]
ip -6 route

Networking: Routing

echo 1 > /proc/sys/net/ipv4/ip_forward
echo net.ipv4.ip_forward = 1 > /etc/sysctl.d/ip_forward.conf
ip route show
/etc/sysconfig/network-scripts/route-[interface]

192.168.1.100/32 via 192.168.1.254 dev eno16777984

or the keyword format:

ADDRESS0=192.168.1.100
NETMASK0=255.255.255.255
GATEWAY0=192.168.1.254
METRIC0=
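nmcli can also persist a static route on the connection itself, as an alternative to the route-[interface] file. A sketch using the same addresses as above:

```shell
# +ipv4.routes appends a "destination next-hop" pair to the connection
nmcli con mod eno16777984 +ipv4.routes "192.168.1.100/32 192.168.1.254"
nmcli con up eno16777984
ip route show   # the /32 via 192.168.1.254 should now be listed
```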

Firewall

man firewalld.conf
firewall-cmd --get-services
/usr/lib/firewalld/services
firewall-cmd --zone=external --add-masquerade --permanent
firewall-cmd --reload
firewall-cmd --add-forward-port=port=2022:proto=tcp:toport=22:toaddr=192.168.1.203 --permanent
firewall-cmd --reload

Firewall: Zones

man firewalld.zones
firewall-cmd --get-default-zone
firewall-cmd --get-active-zones
firewall-cmd --get-zones
firewall-cmd --set-default-zone=home
firewall-cmd --permanent --zone=internal --change-interface=eth0
nmcli con show | grep eth0
nmcli con mod "System eth0" connection.zone internal
nmcli con up "System eth0"
/etc/sysconfig/network-scripts/ifcfg-* – ZONE=internal
firewall-cmd --get-zone-of-interface=eth0
firewall-cmd --permanent --zone=public --list-all
firewall-cmd --permanent --new-zone=test
firewall-cmd --reload

Firewall: Rich Rules

man firewalld.richlanguage
firewall-cmd --zone=dmz --add-rich-rule='rule family=ipv4 source address=10.0.0.100/32 reject' --timeout=60
firewall-cmd --add-rich-rule='rule protocol value=icmp accept' --zone=dmz
firewall-cmd --zone=dmz --add-rich-rule='rule family=ipv4 source address=10.0.0.0/24 port port=7900-7905 protocol=tcp accept'
firewall-cmd --list-all --zone=dmz

Package Management

/etc/yum.repos.d

[base]
name=Name
baseurl=http://
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/...

yum grouplist
yum whatprovides */semanage

SELinux

Test is only on ‘types’ (-t / _t). _r is Roles, _u is Users.
/etc/selinux/config
/etc/sysconfig/selinux
sestatus -v
getenforce
setenforce
yum install -y policycoreutils-python
semanage
semanage fcontext -l for a long list
semanage fcontext to update the policy
restorecon to apply the policy
chcon updates the context of a file but is temporary only
getsebool
setsebool
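The booleans at the end of that list deserve a worked example; a sketch of the usual workflow, using httpd_can_network_connect as a common example boolean:

```shell
# Sketch: query a boolean, set it persistently, confirm
getsebool httpd_can_network_connect
setsebool -P httpd_can_network_connect on   # -P writes it into policy
semanage boolean -l | grep httpd_can_network_connect
```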

iSCSI: Server

vgs
lvcreate -L 200M -n lvsan1 /dev/vg00
lvcreate -L 200M -n lvsan2 /dev/vg00
yum install -y targetcli
Note: cd brings up a select. help gives you help 🙂

# targetcli
Warning: Could not load preferences file /root/.targetcli/prefs.bin.
targetcli shell version 2.1.fb41
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/> cd /backstores
/backstores> ls
o- backstores ................................................................................................................ [...]
  o- block .................................................................................................... [Storage Objects: 0]
  o- fileio ................................................................................................... [Storage Objects: 0]
  o- pscsi .................................................................................................... [Storage Objects: 0]
  o- ramdisk .................................................................................................. [Storage Objects: 0]
/backstores> block/ create block1 /dev/vg00/lvsan1
Created block storage object block1 using /dev/vg00/lvsan1.
/backstores> block/ create block2 /dev/vg00/lvsan2
Created block storage object block2 using /dev/vg00/lvsan2.
/backstores> fileio/ create file1 /opt/diskfile1 100M
Created fileio file1 with size 104857600
/backstores> ls
o- backstores ................................................................................................................ [...]
  o- block .................................................................................................... [Storage Objects: 2]
  | o- block1 ................................................................. [/dev/vg00/lvsan1 (200.0MiB) write-thru deactivated]
  | o- block2 ................................................................. [/dev/vg00/lvsan2 (200.0MiB) write-thru deactivated]
  o- fileio ................................................................................................... [Storage Objects: 1]
  | o- file1 .................................................................... [/opt/diskfile1 (100.0MiB) write-back deactivated]
  o- pscsi .................................................................................................... [Storage Objects: 0]
  o- ramdisk .................................................................................................. [Storage Objects: 0]
/backstores> cd /iscsi/
/iscsi> create iqn.2017-02.pri.internal:target
Created target iqn.2017-02.pri.internal:target.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.
/iscsi> ls
o- iscsi .............................................................................................................. [Targets: 1]
  o- iqn.2017-02.pri.internal:target ..................................................................................... [TPGs: 1]
    o- tpg1 ................................................................................................. [no-gen-acls, no-auth]
      o- acls ............................................................................................................ [ACLs: 0]
      o- luns ............................................................................................................ [LUNs: 0]
      o- portals ...................................................................................................... [Portals: 1]
        o- 0.0.0.0:3260 ....................................................................................................... [OK]
/iscsi> cd iqn.2017-02.pri.internal:target/
/iscsi/iqn.20...ternal:target> tpg1/acls/ create iqn.2017-02.pri.internal:server1
Created Node ACL for iqn.2017-02.pri.internal:server1
/iscsi/iqn.20...ternal:target> tpg1/luns/ create /backstores/block/block1
Created LUN 0.
Created LUN 0->0 mapping in node ACL iqn.2017-02.pri.internal:server1
/iscsi/iqn.20...ternal:target> tpg1/luns/ create /backstores/block/block2
Created LUN 1.
Created LUN 1->1 mapping in node ACL iqn.2017-02.pri.internal:server1
/iscsi/iqn.20...ternal:target> tpg1/luns/ create /backstores/fileio/file1
Created LUN 2.
Created LUN 2->2 mapping in node ACL iqn.2017-02.pri.internal:server1
/iscsi/iqn.20...ternal:target> ls
o- iqn.2017-02.pri.internal:target ....................................................................................... [TPGs: 1]
  o- tpg1 ................................................................................................... [no-gen-acls, no-auth]
    o- acls .............................................................................................................. [ACLs: 1]
    | o- iqn.2017-02.pri.internal:server1 ......................................................................... [Mapped LUNs: 3]
    |   o- mapped_lun0 .................................................................................... [lun0 block/block1 (rw)]
    |   o- mapped_lun1 .................................................................................... [lun1 block/block2 (rw)]
    |   o- mapped_lun2 .................................................................................... [lun2 fileio/file1 (rw)]
    o- luns .............................................................................................................. [LUNs: 3]
    | o- lun0 .................................................................................... [block/block1 (/dev/vg00/lvsan1)]
    | o- lun1 .................................................................................... [block/block2 (/dev/vg00/lvsan2)]
    | o- lun2 ...................................................................................... [fileio/file1 (/opt/diskfile1)]
    o- portals ........................................................................................................ [Portals: 1]
      o- 0.0.0.0:3260 ......................................................................................................... [OK]
/iscsi/iqn.20...ternal:target> cd /
/> ls
o- / ......................................................................................................................... [...]
  o- backstores .............................................................................................................. [...]
  | o- block .................................................................................................. [Storage Objects: 2]
  | | o- block1 ................................................................. [/dev/vg00/lvsan1 (200.0MiB) write-thru activated]
  | | o- block2 ................................................................. [/dev/vg00/lvsan2 (200.0MiB) write-thru activated]
  | o- fileio ................................................................................................. [Storage Objects: 1]
  | | o- file1 .................................................................... [/opt/diskfile1 (100.0MiB) write-back activated]
  | o- pscsi .................................................................................................. [Storage Objects: 0]
  | o- ramdisk ................................................................................................ [Storage Objects: 0]
  o- iscsi ............................................................................................................ [Targets: 1]
  | o- iqn.2017-02.pri.internal:target ................................................................................... [TPGs: 1]
  |   o- tpg1 ............................................................................................... [no-gen-acls, no-auth]
  |     o- acls .......................................................................................................... [ACLs: 1]
  |     | o- iqn.2017-02.pri.internal:server1 ..................................................................... [Mapped LUNs: 3]
  |     |   o- mapped_lun0 ................................................................................ [lun0 block/block1 (rw)]
  |     |   o- mapped_lun1 ................................................................................ [lun1 block/block2 (rw)]
  |     |   o- mapped_lun2 ................................................................................ [lun2 fileio/file1 (rw)]
  |     o- luns .......................................................................................................... [LUNs: 3]
  |     | o- lun0 ................................................................................ [block/block1 (/dev/vg00/lvsan1)]
  |     | o- lun1 ................................................................................ [block/block2 (/dev/vg00/lvsan2)]
  |     | o- lun2 .................................................................................. [fileio/file1 (/opt/diskfile1)]
  |     o- portals .................................................................................................... [Portals: 1]
  |       o- 0.0.0.0:3260 ..................................................................................................... [OK]
  o- loopback ......................................................................................................... [Targets: 0]
/> exit
Global pref auto_save_on_exit=true
Last 10 configs saved in /etc/target/backup.
Configuration saved to /etc/target/saveconfig.json

systemctl enable target
systemctl start target
firewall-cmd --add-port=3260/tcp --permanent
firewall-cmd --reload
systemctl status target

iSCSI: Client

yum install -y iscsi-initiator-utils
/etc/iscsi/initiatorname.iscsi

InitiatorName=iqn.2017-02.pri.internal:server1

systemctl enable iscsid
systemctl start iscsid
systemctl start iscsi
iscsiadm --mode discovery --type sendtargets --portal 192.168.1.53 --discover
iscsiadm --mode discovery -P 1
iscsiadm --mode node --targetname iqn.2017-02.pri.internal:target --portal 192.168.1.53:3260 --login
iscsiadm --mode session -P 3
lsblk --scsi
mkfs.xfs /dev/sdb
blkid /dev/sdb (copy UUID)
mkdir /mnt/iscsi
vi /etc/fstab

UUID=ba082551-c983-4f1f-852a-53b1c8a55106  /mnt/iscsi  xfs   _netdev   0   2

mount -a

Performance

top
/proc/meminfo
free -m
swapon -s
cifsiostat
nfsiostat
iostat
mpstat
pidstat
vmstat
dstat – not noted in materials though

Performance: SAR

/etc/cron.d/sysstat
/etc/sysconfig/sysstat – HISTORY variable – default 28 days
sar -n DEV
sar -b
sar -P 0
sar 1 10

Optimization

/proc/meminfo
/proc/cmdline
/proc/cpuinfo
/proc/partitions
/proc/modules
/etc/sysctl.d
sysctl -a
sysctl -p
sysctl -w

net.ipv4.ip_forward
net.ipv4.icmp_echo_ignore_all
net.ipv4.icmp_echo_ignore_broadcasts
vm.swappiness
kernel.hostname
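A quick sketch of checking, setting, and persisting one of those tunables (vm.swappiness, with an illustrative value):

```shell
# Sketch: show the current value, set it live, then persist it
sysctl vm.swappiness                       # show current value
sysctl -w vm.swappiness=10                 # set for the running kernel
echo "vm.swappiness = 10" > /etc/sysctl.d/swappiness.conf
sysctl -p /etc/sysctl.d/swappiness.conf    # load the file to confirm
```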

Logging: Server

/etc/rsyslog.conf – im* (input modules)
/etc/rsyslog.conf – om* (output modules)
/etc/rsyslog.conf

$ModLoad imudp
$InputUDPServerRun 514
$ModLoad imtcp
$InputTCPServerRun 514

systemctl restart rsyslog
firewall-cmd --add-port=514/tcp --permanent
firewall-cmd --reload

Logging: Clients

@ = via UDP
@@ = via TCP
/etc/rsyslog.conf

*.*   @@enwd1cuomlog1.internal.pri:514

systemctl restart rsyslog

HTTP/HTTPS: Server

yum groupinstall -y 'Web Server'
systemctl enable httpd
systemctl start httpd
firewall-cmd --permanent --add-service=http
firewall-cmd --reload

<Directory /var/www/html>
AllowOverride None
Require all granted
</Directory>

HTTP/HTTPS: Virtual Host

/var/www/html
mkdir host.internal.pri
echo "Testing" > /var/www/html/host.internal.pri/index.html
restorecon -R host.internal.pri
cd /etc/httpd/conf.d
edit vhosts.conf

<VirtualHost *:80>
  ServerAdmin webmaster@host.internal.pri
  DocumentRoot /var/www/html/host.internal.pri
  ServerName host.internal.pri
  ErrorLog logs/host.internal.pri-error_log
  CustomLog logs/host.internal.pri-access_log common
</VirtualHost>

mv ssl.conf ssl.conf2
apachectl configtest
apachectl restart
httpd -D DUMP_VHOSTS
yum install -y elinks
elinks http://host.internal.pri

HTTP/HTTPS: Access Restrictions

/var/www/html/private
echo "testing" > /var/www/html/private/index.html
restorecon -R /var/www/html
/etc/httpd/conf/httpd.conf

<Directory "/var/www/html/private">
  AllowOverride None
  Options None
  Require host host.internal.pri
</Directory>

apachectl configtest
/etc/httpd/conf/httpd.conf

<Directory "/var/www/html/private">
  AuthType Basic
  AuthName "Password protected area"
  AuthUserFile /etc/httpd/conf/passwd
  Require user cschelin
</Directory>

apachectl configtest
htpasswd -c /etc/httpd/conf/passwd cschelin
chmod 600 /etc/httpd/conf/passwd
chown apache:apache /etc/httpd/conf/passwd
systemctl restart httpd

HTTP/HTTPS: Group Content

/etc/httpd/conf/httpd.conf

<Directory "/var/www/html/private">
  AuthType Basic
  AuthName "Password protected area"
  AuthGroupFile /etc/httpd/conf/team
  AuthUserFile /etc/httpd/conf/passwd
  Require group team
</Directory>

apachectl configtest
mkdir -p /var/www/html/private
restorecon -R /var/www/html/private
/etc/httpd/conf/team

team: cschelin jainsley

htpasswd -c /etc/httpd/conf/passwd cschelin
htpasswd /etc/httpd/conf/passwd jainsley
systemctl restart httpd

HTTP/HTTPS: TLS

openssl req -new -x509 -nodes -out /etc/pki/tls/certs/host.internal.pri.crt -keyout /etc/pki/tls/private/host.internal.pri.key -days 365
/etc/httpd/conf.d/ssl.conf

SSLCertificateFile /etc/pki/tls/certs/host.internal.pri.crt
SSLCertificateKeyFile /etc/pki/tls/private/host.internal.pri.key
ServerName host.internal.pri:443

apachectl configtest
apachectl restart
httpd -D DUMP_VHOSTS
openssl s_client -connect localhost:443 -state

DNS

yum install -y bind
/etc/named.conf

listen-on port 53 { any; };
allow-query { any; };
dnssec-validation no;

named-checkconf
firewall-cmd --permanent --add-service=dns
firewall-cmd --reload
systemctl enable named
systemctl start named

DNS: Troubleshooting

dig
/etc/resolv.conf

NFS: Server

yum groupinstall -y file-server
firewall-cmd --permanent --add-service=nfs
firewall-cmd --reload
systemctl enable rpcbind nfs-server
systemctl start rpcbind nfs-server
mkdir -p /home/tools
chmod 0777 /home/tools
mkdir -p /home/guests
chmod 0777 /home/guests
yum install -y setroubleshoot-server
semanage fcontext --list
semanage fcontext -a -t public_content_rw_t "/home/tools(/.*)?"
semanage fcontext -a -t public_content_rw_t "/home/guests(/.*)?"
restorecon -R /home/tools
restorecon -R /home/guests
semanage boolean -l | egrep "nfs|SELinux"
If needed:
setsebool -P nfs_export_all_rw on
setsebool -P nfs_export_all_ro on
setsebool -P use_nfs_home_dirs on
man exports for examples
/etc/exports

/home/tools enwd1cuomnfsc1.internal.pri(rw,no_root_squash)
/home/guests enwd1cuomnfsc1.internal.pri(rw,no_root_squash)

exportfs -avr
systemctl restart nfs-server
showmount -e localhost

NFS: Client

yum install -y nfs-utils
mount -t nfs enwd1cuomnfss1.internal.pri:/home/tools /mnt

NFS: Group (Server)

yum groupinstall -y 'file-server'
firewall-cmd --permanent --add-service=nfs
firewall-cmd --reload
systemctl enable rpcbind nfs-server
systemctl start rpcbind nfs-server
mkdir /shared
groupadd -g 60000 sharedgrp
chgrp sharedgrp /shared
chmod 2770 /shared
/etc/exports

/shared enwd1cuomnfsc1.internal.pri(rw,no_root_squash)

exportfs -avr
systemctl restart nfs-server

NFS: Group (Client)

yum install -y nfs-utils
mount -t nfs enwd1cuomnfss1.internal.pri:/shared /mnt

NFS: Kerberos Distribution Center

Need this for further testing:

yum install -y krb5-server krb5-workstation pam_krb5
/var/kerberos/krb5kdc/kdc.conf – update example.com, uncomment master_key_type, add default_principal_flags = +preauth
/var/kerberos/krb5kdc/kadm5.acl – update example.com
/etc/krb5.conf – uncomment lines and replace example.com and kerbserver.example.com
kdb5_util create -s -r internal.pri – This can take quite a while. Be patient
systemctl start krb5kdc kadmin
systemctl enable krb5kdc kadmin
useradd [dummy user]
enter kerberos admin tool: kadmin.local

kadmin.local: addprinc root/admin
kadmin.local: addprinc [dummy user]
kadmin.local: addprinc -randkey host/enwd1cuomkrb1.internal.pri
kadmin.local: ktadd host/enwd1cuomkrb1.internal.pri
kadmin.local: quit

/etc/ssh/ssh_config

GSSAPIAuthentication yes
GSSAPIDelegateCredentials yes

systemctl reload sshd
authconfig --enablekrb5 --update
Add the following to /etc/firewalld/services/kerberos.xml to add the kadmin port (cp /usr/lib/firewalld/services/kerberos.xml /etc/firewalld/services/):

<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>Kerberos</short>
  <description>Kerberos network authentication protocol server</description>
  <port protocol="tcp" port="88"/>
  <port protocol="udp" port="88"/>
  <port protocol="tcp" port="749"/>
</service>

firewall-cmd --permanent --add-service=kerberos
alternate: firewall-cmd --permanent --add-port=749/tcp
firewall-cmd --reload
su – [dummy user]
kinit (enter password for user)
klist (to see the ticket)

NFS: Kerberos Client

yum install -y krb5-workstation pam_krb5
scp root@enwd1cuomkrb1.internal.pri:/etc/krb5.conf /etc/krb5.conf
Enter kadmin

kadmin: addprinc -randkey host/enwd1cuomnfsc1.internal.pri
kadmin: ktadd host/enwd1cuomnfsc1.internal.pri

/etc/ssh/ssh_config

GSSAPIAuthentication yes
GSSAPIDelegateCredentials yes

systemctl reload sshd
authconfig --enablekrb5 --update
su – [dummy user]
kinit
klist
ssh enwd1cuomkrb1.internal.pri – to test; should log in without password

NFS: Add NFS Server

kadmin

kadmin: addprinc -randkey nfs/enwd1cuomnfss1.internal.pri
kadmin: ktadd nfs/enwd1cuomnfss1.internal.pri
kadmin: quit

NFS: Add NFS Client

kadmin

kadmin: addprinc -randkey nfs/enwd1cuomnfsc1.internal.pri
kadmin: ktadd nfs/enwd1cuomnfsc1.internal.pri
kadmin: quit

systemctl enable nfs-client.target
systemctl start nfs-client.target
mount -t nfs4 -o sec=krb5 enwd1cuomnfss1.internal.pri:/home/tools /mnt
su – [dummy user]
kinit
cd /mnt
echo "This is a test." > testfile

SMB

yum groupinstall -y 'file-server'
yum install -y samba-client
/etc/samba/smb.conf

[global]
      workgroup = MYGROUP
      server string = Samba Server Version %v
      netbios name = MYSERVER
      interfaces = lo eth0 192.168.1.0/24
      hosts allow = 127. 192.168.1.
      log file = /var/log/samba/log.%m
      max log size = 50
      security = user
      passdb backend = tdbsam

[shared]
      comment = Shared directory
      browseable = yes
      path = /shared
      valid users = jainsley
      writable = yes

testparm
mkdir /shared
chmod 777 /shared
echo "Testing" > /shared/test
yum install -y setroubleshoot-server
semanage fcontext -a -t samba_share_t "/shared(/.*)?"
restorecon -R /shared
firewall-cmd --permanent --add-service=samba
firewall-cmd --reload
systemctl enable smb
systemctl enable nmb
systemctl start smb
systemctl start nmb
useradd -s /sbin/nologin cschelin
smbpasswd -a cschelin
smbclient //localhost/shared -U cschelin%[password]

smb: \> ls

SMTP: Forwarder

yum install -y postfix
systemctl enable postfix
systemctl start postfix
/etc/postfix/main.cf

myhostname = enwd1cuomail1.internal.pri
mydomain = internal.pri
myorigin = $mydomain
inet_interfaces = loopback-only
mydestination = 
relayhost = 192.168.1.1

postfix check
postconf -n
systemctl restart postfix
postconf relayhost (to verify)

SMTP: Gateway

yum install -y postfix
systemctl enable postfix
systemctl start postfix
/etc/postfix/main.cf

myhostname = enwd1cuomail1.internal.pri
mydomain = internal.pri
myorigin = $mydomain
inet_interfaces = all
mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
mynetworks = 192.168.1.0/24, 127.0.0.0/8
relayhost = 192.168.1.1

postfix check
postconf -n
systemctl restart postfix
firewall-cmd --add-service=smtp --permanent
firewall-cmd --reload

ssh: Server

yum install -y openssh-server
systemctl enable sshd
systemctl start sshd
firewall-cmd --permanent --add-service=ssh
firewall-cmd --reload

ssh: Client

On both servers:
useradd [dummy user]
passwd [dummy user]
As [dummy user]:
ssh-keygen -b 2048 -t rsa
scp .ssh/id_rsa.pub [dummy user]@server2:
On server2: cat id_rsa.pub >> .ssh/authorized_keys; chmod 600 .ssh/authorized_keys
/etc/ssh/sshd_config

PasswordAuthentication no
PubkeyAuthentication yes

systemctl restart sshd
ssh server2

ntp: Client

timedatectl set-timezone America/Denver
yum install -y ntp
systemctl enable ntpd
systemctl start ntpd
/etc/ntp.conf
ntpq -p
ntpstat
systemctl stop ntpd
ntpdate pool.ntp.org
systemctl start ntpd

chrony: Client

yum install -y chrony
systemctl enable chronyd
systemctl start chronyd
/etc/chrony.conf
chronyc tracking
chronyc sources -v
chronyc sourcestats -v
ntpdate pool.ntp.org

MariaDB: Server

yum install -y mariadb mariadb-server
systemctl start mariadb
systemctl enable mariadb
mysql_secure_installation

MariaDB: backup/restore

mysqldump --user=root --password=[password] --result-file=test.sql test
mysqldump --user=root --password=[password] test > test.sql
mysql --user=root --password=[password] testdb < test.sql

MariaDB: Create Schema

mysql --user=root -p

create database test;
grant all on test.* to user@localhost identified by 'password';
flush privileges;
use test;
create table addresses(id int(10) unsigned, name varchar(20), address varchar(40));
quit

Note: drop user 'name'@'localhost';

MariaDB: Queries

show tables;
desc addresses;
insert addresses values(1,"James","address1");
insert addresses values(2,"Bill","address2");
select * from addresses where name="James";
select * from addresses order by name ASC;
update addresses set name="John" where name="Bill";
delete from addresses where name="James";
Posted in Computers | Tagged | Leave a comment

Career Day

Thinking back over my 40 years in the workforce, I considered the reasons I changed roles or even left each company.

Marine Corps Reserve

Signed up as a grunt in the 10th grade with parental permission, a standard line animal. When I left High School, I went on active duty in the Army.

US Army

Started off as a Military Policeman and then as a Dispatcher. Said something threatening about a Sergeant, which was overheard and reported, and was transferred into working as a Graphics Artist for the Battalion. Worked at Ft. Meade MD as an MP then Graphic Artist, Erlangen Germany as a Battalion Graphic Artist, and then Ft. Belvoir VA as a Post Graphic Artist. Unable to reenlist due to being 5 lbs overweight.

Graphic Artist

I worked part time as a Graphic Artist in Alexandria VA while in the Army, then when I got out, I was a Typesetter. Work declined, so I was laid off.

Car Salesman

For 2 months, talked to over 300 people but only sold 7.5 cars. Let go.

Security Guard

Gate duty. Let go when the contract ended.

Programmer

Worked part time as a BASIC programmer, mainly on Leading Edge (IBM compatible) and Franklin (Apple compatible) computers. Left briefly for some additional programming training, then returned, but left again when the owner could no longer afford my hours.

Programmer/System Installer

Started off as a programmer working on Funeral Home and Point of Sale software. Then when the system installer left to start her own company, I took that over. When the company had issues with employee taxes and the IRS, I bailed.

System Installer/LAN Admin

Worked installing networks and administering the company LAN. Also did some assembly of computers when needed. Company went out of business.

Tech Support/DBA/LAN Admin

Started off as a telephone tech support person, then moved to a DBA position briefly, then the company’s first full time LAN Admin. IT was outsourced.

LAN Admin

The company indicated I was to get a raise, but when I mentioned to the others just that I was getting a raise (not how much), the raise was pulled. I left the company.

LAN Admin/Trainer

Hired as a contractor to manage the LAN, refresh and provide support to others who had their own LANs, and support the Token Ring network folks. Company bought by another company.

LAN Admin/Trainer

Company advised me that when the contract ended, I’d be let go. Job search and moved on.

LAN Admin

Lots of changes as a contractor at this government agency. Started off as a LAN Admin, but the company (ultimately a programming company) lost the position when the networks were consolidated into a centrally managed group.

Tech Support/LAN Admin/Unix Admin

Briefly provided Tech Support while the LAN Admin position opened up. Once it was available, I transferred into that group. After a bit, I was not happy with the work and planned on leaving but the company talked me into taking a Unix Admin position.

Unix Admin

Company was bought by a second company. As part of the transition, I was sent to Cisco training to get networking knowledge.

Unix Admin/Operations Engineer/Network Admin

Company left the contract and I was hired by the Prime Contractor. Started off as a Unix Admin but was brought in as an Operations Engineer to assist with the new contract with the customer. Then transitioned into being a Network Admin before leaving Virginia and moving to Colorado.

Tech Support/Network Admin

Worked in Athens Greece for 30 days assisting with the 2004 Olympics. Left as it was a 30 day contract.

Unix Admin

Managed Unix systems. Contract ended.

Unix Admin

Managed Unix systems. Company was very “cog in a machine” and folks were let go without warning. As I’m not comfortable with that sort of environment, I left.

Unix Admin

Managed Unix systems. The company had been purchased by a larger company a few years before I started, but the parent hadn't exerted any control other than budgetary. Eventually the larger company started pushing for more control.

Operations Engineer

Moved into the ‘Build’ role in the company’s transition to a ‘Plan/Build/Run’ model. The definition of Build matches an Operations Engineering type role.

Posted in About Carl, Computers | Leave a comment

Development Environments

I have a few big projects at work that I’ve created over the years. An inventory application, status management, and of course all the scripts (some 200 of them).

Initially I’d work on the inventory and status apps on the live server. Changes were minor and new features could be added without disruption. I used RCS, the simple revision control system that comes with Unix and that I’ve used extensively in the past to manage DNS files. But as time went on, some of the updates might take a couple of days to get working which impacted the application. As a result, I took the two desktops I’d salvaged, installed Red Hat and Ubuntu on the two, and copied the source code (php scripts) over to the server as a simple location to work on the scripts.

About 2 years back, I went through a major revision to the Inventory and started looking at creating an actual development environment, where there’s a central store of ready-to-use code and a separate code directory where I could work on stuff without impacting the STABLE code base. It took a bit of work and several changes to make things work cleanly (and of course I’m still updating things).

With the virtualization I now have at home, I’m busily moving that code base over to the new environment. The nice thing is all the test bits I’ve done over the years can be moved over to the test location and the actual sites can be pared down to just the necessary bits. Cleaner and it protects the site a bit.

Environment now:

code – This is a directory with all the code for the web sites. I have a good 15 sites at home and 5 or 6 sites at work.
archive – Old stuff I don’t need but want to hold on to for historical references.
static – Any non-source code bits. Images for the most part but some data files that are imported regularly.
stage – The staging area for the code. The site is assembled into this directory and then sync’d with the production server.
html – The working web directory. All work is done here.

In the code site directory are the following utility scripts:

findcount – This script runs the find -print command to generate a list of all the files in the source which is stored in the countall file.
fixsettings – This script recreates the link to all the settings files to ensure every file has the same settings information. This script is in the html working area.
searchall – This script lets you search all the scripts for a string (pass -i to ignore case). This script is also in the html working area.

In the code directory are two files for each site, three if you include the log file.

make[site] – This is a script that builds the site located in the staging area.
– Runs the findcount script and compares it to the countall.backup file. If a new script has been added, this reminds you to add it to the manifest file.
– Parses the manifest file and creates directories and installs files as listed.
– Uses rsync to copy any static files into the staging area.
– Compares the manifest output with the countall output files to list any scripts that were added but failed to get added to the manifest file.
– A flag is created indicating the site has been updated and needs to be sync’d with the production site.
manifest.[site] – List of directories that need to be created and files that are copied from the source code directory into the staging area.
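The manifest-driven build described above can be sketched as a small shell function. This is my own simplification for illustration; the manifest format ("dir"/"file" lines) and the names build_site and sync.site are assumptions, not the exact scripts:

```shell
# build_site CODE STAGE MANIFEST -- assemble the staging area (sketch).
# Manifest lines: "dir <path>"  creates a directory under STAGE;
#                 "file <path>" copies CODE/<path> to STAGE/<path>.
build_site() {
    code=$1; stage=$2; manifest=$3
    mkdir -p "$stage"
    while read -r kind path; do
        case $kind in
            dir)  mkdir -p "$stage/$path" ;;
            file) mkdir -p "$stage/$(dirname "$path")"
                  cp "$code/$path" "$stage/$path" ;;
        esac
    done < "$manifest"
    # Flag file: tells the sync script this site needs pushing.
    touch "$stage/sync.site"
}
```

A run like build_site ./code ./stage ./manifest.site assembles the stage directory and drops the sync flag.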

In the staging area are also three files, four if you count the log file.

sync[site] – This script rsync’s the site to all the target production sites. For work, I have the Inventory going to three servers right now due to transitioning to a new server.
sync.[site] – This is the flag file created by the make[site] script in the code directory. The sync[site] script only sync’s with the target server when this file exists.
exclude.[site] – This is a list of files and/or directories to exclude from the sync process.
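The flag-guarded sync can be sketched the same way. Again an illustrative simplification (the real script loops over several production targets; the rsync options and file names are assumptions):

```shell
# sync_site STAGE TARGET -- push the staged site only when flagged (sketch).
sync_site() {
    stage=$1; target=$2
    # No flag file means the make script hasn't run: nothing to do.
    [ -f "$stage/sync.site" ] || return 0
    # --delete keeps the target an exact mirror of the stage.
    rsync -a --delete --exclude-from="$stage/exclude.site" \
        "$stage/" "$target/" && rm -f "$stage/sync.site"
}
```

Removing the flag only after a successful rsync means a failed push is retried on the next run.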

For my account, I have two sets of lines in cron. The first set runs at 1am and runs the make[site] script for each listed site. This lets me automatically update the sites each night even if there are no changes (rsync is set with --delete so files that shouldn’t be there are deleted). The second set of lines runs the sync[site] commands every minute. At 1am when the make[site] scripts run, they create the sync.[site] flag files and a minute later the sync[site] script runs and updates the site. But also, when I make a change to a site and manually run the make[site] script, the sync[site] script runs after a minute and updates the site.
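Those two cron sets look roughly like this (script names and log paths are illustrative, not the actual entries):

```
# Nightly rebuild at 1am; per-minute sync that only fires when flagged.
0 1 * * *  $HOME/code/makeinventory  >> $HOME/code/make.log  2>&1
* * * * *  $HOME/stage/syncinventory >> $HOME/stage/sync.log 2>&1
```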

Other scripts I have are the "check" scripts.

checkout – Uses the co -l command to check out a script from the source code into the working directory.
checkin – Uses the ci -u command to check in a script.
checkrw – Checks the source code site to see which files have been checked out and are being worked on.
checkmanifest – Checks working site against the manifest and provides a difference between any checked out scripts and STABLE scripts.
checkdiff – Runs a diff between a specific checked out script and the STABLE script.
checkinventory – Tells you what files are in the working directory that aren’t in the site directory.

And an import script that retrieves the mysql database from the target server and imports the data into mysql.

And that’s the environment. It works pretty well for what I’m trying to accomplish. I would like to use git to manage sites but I haven’t been able to find a good tutorial on how to pull files for sites.

Posted in Computers | Leave a comment

State of the Game Room: 2017

Got my gaming table done last year.

Building a New Gaming Table

And plans for the Dice Towers.

Building Dice Towers

And last year I picked up a storage unit from Ikea (first picture).

Yesterday we headed down to Ikea again and picked up 2 more; one 4×4 for under the window (5′ high) and one 5×5 like the first. From looking at the games in the computer room, I have sufficient space for another 5×5 in the game room which will help with being able to get my books out of the closet again. But one of my main goals was to get the games out of the computer room and off the desktops specifically. I’m not ever going to have all the games in the game room. There’s just not the space without moving the TV and couch out (and even then there’s not the space). It takes about 4 storage boxes for a shelf and there are 3 larger shelves, which takes about 12 boxes or half a 5×5 unit. The shorter shelves are about 3 boxes and there are 6 shelves, which is 18 boxes, for a total of 30 boxes potentially. Another 5×5 unit plus a narrower 5×2 maybe for 35 boxes. Maybe next to the A rack (first picture).

7th Sea through Pandemic in the Game room and Pandemic Legacy through Zpocalypse in the computer room.

Carl

Posted in Gaming | 1 Comment

Setting Up Kubernetes

In a series on my home environment, I’m next working on the Kubernetes sandbox. It’s defined as a 3 master, 5 minion cluster. The instructions I currently have seem to work only with a 1 master, n minion cluster (as many minions as I want to install). I need to figure out how to add 2 more masters.

Anyway, on to the configurations.

For the Master, you need to add in the Centos repository for docker:

# vi /etc/yum.repos.d/virt7-docker-common-release.repo
[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0

Once in place, enable the repo and then install kubernetes, etcd, and flannel:

# yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd flannel

Update the kubernetes config file. The server names must be in either DNS or /etc/hosts (mine are in DNS):

# vi /etc/kubernetes/config
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://kube1:2379"

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the replication controller and scheduler find the kube-apiserver
KUBE_MASTER="--master=http://kube1:8080"

Edit the etcd.conf file. There are a bunch of entries but a majority are commented out. Either edit the lines or copy and comment out the original and add new lines:

# vi /etc/etcd/etcd.conf
# [member]
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"

# [cluster]
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"

Edit the kubernetes apiserver file:

# vi /etc/kubernetes/apiserver
# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port kubelets listen on
KUBELET_PORT="--kubelet-port=10250"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# Add your own!
KUBE_API_ARGS=""

Configure etcd to hold the network overlay on the master. Use an unused network:

$ etcdctl mkdir /kube-centos/network
$ etcdctl mk /kube-centos/network/config "{ \"Network\": \"172.30.0.0/16\", \"SubnetLen\": 24, \"Backend\": { \"Type\": \"vxlan\" } }"

Update the flannel configuration:

# vi /etc/sysconfig/flanneld
# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD="http://kube1:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_KEY="/kube-centos/network"

# Any additional options that you want to pass
FLANNEL_OPTIONS=""

Finally start the services. You should see a green “active (running)” for the status of each of the services.

for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler flanneld
do
  systemctl restart $SERVICES
  systemctl enable $SERVICES
  systemctl status $SERVICES
done

Everything worked perfectly following the above instructions.

On the Minions or worker nodes, you’ll need to follow these steps. Many are the same as for the master but I split them out to make it easier to follow. Conceivably you can copy the necessary configuration files from the master to all the minions with the exception of the kubelet file.

For the Minions, you need to add in the Centos repository for docker:

# vi /etc/yum.repos.d/virt7-docker-common-release.repo
[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0

Once in place, enable the repo and then install kubernetes, etcd, and flannel:

# yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd flannel

Update the kubernetes config file. The server names must be in either DNS or /etc/hosts (mine are in DNS):

# vi /etc/kubernetes/config
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://kube1:2379"

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the replication controller and scheduler find the kube-apiserver
KUBE_MASTER="--master=http://kube1:8080"

Update the flannel configuration:

# vi /etc/sysconfig/flanneld
# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD="http://kube1:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_KEY="/kube-centos/network"

# Any additional options that you want to pass
FLANNEL_OPTIONS=""

Edit the kubelet file on each of the Minions. The main thing to note here is the KUBELET_HOSTNAME. You can either leave it blank if the Minion hostnames are fine or enter in the names you want to use. Leaving it blank lets you copy it to all the nodes without having to edit it, again assuming the hostname is the one you’ll be using for the variable:

# vi /etc/kubernetes/kubelet
# The address for the info server to serve on
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=knode1"                    # <-------- Check the node number!

# Location of the api-server
KUBELET_API_SERVER="--api-servers=http://lnmt1cuomkube1:8080"

# Add your own!
KUBELET_ARGS=""

And start the services on the nodes:

for SERVICES in kube-proxy kubelet flanneld docker
do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done

The final step is to configure kubectl:

kubectl config set-cluster default-cluster --server=http://kube1:8080
kubectl config set-context default-context --cluster=default-cluster --user=default-admin
kubectl config use-context default-context

Once that's done on the Master and all the Minions, you should be able to get a node listing:

# kubectl get nodes
NAME     STATUS    AGE
knode1   Ready     1h
knode2   Ready     1h
knode3   Ready     1h
knode4   Ready     1h
knode5   Ready     1h
Posted in Computers | Leave a comment

Configuring The Home Environment

I’m using the new drive space and systems to basically mirror the work environment. Part of it is in order to have a playground or sandbox where I can try new things and learn how to use the tools we have and part is just “Because It’s There” 🙂 There’s satisfaction in being able to recreate the basic work setup at home.

As noted in previous posts, I have a pretty decent computer network now and I’ve created four environments.

Site 1. CentOS 7 based and hosts my personal, more live stuff like a movie and music server, development environment (2 servers), and backups. I also have a couple of Windows Workstation installations and Server installs for Jeanne. Plus of course the firewall. 13 Servers in total.
Site 2. CentOS 5, 6, and 7 based and hosts the Ansible and Kubernetes/Docker environments. In addition, there’s now an Ansible Tower server and a Spacewalk server. 24 Servers in total.
Site 3. Red Hat 6 and 7 based for Ansible testing. 11 Servers in total.
Site 4. Miscellaneous operating systems for further Ansible testing. 16 Servers in total.

16 Servers on the main ESX host.
48 Servers on the sandbox ESX host.

Total Servers: 64 Servers.

Red Hat

One of the nice things is Red Hat has a Developer network which provides self-support for Red Hat Enterprise Linux (RHEL) to anyone who’s signed up. The little-known bit, though, is that you can have unlimited copies of RHEL if you’re running them virtually. Sign-up is simple. Go to Red Hat and sign up for the Developer Network. Then download RHEL and install it. Run the following command to register a server:

# subscription-manager register --auto-attach

Note that you will need to renew your registration every year.

Spacewalk

Spacewalk is the freely available tool used for managing your servers. Red Hat’s paid version is Satellite. For ours at work, it’s $10,000 a year for a license. So Spacewalk it is 🙂

I use Satellite at work and it works pretty well. We have about 300 servers registered since the start of the year and are working to add more. I am finding Spacewalk, even though it’s older, to be quite a bit easier to use compared to Satellite. It’s quicker and the tasks are more obvious. Not perfect of course but it seems to be a simpler system to use. I set up CentOS 5, 6, and 7 repositories (repos) to sync and download updates each week.

Before you can connect a client, you need to create a channel for the operating system.

1. You need to create a Channel to provide an anchor for any underlying repos. I created ‘hcs-centos54’, ‘hcs-centos65’, and ‘hcs-centos7’ channels. Create a Channel: Channels -> Manage Software Channels -> Create Channel
2. You need to create repositories. You can create a one-for-one relationship or add multiple repos to a channel. I did mine one-for-one for now. I had to locate URLs for repositories. For ‘centos7_mirror’, I used the mirror.centos.org site. For older versions, I had to use the vault.centos.org site. Create a Repository: Channels -> Manage Software Channels -> Manage Repositories
3. Now associate the repo with a channel. Simply go to the channel and click on the Repositories tab. Check the appropriate repo(s) and click the Update Repositories button.

The command to associate a server requires an activation key. This lets you auto-register clients so you don’t have to pop into Spacewalk to manually associate servers. The only things needed are a name (I used ‘centos5-base’ for one) and an associated channel. The key is created automatically once you click the button. Create an Activation Key: Systems -> Activation Keys -> Create Key -> Description, Base Channel, click Update Activation Key

You’ll need the ‘1-’ at the beginning of the key to activate a client.

There’s a set of tools needed in order to support the activation and what gets installed depends on the OS version. For my purposes, the following are needed:

RHEL5/CentOS 5

rpm -Uvh http://yum.spacewalkproject.org/2.5-client/RHEL/5/x86_64/spacewalk-client-repo-2.5-3.el5.noarch.rpm
BASEARCH=$(uname -i)
rpm -Uvh http://dl.fedoraproject.org/pub/epel/epel-release-latest-5.noarch.rpm
yum install -y rhn-client-tools rhn-check rhn-setup rhnsd m2crypto yum-rhn-plugin

RHEL6/CentOS 6

rpm -Uvh http://yum.spacewalkproject.org/2.5-client/RHEL/6/x86_64/spacewalk-client-repo-2.5-3.el6.noarch.rpm
BASEARCH=$(uname -i)
rpm -Uvh http://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm
yum install -y rhn-client-tools rhn-check rhn-setup rhnsd m2crypto yum-rhn-plugin

RHEL7/CentOS 7

rpm -Uvh http://yum.spacewalkproject.org/2.5-client/RHEL/7/x86_64/spacewalk-client-repo-2.5-3.el7.noarch.rpm
BASEARCH=$(uname -i)
rpm -Uvh http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum install -y rhn-client-tools rhn-check rhn-setup rhnsd m2crypto yum-rhn-plugin
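
The three blocks above differ only in the EL major version, so on a mixed fleet they collapse into one script. A sketch of just the URL construction (the 2.5-3 package version comes from the URLs above; the rpm/yum steps are shown as comments since they're identical to the per-version blocks):

```shell
#!/bin/bash
# Sketch: build the spacewalk-client-repo URL from the EL major version
# (5, 6, or 7) so one script covers all three OS levels.
client_repo_url() {
  local el="$1"
  echo "http://yum.spacewalkproject.org/2.5-client/RHEL/${el}/x86_64/spacewalk-client-repo-2.5-3.el${el}.noarch.rpm"
}

# On a client you would then run (as root), e.g. for EL6:
#   rpm -Uvh "$(client_repo_url 6)"
#   rpm -Uvh http://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm
#   yum install -y rhn-client-tools rhn-check rhn-setup rhnsd m2crypto yum-rhn-plugin
```
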

Once that’s installed (if there’s an error, you’ll need to install the epel-release package and try again), register the system.

rpm -Uvh http://192.168.1.5/pub/rhn-org-trusted-ssl-cert-1.0-1.noarch.rpm
rhnreg_ks --serverUrl=http://192.168.1.5/XMLRPC --sslCACert=/usr/share/rhn/RHN-ORG-TRUSTED-SSL-CERT --activationkey=1-[key]

Once done, log in to Spacewalk and click on Systems -> Systems to see the newly registered system. If you’re running different OSs under different channels, you’ll need different keys for the various OSs.

In order to activate kickstarts, you need to sync a kickstart with the Spacewalk server. It’s not complicated but it’s not obvious 🙂 Get the channel name for the kickstart repo you want to create and run the following command:

# spacewalk-repo-sync -c [Channel Name] --sync-kickstart

My channel name is hcs-centos7 so the command on my system would be:

# spacewalk-repo-sync -c hcs-centos7 --sync-kickstart

I plan on taking the kickstart configurations I built for the servers and adding them to Spacewalk to see how that works and maybe kickstart some systems to play with kickstarting.

Configuration

I also deployed the scripts I wrote for work to all the servers, along with adding accounts. I needed to change the servers to Mountain Time because the scheduled nightly and weekly tasks were kicking off in the early evening and slowing down access to the 'net for Jeanne and me. This involved updating the timezones and starting the ntp daemon.

RHEL7/CentOS 7

# timedatectl set-timezone America/Denver

RHEL6/CentOS 6

You use a symlink rather than a copy so that if an update changes the zone information, such as a daylight saving rule change, the system stays correct automatically.

# rm /etc/localtime
# ln -s /usr/share/zoneinfo/America/Denver /etc/localtime

RHEL5/CentOS 5

# rm /etc/localtime
# ln -s /usr/share/zoneinfo/America/Denver /etc/localtime
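
The symlink method for EL5/EL6 can be wrapped in a small, rerunnable helper. The sandbox root argument here is purely for illustration — on a real system it would be empty so the link lands at /etc/localtime:

```shell
#!/bin/bash
# Sketch: apply the symlink method above. ln -sfn replaces any existing
# file or link, so there's no need for a separate rm step and the
# helper is safe to rerun.
set_timezone() {
  local root="$1" zone="$2"    # e.g. "" (real system) and "America/Denver"
  mkdir -p "${root}/etc"
  ln -sfn "/usr/share/zoneinfo/${zone}" "${root}/etc/localtime"
}

set_timezone /tmp/tzdemo America/Denver
readlink /tmp/tzdemo/etc/localtime   # /usr/share/zoneinfo/America/Denver
```
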

Time

And related to time, I need to ensure either ntp or chrony is properly configured and started. Kubernetes especially requires consistent time.

chronyd and chronyc are the replacements for ntpd and ntpq. The configuration is similar, though, and works with the same understanding of how it operates. Since I have a time server running on pfSense, I'm making sure all the servers are enabled and pointing at the local time server. There's no point in generating a bunch of unnecessary traffic through Comcast; I just keep pfSense updated.

chronyd

Edit /etc/chrony.conf, comment out the existing pool servers and add in this line:

server 192.168.1.1 iburst

Enable and start chronyd if it’s not running and restart it if it’s already up. Then run the chronyc command to verify the change.

# systemctl enable chronyd
# systemctl start chronyd

or

# systemctl restart chronyd

Results:

# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* pfSense.internal.pri          3   6    17     1    -13us[ -192us] +/-  109ms

ntpd

Edit /etc/ntp.conf, comment out the existing pool servers and add in this line:

server 192.168.1.1

Enable and start ntpd if it’s not running and restart it if it’s already up. Then run the ntpq command to verify the change.

# service ntpd start
# chkconfig ntpd on

or

# service ntpd restart

Results:

# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 pfSense.interna 42.91.213.246    3 u   11   64    1    0.461   -5.037   0.001
 LOCAL(0)        .LOCL.          10 l   10   64    1    0.000    0.000   0.001

Nagios

I started setting up a Nagios server. Nagios is a tool used to monitor various aspects of servers. At work we're using it as a basic ping test just to make sure we can see at a glance that servers are up. Other bits are being added in as time permits. Here I installed net-snmp and net-snmp-utils in order to build the check_snmp plugin. This gives me lots and lots of options on what to check and might let me replace some of the scripts I have in place.

SNMP Configuration

####
# First, map the community name "public" into a "security name"

#       sec.name  source          community
com2sec AllUser   default         CHANGEME

####
# Second, map the security name into a group name:

#       groupName      securityModel securityName
group   notConfigGroup v1            notConfigUser
group   AllGroup       v2c           AllUser

####
# Third, create a view for us to let the group have rights to:

# Make at least  snmpwalk -v 1 localhost -c public system fast again.
#       name           incl/excl     subtree         mask(optional)
view    systemview     included      .1.3.6.1.2.1.1
view    systemview     included      .1.3.6.1.2.1.25.1.1
view    AllView        included      .1

####
# Finally, grant the group read-only access to the systemview view.

#       group    context sec.model sec.level prefix read    write  notif
access  AllGroup ""      any       noauth    exact  AllView none   none

Unfortunately the default check_snmp command in the commands.cfg file was a bit off.

Old:

# 'check_snmp' command definition
define command{
        command_name    check_snmp
        command_line    $USER1$/check_snmp -H $HOSTADDRESS$ $ARG1$
        }

Running the program with the -h option, I found the correct options for what I needed to do:

New:

# 'check_snmp' command definition
define command{
        command_name    check_snmp
        command_line    $USER1$/check_snmp -H $HOSTADDRESS$ -C $ARG1$ -o $ARG2$ -P 2c
        }

Per the configuration, I'm using SNMP version 2c. Other than that, just pass the appropriate community string (-C) and object ID (-o) for the check you want to run.

Uptime:

In the linux.cfg file I created, I added the following check_snmp block:

define service{
        use                             local-service         ; Name of service template to use
        host_name                       [comma separated list of hosts]
        service_description             Uptime
        check_command                   check_snmp!CHANGEME!.1.3.6.1.2.1.1.3.0!
        }
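
To make the macro plumbing concrete: Nagios splits the check_command value on '!' and substitutes the pieces into the command_line as $ARG1$ (community) and $ARG2$ (OID). A little sketch emulating that expansion — the host address here is made up:

```shell
#!/bin/bash
# Sketch: emulate how Nagios expands the check_command above into the
# check_snmp command_line. The '!'-separated fields after the command
# name become $ARG1$ (community string) and $ARG2$ (OID).
expand_check() {
  local cmdline="$1" host="$2"
  IFS='!' read -r _name arg1 arg2 _rest <<EOF
$cmdline
EOF
  echo "check_snmp -H ${host} -C ${arg1} -o ${arg2} -P 2c"
}

# Hypothetical host address; community and OID from the service above:
expand_check 'check_snmp!CHANGEME!.1.3.6.1.2.1.1.3.0!' 192.168.1.20
# -> check_snmp -H 192.168.1.20 -C CHANGEME -o .1.3.6.1.2.1.1.3.0 -P 2c
```
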

Possibly Interesting OIDs:

Network Interface Statistics

  • List NIC names: .1.3.6.1.2.1.2.2.1.2
  • Get Bytes IN: .1.3.6.1.2.1.2.2.1.10
  • Get Bytes IN for NIC 4: .1.3.6.1.2.1.2.2.1.10.4
  • Get Bytes OUT: .1.3.6.1.2.1.2.2.1.16
  • Get Bytes OUT for NIC 4: .1.3.6.1.2.1.2.2.1.16.4

Load

  • 1 minute Load: .1.3.6.1.4.1.2021.10.1.3.1
  • 5 minute Load: .1.3.6.1.4.1.2021.10.1.3.2
  • 15 minute Load: .1.3.6.1.4.1.2021.10.1.3.3

CPU times

  • percentages of user CPU time: .1.3.6.1.4.1.2021.11.9.0
  • percentages of system CPU time: .1.3.6.1.4.1.2021.11.10.0
  • percentages of idle CPU time: .1.3.6.1.4.1.2021.11.11.0
  • raw user cpu time: .1.3.6.1.4.1.2021.11.50.0
  • raw system cpu time: .1.3.6.1.4.1.2021.11.52.0
  • raw idle cpu time: .1.3.6.1.4.1.2021.11.53.0
  • raw nice cpu time: .1.3.6.1.4.1.2021.11.51.0

Memory Statistics

  • Total Swap Size: .1.3.6.1.4.1.2021.4.3.0
  • Available Swap Space: .1.3.6.1.4.1.2021.4.4.0
  • Total RAM in machine: .1.3.6.1.4.1.2021.4.5.0
  • Total RAM used: .1.3.6.1.4.1.2021.4.6.0
  • Total RAM Free: .1.3.6.1.4.1.2021.4.11.0
  • Total RAM Shared: .1.3.6.1.4.1.2021.4.13.0
  • Total RAM Buffered: .1.3.6.1.4.1.2021.4.14.0
  • Total Cached Memory: .1.3.6.1.4.1.2021.4.15.0

Disk Statistics

  • Path where the disk is mounted: .1.3.6.1.4.1.2021.9.1.2.1
  • Path of the device for the partition: .1.3.6.1.4.1.2021.9.1.3.1
  • Total size of the disk/partition (kBytes): .1.3.6.1.4.1.2021.9.1.6.1
  • Available space on the disk: .1.3.6.1.4.1.2021.9.1.7.1
  • Used space on the disk: .1.3.6.1.4.1.2021.9.1.8.1
  • Percentage of space used on disk: .1.3.6.1.4.1.2021.9.1.9.1
  • Percentage of inodes used on disk: .1.3.6.1.4.1.2021.9.1.10.1

System Uptime OIDs

  • .1.3.6.1.2.1.1.3.0
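
One wrinkle: sysUpTime comes back as TimeTicks, which are hundredths of a second since the SNMP agent started, so a wrapper script usually wants to convert before reporting. A quick sketch:

```shell
#!/bin/bash
# sysUpTime (.1.3.6.1.2.1.1.3.0) is returned in TimeTicks: 1/100ths of
# a second since the agent started. Convert to whole days.
ticks_to_days() {
  echo $(( $1 / 100 / 86400 ))
}

ticks_to_days 1728000000   # 200 days' worth of ticks -> prints 200
```
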

One problem with the OIDs is that they return point-in-time statistics, which aren't much use without a trigger. They're really more useful with MRTG, where you can see what things look like over a period of time. What you really want is to check when stats exceed expected norms.

MRTG

This is primarily a network-traffic monitoring tool, but I plan to use it to track other system statistics as well: disk space, swap, memory, and whatnot. It's not configured just yet, but that's my next configuration task.


Home Network and Internet Access

Over the years I’ve had a few network configurations.

In the mid-80’s I ran a BBS and connected to other BBSs through a transfer scheme. My system would be accessed at all hours and I had the biggest collection of utilities and games in the area at the time.

In 1989 I got a job at Johns Hopkins APL, which came with direct internet access. With that, I started poking around for a way to get access at home without going through a pay service like AOL, CompuServe, or Prodigy. One of my coworkers at jhuapl recommended PSINet, and I was finally able to get direct access to the Internet.

In the mid-90’s Comcast started offering access with a mixed cable/dial-up configuration and I switched over. Faster download speeds were a big draw, and it cost less than PSINet.

Comcast eventually offered full cable access. At that point I repurposed an older system as an internet gateway. It gave me the ability to play with Red Hat at home plus I’d just become a full time Unix admin at NASA. I had a couple of 3Com ethernet cards in the old computer and was running Red Hat 3 I think. It worked well and I was able to get access to the Internet and Usenet.

One of the problems, though, was that Red Hat or the 3Com cards didn’t support CIDR. I’d gone through a system upgrade again and put the old system to the side. I built a new system using Linksys cards (I still have one in a package 🙂 ) and Mandrake Linux as a gateway and firewall. I learned about iptables and built a reasonably safe system. I had my schelin.org website on the server, using DynDNS to make sure it was accessible, and hosted pictures there, but since it was on Comcast user space, not everyone was able to see the pictures. It was about this time that I started looking into a hosted system and went to ServerPronto for a server hosted in a data center in Florida. I configured the local system (now running Mandriva after the merge) to access the remote server and configured that server with OpenBSD.

In 2004 I bought an Apple Extreme wireless access point (WAP) for my new MacBook G4. I added a third network interface to the gateway and blocked the laptop from the internal network (only permitted direct internet access). By this time I also had a Linksys 10/100 switch so other systems could directly access the ‘net.

In 2008 it was time to switch systems again. My old XP system, which was reasonably beefy with 300 Gigs of mirrored disk space and 16 gigs of RAM, was converted over to be a firewall, and the older box was wiped and disposed of at the computer recycling place. I installed Ubuntu to muck around with and configured its firewall. It was still running three network cards, plus a new Apple Extreme since the old one tanked.

In November 2015, I was caught by one of the Microsoft “Upgrade to Windows 10” dialog boxes and upgraded my Windows 7 Pro system with Windows 10. For months there were problems with some of the games I played and other issues with drivers.

Around February 2016, the Virtualization folks at work were in the process of replacing their old VMware systems. These systems are pretty beefy, as they run hundreds of virtual machines. A virtual machine is a fully installed Unix or Windows box that uses only a slice of the underlying physical system. It greatly reduces time and cost in that I can get a VM stood up pretty quickly and don’t have to add in the purchase and racking of a physical system. Very efficient.

Anyway, they were decommissioning the old gear and one of the guys asked me if I was interested in one of the servers. I was a bit puzzled because I didn’t know they were decommissioning old systems and thought it would be on my desk or something. But I said sure and looked at my desk to see where I’d put it. But I received the paperwork and it was actually transferring the system to my ownership. Belongs to me, take it home. Woah! I signed off and picked up the system from the dock a few days later.

The system is a Dell R710 with 192 Gigs of RAM, two 8-core Xeon X5550 2.67 GHz CPUs, two 143 Gig 10,000 RPM drives set up as a RAID 1 (mirror), four 750 Gig 7,200 RPM drives, four onboard Ethernet ports, one 4-port 1 Gigabit PCI card, one 10 Gigabit PCI card, and two 2-port Fibre Channel HBA PCI cards.

Holy crap!

I immediately set it up as a gateway and firewall using CentOS 7. I’d recently received my Red Hat Certified System Admin certification and need to try for my Red Hat Certified Engineer certification. This gave me something to play on. I set up the firewall and got all my files transferred over. The old box (XP then Ubuntu) is sitting under my desk.

In March of 2016, I bought a new system entirely, in part because of the issues I was having with my 2008 system and in part because of a decent tax refund. Over the years I’d added a couple of 2 TB drives to the 2008 system, so they were transferred over to the new system. I also have an external 3 TB drive that stopped working for some reason.

I moved the 2008 system away and got the new one up and running. I had to do some troubleshooting as there were video issues but it’s now working very well.

But in mid summer the Virtualization folks asked me again if I wanted a system. They were still decommissioning systems. Initially I declined, as the one I have works very well for my needs, but one of my coworkers strongly suggested snagging it and setting up an ESX host running VMware’s vSphere 6. I’d been mucking about with Red Hat’s KVM without much success, so I went back and changed my mind. Sure, I’ll take the second system.

The new system is a Dell R710 again. It had 192 Gigs of RAM, but the guy gave me enough to fully populate the system to 288 Gigs. It also had two 6-core Xeon X5660 2.8 GHz CPUs, two 143 Gig 10,000 RPM drives, four 750 Gig 7,200 RPM drives, four onboard Ethernet ports, one 4-port 1 Gigabit PCI card, one 10 Gigabit PCI card, and two 2-port Fibre Channel HBA PCI cards, just like the first one. One of the drives had failed, though. At his suggestion, I purchased five 3 TB SATA drives. This gave me 8 TB of space and a spare SATA drive in case one fails, plus the remaining three 750 Gig drives are available to the first system in case a drive fails there.

I configured the new system with VMware and created a virtual machine firewall. I created a VM to replace the full physical system and copied all the files from the physical server over to VMs on the new system. With all that redundancy, the files should be safe over there. They’re plugged into a UPS, as is my main system; I’ve been using UPSs for years, since I kept losing hardware to brownouts and such in Virginia.

Once everything was copied over, I converted the first system into an ESX host. That one now holds my sandbox environment, mirroring the work environment so I can test out scripts, Ansible, and Kubernetes/Docker.

My sandbox consists of VMs which have been set up for three environments, and use a naming scheme that tells me what major OS version is running.

For Ansible testing, at site 1, I have an Ansible server, 2 utility servers, and 2 pairs each of CentOS 5, 6, and 7 servers (2 db and 2 web). 15 servers.

At site 2, I also have an Ansible server, 2 utility servers, and 2 pairs each of Red Hat 6 and 7 servers (2 db and 2 web). 11 servers. And Red Hat will let you register Red Hat servers on an ESX host for free, which is excellent.

For off the wall Ansible testing, at site 3, I have just a pair of servers for current versions of Fedora, Ubuntu, Slackware, SUSE, FreeBSD, OpenBSD, Solaris 10, and Solaris 11 (1 db and 1 web). 16 servers.

For Kubernetes and Docker testing, I have 3 master servers and 5 minion servers. 8 servers.

So far, 50 servers for the sandbox environment.

For my personal sites, I have a firewall, a development server, 3 host servers for site testing, a staging server, a remote backup server, a local Samba backup server, a movie and music server, a Windows XP server, a Windows 7 server, and 2 Windows 2012 servers for Jeanne’s test environment. In general, these were all on the XP server I had before I got the R710. The ability to set up VMs lets me better manage the various tasks including rebooting a server or even powering it off when I’m not using it.

That’s 13 more servers, for 63 total.

But wait, there’s more 🙂

In September, the same coworker made a Sun 2540 available. This is a drive array he got from work under the same circumstances as my R710s. He’s a big storage guy, so he had a lot of storage gear at home. I picked it up along with instructions, drive trays (no drives, though), and fiber to connect it to the two ESX systems. Fully populated with 3 TB drives, it would give me 36 TB of raw disk space. Since RAID 5 loses a drive, a single RAID 5 would give me 33 TB; however, the purpose here is to present space to both systems, so I’d need to slice it up. I checked online and purchased six 3 TB drives for 18 TB raw (15 TB after RAID), and the same guy pointed us to someone in Colorado Springs selling 18 2 TB drives. I snagged 8 and finished populating the array with six 2 TB drives for 12 TB raw (10 TB after RAID), giving me 25 TB of available space for the two ESX systems. Because of the age of the 2540, I have Solaris 10 installed so I can run the Oracle management software.

As the 3 TB external drive had tanked for some reason, I extracted the drive from the case and installed it in the Windows 10 system. I’d wanted to mirror the two 2 TB drives but couldn’t while there was data on them. With the 3 TB drive, I can move everything off one 2 TB drive and then mirror the pair for safety.

I’ve come a long way from a single system with maybe 60 Megabytes of disk space up to a Home Environment with 70 Terabytes of raw disk space.

And of course, more will come as tech advances.
