Home Network and Internet Access

Over the years I’ve had a few network configurations.

In the mid-’80s I ran a BBS and connected to other BBSs through a transfer scheme. My system was accessed at all hours, and I had the biggest collection of utilities and games in the area at the time.

In 1989 I got a job at Johns Hopkins APL, where I had direct internet access. With that, I started poking around for a way to get access at home without going through a pay service like AOL, CompuServe, or Prodigy. One of my coworkers at APL recommended PSINet, and I was finally able to get direct access to the Internet.

In the mid-’90s Comcast started offering access with a mixed cable/dial-up configuration and I switched over. Faster download speeds were a big draw, and it cost less than PSINet.

Comcast eventually offered full cable access. At that point I repurposed an older system as an internet gateway. It gave me the ability to play with Red Hat at home, plus I’d just become a full-time Unix admin at NASA. I had a couple of 3Com ethernet cards in the old computer and was running Red Hat 3, I think. It worked well, and I was able to get access to the Internet and Usenet.

One of the problems, though, was that Red Hat or the 3Com cards didn’t support CIDR. I’d gone through a system upgrade again and set the old system aside. I built a new gateway and firewall using Linksys cards (I still have one in a package 🙂 ) and Mandrake Linux. I learned about iptables and built a reasonably safe system. I had my schelin.org website on the server, using DynDNS to make sure it was accessible, and hosted pictures there, but since it was on Comcast user space, not everyone was able to see them. It was about this time that I started looking into a hosted system, and I went with ServerPronto for a server hosted in a data center in Florida. I configured the local system (now running Mandriva after the merger) to access the remote server and set that server up with OpenBSD.
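As a rough idea of what that iptables setup looked like, here’s a minimal sketch from memory, assuming eth0 faces the cable modem and eth1 faces the internal network (the interface names are illustrative, not my actual config):

    # Default deny inbound and forwarded traffic; allow anything outbound
    iptables -P INPUT DROP
    iptables -P FORWARD DROP
    iptables -P OUTPUT ACCEPT
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    # Let the internal LAN (eth1) out through the cable side (eth0)
    iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
    iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
    # Hide internal addresses behind the single Comcast IP
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE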

In 2004 I bought an Apple AirPort Extreme wireless access point (WAP) for my new PowerBook G4. I added a third network interface to the gateway and blocked the laptop from the internal network (it was only permitted direct internet access). By this time I also had a Linksys 10/100 switch so other systems could directly access the ’net.

In 2008 it was time to switch systems again. My old XP system, which was reasonably beefy with 300 gigs of mirrored disk space and 16 gigs of RAM, was converted over to be the firewall, and the old box was wiped and disposed of at the computer recycling place. I installed Ubuntu to muck around with and configured its firewall. It was still running three network cards, plus a new AirPort Extreme since the old one had tanked.

In November 2015, I was caught by one of the Microsoft “Upgrade to Windows 10” dialog boxes and upgraded my Windows 7 Pro system to Windows 10. For months afterward there were problems with some of the games I played, along with other driver issues.

Around February 2016, the virtualization folks at work were in the process of replacing their old VMware systems. These systems are pretty beefy, as they run hundreds of virtual machines. A virtual machine is a fully installed Unix or Windows box that uses only a slice of the underlying physical system. It severely reduces time and cost in that I can get a VM stood up pretty quickly and don’t have to deal with the purchase and racking of a physical system. Very efficient.

Anyway, they were decommissioning the old gear and one of the guys asked me if I was interested in one of the servers. I was a bit puzzled, because I didn’t know they were decommissioning old systems and I figured it would be something that sat on my desk. I said sure and looked around my desk to see where I’d put it. But when I received the paperwork, it was actually transferring the system to my ownership. Belongs to me, take it home. Woah! I signed off and picked up the system from the dock a few days later.

The system is a Dell R710 with 192 gigs of RAM; two 8-core Xeon X5550 2.67 GHz CPUs; two 143 gig 10,000 RPM drives set up as a RAID 1 (mirror); four 750 gig 7,200 RPM drives; four onboard Ethernet ports; one 4-port 1-gigabit PCI card; one 10-gigabit PCI card; and two 2-port fiber HBA PCI cards.

Holy crap!

I immediately set it up as a gateway and firewall using CentOS 7. I’d recently received my Red Hat Certified System Administrator certification and needed to try for my Red Hat Certified Engineer certification, so this gave me something to play on. I set up the firewall and got all my files transferred over. The old box (XP, then Ubuntu) is sitting under my desk.
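CentOS 7 fronts the packet filter with firewalld rather than raw iptables, so the setup looks different from the Mandrake days. It went something along these lines; this is a sketch, and the zone assignments and interface names (em1, em2) are assumptions:

    # Comcast-facing NIC goes in the external zone and masquerades
    firewall-cmd --permanent --zone=external --change-interface=em1
    firewall-cmd --permanent --zone=external --add-masquerade
    # Internal NIC goes in the internal zone
    firewall-cmd --permanent --zone=internal --change-interface=em2
    firewall-cmd --reload
    # Enable packet forwarding between the NICs
    echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-gateway.conf
    sysctl -p /etc/sysctl.d/99-gateway.conf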

In March of 2016, I bought a new system entirely, in part because of the issues I was having with my 2008 system and in part because of a decent tax refund. Over the years I’d added a couple of 2 TB drives to the 2008 system, so those were transferred over to the new system. I also have an external 3 TB drive that had stopped working for some reason.

I moved the 2008 system away and got the new one up and running. I had to do some troubleshooting as there were video issues, but it’s now working very well.

But in midsummer the virtualization folks asked me again if I wanted a system; they were still decommissioning servers. Initially I declined, as the one I have actually works very well for my needs, but one of my coworkers strongly suggested snagging it and setting up an ESX host running VMware’s vSphere 6. I’d been mucking about with Red Hat’s KVM without much success, so I went back and changed my mind. Sure, I’ll take the second system.

The new system is a Dell R710 again. It came with 192 gigs of RAM, but the guy gave me enough to fully populate the system to 288 gigs. It also had two 6-core Xeon X5660 2.8 GHz CPUs; two 143 gig 10,000 RPM drives; four 750 gig 7,200 RPM drives; four onboard Ethernet ports; one 4-port 1-gigabit PCI card; one 10-gigabit PCI card; and two 2-port fiber HBA PCI cards, just like the first one. One of the drives had failed, though. At his suggestion, I purchased five 3 TB SATA drives. This gave me 8 TB of space and a spare SATA drive in case one fails, plus the remaining three 750 gig drives are available to the first system in case a drive fails there.

I configured the new system with VMware and created a virtual machine firewall. I created a VM to replace the full physical system and copied all the files from the physical server over to VMs on the new system. With all that redundancy, the files should be safe over there. It’s plugged into a UPS, as is my main system; I’ve been using UPSs for years, ever since I kept losing hardware to brownouts and such in Virginia.
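The file copy itself was mostly a matter of rsync from the old physical box to the replacement VMs. Roughly like this, where the hostname and paths are made up for illustration:

    # Pull everything over while preserving permissions, ownership,
    # timestamps, and hard links
    rsync -avH --progress oldbox:/home/ /home/
    rsync -avH --progress oldbox:/srv/ /srv/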

Once everything was copied over, I converted the first system into an ESX host. That one now holds my sandbox environment, mirroring the work environment so I can test out scripts, Ansible, and Kubernetes/Docker.

My sandbox consists of VMs set up for three environments, using a naming scheme that tells me which major OS version each is running.

For Ansible testing, at site 1, I have an Ansible server, 2 utility servers, and 2 pairs of servers (2 db and 2 web) for each of CentOS 5, 6, and 7. 15 servers.

At site 2, I also have an Ansible server, 2 utility servers, and 2 pairs of servers (2 db and 2 web) for each of Red Hat 6 and 7. 11 servers. And Red Hat will let you register Red Hat servers on an ESX host for free, which is excellent.

For off-the-wall Ansible testing, at site 3, I have just a single pair of servers (1 db and 1 web) for current versions of Fedora, Ubuntu, Slackware, SUSE, FreeBSD, OpenBSD, Solaris 10, and Solaris 11. 16 servers.
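To give a flavor of how the sites feed into Ansible, here’s a sketch of building an inventory for site 1. The hostnames and naming scheme are invented for illustration (c5/c6/c7 standing in for the CentOS version), not my actual names:

    # Build a simple INI-style inventory for the site 1 sandbox
    cat > inventory/site1 <<'EOF'
    [utility]
    s1util1
    s1util2

    [db]
    s1c5db1
    s1c5db2
    s1c6db1
    s1c6db2
    s1c7db1
    s1c7db2

    [web]
    s1c5web1
    s1c5web2
    s1c6web1
    s1c6web2
    s1c7web1
    s1c7web2
    EOF

    # Quick connectivity check from the Ansible server
    ansible -i inventory/site1 all -m ping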

For Kubernetes and Docker testing, I have 3 master servers and 5 minion servers. 8 servers.

So far, 50 servers for the sandbox environment.

For my personal sites, I have a firewall, a development server, 3 host servers for site testing, a staging server, a remote backup server, a local Samba backup server, a movie and music server, a Windows XP server, a Windows 7 server, and 2 Windows 2012 servers for Jeanne’s test environment. In general, these functions all ran on the XP box I had before I got the R710. The ability to set up VMs lets me better manage the various tasks, including rebooting a server or even powering it off when I’m not using it.

That’s another 13 servers, for 63 total.

But wait, there’s more 🙂

In September, the same coworker made a Sun 2540 available. This is a drive array he got from work under the same circumstances as the R710s I have; he’s a big storage guy, so he had a lot of storage gear at home. I picked it up along with instructions, drive trays (no drives, though), and fiber to connect it to the two ESX systems. Fully populated with 3 TB drives, it would give me 36 TB of raw disk space. Since RAID 5 loses one drive’s worth of capacity, a single RAID 5 would give me 33 TB; however, the purpose of the array is to present space to both systems, so I needed to slice it up. I checked online and purchased six 3 TB drives for 18 TB raw, RAIDed down to 15 TB, and the same guy pointed us to someone in Colorado Springs selling eighteen 2 TB drives. I snagged 8 and finished populating the array with six of the 2 TB drives for 12 TB raw, RAIDed down to 10 TB, giving me 25 TB of available space for the two ESX systems. Because of the age of the 2540, I have Solaris 10 installed so I can run the Oracle management software.
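The capacity math, in nominal drive sizes and ignoring formatting overhead:

    # Each RAID 5 group loses one drive's worth of capacity
    echo $(( 6 * 3 - 3 ))   # six 3 TB drives -> 15 TB usable
    echo $(( 6 * 2 - 2 ))   # six 2 TB drives -> 10 TB usable
    echo $(( 15 + 10 ))     # total presented to the ESX hosts: 25 TB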

As the 3 TB external drive had tanked for some reason, I extracted the disk from its case and installed it in the Windows 10 system. I wanted to mirror the two 2 TB drives but couldn’t as long as there was data on them. With the 3 TB drive in place, I can move everything off the 2 TB drives and then mirror the pair for safety.
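In Windows, mirroring works by converting both disks to dynamic and then adding a mirror to the existing volume. Roughly, from an elevated prompt, where the disk and volume numbers are assumptions to be checked with 'list disk' and 'list volume' first:

    diskpart
    DISKPART> select disk 1
    DISKPART> convert dynamic
    DISKPART> select disk 2
    DISKPART> convert dynamic
    DISKPART> select volume 3
    DISKPART> add disk=2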

I’ve come a long way from a single system with maybe 60 megabytes of disk space to a home environment with 70 terabytes of raw disk space.

And of course, more will come as tech advances.
