Documentation And Side Projects

Years ago I started off working in the military (Military Policeman, Graphic Artist, and then Typesetter). On the personal side, I picked up a Timex Sinclair computer (Z80-based). It used a cassette player to save data and plugged into a TV. After the military I did some odd jobs (car salesman and security guard) to make ends meet and wrote programs to make each job better (a flash-card style program for the car sales job and a vehicle manager for the security job). I finally got a job as a part-time programmer, moved into full-time programming, then LAN administration, and then managing Unix systems. During my part-time programming job, I attended a set of classes that taught Structured Programming. I liked the concepts and applied them to that job, and even more at my first full-time programming job. I spent some of my own time rewriting the modules of the program I was maintaining and improving (Funeral Home Software) to make it better and easier to maintain. Each module had an initialization section plus data retrieval, display (for editing), and printing sections. There are always exceptions of course, but in general I rewrote the code to make it easy to maintain and to locate bugs (and I did find a few, plus some logic issues; someone who didn't know how to write a loop had duplicated a block of code ten times).

At that job I added installing networks and managing LANs, then went full time as a LAN admin. When that job ended, I started working as a Senior Telephone Tech Support person with great people like John McKewon, Ron Sitnick, and Omar Fink. I learned how to use a database (Paradox) to manage tickets, then moved into a job as a DBA working on r:Base and the EIS with Bill Beers, and then became the company's first full-time LAN administrator, working on 3+Share for Steve Horton. I significantly improved the configuration and kept up my programming interest by writing various tools to improve my job: a menu configuration tool that created configuration files for 3+Menus, and a tool that crossed domains to gather information. Of course I continued the personal fun too, writing programs for gaming and plugins for bulletin boards.

Next I started working at Johns Hopkins APL, where I gained access to Usenet. I was managing the Administrator's network and teaching the new folks how to administer 3+Share (they'd had classes six months earlier but hadn't touched it since, so it was a refresher). To keep myself occupied, I created a Usenet reader, and in learning how to build it, I learned quite a bit about RFCs and return codes from internet services (like Usenet). Back when I started with bulletin board software, I'd downloaded a game called Hack and then NetHack. Access to Usenet gave me lots of interesting discussion groups, one of which was the NetHack group (alt.games.nethack, I think). After loads of discussion, I was approached to see if I was interested in helping with the coding of NetHack. It was maintenance work, minor bugs in how things worked and nothing really big, but still pretty cool.

One of my personal projects was a long-time program of mine (The Computerized Dungeon Master). In looking to improve it, I found Mike Smedley's CXL text windowing library. There were others I checked out, but this was the only one I could get working. Norton had a popup tool called Norton Guides with databases on various subjects like C programming (which I was doing), and someone had provided a tool to reverse engineer the databases and create new ones. I created a popup for the CXL API so that while working on my program I could just pop it up and check the API. Along with the API, I added other bits of information I used when writing code, like line-drawing characters. When CXL was purchased by another company (TesSeRact, or TSR), they converted CXL to TCXL and worked to make it cross-platform: Windows, OS/2, Unix. While that was an interesting project, I was more interested in the PC part of course, and I spent a bit of time converting and creating a Norton Guides db for TCXL. I also spent a lot of time digging into the header files to find functions and see what they might do; there's a lot of behind-the-scenes work in the windowing software. I was rewarded with a phone call asking if I'd proofread the next major version rather than waiting for me to pop up on the BBS with errors 🙂 I was also brought up to Pennsylvania to hang out with the crew and the President of the company.

When I started working at NASA, the previous admin (Llarna Burnett) left a notebook called "Processes and Procedures" with some documentation on managing the network. Up to then, and while I was managing the International Relations LAN, I was the sole programmer or network admin, and I was honestly puzzled as to what the difference was between the two and what I'd even write. Procedures I could understand, but processes were a bit harder to grasp. I did some research to try to understand but, as a loner, I still didn't fully get it. I continued on managing the network and interacting with other groups at NASA. Eventually the environment changed and I needed to move into the centrally managed group. As my servers transitioned, I found I was going to be a Windows NT LAN manager responsible only for printers, shares, and users. This was also my first time working with others of my profession. I decided to move on as this wasn't interesting. The manager (Bill Eichman) and the Unix admin (Kevin Sarver) came to me, not wanting to lose my knowledge and skills (I guess 🙂 ), and asked if I'd consider being a Unix admin instead of moving on. I actually thought about this for about a week. I already had a job lined up as a Novell admin if I wanted it, but I'd been mucking about with Linux for several years, had access to Usenet, and had written a Usenet reader, so I took them up on the offer. The first month was spent on the Usenet server and Essential System Administration (AEleen Frisch; I even submitted a correction to her 🙂 ).

Kevin had taken over the Unix environment, documented quite a bit about the servers, and made things work better, but he'd written it all in notebooks and stuck them in file folders. Since he was the senior guy, I sometimes (most times 🙂 ) had a hard time understanding what he'd written, and in many cases the information was outdated or there simply wasn't enough of it, just additional notes ("it's obvious" 🙂 ). As a programmer type, and having used Norton Guides in the past, I much prefer having the docs in digital format and at my fingertips. So as I learned what each of the systems did, I wrote up my findings on a website.



I created a 'Version Description Document' (VDD) for each of the servers, with links to docs and scripts, plus a 'gather' directory for captures of server information, documentation on how to achieve common tasks, and troubleshooting steps on how to recover if necessary. I write docs and scripts as if I'm going to be hit by a bus, so they're as complete and clear as possible. Since I had other admins on the team (Victor Iriarte), I wanted them to be able to work if I was unavailable, but I also wanted them to feel free to make updates if needed. (We worked with Windows guys too: Ed Majkowski and Tim.)




Developers would create their application software for the various 'Codes' at NASA (a 'Code' is the same as a department) and my team would manage and install the software based on Change Packages. So that applications wouldn't get lost, and so the app guys and the Codes knew the status of the deployments, I created a web page with projects and their status. The digital docs were stored in the server directories.



One of the things Victor did was push me to expand the limits of what I documented. When I became the contract Engineer, I would review all Change Packages. Victor would ask for missing information and, in thinking about it, I'd agree: we needed the additional docs in order to properly install and manage the applications. One of the incidents I recall is a new package that required a newer version of Perl. Unfortunately the system had other applications and tools that might use Perl (we used ColdFusion for the web site), so there was a big to-do over investigating the existing applications to ensure they worked with the newer version of Perl, along with instructions to the developer to make sure they communicated those requirements in the future.

I worked at NASA for 13 years before moving to Colorado. I enjoyed working there, but a series of visits to Colorado and the sudden death of Edwin Smith (the manager brought in to convert the NASA contract from Red to Green) had me looking for a less stressful environment. The commute to downtown Washington DC and back, plus general area issues, was just pretty bad.

Amusingly, I returned a year later on a visit and, while I was there, the current Unix admin team (none of whom I knew) found out it was me and all came over shaking my hand and thanking me profusely for the documentation I'd done and the web site. It was pretty gratifying, I must say 🙂

In Colorado I got a contracting job with IBM in Boulder. While it was a stressful place to work, I continued my efforts to document systems. Of course there were plenty of docs there already, but I brought my own skills to the table and greatly expanded the effort: creating new pages, making sure the team created their own pages, and providing a gathering place for the existing documentation. I also expanded my own skills in writing web pages, including starting to use PHP. I created a pager page (select a user from a list and send out a page) and various documentation pages. The site at IBM was much more involved, and I had a hand in creating a directory structure for all servers and data capture with historical information. IBM used tools such as 'Tonic' to check environments for configuration issues and 'lssecfixes' for security scanning. I started the captures and stored the information on the central server for review.




I really did expand my skills in data capture and management and in creating processes and procedures. The pages at IBM looked much better than the NASA pages, but the functionality was pretty similar: a web site for processes and procedures plus automatic data capture. One of the cool things was the Tonic tool. I was brought in as a Solaris expert and Tonic didn't really have much of anything on the Solaris side, so I spent a lot of time updating scripts and passing them along to the folks in England who supported the tool. After a year or so, I was contacted and asked if I wanted to be on the team. The two guys who maintained it had never let anyone else join, much less anyone outside of England, so I was pretty honored to be asked.

But the contract changed and I moved to a new team. These guys supported a company in Boston, so I spent a year working from home. This was actually pretty bad in that they didn't like anyone doing things like the documentation I liked to have on hand, and I was called a 'Cowboy' for trying. I did spend some time working on the tool they used to manage the environment, but I was pretty dissatisfied with the contract, and IBM was pretty toxic in the way employees, and especially contractors, were treated. Eventually I left and joined the company I currently work at.

When I started at the current company, the team had 6 members, but documentation consisted of a block of hundreds of emails that were forwarded to new folks, plus a spreadsheet of the systems one of the guys had created, primarily in order to do data center walk-throughs. The information was pretty spread out and difficult to find. There was a lot of room for improvement 🙂

I started by creating a list of servers similar to what I had at IBM (yes, I brought my skills from NASA and IBM here, why not 🙂 ) and used that to start investigating systems. I created a wiki server using MediaWiki to start documenting our processes. IBM had a similar server-specific site, but I couldn't use that here, and I'd been poking at wiki software up to that point. The new wiki server was much better than sending a bunch of email 🙂 I also converted the spreadsheet into a simple database. I'd created a Status Management database earlier to manage the weekly status reports management wanted, so I used a similar process to create the server database. This was my first time working with MySQL, and I was starting to do some poking about with CSS and more PHP coding.

The first MySQL server database was a list of servers (basically a straight import of the spreadsheet), and editing a server brought up a single edit page. At the bottom was a password field required in order to make a change. The password was hard-coded into the page, though; the submitted password was compared to the stored one and, if they matched, the update was made 🙂 Pretty simple actually, and horribly insecure, but it got the job done.

In the meantime, I started gaming again, and part of the gaming was thinking about the TCDM program I'd worked on years before. The game was pretty intense with lots of modifiers, so I created a set of JavaScript-based pages that let me plug in information and automatically generate a final result. They actually worked pretty well and I used them when running my games.

I also used that approach when I started expanding the server database. It was still a single database with some expansion into new territory, but it was getting better. After a bit, I wanted to create a login system to manage users but had a hard time getting it understood and working, and eventually just bought a login system package. At the same time, I started the conversion to Inventory 2.0. This involved breaking various bits of the main server database into sub-tables: hardware into its own table associated with the main one, network information (interfaces) associated with a server, and so on down that path.

I also snagged the CSS file from the phpinfo() page to give me a starting point for making changes to the site: rounded corners, shadows, managing screen space. Initially the page was a fixed width, but I learned how to create a page that expands with the width of the browser. There was a bit of learning as I changed how I thought about web pages and HTML, and as I tried to get the site to work with Internet Explorer. At one point I created a convention management site because of the problems with a local gaming convention, and came upon some information that let me better design the inventory at work. I also used the work and convention info to rework my personal inventory site. There's lots and lots of bleed-over between my personal stuff and my work stuff 🙂

As I continued, I was tasked with creating a Rapid Server Deployment set of scripts. We took the individual bits from the various teams, organized them into a flowchart, and then I took that flow and created the module in the Inventory. That's been a bunch of work, but it has improved deployment speed, aside from the normal delays in getting good documentation for the project deployments and getting configuration information quickly and accurately from other groups.

Back in August of last year, I started the process of reviewing the Inventory with an eye to making things work more consistently. I already had a Coding Standard page I'd written, with templates on how to create the various bits in the Inventory to keep things consistent. One of the admins was a stickler for consistent and helpful user interfaces, so I spent time working that out in my head and then, in December, started the project of converting the existing Inventory 2.x system to Inventory 3.0. Part of that was learning a bit of jQuery as well, as much for the tabbing feature, which I thought was cool, as for the ability to theme the site. In the process of conversion, I found many errors. I monitored the web server error log, created a PHP error log, and addressed every single error I found. Initially it was a lot, but eventually I got it down to almost none (there are some things I can't fix). Not only did it make the site more consistent, it also made it simple to identify mistakes in the code.



The new site has almost 100,000 lines of PHP code (well, with comments and blank lines included 🙂 ) and over 400 scripts. The database has 114 tables (lots of little drop-down menu management tables too). I'm quite pleased with how it looks and functions, and even when a few errors pop up, they're a lot easier to fix.

And after years of "we won't use it because it's not official" comments, the powers that be have ruled that my Inventory database has the most accurate information and it will be imported into the new official 'asset tracking system'.

I did almost all the coding of the Inventory on my own time, to satisfy a personal itch and to make my job easier. That's the same thing I've done in the past with the projects at NASA and IBM: I'm working to make things easier for me and my team. It's always been, "here's the documentation, here's the list of equipment, use it or not." Part of the issue with folks using the app was that 'use it or not' attitude, but I cannot make someone else use the tools I create. All I can do is make them available.

As time moved on, I continued to upgrade the Inventory and started cleaning up the code to be more efficient. I'd been checking out other inventory programs; they were mainly desktop oriented, but there were interesting ideas. I also checked out some IPAM tools. Eventually my goal is for the Inventory to be less a tool that retrieves information and more a tool that defines the environment. It does pull information from systems, but the goal is to define systems in the Inventory and generate the scripts used to build and manage servers.

Part of this view came from exposure to DevOps and Infrastructure as Code, which included using Ansible to replace monitoring scripts. I started working with Kubernetes at 1.2 for a product deployment, which required learning about containers and orchestration and expanded my Infrastructure as Code knowledge.

At this point I left the company to join a new one, which let me expand my IaC skills to the point where I was able to take a 100-server build for a project, one that had taken about 14 months to complete, and automate it down to 90 minutes. And of course I created a ton of documentation for the process.

Posted in About Carl, Computers | Leave a comment

RHCE Objectives

System configuration and management

Network services

Network services are an important subset of the exam objectives. RHCE candidates should be capable of meeting the following objectives for each of the network services listed below:

The SELinux and server security part is covered in Chapter 11; specific service configurations have their chapters noted below. A rough sketch of the generic setup steps follows the service list.

  • Install the packages needed to provide the service
  • Configure SELinux to support the service
  • Use SELinux port labeling to allow services to use non-standard ports
  • Configure the service to start when the system is booted
  • Configure the service for basic operation
  • Configure host-based and user-based security for the service
  • HTTP/HTTPS (Chapter 14)
    • Configure a virtual host
    • Configure private directories
    • Deploy a basic CGI application
    • Configure group-managed content
    • Configure TLS security
  • DNS (Chapter 17)
    • Configure a caching-only name server
    • Troubleshoot DNS client issues
  • NFS (Chapter 16)
    • Provide network shares to specific clients
    • Provide network shares suitable for group collaboration
    • Use Kerberos to control access to NFS network shares
  • SMB (Chapter 15)
    • Provide network shares to specific clients
    • Provide network shares suitable for group collaboration
  • SMTP (Chapter 13)
    • Configure a system to forward all email to a central mail server
  • SSH
    • Configure key-based authentication
    • Configure additional options described in documentation
  • NTP
    • Synchronize time using other NTP peers
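
For the generic objectives at the top of the list, the rough sequence is the same for most services. A quick sketch, using httpd and a made-up non-standard port (8090) as the example:

    yum install -y httpd                            # install the packages for the service
    systemctl enable httpd                          # start when the system is booted
    systemctl start httpd                           # basic operation
    semanage port -a -t http_port_t -p tcp 8090     # SELinux port labeling for a non-standard port
    firewall-cmd --permanent --add-port=8090/tcp    # host-based security
    firewall-cmd --reload

User-based restrictions are per-service (Require directives for httpd, for example), so those come out of the individual chapters.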

Database services

  • Install and configure MariaDB
  • Backup and restore a database
  • Create a simple database schema
  • Perform simple SQL queries against a database – a quick sketch of all four follows
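
A minimal sketch of the database objectives (the database, table, and dump file names are made up):

    yum install -y mariadb-server
    systemctl enable mariadb && systemctl start mariadb
    mysqldump --all-databases > /root/all-dbs.sql           # backup
    mysql < /root/all-dbs.sql                               # restore
    mysql -e "CREATE DATABASE inv"                          # simple schema
    mysql inv -e "CREATE TABLE servers (id INT, name VARCHAR(64))"
    mysql inv -e "SELECT name FROM servers WHERE id = 1"    # simple query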
Posted in Computers | Leave a comment

RHCSA Objectives

Here is the list of things you should be able to do to pass the RHCSA Exam (CertDepot Site):

Understand and use essential tools

  • Access a shell prompt and issue commands with correct syntax
  • Use input-output redirection (>, >>, |, 2>, etc.)
  • Use grep and regular expressions to analyze text
  • Access remote systems using ssh
  • Log in and switch users in multiuser targets
  • Archive, compress, unpack, and uncompress files using tar, star, gzip, and bzip2
  • Create and edit text files
  • Create, delete, copy, and move files and directories
  • Create hard and soft links
  • List, set, and change standard ugo/rwx permissions
  • Locate, read, and use system documentation including man, info, and files in /usr/share/doc

Note: Red Hat may use applications during the exam that are not included in Red Hat Enterprise Linux for the purpose of evaluating candidates' abilities to meet this objective.

Operate running systems

  • Boot, reboot, and shut down a system normally
  • Boot systems into different targets manually – this is systemctl isolate with a target, e.g. systemctl isolate runlevel3.target (multi-user.target)
  • Interrupt the boot process in order to gain access to a system – press 'e' at the bootloader, add a '1' to the end of the kernel line, and boot to get into runlevel 1
  • Identify CPU/memory intensive processes, adjust process priority with renice, and kill processes
  • Locate and interpret system log files and journals – journalctl command
  • Access a virtual machine’s console
  • Start and stop virtual machines
  • Start, stop, and check the status of network services – systemctl commands; see the sketch after this list
  • Securely transfer files between systems
    • scp or sftp
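
A quick sketch of the commands behind these objectives (httpd and the PID are made-up examples):

    systemctl status httpd                  # check a service
    systemctl stop httpd && systemctl start httpd
    systemctl isolate multi-user.target     # switch targets on a running system
    journalctl -u httpd --since today       # read the journal for one unit
    top                                     # find the CPU/memory hogs, then...
    renice -n 10 -p 1234                    # adjust priority (1234 is a made-up PID)
    scp /etc/hosts user@server2:/tmp/       # secure file transfer (or use sftp)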

Configure local storage

  • List, create, delete partitions on MBR and GPT disks
  • Create and remove physical volumes, assign physical volumes to volume groups, and create and delete logical volumes
  • Configure systems to mount file systems at boot by Universally Unique ID (UUID) or label
  • Add new partitions and logical volumes, and swap to a system non-destructively – see the LVM sketch below
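
A minimal LVM walk-through for these (device names, sizes, and the mount point are made up):

    pvcreate /dev/sdb1
    vgcreate datavg /dev/sdb1
    lvcreate -n datalv -L 10G datavg
    mkfs.xfs /dev/datavg/datalv
    blkid /dev/datavg/datalv                   # grab the UUID for fstab
    echo 'UUID=<uuid-from-blkid>  /data  xfs  defaults  0 0' >> /etc/fstab
    mkdir /data && mount -a                    # mounts /data if the fstab line is right
    lvcreate -n swaplv -L 2G datavg
    mkswap /dev/datavg/swaplv && swapon /dev/datavg/swaplv   # plus a swap entry in fstab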

Create and configure file systems

  • Create, mount, unmount, and use vfat, ext4, and xfs file systems
  • Mount and unmount CIFS and NFS network file systems
  • Extend existing logical volumes
  • Create and configure set-GID directories for collaboration
  • Create and manage Access Control Lists (ACLs) – this requires the acl option on the filesystem, then setfacl/getfacl
  • Diagnose and correct file permission problems – ls -l and getfacl for permissions and ACLs (ls -Z is for SELinux contexts); see the sketch below
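
A sketch for these (the volume, group, user, and server names are made up):

    lvextend -L +5G -r /dev/datavg/datalv        # -r grows the filesystem along with the LV
    mkdir -p /shared/projects
    chgrp webteam /shared/projects
    chmod 2770 /shared/projects                  # the leading 2 is the set-GID bit
    setfacl -m u:carl:rwx /shared/projects       # ACL for one extra user
    getfacl /shared/projects
    mount -t nfs server2:/exports/data /mnt      # NFS; CIFS is mount -t cifs //server/share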

Deploy, configure, and maintain systems

  • Configure networking and hostname resolution statically or dynamically
  • Schedule tasks using at and cron
  • Start and stop services and configure services to start automatically at boot
  • Configure systems to boot into a specific target automatically – systemctl set-default multi-user.target (or runlevel3.target)
  • Install Red Hat Enterprise Linux automatically using Kickstart
  • Configure a physical machine to host virtual guests
  • Install Red Hat Enterprise Linux systems as virtual guests
  • Configure systems to launch virtual machines at boot
  • Configure network services to start automatically at boot
  • Configure a system to use time services
  • Install and update software packages from Red Hat Network, a remote repository, or from the local file system
  • Update the kernel package appropriately to ensure a bootable system
  • Modify the system bootloader

Manage users and groups

  • Create, delete, and modify local user accounts
  • Change passwords and adjust password aging for local user accounts
  • Create, delete, and modify local groups and group memberships
  • Configure a system to use an existing authentication service for user and group information (Kerberos)

Manage security

  • Configure firewall settings using firewall-config, firewall-cmd, or iptables
  • Configure key-based authentication for SSH
  • Set enforcing and permissive modes for SELinux
  • List and identify SELinux file and process context
  • Restore default file contexts
  • Use boolean settings to modify system SELinux settings
  • Diagnose and address routine SELinux policy violations – see the sketch below
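
A quick tour of the commands for these (the service and boolean are just examples):

    firewall-cmd --permanent --add-service=http && firewall-cmd --reload
    getenforce                          # Enforcing / Permissive / Disabled
    setenforce 0                        # permissive until reboot; /etc/selinux/config is permanent
    ls -Z /var/www/html                 # file contexts
    ps -eZ | grep httpd                 # process contexts
    restorecon -Rv /var/www/html        # restore default contexts
    setsebool -P httpd_enable_homedirs on
    ausearch -m avc -ts recent          # review recent policy violations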

Posted in Computers | 1 Comment

Red Hat Certifications, Part 5

On to the boot process. While I certainly know how things work, the RH7 boot process is a tad different with grub2 and systemd vs grub and init.

The startup process on 6 and earlier is pretty simple. After the BIOS or UEFI kicks the system and the MBR starts, you're in the grub bootloader, managed through /boot/grub/grub.conf (with menu.lst as a link to it). The lines are plain text, so you can make changes, duplicate entries, etc. After the kernel loads, /etc/inittab is checked for the default run level and the matching /etc/rcN.d directory is parsed, starting with the S00 scripts and on up through S99. You can add scripts in /etc/init.d with links from the correct run level directory (/etc/rc3.d for example) and once that's done, you're up. You can use the standard 'init n' command to change run levels.
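
For reference, adding and enabling a script on 6 looks something like this (the script name is made up); chkconfig creates the S/K links for you, or you can make them by hand with ln -s:

    cp myscript /etc/init.d/myscript && chmod 755 /etc/init.d/myscript
    chkconfig --add myscript              # reads the 'chkconfig:' header in the script
    chkconfig --level 35 myscript on
    ls /etc/rc3.d/ | grep myscript        # the S##myscript link shows up here
    init 3                                # switch run levels on the fly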

Startup on 7 changes things up a touch. After the BIOS or UEFI kicks the system and the MBR starts, you're still in the grub bootloader, but it's grub2 and the config file (/boot/grub2/grub.cfg now) is actually a "compiled" collection of instructions. You don't edit the file by hand; you run grub2-mkconfig to generate it from the scripts in /etc/grub.d. Those files are named with a number that sets the order in which they're added to the generated config. To add a new stanza, create a new file. For example, I created 15_windows to let me boot into my Windows 7 installation; that puts the menu option right after the 10_linux kernel entries. The /etc/default/grub file is the configuration file that lets you set things like the default kernel to load. Once you select a kernel, systemd handles the rest of the startup process. See here:
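
The basic grub2 workflow as I understand it so far (paths are for a BIOS install; UEFI keeps the config under /boot/efi):

    vi /etc/default/grub                        # GRUB_TIMEOUT, GRUB_DEFAULT, kernel options
    ls /etc/grub.d/                             # 00_header 10_linux 30_os-prober 40_custom ...
    grub2-mkconfig -o /boot/grub2/grub.cfg      # regenerate the "compiled" config
    grub2-set-default 0                         # pick the default entry (with GRUB_DEFAULT=saved)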

http://www.certdepot.net/rhel7-get-started-systemd/

You use systemctl similarly to using svcs and svcadm in Solaris.

In place of the /etc/inittab file, you use systemctl to set the default run level:

* Run Level 0 – poweroff.target
* Run Level 1 – rescue.target
* Run Level 2 – multi-user.target
* Run Level 3 – multi-user.target
* Run Level 4 – multi-user.target
* Run Level 5 – graphical.target
* Run Level 6 – reboot.target

So you'd run systemctl set-default runlevel3.target or, equivalently, systemctl set-default multi-user.target; runlevel2, 3, and 4 are all just symlinks to multi-user.target, so the ambiguity only runs one way (you can't tell which old run level was meant, but setting any of them gives you the same default).
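
A few commands to poke at the targets:

    systemctl get-default                              # current default target
    systemctl set-default multi-user.target            # or runlevel3.target, same thing
    ls -l /usr/lib/systemd/system/runlevel3.target     # -> multi-user.target (it's a symlink)
    systemctl isolate graphical.target                 # switch targets on a running system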

It’s possible the commands we’re used to are still there although inittab is now just comments. I imagine we’ll need to deal with the changes rather than try to work around them.

Interesting set of links here. One note is that scripts in /etc/init.d will be linked to script.service to be managed by systemd.

http://0pointer.de/blog/projects/systemd-for-admins-1.html
http://0pointer.de/blog/projects/systemd-for-admins-2.html
http://0pointer.de/blog/projects/systemd-for-admins-3.html

Hmm, lots of reading there, on through XXI apparently along with other interesting bits. Time for more reading…

Posted in Computers | Leave a comment

Red Hat Certifications, Part 4

Continuing on, Chapter 4 covers firewall configuration and SELinux.

Firewalls are something I’ve dealt with before over the years. In general they’re pretty simple but you do need to understand what you’re doing in order to properly set up a firewall.

SELinux is a different kettle of fish. You have basic system access: logins and groups, with sudo to provide additional root-level access where necessary. Then you have ACLs, which let you further refine access restrictions, but you need to enable them for the file system you intend to use them on. When I enabled acl on my /home filesystem, the system failed to boot this morning; I had to remove the option and reboot. I'll need to check that out and see what I did wrong.
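
My guess is I mangled the /etc/fstab entry. A minimal sketch of what it should look like (the UUID is made up; on RH7, xfs has ACL support built in, so the option mostly matters on older setups):

    # /etc/fstab - the acl option goes in the fourth (options) field
    UUID=1234-abcd   /home   ext4   defaults,acl   1 2

    mount -o remount,acl /home      # apply without a reboot
    mount | grep /home              # confirm the option took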

SELinux is even more restrictive, or controlling. Plus there are differences between RH6 and RH7, at least in the location of some of the info. RH6 has /selinux, which doesn't exist on RH7 (the equivalent lives under /sys/fs/selinux). So there will be differences between what I get from the RH6 book and the possible RH7 test. I found a RH7 (CentOS 7) page on the 'net:

https://www.digitalocean.com/community/tutorials/an-introduction-to-selinux-on-centos-7-part-1-basic-concepts

There are some differences between 6 and 7; I'll need to identify and document the changes. The main takeaway: if SELinux is in use, make sure you install the setroubleshoot package.
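
For reference, the bits I expect to need:

    yum install -y setroubleshoot-server
    sealert -a /var/log/audit/audit.log     # human-readable explanations of AVC denials
    tail /var/log/messages                  # "SELinux is preventing..." hints land here too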

Next up, chapter 5, which deals with the boot process, network configuration, and time configuration. Shouldn't be too hard. The security part is harder, in part because it's not needed in a majority of environments; firewalls are dealt with by a different team (InfoSec), but it's good to know for your personal gear, and most folks can be permitted access within the guidelines.

Posted in Computers | Leave a comment

Red Hat 7 Stuff Again

I decided to check out the new '*ctl' features of RH7 and found a few new things in addition to systemctl and journalctl. A few usage examples are at the bottom of the list.

bootctl – Manages the boot loader and firmware. ‘status’ tells the status of the boot loader. On Solaris, you don’t know if the boot loader exists on a mirrored drive so you run the command anyway just in case. This might let you confirm there’s a boot loader on a disk?

http://www.freedesktop.org/software/systemd/man/bootctl.html

hostnamectl – Manages the three hostname bits; pretty, icon, and chassis.

http://standards.freedesktop.org/icon-naming-spec/icon-naming-spec-latest.html for icon naming conventions.

journalctl – Manages the binary log file.

kdumpctl – Manage kdump? No man page and nothing from a quick google search.

keyctl – Manages various system keys; user and keyrings. Long man page but doesn’t really explain why.

http://www.ibm.com/developerworks/library/l-key-retention/

localectl – manage locales on a system (change keyboard language type for example)

loginctl – manage user logins

machinectl – vm and container manager

pactl – Manage a PulseAudio sound server. 🙂

panelctl – Manage a digital cable box?

pmcollectl – similar to collectl but provides more info (and is written in python instead of perl; wtf?)

systemctl – manages services, similar to svcs and svcadm on Solaris or service/chkconfig on Linux.

systemd-coredumpctl – get coredumps from journald

systemd-loginctl – seems similar to loginctl

teamctl – An alternative (and supposedly better) way of aggregating interfaces into a single L2 interface.

http://rhelblog.redhat.com/2014/06/23/team-driver/

timedatectl – Manages the date and time and ntp info.

udisksctl – Gets information from disks. List shows the disks, info gives more detailed information about disks.

wdctl – Manage the watchdog status.
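
A few of these in action (the hostname is made up):

    hostnamectl set-hostname web01.example.com     # sets the static/pretty/transient names
    timedatectl set-ntp true                       # turn on time sync
    localectl set-keymap us
    loginctl list-sessions
    machinectl list                                # containers/VMs registered with systemd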

Posted in Computers | Leave a comment

Red Hat Certification, Take 3

This is more of a permissions and security chapter.

First off are file permissions: executables, sticky bits, etc. Interesting that you cannot set umask to create executable files by default. The parts I'll need to remember are the special values: SUID == 4, SGID == 2, and the sticky bit == 1. Of course it's easier to just run chmod u+s, chmod g+s, or chmod o+t for the three special bits.
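
A cheat sheet for myself (the paths are made up, except passwd):

    chmod u+s /usr/local/bin/prog    # SUID (4): runs as the file's owner
    chmod g+s /shared/dir            # SGID (2): new files in the dir inherit its group
    chmod o+t /shared/dir            # sticky (1): only owners can delete their own files
    chmod 2775 /shared/dir           # same SGID idea in numeric form
    ls -l /usr/bin/passwd            # -rwsr-xr-x: the 's' is the SUID bit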

SELinux will be the big study item as it's listed as being pretty pervasive in the test. Per the book's author, you can't pass the test without knowledge of SELinux.

But the file permissions and tools are pretty common and reasonably well known to a working sysadmin.

The chattr command (and lsattr) could really cause problems with documented procedures. If during a process you find a file that can't be copied or edited, even as root, you may be stymied until you figure it out. It needs to be accounted for in the procedures.
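
The check is easy enough once you know to look:

    lsattr /etc/resolv.conf       # the 'i' flag means immutable
    chattr +i /etc/resolv.conf    # even root can't edit or remove the file now
    chattr -i /etc/resolv.conf    # clear it before running the documented procedure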

Hmm, Access Control Lists need the file system mounted with the acl option. There are lots of nice bits with ACLs, including letting just one extra person or group have access to a file or directory. Standard permissions still apply, though: if a directory is 700, then even if a file inside is ACL'd to permit editing by an account, that account can't get into the directory and so can't reach the file. You can add an ACL on the directory to permit just that user access to the directory. And you can deny access outright by setting a '---' entry for the user with setfacl.
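
The commands themselves (the user, group, file, and directory names are made up):

    setfacl -m u:carl:rw report.txt        # one extra user gets read/write
    setfacl -m g:webteam:r report.txt      # one extra group gets read
    setfacl -m u:guest:--- report.txt      # explicitly deny a user
    setfacl -m u:carl:x /restricted        # let one user traverse a 700 directory
    getfacl report.txt                     # review the entries (the '+' in ls -l output is the hint)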

Posted in Computers | Leave a comment

Red Hat Certification, Take 2

Second day of poking about at certification. I'm studying for the Red Hat 6 exam but have a dual environment set up: three CentOS 6.6 virtual servers and three CentOS 7.0 servers. It lets me look at what the book is presenting and compare it against the new OS, poking about at the differences between the two.

Chapter 2 sets up the KVM (virtual machine) environment, which I did yesterday, then reviews the kickstart process (which we do a lot), the kickstart configurator, and finally ssh. While we don't use the configurator, we do use ssh, a lot. But I'm sure there'll be a few things I know from my day-to-day work and a few things I don't, either because the way I did things already worked or because it wasn't something we needed to do.

Without taking the exams, I’m curious if it’s a “this stuff is broken, fix it” or “we need 6 servers that do web, ftp, nfs, database, and a front end firewall on these networks” type of test. Break-fix can be fun but if things are working, troubleshooting it can be a PITA.

And standard admin tools are discussed; nmap, telnet, mail, lynx (well, elinks; text web browser), and ftp (well, lftp). Pretty common stuff although with some differences in the commands to use.

Chapter 3 is standard command line tools. Should be a piece of cake.

Shells, commands, manage files, text file review (grep anyone?), man and info pages, text editors (as long as they don’t force emacs 🙂 ), services, and network management (hosts, resolv.conf, network, ifcfg-eth0, ifcfg-bond0, route-eth0). All pretty normal stuff.

Posted in Computers | Leave a comment

Red Hat Satellite

One of the things I’m tasked with at work is to be the point man for patch management on the Red Hat/CentOS infrastructure. A daunting task in part because patching, unless under critical circumstances, is almost impossible. It takes months and maybe years to get patches through testing before we’re able to even start scheduling patching.

One of the things we're looking at is the Red Hat Satellite service to manage newer versions of Red Hat (6 and newer). This leaves a bunch of our older systems out in the cold, and one of the things I tend to avoid is a solution that only works with a subset of our environment. It might not be a good way to manage things, though; purchased tools tend to only work on the vendor's product, and that has us descending into an environment with 6 or 7 "solutions", turning it into a management nightmare. Even things like configuration managers (puppet, chef, ansible, salt, cfengine, etc) have their problems and likely only work on a portion of the systems we manage.

Anyway, while looking over something else, I discovered the kickstart manager, 'cobbler'. It would let us manage Red Hat, CentOS, and other environments. I did this post to remind myself to look it over further.

cobbler

🙂

Posted in Computers | Leave a comment

Red Hat Study, Take 1

Going through the study book, which is 17 chapters; the first chapter is setting up your system and installing Red Hat. Disk layout, configuration, setting up an http and ftp server with the installation media. I got the http and ftp sites set up, no big deal. The system has SELinux enabled (sestatus), so you have to change the context of the directories or they won't be visible to the services (chcon -R --reference).
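
The directory names here are just from my setup, but the commands were along these lines:

    sestatus                                                 # confirm SELinux is enforcing
    chcon -R --reference=/var/www/html /var/www/html/inst    # copy the httpd context onto the media directory
    ls -Zd /var/www/html/inst                                # should now show httpd_sys_content_t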

Which means chapter 1 is completed.

Chapter 2 is setting up virtual machines and kickstarts. I already set up the first server (server1) using both CentOS 6.6 and CentOS 7.1 for comparison purposes. I have two more to set up in each environment; together they make up a webserver/dbserver back end protected by a firewall from the third server, an external "attacker".

Chapter 3 is basic command line stuff, should be simple enough.

Chapter 4 lists security bits.

Chapter 5 is the boot process

Chapter 6 is file system administration

Chapter 7 is Package management. Should be interesting since I do know how to use yum (package manager) but there are extra bits on managing repos and in creating rpms.

Chapter 8 is User administration.

Chapter 9 is System administration tasks. Probably printers (cups) and such.

Chapter 10 starts the RHCE chapters starting with ‘A Security Primer’, probably firewalls. I’ve managed various different firewalls including iptables. CentOS7 has firewalld though, should be different 🙂

Chapter 11 discusses services and SELinux.

Chapter 12 is more extensive System administration tasks.

Chapter 13 is Email. I’ve managed sendmail and postfix so don’t feel too out of it.

Chapter 14 is the web server. I’ve set up many web servers. I expect no surprises here.

Chapter 15 is Samba (accessing Windows servers from Linux). I did lots of work with Samba back 10 years or so ago and I've done some client scripting, so it shouldn't be too difficult.

Chapter 16 is more about File Sharing (probably NFS). I've done little NFS work; it's come up a few times but generally it requires some review before implementing.

Chapter 17 looks to touch on DNS, FTP, and log review. I run DNS servers and have for years. I just set up an ftp server at work so no biggie. And I’m a big log management type person.

After that, there are exam preps to review.

Let’s get the server environment up, shall we?

Posted in Computers | Leave a comment