X11Forwarding from CentOS 6 Linux to Mac OS X Lion via SSH

In my previous post, I wrote about getting gpass (a password manager for the gnome desktop) compiled from source and running on our CentOS 6 platform.  The screenie I took of the welcome screen was a mac-i-fied version.

I had configured my Linux machine to support X11 port-forwarding over a secure shell.  It was surprisingly quick and easy to set up and execute.

I wanted to remote-display the gpass window to my Mac OS X Lion desktop because I needed to transfer passwords from my 1Password application (running on Lion) to my gpass (Linux) program.  Some of the passwords are pretty gnarly, so the only way I could guarantee transferring the data without making typos was to set up a copy-paste-friendly environment.

One quick caveat. I've noticed that, when I terminate an X11 program from my Lion shell, I can no longer use that shell to initialize another X11 applet.  I need to exit and re-start the terminal.  If you know of the work-around for this, please leave a comment/reply to this post.

For all the following commands, it is assumed you have sudo privileges on your Linux system.

The first step I took was to edit the SSH configuration files on the Linux machine.  Two files are involved: /etc/ssh/sshd_config (the server side) is where X11 forwarding is enabled, and /etc/ssh/ssh_config (the client defaults) is where trusted forwarding is allowed.  At the end of /etc/ssh/ssh_config, past the comments, there is a section labeled:

Host *


ForwardX11Trusted yes


Make sure that line is present and uncommented there, and that this line is present and uncommented in /etc/ssh/sshd_config:


X11Forwarding yes

Next, (re)start your sshd server:

# /etc/init.d/sshd restart

Stopping sshd:                                           [FAILED]
Generating SSH1 RSA host key:                            [  OK  ]
Generating SSH2 RSA host key:                            [  OK  ]
Generating SSH2 DSA host key:                            [  OK  ]
Starting sshd:                                           [  OK  ]


In case you're curious, the FAILED message in the first line of output was generated because I didn't already have sshd running on my system.

My machines run on a 192.168 subnet behind two firewalls - the firewall on my DSL modem, and the firewall on my multi-port router.  Normally, I'm not too concerned about the security of my individual machines.  (e.g.: I'm not running a software firewall on my Mac or my Linux server.)  My subnet is DHCP-served by my router, and the router is on its own subnet, DHCP-served by the DSL router/modem.

I need to obtain the current IP address of my Linux server, which I do by running the ifconfig command.
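
If the Windows habit dies hard: on Linux the command is ifconfig (or the newer ip from iproute2), not ipconfig.  A quick sketch, shown against the loopback interface only because it exists everywhere; substitute eth0 (or whatever your NIC is named) to see the LAN address:

```shell
# List the IPv4 addresses bound to an interface. "lo" is the loopback
# device; replace it with eth0 (or your NIC's name) for the real address.
ip -4 addr show lo | grep inet
# Older tooling equivalent:
# /sbin/ifconfig eth0 | grep 'inet addr'
```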

Next, I switch over to my Mac and open a terminal -- within the terminal, I enter:

iMac:~ mike$ ssh -X
The authenticity of host ' (' can't be established.
RSA key fingerprint is f9:04:2d:0e:70:3d:a7:8f:92:c0:02:69:8c:f2:e6:51.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '' (RSA) to the list of known hosts.
mike@'s password:
/usr/bin/xauth: creating new authority file /home/mike/.Xauthority
[mike@codeMonkey ~]$

At the command prompt, I now only have to enter whatever X11 command I want, and that program will be displayed on my Mac desktop.  I can even open and start an entire desktop session.  I could -- but I won't -- my Linux server only has 2 GB of RAM...

Instead, I'll open a gnome-terminal.  So, at the prompt, I simply type: gnome-terminal and I get the gnome-terminal to appear on my desktop:

That's pretty much all there is to it, as far as I could tell.  Eazy-peezy.

One last note -- once you have a terminal running on your Lion desktop, then any X11 commands, such as gpass, you enter will all be displayed on your Lion desktop.  This circumvents the one-terminal-one-applet restriction I mentioned at the top of this article.

That's pretty much it for this article -- hope this helps!

Installing gpass on CentOS 6 Linux

Over the last year I have become utterly dependent on a product called 1Password by Agile Bits software.  For those of you that are unfamiliar with this software, 1Password is a multi-platform program that manages all your passwords, in addition to other sensitive information, in an easy-to-use interface.

Originally written for the Mac, the software is now offered on iPad, iPod, 'droid, and Windows machines.  I have it installed on all available platforms.  While initially bemoaning the cost of the product - it's not cheap - I've come to depend on it for all of my password storage, my software-license management, and even the credit-card information for the card I use for online purchases and subscriptions.

Quick aside and then I'll cease the fanboi gushing: my favorite feature of the program is the password generator.  I can custom-tailor a password to be as obnoxiously long, and obfuscated, as I need and I don't ever, ever, have to type it in when challenged.  Passwords are simply copy-pasted from the 1Password program, or you can use the embedded one-click functionality of the extensions available for all major browsers.

My only complaint with 1Password is the lack of Linux support.  Since I'm using Linux as my LAMP development platform while at-home, I need a comparable password manager. I know I won't have all of the slick features of 1Password, but at least I'll be able to copy-paste long, obfuscated, passwords from the password manager into my Linux desktop applications.

So, let's get started!

There's some good tutorials already available on the 'net about doing just this - however, none I found were exactly right and, following those tutorials, I did run into several side issues.  I'll cover all those issues here so that your installation will be seamless.

Operating System: CentOS 6 Linux
Desktop GUI: Gnome
gPass version: 0.5.1
EPEL repository: 6.5

Download the gpass source into your "Downloads" directory and unpack the tarball:

[cc lang='bash' line_numbers='false']

wget http://projects.netlab.jp/gpass/release/gpass-0.5.1.tar.gz

tar xvzf gpass-0.5.1.tar.gz

cd gpass-0.5.1

[/cc]

I based my initial install of gpass on the UnixCraft blog post here.  (In the tutorial, they omitted the arguments to the tar command that un-tars the tarball and creates the gpass source directory.)

In step 1, the blog asks you to do a group install of the development tools and, secondly, install the gnome-ui, mhash, and mcrypt development libraries.  The second step failed for me following the successful install of the gnome-ui as my stock yum configuration was unable to locate either the mhash or the mcrypt packages.

After googling the issue, I determined that I needed to add the EPEL repository to my yum configuration.  It's common to have several repositories in your yum catalog.  You add additional repositories by creating configuration files in /etc/yum.repos.d/.

Setting up the EPEL repository is pretty easy as they've created an rpm just for this purpose.  Make sure you have sudo privileges on your account and enter the following commands: (I'm currently in the "Downloads" directory in my $HOME.)

[cc lang='bash' line_numbers='false']

wget http://download.fedora.redhat.com/pub/epel/6/x86_64/epel-release-6-5.noarch.rpm

rpm -Uvh epel-release-6-5.noarch.rpm

[/cc]

Side note: when I'm reading how-to's on other sites that reference software versions, I'm aware that said versions may not be the current, and most stable, release available today.  I always check the repository, using a browser, before downloading to ensure I'm obtaining the latest version.

Once the rpm is installed, you'll need to edit the repository file.  Again, using sudo, edit the /etc/yum.repos.d/epel.repo file and, in the EPEL repository section, add the line priority=3 at the end of the section.
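
For reference, the [epel] section ended up looking something like this.  This is a sketch from the 6-5 release of epel-release; the exact lines in your file may differ, and only the priority line is the addition:

```ini
[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
priority=3
```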

I'm now ready to install the mhash and mcrypt packages, obtaining them from the Redhat EPEL repository.  Again, assuming sudo privileges:

[cc lang='bash' line_numbers='false']

# yum install libmcrypt-devel
# yum install mhash-devel

[/cc]

From this point, you need merely to follow the instructions in the UnixCraft blog I linked-to above, but here are the steps to finish the installation.  Again, assuming you've changed-directory to the gpass source:

[cc lang='bash' line_numbers='false']

./configure
make
make install

[/cc]

At this point, as long as you've not seen any error messages in your output, your gpass program is ready to use.  Test by typing gpass at the command line -- you should see the gpass window pop up on your desktop:

In the screen-shot to the right, those of you that are past your second cup of coffee may have noticed that my gpass window looks suspiciously like a Mac OS X version.

I am running the gpass application on my Linux server, but I am serving the display to Mac OS X Lion desktop.  I set-up the configuration to do this for two reasons.

  1. to capture and display screenies
  2. to copy-paste data from my native Mac 1Password application into my Linux gpass application.  I do NOT want to retype some of those passwords...
That's pretty much it.  I leave the exploration and use of gpass up to you.  I'll do a follow-up quick-post tutorial on how to set up X11 forwarding from Linux to your remote desktop (Mac) via secure shell.
Thanks for reading - hope this helps!


Part 5: Setting-up a Linux Development Machine: Virtual Hosts in Apache2

When I am working on a code project, I isolate that project within its own directory/repository.  Further, it matters not if I'm starting a completely new project, or if I'm branching off the trunk of an existing project.  As a means of imposing order over chaos, I isolate the project within its own sandbox, both on the filesystem and via Apache2.

To do so requires an understanding, somewhat, of the mechanics of Apache2, DNS, and your localhost.  A minimal understanding, trust me.

What it gives you in return is an isolated view of your code project from the web-server's perspective.  Cookies are isolated by domain, your document root is isolated to a single directory/repository, and the log files for that domain can be placed wherever you want and named anything you want.

What I'll provide you with in this installment is a rudimentary understanding of the mechanics behind virtual hosting using Apache2, a template configuration file to get you going, and the basic steps necessary to get the whole mess working.  Let's get started...

When you start a new project, if you're checking it out from a source-code repository, you'll typically assign it to a directory somewhere common.  For example, within your home directory, you may have a folder named "code" and beneath that folder, other folders that describe either the project or the programming language you're working in.  Doesn't really matter as the point is this:  you've isolated your code repository from everything else on your filesystem, right?

It really doesn't matter to Apache2 where you create your filesystem repository.  As long as the webserver pseudo-user has access permissions to the directory, you can access the files within that directory via a web browser.  The webserver just has to be told, for a given domain name, where the DocumentRoot for that domain lives.
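
One way to sanity-check those permissions, sketched below: walk each component of the path and look at its mode bits, since the webserver user needs the execute bit on every directory to traverse down to your files.  The path here is /usr/share/doc, purely a stand-in for wherever your repository actually lives:

```shell
# Walk a path and list each directory's permissions so you can spot one the
# webserver user can't traverse. Replace the stand-in path with your repo.
path=/usr/share/doc
dir=""
for part in $(echo "$path" | tr '/' ' '); do
  dir="$dir/$part"
  ls -ld "$dir"
done
```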

Some of you, at this point, may be asking: what's a domain name and why is it important?  Well, a domain name is simply a name you've assigned to the project to keep it separate, at least in your own head, from the other projects you may, or may not, have running on your development machine.  For example, I create a new project called newWidget and it's currently in the 1.4 revision.  I'm ready to branch and write some new features for the product so, using whatever sccs tool, I branch off the trunk and create the 1.5 branch.

I check that branch out to a directory in /lampdev/php/newWidget115.  I now need to do two basic things:

  1. invent some domain name that will be used exclusively for this project and resolve the domain to my localhost
  2. create a virtual host in Apache2 so that Apache2 knows which DocumentRoot to serve when a request arrives for http://newW115

The reason, apart from what we've already discussed, is to keep DNS resolution for these project domains on your local machine.  If you, before entering any configuration information, entered http://newW115 into a browser URL bar, chances are very good you're going to end up on a search page (I'm using Chrome) or get some sort of browser error.

So the first step is to define the new domain name (again, given that we've already checked the code out into the aforementioned directory) to the local system so that all requests to that domain are resolved locally through our name services.  To do this, we're going to sudo edit the /etc/hosts file.

This file, /etc/hosts, is the first thing checked whenever your local name service is trying to resolve a host name.  If it finds a host-to-IP alias in this file, all further attempts at resolution are halted, as it has successfully resolved the host name.  Edit /etc/hosts to resolve your new domain.  It should look something like this:

[cc lang='bash' line_numbers='false']   localhost codemonkey codemonkey.shallop.com codeMonkey.shallop.com newW115

[/cc]

The way /etc/hosts works is that you first list an IP address for the domain to resolve to - in this case, we're using, which is TCP/IP speak for your local host.  Next we list all of the domain names that are going to resolve to this IP address.  In the example above, we're resolving localhost, codemonkey, codemonkey.shallop.com, codeMonkey.shallop.com, and the new domain, newW115, all to

Whenever I type one of these domains into a web browser URL bar, my local host domain services won't go out to my network name servers to resolve the domain name -- the resolver tells the requesting service that it's  Note, too, that you can alias multiple domain names to the same machine.
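
If you want to confirm a name resolves locally without firing up a browser, getent queries the same resolver chain the rest of the system uses.  localhost is shown here only because it exists everywhere; substitute your new project domain once you've added it:

```shell
# Resolve a host name through /etc/hosts (and the rest of nsswitch):
getent hosts localhost
```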

Side note -- this is how you can blacklist certain domains from your browsing experience.  Simply resolve that domain to  But that's an article for another day...

You can also have multiple entries resolving to the same IP address.  It would have been just as correct for me to have listed my /etc/hosts file as:

[cc lang='bash' line_numbers='false']     localhost     codemonkey     codeMonkey     codemonkey.shallop.com     codeMonkey.shallop.com     newW115

[/cc]

Finally, also note that a domain extension isn't really required.  We can name our domain pretty much anything we want and as long as you universally use that spelling (and case), then it will resolve locally.

Now that the domain is resolving locally, the next step is to tell Apache2 how to handle the request.  When you type http://newW115 at the browser, the browser will query local name services and receive a response that the domain is handled locally.  Apache2 will then say: "Oh, if it's local, then where do I go to get the files and stuff?"

The configuration for Apache2 is done with virtual hosting.  Technically, you can do this without virtual hosting -- but you can only do it for one domain.  If you want to locally-host multiple domains, you have to use virtual hosting.

The Apache2 configuration file lives in /etc/httpd/conf and is named httpd.conf.  This is the main configuration file for Apache2.  Some installations use a sub-directory and store the virtual-host configuration in a separate file, usually named something like vhosts.conf, within that directory.  That's ok, too.  Apache2 is versatile that way but, for our purposes, we're going to maintain the virtual host configuration(s) within the main conf file.

However, if you wanted to use a separate file for Virtual Hosting, all you need in your httpd.conf file is the directive:

[cc lang='apache' line_numbers='false']

# Virtual hosts
Include conf/extra/httpd-vhosts.conf

[/cc]

At the very end of httpd.conf, there's a section called: Name-Based Virtual hosting.  We're going to append this virtual host configuration to the end of this file.

Allow me to side-step for a quick second.  Consider if we were to install phpMyAdmin locally on our server because this is how we want to administer our mySQL database.  We can install the program files anywhere as phpMyAdmin is just another LAMP application, right?  Were we to do that, then we would need an Alias and a <Directory> directive telling Apache2 where to look for phpMyAdmin.  The domain for phpMyAdmin would still be localhost, or, or whatever else you'd defined in /etc/hosts.  The application can live anywhere, and we're using the conf file to tell Apache2 how to find and serve it to us when requested.

[cc lang='apache' line_numbers='false']

Alias /phpMyAdmin "/opt/local/www/phpmyadmin"

<Directory "/opt/local/www/phpmyadmin">
    Options Indexes MultiViews
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>

[/cc]

What this <Directory> directive does is simply tell Apache2 where to look for phpMyAdmin if I enter something like http://localhost/phpMyAdmin in the URL bar of my browser.  It's not the same thing as giving phpMyAdmin its own domain at all.

I do this with a lot of my web applications: phpMyAdmin, mcmon, ajaxmytop, nagios, etc., simply because I don't want to remember the full path name of each application.  It's easier to type http://localhost/phpMyAdmin than it is to type http://localhost/webapps/database/phpMyAdmin.

Ok, so back to domains.  Here's the template for the virtual host serving the domain we defined in /etc/hosts, newW115:

[cc lang='apache' line_numbers='false']

<VirtualHost *:80>
    ServerName   newW115
    ServerAdmin  mshallop@gmail.com
    DocumentRoot /code/webapps/LAMP/newWidget/1-15

    DirectoryIndex index.php

    <Directory />
        Options FollowSymLinks
        AllowOverride None
    </Directory>

    <Directory /code/webapps/LAMP/newWidget/1-15>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>

    ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
    <Directory "/usr/lib/cgi-bin">
        AllowOverride None
        Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
        Order allow,deny
        Allow from all
    </Directory>

    LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
    LogFormat "%h %l %u %t \"%r\" %>s %b" common
    LogFormat "%{Referer}i -> %U" referer
    LogFormat "%{User-agent}i" agent

    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel warn

    ErrorLog  /var/logs/115_error.log
    CustomLog /var/logs/115_access.log combined

    ServerSignature On
</VirtualHost>

[/cc]



This is a pretty minimal configuration -- but it's the boilerplate template I use for all new domains and it works.  The server name, admin address, document root, and log-file paths are the lines you should change to match your environment.  Note that you can pretty much put files, such as the log files, wherever you wish.  I changed the names from my normal location but, as a rule, I maintain the entire environment outside of the root filesystem.

Once you've made your changes and saved the file, you'll need to restart Apache2 so that it will read the new configuration.  If there are errors in your configuration file, Apache2 will let you know and will refuse to start.  Make sure you've corrected all errors and, once the server successfully restarts, you should be able to type: http://newW115 into your browser URL bar and have that domain resolve locally, and serve files from the directory you specified in the httpd.conf file.

Over time, as you add additional projects and create new code-domains, you can simply append new <VirtualHost> directives to the httpd.conf file as needed.  When you expire and remove hosts and files, don't forget to remove them from the Apache configuration as well.

And that's pretty much it.  This is a simple thing to set up as we didn't delve into anything that wasn't plain-vanilla.  For example: SSL configurations, .htaccess, or the re-write engine.  That's for another day, another article.

Hope this helps...

Part 4: Installing Apache Thrift: Linux Development Environment

Previously, we dealt with getting a working LAMP development environment up and running on a fresh CentOS 6 install.  We next dealt with the installation of PHPStorm and our JDK issues.

In this, and the next issue, I'm going to talk about the Thrift framework and getting it installed and running.

Thrift was originally developed by Facebook, was entered into open source in 2007, and became part of the Apache incubator the next year.

Thrift, according to Apache, is "a software framework for scalable cross-language services development. It combines a software stack with a code generation engine to build services that work efficiently and seamlessly between C++, Java, Python, PHP, Ruby, Erlang, Perl, Haskell, C#, Cocoa, JavaScript, Node.js, Smalltalk, and OCaml."

What it is in plainspeak is an API framework for your LAMP application.

Why I want it:  I want to use Thrift for our project because of the nature of the project.  (A social-networking concept.)  Because the application will rely heavily on data-storage calls, I've decided to implement the data access layer as an API instead of a more-traditional OOP model.  Thrift, as the API framework, allows me complete freedom on the back-end of the API.  I can implement the API in a variety of languages, although I'll probably use PHP.

Thrift also provides me with a strongly-typed interface to the API.  Like XML-RPC, calls to the API are well-defined beforehand and must comply with the typed definition of both the methods used, and the data exchanged to/from said methods.
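
To make that concrete, here's a minimal sketch of what a Thrift interface definition looks like.  The struct and service names below are invented for illustration, not taken from any real project:

```thrift
// Every field and return type is declared up front, so the generated
// client stubs and server handlers are strongly typed on both sides.
struct Widget {
  1: i32    id,
  2: string name
}

service WidgetStore {
  Widget getWidget(1: i32 widgetId),
  void   saveWidget(1: Widget w)
}
```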

My personal experience with Thrift is limited -- I used it as an API for a product concept at a former employer.  The calling application would invoke the API and make requests to the API which, in turn, would do a "bunch of stuff" and return a well-defined "data ball" (a json object) back to the calling stub for processing and display.

The other concept that makes me embrace Thrift as the controller for my LAMP application is that I can completely encapsulate the data layer from the front-end developers.  They do not need to know if the data is stored within mongodb, mysql, or a flat file.  All they need is the data.  The query language is hidden; front-end developers should not need to write data-access code.

I'll talk more about the glories of Thrift later.  For now, let's just get it installed and running...

On our Linux system, we have to do some preliminary installation of packages first.  Luckily, if you hit the Thrift Wiki, you'll find pretty much everything you need to do a successful install.  Be warned, however.  Sparseness of documentation could easily be one of the hallmarks of Thrift.  Read carefully, and then read again before punching the enter key on your keyboard.  Make sure you understand what it is you're about to do.

Ok.  Let's get some non-LAMP development tools installed.  Our first command will be to install most of the pre-requisite packages needed by Thrift:

[cc lang='bash' line_numbers='false']

#  sudo yum install automake libtool flex bison pkgconfig gcc-c++ boost-devel libevent-devel zlib-devel python-devel ruby-devel

[/cc]

This will install the base development packages you're going to need.  Once this has completed, you should also install the OpenSSL development libs, as the build will fail without them.  (At least, it failed on my install.)

[cc lang='bash' line_numbers='false']

#  sudo yum install openssl-devel.x86_64

[/cc]

Installing this package will also pick-up all the dependent packages you'll need to complete the install.

Next, download the Thrift tarball from the site and move the package somewhere within what will become your DocumentRoot path for Apache2.

[cc lang='bash' line_numbers='false']

#  tar xvzf thrift-0.7.0.tar.gz

[/cc]

Once you've expanded the tarball, cd into the thrift directory and follow the instructions to make the Thrift packages and libraries.  I did this pretty much exactly as told and my installation went without a problem.

At this point, we've only built and installed the Thrift libraries (installed in /usr/lib, I believe...).  In the next installment, we're going to install the PHP src directory and make it visible to our application's docRoot.

Part 3: Creating Linux Development Environment (PHPStorm and the JDK)

I stopped working yesterday on the installation because I hit a pothole installing PHPStorm by JetBrains.

As I mentioned in the previous article, and in case you're just tuning in, I am first working towards a LAMP development environment on an older PC running 64-bit Linux.  We've decided on CentOS 6 as the base distribution and installed the LAMP stack yesterday.  I installed the PHPStorm package but hit a snag when I received an error message telling me that it required the JDK runtime ... thingys.  (Whatever - I assiduously avoid Java.)

I installed the openjdk packages with yum and got PHPStorm to start-up, albeit with many dire warnings and threats to the graphics system.  Apparently PHPStorm is comfortable running only with the jdk from SUN/Oracle.

I then downloaded and RPMd the SUN/Oracle version of the JDK and restarted.  What I got next were error messages telling me that I needed to set up the java (jdk) environment correctly as, now, the two were conflicting with each other.

[cc lang='bash']

ERROR: Cannot start WebIde. No JDK found to run WebIde.  Please validate either WEBIDE_JDK, JDK_HOME or JAVA_HOME environment variable points to valid JDK installation.

[/cc]

See, in the Linux world, PHPStorm is launched from a shell script.  It checks your environment for the JDK through these variables and, if they're correctly defined, launches the IDE.

There's a java-sdk configuration file located at /etc/java/java.conf -- don't make the same mistake I made and edit this file to re-direct/create environment variables so they point to the SUN/Oracle version of the JDK.

The SUN/Oracle version of the Java SDK installed into /usr/java/jdk1.7.0/, which will change for your system depending, I'd assume, on your distribution and version of the SDK.

To reconcile the conflicts, I used the yum installer to remove any traces of the openjdk -- all packages were removed and then I did a yum clean all to reset the environment.

Since I'm the only user on this system, I next cd'd into my home directory and pulled up the .bashrc file - this will modify the bash shell environment for every terminal session I start.  I added the following two lines to the .bashrc:

[cc lang='bash' line_numbers='true']

JDK_HOME=/usr/java/jdk1.7.0
export JDK_HOME

[/cc]

I exited the editor and reloaded my bash environment:

[cc lang='bash' line_numbers='false']

# . ./.bashrc

[/cc]

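The mechanics of that change can be sketched with a throwaway file, so nothing in your real ~/.bashrc is touched.  The JDK path is from my install and will differ on yours:

```shell
# Write the two lines to a scratch file, source it, and confirm the variable
# is exported into the current shell -- exactly what sourcing ~/.bashrc does.
cat > /tmp/jdk_env.sh <<'EOF'
JDK_HOME=/usr/java/jdk1.7.0
export JDK_HOME
EOF
. /tmp/jdk_env.sh
echo "$JDK_HOME"   # -> /usr/java/jdk1.7.0
```
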
From there, all I need to do is start the PHPStorm shell script which launches the application and I'm good to go!

You can install the PHPStorm folder anywhere.  Using your bashrc file, you can make an alias to the start-up shell script so that you can launch the IDE anywhere from the CLI environment.

[cc lang='bash' line_numbers='false']

alias phpstorm='nohup /home/user/folder/PhpStorm/bin/PhpStorm.sh &'

[/cc]

The nohup allows the program to ignore SIGHUP -- in other words, if you close the terminal from where you launched PHPStorm, you will not close PHPStorm as well.  The ampersand (&) at the end of the command tells the shell interpreter to launch the application as a "background" task which frees up your terminal session so that you can continue to use the shell while PHPStorm is running.
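
You can watch both behaviors with any long-running command standing in for the PhpStorm.sh script:

```shell
# A harmless stand-in for PhpStorm.sh: launched immune to hangups (nohup)
# and in the background (&), so the prompt returns while the job keeps going.
nohup sleep 2 >/dev/null 2>&1 &
bgpid=$!
kill -0 "$bgpid" && echo "pid $bgpid is running in the background"
```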

At this stage, I'm pretty much good to go for basic LAMP development.  I've got a running mySQL server, Apache2 is good to go, and PHP5 is installed.  I will enhance my environment by adding a few packages such as:

I also want to give some thought to virtual hosts -- I'll cover this topic in a future post -- within my local Apache2 environment, I'm going to want to establish several different virtual host environments, each of which points to a different documentRoot location (or code repository) depending on which application/environment I'm currently working on.

I'll also have to plan my filesystem repositories carefully -- for the most part, I'll be working as a subversion server for home-projects, while also working as a subversion client for work projects.

Which reminds me - in the first article in this series, I reported on the filesystem utilization following a clean Fedora 15 install.  Here's the state of the current filesystem (CentOS 6) following the LAMP stack install, and the install of the PHPStorm IDE and the SUN/Oracle JDK:

/ (root):   50gb, used 6%, 47gb available
/boot:      485mb, used 11%, 409mb available
/home:      864gb, used 1%, 820gb available

I'm looking pretty good for user filesystems, but I'll want to check my mySQL configuration and ensure that databases are being created in the /home filesystem and not in the root filesystem.

Ok - done for this weekend and off to play some Rift!  Hope this helps someone!

PS: If you want some detailed tutorials on installing any of the supplemental packages I listed at the end of this article, please leave a comment!