Down-Rev Oracle JDK/JRE 8 to JDK/JRE 7 on Ubuntu 14.04

PhpStorm 8 failed to execute correctly.  JetBrains informed me that the new release does not play well with Oracle JDK version 8 -- this post walks you through down-rev'ing JDK 8 to JDK 7, after which PhpStorm 8 performed flawlessly.

Part 4: Installing Apache Thrift: Linux Development Environment

Previously, we dealt with getting a working LAMP development environment up and running on a fresh CentOS 6 install.  We next dealt with the installation of PHPStorm and our JDK issues.

In this, and the next issue, I'm going to talk about the Thrift framework and getting it installed and running.

Thrift was originally developed at Facebook, was open-sourced in 2007, and became part of the Apache Incubator the next year.

Thrift, according to Apache, is "a software framework for scalable cross-language services development. It combines a software stack with a code generation engine to build services that work efficiently and seamlessly between C++, Java, Python, PHP, Ruby, Erlang, Perl, Haskell, C#, Cocoa, JavaScript, Node.js, Smalltalk, and OCaml."

What it is, in plain speak, is an API framework for your LAMP application.

Why I want it:  I want to use Thrift for our project because of the nature of the project.  (A social-networking concept.)  Because the application will rely heavily on data-storage calls, I've decided to implement the data access layer as an API instead of a more-traditional OOP model.  Thrift, as the API framework, allows me complete freedom on the back-end of the API.  I can implement the API in a variety of languages, although I'll probably use PHP.

Thrift also provides me with a strongly-typed interface to the API.  Like XML-RPC, calls to the API are well-defined beforehand and must comply with the typed definition of both the methods used, and the data exchanged to/from said methods.
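To make that strong typing concrete, here's a minimal, hypothetical Thrift IDL file -- the struct, service, and method names are invented for illustration, not part of any real project:

```thrift
// userstore.thrift -- a hypothetical interface definition
struct User {
  1: i32    id,
  2: string name,
  3: string email
}

service UserStore {
  // The compiler enforces these argument and return types
  // in every generated client and server stub
  User getUser(1: i32 id),
  i32  createUser(1: User newUser)
}
```

Running the Thrift compiler over a file like this generates typed client and server stubs for each target language, so a PHP back-end and, say, a Python caller share exactly the same contract.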

My personal experience with Thrift is limited -- I used it as an API for a product concept at a former employer.  The calling application would invoke the API and make requests to the API which, in turn, would do a "bunch of stuff" and return a well-defined "data ball" (a json object) back to the calling stub for processing and display.

The other concept that makes me embrace Thrift as the controller for my LAMP application is that I can completely encapsulate the data layer from the front-end developers.  They do not need to know if the data is stored within mongodb, mysql, or a flat file.  All they need is the data.  The query language is hidden; front-end developers should not need to write data-access code.

I'll talk more about the glories of Thrift later.  For now, let's just get it installed and running...

On our Linux system, we have to do some preliminary installation of packages first.  Luckily, if you hit the Thrift Wiki, you'll find pretty much everything you need to do a successful install.  Be warned, however.  Sparseness of documentation could easily be one of the hallmarks of Thrift.  Read carefully, and then read again before punching the enter key on your keyboard.  Make sure you understand what it is you're about to do.

Ok.  Let's get some non-LAMP development tools installed.  Our first command will install most of the prerequisite packages needed by Thrift:

[cc lang='bash' line_numbers='false']
#  sudo yum install automake libtool flex bison pkgconfig gcc-c++ boost-devel libevent-devel zlib-devel python-devel ruby-devel
[/cc]


This will install the base development packages you're going to need.  Once this has completed, you should also install the OpenSSL development libs, as the build will fail without them.  (At least, it failed on my install.)

[cc lang='bash' line_numbers='false']
#  sudo yum install openssl-devel.x86_64
[/cc]


Installing this package will also pick up all the dependent packages you'll need to complete the install.

Next, download the Thrift tarball from the site and move the package somewhere within what will become your DocumentRoot path for Apache2.

[cc lang='bash' line_numbers='false']
#  tar xvzf thrift-0.7.0.tar.gz
[/cc]


Once you've expanded the tarball, cd into the thrift directory and follow the instructions to make the Thrift packages and libraries.  I did this pretty much exactly as told and my installation went without a problem.

At this point, we've only built and installed the Thrift libraries (installed in /usr/lib, I believe...).  In the next installment, we're going to install the PHP src directory and make it visible to our application's docRoot.

Part 3: Creating Linux Development Environment (PHPStorm and the JDK)

I stopped working yesterday on the installation because I hit a pothole installing PHPStorm by JetBrains.

As I mentioned in the previous article, and in case you're just tuning in, I am first working towards a LAMP development environment on an older PC running 64-bit Linux.  We've decided on CentOS 6 as the base distribution and installed the LAMP stack yesterday.  I installed the PHPStorm package but hit a snag when I received an error message telling me that it required the JDK runtime ... thingys.  (Whatever - I assiduously avoid Java.)

I installed the openjdk packages with yum and got PHPStorm to start-up, albeit with many dire warnings and threats to the graphics system.  Apparently PHPStorm is comfortable running only with the jdk from SUN/Oracle.

I then downloaded and RPM'd the SUN/Oracle version of the JDK and restarted.  What happened next were error messages telling me that I needed to set up the Java (JDK) environment correctly as, now, the two were conflicting with each other.

[cc lang='bash' ]
ERROR: Cannot start WebIde. No JDK found to run WebIde.  Please validate either WEBIDE_JDK, JDK_HOME or JAVA_HOME environment variable points to valid JDK installation.
[/cc]


See, in the Linux world, PHPStorm is launched from a shell script.  The script checks your environment for the JDK through these variables and, if one is correctly defined, launches the IDE.
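Here's a minimal sketch of the kind of check the launcher performs.  The variable values are demo assumptions for illustration, not what the real script contains:

```shell
# Demo values -- assumptions for illustration only
WEBIDE_JDK=""
JDK_HOME="/usr/java/jdk1.7.0"
JAVA_HOME=""

# Try each variable in order; an empty value falls through to the next
JDK="${WEBIDE_JDK:-${JDK_HOME:-${JAVA_HOME}}}"

if [ -z "$JDK" ]; then
  echo "ERROR: Cannot start WebIde. No JDK found to run WebIde." >&2
else
  echo "Using JDK at: $JDK"
fi
```

This is why setting any one of the three variables in your environment is enough to make the error go away.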

There's a java-sdk configuration file located at /etc/java/java.conf -- don't make the same mistake I made of editing this file to redirect/create environment variables so they point to the SUN/Oracle version of the JDK.

The SUN/Oracle version of the Java SDK installed into /usr/java/jdk1.7.0/, which will change on your system depending, I'd assume, on your distribution and the version of the SDK.

To reconcile the conflicts, I used the yum installer to remove any traces of the openjdk -- all packages were removed and then I did a yum clean all to reset the environment.

Since I'm the only user on this system, I next cd'd into my home directory and pulled up the .bashrc file - this will modify the bash shell environment for every terminal session I start.  I added the following two lines to the .bashrc:

[cc lang='bash' line_numbers='true']
JDK_HOME=/usr/java/jdk1.7.0
export JDK_HOME
[/cc]


I exited the editor and reloaded my bash environment:

[cc lang='bash' line_numbers='false']
# . ./.bashrc
[/cc]


From there, all I need to do is start the PHPStorm shell script which launches the application and I'm good to go!

You can install the PHPStorm folder anywhere.  Using your .bashrc file, you can make an alias to the start-up shell script so that you can launch the IDE from anywhere in the CLI environment.

[cc lang='bash' line_numbers='false']
alias phpstorm='nohup /home/user/folder/PhpStorm/bin/phpstorm.sh &'
[/cc]


The nohup allows the program to ignore SIGHUP -- in other words, if you close the terminal from where you launched PHPStorm, you will not close PHPStorm as well.  The ampersand (&) at the end of the command tells the shell interpreter to launch the application as a "background" task which frees up your terminal session so that you can continue to use the shell while PHPStorm is running.
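A tiny, self-contained demonstration of what the ampersand buys you (the sleep stands in for PhpStorm):

```shell
# Launch a stand-in long-running task in the background
sleep 1 &
bgpid=$!    # PID of the background job

# The shell is immediately free to do other work
echo "shell prompt is free while job $bgpid runs"

# (With PhpStorm you would NOT wait; we do here only so the demo exits cleanly)
wait "$bgpid"
echo "background job finished"
```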

At this stage, I'm pretty much good to go for basic LAMP development.  I've got a running mySQL server, Apache2 is good to go, and PHP5 is installed.  I will enhance my environment by adding a few packages such as:

I also want to give some thought to virtual hosts -- I'll cover this topic in a future post -- within my local Apache2 environment, I'm going to want to establish several different virtual host environments, each of which points to a different DocumentRoot location (or code repository) depending on which application/environment I'm currently working on.

I'll also have to plan my filesystem repositories carefully -- for the most part, this machine will act as a subversion server for home projects while also acting as a subversion client for work projects.

Which reminds me -- in the first article in this series, I reported on the filesystem utilization following a clean Fedora 15 install.  Here's the state of the current filesystem (CentOS 6) following the LAMP stack install, and the install of the PHPStorm IDE and the SUN/Oracle JDK:

/ (root):  50GB, 6% used, 47GB available
/boot:     485MB, 11% used, 409MB available
/home:     864GB, 1% used, 820GB available
I'm looking pretty good for user filesystems, but I'll also want to check my mySQL configuration and ensure that databases are being created in the /home filesystem and not in the root (/) filesystem.
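If the databases do turn out to live on the root filesystem, relocating them is a two-line change in /etc/my.cnf -- a minimal sketch, with the caveats that the /home path below is an assumption for illustration, and the existing data directory must be copied over with mysql-user ownership before restarting mysqld:

```ini
[mysqld]
# Hypothetical relocated data directory on the large /home filesystem
datadir=/home/mysql/data
socket=/home/mysql/data/mysql.sock
```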
Ok - done for this weekend and off to play some Rift!  Hope this helps someone!
PS: If you want some detailed tutorials on installing any of the supplemental packages I listed at the end of this article, please leave a comment!

Setting-up a Linux Development Client - Part 2 - CentOS 6 Install

In the last installment, I wrote about how I decided to try Fedora Linux after a nearly 10-year hiatus from the product.  Unfortunately, as it turned out, my fears were not groundless, and I am going to scrap the install in favor of CentOS before I get in so deep that making the switch becomes prohibitive.

I am going to continue to try Gnome as my desktop, however, as I did like what I saw of Gnome under Fedora.  While I have always used KDE in the past, it was always accompanied by a wistful bit of "I'll bet the grass is greener over there..." kind of thinking.  Anyone who's ever spent any time comparing the Gnome application offerings to the KDE application offerings will agree.

Time to stop wondering and start trying.  I've downloaded the CentOS 6 x86_64 CD ISO and have booted into the desktop.  It's not nearly as polished and pretty as the Fedora desktop -- it looks more like a traditional Windows set-up, with the desktop icons falling down the left side of the screen and the top-and-bottom menu bars.  While simpler in appearance, it's also intuitive and easier to use.  Less eye-candy also means less CPU/GPU crunching, resulting in improved responsiveness.  (Dragging a window around in the Fedora desktop on my hardware platform was a bit like being on a strong hallucinogenic.  Or so I've been told.)

Anyway, I locate the "Install to Hard Drive" icon and click it...

The CentOS 6 installer opens a window in the middle of desktop (as opposed to Fedora taking over the entire desktop) and presents you with the same two start-up options: installation language and installation destination.  (As I mentioned in the previous article, CentOS is a child of Fedora.  I expect things to be similar.  Stuff working is one such expectation.)

CentOS gives me the same options as the Fedora installer - except with less eye-candy.  For example: when asking to input the root password, I'm not shown a bar indicating password strength.  I just type in my password and that's pretty much it.  Also, like the previous install, I'm not going to choose the encrypted filesystem, and I'm going to go with the defaults for filesystem partitioning.

While this is installing, I'll yak about why I've chosen these two distributions as my first two choices.  Ubuntu offers a great installation and configuration experience.  However, after messing around with Linux distributions for two decades, I can't quite shake the feeling that Ubuntu is the Garanimals of Linux installs.

Don't get me wrong - it's a great install in that everything works, is highly automated, and requires little, if any, intervention from the machine's administrator.  And that's probably what bugs me the most about Ubuntu.  As a Linux guy, I want (need) more interaction with my OS.  If I were content to let my OS run off and make all the most-important decisions without asking me, I'd use Windows.  Ubuntu fulfills a great niche - it introduces Windows users to Linux.  I'd install Ubuntu on my Dad's PC.

I've also bypassed SuSE Linux -- which is surprising considering that, for nearly a decade and a half, all I would consider running and installing was SuSE.  This flavor of Linux, like most things German, is precise, exacting, and mechanically sound.  Correct, even.  It's also overbearing, heavy-handed, and leaves deep footprints.  The other problem I have with SuSE is that it can be difficult to find packages tailored for its installation base.  While SuSE enjoys a wide variety of software, there always seem to be those few dozen packages you want to install but whose ports to the SuSE distribution you can't locate.  In that, it's like the Dewey (Malcolm in the Middle reference) of Linux installs: unprepossessing and brilliant, but relatively scarce when it comes to applicable resourcing.

I've never been a big fan of Debian simply because they move in geological timeframes when it comes to engineering releases.  Oh, look, kernel 2.26.9999 is out!  (Debian: happy with 2.123, thank you.)  Geh.  What it lacks in contemporary packaging, it more than makes up for in stability.  I, on the other hand, tend to blow through distributions like the end is near, so Debian isn't really for me.

I tried Mandriva once and, as a result, got sucked into this weird mail hell back when I was running my own DNS and MX servers.  I really tried to make it work but it just got too ... weird for me.  It may have improved in recent years but I've never had enough of it catch my eye to really care enough to revisit it.

Rebooting the CentOS 6 Live CD was better than the Fedora Live CD as CentOS actually gave me a 'reboot' option whereas Fedora would only let me 'suspend'...whatever that means...

I configured the user and the network time and then was presented with an alert: "Insufficient Memory to Start kdump" ... which made me think I had crashed the install... turns out, it was just telling me that the kdump crash-dump service itself couldn't start.

On to the login...

Well, CentOS 6 is definitely a derivative of Fedora 15.  Although the desktop is radically different, the first thing I try is FireFox -- and am immediately told that I can't access any off-site web page.  Although I can ping and resolve hosts from terminal, FireFox cannot do so from the browser.  So the same crappy DNS issue which plagues Fedora was inherited by CentOS.  Great.  Starting to get an idea of where all this is eventually going to end up...

The network configuration applet in CentOS allows me to edit and add google's nameserver and things start to work in the browser immediately thereafter. For some reason, I wasn't able to get this to work in Fedora so, bonus.  Also, my screen resolution is at the highest at 1280 x 1024 and that gives me a happy, too.

I start the software update and am informed that all my software is currently up-to-date and I do not need additional software.  That strikes me as wrong, so I run yum update from the command line as root (side note: either I didn't see the option to create my new user as an admin, or it didn't exist, but regardless, I can't sudo...) and I'm suddenly off-and-installing 237 total packages... so, clearly something in the GUI version of the software update failed, and now I'm thinking that, because I didn't have sudo privileges, it failed when my account exec'd the command.

CentOS 6 will allow you to log in graphically as root.  And thereafter it puts so many scare-ware pop-ups on the screen that you eventually, submissively, quietly and quickly edit the sudoers file and log out.  Now that my main account has sudo access, I never need to hit root again.
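For reference, granting an account full sudo access is a one-line entry in /etc/sudoers, edited via visudo (the username below is a placeholder):

```
## Allow this user to run any command, as any user, via sudo
youruser    ALL=(ALL)       ALL
```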

Quick download and now Chrome is my default browser...time to try to install some development tools...

The first package I'm going to install, from the Add/Remove Software package manager, is the MySQL server and related-files package, which is an 8.1MB download... I also have to install dependent packages for perl support, client programs, and shared libs, which is ok... PHP 5.3.2 is the next item to be installed, and I install all packages except for postgres.

At this point, I have a LAMP stack installed, but it's not running...  starting off with mysql:

[cc lang='bash' line_numbers='false']
# sudo chkconfig --level 2345 mysqld on
# sudo /etc/init.d/mysqld start
# mysql -uroot
mysql> use mysql;
mysql> update user set password=PASSWORD('yourPasswordHere') where user='root';
mysql> flush privileges;
mysql> exit;
[/cc]


This set of commands sets up mysql to run at boot time (run levels 2, 3, 4, and 5) and then starts the mysql server.  Next, you invoke mysql as root and reset the root password to something other than the default, which is nothing.

--> mySQL is now running.

For Apache, we're going to leave virtual hosts alone for a future article, and just make sure that the webserver will execute at boot, and that we can serve system information...

[cc lang='bash' line_numbers='false']
# sudo chkconfig --level 2345 httpd on
# sudo /usr/sbin/apachectl start
[/cc]


If you ps -ef | grep httpd, you'll see a list of the running Apache processes.  You can also open http://localhost in a browser window, and you should see the CentOS Apache 2 Test Page.  Now we have to confirm that we have PHP installed and running, along with a few other modules.  By default, your web server DocumentRoot is /var/www/html.  Using the terminal, cd into this directory and type the following:

[cc lang='bash' line_numbers='false']
# sudo vi snitch.php
[/cc]

Give the file the following contents:

[cc lang='php' line_numbers='false']
<?php
phpinfo();
[/cc]
This creates a little snitch file in your DocumentRoot which you can load in a browser -- it then dumps your LAMP configuration to your browser window.  At the very top of the display, it should tell you what version of PHP you're running.  (Mine reports version 5.3.2.)  Important to me, at this stage, is that I have memcache, soap, mysql, and ODBC drivers installed.

The last stage for me is to install my IDE.  I own a license for JetBrains PHPStorm which I personally prefer.  It's not freeware but if you can afford the license costs, it's probably the best IDE you can get for the price.  I use it on all environments (Mac, Windows and Linux).  I also noticed that you can install the Eclipse IDE using the software installer -- this is very similar to PHPStorm.

To get PHPStorm up and running, I need the SUN/Oracle version of the JDK -- not the openJDK.  I did get it running, but not without DIRE and URGENT messages prophesying the END OF THE WORLD, or at least of my video display, should I continue.  Point is, I did get it installed, configured, and licensed.  Then I de-installed the openJDK and went hunting for the SUN/Oracle JDK.

Which will be covered in the next installment...

Secure Access to Cloud-based Source-Code

I've had this idea for about a week now -- I want to store my working source-code tree in the cloud, securely, so that I can access it from my machine at work, or from home.  I use a laptop at work as my primary machine -- which is cool -- but I really hate lugging the damn thing back-and-forth from work to home. In my mind, it introduces risk - shoving a laptop into a backpack and trundling 70 or so miles (one-way) just doesn't seem, to me, to be the best way to treat delicate electronics, even if said device was designed to be portable.  There's also the additional liability of theft or loss of the device.

Like most geeks, my home machine makes a far better development environment because it has significantly more display real-estate, more memory, and faster everything else.  I don't have to hunt for a spare power receptacle to plug the laptop into, or work off the kitchen table because my desk is already at capacity.

So I came up with the idea (and I'm not claiming this to be original - but it does work) that if I could store my source code in the cloud, then all I'd need is a duplicate operating environment (apache, mysql and the db contents, etc.) while I ran my development source from the cloud, pushing it to the stage-server when necessary, thereby always maintaining the code in a consistent state across platforms.

I need the repository to be stored under subversion, and I want really decent encryption so that if the account gets hacked, my code isn't exposed.  (Protect corporate assets.)

Oh, and I want it to be free.   :-)

And to be large enough to store my entire project.  (I like CloudApp and DropBox, but I don't feel they offer either enough space for what I need to do, or the ability to access the remote "device" as a filesystem.)  Here we go...

I decided to use a service called ZumoDrive for my cloud storage simply because they offer 2GB of mountable file storage.  Unlike others, this storage is visible as a mounted drive on your desktop.  It's also compatible with Mac OS X, Linux, Windows, and mobile devices (Android, iPhone, and Palm Pre).

Go to their website - I've linked it above - and download the appropriate file -- they pretty much walk you through every process -- and install it on your machine.  On my Mac, the install requires only 4.7MB of disk space.  Sweet.  Once installed, you'll be asked to create an account or log in to an existing account.

I first installed this application on my desktop machine.  I created a login/password using my favorite account management tool, 1Password, and I chose a randomly generated 16-character password.  As an aside, the more I use 1Password, the more I rely on their password generator.  I'm breaking the mold of the rotating three passwords and using something that's both non-intuitive and random to protect my accounts.

Once I've created an account and logged in, I'm on the Zumo landing page, or what they term the "Dojo".  Initially, Zumo offers you 1GB of free storage, but if you take an extra couple of minutes to run through their tutorial and examples, they'll quickly bump you (promoting your account in "belts") to the maximum free storage of 2GB.  Sweet.

What you now should have is a mounted drive on your desktop that looks like any of the other connected devices.  This virtual drive is accessed like any other drive which allows you full file-system management to the device.  (You can copy and delete files and stuff.)

Before we do anything, we want to prepare the device.  I was wondering whether the data you store on the cloud device is secure, so I searched for and found this blog post from Zumo, which basically states:

The file being uploaded is transferred to the ZumoDrive server which is hosted by Amazon's EC2 (Elastic Compute Cloud).  It is done via 256-bit SSL encryption.  SSL is the same type of encryption used when you log into your bank's secure website.  The EC2 is the workhorse.  It's the liaison between the client on your computer and the ZumoDrive datacenter (which is hosted by Amazon S3; more on this below).  It also services the ZumoDrive website.

Ok, so the data you store on the device is already being encrypted.  Cool.  But what if you want it to be really, really encrypted?  Should you be this paranoid?  Consider the excerpts from this article:

"As set forth in our privacy policy, and in compliance with United States law, Dropbox cooperates with United States law enforcement when it receives valid legal process, which may require Dropbox to provide the contents of your private Dropbox ... It is also worth noting that all companies that store user data (Google, Amazon, etc.) are not above the law and must comply with court orders and have similar statements in their respective terms of service."

I am only storing source code.  But it's not my source-code -- it's the IP of my company.  Therefore, your encryption methods do not satisfy my requirements and I shall have to devise an alternative strategy.

It's called Truecrypt.

I've already extensively blogged about Truecrypt in a 3-part post so I'm not going to cover the basics here.  Go read or review the series if you need a refresher on using Truecrypt and creating secure files and volumes.

Start TrueCrypt and create an encrypted file on your Zumo device.  I'm going to create a 500MB encrypted file container on the Zumo drive.  I'm going to use a multi-pass encryption scheme with a strong hash algorithm.  I'm going to use my 1Password program to randomly generate a password to control access to this container.  I'm creating this as a FAT filesystem, fully formatted.  Total time to format was about 10 seconds.

Once I've created the file container, I have to mount it.  I do this by selecting the file from within the Zumo filesystem share, using the TrueCrypt manager/program, and clicking "Mount".  I supply my password and the encrypted file container is now mounted to my desktop.  I now have two devices mounted -- the original Zumo drive and the TrueCrypt file container within the Zumo drive.

Were someone to access my Zumo device, without the TrueCrypt module, all they will see is a 500Mb file with my container name.  Stored inside the container is what appears to be random bits.  Perfect.

I can still store files to the Zumo device, alongside my container file -- they will not be encrypted using TrueCrypt, however.

The next step is to install my source code repository.  I use the best PHP IDE in the whole multi-verse:  PHPStorm by JetBrains.  It's so good, that this checkout process will be amazingly simple and fast.  My code repository will be checked-out from, and completely maintained in, subversion by the IDE.  Sweet.

I realize you may not yet be enlightened to PHPStorm, and that's ok.  Use your inferior product to valiantly struggle to get the source code out from the repository into the TrueCrypt volume.  You may even be successful!

For the rest of us, we simply select: Checkout From Version Control -> Subversion, and select the repository (svn+ssh://...).  The tricky part, for us Mac users, is that we have to locate the filesystem destination for the source-code files under the Volumes directory beneath the root mount-point.  In the Volumes directory, you will see (at least) two volumes -- one for your Zumo drive and one for your encrypted container within the Zumo drive.

If you want your source code to be stored in the encrypted container within the Zumo drive, you have to select the encrypted container.  In the screenshot shown, my container is creatively named "NO NAME".  This is the device I will select to check out my source code into.  I'll create additional paths within the container that are proprietary to the application.

Once that's done, I click "OK" and, in the next dialog, tell PHPStorm to check-out from the HEAD and include external locations.  The final dialog asks me for version compatibility and I select 1.6.  The check-out process within PHPStorm takes off.

For my code base, checking out the entire code library and storing it to the encrypted volume took less than four minutes for slightly more than 100MB worth of files.  True, there is processing overhead in retrieving the encrypted file from storage and decrypting it on-the-fly (and the same holds true in reverse), but the files are secure and my company's IP is protected.

From this point, as long as I have both the Zumo drive and the TrueCrypt software installed, on whatever platform, I can access my source code securely from the cloud while ensuring that the source code itself remains current.

With the exception of the JetBrains PHPStorm IDE, the software used here is free: the Zumo drive client is free to use, and TrueCrypt is free and open-source.

As a final caveat, this system puts you at the mercy of your ISP -- if there's no internet access, then you'll be unable to access your files.  Also, you'll want to make sure that you stage your source code regularly.  My general rule-of-thumb is to commit only when I reach the point where I don't want to have to re-engineer and re-type in the new code, or modifications, that I would lose should something happen to my cloud repository.

Final Notes

Zumo drive maintains its files locally on your machine so that you always have offline access to your files.  Therefore, you're really not at the mercy of your ISP.  You are, however, at the mercy of your hard drive space and your connectivity upload speed.  It took me hours to upload my 500MB encrypted file container to Zumo -- at 77K/sec, it was not only painful, but it also bogged down my network speeds so that other idle entertainments (WoW) were pretty much impossible.

The upload-speed lag, for which I blame both my ISP and Zumo (throttling), meant that earning "achievements" from Zumo wasn't instantaneous -- I had to wait for network processing to complete before I could earn my "belts".  Eventually I did get up to my 2GB of free storage, so it's all good.