Monday, October 20, 2014

Packer, Vagrant and Windows

Fun day of setting up Packer with configurations my colleague put together :)
Packer is a way for you to build a Vagrant box locally with all the software and configurations you need, without having to transfer an enormous VBox around. Very neato.
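To give an idea of the shape of it, the workflow is roughly the following (the template file name here is just an example; the box path matches the post-processor output further down):

    packer build template.json                              # bake the image from scratch
    vagrant box add wwm-base builds/centos65-wwm-base.box   # import the resulting .box
    vagrant init wwm-base && vagrant up                      # boot it like any other box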
On running a simple "packer build", your OS image should build from scratch. The first problem I encounter:
The executable 'bsdtar' Vagrant is trying to run was not
found in the %PATH% variable. This is an error. Please verify
this software is installed and on the path.

What would you expect me to do? I locate bsdtar.exe in "C:\HashiCorp\Vagrant\embedded\mingw\bin" and add that directory to my PATH. Then I get another error:
The box failed to unpackage properly. Please verify that the box
file you're trying to add is not corrupted and try again. The
output from attempting to unpackage (if any):

x Vagrantfile
x box.ovf
x metadata.json
x ubuntu1404-disk1.vmdk: Write failed
Packer/bsdtar.EXE: Error exit delayed from previous errors.

Well that was useless. Long story short, it seems to be a bug that appears when upgrading Vagrant from an older version to a newer one; I had upgraded from 1.6.3 to 1.6.5. Uninstalling my current Vagrant and reinstalling 1.6.5 cleanly fixed the issue.

Getting past the initial setup, we wrote scripts to automate installation of a particular IBM product: Maximo. Here are some of the dependencies (a rough provisioning sketch follows the list):
  • Install WebSphere Application Server or Oracle WebLogic, ~2-3 GB
  • A database (I used Oracle 11g R2); a minimal install takes about 5-10 GB of space
  • Yum packages (e.g. Ant, Oracle DB pre-reqs)
  • Open file descriptor limits and kernel property changes
  • Running maxinst.sh
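Roughly, the provisioning scripts boil down to something like the sketch below. This is illustrative only: the package list, limits, kernel values and the maxinst.sh path are stand-ins, not our exact scripts.

    # Yum packages (Ant plus the usual Oracle DB prerequisites)
    yum install -y ant binutils gcc libaio libaio-devel sysstat unixODBC

    # Raise open file descriptor limits for the oracle user
    echo "oracle soft nofile 4096"  >> /etc/security/limits.conf
    echo "oracle hard nofile 65536" >> /etc/security/limits.conf

    # Kernel property changes expected by the Oracle DB installer
    echo "fs.file-max = 6815744"      >> /etc/sysctl.conf
    echo "kernel.shmmax = 4294967295" >> /etc/sysctl.conf
    sysctl -p

    # Finally, kick off the Maximo installer
    /opt/IBM/SMP/maximo/tools/maximo/maxinst.sh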
Installing WebLogic and Oracle DB doesn't actually take that long. Maxinst takes the bulk of the time, and has a tendency to fail. A couple of key notes I took for Packer:
  • The documentation suggests using a post-processor to keep "intermediary artifacts" (the vbox) like so:
    
      "post-processors": [
        {
          "output": "builds/centos65-wwm-base.box",
          "type": "vagrant",
          "keep_input_artifact": true
        }
      ]
    

    The trouble is, I still get "Deleting output directory" at the end of a failed build, which suggests "keep_input_artifact" only works if your build succeeds (I'm guessing; I never tried it). Horrible stuff: you're going to automatically delete three hours' worth of build with no way for me to keep my vbox? Not happy, HashiCorp.
  • I like to lock my screen while stuff runs in the background. With Packer? Bad idea.

Wednesday, October 8, 2014

SoapUI working with IBM JRE

In short: there is no support from SmartBear for the IBM JRE; all efforts lead to a response of "use the Sun JRE".
Why would you use the IBM JRE? In my case, to send JMS messages to WebSphere's SI Bus, where the Application Server has Global Security turned on. You are required to set these two JVM properties:
-Dcom.ibm.CORBA.ConfigURL
-Dcom.ibm.SSL.ConfigURL

If you don't do this and attempt to send a message, you get a WsnInitialContext exception.
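For illustration, the kind of thing you end up wiring into soapui-pro.sh looks like this: make it run under the IBM JRE and pass the two properties through to the JVM. The properties typically point at WebSphere's sas.client.props and ssl.client.props; all paths below are examples, not my actual setup.

    # Use the IBM JRE that ships with WebSphere (example path)
    export JAVA_HOME=/opt/IBM/WebSphere/AppServer/java
    export PATH="$JAVA_HOME/bin:$PATH"

    # Append the Global Security client config properties to the JVM options
    JAVA_OPTS="$JAVA_OPTS -Dcom.ibm.CORBA.ConfigURL=file:/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/properties/sas.client.props"
    JAVA_OPTS="$JAVA_OPTS -Dcom.ibm.SSL.ConfigURL=file:/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/properties/ssl.client.props"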
Once you've configured soapui-pro.sh to use the IBM JRE, you'll find you can't activate or use your license (even if you'd already activated it while using the Sun JRE). You'll go through the process of re-activating your license, only to be told you're missing a valid one.
After a day of trying different things, such as copying the Sun JRE's security providers across into the IBM JRE's "java.security" file, I ended up decompiling SoapUI's code. It appears SoapUI's license decryption method is "RSA - SunJCE - 512", which requires the BouncyCastle security provider. The solution was to add this line to the IBM JRE's java.security file:
security.provider.1=org.bouncycastle.jce.provider.BouncyCastleProvider
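For that line to take effect, the BouncyCastle provider jar also has to be somewhere the JRE can load it, and the remaining security.provider.N entries should be renumbered so the list stays consecutive. Dropping the jar into the IBM JRE's extension directory is the usual approach (jar name and paths are examples):

    # Make the BouncyCastle provider classes visible to the IBM JRE
    cp bcprov-jdk15on-*.jar /opt/IBM/java/jre/lib/ext/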
Voila, you can now activate your license. Although...

SoapUI Pro 5.1.2 has a gotcha when running testrunner.sh. It will attempt to validate your license as well, and requires X11 forwarding to be enabled (no matter what). So if you're like me and are running SoapUI Pro on a headless Linux environment, you're stuffed. We ended up downgrading to 5.0.0, where this X11 requirement doesn't exist.
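For reference, the license check fires even on a plain command-line run; an invocation like the one below (suite, case and project path are just examples) is enough to trip it on 5.1.2.

    # -s selects the test suite, -c the test case within it
    ./testrunner.sh -s "JMS TestSuite" -c "SendJmsMessage" /path/to/soapui-project.xml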