Friday, January 17, 2020

Connecting PS4 controllers to PC

I use my 50" TV as my PC screen, which means my peripherals have to be wireless. I recently bought Pummel Party on Steam, and it plays better on a controller. My PC doesn't have a Bluetooth card installed, so I went hunting for options. There is the official adapter: https://www.playstation.com/en-gb/explore/accessories/dualshock-4-usb-wireless-adaptor/

But what isn't common knowledge is that most controllers - the ones I have are PS4/XBox/Nintendo Switch - can connect to a PC using Bluetooth! I went looking around in shops for the USB wireless adapter, and of course I couldn't find any. Then in a glass display at Qisahn, I found this: https://www.8bitdo.com/wireless-usb-adapter/

Great, it lets me connect all my wireless controllers wirelessly! I brought it home to try, but to my dismay Steam didn't automatically detect my controller as a PS4 controller. Before upgrading the firmware, it recognized the controller as "Game controller"; after upgrading, it became just "Controller", with none of the correct input maps! Not only that, I couldn't map the analog stick at all.

After more research I got a simple $15 USB Bluetooth adapter, then spent a couple of days fumbling to get my controller connected to it. When I tried to connect a second controller, it just wouldn't stay connected. I was on Windows 7 up until last week, and the Bluetooth driver just wasn't working. I even installed DS4Windows, but it made no difference.

Since Windows 7 support was ending on 14th January, I figured there was finally a legit reason to upgrade to Windows 10: to get this Bluetooth driver working. I had to uninstall the Bluetooth driver (and unplug the adapter) as well as my Avast Antivirus, otherwise the Windows 10 upgrade would just fail at the BIOS stage. After the upgrade, everything works pretty seamlessly!

Wednesday, December 25, 2019

Adding ActiveDirectory users to Jenkins

I work(ed) at a Windows-centric organization, and running Jenkins on Windows can cause quite a stir. When adding AD users to Jenkins under Configure Global Security, the first problem you'll encounter is that Jenkins usernames are case-sensitive. That means if your AD user is JOHN, you'll need to add both JOHN and john, otherwise when the user decides to log in with lower case, it won't work.

A bigger problem is that once you cross about 50 users, you'll start getting exceptions as documented in JENKINS-26963 - "Form too large 213549>200000". The quick fix is to add a JVM parameter to jenkins.xml. If you're running Jenkins behind Jetty as a Windows service, you instead need to edit the service parameters via: prunmgr.exe //ES//Jenkins (you can get the edit string from Windows services).

Funnily enough, if you visit https://wiki.jenkins.io/display/JENKINS/Jetty, the lone comment on that page addresses the above. Considering Jenkins is bundled with Jetty, you would think this was better documented.
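For reference, the JVM parameter in question raises Jetty's form size limit. Here's a sketch of what the `<arguments>` line in jenkins.xml ends up looking like - the 500000 value, heap size and port are placeholders, so adapt them to your install:

```xml
<!-- jenkins.xml (Windows service wrapper): raise Jetty's form content limit -->
<arguments>-Xrs -Xmx256m -Dorg.eclipse.jetty.server.Request.maxFormContentSize=500000 -jar "%BASE%\jenkins.war" --httpPort=8080</arguments>
```

Pick a limit comfortably above the size reported in the exception (213549 in my case).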

Friday, December 20, 2019

Mono-repo with Lerna

The dev team wanted to use Lerna, and defined bootstrap and postinstall commands that called it. Some lessons learned:

Lerna bootstrap's default behaviour uses npm ci, which fails if there is no package-lock.json. If you call lerna bootstrap from a postinstall hook, the lock file will never exist at that point. Keeping package-lock.json in git isn't sustainable when many developers are contributing code, so npm fails complaining the package-lock doesn't exist. We had to use --no-ci and sacrifice the speed boost.

Lerna bootstrap will also run forever if a downstream npm install demands input. In my case, semantic.json had backslashes for paths, and gulp returned a prompt - "semantic.json exists, do you want to Skip Install?". Lerna, even with loglevel silly, gets stuck at 'npm install' on the leaf package and doesn't say anything. I found advice about not putting lerna bootstrap in a postinstall command, however that is no longer applicable.

Lerna publish creates git tags for every subpackage, but only if it has changed, so you end up with a mess of tags at different versions. To make sure we have one version for everything in the repository, I use sed to replace the version in the root and per-package package.json files, and also to change the versions of local dependencies.

Lerna bootstrap calls node-gyp rebuild, which must connect to the internet to download Node, unless you point node-gyp at a local installation. The easiest way was to set the config in .npmrc.

Since Lerna uses webpack, I got away with installing webpack globally. For gulp, even when installed globally, it still complained the command didn't exist. The solution was to --save-dev it and make it a devDependency.

Some places suggest not using lerna bootstrap at all and switching to file specifiers for local dependencies. This ends up a greater headache - lerna bootstrap downloads a ton of gulp dependencies (gulp-help, gulp-concat, etc.) regardless. Don't heed that advice.
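The sed-based version bump can be sketched like this. Everything here is illustrative - the @demo scope, the packages/ layout and the version number are all made up for the example, so substitute your own repo's names:

```shell
#!/bin/sh
# Demo stand-in for a Lerna repo: a root package.json plus one leaf package
mkdir -p packages/app
printf '{ "name": "root", "version": "0.0.1" }\n' > package.json
printf '{ "name": "@demo/app", "version": "0.0.1", "dependencies": { "@demo/lib": "0.0.1" } }\n' > packages/app/package.json

NEW_VERSION="1.4.0"
# Bump the "version" field everywhere, and re-pin any local @demo/* dependency
sed -i -E \
  -e "s/\"version\": *\"[^\"]*\"/\"version\": \"$NEW_VERSION\"/" \
  -e "s/(\"@demo\/[^\"]+\": *)\"[^\"]*\"/\1\"$NEW_VERSION\"/" \
  package.json packages/*/package.json
```

Run this before lerna publish so every tag it creates carries the same version.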

Sunday, November 3, 2019

Generating SHA-256 checksums for Maven artifacts

This one is thoroughly undocumented. I didn't go through the plugin's code to work this out; it got working purely by chance...

My organization requires that a SHA-256 checksum be generated for all artifacts to be released. I've standardized on pom.xml for all projects in order to upload artifacts to Nexus. My alternative was to upload artifacts via Jenkins in a pipeline using the Nexus uploader block, however there doesn't seem to be a simple way to identify ahead of time the artifacts that would be built in the Maven dependency tree. If there were, I could just run "sha256sum" on that list... I did try the suggestion from here, but if I recall it didn't list artifacts from child modules: https://stackoverflow.com/questions/36936238/create-a-list-of-artifacts-that-are-build-by-a-maven-project
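For illustration, if that list of artifacts were obtainable, the manual approach would just be something like this - the module layout below is invented for the example:

```shell
#!/bin/sh
# Stand-in for a built multi-module project: one jar per module under target/
mkdir -p module-a/target module-b/target
echo "demo artifact a" > module-a/target/module-a-1.0.jar
echo "demo artifact b" > module-b/target/module-b-1.0.jar

# Checksum every jar any module produced, collected into one summary file
find . -path '*/target/*.jar' -exec sha256sum {} + > checksums.sha256
cat checksums.sha256
```

The catch, as above, is knowing which artifacts the build will actually produce, and where.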

Far easier to use a Maven plugin with "mvn deploy". Cue the checksum-maven-plugin:
https://checksum-maven-plugin.nicoulaj.net/examples/generating-project-artifacts-checksums.html

Looks simple enough. Quote: "This configuration will generate checksum digest files for the project main and attached artifacts".

      <plugin>
        <groupId>net.nicoulaj.maven.plugins</groupId>
        <artifactId>checksum-maven-plugin</artifactId>
        <version>1.8</version>
        <executions>
          <execution>
            <goals>
              <goal>artifacts</goal>
            </goals>
          </execution>
        </executions>
        <configuration>
          <!-- put your configurations here -->
        </configuration>
      </plugin>
When I tried this plugin, no SHA-256 checksums were generated.

Among the issues on the GitHub repository for that plugin, I found this:
https://github.com/nicoulaj/checksum-maven-plugin/issues/39


<plugin>
    <groupId>net.ju-n.maven.plugins</groupId>
    <artifactId>checksum-maven-plugin</artifactId>
    <version>1.3</version>
    <executions>                   
        <execution>
            <id>checksum-artifacts</id>
            <phase>package</phase>
            <goals>
                <goal>artifacts</goal>
            </goals>
            <configuration>
                <csvSummary>false</csvSummary>
                <shasumSummary>true</shasumSummary>
                <shasumSummaryFile>sha512-libs.sum</shasumSummaryFile>
                <individualFiles>false</individualFiles>
                <algorithms>
                    <algorithm>SHA-512</algorithm>
                </algorithms>
                <types>
                    <type>jar</type>
                </types>
                <scopes>
                    <scope>runtime</scope>
                </scopes>
            </configuration>
        </execution>
    </executions>
</plugin>
It seems the plugin had a different group ID prior to version 1.5. This version at least printed a line in the Maven output indicating the plugin was being invoked - I had thought my plugin wasn't even getting called! Still, no checksum was generated.

I'd almost given up, until my colleague started using the plugin and SHA-256s were getting generated and auto-uploaded to Nexus. After a bit of digging, I found that checksums were only generated for artifacts in the "${workspace}/target" directory. My artifacts were being generated in child module folders, and in whatever arbitrary directory the project called for - e.g. when using the maven exec plugin or antrun.

The solution was to add an extra antrun step to move any built artifact into the project root's target directory, and then use attach-artifact to include it for upload to Nexus. In some cases, I had to add an extra module to perform this after the other child modules had completed building. Sometimes the Maven reactor wouldn't order the child modules properly, especially if they were built with proprietary Maven plugins (e.g. Temenos products). Not the user-friendly experience I expected for generating checksums!
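A sketch of what that extra step looks like in the POM. The paths, artifact names and classifier are all placeholders, and the attach part uses build-helper-maven-plugin's attach-artifact goal:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>run</goal></goals>
      <configuration>
        <target>
          <!-- placeholder path: wherever the child module drops its artifact -->
          <copy file="child-module/output/child-artifact.jar" todir="${project.build.directory}"/>
        </target>
      </configuration>
    </execution>
  </executions>
</plugin>
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>build-helper-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>attach-artifacts</id>
      <phase>package</phase>
      <goals><goal>attach-artifact</goal></goals>
      <configuration>
        <artifacts>
          <artifact>
            <file>${project.build.directory}/child-artifact.jar</file>
            <type>jar</type>
            <classifier>child</classifier>
          </artifact>
        </artifacts>
      </configuration>
    </execution>
  </executions>
</plugin>
```

With the artifact sitting in the root target/ and attached to the build, the checksum plugin picks it up and "mvn deploy" pushes both to Nexus.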

Wednesday, February 22, 2017

MySQL driver and Fuse

The internet caused me a headache this past week.
All the guides on using MySQL with Fuse in a project utilizing Blueprint DSL will demonstrate something like this:
1. In your POM declare <Import-Package>com.mysql.jdbc</Import-Package>
2. Install mysql-connector to your OSGi container using "osgi:install"
3. Install your app.

Here's some links: Fuse examples on Git, http://stackoverflow.com/questions/30307288/mysql-connector-in-osgi-environment-gradle-noclassdeffounderror, http://freemanfang.blogspot.sg/2012/03/how-to-use-jdbc-driver-in-osgi.html, http://www.liquid-reality.de/display/liquid/2012/01/13/Apache+Karaf+Tutorial+Part+6+-+Database+Access

If you were getting the error "java.lang.ClassNotFoundException: com.mysql.jdbc.Driver not found", you would probably come across the above guides. However, you would be severely misled, because none of them address the fundamental issue: MySQL changed the package name of the Driver class.
I guess this is one of the downfalls of proprietary libraries - they change package names at will - and there are zero OSGi articles that mention this. So the easy fix is:
1. Change this in your POM:

      <plugin>
        <groupId>org.apache.felix</groupId>
        <artifactId>maven-bundle-plugin</artifactId>
        <version>${version.maven-bundle-plugin}</version>
        <extensions>true</extensions>
        <configuration>
          <instructions>
             <Bundle-SymbolicName>${project.artifactId}</Bundle-SymbolicName>
             <Bundle-Description>${project.description}</Bundle-Description>
             <Import-Package>com.mysql.cj.jdbc, com.ibm.mq.jms, com.ibm.mq, com.ibm.mq.constants, org.springframework.jdbc.*, org.apache.commons.dbcp, *;resolution:=optional</Import-Package>
             <DynamicImport-Package>*</DynamicImport-Package>
          </instructions>
        </configuration>
      </plugin>


2. Change this in your blueprint:

    <bean class="org.apache.commons.dbcp.BasicDataSource" id="dataSource">
        <property name="driverClassName" value="com.mysql.cj.jdbc.Driver"/>
        <property name="url" value="jdbc:mysql://localhost:3306/yourdb"/>
        <property name="username" value="user"/>
        <property name="password" value="pass"/>
    </bean>
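And for step 2 of those guides, installing the driver bundle into Fuse looks something like this from the Karaf console - the version number is just an example, but it needs to be a release with the new com.mysql.cj package layout (6.x or later):

```
JBossFuse:karaf@root> osgi:install -s mvn:mysql/mysql-connector-java/6.0.6
```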



I really hope this saves some people a heap of time!!

Tuesday, January 17, 2017

SoapUI and MQ on Windows

WebSphere MQ is a fairly complex piece of software, with concepts ranging from Connection Factories, Topics, Subscribers, Channels, etc. So getting SoapUI to connect to it requires a decent amount of technical know-how.
There's ample SoapUI documentation on picking up & sending messages to ActiveMQ, however for IBM MQ it's a bit sparse. This is an attempt to document, step by step, how to get SoapUI to quickly hook up to MQ, in Windows 7, taking into consideration UAC.
  1. Get WebSphere MQ Developer edition, if you don't have it already. I made the mistake of getting the evaluation trial.
  2. If working locally, create a queue manager in MQ, then open a command prompt, run "runmqsc <queue manager name>", and type "ALTER QMGR CHLAUTH(DISABLED)". You don't need to worry about channel authentication for development work. If you insist on keeping it (it took me some time to figure this out), you need to first create a server-connection channel (you had the option to do this on installation of MQ), then open the channel properties and under MCA, replace *NOACCESS with MUSR_MQADMIN (if using the default domain/users).
  3. Go to %SOAPUI_HOME%/bin, open up "soapui.bat", and edit this line so it becomes:
    set CLASSPATH=%SOAPUI_HOME%soapui-5.3.0.jar;%SOAPUI_HOME%..\lib\*;C:\Program Files\IBM\WebSphere MQ\java\lib\*
    Then run this bat file as Administrator.
  4. Load up a WSDL in SoapUI. To save yourself some time, use the sample SoapUI SOAP tutorial which comes with the SoapUI installation. On Windows this is put in C:/Users/username/SoapUI-Tutorials by default.
  5. Run HermesJMS from within SoapUI. Configure the path to HermesJMS when it prompts you.
  6. Create a new session. This guide can take you through *most* of the way: Guide
    But you might get numerous errors about classes not being runnable. In my classpath group I ended up with this, just to be sure.
  7. If you left channel authentication on, you need to connect through the server-connection channel. Your session configuration needs to look like this:

    (And yes that's a Mac UI, Mac users can follow these instructions)
  8. Be careful not to leave "MQQueueConnectionFactory" as the Connection Factory for the session if you're getting classpath errors, otherwise your session will become corrupt and you'll need to delete the HermesJMS hermes-config.xml file and start over.
  9. Right click session, Discover, and HermesJMS should find all your queues.
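If you'd rather script the server-connection channel setup from step 2 instead of clicking through the MQ Explorer GUI, it can be done in MQSC - a sketch, where the channel name is my own invention:

```
DEFINE CHANNEL('DEV.ADMIN.SVRCONN') CHLTYPE(SVRCONN)
ALTER CHANNEL('DEV.ADMIN.SVRCONN') CHLTYPE(SVRCONN) MCAUSER('MUSR_MQADMIN')
```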

Sunday, December 18, 2016

Finding the Eclipse test client URL

Generate a bottom-up web service and deploy to Eclipse Tomcat, and the internal browser automatically pops up:


Now, what if you close that browser? Well you're in a predicament, you either have to:
- regenerate the test client project
- guess the URL of the sample test project

This caused me much grief, so for reference, here's the URL:
http://localhost:<port>/<project>Client/sampleProxy/TestClient.jsp
Where <port> is either the Tomcat port or the monitor port, and <project> is your web project's name.