Wednesday, April 12, 2017

Lessons learned: Docker microservices architecture with Spring Boot

Introduction

During my last project, consisting of a Docker microservices architecture built with Spring Boot and using RabbitMQ as the communication channel, I learned a bunch of lessons. Here's a summary of them.

Architecture

Below is a high level overview of the architecture that was used.


Docker

  • Run one process/service/application per Docker container (or put stuff in init.d, but that's not the intended use of Docker)

  • Starting background processes in the CMD causes the container to exit. So either have a script waiting at the end (e.g. tail -f /dev/null) or keep the process (i.e. the one specified in CMD) running in the foreground. You can find other useful Dockerfile tips here

  • As far as I can tell, Docker checks whether the Dockerfile has changed and, if so, creates a new image (diffs only?)

  • Basic example to start RabbitMq docker image, as used in the build tool:

    $ docker pull 172.18.19.20/project/rabbitmq:latest
    $ docker rm -f build-server-rabbitmq
    $ # Map the RabbitMQ regular and console ports
    $ docker run -d -p 5672:5672 -p 15672:15672 --name build-server-rabbitmq 172.18.19.20/project/rabbitmq:latest

  • If there's no docker0 interface (check by running ifconfig), then there are probably ^M characters in the config file at /etc/default/docker/docker.config. To fix it, run dos2unix on that file.

  • Check for errors during Docker startup in /var/log/upstart/docker.log

  • If your docker push <image> asks for a login (and you don't expect that), or it returns some weird HTML like "</html>", then you're probably missing the registry host in front of the image name, e.g.: 172.18.19.20:6000/projectname/some-service:latest

  • Stuff like /var/log/messages is not visible inside a Docker container, but it is on its host! So look there, for example, to find out why a process is not starting or gets killed at startup without any useful logging (like we had with clamd)

  • How to remove old dangling unused docker images: docker rmi $(docker images --filter "dangling=true" -q --no-trunc)

Spring Boot

  • Some Jackson 2 dependencies were missing from the generated Spring Initializr project, which we noticed when creating unit tests. These dependencies were additionally needed:

    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>2.5.0</version>
    </dependency>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-annotations</artifactId>
      <version>2.5.0</version>
    </dependency>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-core</artifactId>
      <version>2.5.0</version>
    </dependency>


    Not sure anymore why these didn't get <scope>test</scope>... I guess they were also needed in some regular code... :)

  • In the Spring Boot AMQP Quick Start, the last param during binding is named .with(queueName), but it is actually the topic (routing) key, which is matched against the routing key used when sending; it is not the queue name. See the sketch below.
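
    For illustration, a minimal binding sketch (bean names and the topic key are hypothetical):

    import org.springframework.amqp.core.Binding;
    import org.springframework.amqp.core.BindingBuilder;
    import org.springframework.amqp.core.Queue;
    import org.springframework.amqp.core.TopicExchange;
    import org.springframework.context.annotation.Bean;

    @Bean
    Binding binding(Queue paymentQueue, TopicExchange paymentExchange) {
        // The argument to .with(...) is the routing (topic) key that is matched
        // against the routing key used when sending, NOT the queue name.
        return BindingBuilder.bind(paymentQueue).to(paymentExchange).with("payment.#");
    }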

  • Spring Boot Actuator's /health will check all related dependencies! So if you have a dependency in your pom.xml on a project which uses spring-boot-starter-amqp, /health will now check for an AMQP broker being up! So disable the corresponding health indicator if you don't want that; see below.
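
    A health indicator can be switched off in application.properties; for RabbitMQ (assuming Spring Boot's standard management property) that would be:

    management.health.rabbit.enabled=false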

  • Spring Boot's default ApplicationIT probably needs a @DirtiesContext on your tests; otherwise the tests might re-use or create more beans than you think (we saw that in our message receiver test helper class). A sketch follows below.
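
    A sketch of what that looks like (class names hypothetical, Spring Boot 1.x style):

    import org.junit.runner.RunWith;
    import org.springframework.boot.test.SpringApplicationConfiguration;
    import org.springframework.test.annotation.DirtiesContext;
    import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

    @RunWith(SpringJUnit4ClassRunner.class)
    @SpringApplicationConfiguration(classes = Application.class)
    @DirtiesContext // rebuild the application context after this test class
    public class MessageReceiverIT {
        // tests that would otherwise leak beans into the next test class
    }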

  • @Transactional in Spring rolls back by default only for unchecked exceptions!! It's documented, but still a thing to watch out for.

  • And of course: Spring's @Transactional does not work on private methods (due to the proxies it creates); see the sketch below.
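
    A combined sketch of both pitfalls (service and method names are hypothetical):

    import org.springframework.stereotype.Service;
    import org.springframework.transaction.annotation.Transactional;

    @Service
    public class PaymentService {

        // Default: rolls back on RuntimeException/Error only; a checked
        // exception would still commit. rollbackFor changes that.
        @Transactional(rollbackFor = Exception.class)
        public void storePayment(String paymentId) throws Exception {
            // ... db work
        }

        // Silently NOT transactional: the Spring proxy cannot intercept
        // private methods (nor internal this.* calls).
        @Transactional
        private void internalHelper() {
        }
    }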

  • To see in Spring Boot the transaction logging, put this in application.properties:

    logging.level.org.springframework.jdbc=TRACE
    logging.level.org.springframework.transaction=TRACE


    Note that by default @Transactional just rolls back; it does not log anything. So if you don't log your runtime exceptions, you won't see much in your logs.

  • MockMvc from Spring does not really invoke from the "outside": our Spring Security context filter (for which you can use @Secured(role)) was allowing calls even though no authentication was provided. RestTemplate does seem to work from "the outside".

  • Scan order can mess up a @ControllerAdvice error handler, it seems. Sometimes we had to change the order:

    Setup:
    - Controller is in: com.company.request.web.
    - General error controller is in com.company.common package.

    Had to change
    @ComponentScan(value = {"com.company.security", "com.company.common", "com.company.cassandra", "com.company.module", "com.company.request"})

    to

    @ComponentScan(value = {"com.company.security", "com.company.cassandra", "com.company.module", "com.company.request", "com.company.common"})

    Note that the general error controller now comes last

  • Spring Boot's footprint seems relatively big, especially for microservices: at least 500MB or so is needed, so we have quite big machines for about 20 services. Maybe plain Spring (instead of Spring Boot) might be more lightweight...

Bamboo build server

  • When Bamboo gets slow while the CPU seems quite busy and memory availability on its server seems fine, increase the JVM heap settings (-Xms/-Xmx or related). We found this out because the Bamboo Java process was sometimes running out of heap; increasing the heap also fixed the performance.

  • To have Bamboo builds fail when quality gates are not met in SonarQube, install the Build Breaker plugin in Sonar; see the plugin docs and the Update Center. This FAQ says so.

Stash

  • The Stash (now called Bitbucket) API: in /rest/git/1.0/projects/{projectKey}/repos/{repositorySlug}/tags, a 'slug' is just the repository name.

Microservices with event based architecture

  • When you do microservices, IMMEDIATELY take into account during coding + reviews that multiple instances can access the database concurrently.

    This affects your queries. The most likely correct implementation for a uniqueness check on inserts (see the sketch below):
    1- add a unique constraint
    2- run the insert
    3- catch the uniqueness exception --> you know it already exists. A check-then-insert solution with SELECT NOT EXISTS is not guaranteed to be unique under concurrency.
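
    A minimal sketch of that pattern with Spring's JdbcTemplate (table and column names are hypothetical); the unique constraint does the real work, the catch just translates the race into "already exists":

    import org.springframework.dao.DuplicateKeyException;
    import org.springframework.jdbc.core.JdbcTemplate;

    public boolean insertIfAbsent(JdbcTemplate jdbc, String accountId) {
        try {
            // Relies on a UNIQUE constraint on accounts.account_id
            jdbc.update("INSERT INTO accounts (account_id) VALUES (?)", accountId);
            return true;
        } catch (DuplicateKeyException e) {
            // Another instance inserted it concurrently: it already exists
            return false;
        }
    }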

  • Also take deletion of data (e.g. a user deleting himself) into account from the start, especially when using events and/or eventual consistency in combination with an account balance or similar. Because what if one service in the whole chain of things to execute for a delete fails? Does the user still have some money left in his/her account then? In short: take care of all of CRUD.

  • Multiple services sending the same event? That can indicate two services are doing the same thing, which is probably not good.

  • Microservices advantages:

    - Forces you to think better about where to put stuff, compared to a monolith where you can more often be tempted to "just do a quick fix".
    - Language independence for the service implementation: choose the best language for the job.

    Disadvantages:
    - more time needed for design
    - eventual consistency is quite tough to understand & work with, also conceptually
    - the infrastructure is more complex, including all the communication between services

    More cons can be found here.

Tomcat

  • Limiting the maximum size of what can be posted to a servlet is not as easy as it seems for REST services:

    - maxPostSize in Tomcat is enforced only for a specific content type: Tomcat only enforces that limit if the content type is application/x-www-form-urlencoded

    - And the three XML options below apply to multipart requests only:

    <multipart-config>
      <!-- 50MB max -->
      <max-file-size>52428800</max-file-size>
      <max-request-size>52428800</max-request-size>
      <file-size-threshold>0</file-size-threshold>
    </multipart-config>


    So that one won't work for uploading just a byte[]. The only solution is to check the limit you want to allow in the servlet itself (e.g. a Spring @Controller); a sketch follows below.
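
    A minimal sketch of such a check (limit, mapping and messages are hypothetical; note the body has already been read into memory at this point, so this enforces the limit but does not protect memory):

    import org.springframework.http.HttpStatus;
    import org.springframework.http.ResponseEntity;
    import org.springframework.web.bind.annotation.RequestBody;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RequestMethod;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    public class UploadController {

        private static final int MAX_UPLOAD_BYTES = 52428800; // 50MB

        @RequestMapping(value = "/upload", method = RequestMethod.POST)
        public ResponseEntity<String> upload(@RequestBody byte[] body) {
            if (body.length > MAX_UPLOAD_BYTES) {
                // 413 Request Entity Too Large
                return ResponseEntity.status(HttpStatus.PAYLOAD_TOO_LARGE).body("Too large");
            }
            // ... process the upload
            return ResponseEntity.ok("OK");
        }
    }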

  • maxThreads (the number of worker threads) seems to be set quite high by default; 50 performed better for us.

Security

  • To securely generate a random number: SecureRandom randomGenerator = SecureRandom.getInstance("NativePRNG");
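
    For example, to generate a random URL-safe token (a sketch; note NativePRNG is Unix-only, and getInstance throws a checked exception):

    import java.security.NoSuchAlgorithmException;
    import java.security.SecureRandom;
    import java.util.Base64;

    public static String randomToken() throws NoSuchAlgorithmException {
        SecureRandom randomGenerator = SecureRandom.getInstance("NativePRNG");
        byte[] bytes = new byte[32]; // 256 bits of randomness
        randomGenerator.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }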

  • A good explanation of the secure use of a salt for hashing can be found here

Cassandra

  • Unique constraints are not possible in Cassandra, so there you will even have to implement uniqueness checks in your business logic (and make them eventually consistent)

  • CassandraOperations query for one field:

    Select select = QueryBuilder.select(MultiplePaymentRequestRequesterEntityKey.ID).from(MultiplePaymentRequestRequesterEntity.TABLE_NAME);
    select.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
    select.where(QueryBuilder.eq(MultiplePaymentRequestRequesterEntityKey.REQUESTER, requester));
    return cassandraTemplate.queryForList(select, UUID.class);


    See also here.

  • Note that the two keys below don't seem to get picked up by Cassandra in Spring Data Cassandra version 1.1.4.RELEASE:

    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-cassandra</artifactId>

    @PrimaryKeyColumn(name = OTHER_USER_ID, ordinal = 0, type = PrimaryKeyType.PARTITIONED)
    @CassandraType(type = DataType.Name.UUID)
    private UUID meUserId;

    @PrimaryKeyColumn(name = ME_USER_ID, ordinal = 1, type = PrimaryKeyType.CLUSTERED)
    @CassandraType(type = DataType.Name.UUID)
    private UUID meId;

    This *does* get picked up: put it into a separate class:

    @Data
    @AllArgsConstructor
    @PrimaryKeyClass
    public class HistoryKey implements Serializable {

      @PrimaryKeyColumn(name = HistoryEntity.ME_USER_ID, ordinal = 0, type = PrimaryKeyType.PARTITIONED)
      @CassandraType(type = DataType.Name.UUID)
      private UUID meUserId;

      @PrimaryKeyColumn(name = HistoryEntity.OTHER_USER_ID, ordinal = 1, type = PrimaryKeyType.PARTITIONED)
      @CassandraType(type = DataType.Name.UUID)
      private UUID otherUserId;

      @PrimaryKeyColumn(name = HistoryEntity.CREATED, ordinal = 2, type = PrimaryKeyType.CLUSTERED, ordering = Ordering.DESCENDING)
      private Date created;

    }

  • Don't use Cassandra for all types of use cases. An RDBMS still has its value, e.g. for ACID requirements; Cassandra is eventually consistent.


Miscellaneous

  • Use dig for DNS resolving problems

  • Use pgAdmin III for PostgreSQL GUI

  • To stop SonarQube complaining about unused private fields when using Lombok's @Data annotation: add @SuppressWarnings("PMD.UnusedPrivateField") to each of those classes

  • We managed to need neither transactions nor XA transactions for message publishing, message reading, db storing, and message sending, by using the confirm + ack mechanism,
    and by allowing a message to be read again; the DB then sees: oh, already stored (or does an upsert).
    So, when processing a message from the queue (a sketch follows below):
    1- store in db
    2- send message on queue
    3- only then ack back to the queue that the read was successful
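
    A rough sketch of the receive side with Spring AMQP's manual acks (Spring AMQP 1.x package names; the store/publish helpers are hypothetical):

    import com.rabbitmq.client.Channel;
    import org.springframework.amqp.core.Message;
    import org.springframework.amqp.rabbit.core.ChannelAwareMessageListener;

    public class PaymentEventListener implements ChannelAwareMessageListener {

        @Override
        public void onMessage(Message message, Channel channel) throws Exception {
            storeInDatabase(message);  // 1- idempotent store (upsert / "already stored")
            publishNextEvent(message); // 2- send the follow-up message (publisher confirms on)
            // 3- only now tell the broker the read succeeded; if we crash before
            // this line, the message is redelivered and processed again
            channel.basicAck(message.getMessageProperties().getDeliveryTag(), false);
        }

        private void storeInDatabase(Message message) { /* upsert keyed by a message id */ }

        private void publishNextEvent(Message message) { /* e.g. rabbitTemplate.send(...) */ }
    }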

  • Performance: instantiate the Jackson2 ObjectMapper once as a static (it is thread-safe once configured), not on each call, so:
    private static final ObjectMapper mapper = new ObjectMapper();

  • JavaScript: when an exception occurs in a callback and it is not handled, processing just ends. Promises have better error handling.

  • clamd would not start correctly: it would try to start but then show 'Killed' when started via the command line. It turned out it ran out of memory when starting up. Though we had enough RAM (16G total, 3G free), clamd apparently needs swap configured!

  • Linux bash shell script to loop through projects for tagging, where project names can contain spaces:

    PROJECTS="
      project1
      project space2
    ";
    IFS=$'\n'
    for PROJECT in $PROJECTS
    do
      TRIM_LEADING_SPACE_PROJECT="$(echo -e "${PROJECT}" | sed -e 's/^[[:space:]]*//')"
      echo "Cloning '$TRIM_LEADING_SPACE_PROJECT'"
      git clone --depth=1 "http://$USER:$GITPASSWD@github.com/projects/$TRIM_LEADING_SPACE_PROJECT.git"
    done

  • OpenVPN on Windows 10: sometimes it hangs on "Connecting..." and doesn't show the popup to enter username/password. Go to View logs; when you see "Enter management password" in the logs, you have to kill the OpenVPN Daemon under the Processes tab in the Windows task manager. The service is stopped when exiting the app, but that's not enough!

  • JavaScript/Node.js: log every call that comes in:

    app.use(function (req, res, next) {
      console.log('Incoming request = ' + new Date(), req.method, req.url);
      // logger.debug('log at debug level'); // assuming you have some logger configured
      next();
    });

  • If your mouse suddenly stops working in your VirtualBox guest, kill the process in your guest machine mentioned in comment 5 here. After that the mouse works again in your VirtualBox guest.

  • Pin Firefox to version 45 for Selenium driver tests:

    sudo apt-get install -y firefox=45.0.2+build1-0ubuntu1
    sudo apt-mark hold firefox

  • Setting the cookie attribute Secure (indicating the cookie should only be sent over httpS) can be observed when using curl to request the URL(s) that should send that cookie plus the new attribute, even when using plain HTTP. See also my previous post.

    But when using a browser over HTTP, you probably won't see the secure cookie appear in the cookie store. This is (probably) because the browser knows not to store it in that case, since plain HTTP is being used.

  • Idempotency within services is key for resilience and for being able to resend an event or perform an API call again; a minimal sketch follows below.
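
    A minimal idempotent-consumer sketch (in-memory for brevity; in production the "seen" set would be a unique-constrained table, as described above):

    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    public class IdempotentHandler {

        private final Set<String> processedEventIds = ConcurrentHashMap.newKeySet();

        public void handle(String eventId, Runnable action) {
            // add() returns false when the id was already seen: a resend or redelivery
            if (processedEventIds.add(eventId)) {
                action.run();
            }
        }
    }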

Wednesday, January 18, 2017

OpenVPN how to route all IPV4 traffic through the OpenVPN tunnel

Introduction

Originally I was connected from a Windows 10 machine via OpenVPN to a network (segment?) for "our" project, and I could access all servers and websites related to it. But when switching to another project (using the same OpenVPN settings), I could only access the new project's servers when on the premises of that project. At home or from any other place, I could not get to the servers, e.g. Jenkins. The error shown in Chrome was "This site can't be reached". See the screenshot below for the exact error:



But I could get to the microservices pods directly by IP address, e.g. 172.18.33.xyz (the xyz parts are not the same across the example IP addresses below; they are just obfuscated). So quite strange.

The administrator of the OpenVPN server didn't know how to fix the problem either. It was suggested to make sure "to route all IPv4 traffic through the VPN". That made me search the web, and I found the solution below to work, without having to change any server settings. (I did not even have access to those server settings.)

Analyzing the problem

A) Trying the website with the hostname:
C:\Users\moi>tracert website.eu
Tracing route to website.eu [183.45.163.xyz] over a maximum of 30 hops:
  1     1 ms     1 ms     1 ms  MODEM [192.169.178.x]
  2    20 ms    19 ms    20 ms  d13.xs4all.com [195.109.5.xyz]
  3    22 ms    22 ms    22 ms  3d13.xs4all.com [195.109.7.xyz]
...


B) Trying the well-known google gateway:
C:\Users\moi>tracert 8.8.8.8
Tracing route to google-public-dns-a.google.com [8.8.8.8] over a maximum of 30 hops:
  1     1 ms     1 ms     1 ms  MODEM [192.169.178.x]
  2    21 ms    20 ms    21 ms  d12.xs4all.com [195.109.5.xyz]
...

Hmm, so the route goes via the same initial gateway for both the external IP and the hostname, so not via the VPN.

C) Trying the IP that works (note: this is not the IP of the hostname from above!):
C:\Users\moi>tracert 172.18.33.xyz
Tracing route to 172.18.33.xyz over a maximum of 30 hops
  1    97 ms    21 ms    20 ms  192.169.200.xyz
  2    45 ms    98 ms    29 ms  172.16.11.xyz
  3   130 ms    65 ms    68 ms  172.16.11.xyz
...

As you can see, the first entrypoint gateway is a different one, and most likely the wrong one.

The solution

The solution was to add this to the .ovpn OpenVPN configuration file:

route-method exe
route-delay 2
redirect-gateway def1

For me, only the last line (redirect-gateway def1) was sufficient, but for others the other two lines had to be added too.

D) After adding the setting, you can see the IP of the gateway changed to what turns out to be the correct one:
C:\Users\moi>tracert website.eu
Tracing route to website.eu [183.45.163.yyy] over a maximum of 30 hops:
  1   143 ms    31 ms    21 ms  192.169.200.xyz
  2    21 ms    20 ms    21 ms  static.services.de [88.20.160.xyz]
  3    21 ms    21 ms    25 ms  10.31.17.xyz
  4    25 ms    21 ms    91 ms  10.31.17.xyz
...

References used:
- http://superuser.com/questions/120069/routing-all-traffic-through-openvpn-tunnel
- http://askubuntu.com/questions/665394/how-to-route-all-traffic-through-openvpn-using-network-manager

Wednesday, December 14, 2016

Nodejs Express enabling HSTS and cookie secure attribute


To enable HSTS (adds Strict-Transport-Security header to the response) for Node + Express using Helmet, this should be sufficient to add to your app.js:

var hsts = require('hsts')

app.use(hsts({
  maxAge: 15552000  // 180 days in seconds
}))

But a few things didn't work for me. First of all, I had to add the force: true attribute. Also, I was using an older version of hsts (0.14.0 instead of the current latest release 2.0.0), in which maxAge was not in seconds but in milliseconds. This is documented in the README on GitHub, but I hadn't checked the version we were using...

To enable the 'secure' attribute for cookies (so they are only sent over HTTPS), I added the cookie-session module. But there is an issue with the secure flag; see the code below for more details.

Below is a full viewcounterhelmet.js application that shows the result of enabling hsts and secure cookies. Note also the comments in the code.

Start it with: node viewcounterhelmet.js

viewcounterhelmet.js:

var cookieSession = require('cookie-session')
var helmet = require('helmet');
var express = require('express')

var app = express()

app.use(helmet());
// Works, makes in response show Cache-Control, Pragma, Expires: app.use(helmet.noCache());
var oneYearInSeconds = 31536000;
app.use(helmet.hsts({
  maxAge: oneYearInSeconds,
  includeSubDomains: true,
  force: true
}));

var expiryDate = Date.now() + 60 * 60 * 1000; // NB: cookie-session's maxAge expects a duration in ms; this absolute timestamp explains the far-future expiry seen below
app.use(cookieSession({
  name: 'session',
  secret: '10dfaf09-cf6f-43a9-b40b-4eaacbcceb8a',
  maxAge: expiryDate,
  // Not setting secure attr does not show it Secure
  // secure : true, // Set to true and no cookies at all in response. Created ticket for this for v1.2.0
  // secure : false, // Set to false it indeed doesn't show the Secure value with the cookie; it does show the cookies
  secureProxy: true, // Since this is running behind a proxy. But deprecated when using 2.0.0-alpha! Says to use secure option but that stops passing on cookies. When set to true, the cookie is set to Secure. If commented out, cookie not set to Secure
  httpOnly: true  // Don't allow javascript to access the cookie
}))

app.get('/', function (req, res, next) {
  // Update something in the session, needed for a cookie to appear
  req.session.views = (req.session.views || 0) + 1

  // Write response
  res.end(req.session.views + ' views')
})

app.listen(3000)

So when issuing the following curl command, this is the result:

vagrant$ curl -c - -v http://localhost:3000/
* Hostname was NOT found in DNS cache
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 3000 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.37.1
> Host: localhost:3000
> Accept: */*
< HTTP/1.1 200 OK
< X-Content-Type-Options: nosniff
< X-Frame-Options: SAMEORIGIN
< X-Download-Options: noopen
< X-XSS-Protection: 1; mode=block
< Surrogate-Control: no-store
< Cache-Control: no-store, no-cache, must-revalidate, proxy-revalidate
< Pragma: no-cache
< Expires: 0
< Strict-Transport-Security: max-age=31536; includeSubDomains
* Added cookie session="eyJ2aWV3cyI6MX0=" for domain localhost, path /, expire 2961374488
< Set-Cookie: session=eyJ2aWV3cyI6MX0=; path=/; expires=Sun, 04 Nov 2063 04:01:28 GMT; secure; httponly
* Added cookie session.sig="DJaPtrG-tmTnVr33fOWXqWGnVlw" for domain localhost, path /, expire 2961374488
< Set-Cookie: session.sig=DJaPtrG-tmTnVr33fOWXqWGnVlw; path=/; expires=Sun, 04 Nov 2063 04:01:28 GMT; secure; httponly
< Date: Fri, 02 Dec 2016 13:30:45 GMT
< Connection: keep-alive
< Content-Length: 7
<
* Connection #0 to host localhost left intact
1 views# Netscape HTTP Cookie File
# http://curl.haxx.se/docs/http-cookies.html
# This file was generated by libcurl! Edit at your own risk.

#HttpOnly_localhost     FALSE   /       TRUE    2961374488      session eyJ2aWV3cyI6MX0=
#HttpOnly_localhost     FALSE   /       TRUE    2961374488      session.sig     DJaPtrG-tmTnVr33fOWXqWGnVlw

Notice the Strict-Transport-Security header and the set cookies with the 'secure' attribute set.

But when setting secure: true (also with the currently latest version 1.2.0), this is the result:

vagrant$ curl -c - -v http://localhost:3000/
* Hostname was NOT found in DNS cache
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 3000 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.37.1
> Host: localhost:3000
> Accept: */*
< HTTP/1.1 200 OK
< X-Content-Type-Options: nosniff
< X-Frame-Options: SAMEORIGIN
< X-Download-Options: noopen
< X-XSS-Protection: 1; mode=block
< Surrogate-Control: no-store
< Cache-Control: no-store, no-cache, must-revalidate, proxy-revalidate
< Pragma: no-cache
< Expires: 0
< Strict-Transport-Security: max-age=31536; includeSubDomains
< Date: Tue, 13 Dec 2016 11:27:37 GMT
< Connection: keep-alive
< Content-Length: 7
* Connection #0 to host localhost left intact

So no cookies anymore, while they should be there! I reported the issue here.

Versions used:
  • node js: 6.7.0
  • express: 4.13.30
  • cookie-session: 1.2.0
  • helmet: 0.14.0




Saturday, December 3, 2016

Kubernetes Software Components Architecture Diagram

For a recent innovative project we were able to work with Kubernetes 1.3 as the tool for automating deployment, scaling, and management of containerized applications; in our case, (micro)services.


A lot of documentation is available, but I was missing an as-complete-as-possible architecture picture of all components within k8s. Sometimes the link between certain components was not shown, or a related but very relevant component was missing (e.g. Flannel). Below is the result: a full component architecture diagram of Kubernetes. It was hard to fit everything in, but I think I have everything for the level of detail I wanted to capture.

The following components can be found in the diagram: kubectl, etcd, master node, worker node, controller manager, scheduler, etcd-watch, API server, Docker, kubeproxy, deployments, container, pod, service, kubelet, label selectors, config maps, secrets, daemon sets, jobs, flannel (for subnet), client app accessing the services.

Let me know in the comments if I'm missing anything! 😊


Inspiration for this diagram came from (amongst other things):

Wednesday, February 10, 2016

Tomcat 7 request processing threading architecture and performance tuning

Tomcat has many parameters for performance tuning (see http://tomcat.apache.org/tomcat-7.0-doc/config/http.html), but for some attribute descriptions it is not 100% clear how the properties affect each other.

A pretty good explanation regarding acceptCount and maxThreads can be found here: http://techblog.netflix.com/2015/07/tuning-tomcat-for-high-throughput-fail.html 
But that article is missing a ... picture! That's what I tried to create here, with the numbers 1-5 indicating the steps described in the article above:



Just in case the Netflix URL ever gets lost, and for easy reference, here are the 5 steps:




The Tomcat attribute maxThreads worked best in my case with value 50, as it won't saturate the machine/CPU (too many worker threads cause many context switches).


To set maxThreads when using Spring Boot + embedded Tomcat: http://stackoverflow.com/questions/31432514/how-to-modify-tomcat8-acceptcount-in-spring-boot
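
For example (assuming Spring Boot 1.x property names), in application.properties:

server.tomcat.max-threads=50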






Wednesday, December 9, 2015

com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /10.255.235.17 (Timeout during read)

In a recent project, seemingly randomly, this exception occurred when executing a CQL 'select' statement from a Spring Boot project against Cassandra:

com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /10.255.235.17 (Timeout during read), /10.255.235.16 (Timeout during read))
...


After a lot of research, it turned out some people had reported the same issue, but there was no clear answer anywhere, except that some Cassandra driver versions might be the cause: they mark (all) the node(s) as down and don't recognize it when they become available again.

But the strange thing is that we had over 10 (micro)services running, all with at least 2 instances, and only one of these services had this timeout problem. So it almost couldn't be the driver... Though it did seem to be related to not using the connection for a while, because often our end-to-end tests just ran fine, time after time, but after a few hours the tests would fail. We just didn't see the pattern yet...

But, as a test, we decided to let nobody use the environment the end-to-end tests run against for a few hours, especially because some of the articles below mention setting the heartbeat (keep-alive) of the driver as a solution.

And indeed, the end-to-end tests started failing again after that grace period. Then we realized it: all our services have a Spring Boot health check implemented, which is called every X seconds. EXCEPT the service that has the timeouts; it had only recently been connected to Cassandra!

After fixing that, the error disappeared! Of course, depending on the health check to keep a connection alive is not the ideal solution. A better solution is probably setting the heartbeat interval on the driver at Cluster creation (the snippet below is in the C# driver's style; a Java driver sketch follows):

var poolingOptions = new PoolingOptions()
  .SetCoreConnectionsPerHost(1)
  .SetHeartBeatInterval(10000);
var cluster = Cluster
  .Builder()
  .AddContactPoints(hosts)
  .WithPoolingOptions(poolingOptions)
  .Build();
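
For the DataStax Java driver (which a Spring Boot project would typically use), a roughly equivalent sketch, assuming driver version 2.1+:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.HostDistance;
import com.datastax.driver.core.PoolingOptions;

PoolingOptions poolingOptions = new PoolingOptions()
    .setCoreConnectionsPerHost(HostDistance.LOCAL, 1)
    .setHeartbeatIntervalSeconds(10); // keep-alive so idle connections don't look dead

Cluster cluster = Cluster.builder()
    .addContactPoints("10.255.235.16", "10.255.235.17")
    .withPoolingOptions(poolingOptions)
    .build();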


In the end it turned out to be the firewall, which resets all TCP connections every two hours!

References

Tips to analyse the problem:

Similar error reports



Wednesday, December 2, 2015

iOS9 / iOS 9 / iOS 9.1 / ATS 9: An SSL error has occurred and a secure connection to the server cannot be made due to old sha1 signed certificate

In iOS 9 and higher apps, a higher level of ciphers and certificate signatures is required, including Forward Secrecy support.

Before iOS 9, it was possible to let a site within a webview forward/redirect to another SSL-protected site.
For example, it was possible in iOS 8 to let another site redirect to this one: https://www.securesuite.co.uk/

But since iOS 9 this is not allowed anymore, and you'll get an error like:

An SSL error has occurred and a secure connection to the server cannot be made." UserInfo={NSUnderlyingError=0x7f9855dcb520 {Error Domain=kCFErrorDomainCFNetwork Code=-1200 "An SSL error has occurred and a secure connection to the server cannot be made." UserInfo={_kCFStreamErrorDomainKey=3, NSLocalizedRecoverySuggestion=Would you like to connect to the server anyway?, _kCFNetworkCFStreamSSLErrorOriginalValue=-9802, _kCFStreamPropertySSLClientCertificateState=0, NSLocalizedDescription=An SSL error has occurred and a secure connection to the server cannot be made., NSErrorFailingURLKey=https://www.securesuite.co.uk


You'll just see a blank/white screen in the webview; no errors whatsoever on the screen.

The by-default supported list in iOS 9 and higher can be found here: https://developer.apple.com/library/prerelease/ios/documentation/General/Reference/InfoPlistKeyReference/Articles/CocoaKeys.html#//apple_ref/doc/uid/TP40009251-SW35

You can easily test whether a webpage/site is accepted at this site: https://www.ssllabs.com/ssltest/analyze.html

So when it is run for the above-mentioned securesuite site, it tells us (even in red) that the signature is still using the old SHA-1:


The site also checks different clients, like browsers and mobile operating systems such as Android and iOS. See the error in the Handshake Simulation section for ATS 9 / iOS 9: "Client requires SHA2 certificate signatures"

Running it against https://www.mastercard.com, which has SHA-2 as its signature algorithm, the forwarding does work in iOS 8 and iOS 9+.
And the iOS 9 client also likes it:

To still be able to have iOS 9 and higher apps work with those less-secure sites which still use SHA-1, you can specify which domains are "ok-ish", i.e. whitelist per domain.
In the sections of https://developer.apple.com/library/prerelease/ios/documentation/General/Reference/InfoPlistKeyReference/Articles/CocoaKeys.html#//apple_ref/doc/uid/TP40009251-SW35 you can read how to whitelist: "Allowing Insecure Connection to a Single Server", "Allowing Lowered Security to a Single Server" and "Using ATS For Your Servers and Allowing Insecure Connections Elsewhere".


Sunday, September 27, 2015

JBoss EAP 6.2.0 + Camel + CXF + JSF 2.2.8 Project Summary

Introduction

This blog post is a short summary of a project I did a while ago, in 2014.


Tools used

  1. Oracle's jdk1.8.0_05 with target 7
  2. JBoss EAP 6.2.0
  3. TestNG 6.8 (instead of JUnit)
  4. Camel 2.13
  5. CXF (for calls to external webservices)
  6. Less (a CSS preprocessor, somewhat similar to Sass)
  7. Spring 4.0
  8. Lombok
  9. Mockito 1.0
  10. Hamcrest 1.3
  11. DBUnit 2.5
  12. Embedded Tomcat 7 for integration tests with Maven 3
  13. Sonar, Confluence, Sourcetree (for Git over SVN)
  14. Spring-rabbit 1.2 for accessing RabbitMQ
  15. JSF 2.2.8
  16. Hibernate 5.1.1
  17. Hsqldb 2.3

Lessons learned

  1. My first use of the Maven dependency plugin, which warns about used-but-undeclared and declared-but-unused dependencies:

    mvn dependency:analyze-only

    Very handy. You *can't* fully trust analyze-only though! In the end you still need to perform a full build to make sure everything still works, because the plugin only analyzes compile-level dependencies.

    It is possible that some "hidden" non-explicit dependency on an @Entity class creates a FK, which causes the project (with the missing explicit dependency) and hbm.ddl.auto=create to not be able to drop a table, because it doesn't know about that FK (and thus can't drop it, and thus not the table).

    Luckily in the log (perhaps you need to set the Hibernate log level to log all queries/statements) you can see what exactly is being dropped, and thus you can spot the missing 'drop constraint xyz' line, where xyz is the constraint the current project doesn't know about. Add it as a dependency to the project's pom.xml and the drop statement should appear. Note that mvn dependency:analyze-only will complain about it as an 'unused declared dependency'. Probably this FK dependency is an incorrect project setup anyway.


  2. Sonar's //NOSONAR does not seem to ignore code-coverage violations... only issues (e.g. the Blocker, Critical etc. rules).

  3. Camel 2.13: example expression to only have a file picked up if a file with the same name but extension .ready already exists:

    // Reference: http://camel.apache.org/file-language.html
    private static final String FILE_READY_TO_BE_PICKED_UP_EXPRESSION = "doneFileName=$simple{file:name.noext}.ready";


  4. Parsing in jodatime:

    private final DateTimeParser[] parsers = {
        DateTimeFormat.forPattern(SOME_DATE_FORMAT).getParser()
    };

    DateTimeFormatter formatter = new DateTimeFormatterBuilder().append(null, parsers).toFormatter().withZone(DateTimeZone.UTC);

    And the other direction:

    String nowText = DateTime.now().toString(SOME_DATE_FORMAT);

  5. Suppose you have two Maven profiles, A and B. When running mvn clean install -PA,B, it appeared only the <exclude> of the last profile was being used (as seen when adding the -X parameter to the mvn command).

    The problem was that both profiles had the same <id> in their <execution> part!

    Solution: prefix the execution ids to make them unique. E.g.:

    <id>A-tests</id>
    and
    <id>B-tests</id>

  6. TestNG (and JUnit vx.y, in a slightly different way but with the same mechanism) supports groups to divide tests into, for example, unit vs integration tests.
    But what if you have a combination with previous unit tests which don't have that annotation yet, and you don't want to have to update all test classes?
    Because if you have <groups> specified in your root pom.xml, the tests without the groups attribute in the @Test annotation won't get picked up by the maven-surefire plugin.

    For that you need this specified with your maven-surefire-plugin configuration:

    <plugin>
    ...
    <configuration>
        <!-- Need to specify an empty variable. Leaving the tags empty does not make it overrule the <groups> defined in the inmotiv-root/pom.xml -->
        <groups>${emptyProperty}</groups>
    </configuration>
    ...
    </plugin>

    Note the ${emptyProperty} property. It has *no* value set!

    If you were to remove the ${emptyProperty}, so you have <groups></groups>, or replace the line with <groups/>, you'll see the test class won't get run! My guess is that Maven just removes the empty tag, but won't if you put in a variable...

  7. When renaming files that are already under Git control by only changing the casing (uppercase or lowercase), use

    git mv -f FILE file

    otherwise Git won't see the rename.




Saturday, January 31, 2015

Recreate Vagrant box when original .box is gone from vagrantcloud

Introduction

Suppose you've set up a cool Vagrant image based upon a box from the Vagrant Hashicorp cloud, but suddenly one day the URL is gone! So the original box is gone. E.g. this one, which now gives a 404 page: https://atlas.hashicorp.com/box-cutter/boxes/ubuntu1404-desktop
Your 'vagrant up' still works; Vagrant will just complain it can't find the box anymore, but the VirtualBox image will start up fine.

But you are still getting new team members on board, and you want them to use the Vagrant setup you created; yet the URL is gone, so the base box can't be downloaded anymore. What to do?
Well, it turns out you can recreate the .box file which the above URL was pointing to.
This post describes how.

Steps

  1. Locate where the Vagrant box from the URL is stored on your local machine. On Windows 7 it is C:/Users/ttlnews/.vagrant.d/boxes
    This is where all base Vagrant boxes are downloaded to, so without any changes/provisioning made to them. The exact directory for the above ubuntu1404-desktop URL is: C:/Users/ttlnews/.vagrant.d/boxes/box-cutter-VAGRANTSLASH-ubuntu1404-desktop

  2. Create a zip of the deepest directory in there. In my case I 7-zipped all the files in C:/Users/ttlnews/.vagrant.d/boxes/box-cutter-VAGRANTSLASH-ubuntu1404-desktop/1.0.11/virtualbox. So not the directory, just the files, like box.ovf, metadata.json, *.vmdk and Vagrantfile. The resulting file was: box-cutter-VAGRANTSLASH-ubuntu1404-desktop.box.7z

    This is what it should look like in the .7z file:


  3. Rename the file to box-cutter-VAGRANTSLASH-ubuntu1404-desktop.box.

  4. Put it in c:/tmp or wherever you want; make sure to change accordingly in the below steps.

  5. In the directory where you have your (already existing, because you created it successfully before from the now-defunct box URL) Vagrantfile, add the following line below config.vm.box:

    config.vm.box_url = "file:///c:/tmp/box-cutter-VAGRANTSLASH-ubuntu1404-desktop.box"

    Tip 1: when testing this out, it's safer to copy your Vagrantfile + whatever else you had to a new directory, so you won't mess up your working Vagrant setup.

    Tip 2: in that new setup from tip 1, also change the value of config.vm.box in the Vagrantfile, so you can clearly identify this new image.

  6. Run 'vagrant up' and you should see it copy (extract) the files, start a VirtualBox VM and start provisioning the box!

Main source for this solution: https://docs.vagrantup.com/v2/boxes/format.html



Thursday, November 20, 2014

JasperReports/iReports dynamically show and hide fields conditionally

Introduction

With JasperReports and its visual designer iReports, you can have (blocks of) fields collapse based on certain conditions.
You could use a subreport for this, but that's not necessary. This post describes how to show/hide fields without subreports.

How

Basically you have to do two things:

  1. Put the elements (static text, textfields etc.) you want to hide on a frame. Make sure the fields are not behind the frame.
  2. Set the correct properties on the frame, including the expression to show/hide it dynamically. By hiding the frame you hide all the fields on it.
The most important thing is to make sure that no element besides those that need to be on the frame overlaps with it! Below is a screenshot to make this clearer.
There are three static texts and three textfields on the frame, plus two (separate!) vertical lines.


Note the green, red and black vertical lines in the screenshot: the red and black vertical lines are not on the frame at all; the frame has its own green vertical line. That way JasperReports can shrink the frame based on the condition. Otherwise it doesn't know what to do with the overlap and doesn't (can't?) collapse it. Notice also that the frame is selected (the little square blue boxes on its lines) to make clear from where to where the frame runs.
So basically you have to make sure the frame is all by itself: all its elements only on the frame, and no other elements overlapping the frame.

PS: it might be possible to do this per field, but since I needed at least a label + field to be shown/hidden, I didn't try that.

To put in the other necessary settings, you have two ways of doing that:

  1. Via the XML:

    Put the show/hide condition in the frame's printWhenExpression: if theObject is not null, the frame will be shown. Also make sure you add 'isRemoveLineWhenBlank="true"' to the frame's reportElement.

  2. Or via the Eclipse properties tab: see the screenshot below:


    Notice setting the flag and the expression; see the arrow pointing at $F, which is the start of the same expression as in the XML above.