
Wednesday, September 27, 2023

SpringDoc OpenAPI Swagger generated Swagger API shows incorrect class with same name

Introduction

When you have multiple classes with the same simple name on your classpath, SpringDoc with Swagger API annotations may pick the wrong one when generating the Swagger UI documentation.


Suppose you have these classes:

  • org.example.BookDto
  • org.example.domain.BookDto
     

And you specify your endpoint like this, intending it to use org.example.BookDto:

  @Operation(summary = "Get a list of books for a given shop")
  @ApiResponses(
    value = [
      ApiResponse(
        responseCode = "200",
        description = "A list of books",
        content = [Content(mediaType = "application/json",
                    array = ArraySchema(schema = Schema(implementation = BookDto::class)))]
      )
    ]
  )
  @GetMapping("/books/{shopId}")
  fun getBooksByShopId(
    @Parameter(description = "Shop to search for")
    @PathVariable shopId: Long
  ): List<BookDto> {
    return bookService.getBooksByShopId(shopId)
      .map { BooksMapper.mapDto(it) }
  }

Then whichever BookDto SpringDoc finds first on the classpath will be visible in https://localhost:8080/swagger-ui.html. That is not necessarily the class you meant: it might pick org.example.domain.BookDto.

Setup:

  • Spring Boot 3
  • Kotlin 1.8
  • Springdoc OpenAPI 2.2.0
     

Solution

Several solutions exist:

Solution 1

Specify in your application.yml:

springdoc:
  use-fqn: true

Disadvantage: the Swagger documentation on the swagger-ui.html endpoint then shows the fully qualified class name (package plus class name), which looks ugly.

Solution 2

Setting it in the @Bean configuration:

  import io.swagger.v3.core.jackson.TypeNameResolver

  @Bean
  fun openAPI(): OpenAPI? {
    TypeNameResolver.std.setUseFqn(true)
    return OpenAPI()
      .addServersItem(Server().url("/"))
      .info(
        Info().title("Books Microservice")
          .description("The Books Microservice")
          .version("v1")
      )
      .externalDocs(
        ExternalDocumentation()
          .description("Books Microservice documentation")
          .url("https://github.com/myproject/README.md")
      )
  }

Disadvantage: as with Solution 1, the Swagger documentation on the swagger-ui.html endpoint then shows the fully qualified class name, which looks ugly.

Solution 3

You can create your own ModelConverters, but that is much more work. Examples here:  https://github.com/swagger-api/swagger-core/wiki/Swagger-2.X---Extensions#extending-core-resolver and https://groups.google.com/g/swagger-swaggersocket/c/kKM546QXGY0

Solution 4

Make sure for each endpoint you specify the response class with full class package path:

  @Operation(summary = "Get a list of books for a given shop")
  @ApiResponses(
    value = [
      ApiResponse(
        responseCode = "200",
        description = "A list of books",
        content = [Content(mediaType = "application/json",
                    array = ArraySchema(schema = Schema(implementation = org.example.BookDto::class)))]
      )
    ]
  )
  @GetMapping("/books/{shopId}")
  fun getBooksByShopId(
    @Parameter(description = "Shop to search for")
    @PathVariable shopId: Long
  ): List<BookDto> {
    return bookService.getBooksByShopId(shopId)
      .map { BooksMapper.mapDto(it) }
  }

 The Schema implementation value, now fully qualified as org.example.BookDto::class, is what changed.


 

 

Wednesday, November 16, 2022

Spring JDBC and MySql using UUIDs in Java and VARCHAR(36) in database incorrect string value solution

Introduction

Using H2 as initial embedded database for a Spring Boot application worked fine. H2 is very forgiving in many situations and of course only tries to emulate the real target database, in this case MySql 8.0.
So after connecting my Spring Boot application to MySql, this error started to appear when inserting a row into a table with a java.util.UUID property as its 'id' field:

    java.sql.SQLException: Incorrect string value: '\xAC\xED\x00\x05sr...' for column 'id' at row 1

A quick internet search suggested that my character set and collation settings for the tables might be using 3 bytes instead of 4 for UTF-8 storage.
But the database, tables and columns all had utf8mb4 specified as CHARACTER SET and COLLATE, since I'm using MySql 8.0. So that 3 vs 4 bytes UTF-8 issue did not apply to me.
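That '\xAC\xED\x00\x05' value itself is the real clue: 0xACED plus 0x0005 is the fixed magic header and version of a Java serialization stream (the 'sr' that follows marks a serialized object), which strongly suggests the driver was writing the binary-serialized UUID object to the column instead of its 36-character string form. A small, stdlib-only demonstration (the class name UuidBytesDemo is just for illustration):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.UncheckedIOException;
import java.util.UUID;

public class UuidBytesDemo {

    // Serialize an object with standard Java serialization and return the
    // first four bytes of the resulting stream as hex.
    static String streamHeader(Object o) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(o);
            }
            byte[] b = bos.toByteArray();
            return String.format("%02X %02X %02X %02X",
                    b[0] & 0xFF, b[1] & 0xFF, b[2] & 0xFF, b[3] & 0xFF);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        UUID id = UUID.randomUUID();
        // Every Java serialization stream starts with the magic 0xACED and
        // version 0x0005: exactly the '\xAC\xED\x00\x05' from the MySQL error.
        System.out.println(streamHeader(id));       // AC ED 00 05
        // The string form is what belongs in a VARCHAR(36) column.
        System.out.println(id.toString().length()); // 36
    }
}
```

So the real fix is making sure a String, not the UUID object itself, reaches the JDBC driver.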

Solution

Then I found this blog https://petrepopescu.tech/2021/01/how-to-use-string-uuid-in-hibernate-with-mysql/ which explains that, at least when using Hibernate, it doesn't know how to convert UUIDs to strings (varchars), and you need to specify a Hibernate-provided converter. But that Hibernate annotation of course did not work for Spring Data JDBC.

Luckily there is a way to write your own converters for Spring JDBC datatypes.

Implementing such converters fixed the initial error message. Note that the example in the above post is missing the @Configuration annotation on the MyJdbcConfiguration class.
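As a sketch of what such a converter setup for Spring Data JDBC can look like: this is an illustration based on the approach in the linked post, not verbatim code. MyJdbcConfiguration and the nested converter names are hypothetical, and userConverters() assumes a recent Spring Data JDBC version where AbstractJdbcConfiguration provides that hook.

```java
// Sketch only: registers UUID <-> String converters with Spring Data JDBC.
import java.util.List;
import java.util.UUID;

import org.springframework.context.annotation.Configuration;
import org.springframework.core.convert.converter.Converter;
import org.springframework.data.convert.ReadingConverter;
import org.springframework.data.convert.WritingConverter;
import org.springframework.data.jdbc.repository.config.AbstractJdbcConfiguration;

@Configuration  // the annotation missing from the example in the linked post
public class MyJdbcConfiguration extends AbstractJdbcConfiguration {

    @Override
    protected List<?> userConverters() {
        return List.of(new UuidWritingConverter(), new UuidReadingConverter());
    }

    // UUID -> VARCHAR(36) when writing entities.
    @WritingConverter
    static class UuidWritingConverter implements Converter<UUID, String> {
        @Override
        public String convert(UUID source) {
            return source.toString();
        }
    }

    // VARCHAR(36) -> UUID when reading rows.
    @ReadingConverter
    static class UuidReadingConverter implements Converter<String, UUID> {
        @Override
        public UUID convert(String source) {
            return UUID.fromString(source);
        }
    }
}
```

Note that these converters only apply to entity mapping done by Spring Data JDBC itself; hand-written JdbcTemplate queries bypass them, which is exactly the problem described next.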

But then the error happened again during this custom JdbcTemplate select query:

    String query = "SELECT DISTINCT r.id, user_id FROM recipe r WHERE user_id = ?";
    List<Object> parameterValues = new ArrayList<>();
    parameterValues.add(userId);
    Object[] parameterValuesArray = parameterValues.toArray();
    jdbcTemplate.query(query, parameterValuesArray, new JdbcRecipeRowMapper());


This was the output of that query, including the parameters used in the query:

Executing prepared SQL statement [SELECT DISTINCT r.id, user_id, r.name, vegetarian, number_of_servings, instructions, r.created_at, r.updated_at FROM recipe r INNER JOIN ingredient i ON r.id = i.recipe_id WHERE user_id = ? AND vegetarian = ? AND number_of_servings = ? AND instructions LIKE ? AND  i.name IN (?) ]
2022-09-14 15:46:38.062 TRACE 16148 --- [nio-7000-exec-2] o.s.jdbc.core.StatementCreatorUtils      : Setting SQL statement parameter value: column index 1, parameter value [e26b2a0a-3d2c-442f-8fa1-f26336d5a9d3], value class [java.util.UUID], SQL type unknown
2022-09-14 15:46:38.068 TRACE 16148 --- [nio-7000-exec-2] o.s.jdbc.core.StatementCreatorUtils      : Setting SQL statement parameter value: column index 2, parameter value [true], value class [java.lang.Boolean], SQL type unknown
2022-09-14 15:46:38.068 TRACE 16148 --- [nio-7000-exec-2] o.s.jdbc.core.StatementCreatorUtils      : Setting SQL statement parameter value: column index 3, parameter value [3], value class [java.lang.String], SQL type unknown
2022-09-14 15:46:38.068 TRACE 16148 --- [nio-7000-exec-2] o.s.jdbc.core.StatementCreatorUtils      : Setting SQL statement parameter value: column index 4, parameter value [%Step%], value class [java.lang.String], SQL type unknown
2022-09-14 15:46:38.068 TRACE 16148 --- [nio-7000-exec-2] o.s.jdbc.core.StatementCreatorUtils      : Setting SQL statement parameter value: column index 5, parameter value [spinach], value class [java.lang.String], SQL type unknown
2022-09-14 16:06:37.952 DEBUG 19748 --- [nio-7000-exec-2] o.s.jdbc.core.JdbcTemplate               : SQLWarning ignored: SQL state 'HY000', error code '1366', message [Incorrect string value: '\xAC\xED\x00\x05sr...' for column 'user_id' at row 1]

So that only shows a warning message at DEBUG level, not even as an error, so at first I missed it completely! The query just returned 0 results.

    SQLWarning ignored: SQL state 'HY000', error code '1366', message [Incorrect string value: '\xAC\xED\x00\x05sr...' for column 'user_id' at row 1]

Makes sense though that this custom query has the same issue, since my string-based query of course does not use the converters that I configured earlier.
So I had to change the third line to explicitly convert the value to a string, so Spring JDBC will pass it on as a string:

        parameterValues.add(userId.toString());

Note: maybe implementing a placeholder interface for each repository like this would also have fixed it for the Spring generated methods/queries like save(), remove() etc, e.g: interface RecipeRepository extends Repository<Recipe, UUID>. Did not try that out.


Thursday, October 13, 2022

Migrating Java 17 Spring Boot 2.7.3 application to Kotlin 1.7.20

Introduction

This blogpost describes the challenges encountered when migrating a Java 17 Spring Boot 2.7.3 application to Kotlin 1.7.20. 

Other libraries/tools used in the project:
- Swagger (OpenAPI 3.0.3)
- Spring boot 2.7.3
- Liquibase
- H2 in mem + file based
- JUnit5 with Mockito and Mockito-Kotlin
- MySql 8.0
- Actuator
- Maven 3
Tip: after manually migrating most of the .java files, I found this Spring tutorial which helps avoid having to add 'open' to @Component, @Service etc. annotated classes.
It includes the kotlin-spring plugin and the Kotlin JPA plugin to better support Kotlin features, including JSR-305 annotations and Spring's nullability annotations. Usually you'd also want to include jackson-module-kotlin for serialization and deserialization of Kotlin classes.
Note that my application uses generated Java classes from an OpenAPI 3 Swagger yaml file, which are returned in the REST API, so jackson-module-kotlin was not needed here.

Total code reduction after migration:
Java: 4718
Kotlin: 2860

So about a 39% reduction in lines of code ((4718 - 2860) / 4718 ≈ 0.39). Not bad.

Steps

Convert POJOs

First I converted a simple POJO, which in my case had @Data and @Builder Lombok annotations.
Open that POJO's Java file and hit ctrl-alt-shift-k, or find the converter in IntelliJ's 'actions' search panel:




Then make sure to rebuild the project, forcing a Maven rebuild if it didn't happen automatically.
I then had to redo the conversion on the POJO class.

I was a bit surprised what came out:

    @Builder
    @Data
    class Ingredient {
        private val name: String = null
        private val recipeId: UUID = null
        private val createdAt: LocalDateTime = null
        private val updatedAt: LocalDateTime = null
    }

I would have expected the Lombok annotations to be gone. But on the other hand IntelliJ can't know how to fix them I guess (see below for more on Lombok migration).
And maybe because of the @Data it made all fields private...

I had expected something more like this:

    @Builder
    @Data
    class Ingredient(val name: String, val recipeId: UUID, val createdAt: LocalDateTime,
                        val updatedAt: LocalDateTime)

But even when I remove the Lombok annotations, still the fields are created as private fields, not as part of the primary constructor... Maybe because of the other annotations on some of the fields, like @Id and @Version?

I manually converted it some more, into this:

    data class Ingredient(val name: String, val recipeId: UUID,
                        val createdAt: LocalDateTime, val updatedAt: LocalDateTime)

Then I converted all uses of the Builder in the Java class to the regular (primary) constructor of the Kotlin data class. E.g:

    new Ingredient(ingredient, recipeId, createdAt, createdAt)

Doing this for all POJOs would be quite some work. And later on, you want to convert those Java instance creations to Kotlin constructors anyway, with named parameters.
So I didn't do this for all classes; I started skipping this step of replacing builders with constructors.

Now first let's try to rebuild the project with 'mvn clean install' for example.

That gave an error: the newly created Kotlin Ingredient class (symbol) could not be found. Note that IntelliJ itself was able to resolve all dependencies just fine.
The answer to that can be found here
I applied the solution where you move your .kt file into its new src/main/kotlin/x/y/z package. Make sure to mark that src/main/kotlin directory as a source directory in IntelliJ.

But still an error: 

    Cannot find symbol (Ingredient)

So that didn't fix it; I then applied the accepted solution, making sure the compilation order is Kotlin first, then Java.
After this change in the pom.xml, IntelliJ couldn't find the Spring Boot application class anymore. 'mvn clean install' ran up to the tests, but many failed due to:

    java.lang.NoClassDefFoundError: kotlin/reflect/full/KClasses 

See next section below for how those were fixed.
I also changed the JVM target in the Kotlin plugin in the pom.xml to: <jvmTarget>1.17</jvmTarget>
After that, IntelliJ compiled fine again.

Then trying to run the application with 'mvn spring-boot:run' gave this error:

    Compilation failure
    Unknown JVM target version: 1.17
    Supported versions: 1.6, 1.8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18

So the <jvmTarget> needs to be 17 (not 1.17) apparently. And then it almost started, but I got the same error as when running the tests:

    java.lang.ClassNotFoundException: kotlin.reflect.full.KClasses

Note also that this warning showed up during the build and needs to be checked and fixed: it flags a duplicate declaration of the kotlin-stdlib-jdk8 dependency (and why that artifact refers to JDK 8 while we use Java 17 is addressed below):

    [INFO] Scanning for projects...
    [WARNING] 
    [WARNING] Some problems were encountered while building the effective model for com.project:kotlinrecipes:jar:1.0.0
    [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must be unique: org.jetbrains.kotlin:kotlin-stdlib-jdk8:jar -> duplicate declaration of version ${kotlin.version} @ line 159, column 15
    [WARNING] 
    [WARNING] It is highly recommended to fix these problems because they threaten the stability of your build.


Fixing the Java tests Part 1

Fixing the tests with the error java.lang.ClassNotFoundException: kotlin.reflect.full.KClasses

Adding this dependency, as also mentioned here, fixed it:

    <dependency>
        <groupId>org.jetbrains.kotlin</groupId>
        <artifactId>kotlin-reflect</artifactId>
        <version>${kotlin.version}</version>
    </dependency>

An outstanding question for me about this stdlib dependency was: why is the Kotlin stdlib using Java 8?

    <dependency>
        <groupId>org.jetbrains.kotlin</groupId>
        <artifactId>kotlin-stdlib-jdk8</artifactId>  <!-- Can't this be Java 17? -->
        <version>${kotlin.version}</version>
    </dependency>

The answer: the Kotlin standard library kotlin-stdlib targets Java 6 and above. There are extended versions of the standard library that add support for some of the features of JDK 7 and JDK 8.
When you include kotlin-stdlib-jdk8, it will pull in kotlin-stdlib-jdk7 and kotlin-stdlib.
Also note: https://stackoverflow.com/questions/65731542/why-is-there-no-kotlin-stdlib-jdk11. So basically all is fine, since the Kotlin stdlib just doesn't use any JDK API newer than Java 8.

After this, the application started fine, connected to the MySql Docker instance and the REST endpoints worked all fine.

Migrating the Spring Boot application class

The IntelliJ converter worked fine. The @Bean annotation in that class was migrated correctly too. Constants were correctly put in a companion object {} block.
But when starting the application this message showed up:

    org.springframework.beans.factory.parsing.BeanDefinitionParsingException: Configuration problem: @Configuration class 'RecipesApplication' may not be final. Remove the final modifier to continue.

Strange at first sight: "may not be final... remove the final modifier" sounds contradictory, but 'may not' here means 'must not'.
Kotlin classes are public final by default, so I made the class 'open' and then it worked.

Then for the @Bean I got:

    org.springframework.beans.factory.parsing.BeanDefinitionParsingException: Configuration problem: @Bean method 'encoder' must not be private or final; change the method's modifiers to continue

I made the method 'encoder' also 'open' and then it worked.

If you have other issues, check this post for more tips. 

Migrating a Spring Boot @RestController

I applied the IntelliJ converter. Its result was pretty good. Inheritance from the OpenAPI 3 generated Java code was correctly applied. 
I had to add the Kotlin logging to replace the @Slf4j static logger, see here.
Note: the default static 'log' field generated by @Slf4j becomes, after applying that, private val logger = KotlinLogging.logger {}. But you can name it whatever you want, of course.

When running the application, again the controller also had to be made open, to allow it to be subclassed (as can be seen in the tip in the introduction section, Spring and some other frameworks require classes to be open (extendable)).
After this change, the controller worked fine.

I also modified the generated code a bit by adding a @NotNull annotation (to a parameter that already had a '?', so it was incorrect anyway, as I realized afterwards). But then, only at runtime, I sadly got this error:

    javax.validation.ConstraintDeclarationException: HV000151: A method overriding another method must not redefine the parameter constraint configuration, but method UserController#loginUser(JwtRequest) redefines the configuration of UsersApi#loginUser(JwtRequest).

Removing the incorrectly added @NotNull fixed that problem; and it is unnecessary in combination with the '?' too anyway.

Migrating @Component service

Got this error after adding the logger:

    java.lang.NullPointerException: Cannot invoke "mu.KLogger.info(String)" because "this.logger" is null

Strange, the @RestController did not have that issue. Though that one is overriding a (generated OpenAPI 3) class.
This question triggered me, so I added the 'open' keyword to the methods in the @Component class, making them non-final so Spring can also wrap them in its proxies.

That worked.

Converting Spring @Repository

For interfaces, the question was: a findByUsername(username) can return null when it doesn't find the user. How to best define that in Kotlin? Allow null to be returned (but at least the caller then has to handle the null possibility)? Or use Optional in that case? Or is there another better solution?
No clear best answer to me, e.g: https://discuss.kotlinlang.org/t/how-to-deal-with-database-null-return-according-kotlin-null-safety-feature/2546
I went for having the repo return '?'. But another repo already was using Optional, so left that there too. Will have to decide on consistency here...
See this post on how null can be seen positively, and also the String.toIntOrNull() extension function built into Kotlin! :) Based on that, I went for allowing repository functions to return null.
 

Converting @MappedCollection(idColumn = "RECIPE_ID")

I only had to make sure to use arrayOf() and a MutableSet, because you usually want to add elements to the child collection:

        @OneToMany(cascade = arrayOf(CascadeType.ALL), orphanRemoval = true, 
                    targetEntity = Ingredient::class)
        @JoinColumn(name = "recipe_id")
        var ingredients: MutableSet<Ingredient> = HashSet<Ingredient>(),

Use @NotNull or not

Is it useful to have the @NotNull annotation when its check only happens at runtime? And doesn't Kotlin already throw an exception when you pass null to a parameter that is non-nullable by default (i.e. without a '?' appended to its type)?
It seems redundant, because Kotlin already inserts null checks (and @NotNull metadata) into the compiled code.
Maybe you'd add it when Java code calls your Kotlin code, to make the contract more explicit, so the Java side can validate against it?

Converting @Configuration class

A class annotated with @Configuration also has to be open:

    @Configuration
    internal open class JdbcConfig : AbstractJdbcConfiguration() {...}

I noticed IntelliJ's Kotlin converter didn't always handle comments at the end of a line of Java code well: sometimes the '()' of a method call ended up on the wrong line.

Fixing the Java tests Part 2

While still as Java code, some failed with this:

    org.mockito.exceptions.base.MockitoException: 
    Cannot mock/spy class com.project.kotlinrecipes.user.infra.JwtUserDetailsServiceImpl
    Mockito cannot mock/spy because :
     - final class

So that was easy, I made those classes 'open'.

I also had to make the methods used in Mockito.when() matchers 'open', because otherwise it complains:

    org.mockito.exceptions.misusing.InvalidUseOfMatchersException: 
    Invalid use of argument matchers!
    0 matchers expected, 1 recorded:

But also verify() started to fail:

    verify(jwtTokenUtil, Mockito.times(0)).validateToken(isA(String.class), isA(UserDetails.class));
    java.lang.NullPointerException: Parameter specified as non-null is null: method
    com.project.kotlinrecipes.infra.security.JwtTokenUtil.validateToken, parameter userDetails

That also meant adding the 'open' keyword to that method.

Converting JUnit 5 tests with Mockito to Kotlin

IntelliJ's auto-converter works pretty well. Except that it converts

    private JdbcIngredientRowMapper jdbcIngredientRowMapper;

    @BeforeEach
    public void setUp() {
        jdbcIngredientRowMapper = new JdbcIngredientRowMapper();
    }

to:

    private var jdbcIngredientRowMapper: JdbcIngredientRowMapper? = null

    @BeforeEach
    fun setUp() {
        jdbcIngredientRowMapper = JdbcIngredientRowMapper()
    }

But it can be made null-safe by changing it to:

    private lateinit var jdbcIngredientRowMapper: JdbcIngredientRowMapper
    
    @BeforeEach
    fun setUp() {
        jdbcIngredientRowMapper = JdbcIngredientRowMapper()
    }

I also introduced the `test description` notation, e.g:

    @Test
    fun `Should map row`() {}

I use in some tests:

    @ParameterizedTest
    @MethodSource("filterNullPermutations")

That filterNullPermutations has to be a static method. IntelliJ moved it into a companion object with the method 'private'. I added @JvmStatic to make it accessible to the @MethodSource.

I had to add the mockito-kotlin library for better interoperability in this case: the plain ArgumentMatchers.isA(clazz) and any() can return null, which cannot be passed into Kotlin methods with non-nullable parameters.
And by adding this library I could now also use 'whenever' instead of '`when`'.

isA(MyClass.class) in Java had to be converted to:  

    whenever(jwtTokenUtil.generateToken(isA<UserDetails>())).thenReturn(BEARER_TOKEN_VALUE)

Or even shorter:

       whenever(jwtTokenUtil.generateToken(isA())).thenReturn(BEARER_TOKEN_VALUE)

using the mockito-kotlin library, which creates an instance (instead of the regular mockito which can return null). See here for more explanation. 

Replaced all mock() with Kotlin style: val mockBookService : BookService = mock()

The generated code did not work for this TestRestTemplate.exchange() call in Java:

        // Set up find parameters
        Map<String, String> uriVariables = new HashMap<>();
        uriVariables.put("vegetarian", "true");
        uriVariables.put("numberOfServings", "3");
        uriVariables.put("includedIngredients", "onion");
        uriVariables.put("excludedIngredients", "fish");
        uriVariables.put("instructions", "First");

        HttpHeaders headers = new HttpHeaders();
        headers.set(HttpHeaders.ACCEPT, MediaType.APPLICATION_JSON_VALUE);
        HttpEntity<?> entity = new HttpEntity<>(headers);

        // When
        ResponseEntity<List<RecipeResponse>> foundRecipesEntity = testRestTemplate.exchange("/recipes/findByFilter?vegetarian={vegetarian}&numberOfServings={numberOfServings}&includedIngredients={includedIngredients}&excludedIngredients={excludedIngredients}&instructions={instructions}",
                HttpMethod.GET, entity, new ParameterizedTypeReference<>() {}, uriVariables);


Became after IntelliJs converter applied:

        // Set up find parameters
        val uriVariables: MutableMap<String, String?> = HashMap()
        uriVariables["vegetarian"] = "true"
        uriVariables["numberOfServings"] = "3"
        uriVariables["includedIngredients"] = "onion"
        uriVariables["excludedIngredients"] = "fish"
        uriVariables["instructions"] = "First"
        val headers = HttpHeaders()
        headers[HttpHeaders.ACCEPT] = MediaType.APPLICATION_JSON_VALUE
        val entity: HttpEntity<*> = HttpEntity<Any>(headers)

        // When
        val foundRecipesEntity: ResponseEntity<List<RecipeResponse>> =
            testRestTemplate.exchange<List<RecipeResponse>>("/recipes/findByFilter?vegetarian={vegetarian}&numberOfServings={numberOfServings}&includedIngredients={includedIngredients}&excludedIngredients={excludedIngredients}&instructions={instructions}",
                HttpMethod.GET, entity, object : ParameterizedTypeReference<List<RecipeResponse?>?>() {}, uriVariables
            )

But .exchange() was underlined in red: no matching method to invoke was found. I had to change it into this:

        // Set up find parameters
        val uriVariables: MutableMap<String, String> = HashMap()
        uriVariables["vegetarian"] = "true"
        uriVariables["numberOfServings"] = "3"
        uriVariables["includedIngredients"] = "onion"
        uriVariables["excludedIngredients"] = "fish"
        uriVariables["instructions"] = "First"
        val headers = HttpHeaders()
        headers[HttpHeaders.ACCEPT] = MediaType.APPLICATION_JSON_VALUE

        // When
        val foundRecipesEntity: ResponseEntity<List<RecipeResponse>>? =
            testRestTemplate.exchange(
                "/recipes/findByFilter?vegetarian={vegetarian}&numberOfServings={numberOfServings}&includedIngredients={includedIngredients}&excludedIngredients={excludedIngredients}&instructions={instructions}",
                HttpMethod.GET, HttpEntity("parameters", headers),
                typeReference<List<RecipeResponse>>(), uriVariables
            )

With the typeReference method added (you can also do it inline BTW):

    private inline fun <reified T> typeReference() = object : ParameterizedTypeReference<T>() {}

And after some more cleaning up this worked too (can you spot the differences with the generated Kotlin from IntelliJ?):

        // Set up find parameters
        val uriVariables: MutableMap<String, String> = HashMap()
        uriVariables["vegetarian"] = "true"
        uriVariables["numberOfServings"] = "3"
        uriVariables["includedIngredients"] = "onion"
        uriVariables["excludedIngredients"] = "fish"
        uriVariables["instructions"] = "First"
        val headers = HttpHeaders()
        headers[HttpHeaders.ACCEPT] = MediaType.APPLICATION_JSON_VALUE
        val entity: HttpEntity<*> = HttpEntity<Any>(headers)

        // When
        val foundRecipesEntity: ResponseEntity<List<RecipeResponse>> =
            testRestTemplate.exchange(
                "/recipes/findByFilter?vegetarian={vegetarian}&numberOfServings={numberOfServings}&includedIngredients={includedIngredients}&excludedIngredients={excludedIngredients}&instructions={instructions}",
                HttpMethod.GET, entity,
                typeReference<List<RecipeResponse>>(), uriVariables
            )


Note that MockK could be a next improvement to the Kotlin code, allowing a more Kotlin-style notation.

Method documentation generation

I noticed my IntelliJ 2022.2.1 does not generate @param, @return etc documentation when typing /** above a (private) function definition.

Miscellaneous

I still have a few references to Java classes in the code, like this one:

        httpSecurity.addFilterBefore(jwtRequestFilter, UsernamePasswordAuthenticationFilter::class.java)

I could not find a way to avoid referencing a Java class this directly in Kotlin.

And at the end of the process I removed all Lombok annotations and its dependency in the pom.xml.

Bonus tip

Spring's Kotlin extensions overview can be found here.


Wednesday, September 14, 2022

Connection refused Spring Boot application to MySql in Docker solution

Introduction

The Spring Boot Java application was initially using the H2 embedded database, first the in-memory version (losing all data each run), then the file based version. The database structure is created using Liquibase.

Then the requirement was to have the Spring Boot Java application still not dockerized (just started with mvn spring-boot:run), but have it connect to a MySql 8 database that runs inside a Docker container. Most examples you can find also have the Spring Boot application itself in a Docker container.
The official Spring documentation only briefly mentions the possibility of running MySql in Docker, with no further details on how to do that: https://spring.io/guides/gs/accessing-data-mysql/

This is how the setup should work:


I set up that docker container using the Windows 10 WSL 1 shell within IntelliJ (note WSL 2 is now recommended):

docker pull mysql/mysql-server:8.0
docker run --name recipesmysql8 -d mysql/mysql-server:8.0

After changing the One Time Password (OTP) for root, creating the database user, creating a 'recipes' database, this is how docker ps looked:

CONTAINER ID        IMAGE                    COMMAND                  CREATED             STATUS                PORTS                       NAMES
3a507fc66690        mysql/mysql-server:8.0   "/entrypoint.sh mysq…"   6 days ago          Up 6 days (healthy)   3306/tcp, 33060-33061/tcp   recipesmysql8

And MySql is indeed running: docker logs recipesmysql8 shows:

2022-08-31T19:02:05.722742Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.30) starting as process 1
2022-08-31T19:02:05.755538Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2022-08-31T19:02:06.104813Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2022-08-31T19:02:06.416522Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2022-08-31T19:02:06.416574Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
2022-08-31T19:02:06.443572Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock
2022-08-31T19:02:06.443740Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.30'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server - GPL.


So all running fine, ports seemed fine. The Spring Boot application configuration for JDBC is this:

spring.datasource.url=jdbc:mysql://localhost:3306/recipes
spring.datasource.username=recipes
spring.datasource.password=mypasswd
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver

But then the mvn spring-boot:run gave this relatively vague error:

com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
    ...
   Caused by: java.net.ConnectException: Connection refused: no further information

So it is unable to connect, but why? It does not seem to be an incorrect username/password combination; then I would expect some unauthorized/unauthenticated type of message.
Then I found this handy command to determine the real IP of the machine that the MySql Docker container runs in:

docker inspect -f '{{.Name}} - {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $(docker ps -aq)

I found my MySql container at: /recipesmysql8 - 172.17.0.2
Same result, shorter answer: docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' recipesmysql8
Or even just without the filtering: docker inspect recipesmysql8

So I changed the JDBC configuration to:

spring.datasource.url=jdbc:mysql://172.17.0.2:3306/recipes
spring.datasource.username=recipes
spring.datasource.password=mypasswd
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver

But then the mvn spring-boot:run gave this timeout error:

Caused by: com.mysql.cj.exceptions.CJCommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
    ...
    Caused by: java.net.ConnectException: Connection timed out: no further information

So a similar error message, but this time a connection timeout. Is the problem maybe in connecting from within IntelliJ, where I execute the mvn spring-boot:run command, to Docker running in WSL?

I tried disabling the antivirus software, the local firewall, and Wi-Fi, but none of those worked either; at most I got a different error message like:

Caused by: java.net.NoRouteToHostException: No route to host: no further information

Even the mysql command-line client gives an error (a bit less clear):

mysql -h 172.17.0.2 -P 3306 --protocol=tcp -u root -p
Enter password:
ERROR 2003 (HY000): Can't connect to MySQL server on '172.17.0.2' (11)

Solution

Then I stopped my container and started a new one with the port mapping explicitly specified:

docker run --name recipesmysql8v2 -p 3306:3306 -d mysql/mysql-server:8.0

docker ps output:
CONTAINER ID        IMAGE                    COMMAND                  CREATED             STATUS                   PORTS
    NAMES
819333cf718e        mysql/mysql-server:8.0   "/entrypoint.sh mysq…"   2 minutes ago       Up 2 minutes (healthy)   0.0.0.0:3306->3306/tcp, :::3306->3306/tcp, 33060-33061/tc
p   recipesmysql8v2

Then a different error message appeared for mysql -h 127.0.0.1 -P 3306 --protocol=tcp -u recipes -p:

ERROR 1130 (HY000): Host '172.17.0.1' is not allowed to connect to this MySQL server

That looked promising.

Note: because this is a new container, I had to reset the one-time password (OTP) for user 'root' again of course. You can find that OTP by issuing docker logs recipesmysql8v2 | grep GENERATED.
Then issue these commands:

docker exec -it your_container_name_or_id bash
mysql -u root -p
Enter password: <enter the one found with the above grep shell command>
ALTER USER 'root'@'localhost' IDENTIFIED BY 'your secret password';

Create the Spring Boot application database again: create database recipes;
Add the Spring Boot application user again: create user 'recipes'@'%' identified by 'L7z$11Oeylh4';

Give that user only the necessary permissions:
grant create, select, insert, delete, update on recipes.* to 'recipes'@'%';
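Putting those setup steps together, the init SQL for a fresh container boils down to something like this (run as root inside the container; the password is a placeholder, use your own):

```sql
create database recipes;
create user 'recipes'@'%' identified by 'your secret password';
grant create, select, insert, delete, update on recipes.* to 'recipes'@'%';
```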

After that, these entries should be in the mysql.user table:
mysql> SELECT host, user FROM mysql.user;
+-----------+------------------+
| host      | user             |
+-----------+------------------+
| %         | recipes          |
| localhost | healthchecker    |
| localhost | mysql.infoschema |
| localhost | mysql.session    |
| localhost | mysql.sys        |
| localhost | root             |
+-----------+------------------+
6 rows in set (0.00 sec)


Note the recipes user entry with host '%', which allows access from any host; you probably want to make that more restrictive in a production environment. See https://downloads.mysql.com/docs/mysql-secure-deployment-guide-8.0-en.pdf for tips.

Now you should be able to connect from the mysql client prompt in several ways:

mysql -h 127.0.0.1 -P 3306 --protocol=tcp -u recipes -p
mysql -h localhost -P 3306 --protocol=tcp -u recipes -p
mysql -h 0.0.0.0 -P 3306 --protocol=tcp -u recipes -p

Note that 172.17.0.2 still does not connect! For that to work you'll probably have to add that host to the above mysql.user table (I have not tried whether that works).

Instead of the mysql client you can also use the standard *nix command telnet to check whether at least the port is reachable:

telnet 127.0.0.1 3306
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.


(I don't remember what telnet showed while the initial issue with the docker container named 'recipesmysql8' was still present.)
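The same reachability check can also be scripted. A minimal Java sketch of such a probe (host, port, and timeout are just example values; a successful connect only proves the port is open, not that MySQL accepts your credentials):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {

    // Returns true when a TCP connection to host:port succeeds within timeoutMs.
    // Like telnet, this only proves the port is reachable; authentication
    // problems show up later, with a different error.
    static boolean isReachable(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Example: probe the MySQL port on localhost.
        System.out.println(isReachable("127.0.0.1", 3306, 2000));
    }
}
```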

And after starting the Spring Boot application: connection worked and tables and indexes were created!

Note: only from within the docker container can you connect as root like this: docker exec -it recipesmysql8v2 bash and then mysql -u root -p, after having issued ALTER USER 'root'@'localhost' IDENTIFIED BY 'nR128^n8f3kx';
Because running mysql -h 127.0.0.1 -P 3306 --protocol=tcp -u root -p from the WSL command line still gives: ERROR 1045 (28000): Access denied for user 'root'@'172.17.0.1' (using password: YES)

Probable cause: when MySQL runs only inside a Docker container without published ports, the host is not on the container's network, so port 3306 is not reachable from outside the Docker runtime. So you have to tell Docker which port(s) you want to expose. Another way to solve this is to run the Spring Boot application as a Docker container too, and potentially wire them together via docker-compose; see here for an example on how to do this: https://www.javainuse.com/devOps/docker/docker-mysql.
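A minimal docker-compose sketch of that last suggestion; the service names, build context, and credentials here are illustrative assumptions, not taken from the linked article:

```yaml
version: "3"
services:
  mysql:
    image: mysql/mysql-server:8.0
    environment:
      MYSQL_DATABASE: recipes
      MYSQL_USER: recipes
      MYSQL_PASSWORD: mypasswd
    ports:
      - "3306:3306"
  app:
    build: .            # the Spring Boot application image
    depends_on:
      - mysql
    environment:
      # inside the compose network, the service name is the hostname
      SPRING_DATASOURCE_URL: jdbc:mysql://mysql:3306/recipes
```

Note the datasource URL: the app connects to the hostname mysql, not to localhost or a hard-coded container IP.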



Wednesday, April 6, 2022

Configuring MySQL test-containers in your Spring Boot Java Integration Tests

Introduction

In your Integration Tests (ITs) you often try to use the H2 in-memory database to improve the speed of your integration tests. But on the other hand, you want to mimic the production database as much as possible.

Setting H2 to MySQL compatibility mode makes it emulate MySQL as much as possible, but only a small subset of the differences is implemented. One thing that, for example, does not work correctly in H2 for JSON fields is that it escapes strings with "". For that reason you usually want to switch to starting a Docker database testcontainer in your IT tests, which uses a real MySQL database, with the Java-specific version from https://github.com/testcontainers/testcontainers-java.


Configuration

There are several good-to-know tips when configuring the testcontainers in your ITs.

  1. The simplest configuration is using a datasource URL in the Spring Boot properties file. This has the disadvantage that whatever database name you specify, the testcontainers library still creates a DB named 'test'. So below you'd think it will be named 'integration_test_db', but it is still named 'test' when the IT runs:

    spring.datasource.url=jdbc:tc:mysql:5.7.32:///integration_test_db?sessionVariables=sql_mode='STRICT_TRANS_TABLES'&TC_MY_CNF=mysql&TC_INITSCRIPT=mysql/init_mysql_integration_tests.sql

    To be able to do everything on the started database, including giving it the name you want, use this URL (or see the Java version below). Notice the user 'root' in the URL:
    spring.datasource.url=jdbc:tc:mysql:5.7.32:///integration_test_db?user=root&password=&sessionVariables=sql_mode='STRICT_TRANS_TABLES'&TC_MY_CNF=mysql&TC_INITSCRIPT=mysql/init_mysql_integration_tests.sql

    Via: https://github.com/testcontainers/testcontainers-java/issues/932

  2. To initialize your database, specify the script via this extra datasource URL variable:

    TC_INITSCRIPT=mysql/init_mysql_integration_tests.sql

    Note the default directory it looks in is ..../resources for the scripts. So the full path is ..../resources/mysql/

  3. An example: to prevent the GROUP BY error from strict mode, add this to your TC_INITSCRIPT:

    SET GLOBAL sql_mode = 'STRICT_TRANS_TABLES';
    SET SESSION sql_mode = 'STRICT_TRANS_TABLES';


  4. To configure it in the IT Java class itself:

    @RunWith(SpringRunner.class)
    @SpringBootTest(classes = { SomeClassA.class, SomeClassB.class, ApplicationConfiguration.class }, webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
    @TestPropertySource(locations = { "classpath:/application-test-mysql.properties" })
    @ContextConfiguration(initializers = { ThisITClass.Initializer.class })
    public class ThisITClass {

        @ClassRule
        public static MySQLContainer mySQLContainer = new MySQLContainer<>("mysql:5.7.31")
                .withUsername("root") // So now you can do a GRANT too for example
                .withPassword("") // Only possible for user 'root'
                .withEnv("MYSQL_ROOT_HOST", "%")
                .withDatabaseName("integration_test_db") // So this name will now be used, not 'test'
                .withInitScript("mysql/init_mysql_integration_tests.sql");

        static class Initializer implements ApplicationContextInitializer<ConfigurableApplicationContext> {
            public void initialize(ConfigurableApplicationContext configurableApplicationContext) {
                TestPropertyValues.of(
                        "spring.datasource.url=" + mySQLContainer.getJdbcUrl(),
                        "spring.datasource.username=" + mySQLContainer.getUsername(),
                        "spring.datasource.password=" + mySQLContainer.getPassword()
                ).applyTo(configurableApplicationContext.getEnvironment());
            }
        }

        // ... the actual tests ...
    }

  5. The MySQL docker image used in the tests is retrieved from DockerHub https://hub.docker.com/_/mysql

Sunday, December 26, 2021

Sentry in Spring Boot application reports uncaught exception while @RestControllerAdvice is handling exception

Introduction

Even when you have a @RestControllerAdvice configured and you know it gets invoked, it can happen that Sentry.io's monitoring library still reports the exception as uncaught. The reason is that when you configure Sentry as specified without setting its order in the exception-handler-resolver chain, it gets the lowest order by default, so it is invoked first in the chain.

Solution

Thus, to fix this, set it to a higher order number. Safest seems to be Integer.MAX_VALUE; that way it will always be invoked last.

The most recent Sentry integration examples can be found here for Spring and here for Spring Boot. Maybe you want to consider logging via Logback or Log4J too, see the related Sentry integration examples on the same page.

Another solution, for older Sentry versions, is overriding the getOrder() method. Below is an example, using the suggested configuration from this old, obsolete Sentry page.

@Configuration
@Slf4j
public class SentryConfig {
    @Bean
    public HandlerExceptionResolver sentryExceptionResolver() {

        return new SentryExceptionResolver() {
            @Override
            public ModelAndView resolveException(HttpServletRequest request,
                    HttpServletResponse response,
                    Object handler,
                    Exception ex) {
                log.info("Sentry resolving this exception: ", ex);
                // You could skip exceptions that are considered client-side errors and return null in that case so it is considered taken care of.
                return super.resolveException(request, response, handler, ex);
            }

            @Override
            public int getOrder() {
                // Ensure other resolver(s) can run first, otherwise this handler is invoked as first and thus reporting an issue for Sentry
                return Integer.MAX_VALUE;
            }
        };
    }
    @Bean
    public ServletContextInitializer sentryServletContextInitializer() {
        return new io.sentry.spring.SentryServletContextInitializer();
    }
}


Similar solutions mentioned here: https://stackoverflow.com/questions/48401974/how-to-have-my-own-error-handlers-before-sentry-in-a-spring-application


Thursday, November 19, 2020

Spring @Scheduled using DynamoDB AWS X-Ray throws SegmentNotFoundException: failed to begin subsegment

Introduction

AWS X-Ray is designed to automatically intercept incoming web requests; see this introduction, and also here.

But when you start your own thread (via a Runnable, a plain new Thread(), or @Scheduled), X-Ray cannot initialise itself: there are no web requests for it to intercept. It then throws an exception like this:

Suppressing AWS X-Ray context missing exception (SegmentNotFoundException): Failed to begin subsegment named 'AmazonDynamoDBv2': segment cannot be found.

In the above example the distributed DynamoDB Lock Client was used, which uses DDB in its implementation for acquiring and releasing a lock.

Regular web requests were not throwing this X-Ray exception.

Investigation

Amazon explains that crucial bit of knowledge, that web requests "automagically" set up the X-Ray recorder, a bit here.

But that doesn't fully explain it with an example. E.g. only adding

AWSXRay.beginSubsegment("AmazonDynamoDBv2") 

(and ending it) didn't fix it. That then gave this exception:

Suppressing AWS X-Ray context missing exception (SubsegmentNotFoundException): Failed to end subsegment: subsegment cannot be found.

My suspicion here is that the lock client had already closed the identically named subsegment "AmazonDynamoDBv2".
Also, I'm not creating any worker thread myself; Spring is doing that for me.

Note that you can at least avoid exceptions being thrown by setting the environment variable AWS_XRAY_CONTEXT_MISSING to LOG_ERROR. That will only log the above exception.

Solution

Creating a 'parent' segment and setting the trace entity and the subsegment did the job:

Entity parentSegment = AWSXRay.beginSegment("beginSegmentForSomeScheduledTask");
AWSXRay.getGlobalRecorder().setTraceEntity(parentSegment);
AWSXRay.beginSubsegment("AmazonDynamoDBv2");

I did test creating only the subsegment, and creating and setting only the parentSegment, but those raised the exception again.
I did not further investigate whether the "AmazonDynamoDBv2" name of the subsegment is essential.
And of course add the matching endSubsegment() and endSegment() calls, in that order: close the last one begun first.

This thread pointed me in the right direction. The next option I would have tried: setting up my own filter to run earlier in the (filter) chain; though a @Scheduled task of course does not pass through a filter. Another workaround would have been to set the X-Ray logging to a very high level:

logging.level.com.amazonaws.xray = SEVERE

Also helped in explaining is this X-Ray reported issue.

Update: additionally, the error was also caused by the DynamoDB Lock Client! I did not specify withCreateHeartbeatBackgroundThread() when creating the lock client, but the X-Ray exception showed that it was trying to send a heartbeat. After explicitly setting withCreateHeartbeatBackgroundThread(false), the exception (and error) regarding 'segment cannot be found' was fully fixed.




Thursday, December 28, 2017

Logback DBAppender sometimes gives error on AWS Aurora: IllegalStateException: DBAppender cannot function if the JDBC driver does not support getGeneratedKeys method *and* without a specific SQL dialect

LOGBack DBAppender IllegalStateException


Sometimes when starting a Spring Boot application with Logback DBAppender configured for PostgreSQL or AWS Aurora in logback-spring.xml, it gives this error:

java.lang.IllegalStateException: Logback configuration error detected: ERROR in ch.qos.logback.core.joran.spi.Interpreter@22:16 - RuntimeException in Action for tag [appender] java.lang.IllegalStateException: DBAppender cannot function if the JDBC driver does not support getGeneratedKeys method *and* without a specific SQL dialect

The error can be quite confusing. The documentation says that Logback should be able to detect the dialect from the driver class.

But apparently it doesn't, sometimes. After investigating, it turns out that this error is also given when the driver can't connect correctly to the database: it will then not be able to retrieve the metadata it uses to detect the dialect, so you get this error in that case too!
A confusing error message indeed.

A suggestion in some post was to specify the <sqlDialect> tag, but that is not needed anymore in recent Logback versions. Indeed, it now gives these errors when you put it in the logback-spring.xml file, either below <password> or below <connectionSource>:

ERROR in ch.qos.logback.core.joran.spi.Interpreter@25:87 - no applicable action for [sqlDialect], current ElementPath  is [[configuration][appender][connectionSource][dataSource][sqlDialect]]
or
ERROR in ch.qos.logback.core.joran.spi.Interpreter@27:79 - no applicable action for [sqlDialect], current ElementPath  is [[configuration][appender][sqlDialect]]

To get a better error message, it's better to implement the setup of the Logback DBAppender in code instead of in logback-spring.xml. See for examples here and here.




Wednesday, April 12, 2017

Lessons learned Docker microservices architecture with Spring Boot

Introduction

During my last project consisting of a Docker microservices architecture, built with Spring Boot, using RabbitMQ as communication channel, I learned a bunch of lessons, here's a summary of them.

Architecture

Below is a high level overview of the architecture that was used.


Docker

  • Run 1 process/service/application per docker container (or put stuff in init.d but that's not intended use of docker)

  • Starting background processes in the CMD causes the container to exit. So either have a script waiting at the end (e.g. tail -f /dev/null) or keep the process (i.e. the one prefixed with CMD) running in the foreground. Other useful Dockerfile tips can be found here

  • As far as I can tell Docker checks if Dockerfile has changed, and if so, creates a new image instance (diffs only?)

  • Basic example to start RabbitMq docker image, as used in the build tool:

    $ docker pull 172.18.19.20/project/rabbitmq:latest
    $ docker rm -f build-server-rabbitmq
    $ # Map the RabbitMQ regular and console ports
    $ docker run -d -p 5672:5672 -p 15672:15672 --name build-server-rabbitmq 172.18.19.20/rabbitmq:latest

  • If there's no docker0 interface (check by running command ifconfig) then probably there are ^M characters in the config file at /etc/default/docker/docker.config. To fix it, perform a dos2unix on that file.

  • Check for errors at startup of docker in /var/log/upstart/docker.log

  • If your docker push <image> asks for a login (and you don't expect that) or it returns some weird html like "</html>" then you're probably missing the host in front of the image name, e.g: 172.18.19.20:6000/projectname/some-service:latest

  • Stuff like /var/log/messages is not visible in a Docker container, but it is in its host! So look there, for example, to find out why a process is not starting or gets killed at startup without any useful logging (like we had with clamd)

  • How to remove old dangling unused docker images: docker rmi $(docker images --filter "dangling=true" -q --no-trunc)

Spring Boot

  • Some Jackson2 dependencies were missing from the generated Spring Initializr project, which we noticed when creating unit tests. These dependencies were additionally needed in scope test:

    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>2.5.0</version>
    </dependency>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-annotations</artifactId>
      <version>2.5.0</version>
    </dependency>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-core</artifactId>
      <version>2.5.0</version>
    </dependency>


    Not sure anymore why these didn't get <scope>test</scope> then... Guess it was also needed in some regular code... :)

  • In the Spring Boot AMQP Quick Start, the last param during binding is named .with(queueName), but that's the topic (routing) key (related to the binding key used when sending), so not the queue name.

  • Spring Boot Actuator's /health will check all related dependencies! So if you have a dependency in your pom.xml on a project which uses spring-boot-starter-amqp, /health will now check for an AMQP queue being up! So add an exclusion for those health indicators if you don't want that.
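One way to do that is disabling the specific health indicator in application.properties; for RabbitMQ, for example (property name as documented in the Spring Boot reference; verify it for your Boot version):

```properties
management.health.rabbit.enabled=false
```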

  • Spring Boot's default generated ApplicationIT probably needs a @DirtiesContext for your tests; otherwise the tests might re-use or create more beans than you think (we saw that in our message-receiver tests helper class).

  • @Transactional in Spring rolls back by default only for unchecked exceptions! It's documented, but still a thing to watch out for.

  • And of course: Spring's @Transactional does not work on private methods (due to proxy stuff it creates)

  • To see in Spring Boot the transaction logging, put this in application.properties:

    logging.level.org.springframework.jdbc=TRACE
    logging.level.org.springframework.transaction=TRACE


    Note that by default @Transactional just rolls back; it does not log anything. So if you don't log your runtime exceptions, you won't see much in your logs.

  • Spring's MockMvc is not really invoking from "outside": our Spring Security context filter (for which you can use @Secured(role)) was allowing calls even though no authentication was provided. RestTemplate does seem to work from "the outside".

  • Scan order can mess up @ControllerAdvice error handler it seems. Had to change the order sometimes:

    Setup:
    - Controller is in: com.company.request.web.
    - General error controller is in com.company.common package.

    Had to change
    @ComponentScan(value = {"com.company.security", "com.company.common", "com.company.cassandra", "com.company.module", "com.company.request"})

    to

    @ComponentScan(value = {"com.company.security", "com.company.cassandra", "com.company.module", "com.company.request", "com.company.common"})

    Note that the general error controller has now been put in last

  • The Spring Boot footprint seems relatively big, especially for microservices. At least 500MB or so is needed, so we have quite big machines for about 20 services. Maybe plain Spring (instead of Spring Boot) might be more lightweight...

Bamboo build server

  • When Bamboo gets slow, the CPU seems quite busy, and memory availability on its server seems fine, increase the JVM's -Xms and -Xmx (or related) settings. Found this out because the Bamboo Java process was sometimes running out of heap; increasing the heap also fixed performance.

  • To have Bamboo builds fail on quality gates not met in SonarQube, install in Sonar the build breaker plugin. See the plugin docs and Update Center. This FAQ says so.

Stash

  • The Stash (now called Bitbucket) API: in /rest/git/1.0/projects/{projectKey}/repos/{repositorySlug}/tags a 'slug' is just a repository name. 

Microservices with event based architecture

  • When you do microservices, IMMEDIATELY take into account during coding + reviews that multiple instances can access the database concurrently.

    This affects your queries. The most likely correct implementation of a uniqueness check on inserts:
    1- add a unique constraint
    2- run the insert
    3- catch the uniqueness exception --> you know it already exists. A solution with SELECT NOT EXISTS is not guaranteed to be unique.
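The same check-then-act pitfall can be illustrated in plain Java: an atomic insert that reports whether the value already existed is the in-memory analogue of relying on the unique constraint instead of SELECT NOT EXISTS. A minimal sketch (illustrative, not project code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class UniqueInsert {

    private final Map<String, String> store = new ConcurrentHashMap<>();

    // Atomic insert: returns true when the key was newly inserted,
    // false when another (possibly concurrent) caller already inserted it.
    // This mirrors "insert + catch uniqueness exception" in the database:
    // a single operation that both inserts and detects the duplicate,
    // instead of a racy check-then-insert.
    boolean insertIfAbsent(String key, String value) {
        return store.putIfAbsent(key, value) == null;
    }

    public static void main(String[] args) {
        UniqueInsert u = new UniqueInsert();
        System.out.println(u.insertIfAbsent("order-1", "first"));  // true: newly inserted
        System.out.println(u.insertIfAbsent("order-1", "second")); // false: duplicate detected
    }
}
```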

  • Also take deleting of data (e.g. a user deleting himself) into account from the start, especially when using events and/or eventual consistency in combination with an account balance or similar. Because what if one service in the whole chain of things to execute for a delete fails? Does the user still have some money left in his/her account then? In short: take care of all of CRUD.

  • Multiple services sending the same event? That can indicate two services are doing the same thing --> probably not good.

  • Microservices advantages:

    - Forces you to better think about where to put stuff in comparison to monolith where you more often can be tempted to "just do a quick fix".
    - language independency for service implementation: choose the best language for the job

    Disadvantages:
    - more time needed for design
    - eventual consistency is quite tough to understand & work with, also conceptually
    - infrastructure is more complex including all communication between services

    More cons can be found here.

Tomcat

  • Limiting the maximum size of what can be posted to a servlet is not as easy as it seems for REST services:

    - maxPostSize in Tomcat is only enforced for the content type application/x-www-form-urlencoded

    - And the three XML options below are for multipart only:

    <multipart-config>
      <!-- 52MB max -->
      <max-file-size>52428800</max-file-size>
      <max-request-size>52428800</max-request-size>
      <file-size-threshold>0</file-size-threshold>
    </multipart-config>


    So that one won't work for uploading just a byte[]. The only solution: in the servlet (e.g. a Spring @Controller) you'll have to check for the limit you want to allow yourself.
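A hedged sketch of that servlet-side check: read the request body through a cap and reject as soon as the limit is exceeded, instead of buffering everything first. The limit value and exception type are illustrative:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class SizeLimitedReader {

    // Reads at most maxBytes from the stream; throws as soon as the body
    // turns out to be larger, so oversized uploads never get fully buffered.
    static byte[] readCapped(InputStream in, int maxBytes) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[8192];
        int read;
        while ((read = in.read(buffer)) != -1) {
            if (out.size() + read > maxBytes) {
                throw new IOException("Request body exceeds limit of " + maxBytes + " bytes");
            }
            out.write(buffer, 0, read);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // In a controller you'd pass request.getInputStream(); a byte array stands in here.
        byte[] small = readCapped(new ByteArrayInputStream(new byte[100]), 1024);
        System.out.println(small.length); // 100
    }
}
```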

  • Tomcat's maxThreads (worker threads) seems to be set to unlimited by default or something; 50 seemed to perform better for us.

Security

  • To securely generate a random number: SecureRandom randomGenerator = SecureRandom.getInstance("NativePRNG");

  • Good explanation of secure use of a salt to use for hashing can be found here
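Combining both tips, a minimal sketch of generating a random salt and hashing with it. This is illustrative only: the post suggests NativePRNG, while the sketch uses the more portable no-arg SecureRandom, and SHA-256 stands in for a proper slow password hash such as bcrypt or PBKDF2:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;

public class SaltedHash {

    // 16 random bytes from a cryptographically strong source. The no-arg
    // constructor picks a strong default algorithm on every platform.
    static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    // Hash = SHA-256(salt || input); the salt is stored alongside the hash
    // so the same computation can be repeated at verification time.
    static byte[] hash(byte[] salt, String input) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update(salt);
            md.update(input.getBytes(StandardCharsets.UTF_8));
            return md.digest();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is guaranteed on the JVM
        }
    }

    public static void main(String[] args) {
        byte[] salt = newSalt();
        System.out.println(hash(salt, "secret").length); // 32
    }
}
```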

Cassandra

  • Unique constraints are not possible in Cassandra, so there you will even have to implement unique constraints in the business logic (and make it eventually consistent)

  • CassandraOperations query for one field:

    Select select = QueryBuilder.select(MultiplePaymentRequestRequesterEntityKey.ID).from(MultiplePaymentRequestRequesterEntity.TABLE_NAME);
    select.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
    select.where(QueryBuilder.eq(MultiplePaymentRequestRequesterEntityKey.REQUESTER, requester));
    return cassandraTemplate.queryForList(select, UUID.class);


    See also here.

  • Note: the two keys below don't seem to get picked up by Cassandra in Spring Data Cassandra version 1.1.4.RELEASE:

    <groupid>org.springframework.data</groupid>
    <artifactid>spring-data-cassandra</artifactid>

    @PrimaryKeyColumn(name = OTHER_USER_ID, ordinal = 0, type = PrimaryKeyType.PARTITIONED)
    @CassandraType(type = DataType.Name.UUID)
    private UUID meUserId;

    @PrimaryKeyColumn(name = ME_USER_ID, ordinal = 1, type = PrimaryKeyType.CLUSTERED)
    @CassandraType(type = DataType.Name.UUID)
    private UUID meId;

    This *does* get picked up: put it into a separate class:

    @Data
    @AllArgsConstructor
    @PrimaryKeyClass
    public class HistoryKey implements Serializable {

      @PrimaryKeyColumn(name = HistoryEntity.ME_USER_ID, ordinal = 0, type = PrimaryKeyType.PARTITIONED)
      @CassandraType(type = DataType.Name.UUID)
      private UUID meUserId;

      @PrimaryKeyColumn(name = HistoryEntity.OTHER_USER_ID, ordinal = 1, type = PrimaryKeyType.PARTITIONED)
      @CassandraType(type = DataType.Name.UUID)
      private UUID otherUserId;

      @PrimaryKeyColumn(name = HistoryEntity.CREATED, ordinal = 2, type = PrimaryKeyType.CLUSTERED, ordering = Ordering.DESCENDING)
      private Date created;

    }

  • Don't use Cassandra for all types of use cases. An RDBMS still has its value, e.g. for ACID requirements; Cassandra is eventually consistent.


Miscellaneous

  • Use dig for DNS resolving problems

  • Use pgAdmin III for PostgreSQL GUI

  • To stop SonarQube complaining about unused private fields when using Lombok @Data annotation: add to each of those classes @SuppressWarnings("PMD.UnusedPrivateField")

  • Managed to not need transactions nor XA transactions for message publishing, message reading, db storing, and message sending, by using the confirm + ack mechanism, and by allowing a message to be read again. The DB then sees: oh, already stored (or do an upsert).
    So, when processing a message from the queue:
    1- store in db
    2- send the message on the queue
    3- only then ack back to the queue that the read was successful
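That "oh, already stored" behaviour can be sketched as an idempotent consumer: remember which message IDs were processed and treat a redelivery as a no-op before acking. This is a simplified illustration, not the actual project code; in real life the processed-ID set lives in the database (upsert or unique constraint on the message id), here it is in memory:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class IdempotentConsumer {

    // Stand-in for the database's record of already-stored messages.
    private final Set<String> processed = ConcurrentHashMap.newKeySet();

    private int sideEffects = 0;

    // Returns true when the message was handled now, false on a redelivery.
    // Either way the caller can safely ack afterwards: redeliveries do
    // nothing, so a crash between store and ack cannot duplicate work.
    boolean handle(String messageId) {
        if (!processed.add(messageId)) {
            return false; // already stored earlier: redelivery, nothing to do
        }
        sideEffects++;    // 1- store in db, 2- send follow-up message, ...
        return true;
    }

    int sideEffectCount() {
        return sideEffects;
    }

    public static void main(String[] args) {
        IdempotentConsumer c = new IdempotentConsumer();
        System.out.println(c.handle("msg-42"));  // true: processed
        System.out.println(c.handle("msg-42"));  // false: duplicate delivery ignored
        System.out.println(c.sideEffectCount()); // 1
    }
}
```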

  • Performance: instantiate the Jackson2 ObjectMapper once as a static field, not in each call, so:
    private static final ObjectMapper mapper = new ObjectMapper();

  • Javascript: when an exception occurs in a callback and it is not handled, processing just ends. Promises have better error handling.

  • clamd would not start correctly; it would try to start but then show 'Killed' when started via the command line. It turns out it runs out of memory at startup: although we had enough RAM (16G total, 3G free), clamd needs swap configured!

  • Linux bash shell script to loop through projects for tagging with projects with spaces in their name:

    PROJECTS="
      project1
      project space2
    ";
    IFS=$'\n'
    for PROJECT in $PROJECTS
    do
      TRIM_LEADING_SPACE_PROJECT="$(echo -e "${PROJECT}" | sed -e 's/^[[:space:]]*//')"
      echo "Cloning '$TRIM_LEADING_SPACE_PROJECT'"
      git clone --depth=1 http://$USER:$GITPASSWD@github.com/projects/$TRIM_LEADING_SPACE_PROJECT.git
    done

  • OpenVPN on Windows 10: sometimes it hangs on "Connecting..." and doesn't show the popup to enter username/password. Go to View logs; when you see "Enter management password" in the logs, you have to kill the OpenVPN Daemon under the Processes tab (Windows task manager). The service is stopped when exiting the app, but that's not enough!

  • javascript/nodejs: log every call that comes in:

    app.use(function (req, res, next) {
      console.log('Incoming request = ' + new Date(), req.method, req.url);
      logger.debug('log at debug level');
      next();
    });

  • If ever your mouse suddenly stops working in your VirtualBox guest, kill the process in your guest machine mentioned in comment 5 here. After that the mouse works again in your vbox guest.

  • Pin Firefox to version 45.0.2 for the Selenium driver tests:

    sudo apt-get install -y firefox=45.0.2+build1-0ubuntu1
    sudo apt-mark hold firefox

  • Setting the cookie attribute Secure (indicating the cookie should only be sent over httpS) can be observed by using curl to request the URL(s) that should send that cookie plus the new attribute, even when using HTTP. See also my previous post.

    But when using a browser over HTTP, you probably won't see the secure cookie appear in the cookie store. This is (probably) because the browser knows not to store it in that case, since plain HTTP is being used.

  • Idempotency within services is key for resilience and for being able to resend an event or perform an API call again.