2017/07/09

Automatic DB migrations for Spring Boot with Liquibase

Introduction

I recently wrote a short tutorial on Building REST services with Spring Boot, JPA and MySQL, in two parts: Part 1 and Part 2.

I decided to add a piece that is essential to any serious project with an SQL store: management of database migrations.

In the real world, where requirements keep changing and schemas cannot be fully designed up front, you will face a real problem sooner rather than later: how do you manage changes to the database schema once the application or web service is running?

I wrote an article on this some time ago which still seems quite valid; there you can read the details of a recommended development workflow for coping with database migrations in all phases of development.

In this article, I will apply the ideas from that previous article to a more up-to-date app: a Spring Boot REST web service with MySQL.

Adding the Liquibase plug-in for Maven

Let's add the Liquibase plug-in.
What we want to achieve in this first step is to be able to evolve our model and keep working as we did before adding Liquibase: the Hibernate Maven plugin will take care of recreating an up-to-date schema every time we run tests or launch the application.

We add the dependency:
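A sketch of the dependency; with the Spring Boot parent managing versions, none needs to be pinned here:

<!-- Liquibase core (version managed by the Spring Boot parent) -->
<dependency>
  <groupId>org.liquibase</groupId>
  <artifactId>liquibase-core</artifactId>
</dependency>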



We need to add the database changelog file for Liquibase: the file where all changes managed by Liquibase are registered. Our initial changelog will be empty.
We add the file src/main/resources/db/db.changelog.xml:
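A minimal, initially empty changelog would look like this (the schema version in the XSD URL is an assumption):

<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.5.xsd">
</databaseChangeLog>

Since this is not Spring Boot's default changelog location, the app would also be pointed at it in application.properties (an assumption about this setup, using the Boot 1.x property name): liquibase.change-log=classpath:db/db.changelog.xml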



We add the liquibase.properties file:
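A sketch, assuming a hypothetical MySql schema and credentials:

# Connection details for the Liquibase Maven plug-in (values are assumptions)
url=jdbc:mysql://localhost:3306/games
username=games
password=games
driver=com.mysql.jdbc.Driver
changeLogFile=src/main/resources/db/db.changelog.xml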



Finally, we add the plug-in with the relevant configuration:
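A sketch of the plug-in section (the version is an assumption):

<plugin>
  <groupId>org.liquibase</groupId>
  <artifactId>liquibase-maven-plugin</artifactId>
  <version>3.5.3</version>
  <configuration>
    <propertyFile>src/main/resources/liquibase.properties</propertyFile>
  </configuration>
  <!-- No executions bound: no migrations run during a normal build -->
</plugin>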



With this configuration, we are telling Liquibase not to attempt any database migrations. This is our normal workflow: we update the model with JPA annotations and run tests, while the Hibernate plugin takes care of dropping and re-creating the database.

This is what we see if we run mvn clean test:




Generating DB diff automatically with Liquibase: First migration

We have finished evolving our model and adding the additional logic and tests.
We are now happy and ready to commit a change.
This point could even be the very first version of your DB schema!

Let's generate the DB diff with Liquibase.
The generated diff file will be incorporated into the registered Liquibase DB schema updates. Additionally, when running our app, Liquibase will take care of migrating the DB schema to the latest version registered in our codebase.

To make all this magic happen, let's add a profile to our POM so we can generate the DB diff at any time.
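A sketch of such a profile; the diff reference settings (what the current schema is compared against) would live in liquibase.properties and are an assumption here, as is the timestamped file name:

<profile>
  <id>db-diff</id>
  <properties>
    <maven.build.timestamp.format>yyyyMMdd_HHmmss</maven.build.timestamp.format>
  </properties>
  <build>
    <plugins>
      <plugin>
        <groupId>org.liquibase</groupId>
        <artifactId>liquibase-maven-plugin</artifactId>
        <configuration>
          <propertyFile>src/main/resources/liquibase.properties</propertyFile>
          <diffChangeLogFile>src/main/resources/db/db-${maven.build.timestamp}.changelog.xml</diffChangeLogFile>
        </configuration>
        <executions>
          <execution>
            <phase>process-test-resources</phase>
            <goals>
              <goal>diff</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>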


Let's generate our first Liquibase migration with mvn clean process-test-resources -Pdb-diff:



Liquibase has generated for us the file src/main/resources/db/db-20170709_144112.changelog.xml with these contents:
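Roughly like this (the table and column names are assumptions based on the sample Game model from the earlier tutorial, as are the author and id values):

<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog" ...>
  <changeSet author="generated" id="1499604072271-1">
    <createTable tableName="game">
      <column autoIncrement="true" name="id" type="BIGINT">
        <constraints primaryKey="true"/>
      </column>
      <column name="name" type="VARCHAR(255)"/>
    </createTable>
  </changeSet>
</databaseChangeLog>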




Great! We can now add this filename to our global DB changelog file:
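The global changelog now contains a single include, referencing the file generated above:

<databaseChangeLog ...>
  <include file="db/db-20170709_144112.changelog.xml"/>
</databaseChangeLog>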




Subsequent migrations with Liquibase

To check that our migration mechanism works well, let's update our model with a version field and generate a DB diff again via mvn process-test-resources -Pdb-diff.

Liquibase generates this file:
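Roughly this (again assuming the hypothetical game table; the id is illustrative):

<changeSet author="generated" id="1499605000000-1">
  <addColumn tableName="game">
    <column name="version" type="BIGINT"/>
  </addColumn>
</changeSet>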



This seems like magic!

Automatic DB migration embedded in the app

Adding Liquibase to our dependencies has also included a Liquibase Spring bean in our app. This bean runs at application startup, checks the registered changesets against the app's DB, and brings the DB schema up to date automatically by applying any needed migrations.

It would be good to see this in action at development time, so we can test it.

Let's add another profile to our Maven project for this.
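A sketch; the skip flag of the Hibernate plug-in and the use of sql-maven-plugin to drop and re-create the schema are assumptions about this setup:

<profile>
  <id>db-test</id>
  <properties>
    <!-- Assumption: tells the juplo hibernate-maven-plugin to do nothing -->
    <hibernate.skip>true</hibernate.skip>
  </properties>
  <build>
    <plugins>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>sql-maven-plugin</artifactId>
        <version>1.5</version>
        <dependencies>
          <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>5.1.42</version>
          </dependency>
        </dependencies>
        <configuration>
          <driver>com.mysql.jdbc.Driver</driver>
          <url>jdbc:mysql://localhost:3306</url>
          <username>games</username>
          <password>games</password>
          <!-- Drop and re-create the (hypothetical) schema, leaving it empty
               so the Liquibase bean has to run all registered migrations -->
          <sqlCommand>DROP DATABASE IF EXISTS games; CREATE DATABASE games;</sqlCommand>
        </configuration>
        <executions>
          <execution>
            <phase>process-test-resources</phase>
            <goals>
              <goal>execute</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>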



This profile skips any schema generation by the Hibernate plugin and drops the database. This way, when the app starts, the Liquibase Spring bean enters into action and is forced to run all registered migrations.

If we now run the app via mvn clean spring-boot:run -Pdb-test:



Works as expected!

Source Code and Additional Info

Source code for this tutorial here.
Additional Liquibase maven plugin info: mvn liquibase:help
More info about Liquibase here.
Some more info in my previous article about DB migration.

2017/05/29

Avoiding undesired web Scraping and fake Web Search engines in Ruby on Rails

Introduction

If you have developed a nice web app with a lot of content, you will sooner or later face undesired web scraping.

The undesired web scraping will sometimes come from an unmasked bot, with user agents such as Go-http-client, curl, Java and others. But sometimes you will have to deal with bots pretending to be the almighty Googlebot or some other legitimate bot.

In this article I will propose a defense that mitigates undesired web scraping and detects fake bots disguised under a legitimate bot name (user agent), without compromising response time.

This defense can be integrated into any Rack-based web app, such as Ruby on Rails or Sinatra.

Request Throttling

If your website has a lot of content, any reasonable human visitor will not access many pages. Let's say your visitor is a very avid reader and enjoys your content a lot. How many pages do you think they can visit:
  • per minute?
  • per hour?
  • per day?
Our defense strategy will be based on accumulating the number of requests coming from a single IP address for different slots of time.
When one IP address exceeds a pre-configured reasonably high number of requests for the given interval, our app will respond with an HTTP 429 "Too many requests" code.

To the rescue comes rack-attack: a rack middleware for blocking and throttling abusive requests.

Rack-attack stores request information in a configurable cache, with Redis and Memcached as some of the possible cache stores. If you are using Resque, you will probably want to use Redis for rack-attack too.


Here's a possible implementation of rack-attack:
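A minimal sketch (the throttled paths, the limits and the cache store are assumptions for illustration; the Redis store requires the redis-activesupport gem):

# config/initializers/rack_attack.rb
class Rack::Attack
  # Paths that are candidates for throttling (hypothetical entries)
  THROTTLED_PATHS = %w[/articles /search].freeze

  # Maximum requests allowed per IP for each time slot, in seconds (arbitrary)
  MAX_REQUESTS = { 60 => 30, 3_600 => 300, 86_400 => 1_000 }.freeze

  # Store the counters in Redis, which we already run for Resque
  cache.store = ActiveSupport::Cache::RedisStore.new(ENV["REDIS_URL"])

  MAX_REQUESTS.each do |period, limit|
    throttle("req/ip/#{period}s", limit: limit, period: period) do |req|
      req.ip if THROTTLED_PATHS.any? { |path| req.path.start_with?(path) }
    end
  end
end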



Let's go through the code.

Any request whose path starts with one of these entries will be a candidate for throttling:
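In the sketch above, the hypothetical list is:

THROTTLED_PATHS = %w[/articles /search].freeze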


We set up a reasonable maximum number of requests for each of the intervals of time we will consider for request throttling:
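From the sketch, as a map from time slot (in seconds) to the maximum number of requests:

MAX_REQUESTS = { 60 => 30, 3_600 => 300, 86_400 => 1_000 }.freeze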

This is arbitrary and you can choose different intervals of time.

We would like to limit the number of requests within 60 seconds coming from the same IP:
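Expanding the loop in the sketch for the 60-second slot:

throttle("req/ip/60s", limit: 30, period: 60) do |req|
  req.ip if THROTTLED_PATHS.any? { |path| req.path.start_with?(path) }
end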


When this throttle block returns a truthy value, a counter is incremented in the Rack::Attack cache. If the throttle's limit is exceeded, the request is blocked.

We will slightly modify the default rack-attack throttling algorithm to allow legitimate web indexers in a timely manner.
Here's the new implementation of the algorithm:
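A sketch of the idea, written here inside the throttle block rather than as a patch of rack-attack's internals (VerifyBot and VerifyBotJob are described below):

throttle("req/ip/60s", limit: 30, period: 60) do |req|
  next nil unless THROTTLED_PATHS.any? { |path| req.path.start_with?(path) }

  if VerifyBot.allowed_user_agent(req.user_agent) &&
     !VerifyBot.fake_bot(req.user_agent, req.ip)
    # Assume the bot is authentic and verify it offline; if it turns out
    # to be fake, it will be throttled on subsequent requests
    unless VerifyBot.allowed_bot(req.user_agent, req.ip)
      VerifyBotJob.perform_later(req.user_agent, req.ip)
    end
    next nil
  end

  req.ip
end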


Our new algorithm is basically the same as the original rack-attack one, except for the addition of a few lines that check whether the request comes from one of our allowed search crawlers:
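From the sketch above, those lines are:

if VerifyBot.allowed_user_agent(req.user_agent) &&
   !VerifyBot.fake_bot(req.user_agent, req.ip)
  unless VerifyBot.allowed_bot(req.user_agent, req.ip)
    VerifyBotJob.perform_later(req.user_agent, req.ip)
  end
  next nil
end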


What this block does is:
  • Check if the request comes from a search engine, identified by its user agent
  • If so, assume it is authentic and verify the bot offline, so we do not delay the response; if it turns out to be fake, it will be blocked on subsequent requests

The overhead of this algorithm will typically be just a few milliseconds.

Here's the Rails ActiveJob that will verify the authenticity of the bot. It can be backed by a Resque queue.
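A sketch of the job, assuming Rails 5's ApplicationJob and the VerifyBot class described next:

# app/jobs/verify_bot_job.rb
class VerifyBotJob < ApplicationJob
  queue_as :verify_bot

  def perform(user_agent, ip)
    VerifyBot.verify(user_agent, ip)
  end
end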



Verify Bot


Let's see a possible implementation of VerifyBot.
Methods that VerifyBot will have:
  • verify: given a user agent and IP, verify the authenticity of the bot
  • allowed_user_agent: true for the user agents from bots we will allow
  • fake_bot: true for bots already verified as fake
  • allowed_bot: true for bots already verified as authentic

VerifyBot will use Redis to cache bots that have already been verified and marked either as safe or fake. These two lists will be stored as Redis sets.
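A sketch of VerifyBot; the method names follow the list above, while the allow-list, the set names and the key format are assumptions:

# app/services/verify_bot.rb
class VerifyBot
  ALLOWED_BOTS = %w[Googlebot bingbot].freeze  # hypothetical allow-list

  class << self
    # Given a user agent and IP, verify the bot and cache the verdict
    def verify(user_agent, ip)
      return if allowed_bot(user_agent, ip) || fake_bot(user_agent, ip)
      BotValidator.do_validation(user_agent, ip)
      redis.sadd("verified_bots", key(user_agent, ip))
    rescue BotValidator::FakeBotError
      redis.sadd("fake_bots", key(user_agent, ip))
    end

    # True for the user agents of the bots we allow
    def allowed_user_agent(user_agent)
      ALLOWED_BOTS.any? { |bot| user_agent.to_s.include?(bot) }
    end

    # True for bots already verified as fake
    def fake_bot(user_agent, ip)
      redis.sismember("fake_bots", key(user_agent, ip))
    end

    # True for bots already verified as authentic
    def allowed_bot(user_agent, ip)
      redis.sismember("verified_bots", key(user_agent, ip))
    end

    private

    def key(user_agent, ip)
      "#{user_agent}|#{ip}"
    end

    def redis
      @redis ||= Redis.new(url: ENV["REDIS_URL"])
    end
  end
end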





With this in place, only the implementation of BotValidator is missing to complete the puzzle.

Bot Validator

The authenticity of popular search engine bots can be verified with a reverse-forward DNS lookup. For instance, this is what Google recommends to verify Googlebot:
  1. Run a reverse DNS lookup on the accessing IP address
     
  2. Verify that the domain name is in either googlebot.com or google.com
     
  3. Run a forward DNS lookup on the domain name retrieved in step 1. Verify that it is the same as the original accessing IP address


Our BotValidator will have two main methods:
  • allowed_user_agent: true for the user agents of bots we will allow
  • do_validation: true if the user agent can be authenticated; raises an exception in case of a fake bot

Subclasses for each bot we want to validate will implement the methods:
  • validates? : true if the class is responsible for validating the given user agent
  • is_valid? : true when the bot is validated for the given user agent and IP address
Here's the implementation:
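A sketch using Ruby's Resolv standard library; the class and method names follow the description above, the rest is an assumption:

require "resolv"

class BotValidator
  class FakeBotError < StandardError; end

  # Concrete validators register themselves here
  VALIDATORS = []

  def self.allowed_user_agent(user_agent)
    VALIDATORS.any? { |v| v.validates?(user_agent) }
  end

  # True if the user agent can be authenticated; raises for a fake bot
  def self.do_validation(user_agent, ip)
    validator = VALIDATORS.find { |v| v.validates?(user_agent) }
    raise FakeBotError, "#{user_agent} (#{ip})" unless validator && validator.is_valid?(user_agent, ip)
    true
  end
end

class ReverseForwardDnsValidator < BotValidator
  def self.is_valid?(_user_agent, ip)
    host = Resolv.getname(ip)                                       # 1. reverse DNS lookup
    return false unless valid_hosts.any? { |h| host.end_with?(h) }  # 2. check the domain
    Resolv.getaddresses(host).include?(ip)                          # 3. forward lookup must match
  rescue Resolv::ResolvError
    false
  end
end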


Subclass ReverseForwardDnsValidator implements the mentioned validation strategy that many search engines and bots follow.

To validate Googlebot or Bingbot, we will only need to subclass ReverseForwardDnsValidator and implement:
  • validates? : true if the passed user_agent is the one the class validates
  • valid_hosts: array of valid reverse DNS host name terminations
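For instance, a possible subclass for Googlebot (the registration mechanism follows the sketch above; the valid hosts come from Google's recommendation):

class GooglebotValidator < ReverseForwardDnsValidator
  BotValidator::VALIDATORS << self

  def self.validates?(user_agent)
    user_agent.to_s.include?("Googlebot")
  end

  def self.valid_hosts
    %w[googlebot.com google.com]
  end
end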

Other subclasses for different validations can be added: for instance, one to validate the Facebook bot, a generic one for reverse-only DNS validation, etc.

2017/05/28

Building REST services with Spring Boot, JPA and MySql: Part 2

Introduction

In the first part of this tutorial, we saw how to build a skeleton Java app from scratch based on the Spring framework, and implemented persistence to a MySql database.


In this second part, we will implement a REST web service with the Spring framework.


I'll be using Maven 3 (version 3.0.5) and the Java 8 SDK. Google around for how to install these in your environment.

Step 2: Implement a REST endpoint with Spring


In order to use the Spring framework as the basis for our REST endpoint, we need to add the necessary dependencies to our existing pom.xml:
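The key addition is the web starter (its version is managed by the Spring Boot parent):

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-web</artifactId>
</dependency>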



We already have a model persisted to MySql; now we will add a controller with an index method that retrieves all persisted instances of our model.

We will annotate this method so that it is published as a REST endpoint when running our app within a Servlet container.
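A sketch of such a controller, assuming the Game model and GameRepository from Part 1:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class GamesController {

    @Autowired
    private GameRepository gameRepository;

    // Returned objects are serialized to JSON automatically by Jackson
    @RequestMapping(value = "/games", method = RequestMethod.GET)
    public Iterable<Game> index() {
        return gameRepository.findAll();
    }
}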



The Spring annotations added to our code are:
  • @RestController This declares our class as a controller returning domain objects instead of views. Spring will take care of the JSON serialization automatically via the Jackson serializer

  •  @RequestMapping(value="/games", method = RequestMethod.GET) This maps GET requests for the path /games to our controller method.

We can now add a test for our new REST endpoint.
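A sketch of the test (class and method names are assumptions):

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;
import org.springframework.test.web.servlet.MockMvc;

@RunWith(SpringRunner.class)
@SpringBootTest
@AutoConfigureMockMvc
public class GamesControllerTest {

    @Autowired
    private MockMvc mockMvc;

    @Test
    public void indexRespondsOk() throws Exception {
        mockMvc.perform(get("/games"))
               .andExpect(status().isOk());
    }
}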



In our test, instead of running our controller within an external application server, we use the Spring class MockMvc, which directs requests to our controller, making our test faster.

If we now run mvn clean test:


Running our REST endpoint

We are now ready to package our app and run it.

If we run mvn clean package:



We now have a jar and we can just run it. Yes!!! That's right: we can just run it directly!
Spring has generated an uber jar: a jar with all the dependencies needed to run our app, including an embedded servlet container: Tomcat by default, but you can easily swap it for Jetty or any other of your preference.

If we launch the command:

java -jar target/spring-boot-mysql-0.0.1-SNAPSHOT.jar


We can see on the console that Tomcat has started and is listening on port 8080 for requests!

Source Code

Source code on GitHub

2017/05/21

Building REST services with Spring Boot, JPA and MySql: Part 1

Introduction

In this tutorial, we will see how to build a skeleton Java app from scratch based on the Spring framework, with an evolving model persisted to MySql and a REST web service on top.

As requirements change continuously, we will handle updates to our model, which in the end translate into updates to our underlying database schema, with Liquibase: a database migration tool.

For an overview of how you can manage Database migrations in your development lifecycle, have a look at one of my previous articles: Automatic DB migration for Java web apps with Liquibase

I'll be using Maven 3 (version 3.0.5) and the Java 8 SDK. Google around for how to install these in your environment.

Step 1: Persist a model with JPA and Hibernate

Let's start with what Spring gives us in Spring Initializr for a Maven project with the JPA and MySql dependencies.

Here's the generated POM.
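Abridged, it looks roughly like this (the groupId and Boot version are assumptions; the artifactId matches the jar name used in Part 2):

<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.example</groupId>
  <artifactId>spring-boot-mysql</artifactId>
  <version>0.0.1-SNAPSHOT</version>

  <parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.5.3.RELEASE</version>
  </parent>

  <dependencies>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>
    <dependency>
      <groupId>mysql</groupId>
      <artifactId>mysql-connector-java</artifactId>
      <scope>runtime</scope>
    </dependency>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-test</artifactId>
      <scope>test</scope>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId>
      </plugin>
    </plugins>
  </build>
</project>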



In order to have a non-failing Maven build, we need to add the details of the database schema to our project.

The resulting properties section in the POM:
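A sketch (the schema name and credentials are hypothetical):

<properties>
  <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  <java.version>1.8</java.version>
  <!-- Database connection details, reused by the plug-ins below -->
  <jdbc.url>jdbc:mysql://localhost:3306/games</jdbc.url>
  <jdbc.username>games</jdbc.username>
  <jdbc.password>games</jdbc.password>
</properties>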


And we add these properties to application.properties:
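Again assuming the hypothetical schema and credentials:

spring.datasource.url=jdbc:mysql://localhost:3306/games
spring.datasource.username=games
spring.datasource.password=games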


If we launch mvn clean package, we now have a successful build.

For details on how to create and assign user permissions on MySql, Google is your friend :-)

Adding our Model and Repository

Let's add a sample Model class to our app.
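A sketch of the model; the Game name and its fields are assumptions (Part 2 serves it at /games):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
public class Game {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String name;

    protected Game() {
        // required by JPA
    }

    public Game(String name) {
        this.name = name;
    }

    public Long getId() {
        return id;
    }

    public String getName() {
        return name;
    }
}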



And a Repository interface to access the persisted data. Spring Data will automatically generate the implementation for us.
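Something like this; Spring Data generates the implementation at runtime:

import org.springframework.data.repository.CrudRepository;

public interface GameRepository extends CrudRepository<Game, Long> {
}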



We can now add a test to load all instances from our repository and verify it is working correctly.
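A sketch of the test; the expected count matches the hypothetical sample data shown below:

import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@SpringBootTest
public class GameRepositoryTest {

    @Autowired
    private GameRepository gameRepository;

    @Test
    public void loadsAllGames() {
        assertEquals(2, gameRepository.count());
    }
}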



In order to populate our Database for tests, we have the option of using Spring annotations directly in our Java unit test source code.

In this case, we will use the dbunit-maven-plugin instead.

Our updated pom.xml:
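A sketch of the dbunit-maven-plugin section (version, phase and data set operation are assumptions):

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>dbunit-maven-plugin</artifactId>
  <version>1.0-beta-3</version>
  <dependencies>
    <dependency>
      <groupId>mysql</groupId>
      <artifactId>mysql-connector-java</artifactId>
      <version>5.1.42</version>
    </dependency>
  </dependencies>
  <configuration>
    <driver>com.mysql.jdbc.Driver</driver>
    <url>${jdbc.url}</url>
    <username>${jdbc.username}</username>
    <password>${jdbc.password}</password>
    <src>src/test/resources/sample-data.xml</src>
    <type>CLEAN_INSERT</type>
  </configuration>
  <executions>
    <execution>
      <phase>process-test-classes</phase>
      <goals>
        <goal>operation</goal>
      </goals>
    </execution>
  </executions>
</plugin>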



And our src/test/resources/sample-data.xml for the unit tests.
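A sketch of a DbUnit flat XML data set, matching the hypothetical Game model:

<?xml version="1.0" encoding="UTF-8"?>
<dataset>
  <game id="1" name="Chess"/>
  <game id="2" name="Go"/>
</dataset>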




If we now run our test with mvn clean test, we have a build failure: there are no tables in our MySql schema, so DbUnit cannot insert the test data.



At this point, we need to generate a DDL script for our schema.

There are a number of options. You could opt for a Spring solution.

We will apply a more generic solution from a third party which works on Spring and non-Spring frameworks: Hibernate Maven Plugin from juplo.de. This is a completely new implementation of the Hibernate Maven plugin updated to Hibernate 5.

We need to add these lines to our pom.xml:
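A sketch (the version is an assumption; the plug-in picks up the src/test/resources/hibernate.properties shown next):

<plugin>
  <groupId>de.juplo</groupId>
  <artifactId>hibernate-maven-plugin</artifactId>
  <version>2.0.0</version>
  <executions>
    <execution>
      <goals>
        <goal>drop</goal>
        <goal>create</goal>
      </goals>
    </execution>
  </executions>
</plugin>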



And the file src/test/resources/hibernate.properties needed by the hibernate-maven-plugin:
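A sketch, reusing the JDBC properties from the POM (which is why the file must be filtered, as noted below):

hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
hibernate.connection.driver_class=com.mysql.jdbc.Driver
hibernate.connection.url=${jdbc.url}
hibernate.connection.username=${jdbc.username}
hibernate.connection.password=${jdbc.password}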


Notice in the updated pom.xml:

  • The hibernate-maven-plugin must appear before the dbunit-maven-plugin, so that the database tables are created before the DbUnit sample data is inserted.

  • Additionally, the file src/test/resources/hibernate.properties needs to be filtered by the standard maven resources plugin.

If we run mvn clean test, our test is finally passing after creating the database tables and populating them with unit test data:


We leave publishing a REST web service for our model and handling automatic database migrations with Liquibase for future parts.

Source code: GitHub

Check Part 2 of this tutorial here.