Devoxx UK 2018 Takeaways - Part 1
Date posted
9 July 2018
Reading time
25 Minutes
Last month, I had the opportunity to attend the Devoxx UK conference in London from 9th to 11th May. Attracting around 1,200 developers each year, Devoxx is one of the most popular Java-focused conferences in the UK, but it also covers multiple tracks including serverless/cloud, containers and infrastructure technologies, architecture, modern web, big data and AI, security, and future technologies.
Launched in 2001, Devoxx is also spread across Belgium, France, Poland, Morocco, Ukraine, and organised by local developer groups, truly making it a series of tech events 'from developers, for developers'.
I got great insight into the latest technologies used in the industry, learned about and experienced the topics which interested me the most, and networked with various developers and leaders in the Java community.
The event itself is split across three days.
The first day was the 'Deep Dive Day', where a number of experts ran practical hands-on sessions, giving me the chance to delve deeper into a specific technology than is possible during regular conference sessions. I will dive into more detail on the two sessions I attended.
The second and third days were the main conference days, providing a variety of talks across the various tracks I mentioned before. I will give a summary, as well as the important points I learned, from each talk I attended.
Baking a Microservice PI(e)
In the morning, I attended Antonio Goncalves and Roberto Cortez's session on building, managing and deploying microservices using Java and different frameworks. This involved going through the journey of a microservices architecture, discovering the various problems that can arise, and eventually finding the solutions. Microservices are small, separate, autonomous services which communicate together synchronously or asynchronously. A microservice architecture follows the single responsibility principle. This type of architecture is useful for two main reasons:
- Microservices are business focused - As they are small and separate from each other, teams can be smaller and more agile, and business features can be delivered as services, as well as more quickly.
- Microservices can be scaled independently - You can create more instances of a particular microservice to help address unpredictable loads (e.g. for e-commerce websites during the Christmas period). Each service can be deployed, managed, and then redeployed independently without compromising the integrity of an application.
Overview of microservice architecture
Antonio was building an entire microservice architecture based on a book store, where you can create, view, edit and delete books. We used MicroProfile to develop our microservices, but there are many other tools you can use to develop microservices, including:
- Dropwizard
- Lagom
- Vert.x
- Spring Boot
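To make this concrete, here is a minimal, framework-free sketch of what the Book API's read endpoint might look like, using only the JDK's built-in HTTP server rather than MicroProfile (the class name, port and sample data are illustrative, not taken from the workshop code):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class BookService {
    // In-memory store standing in for a real database.
    static final Map<Integer, String> BOOKS =
        new ConcurrentHashMap<>(Map.of(1, "Understanding Bean Validation"));

    // Naive JSON-ish rendering of all book titles; a real service would
    // use a proper JSON library (or MicroProfile/JAX-RS would do it for us).
    static String booksJson() {
        return BOOKS.values().toString();
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8081), 0);
        // GET /books returns the list of titles.
        server.createContext("/books", exchange -> {
            byte[] body = booksJson().getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        System.out.println("Book API listening on http://localhost:8081/books");
    }
}
```

The point is not the plumbing but the shape: each microservice is a small, independently deployable process exposing a narrow HTTP API.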
Monitoring
Monitoring is important to view any issues your environments are having, so your WebOps team do not have to ssh into every node on an environment having problems. This was demonstrated through a humorous scenario between Antonio (the developer) and Roberto (the web operations engineer). Antonio tested the application locally, which was then deployed to the Raspberry Pi. Roberto then tested the application on the Raspberry Pi, where he noticed an issue and was unable to create a book. He used ssh to connect to the Book API node, and noticed in the logs there was a Java connection exception when trying to do a GET request on a hardcoded localhost URL, which obviously will not work on the Raspberry Pi environment.

'Never trust a developer who says it works on their machine.'

The ELK stack can be used to centralise and transform the logs being sent from all the different nodes into a single location. Kibana is used to easily visualise the logs, using Elasticsearch to search and store the logs collected by Logstash. This is one of the most common monitoring systems, and one I have experience of using in the past, though there are other ways to do monitoring, including:
- Syslog
- Fluentd
- Splunk
- Flume
- Logmatic.io
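Centralising logs works best when each service emits them in a structured form that Logstash can parse, rather than free text. A common approach is one JSON object per line; here is a hedged sketch (the field names are a convention of my own, not a standard):

```java
import java.time.Instant;

public class JsonLog {
    // Emits one log event per line as JSON, which a Logstash JSON codec/filter
    // can ingest directly. The field names (service, level, message) are our
    // own convention, not something Logstash mandates.
    static String event(String service, String level, String message) {
        return String.format(
            "{\"@timestamp\":\"%s\",\"service\":\"%s\",\"level\":\"%s\",\"message\":\"%s\"}",
            Instant.now(), service, level, message.replace("\"", "\\\""));
    }

    public static void main(String[] args) {
        // This is the kind of line Roberto would have found in Kibana
        // instead of ssh-ing into the node.
        System.out.println(event("book-api", "ERROR",
            "Connection refused: GET http://localhost:8084/api/numbers"));
    }
}
```

With every node shipping lines like this to a single Logstash endpoint, the localhost bug above shows up in one Kibana search rather than an ssh session per node.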
Registering
We have two microservices deployed to a number of specific Raspberry Pis, so they are able to communicate with each other. However, if we deployed our microservices to the cloud, we would not know which server they will be deployed to. So, how do the microservices discover each other? This is where I learned about the importance of registering. We have a Raspberry Pi on the system running Consul, which is used to connect and configure services. So, when we deploy a microservice, or scale any of our microservices, it will first communicate with Consul and register itself by name, so the other services can discover it by name. (You can think of Consul as DNS resolution.) We also set up Consul to run a health check, pinging our services every x minutes to see if they are still up and running (returning HTTP status code 200). Other tools which can be used for registering:
- JNDI
- Apache Zookeeper
- Netflix Eureka
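Consul's local agent exposes an HTTP API for exactly this registration step: on startup, a service PUTs a small JSON document to /v1/agent/service/register, including the health check described above. A minimal sketch of building that payload (the service name, port and check interval are illustrative values, not taken from the workshop code):

```java
public class ConsulRegistration {
    // Builds the JSON body for Consul's local-agent endpoint
    // PUT http://localhost:8500/v1/agent/service/register.
    // Consul calls the HTTP health check at the given interval and only
    // hands out healthy instances during service discovery by name.
    static String payload(String name, int port) {
        return String.format(
            "{\"Name\":\"%s\",\"Port\":%d,"
            + "\"Check\":{\"HTTP\":\"http://localhost:%d/health\",\"Interval\":\"30s\"}}",
            name, port, port);
    }

    public static void main(String[] args) {
        // In a real service this body would be sent at startup with
        // java.net.http.HttpClient; here we only print it, since sending
        // it requires a running Consul agent.
        System.out.println(payload("book-api", 8081));
    }
}
```

Once registered, other services ask Consul for "book-api" by name instead of a hardcoded host, which is what breaks the localhost habit from the monitoring story.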
Swagger
I learned about the Open API Initiative (OAI), whose role is to standardise how REST APIs are described/documented. Throughout the codebase, we used Swagger to document our microservices: what endpoints we can call, what parameters we can pass, and what status codes/data are returned to us. In summary, Swagger is the implementation, Open API is the specification. We add Swagger annotations to our Java API code, which generates a Swagger contract in JSON format. Then, by using Swagger Codegen, we can generate code from our Swagger contract, including client stubs. You can find out more information about the Swagger ecosystem here: https://swagger.io/community/

Securing Microservices
We have a Raspberry Pi on the system running an API Gateway. The API Gateway sits between the Angular frontend layer and the Java API layer, and is the single entry point for all client requests. It is similar to a proxy: you expose the service (our Book API) endpoints in the proxy, and your client (our Angular application) calls the service through the proxy. It is safe to make the Number API a public API, and the HTTP GET endpoint in the Book API public. But the API Gateway allows us to implement security (by verifying the client is authorised to perform the request) for the Book API HTTP POST / PUT / DELETE endpoints. Examples of API Gateways:
- Amazon API Gateway
- Apigee Gateway
- Tribestream API Gateway
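The authorisation rule described above boils down to a small decision the gateway makes before forwarding each request: reads on the Book API stay public, writes need a verified client. A minimal sketch of that rule (the bearer-token check is a placeholder for whatever a real gateway would do, not the workshop's actual implementation):

```java
import java.util.Set;

public class GatewayAuth {
    static final Set<String> MUTATING = Set.of("POST", "PUT", "DELETE");

    // Returns true if the gateway should forward the request to the Book API,
    // false if it should reject it with 401. Reads are public; writes need
    // a token (real token validation is elided here).
    static boolean allow(String method, String authorizationHeader) {
        if (!MUTATING.contains(method)) return true;       // GET /books stays public
        return authorizationHeader != null
            && authorizationHeader.startsWith("Bearer ");  // placeholder check
    }

    public static void main(String[] args) {
        System.out.println(allow("GET", null));             // true: public read
        System.out.println(allow("POST", null));            // false: write without token
        System.out.println(allow("POST", "Bearer abc123")); // true: authorised write
    }
}
```

Because every client request passes through this single entry point, the individual microservices behind it do not each need to reimplement the same check.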
Things I learned from this session:
- How to build, document and deploy several microservices spread across different nodes.
- How to make your microservices talk to each other.
- How to scale your services, deal with network failures and high traffic.
- How to monitor your distributed system.
- How to authenticate and manage authorization.
- And, as Antonio put it, 'Microservices are hard. Microservices are bloody hard.'
Testing Java Microservices
To keep with the theme of microservices, in the afternoon I thought it would be useful to attend Alex Soto and Andy Gumbrecht's talk on testing strategies for a microservice architecture. To start, I learned about the anatomy of a microservice (which I obviously had an understanding of at this point), and the evolution of testing. With any software we develop, testing is a crucial stage, and we generally follow the same testing plan:
- Manual testing - Play the role of an end user and manually execute test cases without using any automation tools.
- Automated testing - Through unit, integration and UI testing, to validate individual components (e.g. controller, model), the interaction between different components (e.g. API and database), and the journeys through a system work as expected.
- Now, especially in Agile development, we have adopted a 'Test First' approach where we write our automated tests before our functional code, through Test-Driven Development (TDD) and Behaviour-Driven Development (BDD).
Service Virtualization
This was a very hands-on lab on testing microservices. We used two microservices, one called villains and one called crimes, both developed using Vert.x. The idea was that the Villains service was a consumer of (invokes) the Crimes service, therefore we needed to write a test for the Villains service that verifies this interaction with the Crimes service. With microservices, this can be complex, since a producer (in our case, the Crimes service) can have many dependencies (such as a database), which we need to ensure have started before our testing. To get around this, service virtualization can emulate these dependencies. It allows you to simulate an API, i.e. capture, modify, and play back responses from an API. This makes testing much faster, and helps simulate hard-to-reproduce situations when dealing with a real API, e.g. an API being down, or an API sending you bad responses (without taking down the real API). It also allows you to write tests before the actual service has been built, which follows TDD. For this lab, we used Hoverfly to isolate the Crimes service using service virtualization, specifying the service endpoint to react to, and what response to return.

Contract Testing
Contract testing is a way to ensure that services, such as a client (our consumer microservice, Villains) and an API (our provider microservice, Crimes), can communicate with each other. Without contract testing, the only way to know that services can communicate is by using expensive and brittle integration tests, which is even more difficult in a microservice architecture where you have multiple consumers and providers communicating with each other. So how does contract testing work? A contract between a consumer and provider is called a pact. Each pact is a collection of interactions, and each interaction describes:
- An expected request (what the consumer is expected to send to the provider)
- A minimal expected response (the parts of the response the consumer wants the provider to return)
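Put together, a pact is just a JSON document that the consumer's tests generate and the provider's build then verifies. A hedged sketch of what one interaction between Villains and Crimes might look like (the path and body are invented for illustration, not taken from the lab):

```json
{
  "consumer": { "name": "villains" },
  "provider": { "name": "crimes" },
  "interactions": [
    {
      "description": "a request for the crimes of a given villain",
      "request": { "method": "GET", "path": "/crimes/gru" },
      "response": {
        "status": 200,
        "headers": { "Content-Type": "application/json" },
        "body": [ { "name": "Stealing the Moon" } ]
      }
    }
  ]
}
```

Note that the response lists only the fields the consumer actually uses - the minimal expected response described above - so the provider stays free to change anything the consumer does not depend on.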
Testing in Production
With Continuous Delivery, traditionally we develop our feature, verify it with QA, deploy it to a staging environment for further testing, and then finally deploy it to production. Unfortunately, we do not live in a perfect world, and production then explodes. What can we do to avoid this in future? We can use a technique called Blue-Green Deployment. This is where you run two identical production environments, called Blue and Green. At any time, only one of the environments is live (in this example, Blue), serving all the production traffic, while the other environment (Green) is idle. So, next time we do a software release, we deploy the new version to the environment that is not live (Green). Once the application is fully tested in Green, and we have ensured there are no problems, we switch all incoming production traffic to point to Green instead of Blue. This also eliminates downtime and reduces the risk of production problems (e.g., if there is an unexpected problem with your new version on Green, you can immediately roll back to the last version by switching the production traffic back to Blue).

Things I learned from these labs:
- With traditional software testing, there is never a guarantee that your application will actually function correctly in your Production environment.
- For a Microservices architecture, unit, integration and UI testing is not enough. Testing strategies such as Service Virtualization, Contract Testing and Blue/Green Deployments are essential for ensuring your microservices architecture works as expected.
- 'Change is the essential process of all of existence' - Spock
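The blue-green switch described above is, at its core, a router atomically flipping which environment receives production traffic. A minimal sketch of that idea (the environment URLs are invented; a real setup would do this at the load balancer or DNS level):

```java
import java.util.concurrent.atomic.AtomicReference;

public class BlueGreenRouter {
    // Two identical production environments; only `live` receives traffic.
    static final String BLUE = "http://blue.internal:8080";
    static final String GREEN = "http://green.internal:8080";
    static final AtomicReference<String> live = new AtomicReference<>(BLUE);

    // Where the router sends incoming production traffic right now.
    static String target() {
        return live.get();
    }

    // Atomically flip traffic to the other environment and return the new
    // live one. Rolling back after a bad release is just calling this again.
    static String switchTraffic() {
        return live.updateAndGet(cur -> cur.equals(BLUE) ? GREEN : BLUE);
    }

    public static void main(String[] args) {
        System.out.println(target());  // Blue is live
        switchTraffic();               // new version tested on Green, flip
        System.out.println(target());  // Green is live
        switchTraffic();               // problem found, roll back
        System.out.println(target());  // Blue is live again
    }
}
```

Because the flip is a single atomic operation rather than a redeploy, both the cutover and the rollback happen with no downtime, which is the whole appeal of the technique.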