Typed/annotated version (with what I remember from discussion, and apologies for my handwriting!):
(From discussion at QCon London 2015, Wednesday 4th March, 2:30PM time slot)
The problem:
Avoiding dependency hell
When numerous services are continuously deployed, a change of behaviour in one service can break another.
How to manage these changes across system boundaries?
(“Version hell” is a reference to http://en.wikipedia.org/wiki/Dependency_hell)
Some approaches suggested:
- Unify the code base. Even with numerous (micro)services, it may be practical to keep them all in the same version control project (the same Git repository, the same trunk in SVN, etc.); this at least defines a set of versions that are expected to be compatible. When deploying, the deployment system can skip any service whose deployed version already matches the one being released (see the first sketch after this list).
- Versioning the APIs between services was suggested (numerous issues around this are described at http://stackoverflow.com/questions/389169/best-practices-for-api-versioning). This was acknowledged to carry significant overhead and not always to be the right approach; see the second sketch after this list.
- Define consumer contracts: each consumer declares which parts of a provider's API it actually relies on, and the provider verifies those expectations in its tests (see the third sketch after this list).
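A minimal sketch of the skip-if-unchanged idea, assuming every service reports a version identifier derived from the shared repository (the registry dict and the deploy() stub below are hypothetical stand-ins for real deployment tooling):

```python
# Sketch: the monorepo yields one version identifier (e.g. a Git commit hash)
# for every service; the deployer only pushes services whose deployed version
# differs from the release being rolled out.

deployed_versions = {"billing": "a1b2c3d", "search": "9f8e7d6"}  # current state

def deploy(service: str, version: str) -> None:
    # Stand-in for pushing the build artifact to the environment.
    print(f"deploying {service} at {version}")
    deployed_versions[service] = version

def deploy_release(services, repo_version: str) -> None:
    for service in services:
        if deployed_versions.get(service) == repo_version:
            print(f"skipping {service}: already at {repo_version}")
        else:
            deploy(service, repo_version)

deploy_release(["billing", "search"], repo_version="a1b2c3d")
# -> skips billing (unchanged), redeploys search
```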
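One common form of API versioning puts the version in the URL. A minimal sketch using only the Python standard library, with invented routes and field names: /v1/ preserves the old response shape so existing consumers keep working, while /v2/ introduces a new one.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/v1/user/42":
            body = {"name": "Ada Lovelace"}                         # old shape
        elif self.path == "/v2/user/42":
            body = {"first_name": "Ada", "last_name": "Lovelace"}   # new shape
        else:
            self.send_error(404)
            return
        payload = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), Handler).serve_forever()
```

The cost hinted at above is that every breaking change means maintaining (and eventually retiring) parallel versions of each endpoint.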
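A sketch of a consumer contract check, in the spirit of consumer-driven contract tools such as Pact; the contract format and the provider stub here are invented for illustration. The key property: the provider may add fields freely, because only the fields consumers declare they rely on are pinned down.

```python
# Contract published by a hypothetical "billing" consumer of the user service:
billing_contract = {
    "endpoint": "/v1/user/42",
    "expects": {"name": str, "email": str},
}

def provider_response(endpoint: str) -> dict:
    # Stand-in for calling the provider; a real test would hit a locally
    # started instance of the service.
    return {"name": "Ada Lovelace", "email": "ada@example.com", "plan": "pro"}

def verify(contract: dict) -> None:
    response = provider_response(contract["endpoint"])
    for field, expected_type in contract["expects"].items():
        assert field in response, f"missing field: {field}"
        assert isinstance(response[field], expected_type), f"bad type: {field}"

verify(billing_contract)  # passes: the extra "plan" field is fine, since
                          # only what the consumer relies on is checked
```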
We also talked about:
- Integration testing is often a problem with microservices: at what scope should you test? Tests with broad scope can run slowly (example: provision a Hadoop cluster, populate it with realistic data, and verify the output of various queries); one case cited had a 5-hour turnaround time for the test suite. More parallelism is the obvious answer (sketched below).
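On the parallelism point, a minimal sketch: if a long suite is dominated by independent, slow tests, sharding them across workers cuts wall-clock time roughly by the worker count (shared fixtures and environment provisioning limit this in practice). run_shard is a hypothetical stand-in for running one shard of tests.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_shard(shard_id: int) -> str:
    time.sleep(1)  # stand-in for provisioning and running one shard of tests
    return f"shard {shard_id}: OK"

# 8 one-second shards finish in ~1s run in parallel, vs ~8s run serially.
with ThreadPoolExecutor(max_workers=8) as pool:
    for result in pool.map(run_shard, range(8)):
        print(result)
```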
Whiteboard pic. attached: