Microservices in our architecture should not have to worry about HTTPS/SSL, authentication, routing, tracing, and so on. This should all be handled by middleware. We use Istio for this.
Istio runs a sidecar proxy alongside each microservice. This sidecar manages logging, tracing, encryption, and more.
See the Istio microservice example for an example.
Each microservice has a test.yaml file with the following content:

```yaml
name: pinger-app
version: v0.0.1
path: manifests
databases:
  - type: mongodb
    user: admin
    service:
      name: mongo
dependencies:
  - name: ping-app
    repository: https://gitlab.com/tcorp-k8s/ping-app.git
    revision: production
environment:
  - name: DB_URI
    value: mongodb://[email protected]
testResources: # optional
  memory: 10M
  cpu: 10m
```
It will then model the dependencies as a DAG and deploy them from the leaves to the root. If there are dependency cycles, the dependency name is used as a consistent selector. If two dependencies point to the same app but with different revisions, the latest revision is used. Note that any dependencies listed directly in the test.yaml take ultimate priority in the configuration (of environment variables etc.).
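The resolution described above can be sketched as a depth-first walk over the dependency graph, deduplicating by name so that cycles terminate. The data shape and function names below are illustrative, not the actual Kubertest implementation:

```python
def resolve(deps, root):
    """Return a deploy order (leaves first) for `root`'s dependency graph.

    `deps` maps a dependency name to {"requires": [other names]}.
    Names act as a consistent selector: a name is visited at most once,
    and an edge that would close a cycle is simply skipped, so cycles
    cannot cause infinite recursion.
    """
    order, visiting, done = [], set(), set()

    def visit(name):
        if name in done or name in visiting:  # dedupe by name / break cycles
            return
        visiting.add(name)
        for req in deps.get(name, {}).get("requires", []):
            visit(req)
        visiting.discard(name)
        done.add(name)
        order.append(name)  # appended after its requirements: leaves first

    visit(root)
    return order
```

Revision conflicts (two entries naming the same app) would be merged before this step, keeping the latest revision as described above.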
The beauty of this setup is that the application never actually knows it is being tested! The annoying part is that the tester is run as a separate Kubernetes job (whose result indicates how the test went) and is thus not able to test the database directly.
Ideally, all state (Istio and pod logs) is archived so it can be analysed through a web app when tests fail.
Another issue is applications calling third-party services (e.g. AAD or bol.com); these applications have to know that they are being tested so those endpoints can be emulated.
Finally, since GitLab does not make it easy to push images under a name different from the current repository's, it may be necessary to build the test image on the fly, or we could simply define our tests in Python and inject them into the python Docker image through a volume mount and a command option.
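The volume-mount approach could look roughly like the Job below, which injects the test script from a ConfigMap into a stock python image instead of building a dedicated test image. All names (`kubertest-runner`, `kubertest-tests`) are illustrative:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: kubertest-runner        # illustrative name
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: test
          image: python:3.12-slim
          # Run the injected test instead of baking a dedicated image
          command: ["python", "/tests/test-ping-response.py"]
          volumeMounts:
            - name: tests
              mountPath: /tests
      volumes:
        - name: tests
          configMap:
            name: kubertest-tests   # assumed to be created from the tests/ folder
```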
```yaml
tests:
  - name: setup
    description: Loads some example data into the backend
    type: python
    folder: tests
    file: setup.py
    childs:
      - name: test-ping-response
        type: python
        folder: tests
        file: test-ping-response.py
```
How the system should work: ideally the system executes the parent first and then executes each child as a job. If there are multiple childs, the cluster state is reverted to the original in between.
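That execution order can be sketched as the loop below. The `run`, `snapshot`, and `restore` hooks are hypothetical stand-ins for launching a test job and capturing/reverting cluster state; only the ordering logic is the point:

```python
def run_tests(tests, run, snapshot, restore):
    """Execute a test tree depth-first: parent first, then each child.

    Between sibling childs, cluster state is restored to what it was
    right after the parent ran. `run(test)` returns True on success;
    `snapshot()`/`restore(state)` are hypothetical state hooks.
    """
    results = {}
    for test in tests:
        results[test["name"]] = run(test)          # parent runs first
        childs = test.get("childs", [])
        state = snapshot() if len(childs) > 1 else None
        for i, child in enumerate(childs):
            if state is not None and i > 0:
                restore(state)                     # revert for the next sibling
            results.update(run_tests([child], run, snapshot, restore))
    return results
```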
You may also want to override the environment variables of dependencies; this is also possible.
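For example, an override could sit directly on the dependency entry in test.yaml; the nested `environment` field shown here is an assumed shape, consistent with the note above that directly listed dependencies take priority for environment variables:

```yaml
dependencies:
  - name: ping-app
    repository: https://gitlab.com/tcorp-k8s/ping-app.git
    revision: production
    environment:            # assumed override field; takes priority
      - name: LOG_LEVEL
        value: debug
```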
And there you go, my first true open source contribution to the world: Kubertest.
Note that ideally the test.yaml is a CRD consumed by the kubertest-server.
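A minimal CRD for that could look like the sketch below; the group name `kubertest.io` and version `v1alpha1` are placeholders, and the schema is left open for brevity:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: tests.kubertest.io     # illustrative group name
spec:
  group: kubertest.io
  scope: Namespaced
  names:
    kind: Test
    plural: tests
    singular: test
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          # accept the full test.yaml shape without spelling it out here
          x-kubernetes-preserve-unknown-fields: true
```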
To save on startup time, you set up your test cluster once and the kubertest-server purges your state after each test. This also allows you to expose a UI to view test logs etc.