Hello Testkube — Power to testers on k8s!
With my testing-hat on I’ve often contemplated why testing is so tightly tied to development tooling and related workflows.
On one hand it’s obvious; tests need to be run as part of builds to make sure the built artefacts are working as they should — and a tight integration with build tools makes that easier.
On the other hand it’s cumbersome; when I need to run tests to validate some bug-fix or functionality, I often end up running an entire build instead of just the tests required. Or if I want to manage the orchestration and execution of my tests, I have to do this in the CI/CD tool-of-the-month, and my possibilities are constrained by the tool itself and by what access I’m given as a tester. Test results are another area that is generally lacking; integrating results for all the different types of tests being run day and night isn’t a top priority for most CI/CD pipelines, so I often have to cobble together my own dashboard based on the testing and DevOps tools being used to build (i.e. test) our applications.
So here comes Kubernetes! Could that make things easier for me when I’m testing? Both yay and nay, it seems. Thanks to k8s I get access to logs in a generic way (to debug why my tests are failing), and provisioning test environments is somewhat easier (especially with a GitOps approach), but on the other hand I often have to grapple with cluster networking, remote access, etc. And the aforementioned challenges related to CI/CD, test results, orchestration/execution, etc. are still there. Unfortunately many testing tools aren’t really “made for k8s”, which can result in somewhat clunky approaches when using them to test applications running on Kubernetes.
So without much ado, say hello to TestKube, striving to make testing a little easier for testers on Kubernetes!
TestKube takes a somewhat opinionated approach to testing for Kubernetes:
- Tests should be included in the state of your cluster; a cluster should be able to validate that its applications work as intended
- Test orchestration and execution should not be tied to a specific CI/CD tool; it should be done in the cluster itself
- Test execution can be initiated by both internal and external triggers, for example k8s resource updates (internal) or CI/CD pipelines (external)
- Native k8s constructs should be used to define tests and related artefacts as much as possible; CRDs, ConfigMaps, etc.
- Test results should be aggregated in a common format and exposed to external tooling via APIs.
- The underlying architecture should be open and modular
(those last two are admittedly pretty generic)
By “tests”, TestKube means any kind of test; functional, performance, security, conformance, unit, etc. — whatever is needed to validate an application running under k8s.
This is ambitious, we know, but our vision is clear and with our first public release we’ve taken some initial steps:
- Tests are defined as “scripts” in a cluster using a CRD
- Test execution is performed by dedicated “executors”
- Test execution can be triggered via a kubectl plugin or by talking directly to the underlying API Server
- Test results are stored in a somewhat generic format — and high-level metrics are available to Prometheus/Grafana/etc.
- Test scripts and executors follow a generic format that is possible to use/implement for any type of test (we’ll see about that)
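To make that first point a little more concrete, a Script resource might look something like the sketch below. The API group, kind, and field names here are illustrative assumptions for this post, not a definitive spec — check the project docs for the actual format:

```yaml
# Hypothetical Script custom resource (field names are assumptions)
apiVersion: tests.testkube.io/v1
kind: Script
metadata:
  name: api-smoke-test
  namespace: testkube
spec:
  # tells TestKube which executor should run this script
  type: postman/collection
  # the test content itself, here an inlined Postman Collection
  content: |
    {
      "info": { "name": "api-smoke-test" },
      "item": []
    }
```

With the script stored in the cluster, an execution could then be triggered from the kubectl plugin with something along the lines of `kubectl testkube scripts start api-smoke-test` (again, the exact command shape is assumed here).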
To chisel this all out we’ve started by supporting the execution of Postman Collections using a Postman Executor (built on Newman) for testing APIs, but the plan is obviously to support more types of scripts; Cypress, Selenium, K6, JMeter, SoapUI, Cucumber, etc. could all be supported with corresponding executors.
As with our other recent releases (Monokle and Kusk), and any other newly launched open-source project for that matter, we’re aching for feedback and engagement; please share your thoughts on testing with Kubernetes and on how TestKube could help make that a little easier.
TestKube is brought to you by Kubeshop (an open-source accelerator/incubator).