Common Errors Found in Kubernetes Manifests

Apr 12, 2022 · 7 min read
Christian Lete, Product Leader, Monokle

Learn about the common errors in Kubernetes manifests and how Monokle can help you identify and fix errors instantly.

You’ve spent weeks researching Kubernetes and its concepts, from clusters to control planes to pods to ConfigMaps. Against all odds, you have finally launched a testing cluster on AWS. As a developer, this is all uncharted territory for you. Because Kubernetes is part of the “new” full stack your organization has embraced, you need to learn a new way of deploying code.

Eventually, you write the Kubernetes manifest that defines your deployment and type out:

`kubectl apply -f /path/to/your-manifest.yaml`

hit ENTER, and...

*An error.*

Do you know what the error means? 

Even if you’ve seen it before or read about it somewhere deep in the Kubernetes docs, how do you go about fixing it? And how do you adapt to the way your organization deploys on Kubernetes, with pre-deployment tools that point out errors long before you spend time running `kubectl apply` and `kubectl logs`?

We've been building Monokle, a suite of tools created to manage all pre-deployment tasks and policies before errors make it to your cluster. This unique toolkit consists of [Monokle Desktop](https://kubeshop.github.io/monokle/getting-started), [CLI](https://monokle.io/blog/monokle-cli-flexible-kubernetes-yaml-validation) & [Monokle Policy Management IDE](https://app.monokle.com/). We hope to help YAML experts and newbies alike edit, debug, and manage manifests more easily than ever before.

## What is a Kubernetes YAML Manifest?

**A Kubernetes manifest is a YAML file that describes each component or resource of your deployment and the state you want your cluster to be in once applied.**

It contains information about the resource's metadata, specification, and status. The specifications include details such as the container image to use, the number of replicas to create, the service ports to expose, and so on. Once the manifest is applied, Kubernetes works to ensure that the actual state matches the desired state specified in the manifest. This helps to automate the deployment and management of Kubernetes resources.

Here is an example Deployment that creates a ReplicaSet running three Nginx pods:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
```

Once you’ve created a deployment, you can always edit your manifest file and re-apply it using `kubectl apply…` to declare a new state, scale the number of pods, or clean up your cluster.

## Where Kubernetes manifests meet Helm and Kustomize

If you’ve spent time in the Kubernetes ecosystem, you’ve probably heard about tools such as Helm and Kustomize. Both extend manifests to improve the developer experience around Kubernetes. 

Let’s be clear about what they are:

- **[Helm](https://helm.sh/): A package manager for Kubernetes that uses Charts, which are collections of Go-templated files that ultimately generate the YAML manifests for a deployment.**

For example, you can use a simple command like `helm install prometheus prometheus-community/prometheus` to deploy a functional monitoring agent (and all its requisite resources) on your existing cluster without writing a single line of YAML.

- **[Kustomize](https://kustomize.io/): A configuration management tool for customizing Kubernetes objects. With Kustomize, you can take an existing manifest and apply overrides without touching the original YAML file.**

While these tools can help avoid some errors, either by delivering community-validated manifests or by narrowing the changes you make to existing manifests, they unfortunately don’t help you validate your manifests, suggest fixes, or display relationships across your resources.

## Why are Kubernetes manifest errors difficult to find?

Some manifest-related errors show up right away, like in the scenario that kicked this piece off.

An `error: error validating...` message is a good signal that you need to open your manifest file and look for a syntax error or a missing resource. Other errors only cause havoc *after* `kubectl apply` exits in a way that implies everything is OK, and they show up later. That’s because `kubectl apply` exits once your cluster has accepted your deployment, not when the deployment is running error-free with full functionality. You might never know your deployment has failed until you run `kubectl get pods` or try navigating to your application, at which point you can get sucked into a cycle of searching for the different STATUS codes shown by `kubectl get pods` and reading page after page of Kubernetes documentation for details. Monokle points out syntax problems and many other kinds of errors and broken dependencies before an actual deployment happens, saving developers time.

## Common Kubernetes manifest errors

To help you solve some errors you might already have on your plate, let’s jump into some of the most common, why they happen, and how to solve them.

### 1. Indentation

As a configuration language, YAML uses maps and arrays to understand how the resources in your manifest are related to one another. 

Both lists and maps are defined in YAML through indentation, so your resources and configurations must be indented correctly to be associated with the right parent.

YAML is fairly relaxed about indentation and doesn’t force a specific number of spaces per level, so the important rules are to use spaces (never tabs) and to keep your indentation consistent throughout your manifests.
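
For example, in the minimal sketch below (a hypothetical fragment, not from a real deployment), shifting `labels` one level to the left turns it from a child of `metadata` into an unrelated top-level key, changing the meaning of the manifest:

```
# Correct: two-space indentation keeps labels nested under metadata
metadata:
  name: nginx
  labels:
    app: nginx
---
# Incorrect: labels has drifted to the top level and is no longer part of metadata
metadata:
  name: nginx
labels:
  app: nginx
```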

### 2. Maps vs. arrays

In YAML, maps are key-value dictionaries, while arrays are essentially lists, and they have unique syntaxes.

To borrow from our example manifest above, this is a map:

```
metadata:
 name: nginx
 labels:
   app: nginx
```

And here’s a list:
```
containers:
- image: nginx
  name: nginx
  ports:
  - containerPort: 80
```

The difference? Lists are set off by hyphens (-). 

Maps are used when there’s a single value for a given key.

Arrays define a list of similar objects, like several containers that should be deployed together.

Often the reasons for using one versus the other in a manifest aren’t clear, and only make sense once you’ve dug into the documentation.

### 3. `invalid type for…` or `got "string", expected "integer"`, aka expected vs. received values

Let's say you need to define the port that a container should expose, and instead of supplying an integer (`port: 80`) you supply a string (`port: "80"`). Kubernetes won’t know how to handle the wrong type of value, and the error output you receive in return might not be particularly helpful in finding the line responsible.
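
As a minimal illustration (the snippet below is hypothetical), quoting the value makes YAML parse it as a string and produces exactly the kind of `got "string", expected "integer"` validation error named above:

```
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: "80"   # quoted, so it is parsed as a string and fails validation
    # - containerPort: 80   # an unquoted integer is what the schema expects
```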

### 4. Typos

Hard-to-catch issues (e.g., starting a manifest with a typo like `apiVerion` instead of `apiVersion`) can create validation errors that often don't help diagnose the issue. If you’ve validated your indentation and know you have strings and integers where they should be, check again for spelling or camelCase errors; a plain YAML syntax check won’t find those for you.

### 5. Invalid references between resources 

Manifests will likely reference other resources, such as ConfigMaps and Secrets defined elsewhere in the same file or in a separate file entirely.

These references must be right for a deployment to work. Typos or changed resource names and labels will often trigger this kind of error if references elsewhere in the manifest(s) haven’t been updated.
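
For instance (a hypothetical sketch with made-up names), the `configMapRef` below only resolves if a ConfigMap named `app-config` actually exists; renaming the ConfigMap without updating the reference leaves the Pod unable to start:

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config            # the name other resources refer to
data:
  LOG_LEVEL: debug
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx
    envFrom:
    - configMapRef:
        name: app-config      # must match the ConfigMap's metadata.name exactly
```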

### 6. Invalid output from Helm charts and Kustomize

One downside to these useful tools is that they can abstract away too much, especially for developers who are still getting comfortable with manifests.

For example, when you run Kustomize, it returns a big YAML file or multiple smaller files. 

Your only options for double-checking the output are 

  1. reading through it with a fine-toothed comb, 
  2. running each portion through external validation tools, or 
  3. jumping into the deploy-troubleshoot-deploy cycle right away.

The same goes for Helm. It’s a helpful resource for getting started, but because you’re not working on the YAML “by hand,” you don’t know what’s happening inside the files. 

So, just because a manifest comes from Helm doesn’t mean it’s error-free or doesn’t need some customization on your part.

## How to test Kubernetes manifest errors

Kubernetes comes with a few methods of simplifying how you identify errors in manifests.

- `kubectl apply --dry-run=client`: This runs a local dry run that can help catch some glaring errors by printing the objects that would be sent, but it doesn’t tell you much about how the server will apply your manifest. Using `--dry-run=server` goes one step further by testing your manifest against the server’s state, but it doesn’t persist the resource(s) you’re trying to apply.

- `kubectl rollout`: Follow up `kubectl apply...` with `kubectl rollout status deployment your-app` to see the status of the rollout, whether it’s still waiting, and whether it succeeded, shortening the gap between attempting a rollout and knowing it has failed.

- Add a `rollout` to your CI/CD pipeline: You can block your CI pipeline from finishing while you wait for `kubectl rollout` to exit, and if there’s a non-zero return code, you can fail the task.

- Add a readiness/liveness probe: The easiest way to know that a deployment is successful is to ping your application at an appropriate endpoint and get a 200 OK response. Kubernetes supports this with simple [liveness probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-http-request) that let you know if your application is running as expected (see the sketch after this list).
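
As a rough sketch (the image, path, and probe timings here are assumptions for illustration), an HTTP liveness probe looks something like this:

```
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /               # point this at your app's real health-check endpoint
        port: 80
      initialDelaySeconds: 5  # wait before the first probe
      periodSeconds: 10       # probe every 10 seconds; failures restart the container
```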

**While these can help you catch errors a little faster once you’ve started a deployment, they’re still too late.**

## Pre-test for Kubernetes manifest errors with a manifest IDE

So far, we’ve covered the state of manifest testing and some tips on common manifest mistakes.

One problem remains: How do you make the process of editing and managing manifests more efficient?

The status quo, a cycle of deploying, failing, and fixing ad infinitum, means you’re still engineering YAML files instead of developing new features for your application and team.

Enter [Monokle](https://monokle.kubeshop.io/), an open-source visual tool that simplifies everyday tasks around Kubernetes and helps developers be more productive without tedious YAML engineering. It uses built-in intelligence and syntax checking to give you a comprehensive view of your manifests’ integrity, spotting errors before you deploy.

As a result, you can be effective in Kubernetes on day one without looking up a single YAML syntax rule.

*Kubernetes error debugging view in Monokle*

### Monokle's basic components are:

#### File Explorer

Monokle’s **File Explorer** helps you find all your relevant manifest files, which is particularly useful for those complex deployments. 

#### Navigator

The **Navigator** converts the complex relationships between Kubernetes resources into easily understood workloads, networks, and access controls, and even gives you visual syntax and reference error alerts, including their number and severity.

#### Editor

And the **Editor**, which can also switch to a form-fill mode for less YAML-engineering, highlights errors with a detailed explanation and a suggested fix.

**Monokle also allows you to**:

- Debug and validate the output of Kustomizations and Helm charts, 

- View diffs between local and remote resources, and 

- Navigate between resource relationships and dependencies. 

## Finishing Up

[Monokle](https://monokle.io/) has everything you need to end the cycle of hunting manifest errors. Download it directly from [our site](http://monokle.io/download) and get started immediately.

We’d love to hear from you! Please join us on [Discord](https://discord.gg/6zupCZFQbe)!
