Problem Description
How to run kubectl apply commands in Terraform

I have developed a Terraform script to create a Kubernetes (k8s) cluster on GKE.
After the cluster is created successfully, I have a set of YAML files to apply to the cluster.
How can I invoke the following command from within my Terraform script?

kubectl create <.yaml>
Solutions
Approach 1:
You can use the third-party Terraform kubectl provider. Follow the installation instructions here: Kubectl Terraform Provider
Then simply define a kubectl_manifest resource pointing to your YAML file, like:
# Get your cluster-info
data "google_container_cluster" "my_cluster" {
  name     = "my-cluster"
  location = "us-east1-a"
}

# Same parameters as the kubernetes provider
provider "kubectl" {
  load_config_file       = false
  host                   = "https://${data.google_container_cluster.my_cluster.endpoint}"
  token                  = "${data.google_container_cluster.my_cluster.access_token}"
  cluster_ca_certificate = "${base64decode(data.google_container_cluster.my_cluster.master_auth.0.cluster_ca_certificate)}"
}

resource "kubectl_manifest" "my_service" {
  yaml_body = file("${path.module}/my_service.yaml")
}
This approach has the big advantage that everything is obtained dynamically and does not rely on any local config file (very important if you run Terraform in a CI/CD server or to manage a multicluster environment).
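On Terraform 0.13+, a third-party provider must also be declared in required_providers before terraform init can install it. A minimal sketch, assuming the community gavinbunney/kubectl build of this provider (the version constraint is illustrative):

```terraform
terraform {
  required_providers {
    # Community provider, not a HashiCorp-official one; run
    # `terraform init` after adding this block to install it.
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.14.0"
    }
  }
}
```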
Multi-object manifest files
The kubectl provider also offers data sources that make it very easy to handle files with multiple objects. From the kubectl_filename_list docs:
data "kubectl_filename_list" "manifests" {
  pattern = "./manifests/*.yaml"
}

resource "kubectl_manifest" "test" {
  count     = length(data.kubectl_filename_list.manifests.matches)
  yaml_body = file(element(data.kubectl_filename_list.manifests.matches, count.index))
}
Extra points: you can templatize your YAML files. For example, I interpolate the cluster name into the multi-resource autoscaler YAML file as follows:
resource "kubectl_manifest" "autoscaler" {
  yaml_body = templatefile("${path.module}/autoscaler.yaml", { cluster_name = var.cluster_name })
}
Approach 2:
There are a couple of ways to achieve what you want to do.
You can use the Terraform template_file data source together with a null_resource.
Notice that I'm using a trigger to re-run the kubectl command whenever you modify the template (you may want to replace create with apply).
data "template_file" "your_template" {
  template = "${file("${path.module}/templates/<.yaml>")}"
}

resource "null_resource" "your_deployment" {
  triggers = {
    manifest_sha1 = "${sha1("${data.template_file.your_template.rendered}")}"
  }

  provisioner "local-exec" {
    command = "kubectl create -f -<<EOF\n${data.template_file.your_template.rendered}\nEOF"
  }
}
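One caveat with this pattern: when the null_resource is destroyed, nothing removes the objects from the cluster. A destroy-time provisioner is a common (if imperfect) companion; a sketch, assuming kubectl can still reach the cluster at destroy time:

```terraform
resource "null_resource" "your_deployment" {
  # Destroy-time provisioners may only reference self, so the rendered
  # manifest is carried through the triggers map.
  triggers = {
    manifest = data.template_file.your_template.rendered
  }

  provisioner "local-exec" {
    command = "kubectl apply -f -<<EOF\n${self.triggers.manifest}\nEOF"
  }

  # Runs on `terraform destroy` and deletes the same objects.
  provisioner "local-exec" {
    when    = destroy
    command = "kubectl delete -f -<<EOF\n${self.triggers.manifest}\nEOF"
  }
}
```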
But maybe the best way is to use the Kubernetes provider.
There are two ways to configure it:
- By default, your manifests will be deployed in your current context (kubectl config current-context).
- Alternatively, you can statically define TLS certificate credentials:
provider "kubernetes" {
  host = "https://104.196.242.174"

  client_certificate     = "${file("~/.kube/client-cert.pem")}"
  client_key             = "${file("~/.kube/client-key.pem")}"
  cluster_ca_certificate = "${file("~/.kube/cluster-ca-cert.pem")}"
}
Once that is done, you can create your own deployments quite easily. A basic pod can be as simple as:
resource "kubernetes_pod" "hello_world" {
  metadata {
    name = "hello-world"
  }

  spec {
    container {
      image = "my_account/hello-world:1.0.0"
      name  = "hello-world"
    }

    image_pull_secrets {
      name = "docker-hub"
    }
  }
}
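For anything longer-lived than a bare pod, the same provider offers higher-level resources. A minimal sketch of a kubernetes_deployment, reusing the hypothetical my_account/hello-world image from above:

```terraform
resource "kubernetes_deployment" "hello_world" {
  metadata {
    name = "hello-world"
  }

  spec {
    replicas = 2

    # The selector must match the pod template's labels.
    selector {
      match_labels = {
        app = "hello-world"
      }
    }

    template {
      metadata {
        labels = {
          app = "hello-world"
        }
      }

      spec {
        container {
          image = "my_account/hello-world:1.0.0"
          name  = "hello-world"
        }
      }
    }
  }
}
```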
Approach 3:
When creating multiple Terraform resources, it is usually better to use named resource instances (for_each) than a list (count). If the source file is updated and the order of the Kubernetes resources changes, Terraform may delete and recreate resources merely because their index changed. This code builds a key by concatenating the kind and metadata.name fields:
data "kubectl_file_documents" "myk8s" {
  content = file("./myk8s.yaml")
}

resource "kubectl_manifest" "myk8s" {
  # Create a map of { "kind--name" => raw_yaml }
  for_each = {
    for value in [
      for v in data.kubectl_file_documents.myk8s.documents : [yamldecode(v), v]
    ] : "${value.0["kind"]}--${value.0["metadata"]["name"]}" => value.1
  }
  yaml_body = each.value
}
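One caveat: two objects with the same kind and name in different namespaces would collide under this key. A variant that also folds the (possibly absent) namespace into the key, using lookup() with a fallback; the "cluster" placeholder for cluster-scoped objects is an illustrative choice, not a Kubernetes convention:

```terraform
resource "kubectl_manifest" "myk8s" {
  # Key format: "kind--namespace--name"; objects without a
  # metadata.namespace (cluster-scoped) get the literal "cluster".
  for_each = {
    for value in [
      for v in data.kubectl_file_documents.myk8s.documents : [yamldecode(v), v]
    ] : format(
      "%s--%s--%s",
      value.0["kind"],
      lookup(value.0["metadata"], "namespace", "cluster"),
      value.0["metadata"]["name"],
    ) => value.1
  }
  yaml_body = each.value
}
```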
In the future you may want to use HashiCorp's official kubernetes_manifest resource instead (as of 2.5.0 it is in beta and still buggy):
resource "kubernetes_manifest" "default" {
  for_each = {
    for value in [
      for yaml in split(
        "\n---\n",
        "\n${replace(file("./myk8s.yaml"), "/(?m)^---[[:blank:]]*(#.*)?$/", "---")}\n"
      ) :
      yamldecode(yaml)
      if trimspace(replace(yaml, "/(?m)(^[[:blank:]]*(#.*)?$)+/", "")) != ""
    ] : "${value["kind"]}--${value["metadata"]["name"]}" => value
  }
  manifest = each.value
}
Approach 4:
If the YAML file is hosted at a remote URL and contains multiple configs/objects, here is what can be done.
resource "null_resource" "controller_rancher_installation" {
  provisioner "local-exec" {
    command = <<EOT
      echo "Downloading rancher config"
      curl -L https://some-url.yaml -o rancherconfig.yaml
    EOT
  }
}

data "kubectl_path_documents" "rancher_manifests" {
  pattern    = "./rancherconfig.yaml"
  depends_on = [null_resource.controller_rancher_installation]
}

resource "kubectl_manifest" "spot_cluster_controller" {
  count     = length(data.kubectl_path_documents.rancher_manifests.documents)
  yaml_body = element(data.kubectl_path_documents.rancher_manifests.documents, count.index)
}
The idea is to download the file first and then apply it. This is based on two observations:
- pattern = "./rancherconfig.yaml" doesn't support remote URLs, only local files.
- "kubectl_manifest" by default only applies the first config/object in the YAML file.
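An alternative that avoids shelling out to curl is HashiCorp's http data source. A sketch, assuming the hashicorp/http provider v3 (where the fetched body is exposed as response_body) and the same hypothetical URL as above:

```terraform
# Fetch the remote manifest over HTTPS at plan time.
data "http" "rancher_config" {
  url = "https://some-url.yaml" # hypothetical URL, as in the curl example
}

# Split the fetched body into individual YAML documents.
data "kubectl_file_documents" "rancher" {
  content = data.http.rancher_config.response_body
}

resource "kubectl_manifest" "rancher" {
  count     = length(data.kubectl_file_documents.rancher.documents)
  yaml_body = element(data.kubectl_file_documents.rancher.documents, count.index)
}
```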
Approach 5:
Adding some more information to the existing answers for 2022 viewers. The question does not say where these terraform and kubectl commands need to be executed.
Case 1: The developer is using a local system that is not part of Google Cloud Platform.
In this case, when you use null_resource to execute a command, the command runs on your local PC, not in Google Cloud.
Case 2: The developer is using a system that is part of Google Cloud Platform, such as the Google Cloud Console terminal or Google Cloud Code.
In this case, when you use null_resource to execute a command, the command runs in a temporary environment that is actively managed by Google Cloud.
So yes, it is possible to execute kubectl commands from Terraform. That said, the next question is whether it is a good approach. It is not.
GCP is not a free resource. If you are building or using tools, it is important to use the right tool for the right job. DevOps operations have two core areas: setting up infrastructure, where Terraform works best, and managing infrastructure, where Ansible works best. For the same reason, GCP actively provides support for both.
(by Sunil Gajula、david_g、Ricardo Jover、Yuri Astrakhan、smiletrl、sam)
References