Working with Google Cloud Platform (compute instance group manager) and Terraform on Unix/Linux

Google Cloud Platform is an infrastructure-as-a-service (IaaS) platform that lets customers build, test, and deploy their own applications on Google's infrastructure, using high-performance virtual machines.

Google Compute Engine provides virtual machines that run in Google's innovative data centers and worldwide network.

The compute instance group manager is a service that manages a group of identical VM instances created from a common instance template.
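
To make the idea concrete, here is a minimal, hypothetical sketch of such a resource in the Terraform 0.11-style syntax used throughout this article (all names and values are placeholders; the real module is built step by step below):

resource "google_compute_instance_group_manager" "example" {
    # Keeps two identical VMs built from the referenced instance template
    # (the template resource itself is assumed to exist elsewhere).
    name               = "example-group-manager"
    zone               = "us-east1-b"
    base_instance_name = "example"
    instance_template  = "${google_compute_instance_template.example.self_link}"
    target_size        = 2
}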

Installing Terraform on Unix/Linux

The installation is very straightforward, and I described how to do it here:

Installing Terraform on Unix/Linux

Here are some more useful articles on GCP + Terraform:

Working with Google Cloud Platform (compute instance) and Terraform on Unix/Linux

Working with Google Cloud Platform (compute health check) and Terraform on Unix/Linux

Working with Google Cloud Platform (compute target pool) and Terraform on Unix/Linux

Working with Google Cloud Platform (compute forwarding rule) and Terraform on Unix/Linux

Working with Google Cloud Platform (compute firewall) and Terraform on Unix/Linux

Working with Google Cloud Platform (compute disk) and Terraform on Unix/Linux

Working with Google Cloud Platform (compute image) and Terraform on Unix/Linux

Working with Google Cloud Platform (compute instance template) and Terraform on Unix/Linux

Generating documentation for Terraform with Python on Unix/Linux

Also, in that article, I created a script for automatically installing this software. It was tested on CentOS 6/7, Debian 8, and Mac OS X, and everything works as expected!

To get help on the available commands, run:

$ terraform --help
Usage: terraform [--version] [--help] <command> [args]

The available commands for execution are listed below.
The most common, useful commands are shown first, followed by
less common or more advanced commands. If you're just getting
started with Terraform, stick with the common commands. For the
other commands, please read the help and docs before usage.

Common commands:
    apply              Builds or changes infrastructure
    console            Interactive console for Terraform interpolations
    destroy            Destroy Terraform-managed infrastructure
    env                Workspace management
    fmt                Rewrites config files to canonical format
    get                Download and install modules for the configuration
    graph              Create a visual graph of Terraform resources
    import             Import existing infrastructure into Terraform
    init               Initialize a Terraform working directory
    output             Read an output from a state file
    plan               Generate and show an execution plan
    providers          Prints a tree of the providers used in the configuration
    push               Upload this Terraform module to Atlas to run
    refresh            Update local state file against real resources
    show               Inspect Terraform state or plan
    taint              Manually mark a resource for recreation
    untaint            Manually unmark a resource as tainted
    validate           Validates the Terraform files
    version            Prints the Terraform version
    workspace          Workspace management

All other commands:
    debug              Debug output management (experimental)
    force-unlock       Manually unlock the terraform state
    state              Advanced state management

Now let's put it to use!

Working with Google Cloud Platform (compute instance group manager) and Terraform on Unix/Linux

The first thing to do is set up Cloud Identity. The Google Cloud Identity service lets you give domains, users, and accounts in your organization access to Cloud resources, and centrally manage users and groups through the Google Admin console.

Useful reading:

Installing Google Cloud SDK/gcloud on Unix/Linux
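
The provider block in the main.tf below reads a service account key from /Users/captain/.config/gcloud/creds/terraform_creds.json. As a hedged sketch, such a key could be created with gcloud roughly like this (the service account name "terraform" and the broad roles/editor role are my assumptions, not from the article; adjust them to your project):

$ gcloud iam service-accounts create terraform --display-name "terraform"
$ gcloud projects add-iam-policy-binding terraform-2018 \
    --member "serviceAccount:terraform@terraform-2018.iam.gserviceaccount.com" \
    --role "roles/editor"
$ gcloud iam service-accounts keys create ~/.config/gcloud/creds/terraform_creds.json \
    --iam-account terraform@terraform-2018.iam.gserviceaccount.com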

I have a terraform folder where I keep the providers I work with. Since this example uses google_cloud_platform, I'll create that folder and change into it. Inside it, create:

$ mkdir examples modules

In the examples folder I'll keep the so-called "playbooks" for deploying various services, such as zabbix-server, grafana, web servers, and so on. The modules directory will hold all the required modules.
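
The resulting layout for this article looks like this (other modules referenced later, such as compute_instance_template or compute_target_pool, live alongside in modules/):

terraform/
└── google_cloud_platform/
    ├── examples/
    │   └── compute_instance_group_manager/
    │       └── main.tf
    └── modules/
        └── compute_instance_group_manager/
            ├── compute_instance_group_manager.tf
            ├── variables.tf
            └── outputs.tf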

Let's start writing the module. For this task I'll create a folder:

$ mkdir modules/compute_instance_group_manager

Change into it:

$ cd modules/compute_instance_group_manager

Open the file:

$ vim compute_instance_group_manager.tf

And paste the following into it:

#---------------------------------------------------
# Create compute instance group manager
#---------------------------------------------------
resource "google_compute_instance_group_manager" "compute_instance_group_manager" {
    count                   = "${var.enable_just_instance_template_usage && !var.use_compute_instance_group_manager_default ? 1 : 0}"

    name                    = "${lower(var.name)}-ce-gm-${lower(var.environment)}"
    description             = "${var.description}"
    zone                    = "${var.zone}"
    project                 = "${var.project}"

    base_instance_name      = "${lower(var.base_instance_name)}-${lower(var.environment)}"
    instance_template       = "${var.instance_template}"
    wait_for_instances      = "${var.wait_for_instances}"

    target_pools            = ["${var.target_pools}"]
    target_size             = "${var.target_size}"

    named_port {
        name    = "${var.named_port_name}"
        port    = "${var.named_port_port}"
    }

    update_strategy         = "${var.update_strategy}"
    auto_healing_policies {
        health_check      = "${var.auto_healing_policies_health_check}"
        initial_delay_sec = "${var.auto_healing_policies_initial_delay_sec}"
    }
    rolling_update_policy {
        type                    = "${var.rolling_update_policy_type}"
        minimal_action          = "${var.rolling_update_policy_minimal_action}"
        #max_surge_fixed         = "${var.rolling_update_policy_max_surge_fixed}"
        max_surge_percent       = "${var.rolling_update_policy_max_surge_percent}"
        max_unavailable_fixed   = "${var.rolling_update_policy_max_unavailable_fixed}"
        #max_unavailable_percent = "${var.rolling_update_policy_max_unavailable_percent}"
        min_ready_sec           = "${var.rolling_update_policy_min_ready_sec}"
    }

    lifecycle {
        ignore_changes = []
        create_before_destroy = true
    }
}

resource "google_compute_instance_group_manager" "compute_instance_group_manager_default" {
    count               = "${var.use_compute_instance_group_manager_default ? 1 : 0}"

    name                = "${lower(var.name)}-ce-gm-${lower(var.environment)}"
    zone                = "${var.zone}"

    instance_template   = "${var.instance_template}"
    target_pools        = ["${var.target_pools}"]
    base_instance_name  = "${lower(var.base_instance_name)}-${lower(var.environment)}"

    auto_healing_policies {
        health_check      = "${var.auto_healing_policies_health_check}"
        initial_delay_sec = "${var.auto_healing_policies_initial_delay_sec}"
    }

    lifecycle {
        ignore_changes = []
        create_before_destroy = true
    }
}
#---------------------------------------------------
# Create compute instance group manager with version (IN TESTING. PLEASE DO NOT USE IT FOR NOW)
#---------------------------------------------------
resource "google_compute_instance_group_manager" "compute_instance_group_manager_version" {
    count                   = "${!var.enable_just_instance_template_usage && !var.use_compute_instance_group_manager_default ? 1 : 0}"

    name                    = "${lower(var.name)}-ce-gm-${lower(var.environment)}"
    description             = "${var.description}"
    zone                    = "${var.zone}"
    project                 = "${var.project}"

    base_instance_name      = "${lower(var.base_instance_name)}-${lower(var.environment)}"
    wait_for_instances      = "${var.wait_for_instances}"

    version {
        name                = "${lower(var.base_instance_name)}-${lower(var.environment)}"
        instance_template   = "${var.instance_template}"
        #target_size {
        #    fixed = 1
        #}
    }
    #version {
    #    name               = "${lower(var.base_instance_name)}-${lower(var.environment)}"
    #    instance_template  = "${var.instance_template}"
    #    target_size {
    #        percent = 20
    #    }
    #}

    target_pools            = ["${var.target_pools}"]
    target_size             = "${var.target_size}"

    named_port {
        name    = "${var.named_port_name}"
        port    = "${var.named_port_port}"
    }

    update_strategy         = "${var.update_strategy}"
    auto_healing_policies {
        health_check      = "${var.auto_healing_policies_health_check}"
        initial_delay_sec = "${var.auto_healing_policies_initial_delay_sec}"
    }
    rolling_update_policy {
        type                    = "${var.rolling_update_policy_type}"
        minimal_action          = "${var.rolling_update_policy_minimal_action}"
        #max_surge_fixed         = "${var.rolling_update_policy_max_surge_fixed}"
        max_surge_percent       = "${var.rolling_update_policy_max_surge_percent}"
        max_unavailable_fixed   = "${var.rolling_update_policy_max_unavailable_fixed}"
        #max_unavailable_percent = "${var.rolling_update_policy_max_unavailable_percent}"
        min_ready_sec           = "${var.rolling_update_policy_min_ready_sec}"
    }

    lifecycle {
        ignore_changes = []
        create_before_destroy = true
    }
}

Open the next file:

$ vim variables.tf

And add the following:

variable "name" {
    description = "A unique name for the resource, required by GCE. Changing this forces a new resource to be created."
    default     = "TEST"
}

variable "project" {
    description = "The ID of the project in which the resource belongs. If it is not provided, the provider project is used."
    default     = ""
}

variable "environment" {
    description = "Environment for service"
    default     = "STAGE"
}

variable "orchestration" {
    description = "Type of orchestration"
    default     = "Terraform"
}

variable "createdby" {
    description = "Created by"
    default     = "Vitaliy Natarov"
}

variable "zone" {
    description = "The zone that instances in this group should be created in."
    default     = "us-east1-b"
}

variable "base_instance_name" {
    description = "The base instance name to use for instances in this group. The value must be a valid RFC1035 name. Supported characters are lowercase letters, numbers, and hyphens (-). Instances are named by appending a hyphen and a random four-character string to the base instance name."
    default     = "TEST"
}

variable "instance_template" {
    description = "The full URL to an instance template from which all new instances will be created. Conflicts with version"
    default     = ""
}

variable "target_pools" {
    description = "The full URL of all target pools to which new instances in the group are added. Updating the target pools attribute does not affect existing instances."
    default     = []
}

variable "description" {
    description = "An optional textual description of the instance group manager."
    default     = ""
}

variable "update_strategy" {
    description = "(Optional, Default 'RESTART') If the instance_template resource is modified, a value of 'NONE' will prevent any of the managed instances from being restarted by Terraform. A value of 'RESTART' will restart all of the instances at once. 'ROLLING_UPDATE' is supported as [Beta feature]. A value of 'ROLLING_UPDATE' requires rolling_update_policy block to be set"
    default     = "RESTART"
}

variable "target_size" {
    description = "The target number of running instances for this managed instance group. This value should always be explicitly set unless this resource is attached to an autoscaler, in which case it should never be set. Defaults to 0."
    default     = 0
}

variable "wait_for_instances" {
    description = "Whether to wait for all instances to be created/updated before returning. Note that if this is set to true and the operation does not succeed, Terraform will continue trying until it times out."
    default     = "true"
}

variable "rolling_update_policy_type" {
    description = "The type of update. Valid values are 'OPPORTUNISTIC', 'PROACTIVE'"
    default     = "PROACTIVE"
}

variable "rolling_update_policy_minimal_action" {
    description = "Minimal action to be taken on an instance. Valid values are 'RESTART', 'REPLACE'"
    default     = "REPLACE"
}

variable "rolling_update_policy_max_surge_fixed" {
    description = "The maximum number of instances that can be created above the specified targetSize during the update process. Conflicts with max_surge_percent. If neither is set, defaults to 1"
    default     = "1"
}

variable "rolling_update_policy_max_surge_percent" {
    description = "The maximum number of instances(calculated as percentage) that can be created above the specified targetSize during the update process. Conflicts with max_surge_fixed."
    default     = "20"
}

variable "rolling_update_policy_max_unavailable_fixed" {
    description = "The maximum number of instances that can be unavailable during the update process. Conflicts with max_unavailable_percent. If neither is set, defaults to 1"
    default     = "1"
}

variable "rolling_update_policy_max_unavailable_percent" {
    description = "The maximum number of instances(calculated as percentage) that can be unavailable during the update process. Conflicts with max_unavailable_fixed."
    default     = "20"
}

variable "rolling_update_policy_min_ready_sec" {
    description = "Minimum number of seconds to wait for after a newly created instance becomes available. This value must be from range [0, 3600]"
    default     = "50"
}

variable "named_port_name" {
    description = "The name of the port."
    default     = "custom-http"
}

variable "named_port_port" {
    description = "The port number."
    default     = "80"
}

variable "auto_healing_policies_health_check" {
    description = "The health check resource that signals autohealing."
    default     = ""
}

variable "auto_healing_policies_initial_delay_sec" {
    description = "The number of seconds that the managed instance group waits before it applies autohealing policies to new instances or recently recreated instances. Between 0 and 3600."
    default     = "300"
}

variable "enable_just_instance_template_usage" {
    description = "Enable instance template usage. Will be conflict with version. Default - true"
    default     = "true"
}

variable "use_compute_instance_group_manager_default" {
    description = "Enable instance group manager default"
    default     = false
}

This file, unsurprisingly, holds all the variables. Thanks, Captain Obvious! Note that the enable_just_instance_template_usage and use_compute_instance_group_manager_default flags together select which of the three resources gets created: with the defaults, the plain instance_template-based group manager; with use_compute_instance_group_manager_default = true, the minimal "default" one; and with both flags set to false, the version-based one that is still in testing.

Open the last file:

$ vim outputs.tf

And paste the following lines into it:

output "name" {
    description = "Name of compute instance group manager"
    value       = "${google_compute_instance_group_manager.compute_instance_group_manager.*.name}"
}

output "self_link" {
    description = "self_link"
    value       = "${google_compute_instance_group_manager.compute_instance_group_manager.*.self_link}"
}

output "instance_group" {
    description = "Instance group"
    value       = "${google_compute_instance_group_manager.compute_instance_group_manager.*.instance_group}"
}

output "gm_self_link_default" {
    description = "self_link"
    value       = "${google_compute_instance_group_manager.compute_instance_group_manager_default.*.self_link}"
}
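
These outputs can then be consumed by the calling configuration. A minimal sketch, assuming the module is instantiated under the name used in the example below:

output "group_manager_name" {
    description = "Name of the group manager, read back through the module"
    value       = "${module.compute_instance_group_manager.name}"
}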

Now go to the google_cloud_platform/examples folder and create one more folder to try out what we've just written:

$ mkdir compute_instance_group_manager && cd $_

Inside the new folder, open the file:

$ vim main.tf

And paste:

#
# MAINTAINER Vitaliy Natarov "vitaliy.natarov@yahoo.com"
#
terraform {
  required_version = "> 0.9.0"
}
provider "google" {
    credentials = "${file("/Users/captain/.config/gcloud/creds/terraform_creds.json")}"
    project     = "terraform-2018"
    region      = "us-east1"
}

module "compute_health_check" {
    source                              = "../../modules/compute_health_check"
    name                                = "TEST"

    project                             = "terraform-2018"

    enable_compute_http_health_check    = true
}

module "compute_target_pool" {
    source                              = "../../modules/compute_target_pool"
    name                                = "TEST"

    project                             = "terraform-2018"
    region                              = "us-east1"

    use_compute_target_pool_default     = true
    health_checks                       = ["testhttphcstage"]
}

module "compute_forwarding_rule" {
    source                          = "../../modules/compute_forwarding_rule"
    name                            = "TEST"

    project                         = "terraform-2018"

    port_range                      = "80"
    target                          = "${element(module.compute_target_pool.default_pool_self_link, 0)}"
}

module "compute_firewall" {
    source                          = "../../modules/compute_firewall"
    name                            = "TEST"

    project                         = "terraform-2018"

    enable_all_ingress              = true
    enable_all_egress               = true

    #enable_all_ingress              = false
    #allow_protocol                  = "icmp"
    #allow_ports                     = ["80", "443"]
}

module "compute_instance_template" {
    source                              = "../../modules/compute_instance_template"
    name                                = "TEST"

    #Create a new boot disk from an image
    disk_source_image                   = "centos-7"
    disk_auto_delete                    = true
    disk_boot                           = true

    #Use an existing disk resource
    #disk_source_image                   = "foo_existing_disk"
    #disk_auto_delete                    = false
    #disk_boot                           = false

    service_account_scopes              = ["userinfo-email", "compute-ro", "storage-ro"]
    can_ip_forward                      = false
    network                             = "default"
    machine_type                        = "n1-highcpu-4"
}

module "compute_instance_group_manager" {
    source                              = "../../modules/compute_instance_group_manager"
    name                                = "TEST"

    #enable_just_instance_template_usage         = "true"
    #use_compute_instance_group_manager_default = false
    #instance_template                          = "${element(module.compute_instance_template.self_link, 0)}"
    #target_pools                               = ["${element(module.compute_target_pool.default_pool_self_link, 0)}"]
    #auto_healing_policies_health_check         = "${element(module.compute_health_check.http_self_link, 0)}"
    #target_size                                = 1

    # Uses for autoscaler (if set true and target_size=0)
    use_compute_instance_group_manager_default  = true
    instance_template                           = "${element(module.compute_instance_template.self_link, 0)}"
    target_pools                                = ["${element(module.compute_target_pool.default_pool_self_link, 0)}"]
    #auto_healing_policies_health_check          = "${element(module.compute_health_check.http_self_link, 0)}"
    target_size                                 = 0
}
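
The comment about the autoscaler deserves a word: with use_compute_instance_group_manager_default = true and target_size = 0, the group size is meant to be driven by an autoscaler rather than by Terraform. A hedged sketch of attaching one (the resource name and policy numbers are illustrative, not from the article):

resource "google_compute_autoscaler" "compute_autoscaler" {
    name   = "test-as-stage"
    zone   = "us-east1-b"
    # Points at the "default" group manager created by the module above
    target = "${element(module.compute_instance_group_manager.gm_self_link_default, 0)}"

    autoscaling_policy {
        max_replicas    = 4
        min_replicas    = 1
        cooldown_period = 60

        cpu_utilization {
            target = 0.7
        }
    }
}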

Everything is written and ready to use. Let's start testing. In the folder with your playbook, run:

$ terraform init

This initializes the project. Next, pull in the module:

$ terraform get

PS: To pick up changes made to the module itself, you can run:

$ terraform get -update

Let's validate the configuration:

$ terraform validate

Now run a dry run:

$ terraform plan

The output told me everything is fine and the deployment can be started:

$ terraform apply

As the output shows, everything went smoothly! To delete the created resources, run:

$ terraform destroy

For convenience, I upload all the material to my GitHub account:

$ git clone https://github.com/SebastianUA/terraform.git

That's all for now. This concludes the article "Working with Google Cloud Platform (compute instance group manager) and Terraform on Unix/Linux".
