
Working with Google Cloud Platform (storage bucket) and Terraform in Unix/Linux
Google Cloud Platform is an infrastructure-as-a-service (IaaS) platform that lets customers build, test, and deploy their own applications on Google's infrastructure, on high-performance virtual machines.
Google Compute Engine provides virtual machines that run in Google's data centers and on its worldwide network.
A storage bucket is a storage service for keeping data. Among other things, it can be used to host a static website.
Installing Terraform in Unix/Linux
The installation is quite straightforward, and I have described how to do it here:
Installing Terraform in Unix/Linux
Here are some other useful articles on GCP + Terraform:
Working with Google Cloud Platform (compute instance) and Terraform in Unix/Linux
Working with Google Cloud Platform (compute health check) and Terraform in Unix/Linux
Working with Google Cloud Platform (compute target pool) and Terraform in Unix/Linux
Working with Google Cloud Platform (compute forwarding rule) and Terraform in Unix/Linux
Working with Google Cloud Platform (compute firewall) and Terraform in Unix/Linux
Working with Google Cloud Platform (compute disk) and Terraform in Unix/Linux
Working with Google Cloud Platform (compute image) and Terraform in Unix/Linux
Working with Google Cloud Platform (compute instance template) and Terraform in Unix/Linux
Working with Google Cloud Platform (compute instance group manager) and Terraform in Unix/Linux
Working with Google Cloud Platform (compute autoscaler) and Terraform in Unix/Linux
Working with Google Cloud Platform (google kms) and Terraform in Unix/Linux
Generating documentation for Terraform with Python in Unix/Linux
In that article I also provided a script that installs the software automatically. It has been tested on CentOS 6/7, Debian 8, and Mac OS X, and works as expected.
To get help on the available commands, run:
$ terraform --help
Usage: terraform [--version] [--help] <command> [args]

The available commands for execution are listed below.
The most common, useful commands are shown first, followed by
less common or more advanced commands. If you're just getting
started with Terraform, stick with the common commands. For the
other commands, please read the help and docs before usage.

Common commands:
    apply              Builds or changes infrastructure
    console            Interactive console for Terraform interpolations
    destroy            Destroy Terraform-managed infrastructure
    env                Workspace management
    fmt                Rewrites config files to canonical format
    get                Download and install modules for the configuration
    graph              Create a visual graph of Terraform resources
    import             Import existing infrastructure into Terraform
    init               Initialize a Terraform working directory
    output             Read an output from a state file
    plan               Generate and show an execution plan
    providers          Prints a tree of the providers used in the configuration
    push               Upload this Terraform module to Atlas to run
    refresh            Update local state file against real resources
    show               Inspect Terraform state or plan
    taint              Manually mark a resource for recreation
    untaint            Manually unmark a resource as tainted
    validate           Validates the Terraform files
    version            Prints the Terraform version
    workspace          Workspace management

All other commands:
    debug              Debug output management (experimental)
    force-unlock       Manually unlock the terraform state
    state              Advanced state management
Let's put it to use!
The first thing to do is set up Cloud Identity. The Google Cloud Identity service lets you grant domains, users, and accounts in your organization access to Cloud resources, and centrally manage users and groups through the Google Admin console.
Useful reading:
Installing Google Cloud SDK/gcloud in Unix/Linux
I have a terraform folder that holds the providers I work with. Since this example uses google_cloud_platform, I create that folder and change into it. Inside it, create:
$ mkdir examples modules
In the examples folder I keep so-called "playbooks" for rolling out various services, such as zabbix-server, grafana, web servers, and so on. In the modules directory I keep all the required modules.
Let's start writing the module; for this task I create a folder:
$ mkdir modules/storage_bucket
Change into it:
$ cd modules/storage_bucket
Open the file:
$ vim storage_bucket.tf
Paste the following into it:
#---------------------------------------------------
# Create storage bucket
#---------------------------------------------------
resource "google_storage_bucket" "storage_bucket" {
  #count         = "${var.}"
  name          = "${lower(var.name)}-sb-${lower(var.environment)}"
  project       = "${var.project}"
  location      = "${var.location}"
  storage_class = "${var.storage_class}"
  force_destroy = "${var.force_destroy}"

  lifecycle_rule {
    action {
      type          = "${var.lifecycle_rule_action_type}"
      storage_class = "${var.lifecycle_rule_action_type == "SetStorageClass" ? var.lifecycle_rule_action_storage_class : ""}"
    }
    condition {
      age                   = "${var.lifecycle_rule_condition_age}"
      created_before        = "${var.lifecycle_rule_condition_created_before}"
      is_live               = "${var.lifecycle_rule_condition_is_live}"
      matches_storage_class = ["${var.lifecycle_rule_condition_matches_storage_class}"]
      num_newer_versions    = "${var.lifecycle_rule_condition_num_newer_versions}"
    }
  }

  versioning {
    enabled = "${var.versioning_enabled}"
  }

  website {
    main_page_suffix = "${var.website_main_page_suffix}"
    not_found_page   = "${var.website_not_found_page}"
  }

  cors {
    origin          = ["${var.cors_origin}"]
    method          = ["${var.cors_method}"]
    response_header = ["${var.cors_response_header}"]
    max_age_seconds = "${var.cors_max_age_seconds}"
  }

  logging {
    log_bucket        = "${var.logging_log_bucket}"
    log_object_prefix = "${var.logging_log_object_prefix}"
  }

  labels {
    name          = "${lower(var.name)}-sb-${lower(var.environment)}"
    environment   = "${lower(var.environment)}"
    orchestration = "${lower(var.orchestration)}"
  }

  lifecycle {
    ignore_changes        = []
    create_before_destroy = true
  }
}
#---------------------------------------------------
# Create storage bucket acl
#---------------------------------------------------
resource "google_storage_bucket_acl" "storage_bucket_acl_role_entity" {
  count = "${var.enable_storage_bucket_acl && length(var.bucket) > 0 && length(var.role_entity) != 0 ? 1 : 0}"

  bucket      = "${var.bucket}"
  role_entity = ["${var.role_entity}"]
  default_acl = "${var.default_acl}"

  lifecycle {
    ignore_changes        = []
    create_before_destroy = true
  }
}

resource "google_storage_bucket_acl" "storage_bucket_acl_predefined_acl" {
  count = "${var.enable_storage_bucket_acl && length(var.bucket) > 0 && length(var.predefined_acl) > 0 ? 1 : 0}"

  bucket         = "${var.bucket}"
  predefined_acl = "${var.predefined_acl}"
  default_acl    = "${var.default_acl}"

  lifecycle {
    ignore_changes        = []
    create_before_destroy = true
  }
}
#---------------------------------------------------
# Create storage bucket iam binding
#---------------------------------------------------
resource "google_storage_bucket_iam_binding" "storage_bucket_iam_binding" {
  count = "${var.enable_storage_bucket_iam_binding ? 1 : 0}"

  bucket  = "${var.bucket}"
  role    = "${var.role}"
  members = ["${var.members}"]

  lifecycle {
    ignore_changes        = []
    create_before_destroy = true
  }
}
#---------------------------------------------------
# Create storage bucket iam member
#---------------------------------------------------
resource "google_storage_bucket_iam_member" "storage_bucket_iam_member" {
  count = "${var.enable_storage_bucket_iam_member ? 1 : 0}"

  bucket = "${var.bucket}"
  role   = "${var.role}"
  member = "${element(var.members, 0)}"

  lifecycle {
    ignore_changes        = []
    create_before_destroy = true
  }
}
#---------------------------------------------------
# Create iam policy for bucket
#---------------------------------------------------
data "google_iam_policy" "iam_policy" {
  binding {
    role    = "${var.role}"
    members = ["${var.members}"]
  }
}

resource "google_storage_bucket_iam_policy" "storage_bucket_iam_policy" {
  count = "${var.enable_storage_bucket_iam_policy ? 1 : 0}"

  bucket      = "${var.bucket}"
  policy_data = "${data.google_iam_policy.iam_policy.policy_data}"

  depends_on = ["data.google_iam_policy.iam_policy"]

  lifecycle {
    ignore_changes        = []
    create_before_destroy = true
  }
}
#---------------------------------------------------
# Create storage default object acl
#---------------------------------------------------
resource "google_storage_default_object_acl" "storage_default_object_acl" {
  count = "${var.enable_storage_default_object_acl && length(var.role_entity) > 0 ? 1 : 0}"

  bucket      = "${var.bucket}"
  role_entity = ["${var.role_entity}"]

  lifecycle {
    ignore_changes        = []
    create_before_destroy = true
  }
}
#---------------------------------------------------
# Create storage object acl
#---------------------------------------------------
resource "google_storage_object_acl" "storage_object_acl" {
  count = "${var.enable_storage_object_acl && length(var.role_entity) > 0 ? 1 : 0}"

  bucket = "${var.bucket}"
  object = "${var.object}"
  #predefined_acl = ""
  role_entity = ["${var.role_entity}"]

  lifecycle {
    ignore_changes        = []
    create_before_destroy = true
  }
}
#---------------------------------------------------
# Create storage bucket object
#---------------------------------------------------
resource "google_storage_bucket_object" "storage_bucket_object" {
  count = "${var.enable_storage_bucket_object && length(var.bucket) > 0 && length(var.source) > 0 ? 1 : 0}"

  name   = "${lower(var.name)}-sb-obj-${lower(var.environment)}"
  source = "${var.source}"
  bucket = "${var.bucket}"

  cache_control       = "${var.cache_control}"
  content_disposition = "${var.content_disposition}"
  content_encoding    = "${var.content_encoding}"
  content_language    = "${var.content_language}"
  content_type        = "${var.content_type}"
  storage_class       = "${var.storage_class}"

  lifecycle {
    ignore_changes        = []
    create_before_destroy = true
  }
}
#---------------------------------------------------
# Create storage notification
#---------------------------------------------------
resource "google_storage_notification" "storage_notification" {
  count = "${var.enable_storage_notification && var.topic != "" && var.bucket != "" ? 1 : 0}"

  bucket             = "${var.bucket}"
  payload_format     = "${var.payload_format}"
  topic              = "${var.topic}"
  event_types        = ["${var.event_types}"]
  object_name_prefix = "${var.object_name_prefix}"

  custom_attributes {
    name          = "${lower(var.name)}-sb-n-${lower(var.environment)}"
    environment   = "${lower(var.environment)}"
    orchestration = "${lower(var.orchestration)}"
  }

  lifecycle {
    ignore_changes        = []
    create_before_destroy = true
  }
}
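A side note on the pattern used above: almost every optional resource is gated by a ternary expression in `count`, the usual way to toggle resources on and off in Terraform 0.11-era syntax. A minimal, stand-alone sketch of the idea (the flag and bucket name below are illustrative, not part of the module):

```hcl
# When the flag is "false", count evaluates to 0 and Terraform creates
# nothing; flipping it to "true" creates exactly one resource.
variable "enable_acl" {
  default = "false"
}

resource "google_storage_bucket_acl" "example" {
  count          = "${var.enable_acl ? 1 : 0}"
  bucket         = "some-existing-bucket"
  predefined_acl = "publicRead"
}
```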
Open the file:
$ vim variables.tf
And add:
variable "name" {
  description = "(Required) The name of the bucket."
  default     = "TEST"
}

variable "environment" {
  description = "Environment for service"
  default     = "STAGE"
}

variable "orchestration" {
  description = "Type of orchestration"
  default     = "Terraform"
}

variable "location" {
  description = "(Optional, Default: 'US')"
  default     = "US"
}

variable "force_destroy" {
  description = "(Optional, Default: false) When deleting a bucket, this boolean option will delete all contained objects. If you try to delete a bucket that contains objects, Terraform will fail that run."
  default     = false
}

variable "project" {
  description = "(Optional) The ID of the project in which the resource belongs. If it is not provided, the provider project is used."
  default     = ""
}

variable "storage_class" {
  description = "(Optional) The Storage Class of the new bucket. Supported values include: MULTI_REGIONAL, REGIONAL, NEARLINE, COLDLINE."
  default     = "MULTI_REGIONAL"
}

variable "lifecycle_rule_action_type" {
  description = "The type of the action of this Lifecycle Rule. Supported values include: Delete and SetStorageClass."
  default     = "SetStorageClass"
}

variable "lifecycle_rule_action_storage_class" {
  description = "(Required if action type is SetStorageClass) The target Storage Class of objects affected by this Lifecycle Rule. Supported values include: MULTI_REGIONAL, REGIONAL, NEARLINE, COLDLINE."
  default     = "MULTI_REGIONAL"
}

variable "lifecycle_rule_condition_age" {
  description = "(Optional) Minimum age of an object in days to satisfy this condition."
  default     = "30"
}

variable "lifecycle_rule_condition_created_before" {
  description = "(Optional) Creation date of an object in RFC 3339 (e.g. 2017-06-13) to satisfy this condition."
  default     = ""
}

variable "lifecycle_rule_condition_is_live" {
  description = "(Optional) Defaults to false to match archived objects. If true, this condition matches live objects. Unversioned buckets have only live objects."
  default     = "false"
}

variable "lifecycle_rule_condition_matches_storage_class" {
  description = "(Optional) Storage Class of objects to satisfy this condition. Supported values include: MULTI_REGIONAL, REGIONAL, NEARLINE, COLDLINE, STANDARD, DURABLE_REDUCED_AVAILABILITY."
  default     = ["MULTI_REGIONAL"]
}

variable "lifecycle_rule_condition_num_newer_versions" {
  description = "(Optional) Relevant only for versioned objects. The number of newer versions of an object to satisfy this condition."
  default     = "2"
}

variable "versioning_enabled" {
  description = "(Optional) While set to true, versioning is fully enabled for this bucket."
  default     = "false"
}

variable "website_main_page_suffix" {
  description = "(Optional) Behaves as the bucket's directory index where missing objects are treated as potential directories."
  default     = "index.html"
}

variable "website_not_found_page" {
  description = "(Optional) The custom object to return when a requested resource is not found."
  default     = "404.html"
}

variable "cors_origin" {
  description = "The list of Origins eligible to receive CORS response headers. Note: '*' is permitted in the list of origins, and means 'any Origin'."
  default     = ["*"]
}

variable "cors_method" {
  description = "(Optional) The list of HTTP methods on which to include CORS response headers, (GET, OPTIONS, POST, etc) Note: '*' is permitted in the list of methods, and means 'any method'."
  default     = ["*"]
}

variable "cors_response_header" {
  description = "(Optional) The list of HTTP headers other than the simple response headers to give permission for the user-agent to share across domains."
  default     = ["*"]
}

variable "cors_max_age_seconds" {
  description = "(Optional) The value, in seconds, to return in the Access-Control-Max-Age header used in preflight responses."
  default     = "84300"
}

variable "logging_log_bucket" {
  description = "(Required) The bucket that will receive log objects."
  default     = ""
}

variable "logging_log_object_prefix" {
  description = "(Optional, Computed) The object prefix for log objects. If it's not provided, by default GCS sets this to the log_bucket's name."
  default     = ""
}

variable "enable_storage_bucket_acl" {
  description = "Enable storage bucket acl"
  default     = "false"
}

variable "bucket" {
  description = "(Required) The name of the bucket it applies to."
  default     = ""
}

variable "predefined_acl" {
  description = "(Optional) The canned GCS ACL to apply. Must be set if role_entity is not."
  default     = ""
}

variable "role_entity" {
  description = "(Optional) List of role/entity pairs in the form ROLE:entity. See GCS Bucket ACL documentation for more details. Must be set if predefined_acl is not."
  default     = []
}

variable "default_acl" {
  description = "(Optional) Configure this ACL to be the default ACL."
  default     = ""
}

variable "enable_storage_bucket_iam_binding" {
  description = "Enable storage bucket iam binding"
  default     = "false"
}

variable "role" {
  description = "The role that should be applied. Note that custom roles must be of the format [projects|organizations]/{parent-name}/roles/{role-name}."
  default     = "roles/storage.objectViewer"
}

variable "members" {
  description = "(Required) Identities that will be granted the privilege in role."
  default     = []
}

variable "enable_storage_bucket_iam_member" {
  description = "Enable storage bucket iam member"
  default     = "false"
}

variable "enable_storage_bucket_iam_policy" {
  description = "Enable storage bucket iam policy"
  default     = "false"
}

variable "enable_storage_default_object_acl" {
  description = "Enable storage default object acl"
  default     = "false"
}

variable "enable_storage_object_acl" {
  description = "Enable storage object acl"
  default     = "false"
}

variable "object" {
  description = "(Required) The name of the object it applies to."
  default     = ""
}

variable "enable_storage_bucket_object" {
  description = "Enable storage bucket object"
  default     = "false"
}

variable "source" {
  description = "(Optional) A path to the data you want to upload. Must be defined if content is not."
  default     = ""
}

variable "cache_control" {
  description = "(Optional) Cache-Control directive to specify caching behavior of object data. If omitted and object is accessible to all anonymous users, the default will be public, max-age=3600"
  default     = ""
}

variable "content_disposition" {
  description = "(Optional) Content-Disposition of the object data."
  default     = ""
}

variable "content_encoding" {
  description = "(Optional) Content-Encoding of the object data."
  default     = ""
}

variable "content_language" {
  description = "(Optional) Content-Language of the object data."
  default     = ""
}

variable "content_type" {
  description = "(Optional) Content-Type of the object data. Defaults to 'application/octet-stream' or 'text/plain; charset=utf-8'."
  default     = ""
}

variable "enable_storage_notification" {
  description = "Enable storage notification"
  default     = "false"
}

variable "payload_format" {
  description = "(Required) The desired content of the Payload. One of 'JSON_API_V1' or 'NONE'."
  default     = "NONE"
}

variable "topic" {
  description = "(Required) The Cloud PubSub topic to which this subscription publishes. Expects either the topic name, assumed to belong to the default GCP provider project, or the project-level name, i.e. projects/my-gcp-project/topics/my-topic or my-topic."
  default     = ""
}

variable "event_types" {
  description = "(Optional) List of event type filters for this notification config. If not specified, Cloud Storage will send notifications for all event types. The valid types are: 'OBJECT_FINALIZE', 'OBJECT_METADATA_UPDATE', 'OBJECT_DELETE', 'OBJECT_ARCHIVE'"
  default     = ["OBJECT_FINALIZE", "OBJECT_METADATA_UPDATE"]
}

variable "object_name_prefix" {
  description = "(Optional) Specifies a prefix path filter for this notification config. Cloud Storage will only send notifications for objects in this bucket whose names begin with the specified prefix."
  default     = ""
}
This file, as the name suggests, holds all the variables. Thanks, Captain Obvious!
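Any of these defaults can be overridden when calling the module. For instance, a hypothetical call that switches the bucket to regional storage (the values are illustrative only, not taken from the repository):

```hcl
module "storage_bucket" {
  source        = "../../modules/storage_bucket"
  name          = "logs"
  environment   = "PROD"
  location      = "us-east1"
  storage_class = "REGIONAL"
}
```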
Open the last file:
$ vim outputs.tf
And paste the following lines into it:
output "storage_bucket_name" {
  description = "Name of google storage bucket"
  value       = "${google_storage_bucket.storage_bucket.*.name}"
}

output "storage_bucket_self_link" {
  description = "self_link"
  value       = "${google_storage_bucket.storage_bucket.*.self_link}"
}

output "storage_bucket_url" {
  description = "URL"
  value       = "${google_storage_bucket.storage_bucket.*.url}"
}

output "storage_bucket_acl_role_entity_id" {
  description = "ID for storage bucket acl"
  value       = "${google_storage_bucket_acl.storage_bucket_acl_role_entity.*.id}"
}

output "storage_bucket_acl_predefined_acl_id" {
  description = "ID for storage bucket acl"
  value       = "${google_storage_bucket_acl.storage_bucket_acl_predefined_acl.*.id}"
}

output "storage_bucket_iam_binding_etag" {
  description = "etag"
  value       = "${google_storage_bucket_iam_binding.storage_bucket_iam_binding.*.etag}"
}

output "storage_bucket_iam_binding_id" {
  description = "ID"
  value       = "${google_storage_bucket_iam_binding.storage_bucket_iam_binding.*.id}"
}

output "storage_bucket_iam_binding_role" {
  description = "Role"
  value       = "${google_storage_bucket_iam_binding.storage_bucket_iam_binding.*.role}"
}

output "storage_bucket_iam_member_id" {
  description = "ID"
  value       = "${google_storage_bucket_iam_member.storage_bucket_iam_member.*.id}"
}

output "storage_bucket_iam_member_role" {
  description = "Role"
  value       = "${google_storage_bucket_iam_member.storage_bucket_iam_member.*.role}"
}

output "storage_bucket_iam_member_etag" {
  description = "etag"
  value       = "${google_storage_bucket_iam_member.storage_bucket_iam_member.*.etag}"
}

output "storage_bucket_iam_policy_id" {
  description = "ID"
  value       = "${google_storage_bucket_iam_policy.storage_bucket_iam_policy.*.id}"
}

output "storage_bucket_iam_policy_etag" {
  description = "etag"
  value       = "${google_storage_bucket_iam_policy.storage_bucket_iam_policy.*.etag}"
}

output "storage_default_object_acl_id" {
  description = "ID"
  value       = "${google_storage_default_object_acl.storage_default_object_acl.*.id}"
}

output "storage_object_acl_id" {
  description = "ID"
  value       = "${google_storage_object_acl.storage_object_acl.*.id}"
}

output "storage_bucket_object_id" {
  description = "ID"
  value       = "${google_storage_bucket_object.storage_bucket_object.*.id}"
}

output "storage_bucket_object_name" {
  description = "Name"
  value       = "${google_storage_bucket_object.storage_bucket_object.*.name}"
}

output "google_storage_notification_self_link" {
  description = "self_link"
  value       = "${google_storage_notification.storage_notification.*.self_link}"
}
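A calling "playbook" can re-export these module outputs so they show up in `terraform output` after an apply; a small sketch (the output names here are arbitrary, chosen for this example):

```hcl
# outputs.tf in the calling example, next to its main.tf.
output "bucket_name" {
  description = "Name of the bucket created by the module"
  value       = "${module.storage_bucket.storage_bucket_name}"
}

output "bucket_url" {
  description = "Base URL of the bucket"
  value       = "${module.storage_bucket.storage_bucket_url}"
}
```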
Now go to the google_cloud_platform/examples folder and create one more folder to test our masterpiece:
$ mkdir storage_bucket && cd $_
Inside the new folder, open the file:
$ vim main.tf
Paste:
#
# MAINTAINER Vitaliy Natarov "vitaliy.natarov@yahoo.com"
#
terraform {
  required_version = "> 0.9.0"
}

provider "google" {
  credentials = "${file("/Users/captain/.config/gcloud/creds/terraform_creds.json")}"
  project     = "terraform-2018"
  region      = "us-east1"
}

module "storage_bucket" {
  source                     = "../../modules/storage_bucket"
  name                       = "TEST"
  lifecycle_rule_action_type = "Delete"
  versioning_enabled         = false

  #enable_storage_bucket_acl = true
  #bucket                    = "test-sb-stage"
  #
  #enable_storage_bucket_iam_binding = true
  #
  #enable_storage_bucket_iam_member = true
  #members                          = ["solo.metal@bigmir.net"]
  #
  #enable_storage_bucket_iam_policy = true
  #
  #enable_storage_default_object_acl = true
  #
  #enable_storage_object_acl = true
  #role_entity               = ["OWNER:solo.metal@bigmir.net"]
  #
  #enable_storage_bucket_object = true
  #
  #enable_storage_notification = true
}
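To illustrate how the commented-out toggles are meant to be used, here is a sketch that enables the bucket ACL on an already-created bucket (the bucket name follows the module's naming pattern; the OWNER entity is a placeholder, not a real account):

```hcl
module "storage_bucket_acl" {
  source = "../../modules/storage_bucket"
  name   = "TEST"

  # The module names buckets "${lower(name)}-sb-${lower(environment)}",
  # so with the defaults above the bucket is "test-sb-stage".
  enable_storage_bucket_acl = true
  bucket                    = "test-sb-stage"
  role_entity               = ["OWNER:someuser@example.com"]
}
```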
Everything is now written and ready to use. Let's start testing. In the folder with your playbook, run:
$ terraform init
This initializes the project. Next, pull in the module:
$ terraform get
PS: To pick up changes made to the module itself, you can run:
$ terraform get -update
Check that the configuration is valid:
$ terraform validate
Run a dry run:
$ terraform plan
It reported that everything was fine, so the deployment can be launched:
$ terraform apply
As the output shows, everything went smoothly! To delete what was created, run:
$ terraform destroy
I upload all the material to my GitHub account for convenience:
$ git clone https://github.com/SebastianUA/terraform.git
That's all. This article, "Working with Google Cloud Platform (storage bucket) and Terraform in Unix/Linux", is complete.