diff --git a/README.md b/README.md
index 96672d1..eca0567 100644
--- a/README.md
+++ b/README.md
@@ -14,7 +14,7 @@ RustCloud is a rust library which hides the difference between different APIs p
-
+
## Service Types
@@ -33,160 +33,25 @@ RustCloud is a rust library which hides the difference between different APIs p
### AWS
-* EC2 Compute [Link to example](examples/compute/ec2/ec2.md)
+* EC2 Compute [Link to example](examples/aws/compute/ec2.md)
+* EKS Compute [Link to example](examples/aws/compute/eks.md)
* EC2 Storage [Link to example](examples/storage/aws_storage/aws_storage.md)
-* Amazon Elastic Container Service (Container) [Link to example](examples/container/aws_container/aws_container.md)
-* Elastic Load Balancing [Link to example](examples/loadbalancer/aws_loadbalancer/aws_loadbalancer.md)
-* AWS Route53 (DNS) [Link to example](examples/dns/aws_route53/aws_route53.md)
+* Amazon Elastic Container Service (Container) [Link to example](examples/aws/compute/ecs.md)
+* Elastic Load Balancing [Link to example](examples/aws/network/loadbalancer.md)
+* AWS Route53 (DNS) [Link to example](examples/aws/network/dns.md)
+* AWS DynamoDB (Database) [Link to example](examples/aws/database/dynamodb.md)
+* AWS CloudWatch (Monitoring) [Link to example](examples/aws/management/monitoring.md)
+* AWS IAM [Link to example](examples/aws/security/iam.md)
+* AWS Key Management Service (KMS) [Link to example](examples/aws/security/kms.md)
### Google
-* Google Compute [Link to example](examples/compute/gce/gce.md)
-* Google Compute Storage [Link to example](examples/storage/google_storage/google_storage.md)
-* Google Container Service (Container) [Link to example](examples/container/google_container/google_container.md)
-* Google Elastic Load Balancing [Link to example](examples/loadbalancer/google_loadbalancer/google_loadbalancer.md)
-* Google DNS [Link to example](examples/dns/google_dns/google_dns.md)
-
-### DigitalOcean
-
-* DigitalOcean Droplet [Link to example](examples/compute/droplet/droplet.md)
-* DigitalOcean LoadBalancer [Link to example](examples/loadbalancer/digiocean_loadbalancer/digiocean_loadbalancer.md)
-* DigitalOcean Storage [Link to example](examples/storage/digiocean_storage/digiocean_storage.md)
-* DigitalOcean DNS [Link to example](examples/dns/digioceandns/digioceandns.md)
-
-### Ali-cloud
-
-* ECS Compute [Link to example](examples/compute/ecs/ecs.md)
-* ECS Storage [Link to example](examples/storage/ali_storage/ali_storage.md)
-* Alibaba Cloud DNS [Link to example](examples/dns/ali_dns/ali_dns.md)
-* Server Load Balancer [Link to example](examples/loadbalancer/ali_loadbalancer/ali_loadbalancer.md)
-* Container Service [Link to example](examples/container/ali_container/ali_container.md)
-
-### Vultr
-
-* Server [Link to example](examples/compute/vultr_compute/vultr_compute.md)
-* Bare Metal [Link to example](examples/baremetal/vultr_baremetal/vultr_baremetal.md)
-* Block Storage [Link to example](examples/storage/vultr_storage/vultr_storage.md)
-* DNS [Link to example](examples/dns/vultr_dns/vultr_dns.md)
-
-Currently, implementations for other cloud providers are being worked on.
-
-## Install
-
-### Linux (Ubuntu)
-
-1. Install golang.
- ```
- $ sudo apt-get update -y
- $ sudo apt-get install golang -y
- ```
-
-2. Set GOPATH environment variable. Run `gedit ~/.bashrc`.
- Copy the following in your .bashrc file:
- ```
- export GOPATH=$HOME/gopath
- export GOBIN=$HOME/gopath/bin
- ```
-
-3. Test your installation by copying the following piece of code in a file. Save the file as *gotest.go*. Run the file using the command `go run gotest.go`. If that command returns “Hello World!”, then Go is successfully installed and functional.
-```golang
-package main
-import "fmt"
-func main() {
- fmt.Printf("Hello World!\n")
-}
-```
-
-4. Now we need to fetch the gocloud repository and other necessary packages. Run the following commands in order:
-```
-$ go get github.com/cloudlibz/gocloud
-$ go get golang.org/x/oauth2
-$ go get cloud.google.com/go/compute/metadata
-```
-
-5. Create a directory called .gocloud in your HOME directory. Download your AWS, Google and DigitalOcean access credentials and store them in a file in your .gocloud folder.
-
- #### AWS:
- Save your AWS credentials in a file named *amazoncloudconfig.json*.
- ```js
- {
- "AWSAccessKeyID": "xxxxxxxxxxxx",
- "AWSSecretKey": "xxxxxxxxxxxx"
- }
- ```
- #### Google Cloud Services:
- Save your Google Cloud credentials in a file named *googlecloudconfig.json*. The file is downloaded in the required format.
- #### DigitalOcean:
- Save your DigitalOcean credentials in a file named *digioceancloudconfig.json*.
- ```js
- {
- "DigiOceanAccessToken": "xxxxxxxxxxxx"
- }
- ```
- #### Ali-cloud:
- Save your Ali-cloud credentials in a file named *alicloudconfig.json*.
- ```js
- {
- "AliAccessKeyID":"xxxxxxxxxxxx",
- "AliAccessKeySecret":"xxxxxxxxxxxx"
- }
- ```
- #### Vultr:
- Save your Vultr credentials in a file named *vultrconfig.json*.
- ```
- {
- "VultrAPIKey":"xxxxxxxxxxxx"
- }
- ```
-
- You can also set your credentials as environment variables.
- #### AWS:
- ```
- export AWSAccessKeyID = "xxxxxxxxxxxx"
- export AWSSecretKey = "xxxxxxxxxxxx"
- ```
- #### Google Cloud Services:
- ```
- export PrivateKey = "xxxxxxxxxxxx"
- export Type = "xxxxxxxxxxxx"
- export ProjectID = "xxxxxxxxxxxx"
- export PrivateKeyID = "xxxxxxxxxxxx"
- export ClientEmail = "xxxxxxxxxxxx"
- export ClientID = "xxxxxxxxxxxx"
- export AuthURI = "xxxxxxxxxxxx"
- export TokenURI = "xxxxxxxxxxxx"
- export AuthProviderX509CertURL = "xxxxxxxxxxxx"
- export ClientX509CertURL = "xxxxxxxxxxxx"
- ```
- #### DigitalOcean:
- ```
- export DigiOceanAccessToken = "xxxxxxxxxxxx"
- ```
- #### Ali-cloud:
- ```
- export AliAccessKeyID = "xxxxxxxxxxxx"
- export AliAccessKeySecret = "xxxxxxxxxxxx"
- ```
- #### Vultr:
- ```
- export VultrAPIKey = "xxxxxxxxxxxx"
- ```
-
-6. You are all set to use gocloud! Check out the following YouTube videos for more information and usage examples:
-https://youtu.be/4LxsAeoonlY?list=PLOdfztY25UNnxK_0KRRHSngJIyVLDKZxq&t=3
-
-## Development setup
-
-```
-$ git clone https://github.com/cloudlibz/gocloud
-$ cd gocloud
-```
-
-## Unit tests
-
-```
-$ cd gocloud
-$ go test -v ./...
-```
-
-Please make sure to delete all your instances, storage blocks, load balancers, containers, and DNS settings once you run the tests by visiting the respective web portals of the cloud providers.
\ No newline at end of file
+* Google Compute Engine [Link to example](examples/gcp/compute/compute_engine.md)
+* Google Cloud Storage [Link to example](examples/gcp/storage/storage.md)
+* Google Kubernetes Engine [Link to example](examples/gcp/compute/kubenetes.md)
+* Google Cloud Load Balancing [Link to example](examples/gcp/network/loadbalancer.md)
+* Google Cloud DNS [Link to example](examples/gcp/network/dns.md)
+* Google Bigtable [Link to example](examples/gcp/database/bigtable.md)
+* Google Cloud Pub/Sub (Notifications) [Link to example](examples/gcp/app_services/notifications.md)
+
+Currently, implementations for other cloud providers are being worked on.
\ No newline at end of file
diff --git a/assets/Rustcloud.png b/assets/Rustcloud.png
new file mode 100644
index 0000000..7c39b01
Binary files /dev/null and b/assets/Rustcloud.png differ
diff --git a/rustcloud/src/gcp/gcp_apis/app_services/gcp_notification_service.rs b/rustcloud/src/gcp/gcp_apis/app_services/gcp_notification_service.rs
index dd46af7..a5943e8 100644
--- a/rustcloud/src/gcp/gcp_apis/app_services/gcp_notification_service.rs
+++ b/rustcloud/src/gcp/gcp_apis/app_services/gcp_notification_service.rs
@@ -34,19 +34,31 @@ impl Googlenotification {
list_topic_request = list_topic_request.query(&[("pageToken", page_token)]);
}
- let token = retrieve_token().await?;
+ let token = retrieve_token().await.map_err(|e| format!("Failed to retrieve token: {}", e))?;
list_topic_request = list_topic_request
.header("Content-Type", "application/json")
.header(AUTHORIZATION, format!("Bearer {}", token));
- let response = list_topic_request.send().await?;
- let status = response.status().as_u16().to_string();
- let body = response.text().await?;
-
+ let response = list_topic_request.send().await.map_err(|e| format!("Failed to send request: {}", e))?;
+ let status = response.status();
+ if !status.is_success() {
+ let response_text = response.text().await?;
+ println!("{:?}", response_text);
+ return Err(format!("Request failed with status: {}", status).into());
+ }
+
+ // Parse the response body
+ let body = response
+ .text()
+ .await
+ .map_err(|e| format!("Failed to read response body: {}", e))?;
+
+ println!("{:?}", body);
+
let mut list_topic_response = HashMap::new();
- list_topic_response.insert("status".to_string(), status);
+ list_topic_response.insert("status".to_string(), status.as_u16().to_string());
list_topic_response.insert("body".to_string(), body);
-
+        println!("{:?}", list_topic_response);
Ok(list_topic_response)
}
@@ -58,20 +70,33 @@ impl Googlenotification {
let topic = request.get("Topic").expect("Topic is required");
let url = format!("{}/v1/projects/{}/topics/{}", self.base_url, project, topic);
- let token = retrieve_token().await?;
+ let token = retrieve_token().await.map_err(|e| format!("Failed to retrieve token: {}", e))?;
let response = self
.client
.request(Method::GET, &url)
.header("Content-Type", "application/json")
.header(AUTHORIZATION, format!("Bearer {}", token))
.send()
- .await?;
-
- let status = response.status().as_u16().to_string();
- let body = response.text().await?;
+ .await
+ .map_err(|e| format!("Failed to send request: {}", e))?;
+
+ let status = response.status();
+ if !status.is_success() {
+ let response_text = response.text().await?;
+ println!("{:?}", response_text);
+ return Err(format!("Request failed with status: {}", status).into());
+ }
+
+ // Parse the response body
+ let body = response
+ .text()
+ .await
+ .map_err(|e| format!("Failed to read response body: {}", e))?;
+
+ println!("{:?}", body);
let mut get_topic_response = HashMap::new();
- get_topic_response.insert("status".to_string(), status);
+ get_topic_response.insert("status".to_string(), status.as_u16().to_string());
get_topic_response.insert("body".to_string(), body);
Ok(get_topic_response)
@@ -85,20 +110,33 @@ impl Googlenotification {
let topic = request.get("Topic").expect("Topic is required");
let url = format!("{}/v1/projects/{}/topics/{}", self.base_url, project, topic);
- let token = retrieve_token().await?;
+ let token = retrieve_token().await.map_err(|e| format!("Failed to retrieve token: {}", e))?;
let response = self
.client
.request(Method::DELETE, &url)
.header("Content-Type", "application/json")
.header(AUTHORIZATION, format!("Bearer {}", token))
.send()
- .await?;
-
- let status = response.status().as_u16().to_string();
- let body = response.text().await?;
+ .await
+ .map_err(|e| format!("Failed to send request: {}", e))?;
+
+ let status = response.status();
+ if !status.is_success() {
+ let response_text = response.text().await?;
+ println!("{:?}", response_text);
+ return Err(format!("Request failed with status: {}", status).into());
+ }
+
+ // Parse the response body
+ let body = response
+ .text()
+ .await
+ .map_err(|e| format!("Failed to read response body: {}", e))?;
+
+ println!("{:?}", body);
let mut delete_topic_response = HashMap::new();
- delete_topic_response.insert("status".to_string(), status);
+ delete_topic_response.insert("status".to_string(), status.as_u16().to_string());
delete_topic_response.insert("body".to_string(), body);
Ok(delete_topic_response)
@@ -115,7 +153,7 @@ impl Googlenotification {
        let create_topic_json_map: HashMap<String, String> = HashMap::new();
let create_topic_json = json!(create_topic_json_map).to_string();
- let token = retrieve_token().await?;
+ let token = retrieve_token().await.map_err(|e| format!("Failed to retrieve token: {}", e))?;
let response = self
.client
.request(Method::PUT, &url)
@@ -123,13 +161,26 @@ impl Googlenotification {
.header(AUTHORIZATION, format!("Bearer {}", token))
.body(create_topic_json)
.send()
- .await?;
-
- let status = response.status().as_u16().to_string();
- let body = response.text().await?;
+ .await
+ .map_err(|e| format!("Failed to send request: {}", e))?;
+
+ let status = response.status();
+ if !status.is_success() {
+ let response_text = response.text().await?;
+ println!("{:?}", response_text);
+ return Err(format!("Request failed with status: {}", status).into());
+ }
+
+ // Parse the response body
+ let body = response
+ .text()
+ .await
+ .map_err(|e| format!("Failed to read response body: {}", e))?;
+
+ println!("{:?}", body);
let mut create_topic_response = HashMap::new();
- create_topic_response.insert("status".to_string(), status);
+ create_topic_response.insert("status".to_string(), status.as_u16().to_string());
create_topic_response.insert("body".to_string(), body);
Ok(create_topic_response)
diff --git a/rustcloud/src/gcp/gcp_apis/artificial_intelligence/gcp_automl.rs b/rustcloud/src/gcp/gcp_apis/artificial_intelligence/gcp_automl.rs
deleted file mode 100644
index 205f73e..0000000
--- a/rustcloud/src/gcp/gcp_apis/artificial_intelligence/gcp_automl.rs
+++ /dev/null
@@ -1,287 +0,0 @@
-use crate::gcp::gcp_apis::auth::gcp_auth::retrieve_token;
-use crate::gcp::types::artificial_intelligence::gcp_automl_types::*;
-use reqwest::{header::AUTHORIZATION, Client, Response};
-use serde_json::to_string;
-use std::collections::HashMap;
-
-pub struct AutoML {
- client: Client,
- base_url: String,
- project_id: String,
-}
-
-impl AutoML {
- pub fn new(project_id: &str) -> Self {
- Self {
- client: Client::new(),
- base_url: "https://automl.googleapis.com".to_string(),
- project_id: project_id.to_string(),
- }
- }
-
- pub async fn create_dataset(
- &self,
- location: &str,
- name: &str,
-    ) -> Result<Response, Box<dyn std::error::Error>> {
- let url = format!(
- "{}/v1/projects/{}/locations/{}/datasets",
- self.base_url, self.project_id, location
- );
- let request = CreateDatasetRequest {
- parent: format!("projects/{}/locations/{}", self.project_id, location),
- dataset: Dataset {
- display_name: name.to_string(),
- tables_dataset_metadata: HashMap::new(),
- },
- };
- let body = to_string(&request).unwrap();
- let token = retrieve_token().await?;
- self.client
- .post(&url)
- .header(AUTHORIZATION, format!("Bearer {}", token))
- .body(body)
- .send()
- .await
- .map_err(|e| e.into())
- }
-
- pub async fn get_dataset(
- &self,
- location: &str,
- dataset_id: &str,
-    ) -> Result<Response, Box<dyn std::error::Error>> {
- let url = format!(
- "{}/v1/projects/{}/locations/{}/datasets/{}",
- self.base_url, self.project_id, location, dataset_id
- );
- let token = retrieve_token().await?;
- self.client
- .get(&url)
- .header(AUTHORIZATION, format!("Bearer {}", token))
- .send()
- .await
- .map_err(|e| e.into())
- }
-
- pub async fn import_data_set(
- &self,
- location: &str,
- dataset_id: &str,
-        uris: Vec<String>,
-    ) -> Result<Response, Box<dyn std::error::Error>> {
- let url = format!(
- "{}/v1/projects/{}/locations/{}/datasets/{}:importData",
- self.base_url, self.project_id, location, dataset_id
- );
- let request = ImportDataSetRequest {
- name: format!(
- "projects/{}/locations/{}/datasets/{}",
- self.project_id, location, dataset_id
- ),
- input_config: InputConfig {
- gcs_source: GcsSource { input_uris: uris },
- },
- };
- let body = to_string(&request).unwrap();
- let token = retrieve_token().await?;
- self.client
- .post(&url)
- .header(AUTHORIZATION, format!("Bearer {}", token))
- .body(body)
- .send()
- .await
- .map_err(|e| e.into())
- }
-
- pub async fn list_models(
- &self,
- location: &str,
-    ) -> Result<Response, Box<dyn std::error::Error>> {
- let url = format!(
- "{}/v1/projects/{}/locations/{}/models",
- self.base_url, self.project_id, location
- );
- let token = retrieve_token().await?;
- self.client
- .get(&url)
- .header(AUTHORIZATION, format!("Bearer {}", token))
- .send()
- .await
- .map_err(|e| e.into())
- }
-
- pub async fn create_model(
- &self,
- location: &str,
- dataset_id: &str,
- model_name: &str,
- column_id: &str,
- train_budget: i64,
-    ) -> Result<Response, Box<dyn std::error::Error>> {
- let url = format!(
- "{}/v1/projects/{}/locations/{}/models",
- self.base_url, self.project_id, location
- );
- let request = CreateModelRequest {
- parent: format!("projects/{}/locations/{}", self.project_id, location),
- model: Model {
- dataset_id: dataset_id.to_string(),
- display_name: model_name.to_string(),
- tables_model_metadata: TablesModelMetadata {
- target_column_spec: TargetColumnSpec {
- name: column_id.to_string(),
- },
- train_budget_milli_node_hours: train_budget,
- },
- },
- };
- let body = to_string(&request).unwrap();
- let token = retrieve_token().await?;
- self.client
- .post(&url)
- .header(AUTHORIZATION, format!("Bearer {}", token))
- .body(body)
- .send()
- .await
- .map_err(|e| e.into())
- }
-
- pub async fn deploy_model(
- &self,
- location: &str,
- model_id: &str,
-    ) -> Result<Response, Box<dyn std::error::Error>> {
- let url = format!(
- "{}/v1/projects/{}/locations/{}/models/{}:deploy",
- self.base_url, self.project_id, location, model_id
- );
- let request = DeployModelRequest {
- name: format!(
- "projects/{}/locations/{}/models/{}",
- self.project_id, location, model_id
- ),
- };
- let body = to_string(&request).unwrap();
- let token = retrieve_token().await?;
- self.client
- .post(&url)
- .header(AUTHORIZATION, format!("Bearer {}", token))
- .body(body)
- .send()
- .await
- .map_err(|e| e.into())
- }
-
- pub async fn undeploy_model(
- &self,
- location: &str,
- model_id: &str,
-    ) -> Result<Response, Box<dyn std::error::Error>> {
- let url = format!(
- "{}/v1/projects/{}/locations/{}/models/{}:undeploy",
- self.base_url, self.project_id, location, model_id
- );
- let request = UndeployModelRequest {
- name: format!(
- "projects/{}/locations/{}/models/{}",
- self.project_id, location, model_id
- ),
- };
- let body = to_string(&request).unwrap();
- let token = retrieve_token().await?;
- self.client
- .post(&url)
- .header(AUTHORIZATION, format!("Bearer {}", token))
- .body(body)
- .send()
- .await
- .map_err(|e| e.into())
- }
-
- pub async fn get_model(
- &self,
- location: &str,
- model_id: &str,
-    ) -> Result<Response, Box<dyn std::error::Error>> {
- let url = format!(
- "{}/v1/projects/{}/locations/{}/models/{}",
- self.base_url, self.project_id, location, model_id
- );
- let token = retrieve_token().await?;
- self.client
- .get(&url)
- .header(AUTHORIZATION, format!("Bearer {}", token))
- .send()
- .await
- .map_err(|e| e.into())
- }
-
- pub async fn export_dataset(
- &self,
- location: &str,
- dataset_id: &str,
- gcs_uri: &str,
-    ) -> Result<Response, Box<dyn std::error::Error>> {
- let url = format!(
- "{}/v1/projects/{}/locations/{}/datasets/{}:exportData",
- self.base_url, self.project_id, location, dataset_id
- );
- let request = ExportDatasetRequest {
- name: format!(
- "projects/{}/locations/{}/datasets/{}",
- self.project_id, location, dataset_id
- ),
- output_config: OutputConfig {
- gcs_destination: GcsDestination {
- output_uri_prefix: gcs_uri.to_string(),
- },
- },
- };
- let body = to_string(&request).unwrap();
- let token = retrieve_token().await?;
- self.client
- .post(&url)
- .header(AUTHORIZATION, format!("Bearer {}", token))
- .body(body)
- .send()
- .await
- .map_err(|e| e.into())
- }
-
- pub async fn delete_model(
- &self,
- location: &str,
- model_id: &str,
-    ) -> Result<Response, Box<dyn std::error::Error>> {
- let url = format!(
- "{}/v1/projects/{}/locations/{}/models/{}",
- self.base_url, self.project_id, location, model_id
- );
- let token = retrieve_token().await?;
- self.client
- .delete(&url)
- .header(AUTHORIZATION, format!("Bearer {}", token))
- .send()
- .await
- .map_err(|e| e.into())
- }
-
- pub async fn delete_dataset(
- &self,
- location: &str,
- dataset_id: &str,
-    ) -> Result<Response, Box<dyn std::error::Error>> {
- let url = format!(
- "{}/v1/projects/{}/locations/{}/datasets/{}",
- self.base_url, self.project_id, location, dataset_id
- );
- let token = retrieve_token().await?;
- self.client
- .delete(&url)
- .header(AUTHORIZATION, format!("Bearer {}", token))
- .send()
- .await
- .map_err(|e| e.into())
- }
-}
diff --git a/rustcloud/src/gcp/gcp_apis/artificial_intelligence/mod.rs b/rustcloud/src/gcp/gcp_apis/artificial_intelligence/mod.rs
deleted file mode 100644
index 594faf2..0000000
--- a/rustcloud/src/gcp/gcp_apis/artificial_intelligence/mod.rs
+++ /dev/null
@@ -1 +0,0 @@
-pub mod gcp_automl;
\ No newline at end of file
diff --git a/rustcloud/src/gcp/gcp_apis/auth/gcp_auth.rs b/rustcloud/src/gcp/gcp_apis/auth/gcp_auth.rs
index 419244c..04d791e 100644
--- a/rustcloud/src/gcp/gcp_apis/auth/gcp_auth.rs
+++ b/rustcloud/src/gcp/gcp_apis/auth/gcp_auth.rs
@@ -1,8 +1,10 @@
use gcp_auth::{CustomServiceAccount, TokenProvider};
-use std::path::PathBuf;
+use std::path::PathBuf;
+use std::env;
+
pub async fn retrieve_token() -> Result<String, Box<dyn std::error::Error>> {
- let credentials_path = PathBuf::from("service-account.json");
+ let credentials_path = PathBuf::from(env::var("GOOGLE_APPLICATION_CREDENTIALS")?);
let service_account = CustomServiceAccount::from_file(credentials_path)?;
let scopes = &["https://www.googleapis.com/auth/cloud-platform"];
let token = service_account.token(scopes).await?;
diff --git a/rustcloud/src/gcp/gcp_apis/compute/gcp_compute_engine.rs b/rustcloud/src/gcp/gcp_apis/compute/gcp_compute_engine.rs
index b438a5a..5ebc37e 100644
--- a/rustcloud/src/gcp/gcp_apis/compute/gcp_compute_engine.rs
+++ b/rustcloud/src/gcp/gcp_apis/compute/gcp_compute_engine.rs
@@ -24,123 +24,162 @@ impl GCE {
let mut project_id = String::new();
let mut zone = String::new();
let mut gce_instance = HashMap::new();
-
+
+ // Iterate through the request to populate fields
for (key, value) in request {
match key.as_str() {
- "projectid" => project_id = value.as_str().unwrap().to_string(),
+ "projectid" => {
+ project_id = value
+ .as_str()
+ .ok_or("Invalid or missing 'projectid'")?
+ .to_string();
+ }
"Zone" => {
- zone = value.as_str().unwrap().to_string();
- gce_instance.insert("Zone", value);
+ zone = value.as_str().ok_or("Invalid or missing 'Zone'")?.to_string();
+ gce_instance.insert("zone", value);
}
"selfLink" => {
gce_instance.insert("selfLink", value);
}
"Description" => {
- gce_instance.insert("Description", value);
+ gce_instance.insert("description", value);
}
"CanIPForward" => {
- gce_instance.insert("CanIPForward", value);
+ gce_instance.insert("canIPForward", value);
}
"Name" => {
- gce_instance.insert("Name", value);
+ gce_instance.insert("name", value);
}
"MachineType" => {
- gce_instance.insert("MachineType", value);
+ gce_instance.insert("machineType", value);
}
- "disk" => {
- let disk_param = value.as_array().unwrap();
+ "Disk" => {
+ let disk_param = value
+ .as_array()
+ .ok_or("Invalid 'disk' field, expected an array")?;
let mut disks = vec![];
-
+
for disk_value in disk_param {
- let disk_map = disk_value.as_object().unwrap();
+ let disk_map = disk_value
+ .as_object()
+ .ok_or("Invalid 'disk' entry, expected an object")?;
let mut disk = HashMap::new();
let mut initialize_param = HashMap::new();
-
+
for (disk_key, disk_val) in disk_map {
match disk_key.as_str() {
"Type" => {
- disk.insert("Type", disk_val.clone());
+ disk.insert("type", disk_val.clone());
}
"Boot" => {
- disk.insert("Boot", disk_val.clone());
+ disk.insert("boot", disk_val.clone());
}
"Mode" => {
- disk.insert("Mode", disk_val.clone());
+ disk.insert("mode", disk_val.clone());
}
"AutoDelete" => {
- disk.insert("AutoDelete", disk_val.clone());
+ disk.insert("autoDelete", disk_val.clone());
}
"DeviceName" => {
- disk.insert("DeviceName", disk_val.clone());
+ disk.insert("deviceName", disk_val.clone());
}
"InitializeParams" => {
- let init_params = disk_val.as_object().unwrap();
- initialize_param
- .insert("SourceImage", init_params["SourceImage"].clone());
- initialize_param
- .insert("DiskType", init_params["DiskType"].clone());
- initialize_param
- .insert("DiskSizeGb", init_params["DiskSizeGb"].clone());
- disk.insert("InitializeParams", json!(initialize_param));
+ let init_params = disk_val
+ .as_object()
+ .ok_or("Invalid 'InitializeParams', expected an object")?;
+ initialize_param.insert(
+ "sourceImage",
+ init_params.get("SourceImage")
+ .ok_or("Missing 'SourceImage' in 'InitializeParams'")?
+ .clone(),
+ );
+ initialize_param.insert(
+ "diskType",
+ init_params.get("DiskType")
+ .ok_or("Missing 'DiskType' in 'InitializeParams'")?
+ .clone(),
+ );
+ initialize_param.insert(
+ "diskSizeGb",
+ init_params.get("DiskSizeGb")
+ .ok_or("Missing 'DiskSizeGb' in 'InitializeParams'")?
+ .clone(),
+ );
+ disk.insert("initializeParams", json!(initialize_param));
}
_ => {}
}
}
disks.push(json!(disk));
}
- gce_instance.insert("Disks", json!(disks));
+ gce_instance.insert("disks", json!(disks));
}
"NetworkInterfaces" => {
- let network_interfaces_param = value.as_array().unwrap();
+ let network_interfaces_param = value
+ .as_array()
+ .ok_or("Invalid 'NetworkInterfaces' field, expected an array")?;
let mut network_interfaces = vec![];
-
+
for network_interface_value in network_interfaces_param {
- let network_interface_map = network_interface_value.as_object().unwrap();
+ let network_interface_map = network_interface_value
+ .as_object()
+ .ok_or("Invalid 'NetworkInterface' entry, expected an object")?;
let mut network_interface = HashMap::new();
let mut access_configs = vec![];
-
- for (network_interface_key, network_interface_val) in network_interface_map
- {
+
+ for (network_interface_key, network_interface_val) in network_interface_map {
match network_interface_key.as_str() {
"Network" => {
- network_interface
- .insert("Network", network_interface_val.clone());
+ network_interface.insert("network", network_interface_val.clone());
}
"Subnetwork" => {
network_interface
- .insert("Subnetwork", network_interface_val.clone());
+ .insert("subnetwork", network_interface_val.clone());
}
"AccessConfigs" => {
- let access_configs_param =
- network_interface_val.as_array().unwrap();
+ let access_configs_param = network_interface_val
+ .as_array()
+ .ok_or("Invalid 'AccessConfigs', expected an array")?;
for access_config_value in access_configs_param {
- let access_config_map =
- access_config_value.as_object().unwrap();
+ let access_config_map = access_config_value
+ .as_object()
+ .ok_or("Invalid 'AccessConfig', expected an object")?;
let mut access_config = HashMap::new();
- access_config
- .insert("Name", access_config_map["Name"].clone());
- access_config
- .insert("Type", access_config_map["Type"].clone());
+ access_config.insert(
+ "name",
+ access_config_map
+ .get("Name")
+ .ok_or("Missing 'Name' in 'AccessConfig'")?
+ .clone(),
+ );
+ access_config.insert(
+ "type",
+ access_config_map
+ .get("Type")
+ .ok_or("Missing 'Type' in 'AccessConfig'")?
+ .clone(),
+ );
access_configs.push(json!(access_config));
}
- network_interface
- .insert("AccessConfigs", json!(access_configs));
+ network_interface.insert("accessConfigs", json!(access_configs));
}
_ => {}
}
}
network_interfaces.push(json!(network_interface));
}
- gce_instance.insert("NetworkInterfaces", json!(network_interfaces));
+ gce_instance.insert("networkInterfaces", json!(network_interfaces));
}
"scheduling" => {
- let scheduling_param = value.as_object().unwrap();
+ let scheduling_param = value
+ .as_object()
+ .ok_or("Invalid 'scheduling' field, expected an object")?;
let mut scheduling = HashMap::new();
-
+
for (scheduling_key, scheduling_val) in scheduling_param {
match scheduling_key.as_str() {
"Preemptible" => {
- scheduling.insert("Preemptible", scheduling_val.clone());
+ scheduling.insert("preemptible", scheduling_val.clone());
}
"onHostMaintenance" => {
scheduling.insert("onHostMaintenance", scheduling_val.clone());
@@ -151,19 +190,28 @@ impl GCE {
_ => {}
}
}
- gce_instance.insert("Scheduling", json!(scheduling));
+ gce_instance.insert("scheduling", json!(scheduling));
}
_ => {}
}
}
-
- let gce_instance_json = serde_json::to_string(&gce_instance).unwrap();
+
+ // Convert gce_instance to JSON string
+ let gce_instance_json = serde_json::to_string(&gce_instance)
+ .map_err(|e| format!("Failed to serialize GCE instance: {}", e))?;
+
+ // Construct the URL for the request
let url = format!(
"{}/projects/{}/zones/{}/instances",
self.base_url, project_id, zone
);
-
- let token = retrieve_token().await?;
+
+ // Retrieve the authentication token
+ let token = retrieve_token()
+ .await
+ .map_err(|e| format!("Failed to retrieve token: {}", e))?;
+
+ // Send the HTTP request
let response = self
.client
.post(&url)
@@ -171,108 +219,184 @@ impl GCE {
.header(AUTHORIZATION, format!("Bearer {}", token))
.body(gce_instance_json)
.send()
- .await?;
-
- let status = response.status().as_u16().to_string();
- let body = response.text().await?;
-
+ .await
+ .map_err(|e| format!("Failed to send request: {}", e))?;
+
+ // Check the HTTP response status
+ let status = response.status();
+ if !status.is_success() {
+ let response_text = response.text().await?;
+ println!("{:?}", response_text);
+ return Err(format!("Request failed with status: {}", status).into());
+ }
+
+ // Parse the response body
+ let body = response
+ .text()
+ .await
+ .map_err(|e| format!("Failed to read response body: {}", e))?;
+
+ println!("{:?}", body);
+
+ // Construct the response map
let mut create_node_response = HashMap::new();
- create_node_response.insert("status".to_string(), status);
+ create_node_response.insert("status".to_string(), status.as_u16().to_string());
create_node_response.insert("body".to_string(), body);
-
+
Ok(create_node_response)
}
-
+
pub async fn start_node(
&self,
        request: HashMap<String, String>,
    ) -> Result<HashMap<String, String>, Box<dyn std::error::Error>> {
- let project_id = request.get("projectid").unwrap();
- let zone = request.get("Zone").unwrap();
- let instance = request.get("instance").unwrap();
+ let project_id = request
+ .get("projectid")
+ .ok_or("Missing 'projectid' in request")?;
+ let zone = request
+ .get("Zone")
+ .ok_or("Missing 'Zone' in request")?;
+ let instance = request
+ .get("instance")
+ .ok_or("Missing 'instance' in request")?;
+
let url = format!(
- "{}/v1/projects/{}/zones/{}/instances/{}/start",
+ "{}/projects/{}/zones/{}/instances/{}/start",
self.base_url, project_id, zone, instance
);
+
+ let token = retrieve_token()
+ .await
+ .map_err(|e| format!("Failed to retrieve token: {}", e))?;
+
+ let body = "";
- let token = retrieve_token().await?;
let response = self
.client
.post(&url)
.header("Content-Type", "application/json")
+ .header("Content-Length", body.len().to_string())
.header(AUTHORIZATION, format!("Bearer {}", token))
.send()
- .await?;
-
- let status = response.status().as_u16().to_string();
- let body = response.text().await?;
-
+ .await
+ .map_err(|e| format!("Failed to send request: {}", e))?;
+
+ let status = response.status();
+ if !status.is_success() {
+ let response_text = response.text().await?;
+ println!("{:?}", response_text);
+ return Err(format!("Request failed with status: {}", status).into());
+ }
+
+ let body = response
+ .text()
+ .await
+ .map_err(|e| format!("Failed to read response body: {}", e))?;
+ println!("{:?}", body);
let mut start_node_response = HashMap::new();
- start_node_response.insert("status".to_string(), status);
+ start_node_response.insert("status".to_string(), status.as_u16().to_string());
start_node_response.insert("body".to_string(), body);
-
+
Ok(start_node_response)
}
-
+
pub async fn stop_node(
&self,
        request: HashMap<String, String>,
    ) -> Result<HashMap<String, String>, Box<dyn std::error::Error>> {
- let project_id = request.get("projectid").unwrap();
- let zone = request.get("Zone").unwrap();
- let instance = request.get("instance").unwrap();
+ let project_id = request
+ .get("projectid")
+ .ok_or("Missing 'projectid' in request")?;
+ let zone = request
+ .get("Zone")
+ .ok_or("Missing 'Zone' in request")?;
+ let instance = request
+ .get("instance")
+ .ok_or("Missing 'instance' in request")?;
+
let url = format!(
"{}/projects/{}/zones/{}/instances/{}/stop",
self.base_url, project_id, zone, instance
);
-
- let token = retrieve_token().await?;
+
+ let token = retrieve_token()
+ .await
+ .map_err(|e| format!("Failed to retrieve token: {}", e))?;
+ let body = "";
let response = self
.client
.post(&url)
.header("Content-Type", "application/json")
+ .header("Content-Length", body.len().to_string())
.header(AUTHORIZATION, format!("Bearer {}", token))
.send()
- .await?;
-
- let status = response.status().as_u16().to_string();
- let body = response.text().await?;
-
+ .await
+ .map_err(|e| format!("Failed to send request: {}", e))?;
+
+ let status = response.status();
+ if !status.is_success() {
+ let response_text = response.text().await?;
+ println!("{:?}", response_text);
+ return Err(format!("Request failed with status: {}", status).into());
+ }
+
+ let body = response
+ .text()
+ .await
+ .map_err(|e| format!("Failed to read response body: {}", e))?;
+ println!("{:?}", body);
let mut stop_node_response = HashMap::new();
- stop_node_response.insert("status".to_string(), status);
+ stop_node_response.insert("status".to_string(), status.as_u16().to_string());
stop_node_response.insert("body".to_string(), body);
-
+
Ok(stop_node_response)
}
-
+
pub async fn delete_node(
&self,
request: HashMap<String, String>,
) -> Result<HashMap<String, String>, Box<dyn std::error::Error>> {
- let project_id = request.get("projectid").unwrap();
- let zone = request.get("Zone").unwrap();
- let instance = request.get("instance").unwrap();
+ let project_id = request.get("projectid")
+ .ok_or("Missing 'projectid' in request")?;
+ let zone = request.get("Zone")
+ .ok_or("Missing 'Zone' in request")?;
+ let instance = request.get("instance")
+ .ok_or("Missing 'instance' in request")?;
let url = format!(
"{}/projects/{}/zones/{}/instances/{}",
self.base_url, project_id, zone, instance
);
- let token = retrieve_token().await?;
+ let token = retrieve_token()
+ .await
+ .map_err(|e| format!("Failed to retrieve token: {}", e))?;
let response = self
.client
.delete(&url)
.header("Content-Type", "application/json")
.header(AUTHORIZATION, format!("Bearer {}", token))
.send()
- .await?;
+ .await
+ .map_err(|e| format!("Failed to send request: {}", e))?;
+
+ let status = response.status();
+ if !status.is_success() {
+ let response_text = response.text().await?;
+ println!("{:?}", response_text);
+ return Err(format!("Request failed with status: {}", status).into());
+ }
- let status = response.status().as_u16().to_string();
- let body = response.text().await?;
+ let body = response
+ .text()
+ .await
+ .map_err(|e| format!("Failed to read response body: {}", e))?;
+
+ println!("{:?}", body);
let mut delete_node_response = HashMap::new();
- delete_node_response.insert("status".to_string(), status);
+ delete_node_response.insert("status".to_string(), status.as_u16().to_string());
delete_node_response.insert("body".to_string(), body);
-
+
Ok(delete_node_response)
}
@@ -280,25 +404,40 @@ impl GCE {
&self,
request: HashMap<String, String>,
) -> Result<HashMap<String, String>, Box<dyn std::error::Error>> {
- let project_id = request.get("projectid").unwrap();
- let zone = request.get("Zone").unwrap();
- let instance = request.get("instance").unwrap();
+ let project_id = request.get("projectid")
+ .ok_or("Missing 'projectid' in request")?;
+ let zone = request.get("Zone").ok_or("Missing 'Zone' in request")?;
+ let instance = request.get("instance").ok_or("Missing 'instance' in request")?;
let url = format!(
"{}/projects/{}/zones/{}/instances/{}/reset",
self.base_url, project_id, zone, instance
);
- let token = retrieve_token().await?;
+ let token = retrieve_token()
+ .await
+ .map_err(|e| format!("Failed to retrieve token: {}", e))?;
let response = self
.client
.post(&url)
.header("Content-Type", "application/json")
.header(AUTHORIZATION, format!("Bearer {}", token))
.send()
- .await?;
+ .await
+ .map_err(|e| format!("Failed to send request: {}", e))?;
+
+ let status = response.status();
+ if !status.is_success() {
+ let response_text = response.text().await?;
+ println!("{:?}", response_text);
+ return Err(format!("Request failed with status: {}", status).into());
+ }
let status = response.status().as_u16().to_string();
- let body = response.text().await?;
+ let body = response
+ .text()
+ .await
+ .map_err(|e| format!("Failed to read response body: {}", e))?;
+
let mut reboot_node_response = HashMap::new();
reboot_node_response.insert("status".to_string(), status);
@@ -311,29 +450,50 @@ impl GCE {
&self,
request: HashMap<String, String>,
) -> Result<HashMap<String, String>, Box<dyn std::error::Error>> {
- let project_id = request.get("projectid").unwrap();
- let zone = request.get("Zone").unwrap();
+ // Retrieve project ID and zone from the request map
+ let project_id = request
+ .get("projectid")
+ .ok_or("Missing 'projectid' in request")?;
+ let zone = request
+ .get("Zone")
+ .ok_or("Missing 'Zone' in request")?;
+
let url = format!(
"{}/projects/{}/zones/{}/instances/",
self.base_url, project_id, zone
);
-
- let token = retrieve_token().await?;
+
+ let token = retrieve_token()
+ .await
+ .map_err(|e| format!("Failed to retrieve token: {}", e))?;
+
let response = self
.client
.request(Method::GET, &url)
.header("Content-Type", "application/json")
.header(AUTHORIZATION, format!("Bearer {}", token))
.send()
- .await?;
-
- let status = response.status().as_u16().to_string();
- let body = response.text().await?;
+ .await
+ .map_err(|e| format!("Failed to send request: {}", e))?;
+
+ let status = response.status();
+ if !status.is_success() {
+ let response_text = response.text().await?;
+ println!("{:?}", response_text);
+ return Err(format!("Request failed with status: {}", status).into());
+ }
+
+ let body = response
+ .text()
+ .await
+ .map_err(|e| format!("Failed to read response body: {}", e))?;
+
+ println!("{:?}", body);
let mut list_node_response = HashMap::new();
- list_node_response.insert("status".to_string(), status);
+ list_node_response.insert("status".to_string(), status.as_u16().to_string());
list_node_response.insert("body".to_string(), body);
-
+
Ok(list_node_response)
}
}
diff --git a/rustcloud/src/gcp/gcp_apis/compute/gcp_container.rs b/rustcloud/src/gcp/gcp_apis/compute/gcp_container.rs
new file mode 100644
index 0000000..582d08e
--- /dev/null
+++ b/rustcloud/src/gcp/gcp_apis/compute/gcp_container.rs
@@ -0,0 +1,681 @@
+use crate::gcp::gcp_apis::auth::gcp_auth::retrieve_token;
+use crate::gcp::types::compute::gcp_container_types::*;
+use reqwest::{header::AUTHORIZATION, Client, Error, Response};
+use serde_json::to_string;
+use std::collections::HashMap;
+
+pub struct GCPContainerClient {
+ client: Client,
+ base_url: String,
+}
+
+impl GCPContainerClient {
+ pub fn new() -> Self {
+ Self {
+ client: Client::new(),
+ base_url: "https://container.googleapis.com".to_string(),
+ }
+ }
+
+ pub async fn create_cluster(
+ &self,
+ request: HashMap<String, serde_json::Value>,
+ ) -> Result<HashMap<String, String>, Box<dyn std::error::Error>> {
+ let mut option = CreateCluster::default();
+ let mut project_id = String::new();
+ let mut zone = String::new();
+
+ // Extract parameters from the request map
+ for (key, value) in request {
+ match key.as_str() {
+ "Project" => {
+ project_id = value
+ .as_str()
+ .ok_or("Invalid or missing 'Project'")?
+ .to_string();
+ }
+ "Name" => {
+ option.name = value
+ .as_str()
+ .ok_or("Invalid or missing 'Name'")?
+ .to_string();
+ }
+ "Zone" => {
+ zone = value
+ .as_str()
+ .ok_or("Invalid or missing 'Zone'")?
+ .to_string();
+ }
+ "network" => {
+ option.network = value
+ .as_str()
+ .ok_or("Invalid or missing 'network'")?
+ .to_string();
+ }
+ "loggingService" => {
+ option.logging_service = value
+ .as_str()
+ .ok_or("Invalid or missing 'loggingService'")?
+ .to_string();
+ }
+ "monitoringService" => {
+ option.monitoring_service = value
+ .as_str()
+ .ok_or("Invalid or missing 'monitoringService'")?
+ .to_string();
+ }
+ "initialClusterVersion" => {
+ option.initial_cluster_version = value
+ .as_str()
+ .ok_or("Invalid or missing 'initialClusterVersion'")?
+ .to_string();
+ }
+ "subnetwork" => {
+ option.subnetwork = value
+ .as_str()
+ .ok_or("Invalid or missing 'subnetwork'")?
+ .to_string();
+ }
+ "masterAuth" => {
+ let master_auth = value
+ .as_object()
+ .ok_or("Invalid or missing 'masterAuth', expected an object")?;
+
+ if let Some(username) = master_auth.get("username") {
+ option.master_auth.username = username
+ .as_str()
+ .ok_or("Invalid or missing 'username' in 'masterAuth'")?
+ .to_string();
+ }
+
+ if let Some(client_certificate_config) =
+ master_auth.get("clientCertificateConfig")
+ {
+ let client_cert_config = client_certificate_config
+ .as_object()
+ .ok_or("Invalid or missing 'clientCertificateConfig', expected an object")?;
+
+ if let Some(issue_client_certificate) =
+ client_cert_config.get("issueClientCertificate")
+ {
+ option.master_auth.client_certificate_config.issue_client_certificate =
+ issue_client_certificate
+ .as_bool()
+ .ok_or("Invalid or missing 'issueClientCertificate'")?;
+ }
+ }
+ }
+ "nodePools" => {
+ let node_pools = value
+ .as_array()
+ .ok_or("Invalid 'nodePools' field, expected an array")?;
+
+ for node_pool_value in node_pools {
+ let node_pool_map = node_pool_value
+ .as_object()
+ .ok_or("Invalid 'nodePool' entry, expected an object")?;
+ let mut node_pool = NodePool::default();
+
+ if let Some(name) = node_pool_map.get("name") {
+ node_pool.name = name
+ .as_str()
+ .ok_or("Invalid or missing 'name' in 'nodePool'")?
+ .to_string();
+ }
+
+ if let Some(initial_node_count_value) = node_pool_map.get("initialNodeCount") {
+ node_pool.initial_node_count = match initial_node_count_value {
+ serde_json::Value::Number(n) => n.as_i64()
+ .ok_or("Invalid 'initialNodeCount' in 'nodePool'")?
+ .try_into()
+ .map_err(|_| "Value out of i32 range")?,
+ serde_json::Value::String(s) => s.parse::<i32>()
+ .map_err(|_| "Invalid string in 'initialNodeCount'")?,
+ _ => return Err("Invalid or missing 'initialNodeCount' in 'nodePool'".into()),
+ };
+ }
+
+ if let Some(config) = node_pool_map.get("config") {
+ let config_map = config
+ .as_object()
+ .ok_or("Invalid 'config' in 'nodePool', expected an object")?;
+
+ if let Some(machine_type) = config_map.get("machineType") {
+ node_pool.config.machine_type = machine_type
+ .as_str()
+ .ok_or("Invalid or missing 'machineType' in 'config'")?
+ .to_string();
+ }
+
+ if let Some(image_type) = config_map.get("imageType") {
+ node_pool.config.image_type = image_type
+ .as_str()
+ .ok_or("Invalid or missing 'imageType' in 'config'")?
+ .to_string();
+ }
+
+ if let Some(disk_size_gb) = config_map.get("diskSizeGb") {
+ node_pool.config.disk_size_gb = disk_size_gb
+ .as_i64()
+ .ok_or("Invalid or missing 'diskSizeGb' in 'config'")?
+ as i32;
+ }
+
+ if let Some(preemptible) = config_map.get("preemptible") {
+ node_pool.config.preemptible = preemptible
+ .as_bool()
+ .ok_or("Invalid or missing 'preemptible' in 'config'")?;
+ }
+
+ if let Some(oauth_scopes) = config_map.get("oauthScopes") {
+ node_pool.config.oauth_scopes = oauth_scopes
+ .as_array()
+ .ok_or("Invalid or missing 'oauthScopes' in 'config', expected an array")?
+ .iter()
+ .map(|s| s.as_str().unwrap_or("").to_string())
+ .collect();
+ }
+ }
+
+ if let Some(autoscaling) = node_pool_map.get("autoscaling") {
+ let autoscaling_map = autoscaling
+ .as_object()
+ .ok_or("Invalid 'autoscaling' in 'nodePool', expected an object")?;
+
+ if let Some(enabled) = autoscaling_map.get("enabled") {
+ node_pool.autoscaling.enabled = enabled
+ .as_bool()
+ .ok_or("Invalid or missing 'enabled' in 'autoscaling'")?;
+ }
+ }
+
+ if let Some(management) = node_pool_map.get("management") {
+ let management_map = management
+ .as_object()
+ .ok_or("Invalid 'management' in 'nodePool', expected an object")?;
+
+ if let Some(auto_upgrade) = management_map.get("autoUpgrade") {
+ node_pool.management.auto_upgrade = auto_upgrade
+ .as_bool()
+ .ok_or("Invalid or missing 'autoUpgrade' in 'management'")?;
+ }
+
+ if let Some(auto_repair) = management_map.get("AutoRepair") {
+ node_pool.management.auto_repair = auto_repair
+ .as_bool()
+ .ok_or("Invalid or missing 'AutoRepair' in 'management'")?;
+ }
+ }
+
+ option.node_pools.push(node_pool);
+ }
+ }
+ _ => {}
+ }
+ }
+
+ option.zone = zone.clone();
+
+ let mut create_cluster_json_map = serde_json::Map::new();
+ create_cluster_json_map.insert("cluster".to_string(), serde_json::to_value(&option)?);
+
+ let create_cluster_json = serde_json::to_string(&create_cluster_json_map)
+ .map_err(|e| format!("Failed to serialize cluster: {}", e))?;
+
+ let url = format!(
+ "{}/v1/projects/{}/zones/{}/clusters",
+ self.base_url, project_id, zone
+ );
+
+ let token = retrieve_token()
+ .await
+ .map_err(|e| format!("Failed to retrieve token: {}", e))?;
+
+ let response = self
+ .client
+ .post(&url)
+ .header("Content-Type", "application/json")
+ .header(AUTHORIZATION, format!("Bearer {}", token))
+ .body(create_cluster_json)
+ .send()
+ .await
+ .map_err(|e| format!("Failed to send request: {}", e))?;
+
+ let status = response.status();
+ if !status.is_success() {
+ let response_text = response.text().await?;
+ println!("{:?}", response_text);
+ return Err(format!("Request failed with status: {}", status).into());
+ }
+
+ let body = response
+ .text()
+ .await
+ .map_err(|e| format!("Failed to read response body: {}", e))?;
+
+ println!("{:?}", body);
+
+ let mut create_cluster_response = HashMap::new();
+ create_cluster_response.insert("status".to_string(), status.as_u16().to_string());
+ create_cluster_response.insert("body".to_string(), body);
+
+ Ok(create_cluster_response)
+ }
+
+ pub async fn stop_task(
+ &self,
+ request: HashMap<String, String>,
+ ) -> Result<HashMap<String, String>, Box<dyn std::error::Error>> {
+ let project_id = request
+ .get("Project")
+ .ok_or("Missing 'Project'")?
+ .to_string();
+ let zone = request
+ .get("Zone")
+ .ok_or("Missing 'Zone'")?
+ .to_string();
+ let operation_id = request
+ .get("OperationId")
+ .ok_or("Missing 'OperationId'")?
+ .to_string();
+
+ let url = format!(
+ "{}/v1/projects/{}/zones/{}/operations/{}:cancel",
+ self.base_url, project_id, zone, operation_id
+ );
+
+ let token = retrieve_token()
+ .await
+ .map_err(|e| format!("Failed to retrieve token: {}", e))?;
+
+ let response = self
+ .client
+ .post(&url)
+ .header("Content-Type", "application/json")
+ .header(AUTHORIZATION, format!("Bearer {}", token))
+ .send()
+ .await
+ .map_err(|e| format!("Failed to send request: {}", e))?;
+
+ let status = response.status();
+ let mut stop_task_response = HashMap::new();
+
+ if !status.is_success() {
+ let response_text = response.text().await?;
+ println!("{:?}", response_text);
+ stop_task_response.insert("status".to_string(), status.as_u16().to_string());
+ stop_task_response.insert("body".to_string(), response_text);
+ return Err(format!("Request failed with status: {}", status).into());
+ }
+
+ let body = response
+ .text()
+ .await
+ .map_err(|e| format!("Failed to read response body: {}", e))?;
+
+ println!("{:?}", body);
+
+ stop_task_response.insert("status".to_string(), status.as_u16().to_string());
+ stop_task_response.insert("body".to_string(), body);
+
+ Ok(stop_task_response)
+ }
+
+
+ pub async fn delete_cluster(
+ &self,
+ request: HashMap<String, String>,
+ ) -> Result<HashMap<String, String>, Box<dyn std::error::Error>> {
+ // Extract parameters from the request map
+ let project_id = request
+ .get("Project")
+ .ok_or("Missing 'Project'")?;
+ let zone = request
+ .get("Zone")
+ .ok_or("Missing 'Zone'")?;
+ let cluster_id = request
+ .get("clusterId")
+ .ok_or("Missing 'clusterId'")?;
+
+ // Construct the URL for the request
+ let url = format!(
+ "{}/v1/projects/{}/zones/{}/clusters/{}",
+ self.base_url, project_id, zone, cluster_id
+ );
+
+ // Retrieve the authentication token
+ let token = retrieve_token()
+ .await
+ .map_err(|e| format!("Failed to retrieve token: {}", e))?;
+
+ // Create and send the HTTP request
+ let response = self
+ .client
+ .delete(&url)
+ .header("Content-Type", "application/json")
+ .header(AUTHORIZATION, format!("Bearer {}", token))
+ .send()
+ .await
+ .map_err(|e| format!("Failed to send request: {}", e))?;
+
+ // Check the HTTP response status
+ let status = response.status();
+ if !status.is_success() {
+ let response_text = response.text().await?;
+ println!("{:?}", response_text);
+ return Err(format!("Request failed with status: {}", status).into());
+ }
+
+ // Parse the response body
+ let body = response
+ .text()
+ .await
+ .map_err(|e| format!("Failed to read response body: {}", e))?;
+
+ println!("{:?}", body);
+
+ // Construct the response map
+ let mut delete_cluster_response = HashMap::new();
+ delete_cluster_response.insert("status".to_string(), status.as_u16().to_string());
+ delete_cluster_response.insert("body".to_string(), body);
+
+ Ok(delete_cluster_response)
+ }
+
+ pub async fn create_service(
+ &self,
+ request: HashMap<String, serde_json::Value>,
+ ) -> Result<HashMap<String, String>, Box<dyn std::error::Error>> {
+ let mut option = NodePoolService::default();
+ let mut project_id = String::new();
+ let mut cluster_id = String::new();
+ let mut zone = String::new();
+
+ // Extract parameters from the request map
+ for (key, value) in request {
+ match key.as_str() {
+ "Project" => {
+ project_id = value
+ .as_str()
+ .ok_or("Invalid or missing 'Project'")?
+ .to_string();
+ }
+ "Name" => {
+ option.name = value
+ .as_str()
+ .ok_or("Invalid or missing 'Name'")?
+ .to_string();
+ }
+ "Zone" => {
+ zone = value
+ .as_str()
+ .ok_or("Invalid or missing 'Zone'")?
+ .to_string();
+ }
+ "clusterId" => {
+ cluster_id = value
+ .as_str()
+ .ok_or("Invalid or missing 'clusterId'")?
+ .to_string();
+ }
+ "statusMessage" => {
+ option.status_message = value
+ .as_str()
+ .ok_or("Invalid or missing 'statusMessage'")?
+ .to_string();
+ }
+ "initialNodeCount" => {
+ option.initial_node_count = value
+ .as_i64()
+ .ok_or("Invalid or missing 'initialNodeCount'")?
+ as i32;
+ }
+ "selfLink" => {
+ option.self_link = value
+ .as_str()
+ .ok_or("Invalid or missing 'selfLink'")?
+ .to_string();
+ }
+ "version" => {
+ option.version = value
+ .as_str()
+ .ok_or("Invalid or missing 'version'")?
+ .to_string();
+ }
+ "status" => {
+ option.status = value
+ .as_str()
+ .ok_or("Invalid or missing 'status'")?
+ .to_string();
+ }
+ "config" => {
+ let config = value
+ .as_object()
+ .ok_or("Invalid or missing 'config', expected an object")?;
+
+ if let Some(machine_type) = config.get("machineType") {
+ option.config.machine_type = machine_type
+ .as_str()
+ .ok_or("Invalid or missing 'machineType' in 'config'")?
+ .to_string();
+ }
+
+ if let Some(image_type) = config.get("imageType") {
+ option.config.image_type = image_type
+ .as_str()
+ .ok_or("Invalid or missing 'imageType' in 'config'")?
+ .to_string();
+ }
+
+ if let Some(disk_size_gb) = config.get("diskSizeGb") {
+ option.config.disk_size_gb = disk_size_gb
+ .as_i64()
+ .ok_or("Invalid or missing 'diskSizeGb' in 'config'")?
+ as i32;
+ }
+
+ if let Some(preemptible) = config.get("preemptible") {
+ option.config.preemptible = preemptible
+ .as_bool()
+ .ok_or("Invalid or missing 'preemptible' in 'config'")?;
+ }
+
+ if let Some(oauth_scopes) = config.get("oauthScopes") {
+ option.config.oauth_scopes = oauth_scopes
+ .as_array()
+ .ok_or("Invalid or missing 'oauthScopes' in 'config', expected an array")?
+ .iter()
+ .map(|s| s.as_str().unwrap_or("").to_string())
+ .collect();
+ }
+
+ if let Some(service_account) = config.get("ServiceAccount") {
+ option.config.service_account = service_account
+ .as_str()
+ .ok_or("Invalid or missing 'ServiceAccount' in 'config'")?
+ .to_string();
+ }
+
+ if let Some(local_ssd_count) = config.get("localSsdCount") {
+ option.config.local_ssd_count = local_ssd_count
+ .as_i64()
+ .ok_or("Invalid or missing 'localSsdCount' in 'config'")?
+ as i32;
+ }
+ }
+ "autoscaling" => {
+ let autoscaling = value
+ .as_object()
+ .ok_or("Invalid or missing 'autoscaling', expected an object")?;
+
+ if let Some(enabled) = autoscaling.get("enabled") {
+ option.autoscaling.enabled = enabled
+ .as_bool()
+ .ok_or("Invalid or missing 'enabled' in 'autoscaling'")?;
+ }
+
+ if let Some(min_node_count) = autoscaling.get("minNodeCount") {
+ option.autoscaling.min_node_count = min_node_count
+ .as_i64()
+ .ok_or("Invalid or missing 'minNodeCount' in 'autoscaling'")?
+ as i32;
+ }
+
+ if let Some(max_node_count) = autoscaling.get("maxNodeCount") {
+ option.autoscaling.max_node_count = max_node_count
+ .as_i64()
+ .ok_or("Invalid or missing 'maxNodeCount' in 'autoscaling'")?
+ as i32;
+ }
+ }
+ "instanceGroupUrls" => {
+ option.instance_group_urls = value
+ .as_array()
+ .ok_or("Invalid or missing 'instanceGroupUrls', expected an array")?
+ .iter()
+ .map(|s| s.as_str().unwrap_or("").to_string())
+ .collect();
+ }
+ "management" => {
+ let management = value
+ .as_object()
+ .ok_or("Invalid or missing 'management', expected an object")?;
+
+ if let Some(auto_upgrade) = management.get("autoUpgrade") {
+ option.management.auto_upgrade = auto_upgrade
+ .as_bool()
+ .ok_or("Invalid or missing 'autoUpgrade' in 'management'")?;
+ }
+
+ if let Some(auto_repair) = management.get("AutoRepair") {
+ option.management.auto_repair = auto_repair
+ .as_bool()
+ .ok_or("Invalid or missing 'AutoRepair' in 'management'")?;
+ }
+ }
+ _ => {}
+ }
+ }
+
+ let mut create_service_json_map = serde_json::Map::new();
+ create_service_json_map.insert(
+ "nodePool".to_string(),
+ serde_json::to_value(&option)?,
+ );
+
+
+ let create_service_json = serde_json::to_string(&create_service_json_map)
+ .map_err(|e| format!("Failed to serialize node pool: {}", e))?;
+
+ let url = format!(
+ "{}/v1/projects/{}/zones/{}/clusters/{}/nodePools",
+ self.base_url, project_id, zone, cluster_id
+ );
+
+ let token = retrieve_token()
+ .await
+ .map_err(|e| format!("Failed to retrieve token: {}", e))?;
+
+ let response = self
+ .client
+ .post(&url)
+ .header("Content-Type", "application/json")
+ .header(AUTHORIZATION, format!("Bearer {}", token))
+ .body(create_service_json)
+ .send()
+ .await
+ .map_err(|e| format!("Failed to send request: {}", e))?;
+
+ let status = response.status();
+ if !status.is_success() {
+ let response_text = response.text().await?;
+ println!("{:?}", response_text);
+ return Err(format!("Request failed with status: {}", status).into());
+ }
+
+ let body = response
+ .text()
+ .await
+ .map_err(|e| format!("Failed to read response body: {}", e))?;
+
+ println!("{:?}", body);
+
+ let mut create_service_response = HashMap::new();
+ create_service_response.insert("status".to_string(), status.as_u16().to_string());
+ create_service_response.insert("body".to_string(), body);
+
+ Ok(create_service_response)
+ }
+
+ pub async fn delete_service(
+ &self,
+ request: HashMap<String, String>,
+ ) -> Result<HashMap<String, String>, Box<dyn std::error::Error>> {
+ let project_id = request
+ .get("Project")
+ .ok_or("Missing 'Project'")?
+ .to_string();
+ let zone = request
+ .get("Zone")
+ .ok_or("Missing 'Zone'")?
+ .to_string();
+ let cluster_id = request
+ .get("clusterId")
+ .ok_or("Missing 'clusterId'")?
+ .to_string();
+ let node_pool_id = request
+ .get("nodePoolId")
+ .ok_or("Missing 'nodePoolId'")?
+ .to_string();
+
+ let url = format!(
+ "{}/v1/projects/{}/zones/{}/clusters/{}/nodePools/{}",
+ self.base_url, project_id, zone, cluster_id, node_pool_id
+ );
+
+ let token = retrieve_token()
+ .await
+ .map_err(|e| format!("Failed to retrieve token: {}", e))?;
+
+ let response = self
+ .client
+ .delete(&url)
+ .header("Content-Type", "application/json")
+ .header(AUTHORIZATION, format!("Bearer {}", token))
+ .send()
+ .await
+ .map_err(|e| format!("Failed to send request: {}", e))?;
+
+ let status = response.status();
+ let mut delete_service_response = HashMap::new();
+
+ if !status.is_success() {
+ let response_text = response.text().await?;
+ println!("{:?}", response_text);
+ delete_service_response.insert("status".to_string(), status.as_u16().to_string());
+ delete_service_response.insert("body".to_string(), response_text);
+ return Err(format!("Request failed with status: {}", status).into());
+ }
+
+ let body = response
+ .text()
+ .await
+ .map_err(|e| format!("Failed to read response body: {}", e))?;
+
+ println!("{:?}", body);
+
+ delete_service_response.insert("status".to_string(), status.as_u16().to_string());
+ delete_service_response.insert("body".to_string(), body);
+
+ Ok(delete_service_response)
+ }
+}
diff --git a/rustcloud/src/gcp/gcp_apis/compute/gcp_kubernetes.rs b/rustcloud/src/gcp/gcp_apis/compute/gcp_kubernetes.rs
deleted file mode 100644
index d1e6d35..0000000
--- a/rustcloud/src/gcp/gcp_apis/compute/gcp_kubernetes.rs
+++ /dev/null
@@ -1,194 +0,0 @@
-use crate::gcp::gcp_apis::auth::gcp_auth::retrieve_token;
-use crate::gcp::types::compute::gcp_kubernetes_types::*;
-use reqwest::{header::AUTHORIZATION, Client, Error, Response};
-use serde_json::to_string;
-
-pub struct GCPKubernetesClient {
- client: Client,
- base_url: String,
-}
-
-impl GCPKubernetesClient {
- pub fn new() -> Self {
- Self {
- client: Client::new(),
- base_url: "https://container.googleapis.com/v1beta1".to_string(),
- }
- }
-
- pub async fn create_cluster(
- &self,
- request: CreateClusterRequest,
- ) -> Result<Response, Error> {
- let url = format!(
- "{}/projects/{}/locations/{}/clusters",
- self.base_url, request.projectId, request.zone
- );
- let body = to_string(&request).unwrap();
- let token = retrieve_token().await.unwrap();
- let response = self
- .client
- .post(&url)
- .header(AUTHORIZATION, format!("Bearer {}", token))
- .body(body)
- .send()
- .await?;
- Ok(response)
- }
-
- pub async fn delete_cluster(
- &self,
- request: DeleteClusterRequest,
- ) -> Result<Response, Error> {
- let url = format!(
- "{}/projects/{}/locations/{}/clusters/{}",
- self.base_url, request.project_id, request.zone, request.cluster_id
- );
- let token = retrieve_token().await.unwrap();
- let response = self
- .client
- .delete(&url)
- .header(AUTHORIZATION, format!("Bearer {}", token))
- .send()
- .await?;
- Ok(response)
- }
-
- pub async fn list_clusters(
- &self,
- request: ListClustersRequest,
- ) -> Result<Response, Error> {
- let url = format!(
- "{}/projects/{}/locations/{}/clusters",
- self.base_url, request.project_id, request.zone
- );
- let token = retrieve_token().await.unwrap();
- let response = self
- .client
- .get(&url)
- .header(AUTHORIZATION, format!("Bearer {}", token))
- .send()
- .await?;
- Ok(response)
- }
-
- pub async fn get_cluster(
- &self,
- request: GetClusterRequest,
- ) -> Result<Response, Error> {
- let url = format!(
- "{}/projects/{}/locations/{}/clusters/{}",
- self.base_url, request.project_id, request.zone, request.cluster_id
- );
- let token = retrieve_token().await.unwrap();
- let response = self
- .client
- .get(&url)
- .header(AUTHORIZATION, format!("Bearer {}", token))
- .send()
- .await?;
- Ok(response)
- }
-
- pub async fn create_node_pool(
- &self,
- request: CreateNodePoolRequest,
- ) -> Result<Response, Error> {
- let url = format!(
- "{}/projects/{}/locations/{}/clusters/{}/nodePools",
- self.base_url, request.projectId, request.zone, request.clusterId
- );
- let body = to_string(&request).unwrap();
- let token = retrieve_token().await.unwrap();
- let response = self
- .client
- .post(&url)
- .header(AUTHORIZATION, format!("Bearer {}", token))
- .body(body)
- .send()
- .await?;
- Ok(response)
- }
-
- pub async fn delete_node_pool(
- &self,
- request: DeleteNodePoolRequest,
- ) -> Result<Response, Error> {
- let url = format!(
- "{}/projects/{}/locations/{}/clusters/{}/nodePools/{}",
- self.base_url,
- request.project_id,
- request.zone,
- request.cluster_id,
- request.node_pool_id
- );
- let token = retrieve_token().await.unwrap();
- let response = self
- .client
- .delete(&url)
- .header(AUTHORIZATION, format!("Bearer {}", token))
- .send()
- .await?;
- Ok(response)
- }
-
- pub async fn get_node_pool(
- &self,
- request: GetNodePoolRequest,
- ) -> Result<Response, Error> {
- let url = format!(
- "{}/projects/{}/locations/{}/clusters/{}/nodePools/{}",
- self.base_url,
- request.project_id,
- request.zone,
- request.cluster_id,
- request.node_pool_id
- );
- let token = retrieve_token().await.unwrap();
- let response = self
- .client
- .get(&url)
- .header(AUTHORIZATION, format!("Bearer {}", token))
- .send()
- .await?;
- Ok(response)
- }
-
- pub async fn list_node_pools(
- &self,
- request: ListNodePoolsRequest,
- ) -> Result<Response, Error> {
- let url = format!(
- "{}/projects/{}/locationss/{}/clusters/{}/nodePools",
- self.base_url, request.project_id, request.zone, request.cluster_id
- );
- let token = retrieve_token().await.unwrap();
- let response = self
- .client
- .get(&url)
- .header(AUTHORIZATION, format!("Bearer {}", token))
- .send()
- .await?;
- Ok(response)
- }
-
- pub async fn set_addons_config(
- &self,
- request: SetAddonsConfigRequest,
- ) -> Result<Response, Error> {
- let url = format!(
- "{}/projects/{}/locations/{}/clusters/{}:setAddons",
- self.base_url, request.projectId, request.zone, request.clusterId
- );
- let body = to_string(&request).unwrap();
- let token = retrieve_token().await.unwrap();
- let response = self
- .client
- .post(&url)
- .header(AUTHORIZATION, format!("Bearer {}", token))
- .body(body)
- .send()
- .await?;
- Ok(response)
- }
-}
diff --git a/rustcloud/src/gcp/gcp_apis/database/gcp_bigtable.rs b/rustcloud/src/gcp/gcp_apis/database/gcp_bigtable.rs
index 1ddab16..2748aeb 100644
--- a/rustcloud/src/gcp/gcp_apis/database/gcp_bigtable.rs
+++ b/rustcloud/src/gcp/gcp_apis/database/gcp_bigtable.rs
@@ -1,3 +1,5 @@
+use std::collections::HashMap;
+
use crate::gcp::gcp_apis::auth::gcp_auth::retrieve_token;
use crate::gcp::types::database::gcp_bigtable_types::*;
use reqwest::{header::AUTHORIZATION, Client};
@@ -24,7 +26,7 @@ impl Bigtable {
parent: &str,
page_token: Option<&str>,
view: Option<&str>,
- ) -> Result<serde_json::Value, Box<dyn std::error::Error>> {
+ ) -> Result<HashMap<String, String>, Box<dyn std::error::Error>> {
let url = format!("{}/v2/{}/tables", self.base_url, parent);
let mut request_builder = self.client.get(&url);
@@ -35,89 +37,131 @@ impl Bigtable {
request_builder = request_builder.query(&[("view", view)]);
}
- let token = retrieve_token().await?;
+ let token = retrieve_token().await.map_err(|e| format!("Failed to retrieve token: {}", e))?;
let response = request_builder
.header("Content-Type", "application/json")
.header(AUTHORIZATION, format!("Bearer {}", token))
.send()
- .await?;
+ .await
+ .map_err(|e| format!("Failed to send request: {}", e))?;
let status = response.status();
- let body = response.text().await?;
-
- Ok(json!({
- "status": status.as_u16(),
- "body": body,
- }))
+ if !status.is_success() {
+ let response_text = response.text().await?;
+ println!("{:?}", response_text);
+ return Err(format!("Request failed with status: {}", status).into());
+ }
+
+ let body = response
+ .text()
+ .await
+ .map_err(|e| format!("Failed to read response body: {}", e))?;
+ println!("{:?}", body);
+ let mut list_table_response = HashMap::new();
+ list_table_response.insert("status".to_string(), status.as_u16().to_string());
+ list_table_response.insert("body".to_string(), body);
+ Ok(list_table_response)
}
pub async fn delete_tables(
&self,
name: &str,
- ) -> Result<serde_json::Value, Box<dyn std::error::Error>> {
+ ) -> Result<HashMap<String, String>, Box<dyn std::error::Error>> {
let url = format!("{}/v2/{}", self.base_url, name);
- let token = retrieve_token().await?;
+ let token = retrieve_token().await.map_err(|e| format!("Failed to retrieve token: {}", e))?;
let response = self
.client
.delete(&url)
.header("Content-Type", "application/json")
.header(AUTHORIZATION, format!("Bearer {}", token))
.send()
- .await?;
+ .await
+ .map_err(|e| format!("Failed to send request: {}", e))?;
let status = response.status();
- let body = response.text().await?;
-
- Ok(json!({
- "status": status.as_u16(),
- "body": body,
- }))
+ if !status.is_success() {
+ let response_text = response.text().await?;
+ println!("{:?}", response_text);
+ return Err(format!("Request failed with status: {}", status).into());
+ }
+
+ let body = response
+ .text()
+ .await
+ .map_err(|e| format!("Failed to read response body: {}", e))?;
+ println!("{:?}", body);
+ let mut delete_table_response = HashMap::new();
+ delete_table_response.insert("status".to_string(), status.as_u16().to_string());
+ delete_table_response.insert("body".to_string(), body);
+ Ok(delete_table_response)
}
pub async fn describe_tables(
&self,
name: &str,
- ) -> Result<serde_json::Value, Box<dyn std::error::Error>> {
- let url = format!("{}/v2/{}", self.base_url, name);
+ mask: &str,
+ table: Table,
+ ) -> Result<HashMap<String, String>, Box<dyn std::error::Error>> {
+ let url = format!("{}/v2/{}?updateMask={}", self.base_url, name, mask);
+
+ let token = retrieve_token()
+ .await
+ .map_err(|e| format!("Failed to retrieve token: {}", e))?;
+
+ let body = serde_json::to_string(&table)
+ .map_err(|e| format!("Failed to serialize request body: {}", e))?;
- let token = retrieve_token().await?;
let response = self
.client
.patch(&url)
.header("Content-Type", "application/json")
.header(AUTHORIZATION, format!("Bearer {}", token))
+ .body(body)
.send()
- .await?;
-
+ .await
+ .map_err(|e| format!("Failed to send request: {}", e))?;
+
let status = response.status();
- let body = response.text().await?;
-
- Ok(json!({
- "status": status.as_u16(),
- "body": body,
- }))
+ if !status.is_success() {
+ let response_text = response.text().await?;
+ println!("{:?}", response_text);
+ return Err(format!("Request failed with status: {}", status).into());
+ }
+
+ let body = response
+ .text()
+ .await
+ .map_err(|e| format!("Failed to read response body: {}", e))?;
+ println!("{:?}", body);
+ let mut describe_table_response = HashMap::new();
+ describe_table_response.insert("status".to_string(), status.as_u16().to_string());
+ describe_table_response.insert("body".to_string(), body);
+ Ok(describe_table_response)
}
+
- pub async fn create_tables(
+ pub async fn create_table(
&self,
parent: &str,
table_id: &str,
table: Table,
- initial_splits: Vec<String>,
- cluster_states: ClusterStates,
- ) -> Result<Value, Box<dyn Error>> {
+ initial_splits: Option<Vec<String>>,
+ ) -> Result<HashMap<String, String>, Box<dyn Error>> {
let url = format!("{}/v2/{}/tables", self.base_url, parent);
-
+
let create_bigtable = CreateBigtable {
table_id: table_id.to_string(),
table,
initial_splits,
- cluster_states,
};
- let body = to_string(&create_bigtable).unwrap();
-
- let token = retrieve_token().await?;
+
+ // Serialize the request body
+ let body = serde_json::to_string(&create_bigtable)
+ .map_err(|e| format!("Failed to serialize request body: {}", e))?;
+
+ let token = retrieve_token().await.map_err(|e| format!("Failed to retrieve token: {}", e))?;
+
let response = self
.client
.post(&url)
@@ -125,14 +169,23 @@ impl Bigtable {
.header("Content-Type", "application/json")
.header(AUTHORIZATION, format!("Bearer {}", token))
.send()
- .await?;
-
+ .await
+ .map_err(|e| format!("Failed to send request: {}", e))?;
let status = response.status();
- let body = response.text().await?;
-
- Ok(json!({
- "status": status.as_u16(),
- "body": body,
- }))
+ if !status.is_success() {
+ let response_text = response.text().await?;
+ println!("{}", response_text);
+ return Err(format!("Request failed with status: {}", status).into());
+ }
+
+ let body = response
+ .text()
+ .await
+ .map_err(|e| format!("Failed to read response body: {}", e))?;
+ println!("{:?}", body);
+ let mut create_table_response = HashMap::new();
+ create_table_response.insert("status".to_string(), status.as_u16().to_string());
+ create_table_response.insert("body".to_string(), body);
+ Ok(create_table_response)
}
-}
+}
\ No newline at end of file
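The Bigtable methods in this file now return a `HashMap<String, String>` carrying `status` and `body` keys instead of a `serde_json::Value`. A minimal, dependency-free sketch of how a caller might consume that shape (the `handle_response` helper and the sample values are hypothetical, not part of this patch):

```rust
use std::collections::HashMap;

// Hypothetical helper: unpack the {"status", "body"} map that the
// refactored list/delete/describe/create table methods now return.
fn handle_response(resp: &HashMap<String, String>) -> Result<&str, String> {
    match resp.get("status").map(String::as_str) {
        // Only the textual "200" case is handled in this sketch.
        Some("200") => Ok(resp.get("body").map(String::as_str).unwrap_or("")),
        other => Err(format!("unexpected status: {:?}", other)),
    }
}

fn main() {
    // Sample map mirroring what e.g. delete_tables would return on success.
    let mut resp = HashMap::new();
    resp.insert("status".to_string(), "200".to_string());
    resp.insert("body".to_string(), "{}".to_string());
    assert_eq!(handle_response(&resp).unwrap(), "{}");
}
```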
diff --git a/rustcloud/src/gcp/gcp_apis/mod.rs b/rustcloud/src/gcp/gcp_apis/mod.rs
index 8ebd63f..aab9a9e 100644
--- a/rustcloud/src/gcp/gcp_apis/mod.rs
+++ b/rustcloud/src/gcp/gcp_apis/mod.rs
@@ -1,6 +1,6 @@
pub mod app_services;
pub mod compute;
-pub mod artificial_intelligence;
+// pub mod artificial_intelligence;
pub mod database;
pub mod network;
pub mod auth;
diff --git a/rustcloud/src/gcp/gcp_apis/network/gcp_dns.rs b/rustcloud/src/gcp/gcp_apis/network/gcp_dns.rs
index 409baee..60bdedf 100644
--- a/rustcloud/src/gcp/gcp_apis/network/gcp_dns.rs
+++ b/rustcloud/src/gcp/gcp_apis/network/gcp_dns.rs
@@ -5,37 +5,39 @@ use reqwest::{header::AUTHORIZATION, Client};
use serde_json::to_string;
use std::collections::HashMap;
use std::error::Error;
-
+use serde_json::json;
const UNIX_DATE: &str = "%a %b %e %H:%M:%S %Z %Y";
const RFC3339: &str = "%Y-%m-%dT%H:%M:%S%.f%:z";
+use std::time::SystemTime;
+use std::time::UNIX_EPOCH;
+
pub struct GoogleDns {
client: Client,
base_url: String,
- project: String,
}
impl GoogleDns {
- pub fn new(project: &str) -> Self {
+ pub fn new() -> Self {
Self {
client: Client::new(),
base_url: "https://www.googleapis.com".to_string(),
- project: project.to_string(),
}
}
async fn get_authorization_header(&self) -> Result<String, Box<dyn Error>> {
- let token = retrieve_token().await?;
+ let token = retrieve_token().await.map_err(|e| format!("Failed to retrieve token: {}", e))?;
Ok(format!("Bearer {}", token))
}
pub async fn list_resource_dns_record_sets(
&self,
+ project_id: String,
options: &HashMap<&str, &str>,
- ) -> Result<reqwest::Response, Box<dyn Error>> {
+ ) -> Result<HashMap<String, String>, Box<dyn Error>> {
let url = format!(
"{}/dns/v1/projects/{}/managedZones/{}/rrsets",
- self.base_url, self.project, options["managedZone"]
+ self.base_url, project_id, options["managedZone"]
);
let mut req = self.client.get(&url);
@@ -56,31 +58,91 @@ impl GoogleDns {
}
let auth_header = self.get_authorization_header().await?;
- let response = req.header(AUTHORIZATION, auth_header).send().await?;
- Ok(response)
+ let response = req.header(AUTHORIZATION, auth_header).send().await.map_err(|e| format!("Failed to send request: {}", e))?;
+ let status = response.status();
+ if !status.is_success() {
+ let response_text = response.text().await?;
+ println!("{:?}", response_text);
+ return Err(format!("Request failed with status: {}", status).into());
+ }
+
+ let body = response
+ .text()
+ .await
+ .map_err(|e| format!("Failed to read response body: {}", e))?;
+ println!("{:?}", body);
+ let mut list_dns_response = HashMap::new();
+ list_dns_response.insert("status".to_string(), status.as_u16().to_string());
+ list_dns_response.insert("body".to_string(), body);
+ Ok(list_dns_response)
}
pub async fn create_dns(
&self,
- param: &HashMap<&str, &str>,
- ) -> Result<reqwest::Response, Box<dyn Error>> {
- let project = param["Project"];
- let option = CreateDns {
- creation_time: chrono::Utc::now().to_rfc3339(),
- description: param["Description"].to_string(),
- dns_name: param["DnsName"].to_string(),
- name_servers: param["nameServers"]
- .split(',')
- .map(|s| s.to_string())
- .collect(),
- id: param["Id"].to_string(),
- kind: param["Kind"].to_string(),
- name: param["Name"].to_string(),
- name_server_set: param["nameServerSet"].to_string(),
+ project_id: String,
+ param: HashMap<String, Value>,
+ ) -> Result<HashMap<String, String>, Box<dyn Error>> {
+ let mut option = CreateDns {
+ creation_time: None,
+ description: None,
+ dns_name: None,
+ name_servers: None,
+ id: None,
+ kind: None,
+ name: None,
+ name_server_set: None,
};
+ for (key, value) in param {
+ match key.as_str() {
+ "CreationTime" => {
+ if value.as_str().is_some() {
+ option.creation_time = Some(SystemTime::now().duration_since(UNIX_EPOCH)?.as_secs().to_string());
+ }
+ },
+ "Description" => {
+ if let Some(val) = value.as_str() {
+ option.description = Some(val.to_string());
+ }
+ },
+ "DnsName" => {
+ if let Some(val) = value.as_str() {
+ option.dns_name = Some(val.to_string());
+ }
+ },
+ "nameServers" => {
+ if let Some(val) = value.as_array() {
+ option.name_servers = Some(val.iter().filter_map(|v| v.as_str().map(|s| s.to_string())).collect());
+ }
+ },
+ "Id" => {
+ if let Some(val) = value.as_str() {
+ option.id = Some(val.to_string());
+ }
+ },
+ "Kind" => {
+ if let Some(val) = value.as_str() {
+ option.kind = Some(val.to_string());
+ }
+ },
+ "Name" => {
+ if let Some(val) = value.as_str() {
+ option.name = Some(val.to_string());
+ }
+ },
+ "nameServerSet" => {
+ if let Some(val) = value.as_str() {
+ option.name_server_set = Some(val.to_string());
+ }
+ },
+ _ => {}
+ }
+ }
+
- let body = to_string(&option).unwrap();
- let url = format!("{}/dns/v1/projects/{}/managedZones", self.base_url, project);
+
+ let body = to_string(&option).map_err(|e| format!("Failed to serialize request body: {}", e))?;
+ let url = format!("{}/dns/v1/projects/{}/managedZones", self.base_url, project_id);
let auth_header = self.get_authorization_header().await?;
let response = self
@@ -90,18 +152,35 @@ impl GoogleDns {
.header("Content-Type", "application/json")
.body(body)
.send()
- .await?;
+ .await
+ .map_err(|e| format!("Failed to send request: {}", e))?;
+ let status = response.status();
+ if !status.is_success() {
+ let response_text = response.text().await?;
+ println!("{:?}", response_text);
+ return Err(format!("Request failed with status: {}", status).into());
+ }
+
+ let body = response
+ .text()
+ .await
+ .map_err(|e| format!("Failed to read response body: {}", e))?;
+ println!("{:?}", body);
+ let mut create_dns_response = HashMap::new();
+ create_dns_response.insert("status".to_string(), status.as_u16().to_string());
+ create_dns_response.insert("body".to_string(), body);
+ Ok(create_dns_response)
- Ok(response)
}
pub async fn list_dns(
&self,
+ project_id: String,
options: &HashMap<&str, &str>,
- ) -> Result<reqwest::Response, Box<dyn Error>> {
+ ) -> Result<HashMap<String, String>, Box<dyn Error>> {
let url = format!(
"{}/dns/v1/projects/{}/managedZones/",
- self.base_url, self.project
+ self.base_url, project_id
);
let mut req = self.client.get(&url);
@@ -114,17 +193,33 @@ impl GoogleDns {
}
let auth_header = self.get_authorization_header().await?;
- let response = req.header(AUTHORIZATION, auth_header).send().await?;
- Ok(response)
+ let response = req.header(AUTHORIZATION, auth_header).send().await.map_err(|e| format!("Failed to send request: {}", e))?;
+ let status = response.status();
+ if !status.is_success() {
+ let response_text = response.text().await?;
+ println!("{:?}", response_text);
+ return Err(format!("Request failed with status: {}", status).into());
+ }
+
+ let body = response
+ .text()
+ .await
+ .map_err(|e| format!("Failed to read response body: {}", e))?;
+ println!("{:?}", body);
+ let mut list_dns_response = HashMap::new();
+ list_dns_response.insert("status".to_string(), status.as_u16().to_string());
+ list_dns_response.insert("body".to_string(), body);
+ Ok(list_dns_response)
}
pub async fn delete_dns(
&self,
+ project_id: String,
options: &HashMap<&str, &str>,
- ) -> Result<reqwest::Response, Box<dyn Error>> {
+ ) -> Result<HashMap<String, String>, Box<dyn Error>> {
let url = format!(
"{}/dns/v1/projects/{}/managedZones/{}",
- self.base_url, self.project, options["managedZone"]
+ self.base_url, project_id, options["managedZone"]
);
let auth_header = self.get_authorization_header().await?;
@@ -134,8 +229,22 @@ impl GoogleDns {
.header(AUTHORIZATION, auth_header)
.header("Content-Type", "application/json")
.send()
- .await?;
-
- Ok(response)
+ .await.map_err(|e| format!("Failed to send request: {}", e))?;
+ let status = response.status();
+ if !status.is_success() {
+ let response_text = response.text().await?;
+ println!("{:?}", response_text);
+ return Err(format!("Request failed with status: {}", status).into());
+ }
+
+ let body = response
+ .text()
+ .await
+ .map_err(|e| format!("Failed to read response body: {}", e))?;
+ println!("{:?}", body);
+ let mut delete_dns_response = HashMap::new();
+ delete_dns_response.insert("status".to_string(), status.as_u16().to_string());
+ delete_dns_response.insert("body".to_string(), body);
+ Ok(delete_dns_response)
}
}
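The refactored `create_dns` takes its parameters as a map and copies only the recognized keys into a struct of `Option` fields. A standalone sketch of that pattern (a simplified stand-in struct with plain `String` values; the real code uses `serde_json::Value` and more fields):

```rust
use std::collections::HashMap;

// Simplified stand-in for the CreateDns struct: every field optional,
// populated only when the matching key is present in the parameter map.
#[derive(Debug, Default)]
struct CreateDnsSketch {
    description: Option<String>,
    dns_name: Option<String>,
}

fn build_create_dns(params: HashMap<String, String>) -> CreateDnsSketch {
    let mut option = CreateDnsSketch::default();
    for (key, value) in params {
        match key.as_str() {
            "Description" => option.description = Some(value),
            "DnsName" => option.dns_name = Some(value),
            _ => {} // unknown keys are ignored, as in the patch
        }
    }
    option
}

fn main() {
    let mut params = HashMap::new();
    params.insert("DnsName".to_string(), "example.com.".to_string());
    let dns = build_create_dns(params);
    assert_eq!(dns.dns_name.as_deref(), Some("example.com."));
    assert!(dns.description.is_none());
}
```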
diff --git a/rustcloud/src/gcp/gcp_apis/network/gcp_loadbalancer.rs b/rustcloud/src/gcp/gcp_apis/network/gcp_loadbalancer.rs
index 0328604..2561a11 100644
--- a/rustcloud/src/gcp/gcp_apis/network/gcp_loadbalancer.rs
+++ b/rustcloud/src/gcp/gcp_apis/network/gcp_loadbalancer.rs
@@ -5,6 +5,10 @@ use reqwest::{header::AUTHORIZATION, Client};
use serde_json::to_string;
use std::collections::HashMap;
use std::error::Error;
+use serde_json::json;
+
+use std::time::SystemTime;
+use std::time::UNIX_EPOCH;
const UNIX_DATE: &str = "%a %b %e %H:%M:%S %Z %Y";
const RFC3339: &str = "%Y-%m-%dT%H:%M:%S%.f%:z";
@@ -25,57 +29,102 @@ impl GoogleLoadBalancer {
}
async fn get_authorization_header(&self) -> Result<String, Box<dyn Error>> {
- let token = retrieve_token().await?;
+ let token = retrieve_token().await.map_err(|e| format!("Failed to retrieve token: {}", e))?;
Ok(format!("Bearer {}", token))
}
pub async fn create_load_balancer(
&self,
- param: &HashMap<&str, &str>,
- ) -> Result<reqwest::Response, Box<dyn Error>> {
+ param: HashMap<String, Value>,
+ ) -> Result<HashMap<String, String>, Box<dyn Error>> {
+ let mut project = String::new();
+ let mut region = String::new();
let mut option = TargetPools {
- name: "".to_string(),
- health_checks: Vec::new(),
- description: "".to_string(),
- backup_pool: "".to_string(),
- failover_ratio: 0,
- id: "".to_string(),
- instances: Vec::new(),
- kind: "".to_string(),
- session_affinity: "".to_string(),
- region: "".to_string(),
- self_link: "".to_string(),
- creation_timestamp: Utc::now().to_rfc3339(),
+ name: None,
+ health_checks: None,
+ description: None,
+ backup_pool: None,
+ failover_ratio: None,
+ id: None,
+ instances: None,
+ kind: None,
+ session_affinity: None,
+ self_link: None,
+ region: None,
+ creation_timestamp: None,
};
- let project = param.get("Project").unwrap().to_string();
- let region = param.get("Region").unwrap().to_string();
-
- for (key, value) in param {
- match *key {
- "Name" => option.name = value.to_string(),
- "Region" => option.region = value.to_string(),
+ for (key, value) in param.iter() {
+ match key.as_str() {
+ "Project" => {
+ if let Some(val) = value.as_str() {
+ project = val.to_string();
+ }
+ },
+ "Name" => {
+ if let Some(val) = value.as_str() {
+ option.name = Some(val.to_string());
+ }
+ },
+ "Region" => {
+ if let Some(val) = value.as_str() {
+ region = val.to_string();
+ }
+ },
"healthChecks" => {
- option.health_checks = value.split(',').map(|s| s.to_string()).collect()
- }
- "description" => option.description = value.to_string(),
- "BackupPool" => option.backup_pool = value.to_string(),
- "failoverRatio" => option.failover_ratio = value.parse().unwrap(),
- "id" => option.id = value.to_string(),
- "Instances" => option.instances = value.split(',').map(|s| s.to_string()).collect(),
- "kind" => option.kind = value.to_string(),
- "sessionAffinity" => option.session_affinity = value.to_string(),
- "selfLink" => option.self_link = value.to_string(),
- _ => (),
+ if let Some(val) = value.as_array() {
+ option.health_checks = Some(val.iter().filter_map(|v| v.as_str().map(|s| s.to_string())).collect());
+ }
+ },
+ "description" => {
+ if let Some(val) = value.as_str() {
+ option.description = Some(val.to_string());
+ }
+ },
+ "BackupPool" => {
+ if let Some(val) = value.as_str() {
+ option.backup_pool = Some(val.to_string());
+ }
+ },
+ "failoverRatio" => {
+ if let Some(val) = value.as_f64() {
+ option.failover_ratio = Some(val);
+ }
+ },
+ "id" => {
+ if let Some(val) = value.as_str() {
+ option.id = Some(val.to_string());
+ }
+ },
+ "Instances" => {
+ if let Some(val) = value.as_array() {
+ option.instances = Some(val.iter().filter_map(|v| v.as_str().map(|s| s.to_string())).collect());
+ }
+ },
+ "kind" => {
+ if let Some(val) = value.as_str() {
+ option.kind = Some(val.to_string());
+ }
+ },
+ "sessionAffinity" => {
+ if let Some(val) = value.as_str() {
+ option.session_affinity = Some(val.to_string());
+ }
+ },
+ "selfLink" => {
+ if let Some(val) = value.as_str() {
+ option.self_link = Some(val.to_string());
+ }
+ },
+ _ => {}
}
}
- option.region = format!(
- "https://www.googleapis.com/compute/v1/projects/{}/zones/{}",
- project, region
- );
+ option.region = Some(format!("https://www.googleapis.com/compute/v1/projects/{}/zones/{}", project, region));
+
+ option.creation_timestamp = Some(SystemTime::now().duration_since(UNIX_EPOCH)?.as_secs().to_string());
- let body = to_string(&option)?;
+ let body = to_string(&option).map_err(|e| format!("Failed to serialize request body: {}", e))?;
let url = format!(
"{}/compute/beta/projects/{}/regions/{}/targetPools",
self.base_url, project, region
@@ -89,15 +138,31 @@ impl GoogleLoadBalancer {
.header("Content-Type", "application/json")
.body(body)
.send()
- .await?;
+ .await
+ .map_err(|e| format!("Failed to send request: {}", e))?;
- Ok(response)
+ let status = response.status();
+ if !status.is_success() {
+ let response_text = response.text().await?;
+ println!("{:?}", response_text);
+ return Err(format!("Request failed with status: {}", status).into());
+ }
+
+ let body = response
+ .text()
+ .await
+ .map_err(|e| format!("Failed to read response body: {}", e))?;
+ println!("{:?}", body);
+ let mut create_loadbalancer_response = HashMap::new();
+ create_loadbalancer_response.insert("status".to_string(), status.as_u16().to_string());
+ create_loadbalancer_response.insert("body".to_string(), body);
+ Ok(create_loadbalancer_response)
}
pub async fn delete_load_balancer(
&self,
options: &HashMap<&str, &str>,
- ) -> Result<reqwest::Response, Box<dyn Error>> {
+ ) -> Result<HashMap<String, String>, Box<dyn Error>> {
let url = format!(
"{}/compute/beta/projects/{}/regions/{}/targetPools/{}",
self.base_url, options["Project"], options["Region"], options["TargetPool"]
@@ -110,15 +175,31 @@ impl GoogleLoadBalancer {
.header(AUTHORIZATION, auth_header)
.header("Content-Type", "application/json")
.send()
- .await?;
+ .await
+ .map_err(|e| format!("Failed to send request: {}", e))?;
- Ok(response)
+ let status = response.status();
+ if !status.is_success() {
+ let response_text = response.text().await?;
+ println!("{:?}", response_text);
+ return Err(format!("Request failed with status: {}", status).into());
+ }
+
+ let body = response
+ .text()
+ .await
+ .map_err(|e| format!("Failed to read response body: {}", e))?;
+ println!("{:?}", body);
+ let mut delete_loadbalancer_response = HashMap::new();
+ delete_loadbalancer_response.insert("status".to_string(), status.as_u16().to_string());
+ delete_loadbalancer_response.insert("body".to_string(), body);
+ Ok(delete_loadbalancer_response)
}
pub async fn list_load_balancer(
&self,
options: &HashMap<&str, &str>,
- ) -> Result<reqwest::Response, Box<dyn Error>> {
+ ) -> Result<HashMap<String, String>, Box<dyn Error>> {
let url = format!(
"{}/compute/beta/projects/{}/regions/{}/targetPools",
self.base_url, options["Project"], options["Region"]
@@ -131,15 +212,31 @@ impl GoogleLoadBalancer {
.header(AUTHORIZATION, auth_header)
.header("Content-Type", "application/json")
.send()
- .await?;
+ .await
+ .map_err(|e| format!("Failed to send request: {}", e))?;
- Ok(response)
+ let status = response.status();
+ if !status.is_success() {
+ let response_text = response.text().await?;
+ println!("{:?}", response_text);
+ return Err(format!("Request failed with status: {}", status).into());
+ }
+
+ let body = response
+ .text()
+ .await
+ .map_err(|e| format!("Failed to read response body: {}", e))?;
+ println!("{:?}", body);
+ let mut list_loadbalancer_response = HashMap::new();
+ list_loadbalancer_response.insert("status".to_string(), status.as_u16().to_string());
+ list_loadbalancer_response.insert("body".to_string(), body);
+ Ok(list_loadbalancer_response)
}
pub async fn attach_node_with_load_balancer(
&self,
param: &HashMap<&str, &str>,
- ) -> Result<reqwest::Response, Box<dyn Error>> {
+ ) -> Result<HashMap<String, String>, Box<dyn Error>> {
let project = param["Project"];
let target_pool = param["TargetPool"];
let region = param["Region"];
@@ -161,7 +258,7 @@ impl GoogleLoadBalancer {
.collect();
json_map.insert("instances", instance_list);
- let body = to_string(&json_map)?;
+ let body = to_string(&json_map).map_err(|e| format!("Failed to serialize request body: {}", e))?;
let auth_header = self.get_authorization_header().await?;
let response = self
@@ -171,15 +268,31 @@ impl GoogleLoadBalancer {
.header("Content-Type", "application/json")
.body(body)
.send()
- .await?;
+ .await
+ .map_err(|e| format!("Failed to send request: {}", e))?;
+
+ let status = response.status();
+ if !status.is_success() {
+ let response_text = response.text().await?;
+ println!("{:?}", response_text);
+ return Err(format!("Request failed with status: {}", status).into());
+ }
- Ok(response)
+ let body = response
+ .text()
+ .await
+ .map_err(|e| format!("Failed to read response body: {}", e))?;
+ println!("{:?}", body);
+ let mut attach_node_with_load_balancer_response = HashMap::new();
+ attach_node_with_load_balancer_response.insert("status".to_string(), status.as_u16().to_string());
+ attach_node_with_load_balancer_response.insert("body".to_string(), body);
+ Ok(attach_node_with_load_balancer_response)
}
pub async fn detach_node_with_load_balancer(
&self,
param: &HashMap<&str, &str>,
- ) -> Result<reqwest::Response, Box<dyn Error>> {
+ ) -> Result<HashMap<String, String>, Box<dyn Error>> {
let project = param["Project"];
let target_pool = param["TargetPool"];
let region = param["Region"];
@@ -201,7 +314,7 @@ impl GoogleLoadBalancer {
.collect();
json_map.insert("instances", instance_list);
- let body = to_string(&json_map)?;
+ let body = to_string(&json_map).map_err(|e| format!("Failed to serialize request body: {}", e))?;
let auth_header = self.get_authorization_header().await?;
let response = self
@@ -211,8 +324,23 @@ impl GoogleLoadBalancer {
.header("Content-Type", "application/json")
.body(body)
.send()
- .await?;
-
- Ok(response)
+ .await
+ .map_err(|e| format!("Failed to send request: {}", e))?;
+ let status = response.status();
+ if !status.is_success() {
+ let response_text = response.text().await?;
+ println!("{:?}", response_text);
+ return Err(format!("Request failed with status: {}", status).into());
+ }
+
+ let body = response
+ .text()
+ .await
+ .map_err(|e| format!("Failed to read response body: {}", e))?;
+ println!("{:?}", body);
+ let mut detach_node_with_load_balancer_response = HashMap::new();
+ detach_node_with_load_balancer_response.insert("status".to_string(), status.as_u16().to_string());
+ detach_node_with_load_balancer_response.insert("body".to_string(), body);
+ Ok(detach_node_with_load_balancer_response)
}
}
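`attach_node_with_load_balancer` and its detach counterpart serialize an `{"instances": [...]}` body for the target-pool add/remove instance calls. A dependency-free sketch of that body construction (a hypothetical helper; the real code builds a map and goes through `serde_json::to_string`):

```rust
// Hypothetical helper mirroring the body the attach/detach methods build:
// {"instances":[{"instance":"<instance URL>"}, ...]}
fn build_instances_body(instance_urls: &[&str]) -> String {
    let items: Vec<String> = instance_urls
        .iter()
        .map(|url| format!("{{\"instance\":\"{}\"}}", url))
        .collect();
    format!("{{\"instances\":[{}]}}", items.join(","))
}

fn main() {
    let body = build_instances_body(&["zones/us-central1-a/instances/vm-1"]);
    assert_eq!(
        body,
        "{\"instances\":[{\"instance\":\"zones/us-central1-a/instances/vm-1\"}]}"
    );
}
```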
diff --git a/rustcloud/src/gcp/gcp_apis/storage/gcp_storage.rs b/rustcloud/src/gcp/gcp_apis/storage/gcp_storage.rs
index afdedab..a5dc9ca 100644
--- a/rustcloud/src/gcp/gcp_apis/storage/gcp_storage.rs
+++ b/rustcloud/src/gcp/gcp_apis/storage/gcp_storage.rs
@@ -3,23 +3,23 @@ use reqwest::header::AUTHORIZATION;
use serde_json::Value;
use std::collections::HashMap;
-struct GoogleStorage {
+pub struct GoogleStorage {
client: reqwest::Client,
base_url: String,
}
impl GoogleStorage {
- fn new() -> Self {
+ pub fn new() -> Self {
GoogleStorage {
client: reqwest::Client::new(),
base_url: "https://www.googleapis.com/compute/v1".to_string(),
}
}
- async fn create_disk(
+ pub async fn create_disk(
&self,
request: HashMap<String, Value>,
- ) -> Result<HashMap<String, Value>, reqwest::Error> {
+ ) -> Result<HashMap<String, Value>, Box<dyn Error>> {
let mut option = HashMap::new();
let mut project_id = String::new();
let mut zone = String::new();
@@ -29,146 +29,180 @@ impl GoogleStorage {
match key.as_str() {
"projectid" => project_id = value.as_str().unwrap_or_default().to_string(),
"Name" => {
- option.insert("Name", value);
+ option.insert("name", value);
}
"Zone" => zone = value.as_str().unwrap_or_default().to_string(),
"Type" => disk_type = value.as_str().unwrap_or_default().to_string(),
"SizeGb" => {
- option.insert("SizeGb", value);
+ option.insert("sizeGb", value);
}
"SourceImageEncryptionKeys" => {
- option.insert("SourceImageEncryptionKeys", value);
+ option.insert("sourceImageEncryptionKey", value);
}
"DiskEncryptionKeys" => {
- option.insert("DiskEncryptionKeys", value);
+ option.insert("diskEncryptionKey", value);
}
"SourceSnapshotEncryptionKeys" => {
- option.insert("SourceSnapshotEncryptionKeys", value);
+ option.insert("sourceSnapshotEncryptionKey", value);
}
"Licenses" => {
- option.insert("Licenses", value);
+ option.insert("licenses", value);
}
"Users" => {
- option.insert("Users", value);
+ option.insert("users", value);
}
"CreationTimestamp" => {
- option.insert("CreationTimestamp", value);
+ option.insert("creationTimestamp", value);
}
"Description" => {
- option.insert("Description", value);
+ option.insert("description", value);
}
"ID" => {
- option.insert("ID", value);
+ option.insert("id", value);
}
"Kind" => {
- option.insert("Kind", value);
+ option.insert("kind", value);
}
"LabelFingerprint" => {
- option.insert("LabelFingerprint", value);
+ option.insert("labelFingerprint", value);
}
"SourceSnapshotID" => {
- option.insert("SourceSnapshotID", value);
+ option.insert("sourceSnapshotID", value);
}
"Status" => {
- option.insert("Status", value);
+ option.insert("status", value);
}
"LastAttachTimestamp" => {
- option.insert("LastAttachTimestamp", value);
+ option.insert("lastAttachTimestamp", value);
}
"LastDetachTimestamp" => {
- option.insert("LastDetachTimestamp", value);
+ option.insert("lastDetachTimestamp", value);
}
"Options" => {
- option.insert("Options", value);
+ option.insert("options", value);
}
"SelfLink" => {
- option.insert("SelfLink", value);
+ option.insert("selfLink", value);
}
"SourceImage" => {
- option.insert("SourceImage", value);
+ option.insert("sourceImage", value);
}
"SourceImageID" => {
- option.insert("SourceImageID", value);
+ option.insert("sourceImageID", value);
}
"SourceSnapshot" => {
- option.insert("SourceSnapshot", value);
+ option.insert("sourceSnapshot", value);
}
_ => {}
}
}
+ let zone_value = Value::String(format!("projects/{}/zones/{}", project_id, zone));
+ let type_value = Value::String(format!(
+ "projects/{}/zones/{}/diskTypes/{}",
+ project_id, zone, disk_type
+ ));
option.insert(
- "Zone",
- &Value::String(format!("projects/{}/zones/{}", project_id, zone)),
+ "zone",
+ &zone_value,
);
+
option.insert(
- "Type",
- &Value::String(format!(
- "projects/{}/zones/{}/diskTypes/{}",
- project_id, zone, disk_type
- )),
+ "type",
+ &type_value,
);
let create_disk_json = serde_json::to_string(&option).unwrap();
let url = format!(
"{}/projects/{}/zones/{}/disks",
- self.base_url, project_id, zone
+ self.base_url, project_id.clone(), zone.clone()
);
- let token = retrieve_token().await.unwrap();
+ let token = retrieve_token().await.map_err(|e| format!("Failed to retrieve token: {}", e))?;
let resp = self
.client
.post(&url)
.header("Content-Type", "application/json")
- .header(AUTHORIZATION, token)
+ .header(AUTHORIZATION, format!("Bearer {}", token))
.body(create_disk_json)
.send()
- .await?;
+ .await
+ .map_err(|e| format!("Failed to send request: {}", e))?;
+
+
+ let status = resp.status();
+ if !status.is_success() {
+ let response_text = resp.text().await?;
+ println!("{:?}", response_text);
+ return Err(format!("Request failed with status: {}", status).into());
+ }
+
+ // Parse the response body
+ let body = resp
+ .text()
+ .await
+ .map_err(|e| format!("Failed to read response body: {}", e))?;
+
+ println!("{:?}", body);
- let mut body = String::new();
let mut response: HashMap<String, Value> = HashMap::new();
response.insert(
"status".to_string(),
- Value::Number(resp.status().as_u16().into()),
+ Value::Number(status.as_u16().into()),
);
response.insert("body".to_string(), Value::String(body));
Ok(response)
}
- async fn delete_disk(
+ pub async fn delete_disk(
&self,
request: HashMap<String, Value>,
- ) -> Result<HashMap<String, Value>, reqwest::Error> {
+ ) -> Result<HashMap<String, Value>, Box<dyn Error>> {
let url = format!(
"{}/projects/{}/zones/{}/disks/{}",
self.base_url, request["projectid"], request["Zone"], request["disk"]
);
- let token = retrieve_token().await.unwrap();
+ let token = retrieve_token().await.map_err(|e| format!("Failed to retrieve token: {}", e))?;
let resp = self
.client
.delete(&url)
.header("Content-Type", "application/json")
- .header(AUTHORIZATION, token)
+ .header(AUTHORIZATION, format!("Bearer {}", token))
.send()
- .await?;
+ .await
+ .map_err(|e| format!("Failed to send request: {}", e))?;
- let mut body = String::new();
+
+ let status = resp.status();
+ if !status.is_success() {
+ let response_text = resp.text().await?;
+ println!("{:?}", response_text);
+ return Err(format!("Request failed with status: {}", status).into());
+ }
+
+ // Parse the response body
+ let body = resp
+ .text()
+ .await
+ .map_err(|e| format!("Failed to read response body: {}", e))?;
+
+ println!("{:?}", body);
let mut response = HashMap::new();
response.insert(
"status".to_string(),
- Value::Number(resp.status().as_u16().into()),
+ Value::Number(status.as_u16().into()),
);
response.insert("body".to_string(), Value::String(body));
Ok(response)
}
- async fn create_snapshot(
+ pub async fn create_snapshot(
&self,
request: HashMap<String, Value>,
- ) -> Result<HashMap<String, Value>, reqwest::Error> {
+ ) -> Result<HashMap<String, Value>, Box<dyn Error>> {
let mut snapshot = HashMap::new();
let mut project_id = String::new();
let mut zone = String::new();
@@ -178,54 +212,54 @@ impl GoogleStorage {
match key.as_str() {
"projectid" => project_id = value.as_str().unwrap_or_default().to_string(),
"Name" => {
- snapshot.insert("Name", value);
+ snapshot.insert("name", value);
}
"Zone" => zone = value.as_str().unwrap_or_default().to_string(),
"disk" => disk = value.as_str().unwrap_or_default().to_string(),
"CreationTimestamp" => {
- snapshot.insert("CreationTimestamp", value);
+ snapshot.insert("creationTimestamp", value);
}
"Description" => {
- snapshot.insert("Description", value);
+ snapshot.insert("description", value);
}
"DiskSizeGb" => {
- snapshot.insert("DiskSizeGb", value);
+ snapshot.insert("diskSizeGb", value);
}
"ID" => {
- snapshot.insert("ID", value);
+ snapshot.insert("id", value);
}
"Kind" => {
- snapshot.insert("Kind", value);
+ snapshot.insert("kind", value);
}
"LabelFingerprint" => {
- snapshot.insert("LabelFingerprint", value);
+ snapshot.insert("labelFingerprint", value);
}
"SelfLink" => {
- snapshot.insert("SelfLink", value);
+ snapshot.insert("selfLink", value);
}
"StorageBytes" => {
- snapshot.insert("StorageBytes", value);
+ snapshot.insert("storageBytes", value);
}
"Status" => {
- snapshot.insert("Status", value);
+ snapshot.insert("status", value);
}
"SourceDiskID" => {
- snapshot.insert("SourceDiskID", value);
+ snapshot.insert("sourceDiskID", value);
}
"SourceDisk" => {
- snapshot.insert("SourceDisk", value);
+ snapshot.insert("sourceDisk", value);
}
"StorageBytesStatus" => {
- snapshot.insert("StorageBytesStatus", value);
+ snapshot.insert("storageBytesStatus", value);
}
"Licenses" => {
- snapshot.insert("Licenses", value);
+ snapshot.insert("licenses", value);
}
"SourceDiskEncryptionKeys" => {
- snapshot.insert("SourceDiskEncryptionKeys", value);
+ snapshot.insert("sourceDiskEncryptionKey", value);
}
"SnapshotEncryptionKeys" => {
- snapshot.insert("SnapshotEncryptionKeys", value);
+ snapshot.insert("snapshotEncryptionKey", value);
}
_ => {}
}
@@ -236,26 +270,40 @@ impl GoogleStorage {
"{}/projects/{}/zones/{}/disks/{}/createSnapshot",
self.base_url, project_id, zone, disk
);
- let token = retrieve_token().await.unwrap();
+ let token = retrieve_token().await.map_err(|e| format!("Failed to retrieve token: {}", e))?;
let resp = self
.client
.post(&url)
.header("Content-Type", "application/json")
- .header(AUTHORIZATION, token)
+ .header(AUTHORIZATION, format!("Bearer {}", token))
.body(create_snapshot_json)
.send()
- .await?;
+ .await
+ .map_err(|e| format!("Failed to send request: {}", e))?;
- let mut body = String::new();
+ let status = resp.status();
+ if !status.is_success() {
+ let response_text = resp.text().await?;
+ println!("{:?}", response_text);
+ return Err(format!("Request failed with status: {}", status).into());
+ }
+
+ // Parse the response body
+ let body = resp
+ .text()
+ .await
+ .map_err(|e| format!("Failed to read response body: {}", e))?;
+
+ println!("{:?}", body);
let mut response = HashMap::new();
response.insert(
"status".to_string(),
- Value::Number(resp.status().as_u16().into()),
+ Value::Number(status.as_u16().into()),
);
response.insert("body".to_string(), Value::String(body));
Ok(response)
}
-}
+}
\ No newline at end of file
diff --git a/rustcloud/src/gcp/types/artificial_intelligence/gcp_automl_types.rs b/rustcloud/src/gcp/types/artificial_intelligence/gcp_automl_types.rs
index be53b98..d961932 100644
--- a/rustcloud/src/gcp/types/artificial_intelligence/gcp_automl_types.rs
+++ b/rustcloud/src/gcp/types/artificial_intelligence/gcp_automl_types.rs
@@ -3,7 +3,7 @@ use std::collections::HashMap;
#[derive(Debug, Serialize, Deserialize)]
pub struct CreateDatasetRequest {
- pub parent: String,
+ // pub parent: String,
pub dataset: Dataset,
}
diff --git a/rustcloud/src/gcp/types/compute/gcp_container_types.rs b/rustcloud/src/gcp/types/compute/gcp_container_types.rs
new file mode 100644
index 0000000..70aa513
--- /dev/null
+++ b/rustcloud/src/gcp/types/compute/gcp_container_types.rs
@@ -0,0 +1,147 @@
+use serde::{Deserialize, Serialize};
+use serde_json::Value;
+
+// Represents Google Container attributes and methods
+
+// Represents the attributes of CreateCluster
+#[derive(Debug, Default, Serialize, Deserialize)]
+pub struct CreateCluster {
+ pub name: String,
+ pub zone: String,
+ pub network: String,
+ #[serde(rename = "loggingService")]
+ pub logging_service: String,
+ #[serde(rename = "monitoringService")]
+ pub monitoring_service: String,
+ #[serde(rename = "initialClusterVersion")]
+ pub initial_cluster_version: String,
+ pub subnetwork: String,
+ #[serde(rename = "legacyAbac")]
+ pub legacy_abac: LegacyAbac,
+ #[serde(rename = "masterAuth")]
+ pub master_auth: MasterAuth,
+ #[serde(rename = "nodePools")]
+ pub node_pools: Vec<NodePool>,
+}
+
+// Represents the legacyAbac attributes of CreateCluster
+#[derive(Debug, Default, Serialize, Deserialize)]
+pub struct LegacyAbac {
+ pub enabled: bool,
+}
+
+// Represents the masterAuth attributes of CreateCluster
+#[derive(Debug, Default, Serialize, Deserialize)]
+pub struct MasterAuth {
+ pub username: String,
+ #[serde(rename = "clientCertificateConfig")]
+ pub client_certificate_config: ClientCertificateConfigs,
+}
+
+// Represents the ClientCertificateConfigs attributes of MasterAuth
+#[derive(Debug, Default, Serialize, Deserialize)]
+pub struct ClientCertificateConfigs {
+ #[serde(rename = "issueClientCertificate")]
+ pub issue_client_certificate: bool,
+}
+
+// Represents the config attributes of NodePool
+#[derive(Debug, Default, Serialize, Deserialize)]
+pub struct Config {
+ #[serde(rename = "machineType")]
+ pub machine_type: String,
+ #[serde(rename = "imageType")]
+ pub image_type: String,
+ #[serde(rename = "diskSizeGb")]
+ pub disk_size_gb: i32,
+ pub preemptible: bool,
+ #[serde(rename = "oauthScopes")]
+ pub oauth_scopes: Vec<String>,
+}
+
+// Represents the autoscaling attributes of NodePool
+#[derive(Debug, Default, Serialize, Deserialize)]
+pub struct Autoscaling {
+ pub enabled: bool,
+ #[serde(rename = "minNodeCount")]
+ pub min_node_count: i32,
+ #[serde(rename = "maxNodeCount")]
+ pub max_node_count: i32,
+}
+
+// Represents the management attributes of NodePool
+#[derive(Debug, Default, Serialize, Deserialize)]
+pub struct Management {
+ #[serde(rename = "autoUpgrade")]
+ pub auto_upgrade: bool,
+ #[serde(rename = "autoRepair")]
+ pub auto_repair: bool,
+}
+
+// Represents NodePool attributes of CreateCluster
+#[derive(Debug, Default, Serialize, Deserialize)]
+pub struct NodePool {
+ pub name: String,
+ #[serde(rename = "initialNodeCount")]
+ pub initial_node_count: i64,
+ pub config: Config,
+ pub autoscaling: Autoscaling,
+ pub management: Management,
+}
+
+// Represents NodePool attributes of CreateService
+#[derive(Debug, Default, Serialize, Deserialize)]
+pub struct NodePoolService {
+ #[serde(rename = "config")]
+ pub config: ConfigService,
+ pub name: String,
+ #[serde(rename = "statusMessage")]
+ pub status_message: String,
+ pub autoscaling: AutoscalingService,
+ #[serde(rename = "initialNodeCount")]
+ pub initial_node_count: i32,
+ pub management: ManagementService,
+ #[serde(rename = "selfLink")]
+ pub self_link: String,
+ pub version: String,
+ #[serde(rename = "instanceGroupUrls")]
+ pub instance_group_urls: Vec<String>,
+ pub status: String,
+}
+
+// Represents config attributes of NodePool for CreateService
+#[derive(Debug, Default, Serialize, Deserialize)]
+pub struct ConfigService {
+ #[serde(rename = "machineType")]
+ pub machine_type: String,
+ #[serde(rename = "imageType")]
+ pub image_type: String,
+ #[serde(rename = "oauthScopes")]
+ pub oauth_scopes: Vec<String>,
+ pub preemptible: bool,
+ #[serde(rename = "localSsdCount")]
+ pub local_ssd_count: i32,
+ #[serde(rename = "diskSizeGb")]
+ pub disk_size_gb: i32,
+ #[serde(rename = "serviceAccount")]
+ pub service_account: String,
+}
+
+// Represents autoscaling attributes of NodePool for CreateService
+#[derive(Debug, Default, Serialize, Deserialize)]
+pub struct AutoscalingService {
+ #[serde(rename = "maxNodeCount")]
+ pub max_node_count: i32,
+ #[serde(rename = "minNodeCount")]
+ pub min_node_count: i32,
+ pub enabled: bool,
+}
+
+// Represents management attributes of NodePool for CreateService
+#[derive(Debug, Default, Serialize, Deserialize)]
+pub struct ManagementService {
+ #[serde(rename = "autoRepair")]
+ pub auto_repair: bool,
+ #[serde(rename = "autoUpgrade")]
+ pub auto_upgrade: bool,
+}
diff --git a/rustcloud/src/gcp/types/compute/gcp_kubernetes_types.rs b/rustcloud/src/gcp/types/compute/gcp_kubernetes_types.rs
deleted file mode 100644
index 167f548..0000000
--- a/rustcloud/src/gcp/types/compute/gcp_kubernetes_types.rs
+++ /dev/null
@@ -1,100 +0,0 @@
-use std::collections::HashMap;
-
-use serde::{Deserialize, Serialize};
-
-// Define request and response structs based on your API specification
-#[derive(Debug, Serialize)]
-pub struct CreateClusterRequest {
- pub projectId: String,
- pub zone: String,
- pub cluster: HashMap<String, String>,
-}
-
-#[derive(Debug, Serialize)]
-pub struct DeleteClusterRequest {
- pub project_id: String,
- pub zone: String,
- pub cluster_id: String,
-}
-
-#[derive(Debug, Deserialize)]
-struct ListClustersResponse {
- // Define fields based on response structure
-}
-
-#[derive(Debug, Serialize)]
-pub struct ListClustersRequest {
- pub project_id: String,
- pub zone: String,
-}
-
-#[derive(Debug, Serialize)]
-pub struct GetClusterRequest {
- pub project_id: String,
- pub zone: String,
- pub cluster_id: String,
-}
-
-#[derive(Debug, Deserialize)]
-struct GetClusterResponse {
- // Define fields based on response structure
-}
-
-#[derive(Debug, Serialize)]
-pub struct CreateNodePoolRequest {
- pub projectId: String,
- pub zone: String,
- pub clusterId: String,
- pub nodePool: HashMap<String, String>,
-}
-
-#[derive(Debug, Deserialize)]
-struct CreateNodePoolResponse {
- // Define fields based on response structure
-}
-
-#[derive(Debug, Serialize)]
-pub struct DeleteNodePoolRequest {
- pub project_id: String,
- pub zone: String,
- pub cluster_id: String,
- pub node_pool_id: String,
-}
-
-#[derive(Debug, Deserialize)]
-struct GetNodePoolResponse {
- // Define fields based on response structure
-}
-
-#[derive(Debug, Serialize)]
-pub struct GetNodePoolRequest {
- pub project_id: String,
- pub zone: String,
- pub cluster_id: String,
- pub node_pool_id: String,
-}
-
-#[derive(Debug, Serialize)]
-pub struct ListNodePoolsRequest {
- pub project_id: String,
- pub zone: String,
- pub cluster_id: String,
-}
-
-#[derive(Debug, Deserialize)]
-struct ListNodePoolsResponse {
- // Define fields based on response structure
-}
-
-#[derive(Debug, Serialize)]
-pub struct SetAddonsConfigRequest {
- pub projectId: String,
- pub zone: String,
- pub clusterId: String,
- pub addonsConfig: HashMap<String, String>, // Add other fields as required
-}
-
-#[derive(Debug, Deserialize)]
-struct SetAddonsConfigResponse {
- // Define fields based on response structure
-}
diff --git a/rustcloud/src/gcp/types/compute/mod.rs b/rustcloud/src/gcp/types/compute/mod.rs
index 4f80217..cf0e434 100644
--- a/rustcloud/src/gcp/types/compute/mod.rs
+++ b/rustcloud/src/gcp/types/compute/mod.rs
@@ -1,2 +1,2 @@
pub mod gcp_compute_engine_type;
-pub mod gcp_kubernetes_types;
+pub mod gcp_container_types;
diff --git a/rustcloud/src/gcp/types/database/gcp_bigtable_types.rs b/rustcloud/src/gcp/types/database/gcp_bigtable_types.rs
index d7a6c61..69c0230 100644
--- a/rustcloud/src/gcp/types/database/gcp_bigtable_types.rs
+++ b/rustcloud/src/gcp/types/database/gcp_bigtable_types.rs
@@ -1,3 +1,5 @@
+use std::collections::HashMap;
+
use serde::{Deserialize, Serialize};
// InitialSplits struct represents InitialSplits.
@@ -9,16 +11,30 @@ pub struct InitialSplits {
// Table struct represents Table.
#[derive(Debug, Serialize, Deserialize)]
pub struct Table {
- pub granularity: String,
- pub name: String,
+ pub name: Option<String>,
+ #[serde(rename = "clusterStates")]
+ pub cluster_states: Option<HashMap<String, ClusterStates>>,
+ #[serde(rename = "columnFamilies")]
+ pub column_families: Option<HashMap<String, serde_json::Value>>,
+ pub granularity: Option<String>,
+ #[serde(rename = "restoreInfo")]
+ pub restore_info: Option<serde_json::Value>,
+ #[serde(rename = "changeStreamConfig")]
+ pub change_stream_config: Option<serde_json::Value>,
+ #[serde(rename = "deletionProtection")]
+ pub deletion_protection: Option<bool>,
+ pub stats: Option<serde_json::Value>,
+ #[serde(rename = "automatedBackupPolicy")]
+ pub automated_backup_policy: Option<serde_json::Value>,
}
-// ClusterStates struct represents ClusterStates.
#[derive(Debug, Serialize, Deserialize)]
pub struct ClusterStates {
+ #[serde(rename = "replicationState")]
pub replication_state: String,
+ #[serde(rename = "encryptionInfo")]
+ pub encryption_info: Vec<serde_json::Value>,
}
-
// GcRule struct represents GcRule.
#[derive(Debug, Serialize, Deserialize)]
pub struct GcRule {
@@ -31,6 +47,6 @@ pub struct GcRule {
pub struct CreateBigtable {
pub table_id: String,
pub table: Table,
- pub initial_splits: Vec<InitialSplits>,
- pub cluster_states: ClusterStates,
+ #[serde(rename = "initialSplits")]
+ pub initial_splits: Option<Vec<InitialSplits>>,
}
diff --git a/rustcloud/src/gcp/types/network/gcp_dns_types.rs b/rustcloud/src/gcp/types/network/gcp_dns_types.rs
index 7f523c4..ad0c992 100644
--- a/rustcloud/src/gcp/types/network/gcp_dns_types.rs
+++ b/rustcloud/src/gcp/types/network/gcp_dns_types.rs
@@ -2,12 +2,16 @@ use serde::{Deserialize, Serialize};
#[derive(Debug, Serialize, Deserialize)]
pub struct CreateDns {
- pub creation_time: String,
- pub description: String,
- pub dns_name: String,
- pub name_servers: Vec<String>,
- pub id: String,
- pub kind: String,
- pub name: String,
- pub name_server_set: String,
+ #[serde(rename = "creationTime")]
+ pub creation_time: Option<String>,
+ pub description: Option<String>,
+ #[serde(rename = "dnsName")]
+ pub dns_name: Option<String>,
+ #[serde(rename = "nameServers")]
+ pub name_servers: Option<Vec<String>>,
+ pub id: Option<String>,
+ pub kind: Option<String>,
+ pub name: Option<String>,
+ #[serde(rename = "nameServerSet")]
+ pub name_server_set: Option<String>,
}
diff --git a/rustcloud/src/gcp/types/network/gcp_loadbalancer_types.rs b/rustcloud/src/gcp/types/network/gcp_loadbalancer_types.rs
index c5e3582..6915503 100644
--- a/rustcloud/src/gcp/types/network/gcp_loadbalancer_types.rs
+++ b/rustcloud/src/gcp/types/network/gcp_loadbalancer_types.rs
@@ -2,16 +2,22 @@ use serde::{Deserialize, Serialize};
#[derive(Debug, Serialize, Deserialize)]
pub struct TargetPools {
- pub name: String,
- pub health_checks: Vec<String>,
- pub description: String,
- pub backup_pool: String,
- pub failover_ratio: i32,
- pub id: String,
- pub instances: Vec<String>,
- pub kind: String,
- pub session_affinity: String,
- pub region: String,
- pub self_link: String,
- pub creation_timestamp: String,
+ pub name: Option<String>,
+ #[serde(rename = "healthChecks")]
+ pub health_checks: Option<Vec<String>>,
+ pub description: Option<String>,
+ #[serde(rename = "backupPool")]
+ pub backup_pool: Option<String>,
+ #[serde(rename = "failoverRatio")]
+ pub failover_ratio: Option<i32>,
+ pub id: Option<String>,
+ pub instances: Option<Vec<String>>,
+ pub kind: Option<String>,
+ #[serde(rename = "sessionAffinity")]
+ pub session_affinity: Option<String>,
+ pub region: Option<String>,
+ #[serde(rename = "selfLink")]
+ pub self_link: Option<String>,
+ #[serde(rename = "creationTimestamp")]
+ pub creation_timestamp: Option<String>,
}
diff --git a/rustcloud/src/main.rs b/rustcloud/src/main.rs
index df69255..c5c0aae 100644
--- a/rustcloud/src/main.rs
+++ b/rustcloud/src/main.rs
@@ -33,12 +33,12 @@ pub mod gcp {
pub mod app_services {
pub mod gcp_notification_service;
}
- pub mod artificial_intelligence {
- pub mod gcp_automl;
- }
+ // pub mod artificial_intelligence {
+ // pub mod gcp_automl;
+ // }
pub mod compute {
pub mod gcp_compute_engine;
- pub mod gcp_kubernetes;
+ pub mod gcp_container;
}
pub mod database {
pub mod gcp_bigtable;
diff --git a/rustcloud/src/tests/gcp_automl_operations.rs b/rustcloud/src/tests/gcp_automl_operations.rs
deleted file mode 100644
index 0445bca..0000000
--- a/rustcloud/src/tests/gcp_automl_operations.rs
+++ /dev/null
@@ -1,134 +0,0 @@
-use crate::gcp::gcp_apis::artificial_intelligence::gcp_automl::*;
-use std::collections::HashMap;
-use tokio::test;
-
-async fn create_client() -> AutoML {
- AutoML::new("your_project_id")
-}
-
-#[tokio::test]
-async fn test_create_dataset() {
- let client = create_client().await;
-
- let location = "your_location";
- let name = "your_dataset_name";
-
- let result = client.create_dataset(location, name).await;
- assert!(result.is_ok());
-}
-
-#[tokio::test]
-async fn test_get_dataset() {
- let client = create_client().await;
-
- let location = "your_location";
- let dataset_id = "your_dataset_id";
-
- let result = client.get_dataset(location, dataset_id).await;
- assert!(result.is_ok());
-}
-
-#[tokio::test]
-async fn test_import_data_set() {
- let client = create_client().await;
-
- let location = "your_location";
- let dataset_id = "your_dataset_id";
- let uris = vec!["gs://your_bucket/your_file.csv".to_string()];
-
- let result = client.import_data_set(location, dataset_id, uris).await;
- assert!(result.is_ok());
-}
-
-#[tokio::test]
-async fn test_list_models() {
- let client = create_client().await;
-
- let location = "your_location";
-
- let result = client.list_models(location).await;
- assert!(result.is_ok());
-}
-
-#[tokio::test]
-async fn test_create_model() {
- let client = create_client().await;
-
- let location = "your_location";
- let dataset_id = "your_dataset_id";
- let model_name = "your_model_name";
- let column_id = "your_column_id";
- let train_budget = 1000;
-
- let result = client
- .create_model(location, dataset_id, model_name, column_id, train_budget)
- .await;
- assert!(result.is_ok());
-}
-
-#[tokio::test]
-async fn test_deploy_model() {
- let client = create_client().await;
-
- let location = "your_location";
- let model_id = "your_model_id";
-
- let result = client.deploy_model(location, model_id).await;
- assert!(result.is_ok());
-}
-
-#[tokio::test]
-async fn test_undeploy_model() {
- let client = create_client().await;
-
- let location = "your_location";
- let model_id = "your_model_id";
-
- let result = client.undeploy_model(location, model_id).await;
- assert!(result.is_ok());
-}
-
-#[tokio::test]
-async fn test_get_model() {
- let client = create_client().await;
-
- let location = "your_location";
- let model_id = "your_model_id";
-
- let result = client.get_model(location, model_id).await;
- assert!(result.is_ok());
-}
-
-#[tokio::test]
-async fn test_export_dataset() {
- let client = create_client().await;
-
- let location = "your_location";
- let dataset_id = "your_dataset_id";
- let gcs_uri = "gs://your_bucket/your_export_path/";
-
- let result = client.export_dataset(location, dataset_id, gcs_uri).await;
- assert!(result.is_ok());
-}
-
-#[tokio::test]
-async fn test_delete_model() {
- let client = create_client().await;
-
- let location = "your_location";
- let model_id = "your_model_id";
-
- let result = client.delete_model(location, model_id).await;
- assert!(result.is_ok());
-}
-
-#[tokio::test]
-async fn test_delete_dataset() {
- let client = create_client().await;
-
- let location = "your_location";
- let dataset_id = "your_dataset_id";
-
- let result = client.delete_dataset(location, dataset_id).await;
- assert!(result.is_ok());
-}
diff --git a/rustcloud/src/tests/gcp_bigtable_operations.rs b/rustcloud/src/tests/gcp_bigtable_operations.rs
index 5fe8351..50a3e00 100644
--- a/rustcloud/src/tests/gcp_bigtable_operations.rs
+++ b/rustcloud/src/tests/gcp_bigtable_operations.rs
@@ -1,3 +1,5 @@
+use std::collections::HashMap;
+
use crate::gcp::gcp_apis::database::gcp_bigtable::*;
use crate::gcp::types::database::gcp_bigtable_types::*;
use serde_json::json;
@@ -11,59 +13,75 @@ async fn create_client() -> Bigtable {
async fn test_list_tables() {
let client = create_client().await;
- let parent = "projects/your_project_id/instances/your_instance_id";
+ let parent = "projects/rare-daylight-403814/instances/rustcloudtest";
let result = client.list_tables(parent, None, None).await;
assert!(result.is_ok());
let response = result.unwrap();
- assert_eq!(response["status"], 200);
+ assert_eq!(response["status"], "200".to_string());
}
#[tokio::test]
async fn test_delete_tables() {
let client = create_client().await;
- let name = "projects/your_project_id/instances/your_instance_id/tables/your_table_id";
+ let name = "projects/rare-daylight-403814/instances/rustcloudtest/tables/your_table_id1";
let result = client.delete_tables(name).await;
assert!(result.is_ok());
let response = result.unwrap();
- assert_eq!(response["status"], 200);
+ assert_eq!(response["status"], "200".to_string());
}
#[tokio::test]
async fn test_describe_tables() {
let client = create_client().await;
- let name = "projects/your_project_id/instances/your_instance_id/tables/your_table_id";
- let result = client.describe_tables(name).await;
+ let name = "projects/rare-daylight-403814/instances/rustcloudtest/tables/test";
+ let table = Table {
+ name: Some("Test2".to_string()),
+ cluster_states: Some(HashMap::new()),
+ column_families: Some(HashMap::new()),
+ granularity: None,
+ restore_info: None,
+ change_stream_config: None,
+ deletion_protection: None,
+ stats: None,
+ automated_backup_policy: None,
+ };
+ let result = client.describe_tables(name, "changeStreamConfig", table).await;
assert!(result.is_ok());
let response = result.unwrap();
- assert_eq!(response["status"], 200);
+ assert_eq!(response["status"], "200".to_string());
}
#[tokio::test]
async fn test_create_tables() {
let client = create_client().await;
- let parent = "projects/your_project_id/instances/your_instance_id";
- let table_id = "your_table_id";
- let table = Table {
- // Populate Table struct fields
- };
- let initial_splits = vec![InitialSplits {
- // Populate InitialSplits struct fields
- }];
- let cluster_states = ClusterStates {
- // Populate ClusterStates struct fields
+ let parent = "projects/rare-daylight-403814/instances/rustcloudtest";
+ let table_id = "your_table_id1";
+ let table = Table {
+ name: None,
+ cluster_states: Some(HashMap::new()),
+ column_families: Some(HashMap::new()),
+ granularity: None,
+ restore_info: None,
+ change_stream_config: None,
+ deletion_protection: None,
+ stats: None,
+ automated_backup_policy: None,
};
+
+ let initial_splits: Option<Vec<InitialSplits>> = None;
+
let result = client
- .create_tables(parent, table_id, table, initial_splits, cluster_states)
+ .create_table(parent, table_id, table, initial_splits)
.await;
assert!(result.is_ok());
let response = result.unwrap();
- assert_eq!(response["status"], 200);
+ assert_eq!(response["status"], "200".to_string());
}
diff --git a/rustcloud/src/tests/gcp_compute_operations.rs b/rustcloud/src/tests/gcp_compute_operations.rs
index 8526bc2..ddfb0a3 100644
--- a/rustcloud/src/tests/gcp_compute_operations.rs
+++ b/rustcloud/src/tests/gcp_compute_operations.rs
@@ -8,13 +8,42 @@ async fn create_client() -> GCE {
}
#[tokio::test]
-async fn test_create_node() {
+async fn test_create_gcp_node() {
let client = create_client().await;
let mut request = HashMap::new();
- request.insert("projectid".to_string(), json!("your_project_id"));
- request.insert("Zone".to_string(), json!("your_zone"));
- request.insert("Name".to_string(), json!("your_instance_name"));
+ let initialize_params = json!({
+ "SourceImage": "https://www.googleapis.com/compute/v1/projects/debian-cloud/global/images/debian-8-jessie-v20160301",
+ "DiskType": "projects/rare-daylight-403814/zones/us-east4-c/diskTypes/pd-standard",
+ "DiskSizeGb": "10",
+ });
+
+ let disk = json!([{
+ "Boot": true,
+ "AutoDelete": false,
+ "DeviceName": "bokya",
+ "Type": "PERSISTENT",
+ "Mode": "READ_WRITE",
+ "InitializeParams": initialize_params,
+ }]);
+
+ let access_configs = json!([{
+ "Name": "external-nat",
+ "Type": "ONE_TO_ONE_NAT",
+ }]);
+
+ let network_interfaces = json!([{
+ "Network": "global/networks/default",
+ "Subnetwork": "projects/rare-daylight-403814/regions/us-east4/subnetworks/default",
+ "AccessConfigs": access_configs,
+ }]);
+ request.insert("projectid".to_string(), json!("rare-daylight-403814"));
+ request.insert("Zone".to_string(), json!("us-east4-c"));
+ request.insert("Name".to_string(), json!("alpha-123-xyz"));
+ request.insert("MachineType".to_string(), json!("zones/us-east4-c/machineTypes/n1-standard-1"));
+ request.insert("Disk".to_string(), disk);
+ request.insert("NetworkInterfaces".to_string(), network_interfaces);
+
// Add other required fields for the create_node request here.
let result = client.create_node(request).await;
@@ -22,64 +51,65 @@ async fn test_create_node() {
}
#[tokio::test]
-async fn test_start_node() {
+async fn test_start_gcp_node() {
let client = create_client().await;
let mut request = HashMap::new();
- request.insert("projectid".to_string(), "your_project_id".to_string());
- request.insert("Zone".to_string(), "your_zone".to_string());
- request.insert("instance".to_string(), "your_instance_name".to_string());
+ request.insert("projectid".to_string(), "rare-daylight-403814".to_string());
+ request.insert("Zone".to_string(), "us-east4-c".to_string());
+ request.insert("instance".to_string(), "alpha-123-xyz".to_string());
let result = client.start_node(request).await;
assert!(result.is_ok());
}
#[tokio::test]
-async fn test_stop_node() {
+async fn test_stop_gcp_node() {
let client = create_client().await;
let mut request = HashMap::new();
- request.insert("projectid".to_string(), "your_project_id".to_string());
- request.insert("Zone".to_string(), "your_zone".to_string());
- request.insert("instance".to_string(), "your_instance_name".to_string());
+ request.insert("projectid".to_string(), "rare-daylight-403814".to_string());
+ request.insert("Zone".to_string(), "us-east4-c".to_string());
+ request.insert("instance".to_string(), "alpha-123-xyz".to_string());
let result = client.stop_node(request).await;
assert!(result.is_ok());
}
#[tokio::test]
-async fn test_delete_node() {
+async fn test_delete_gcp_node() {
let client = create_client().await;
let mut request = HashMap::new();
- request.insert("projectid".to_string(), "your_project_id".to_string());
- request.insert("Zone".to_string(), "your_zone".to_string());
- request.insert("instance".to_string(), "your_instance_name".to_string());
+ request.insert("projectid".to_string(), "rare-daylight-403814".to_string());
+ request.insert("Zone".to_string(), "us-east4-c".to_string());
+ request.insert("instance".to_string(), "alpha-123-xyz".to_string());
let result = client.delete_node(request).await;
assert!(result.is_ok());
}
#[tokio::test]
-async fn test_reboot_node() {
+async fn test_reboot_gcp_node() {
let client = create_client().await;
let mut request = HashMap::new();
- request.insert("projectid".to_string(), "your_project_id".to_string());
- request.insert("Zone".to_string(), "your_zone".to_string());
- request.insert("instance".to_string(), "your_instance_name".to_string());
+ request.insert("projectid".to_string(), "rare-daylight-403814".to_string());
+ request.insert("Zone".to_string(), "asia-south2-a".to_string());
+ request.insert("instance".to_string(), "alpha-123-xyz".to_string());
let result = client.reboot_node(request).await;
assert!(result.is_ok());
}
#[tokio::test]
-async fn test_list_node() {
+async fn test_list_gcp_node() {
let client = create_client().await;
let mut request = HashMap::new();
- request.insert("projectid".to_string(), "your_project_id".to_string());
- request.insert("Zone".to_string(), "your_zone".to_string());
+ request.insert("projectid".to_string(), "rare-daylight-403814".to_string());
+ // request.insert("Zone".to_string(), "us-east2-4".to_string());
+ request.insert("Zone".to_string(), "us-central1-b".to_string());
let result = client.list_node(request).await;
assert!(result.is_ok());
diff --git a/rustcloud/src/tests/gcp_container_operations.rs b/rustcloud/src/tests/gcp_container_operations.rs
new file mode 100644
index 0000000..ab8da9b
--- /dev/null
+++ b/rustcloud/src/tests/gcp_container_operations.rs
@@ -0,0 +1,83 @@
+use std::collections::HashMap;
+
+use crate::gcp::gcp_apis::compute::gcp_container::*;
+use crate::gcp::types::compute::gcp_container_types::*;
+use serde_json::json;
+use tokio::test;
+
+
+async fn create_client() -> GCPContainerClient {
+ GCPContainerClient::new()
+}
+
+#[tokio::test]
+async fn test_create_gcp_container_cluster() {
+ let client = create_client().await;
+
+ let nodepools = vec![
+ HashMap::from([
+ ("name".to_string(), "default-pool".to_string()),
+ ("initialNodeCount".to_string(), serde_json::to_string(&1).unwrap()),
+ ]),
+ ];
+
+ let mut request = HashMap::new();
+ request.insert("Project".to_string(), json!("rare-daylight-403814".to_string()));
+ request.insert("Name".to_string(), json!("cluster-3".to_string()));
+ request.insert("Zone".to_string(), json!("us-central1-a".to_string()));
+ request.insert("nodePools".to_string(), json!(nodepools));
+
+ let result = client.create_cluster(request).await;
+ println!("{:?}", result);
+ assert!(result.is_ok(), "Failed to create cluster");
+ let response = result.unwrap();
+ assert_eq!(response["status"], "200".to_string());
+}
+
+#[tokio::test]
+async fn test_delete_gcp_container_cluster() {
+ let client = create_client().await;
+
+ let mut request = HashMap::new();
+ request.insert("Project".to_string(), "rare-daylight-403814".to_string());
+ request.insert("clusterId".to_string(), "cluster-3".to_string());
+ request.insert("Zone".to_string(), "us-central1-a".to_string());
+
+ let result = client.delete_cluster(request).await;
+ assert!(result.is_ok(), "Failed to delete cluster");
+ let response = result.unwrap();
+ assert_eq!(response["status"], "200".to_string());
+}
+
+#[tokio::test]
+async fn test_create_service_for_gcp_container() {
+ let client = create_client().await;
+
+ let mut request = HashMap::new();
+ request.insert("Project".to_string(), json!("rare-daylight-403814".to_string()));
+ request.insert("clusterId".to_string(), json!("cluster-3".to_string()));
+ request.insert("Zone".to_string(), json!("us-central1-a".to_string()));
+ request.insert("Name".to_string(), json!("nodepool".to_string()));
+ request.insert("status".to_string(), json!("STATUS_UNSPECIFIED".to_string()));
+
+ let result = client.create_service(request).await;
+ assert!(result.is_ok(), "Failed to create service");
+ let response = result.unwrap();
+ assert_eq!(response["status"], "200".to_string());
+}
+
+#[tokio::test]
+async fn test_delete_service_for_gcp_container() {
+ let client = create_client().await;
+
+ let mut request = HashMap::new();
+ request.insert("Project".to_string(), "rare-daylight-403814".to_string());
+ request.insert("clusterId".to_string(), "cluster-3".to_string());
+ request.insert("Zone".to_string(), "us-central1-a".to_string());
+ request.insert("nodePoolId".to_string(), "nodepool".to_string());
+
+ let result = client.delete_service(request).await;
+ assert!(result.is_ok(), "Failed to delete service");
+ let response = result.unwrap();
+ assert_eq!(response["status"], "200".to_string());
+}
\ No newline at end of file
diff --git a/rustcloud/src/tests/gcp_dns_operations.rs b/rustcloud/src/tests/gcp_dns_operations.rs
index 48753bb..6d5a64d 100644
--- a/rustcloud/src/tests/gcp_dns_operations.rs
+++ b/rustcloud/src/tests/gcp_dns_operations.rs
@@ -1,74 +1,78 @@
use crate::gcp::gcp_apis::network::gcp_dns::*;
use std::collections::HashMap;
use tokio::test;
+use serde_json::json;
async fn create_client() -> GoogleDns {
- GoogleDns::new("your_project_id")
+ GoogleDns::new()
}
#[tokio::test]
async fn test_list_resource_dns_record_sets() {
let client = create_client().await;
-
+ let project_id = "rare-daylight-403814".to_string();
let mut options = HashMap::new();
- options.insert("managedZone", "your_managed_zone");
+ options.insert("managedZone", "rustcloudtest");
options.insert("maxResults", "10");
- let result = client.list_resource_dns_record_sets(&options).await;
+ let result = client.list_resource_dns_record_sets(project_id, &options).await;
assert!(result.is_ok());
let response = result.unwrap();
- assert!(response.status().is_success());
+ assert_eq!(response["status"], "200".to_string());
}
#[tokio::test]
async fn test_create_dns() {
let client = create_client().await;
+ let project_id = "rare-daylight-403814".to_string();
let mut params = HashMap::new();
- params.insert("Project", "your_project_id");
- params.insert("Description", "Test DNS Description");
- params.insert("DnsName", "test.dns.name.");
- params.insert(
- "nameServers",
- "ns-cloud-a1.googledomains.com,ns-cloud-a2.googledomains.com",
- );
- params.insert("Id", "12345");
- params.insert("Kind", "dns#managedZone");
- params.insert("Name", "test-dns");
- params.insert("nameServerSet", "test-name-server-set");
-
- let result = client.create_dns(¶ms).await;
+ params.insert("Project".to_string(), json!("rare-daylight-403814"));
+ params.insert("Description".to_string(), json!("Test DNS Description"));
+ params.insert("DnsName".to_string(), json!("test.dns1.name."));
+ // params.insert(
+ // "nameServers",
+ // "ns-cloud-a1.googledomains.com,ns-cloud-a2.googledomains.com",
+ // );
+ // params.insert("Id", "12345");
+ params.insert("Kind".to_string(), json!("dns#managedZone"));
+ params.insert("Name".to_string(), json!("test-dns1"));
+ // params.insert("nameServerSet", "test-name-server-set");
+
+ let result = client.create_dns(project_id, params).await;
assert!(result.is_ok());
let response = result.unwrap();
- assert!(response.status().is_success());
+ assert_eq!(response["status"], "200".to_string())
}
#[tokio::test]
-async fn test_list_dns() {
+async fn test_list_gcp_dns() {
let client = create_client().await;
+ let project_id = "rare-daylight-403814".to_string();
let mut options = HashMap::new();
options.insert("maxResults", "10");
- let result = client.list_dns(&options).await;
+ let result = client.list_dns(project_id, &options).await;
assert!(result.is_ok());
let response = result.unwrap();
- assert!(response.status().is_success());
+ assert_eq!(response["status"], "200".to_string());
}
#[tokio::test]
async fn test_delete_dns() {
let client = create_client().await;
+ let project_id = "rare-daylight-403814".to_string();
let mut options = HashMap::new();
- options.insert("managedZone", "your_managed_zone");
+ options.insert("managedZone", "rustcloudtest");
- let result = client.delete_dns(&options).await;
+ let result = client.delete_dns(project_id, &options).await;
assert!(result.is_ok());
let response = result.unwrap();
- assert!(response.status().is_success());
+ assert_eq!(response["status"], "200".to_string());
}
diff --git a/rustcloud/src/tests/gcp_kubernetes_operations.rs b/rustcloud/src/tests/gcp_kubernetes_operations.rs
deleted file mode 100644
index 875a5e6..0000000
--- a/rustcloud/src/tests/gcp_kubernetes_operations.rs
+++ /dev/null
@@ -1,136 +0,0 @@
-use crate::gcp::gcp_apis::compute::gcp_kubernetes::*;
-use crate::gcp::types::compute::gcp_kubernetes_types::*;
-use tokio::test;
-
-async fn create_client() -> GCPKubernetesClient {
- GCPKubernetesClient::new()
-}
-
-#[tokio::test]
-async fn test_create_cluster() {
- let client = create_client().await;
-
- let request = CreateClusterRequest {
- projectId: "your_project_id".to_string(),
- zone: "your_zone".to_string(),
- // Add other required fields for CreateClusterRequest here.
- };
-
- let result = client.create_cluster(request).await;
- assert!(result.is_ok());
-}
-
-#[tokio::test]
-async fn test_delete_cluster() {
- let client = create_client().await;
-
- let request = DeleteClusterRequest {
- project_id: "your_project_id".to_string(),
- zone: "your_zone".to_string(),
- cluster_id: "your_cluster_id".to_string(),
- };
-
- let result = client.delete_cluster(request).await;
- assert!(result.is_ok());
-}
-
-#[tokio::test]
-async fn test_list_clusters() {
- let client = create_client().await;
-
- let request = ListClustersRequest {
- project_id: "your_project_id".to_string(),
- zone: "your_zone".to_string(),
- };
-
- let result = client.list_clusters(request).await;
- assert!(result.is_ok());
-}
-
-#[tokio::test]
-async fn test_get_cluster() {
- let client = create_client().await;
-
- let request = GetClusterRequest {
- project_id: "your_project_id".to_string(),
- zone: "your_zone".to_string(),
- cluster_id: "your_cluster_id".to_string(),
- };
-
- let result = client.get_cluster(request).await;
- assert!(result.is_ok());
-}
-
-#[tokio::test]
-async fn test_create_node_pool() {
- let client = create_client().await;
-
- let request = CreateNodePoolRequest {
- projectId: "your_project_id".to_string(),
- zone: "your_zone".to_string(),
- clusterId: "your_cluster_id".to_string(),
- // Add other required fields for CreateNodePoolRequest here.
- };
-
- let result = client.create_node_pool(request).await;
- assert!(result.is_ok());
-}
-
-#[tokio::test]
-async fn test_delete_node_pool() {
- let client = create_client().await;
-
- let request = DeleteNodePoolRequest {
- project_id: "your_project_id".to_string(),
- zone: "your_zone".to_string(),
- cluster_id: "your_cluster_id".to_string(),
- node_pool_id: "your_node_pool_id".to_string(),
- };
-
- let result = client.delete_node_pool(request).await;
- assert!(result.is_ok());
-}
-
-#[tokio::test]
-async fn test_get_node_pool() {
- let client = create_client().await;
-
- let request = GetNodePoolRequest {
- project_id: "your_project_id".to_string(),
- zone: "your_zone".to_string(),
- cluster_id: "your_cluster_id".to_string(),
- node_pool_id: "your_node_pool_id".to_string(),
- };
-
- let result = client.get_node_pool(request).await;
- assert!(result.is_ok());
-}
-
-#[tokio::test]
-async fn test_list_node_pools() {
- let client = create_client().await;
-
- let request = ListNodePoolsRequest {
- project_id: "your_project_id".to_string(),
- zone: "your_zone".to_string(),
- cluster_id: "your_cluster_id".to_string(),
- };
-
- let result = client.list_node_pools(request).await;
- assert!(result.is_ok());
-}
-
-#[tokio::test]
-async fn test_set_addons_config() {
- let client = create_client().await;
-
- let request = SetAddonsConfigRequest {
- projectId: "your_project_id".to_string(),
- zone: "your_zone".to_string(),
- clusterId: "your_cluster_id".to_string(),
- // Add other required fields for SetAddonsConfigRequest here.
- };
-
- let result = client.set_addons_config(request).await;
- assert!(result.is_ok());
-}
diff --git a/rustcloud/src/tests/gcp_loadbalancer_operations.rs b/rustcloud/src/tests/gcp_loadbalancer_operations.rs
index 95a0718..fb368fa 100644
--- a/rustcloud/src/tests/gcp_loadbalancer_operations.rs
+++ b/rustcloud/src/tests/gcp_loadbalancer_operations.rs
@@ -1,34 +1,36 @@
use crate::gcp::gcp_apis::network::gcp_loadbalancer::*;
use std::collections::HashMap;
use tokio::test;
+use serde_json::json;
+
async fn create_client() -> GoogleLoadBalancer {
- GoogleLoadBalancer::new("your_project_id")
+ GoogleLoadBalancer::new("rare-daylight-403814")
}
#[tokio::test]
-async fn test_create_load_balancer() {
+async fn test_create_gcp_load_balancer() {
let client = create_client().await;
let mut params = HashMap::new();
- params.insert("Project", "your_project_id");
- params.insert("Name", "test-lb");
- params.insert("Region", "us-central1");
- params.insert("healthChecks", "healthCheck1,healthCheck2");
- params.insert("description", "Test Load Balancer");
- params.insert("BackupPool", "backup-pool");
- params.insert("failoverRatio", "0.1");
- params.insert("id", "12345");
- params.insert("Instances", "instance1,instance2");
- params.insert("kind", "compute#targetPool");
- params.insert("sessionAffinity", "NONE");
- params.insert("selfLink", "");
-
- let result = client.create_load_balancer(&params).await;
+ params.insert("Project".to_string(), json!("rare-daylight-403814"));
+ params.insert("Name".to_string(), json!("test-2b"));
+ params.insert("Region".to_string(), json!("us-central1"));
+ // params.insert("healthChecks", "healthCheck1,healthCheck2");
+ // params.insert("description", "Test Load Balancer");
+ // params.insert("BackupPool", "backup-pool");
+ // params.insert("failoverRatio", "0.1");
+ // params.insert("id", "12345");
+ params.insert("Instances".to_string(), json!("https://www.googleapis.com/compute/v1/projects/rare-daylight-403814/zones/us-central1-b/instances/instance-20240902-124323"));
+ // params.insert("kind", "compute#targetPool");
+ // params.insert("sessionAffinity", "NONE");
+ // params.insert("selfLink", "");
+
+ let result = client.create_load_balancer(params).await;
assert!(result.is_ok());
let response = result.unwrap();
- assert!(response.status().is_success());
+ assert_eq!(response["status"], "200".to_string());
}
#[tokio::test]
@@ -36,7 +38,7 @@ async fn test_delete_load_balancer() {
let client = create_client().await;
let mut options = HashMap::new();
- options.insert("Project", "your_project_id");
+ options.insert("Project", "rare-daylight-403814");
options.insert("Region", "us-central1");
options.insert("TargetPool", "test-lb");
@@ -44,22 +46,22 @@ async fn test_delete_load_balancer() {
assert!(result.is_ok());
let response = result.unwrap();
- assert!(response.status().is_success());
+ assert_eq!(response["status"], "200".to_string());
}
#[tokio::test]
-async fn test_list_load_balancer() {
+async fn test_list_gcp_load_balancer() {
let client = create_client().await;
let mut options = HashMap::new();
- options.insert("Project", "your_project_id");
+ options.insert("Project", "rare-daylight-403814");
options.insert("Region", "us-central1");
let result = client.list_load_balancer(&options).await;
assert!(result.is_ok());
let response = result.unwrap();
- assert!(response.status().is_success());
+ assert_eq!(response["status"], "200".to_string());
}
#[tokio::test]
@@ -67,16 +69,16 @@ async fn test_attach_node_with_load_balancer() {
let client = create_client().await;
let mut params = HashMap::new();
- params.insert("Project", "your_project_id");
- params.insert("TargetPool", "test-lb");
+ params.insert("Project", "rare-daylight-403814");
+ params.insert("TargetPool", "test-2b");
params.insert("Region", "us-central1");
- params.insert("Instances", "instance1,instance2");
+ params.insert("Instances", "https://www.googleapis.com/compute/v1/projects/rare-daylight-403814/zones/us-east4-c/instances/alpha-123-xyz");
let result = client.attach_node_with_load_balancer(&params).await;
assert!(result.is_ok());
let response = result.unwrap();
- assert!(response.status().is_success());
+ assert_eq!(response["status"], "200".to_string());
}
#[tokio::test]
@@ -84,7 +86,7 @@ async fn test_detach_node_with_load_balancer() {
let client = create_client().await;
let mut params = HashMap::new();
- params.insert("Project", "your_project_id");
+ params.insert("Project", "rare-daylight-403814");
params.insert("TargetPool", "test-lb");
params.insert("Region", "us-central1");
params.insert("Instances", "instance1,instance2");
@@ -93,5 +95,5 @@ async fn test_detach_node_with_load_balancer() {
assert!(result.is_ok());
let response = result.unwrap();
- assert!(response.status().is_success());
+ assert_eq!(response["status"], "200".to_string());
}
diff --git a/rustcloud/src/tests/gcp_notification_operations.rs b/rustcloud/src/tests/gcp_notification_operations.rs
index edf47ed..6bbfc55 100644
--- a/rustcloud/src/tests/gcp_notification_operations.rs
+++ b/rustcloud/src/tests/gcp_notification_operations.rs
@@ -11,9 +11,9 @@ async fn test_list_topic() {
let client = create_client().await;
let mut request = HashMap::new();
- request.insert("Project".to_string(), "your_project_id".to_string());
- request.insert("PageSize".to_string(), "10".to_string());
- request.insert("PageToken".to_string(), "token".to_string());
+ request.insert("Project".to_string(), "rare-daylight-403814".to_string());
+ // request.insert("PageSize".to_string(), "10".to_string());
+ // request.insert("PageToken".to_string(), "");
let result = client.list_topic(request).await;
assert!(result.is_ok());
@@ -24,7 +24,7 @@ async fn test_get_topic() {
let client = create_client().await;
let mut request = HashMap::new();
- request.insert("Project".to_string(), "your_project_id".to_string());
+ request.insert("Project".to_string(), "rare-daylight-403814".to_string());
request.insert("Topic".to_string(), "your_topic_name".to_string());
let result = client.get_topic(request).await;
@@ -36,7 +36,7 @@ async fn test_delete_topic() {
let client = create_client().await;
let mut request = HashMap::new();
- request.insert("Project".to_string(), "your_project_id".to_string());
+ request.insert("Project".to_string(), "rare-daylight-403814".to_string());
request.insert("Topic".to_string(), "your_topic_name".to_string());
let result = client.delete_topic(request).await;
@@ -48,7 +48,7 @@ async fn test_create_topic() {
let client = create_client().await;
let mut request = HashMap::new();
- request.insert("Project".to_string(), "your_project_id".to_string());
+ request.insert("Project".to_string(), "rare-daylight-403814".to_string());
request.insert("Topic".to_string(), "your_topic_name".to_string());
let result = client.create_topic(request).await;
diff --git a/rustcloud/src/tests/gcp_storage_operations.rs b/rustcloud/src/tests/gcp_storage_operations.rs
index 60e8fbf..53e986e 100644
--- a/rustcloud/src/tests/gcp_storage_operations.rs
+++ b/rustcloud/src/tests/gcp_storage_operations.rs
@@ -3,6 +3,7 @@ use serde_json::json;
use std::collections::HashMap;
use tokio::test;
+
async fn create_storage_client() -> GoogleStorage {
GoogleStorage::new()
}
@@ -12,8 +13,8 @@ async fn test_create_disk() {
let client = create_storage_client().await;
let mut params = HashMap::new();
- params.insert("projectid".to_string(), json!("your_project_id"));
- params.insert("Name".to_string(), json!("test-disk"));
+ params.insert("projectid".to_string(), json!("rare-daylight-403814"));
+ params.insert("Name".to_string(), json!("test-disk-1"));
params.insert("Zone".to_string(), json!("us-central1-a"));
params.insert("Type".to_string(), json!("pd-standard"));
params.insert("SizeGb".to_string(), json!(10));
@@ -30,9 +31,9 @@ async fn test_delete_disk() {
let client = create_storage_client().await;
let mut params = HashMap::new();
- params.insert("projectid".to_string(), "your_project_id".to_string());
+ params.insert("projectid".to_string(), "rare-daylight-403814".to_string());
params.insert("Zone".to_string(), "us-central1-a".to_string());
- params.insert("disk".to_string(), "test-disk".to_string());
+ params.insert("disk".to_string(), "test-disk-1".to_string());
let result = client.delete_disk(params).await;
@@ -46,10 +47,10 @@ async fn test_create_snapshot() {
let client = create_storage_client().await;
let mut params = HashMap::new();
- params.insert("projectid".to_string(), json!("your_project_id"));
+ params.insert("projectid".to_string(), json!("rare-daylight-403814"));
params.insert("Name".to_string(), json!("test-snapshot"));
params.insert("Zone".to_string(), json!("us-central1-a"));
- params.insert("disk".to_string(), json!("test-disk"));
+ params.insert("disk".to_string(), json!("test-disk-1"));
let result = client.create_snapshot(params).await;
diff --git a/rustcloud/src/tests/mod.rs b/rustcloud/src/tests/mod.rs
index 871f155..a12760c 100644
--- a/rustcloud/src/tests/mod.rs
+++ b/rustcloud/src/tests/mod.rs
@@ -10,11 +10,11 @@ mod aws_iam_operations;
mod aws_kms_operations;
mod aws_loadbalancer_operations;
mod aws_monitoring_operations;
-mod gcp_automl_operations;
+// mod gcp_automl_operations;
mod gcp_bigtable_operations;
mod gcp_compute_operations;
mod gcp_dns_operations;
-mod gcp_kubernetes_operations;
+mod gcp_container_operations;
mod gcp_loadbalancer_operations;
mod gcp_notification_operations;
mod gcp_storage_operations;