Presigned URLs support, both GET and PUT, closes #54 (#94)
* Bring presigned URLs back

* More docs, presigned_put
durch committed Jun 22, 2020
1 parent 40c0133 commit 91ad14d
Showing 8 changed files with 278 additions and 87 deletions.
56 changes: 44 additions & 12 deletions README.md
@@ -9,11 +9,22 @@
Rust library for working with Amazon S3 or arbitrary S3 compatible APIs, fully compatible with **async/await** and `futures ^0.3`

### Intro

Modest interface towards Amazon S3, as well as S3 compatible object storage APIs such as Wasabi, Yandex or Minio.
Supports `put`, `get`, `list`, `delete`, operations on `tags` and `location`.

Additionally, a dedicated `presign_get` `Bucket` method is available. This means you can upload to S3 and hand a time-limited link to select people without having to worry about publicly accessible files on S3. You can also give people a `PUT` presigned URL, which lets them upload to a specific key in S3 for the duration of the presigned URL (see the sketch after the Presign table below).

**[AWS, Yandex and Custom (Minio) Example](https://github.com/durch/rust-s3/blob/master/s3/bin/simple_crud.rs)**

#### Presign

| | |
| ----- | ---------------------------------------------------------------------------------------------- |
| `PUT` | [presign_put](https://durch.github.io/rust-s3/s3/bucket/struct.Bucket.html#method.presign_put) |
| `GET` | [presign_get](https://durch.github.io/rust-s3/s3/bucket/struct.Bucket.html#method.presign_get) |
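
As a quick illustration (a sketch, not part of this diff), generating both kinds of presigned URL looks like this; the bucket name, region and credential lookup below are placeholders mirroring the doc examples added in `bucket.rs`:

```rust
use s3::bucket::Bucket;
use awscreds::Credentials;

fn main() {
    // Placeholder bucket name and region; substitute your own.
    let bucket_name = "rust-s3-test";
    let region = "us-east-1".parse().unwrap();
    let credentials = Credentials::default_blocking().unwrap();
    let bucket = Bucket::new(bucket_name, region, credentials).unwrap();

    // Anyone holding this URL can download the object for one day (86400 s).
    let get_url = bucket.presign_get("/test.file", 86400).unwrap();
    // Anyone holding this URL can upload to this exact key for one hour (3600 s).
    let put_url = bucket.presign_put("/test.file", 3600).unwrap();

    println!("GET: {}\nPUT: {}", get_url, put_url);
}
```

Expiry is capped at one week (604,800 seconds), enforced by the new `validate_expiry` check in `bucket.rs`.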

#### GET

There are a few different options for getting an object. `async` and `sync` methods are generic over `std::io::Write`,
@@ -29,7 +40,7 @@ while `tokio` methods are generic over `tokio::io::AsyncWriteExt`.
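
For illustration, a blocking in-memory download might look like the sketch below; it assumes a `Bucket` configured as in the presign sketch above and that `get_object_blocking` returns the body bytes together with the HTTP status code:

```rust
use s3::bucket::Bucket;
use s3::S3Error;

// `bucket` is assumed to be configured as in the presign sketch above.
fn get_example(bucket: &Bucket) -> Result<(), S3Error> {
    // Fetch the whole object into memory; `code` is the HTTP status code.
    let (data, code) = bucket.get_object_blocking("test_file")?;
    assert_eq!(200, code);
    println!("retrieved {} bytes", data.len());
    Ok(())
}
```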

#### PUT

Each `GET` method has a `PUT` companion; `sync` and `async` methods are generic over `std::io::Read`,
while `tokio` methods are generic over `tokio::io::AsyncReadExt`.

| | |
@@ -40,39 +51,60 @@ while `tokio` methods are generic over `tokio::io::AsyncReadExt`.
| `sync` | [put_object_stream_blocking](https://durch.github.io/rust-s3/s3/bucket/struct.Bucket.html#method.put_object_stream_blocking) |
| `tokio` | [tokio_put_object_stream](https://durch.github.io/rust-s3/s3/bucket/struct.Bucket.html#method.tokio_put_object_stream) |
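
A matching blocking upload sketch, using the `put_object_blocking` call that also appears in `simple_crud.rs` further down this diff (the key and payload are placeholders):

```rust
use s3::bucket::Bucket;
use s3::S3Error;

// `bucket` is assumed to be configured as in the presign sketch above.
fn put_example(bucket: &Bucket) -> Result<(), S3Error> {
    // Upload a small payload with an explicit content type.
    let (_, code) = bucket.put_object_blocking("test_file", b"hello from rust-s3", "text/plain")?;
    assert_eq!(200, code);
    Ok(())
}
```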

### What else is cool? -> Broken and tracked at [#54](https://github.com/durch/rust-s3/issues/54)
#### List

| | |
| ------- | ---------------------------------------------------------------------------------------------------------- |
| `async` | [list](https://durch.github.io/rust-s3/s3/bucket/struct.Bucket.html#method.list) |
| `sync` | [list_blocking](https://durch.github.io/rust-s3/s3/bucket/struct.Bucket.html#method.list_blocking) |
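
A listing sketch; the `list_blocking` signature assumed here (a prefix plus optional delimiter, returning one result page per element) may differ slightly between versions:

```rust
use s3::bucket::Bucket;
use s3::S3Error;

// `bucket` is assumed to be configured as in the presign sketch above.
fn list_example(bucket: &Bucket) -> Result<(), S3Error> {
    // List everything under the bucket root; each element is one result page.
    for (page, code) in bucket.list_blocking("".to_string(), None)? {
        assert_eq!(200, code);
        for object in page.contents {
            println!("{}", object.key);
        }
    }
    Ok(())
}
```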

The main cool feature is that `put` commands return a presigned link to the file you uploaded.
This means you can upload to S3 and give the link to select people without having to worry about publicly accessible files on S3.
#### DELETE

### Configuration
| | |
| ------- | -------------------------------------------------------------------------------------------------------------------- |
| `async` | [delete_object](https://durch.github.io/rust-s3/s3/bucket/struct.Bucket.html#method.delete_object) |
| `sync` | [delete_object_blocking](https://durch.github.io/rust-s3/s3/bucket/struct.Bucket.html#method.delete_object_blocking) |
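
And a delete sketch, assuming `delete_object_blocking` returns the HTTP status code (204 on success) alongside the response body:

```rust
use s3::bucket::Bucket;
use s3::S3Error;

// `bucket` is assumed to be configured as in the presign sketch above.
fn delete_example(bucket: &Bucket) -> Result<(), S3Error> {
    // S3 answers a successful delete with 204 No Content.
    let (_, code) = bucket.delete_object_blocking("test_file")?;
    assert_eq!(204, code);
    Ok(())
}
```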

Getter and setter functions exist for all `Link` params... You don't really have to touch anything there, except maybe `amz-expire`;
it is configured for one week, which is the maximum Amazon allows at the moment.
#### Location

| | |
| ------- | ---------------------------------------------------------------------------------------------------------- |
| `async` | [location](https://durch.github.io/rust-s3/s3/bucket/struct.Bucket.html#method.location) |
| `sync` | [location_blocking](https://durch.github.io/rust-s3/s3/bucket/struct.Bucket.html#method.location_blocking) |

#### Tagging

| | |
| ------- | ------------------------------------------------------------------------------------------------------------------------------ |
| `async` | [put_object_tagging](https://durch.github.io/rust-s3/s3/bucket/struct.Bucket.html#method.put_object_tagging) |
| `sync` | [put_object_tagging_blocking](https://durch.github.io/rust-s3/s3/bucket/struct.Bucket.html#method.put_object_tagging_blocking) |
| `async` | [get_object_tagging](https://durch.github.io/rust-s3/s3/bucket/struct.Bucket.html#method.get_object_tagging) |
| `sync` | [get_object_tagging_blocking](https://durch.github.io/rust-s3/s3/bucket/struct.Bucket.html#method.get_object_tagging_blocking) |

### Usage (in `Cargo.toml`)

```toml
[dependencies]
rust-s3 = "0.22.3"
rust-s3 = "0.22.8"
```

#### Disable SSL verification for endpoints, useful for custom regions

```toml
[dependencies]
rust-s3 = {version = "0.22.3", features = ["no-verify-ssl"]}
rust-s3 = {version = "0.22.8", features = ["no-verify-ssl"]}
```

#### Fail on HTTP error responses

```toml
[dependencies]
rust-s3 = {version = "0.22.3", features = ["fail-on-err"]}
rust-s3 = {version = "0.22.8", features = ["fail-on-err"]}
```

#### Use path style addressing, needed for Minio compatibility

```toml
[dependencies]
rust-s3 = {version = "0.22.3", features = ["path-style"]}
rust-s3 = {version = "0.22.8", features = ["path-style"]}
```
2 changes: 1 addition & 1 deletion s3/Cargo.toml
@@ -1,6 +1,6 @@
[package]
name = "rust-s3"
version = "0.22.7"
version = "0.22.8"
authors = ["Drazen Urch", "Nick Stevens"]
description = "Tiny Rust library for working with Amazon S3 and compatible object storage APIs"
repository = "https://github.com/durch/rust-s3"
1 change: 1 addition & 0 deletions s3/bin/simple_crud.rs
@@ -76,6 +76,7 @@ pub fn main() -> Result<(), S3Error> {
// Put a "test_file" with the contents of MESSAGE at the root of the
// bucket.
let (_, code) = bucket.put_object_blocking("test_file", MESSAGE.as_bytes(), "text/plain")?;
// println!("{}", bucket.presign_get("test_file", 604801)?);
assert_eq!(200, code);

// Get the "test_file" contents and make sure that the returned message
57 changes: 53 additions & 4 deletions s3/src/bucket.rs
@@ -37,7 +37,56 @@ pub struct Bucket {
pub extra_query: Query,
}

fn validate_expiry(expiry_secs: u32) -> Result<()> {
if 604800 < expiry_secs {
return Err(S3Error::from(format!("Max expiration for presigned URLs is one week, or 604,800 seconds, got {} instead", expiry_secs).as_ref()));
}
Ok(())
}

impl Bucket {
/// Get a presigned URL for downloading an object at the given path
///
/// # Example:
///
/// ```rust,no_run
/// use s3::bucket::Bucket;
/// use awscreds::Credentials;
///
/// let bucket_name = "rust-s3-test";
/// let region = "us-east-1".parse().unwrap();
/// let credentials = Credentials::default_blocking().unwrap();
/// let bucket = Bucket::new(bucket_name, region, credentials).unwrap();
///
/// let url = bucket.presign_get("/test.file", 86400).unwrap();
/// println!("Presigned url: {}", url);
/// ```
pub fn presign_get<S: AsRef<str>>(&self, path: S, expiry_secs: u32) -> Result<String> {
validate_expiry(expiry_secs)?;
let request = Request::new(self, path.as_ref(), Command::PresignGet { expiry_secs });
Ok(request.presigned()?)
}
/// Get a presigned URL for uploading an object to the given path
///
/// # Example:
///
/// ```rust,no_run
/// use s3::bucket::Bucket;
/// use awscreds::Credentials;
///
/// let bucket_name = "rust-s3-test";
/// let region = "us-east-1".parse().unwrap();
/// let credentials = Credentials::default_blocking().unwrap();
/// let bucket = Bucket::new(bucket_name, region, credentials).unwrap();
///
/// let url = bucket.presign_put("/test.file", 86400).unwrap();
/// println!("Presigned url: {}", url);
/// ```
pub fn presign_put<S: AsRef<str>>(&self, path: S, expiry_secs: u32) -> Result<String> {
validate_expiry(expiry_secs)?;
let request = Request::new(self, path.as_ref(), Command::PresignPut { expiry_secs });
Ok(request.presigned()?)
}
/// Instantiate a new `Bucket`.
///
/// # Example
@@ -183,7 +232,7 @@ impl Bucket {
Ok(request.response_data_to_writer_future(writer).await?)
}

/// Stream file from S3 path to a local file, generic over T: Write, async.
/// Stream file from S3 path to a local file, generic over T: Write, async.
///
/// # Example:
///
@@ -815,7 +864,7 @@ impl Bucket {
loop {
results.push(result.clone());
if !result.0.is_truncated {
break
break;
}
match result.0.next_continuation_token {
Some(token) => {
@@ -907,7 +956,7 @@ impl Bucket {
/// Get a reference to the AWS access key.
pub fn access_key(&self) -> Option<String> {
if let Some(access_key) = self.credentials.access_key.clone() {
Some(access_key.replace('\n',""))
Some(access_key.replace('\n', ""))
} else {
None
}
Expand All @@ -916,7 +965,7 @@ impl Bucket {
/// Get a reference to the AWS secret key.
pub fn secret_key(&self) -> Option<String> {
if let Some(secret_key) = self.credentials.secret_key.clone() {
Some(secret_key.replace('\n',""))
Some(secret_key.replace('\n', ""))
} else {
None
}
14 changes: 10 additions & 4 deletions s3/src/command.rs
@@ -1,6 +1,6 @@
use reqwest::Method;

#[derive(Clone)]
#[derive(Clone, Debug)]
pub enum Command<'a> {
DeleteObject,
DeleteObjectTagging,
@@ -19,14 +19,20 @@ pub enum Command<'a> {
delimiter: Option<String>,
continuation_token: Option<String>
},
GetBucketLocation
GetBucketLocation,
PresignGet {
expiry_secs: u32
},
PresignPut {
expiry_secs: u32
}
}

impl<'a> Command<'a> {
pub fn http_verb(&self) -> Method {
match *self {
Command::GetObject | Command::ListBucket { .. } | Command::GetBucketLocation | Command::GetObjectTagging => Method::GET,
Command::PutObject { .. } | Command::PutObjectTagging { .. } => Method::PUT,
Command::GetObject | Command::ListBucket { .. } | Command::GetBucketLocation | Command::GetObjectTagging | Command::PresignGet { .. } => Method::GET,
Command::PutObject { .. } | Command::PutObjectTagging { .. } | Command::PresignPut { .. } => Method::PUT,
Command::DeleteObject | Command::DeleteObjectTagging => Method::DELETE,
}
}
1 change: 1 addition & 0 deletions s3/src/lib.rs
@@ -27,6 +27,7 @@ simpl::err!(S3Error, {
Io@std::io::Error;
Region@awsregion::AwsRegionError;
Creds@awscreds::AwsCredsError;
UrlParse@url::ParseError;
});

const LONG_DATE: &str = "%Y%m%dT%H%M%SZ";
102 changes: 82 additions & 20 deletions s3/src/request.rs
@@ -62,6 +62,16 @@ impl<'a> Request<'a> {
}
}

pub fn presigned(&self) -> Result<String> {
let expiry = match self.command {
Command::PresignGet { expiry_secs } => expiry_secs,
Command::PresignPut { expiry_secs } => expiry_secs,
_ => unreachable!()
};
let authorization = self.presigned_authorization()?;
Ok(format!("{}&X-Amz-Signature={}", self.presigned_url_no_sig(expiry)?, authorization))
}

fn url(&self) -> Url {
let mut url_str = if cfg!(feature = "path-style") {
format!(
@@ -159,6 +169,33 @@ impl<'a> Request<'a> {
)
}

fn presigned_url_no_sig(&self, expiry: u32) -> Result<Url> {
Ok(Url::parse(&format!(
"{}{}",
self.url(),
signing::authorization_query_params_no_sig(
&self.bucket.access_key().unwrap(),
&self.datetime,
&self.bucket.region(),
expiry
)
))?)
}

fn presigned_canonical_request(&self, headers: &HeaderMap) -> Result<String> {
let expiry = match self.command {
Command::PresignGet { expiry_secs } => expiry_secs,
Command::PresignPut { expiry_secs } => expiry_secs,
_ => unreachable!()
};
let canonical_request = signing::canonical_request(
self.command.http_verb().as_str(),
&self.presigned_url_no_sig(expiry)?,
headers,
"UNSIGNED-PAYLOAD",
);
Ok(canonical_request)
}

fn string_to_sign(&self, request: &str) -> String {
signing::string_to_sign(&self.datetime, &self.bucket.region(), request)
}
@@ -175,6 +212,21 @@ impl<'a> Request<'a> {
)?)
}

fn presigned_authorization(&self) -> Result<String> {
let mut headers = HeaderMap::new();
headers.insert(
header::HOST,
HeaderValue::from_str(&self.bucket.self_host()).unwrap(),
);
let canonical_request = self.presigned_canonical_request(&headers)?;
let string_to_sign = self.string_to_sign(&canonical_request);
let mut hmac = signing::HmacSha256::new_varkey(&self.signing_key()?)?;
hmac.input(string_to_sign.as_bytes());
let signature = hex::encode(hmac.result().code());
// let signed_header = signing::signed_header_string(&headers);
Ok(signature)
}

fn authorization(&self, headers: &HeaderMap) -> Result<String> {
let canonical_request = self.canonical_request(headers);
let string_to_sign = self.string_to_sign(&canonical_request);
@@ -364,16 +416,21 @@ impl<'a> Request<'a> {
// This must be last, as it signs the other headers, omitted if no secret key is provided
if self.bucket.secret_key().is_some() {
let authorization = self.authorization(&headers)?;
headers.insert(header::AUTHORIZATION, match authorization.parse() {
Ok(authorization) => authorization,
Err(_) => return Err(S3Error::from(
format!(
"Could not parse AUTHORIZATION header value {}",
authorization
)
.as_ref(),
))
});
headers.insert(
header::AUTHORIZATION,
match authorization.parse() {
Ok(authorization) => authorization,
Err(_) => {
return Err(S3Error::from(
format!(
"Could not parse AUTHORIZATION header value {}",
authorization
)
.as_ref(),
))
}
},
);
}

// The format of RFC2822 is somewhat malleable, so including it in
@@ -382,16 +439,21 @@ impl<'a> Request<'a> {
// range and can't be used again e.g. replay attacks. Adding this header
// after the generation of the Authorization header leaves it out of
// the signed headers.
headers.insert(header::DATE, match self.datetime.to_rfc2822().parse() {
Ok(date) => date,
Err(_) => return Err(S3Error::from(
format!(
"Could not parse DATE header value {}",
self.datetime.to_rfc2822()
)
.as_ref(),
))
});
headers.insert(
header::DATE,
match self.datetime.to_rfc2822().parse() {
Ok(date) => date,
Err(_) => {
return Err(S3Error::from(
format!(
"Could not parse DATE header value {}",
self.datetime.to_rfc2822()
)
.as_ref(),
))
}
},
);

Ok(headers)
}