update doc for the issue #12, #13
wjo1212 committed Jan 25, 2018
1 parent 97eaec8 commit 5d2dc91
Showing 3 changed files with 36 additions and 21 deletions.
31 changes: 19 additions & 12 deletions README.md
@@ -262,28 +262,28 @@ Examples:
which outputs:

```json
- {
-     "count": 3,
-     "logstores": ["logstore3", "logstore1", "logstore2"],
-     "total": 3
- }
+ [ {"__source__": "ip1", "key": "log1"}, {"__source__": "ip2", "key": "log2"} ]
```

- You could use the below `--jmes-filter` to filter it:
+ You could use the below `--jmes-filter` to break the logs into separate lines:

```shell
- > aliyun log get_logs ... --jmes-filter="logstores[2:]"
+ > aliyun log get_logs ... --jmes-filter="join('
+ ', map(&to_string(@), @))"
```

- Then you will get the name list of the second logstore and the ones after it, as below:
+ **Note** that a string containing a newline is passed to `--jmes-filter`.

Output:

```shell
- ["logstore1", "logstore2"]
+ {"__source__": "ip1", "key": "log1"}
+ {"__source__": "ip2", "key": "log2"}
```
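The `join`/`to_string` filter above can be sketched in plain Python — a minimal sketch of what the JMESPath expression computes over the JSON array, not the CLI itself:

```python
import json

# Sample logs as returned inside the JSON array
logs = [{"__source__": "ip1", "key": "log1"},
        {"__source__": "ip2", "key": "log2"}]

# join('\n', map(&to_string(@), @)) serializes each element
# to a JSON string and joins them with newlines
result = "\n".join(json.dumps(entry) for entry in logs)
print(result)
# {"__source__": "ip1", "key": "log1"}
# {"__source__": "ip2", "key": "log2"}
```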

### Further Process
- You may want to process the output using your own command. For example, you may want to break the logs into separate lines;
- you could append the command with a `|` on Linux/Unix:
+ You could use `>>` to store the output to a file, or you may want to process the output with your own command.
+ For example, here is another way to break the logs into separate lines: append a `|` to the command on Linux/Unix:

```shell
| python2 -c "from __future__ import print_function;import json;map(lambda x: print(json.dumps(x).encode('utf8')), json.loads(raw_input()));"
@@ -297,6 +297,7 @@ aliyun log get_log .... | python2 -c "from __future__ import print_function;import json;map(lambda x: print(json.dumps(x).encode('utf8')), json.loads(raw_input()));"
```
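The one-liner above targets Python 2 (`raw_input`, eager `map`). A Python 3 equivalent of the same idea — a sketch, with `split_logs` as a hypothetical helper name, reading the JSON array and emitting one log per line:

```python
import json

def split_logs(raw):
    """Return one JSON string per element of a JSON array."""
    return [json.dumps(entry) for entry in json.loads(raw)]

raw = '[{"__source__": "ip1", "key": "log1"}, {"__source__": "ip2", "key": "log2"}]'
for line in split_logs(raw):
    print(line)
```

In a pipe you would read `raw` from `sys.stdin` instead of a literal.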



## Command Reference

### Command Specification
@@ -583,7 +584,7 @@ All the commands support below optional global options:
```

- get_logs
  - Format of parameter:

```json
{
@@ -598,9 +599,15 @@ All the commands support below optional global options:
"reverse": "true"
}
```
  - It will fetch all data when `line` is passed as -1. However, if you have a large volume of data (exceeding 1GB), it is better to use `get_log_all`.

- get_log_all
  - This API is similar to `get_logs`, but it fetches data iteratively and outputs it in chunks. It is intended for fetching large volumes of data.

- get_histograms
- pull_logs
- pull_log
  - This API is similar to `pull_logs`, but it accepts readable parameters and fetches data iteratively, outputting it in chunks. It is intended for fetching large volumes of data.
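The iterative, chunked fetching that `get_log_all` and `pull_log` describe follows a common pagination pattern. A sketch of that pattern, with `fetch_page` as a hypothetical stand-in for one service call (not the actual SDK API):

```python
def fetch_all(fetch_page, page_size=100):
    """Yield items chunk by chunk until a page comes back short.

    fetch_page(offset, size) is a hypothetical stand-in for one
    call to the log service.
    """
    offset = 0
    while True:
        page = fetch_page(offset, page_size)
        yield from page
        if len(page) < page_size:
            break
        offset += page_size

# Toy data source standing in for the log service
data = list(range(250))
chunks = list(fetch_all(lambda off, size: data[off:off + size]))
assert chunks == data
```

Because results are yielded page by page, memory use stays bounded even for very large result sets.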

<h3 id="10-shipper-management">10. Shipper management</h3>
- create_shipper
24 changes: 15 additions & 9 deletions README_CN.md
@@ -262,28 +262,28 @@ region-endpoint=cn-hangzhou.log.aliyuncs.com
The output of the above command is:

```json
- {
-     "count": 3,
-     "logstores": ["logstore3", "logstore1", "logstore2"],
-     "total": 3
- }
+ [ {"__source__": "ip1", "key": "log1"}, {"__source__": "ip2", "key": "log2"} ]
```

- You can get the names of the second and later logstores with the following command:
+ The following command breaks the logs into separate lines:

```shell
- > aliyun log get_logs ... --jmes-filter="logstores[2:]"
+ > aliyun log get_logs ... --jmes-filter="join('
+ ', map(&to_string(@), @))"
```

**Note** that a string containing a newline is passed to `--jmes-filter`.

Output:

```shell
- ["logstore1", "logstore2"]
+ {"__source__": "ip1", "key": "log1"}
+ {"__source__": "ip2", "key": "log2"}
```


<h2 id="进一步处理">Further Processing</h2>
- In some cases you may need to process the output with other commands, for example to print JSON-formatted logs line by line. On Linux/Unix, you can append a `|` to the command for further processing.
+ You can use `>>` to store the output to a file. In some cases you may need to process the output with other commands; for example, here is another way to print JSON-formatted logs line by line. On Linux/Unix, you can append a `|` to the command for further processing.

```shell
| python2 -c "from __future__ import print_function;import json;map(lambda x: print(json.dumps(x).encode('utf8')), json.loads(raw_input()));"
@@ -598,9 +598,15 @@ def create_logstore(self, project_name, logstore_name, ttl=2, shard_count=30):
"reverse": "true"
}
```
  - When the parameter `line` is passed as -1, it fetches everything. However, when the volume of data is large (exceeding 1GB), it is better to use `get_log_all`.

- get_log_all
-`get_logs`一样, 但是迭代式获取数据并输出, 适合大容量的数据获取.

- get_histograms
- pull_logs
- pull_log
-`pull_logs`类似, 但是迭代式获取数据并输出, 适合大容量的数据获取.

<h3 id="10-投递管理">10. Shipper management</h3>

2 changes: 2 additions & 0 deletions doc/source/api.rst
@@ -115,8 +115,10 @@ Logs
.. autosummary::
put_logs
pull_logs
pull_log
get_log
get_logs
get_log_all
get_histograms
get_project_logs

