Merge pull request #725 from jmbrunskill/master
Updated scraper example to use scraper repo
JoeyZwicker committed Aug 17, 2016
2 parents 51ab58e + bb63c6e commit cde5790
Showing 1 changed file (`examples/scraper/README.md`) with 3 additions and 2 deletions.
````diff
@@ -173,8 +173,8 @@ name where it stores its output results. In our example, the pipeline was named
 There are a couple of different ways to retrieve the output. We can read a single output file from the “scraper” `repo` in the same fashion that we read the input data:
 
 ```shell
-$ pachctl list-file urls 09a7eb68995c43979cba2b0d29432073 urls
-$ pachctl get-file urls 09a7eb68995c43979cba2b0d29432073 urls/www.imgur.com/index.html
+$ pachctl list-file scraper 09a7eb68995c43979cba2b0d29432073 urls
+$ pachctl get-file scraper 09a7eb68995c43979cba2b0d29432073 urls/www.imgur.com/index.html
 ```
 
 Using `get-file` is good if you know exactly what file you’re looking for, but for this example we want to just see all the scraped pages. One great way to do this is to mount the distributed file system locally and then just poke around.
@@ -198,6 +198,7 @@ other local filesystem. Try:
 ```shell
 $ ls ~/pfs
 urls
+scraper
 ```
 You should see the urls repo that we created.
````
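For context: a Pachyderm pipeline writes its results to an output repo that shares the pipeline's name, which is why the corrected commands read from `scraper` rather than from the input repo `urls`. A sketch of the full retrieval flow under the era's `pachctl` CLI follows; the commit ID is the one used in the example README, and your own run will produce a different ID, so treat these invocations as illustrative rather than copy-paste ready.

```shell
# List the scraped pages in the pipeline's output repo (named after the
# pipeline, "scraper"), then fetch a single page. Substitute the commit
# ID from your own run for the example ID below.
pachctl list-file scraper 09a7eb68995c43979cba2b0d29432073 urls
pachctl get-file scraper 09a7eb68995c43979cba2b0d29432073 urls/www.imgur.com/index.html > index.html

# Alternatively, mount pfs locally (as the README suggests) and browse
# both repos as ordinary directories.
mkdir -p ~/pfs
pachctl mount ~/pfs &
ls ~/pfs   # should now show both the "urls" and "scraper" repos
```

This also explains the second hunk of the diff: once the `scraper` pipeline has run, `ls ~/pfs` shows two repos (the input `urls` and the output `scraper`), not one.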
