Cache imports protected by a semantic integrity check #533

Merged · 3 commits · Aug 8, 2018

Conversation

Gabriella439
Collaborator

... as standardized in dhall-lang/dhall-lang#208

For example, given this file:

```dhall
http://prelude.dhall-lang.org/package.dhall sha256:2a84d3c5a420e2549f1dfb145fa2a6956481d99f022a77bc31152c29a008dbfa
```

The first time you interpret the file it is retrieved, parsed, and normalized
like normal:

```bash
$ time dist/build/dhall/dhall <<< './test.dhall'
{ `Bool` :
    { and :
…
    }
}

real    0m6.524s
user    0m0.720s
sys     0m0.158s
```

... and the second time you get a significant speedup by hitting the
local cache:

```bash
$ time dist/build/dhall/dhall <<< './test.dhall'
{ `Bool` :
    { and :
…
    }
}

real    0m0.162s
user    0m0.063s
sys     0m0.070s
```

The main contributors to the speedup are:

* Retrieving a single local file is faster than retrieving multiple
  remote files
* Decoding binary is faster than parsing text
* The expression is cached in normal form, so there is no need to
  re-normalize
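
Roughly, the lookup flow could be sketched like this. This is a hypothetical Haskell sketch, not the code in this PR: `decodeExpression`, `encodeExpression`, and `resolveAndNormalize` are placeholder helpers, and it only assumes a content-addressed cache under `~/.cache/dhall` keyed by the expected sha256, with cached bytes re-hashed on read:

```haskell
import qualified Crypto.Hash
import qualified Data.ByteString as ByteString
import           System.Directory
    (XdgDirectory (XdgCache), createDirectoryIfMissing, doesFileExist, getXdgDirectory)
import           System.FilePath ((</>))

-- Hypothetical stand-ins for the real binary codec and import resolution;
-- these are placeholders, not the dhall API.
data Expr = Expr

decodeExpression :: ByteString.ByteString -> Expr
decodeExpression _ = Expr

encodeExpression :: Expr -> ByteString.ByteString
encodeExpression _ = ByteString.empty

resolveAndNormalize :: String -> IO Expr
resolveAndNormalize _ = pure Expr

sha256Hex :: ByteString.ByteString -> String
sha256Hex bytes = show (Crypto.Hash.hashWith Crypto.Hash.SHA256 bytes)

-- Resolve an import protected by an expected semantic hash, preferring the
-- local content-addressed cache and falling back to a full fetch + normalize.
fetchCached :: String -> String -> IO Expr
fetchCached url expectedHash = do
    cacheDirectory <- getXdgDirectory XdgCache "dhall"
    createDirectoryIfMissing True cacheDirectory
    let cacheFile = cacheDirectory </> expectedHash
    cached <- doesFileExist cacheFile
    if cached
        then do
            bytes <- ByteString.readFile cacheFile
            -- Re-hash the cached bytes so a corrupted cache entry is
            -- rejected instead of silently trusted.
            if sha256Hex bytes == expectedHash
                then pure (decodeExpression bytes)
                else refetch cacheFile
        else refetch cacheFile
  where
    refetch cacheFile = do
        expression <- resolveAndNormalize url
        let bytes = encodeExpression expression
        -- Only populate the cache when the result matches the protected hash.
        if sha256Hex bytes == expectedHash
            then do
                ByteString.writeFile cacheFile bytes
                pure expression
            else fail ("Integrity check failed for " ++ url)
```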
@sellout
Collaborator

sellout commented Aug 4, 2018

We’ve really been looking forward to this at work. I’ll try to review it on Monday.
/cc @FintanH

@FintanH
Collaborator

FintanH commented Aug 4, 2018

Much excite!

@Gabriella439
Collaborator Author

Also, the locally cached Prelude is 5 KB:

```bash
$ cat ~/.cache/dhall/2a84d3c5a420e2549f1dfb145fa2a6956481d99f022a77bc31152c29a008dbfa | wc -c
5040
```

... and if space ever became an issue we could eventually compress things further:

```bash
$ cat ~/.cache/dhall/2a84d3c5a420e2549f1dfb145fa2a6956481d99f022a77bc31152c29a008dbfa | gzip | wc -c
1249
```
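
For illustration, a small hypothetical Haskell sketch (using the `zlib` package, not anything in this PR) of how one could measure the savings from compressing a cache entry:

```haskell
import qualified Codec.Compression.GZip as GZip
import qualified Data.ByteString.Lazy as Lazy
import           System.Directory (XdgDirectory (XdgCache), getXdgDirectory)
import           System.FilePath ((</>))

-- Compare the raw and gzip-compressed size of one cache entry.
main :: IO ()
main = do
    cacheDirectory <- getXdgDirectory XdgCache "dhall"
    let entry = cacheDirectory
            </> "2a84d3c5a420e2549f1dfb145fa2a6956481d99f022a77bc31152c29a008dbfa"
    bytes <- Lazy.readFile entry
    putStrLn ("raw:  " ++ show (Lazy.length bytes) ++ " bytes")
    putStrLn ("gzip: " ++ show (Lazy.length (GZip.compress bytes)) ++ " bytes")
```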
