Changed spark-api project name to source{d} engine in documentation #110
Conversation
Codecov Report

|          | master | #110   | +/- |
|----------|--------|--------|-----|
| Coverage | 90.79% | 90.79% |     |
| Files    | 22     | 22     |     |
| Lines    | 619    | 619    |     |
| Branches | 53     | 53     |     |
| Hits     | 562    | 562    |     |
| Misses   | 57     | 57     |     |

Continue to review the full report at Codecov.
README.md
Outdated

```bash
$ spark-shell --packages com.github.src-d:spark-api:master-SNAPSHOT --repositories https://jitpack.io
```

To start using spark-api from the shell you must import everything inside the `tech.sourced.api` package (or, if you prefer, just import `SparkAPI` and `ApiDataFrame` classes):
To start using source{d} engine from the shell you must import everything inside the `tech.sourced.api` package (or, if you prefer, just import `SparkAPI` and `ApiDataFrame` classes):
we'll have to change this as well when we rename it in the code
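For reference, a minimal sketch of the imports that README line refers to, assuming the current `tech.sourced.api` package layout (which will presumably change once the rename lands in the code, as noted above):

```scala
// Inside spark-shell: bring the whole API into scope
import tech.sourced.api._

// or, as the README suggests, import only the two entry-point classes
import tech.sourced.api.{SparkAPI, ApiDataFrame}
```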
_examples/README.md
Outdated

@@ -14,6 +14,8 @@ Here you can find a list of annotated *spark-api* examples:

- [pyspark's shell classifying languages and extracting UASTs](pyspark/pyspark-shell-lang-and-uast.md)

-[pyspark's shell querying UASTs with XPath](pyspark/pyspark-shell-xpath-query.md)
Missing whitespace between `-` and `[` changes the MD formatting.
Finally, the `extract_tokens()` method will generate a column `tokens` based on the previously generated column `result`.

`` ```python ``
Maybe it's better to remove the info string `python` here and in all similar cases, as the code below is not just Python and gets highlighted rather randomly.
What do you think?
Sure, I'll show the code examples just as plain text.
👍

@bzz solved
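For context, here is a hedged, self-contained Scala sketch of the pipeline that the `extract_tokens()` sentence above describes, assembled only from method names that appear in this thread (the siva-files path is illustrative):

```scala
// Assumes a spark-shell session with the engine package available
import tech.sourced.engine._

val engine = Engine(spark, "/path/to/siva-files")

engine.getRepositories.getHEAD.getFiles
  .classifyLanguages                                    // adds a `lang` column
  .where('lang === "Python")
  .extractUASTs                                         // adds a `uast` column
  .queryUAST("//*[@roleIdentifier]", "uast", "result")  // XPath query over `uast`, matches stored in `result`
  .extractTokens("result", "tokens")                    // derives `tokens` from the previously generated `result`
  .select('path, 'lang, 'uast, 'tokens)
  .show
```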
scala> val engine = Engine(spark, "/path/to/siva-files")
engine: tech.sourced.engine.Engine = tech.sourced.engine.Engine@7e18b9e6

scala> api.getRepositories.getHEAD.getFiles.classifyLanguages.where('lang === "Python").extractUASTs.queryUAST("//*[@roleIdentifier]", "uast", "result").extractTokens("result", "tokens").select('path, 'lang, 'uast, 'tokens).show
api -> engine
Great job @mcarmonaa! One thing I have noticed while going through all of https://github.com/mcarmonaa/spark-api/blob/fix/doc-project-name/_examples/scala/ is that it's very annoying not to be able to copy the full commands from the docs.
But if that had been

```scala
import tech.sourced.engine._
val engine = Engine(spark, "/path/to/siva-files")
engine.getRepositories.getHEAD.getFiles.classifyLanguages.where('lang === "Python").extractUASTs.queryUAST("//*[@roleIdentifier]", "uast", "result").extractTokens("result", "tokens").select('path, 'lang, 'uast, 'tokens).show
```

it would have the following advantages:
What do you guys think?
README.md
Outdated

You can launch our docker container which contains some Notebooks examples just running:

docker run --name spark-api-jupyter --rm -it -p 8888:8888 -v $(pwd)/path/to/siva-files:/repositories --link bblfsh:bblfsh srcd/spark-api-jupyter
docker run --name engine-jupyter --rm -it -p 8888:8888 -v $(pwd)/path/to/siva-files:/repositories --link bblfsh:bblfsh srcd/engine-jupyter
This needs to be updated: `bblfsh` is now `bblfshd` since line 71.

#111 has been merged, we can merge this as soon as it's ready.
Python build fails for the latest changes. Also, can you rebase? There were massive changes with the renaming in the code.
…enaming engine module
lgtm, good job!
@@ -26,6 +28,8 @@ Here you can find a list of annotated *spark-api* examples:

- [spark-shell classifying languages and extracting UASTs](scala/spark-shell-lang-and-uast.md)

- [spark-shell querying UASTs with XPath](scala/spark-shell-xpath-query.md)

### jupyter notebooks

- [Basic example](notebooks/Basic%2BExample.ipynb)
When rendered, this link gets broken, i.e. here: https://github.com/mcarmonaa/spark-api/tree/fix/doc-project-name/_examples
Is that expected to work only on docSrv, or shall it point elsewhere?

Looks great, thank you @mcarmonaa! LGTM sans the minor issue from above.
Closes #66

As a first step to renaming, only those references to `spark-api` in documentation that don't break links or commands have been changed to source{d} engine.

Related to Change project name #66