Various improvements: if parsed URL has no path, display netloc.

Fixed bug in how `binding_info` is set when paging through request history.
kylebebak committed Oct 14, 2017
1 parent 7068380 commit 1aa1000
Showing 9 changed files with 39 additions and 9 deletions.
9 changes: 9 additions & 0 deletions README.md
@@ -76,6 +76,7 @@ If you're looking for an HTTP client you should try Requester __even if you've n
+ [Export Tests to Runnable Script](#export-tests-to-runnable-script)
- [Benchmarking Tool](#benchmarking-tool)
- [Export/Import with cURL, HTTPie](#exportimport-with-curl-httpie)
+ [Debugging/Exploring Network Activity](#debuggingexploring-network-activity)
- [Special Keyword Arguments](#special-keyword-arguments)
- [Commands](#commands)
- [Response Tab Commands](#response-tab-commands)
@@ -658,6 +659,14 @@ Prefer [HTTPie](https://httpie.org/) instead of cURL? You can also export reques
Exporting works seamlessly with env vars. Just highlight a group of requests and look for __Requester: Export To cURL__ or __Requester: Export To HTTPie__ in the command palette. For importing it's __Requester: Import From cURL__. Exporting to HTTPie supports a bunch of features, including basic and digest authentication, file downloads, and even sessions. For sessions, just highlight your env block along with the requests you want to export.


### Debugging/Exploring Network Activity
cURL is the lingua franca of HTTP clients. Want to explore requests made by some site you're trying to scrape or reverse-engineer?

Open your browser's developer tools, go to the network tab, filter on `XHR` if you want, and refresh the page. Find the request you're looking for and copy it as cURL. Bring it into Sublime, run __Requester: Import From cURL__, and run your request.

In less than a minute you can pull requests, credentials and all, out of your browser and into your laboratory.


## Special Keyword Arguments
Requester's syntax is basically identical to Requests' syntax, but it adds support for the following special `kwargs`.

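To make the workflow in the new section concrete: a request copied from the browser's network tab is a shell one-liner like `curl 'https://api.example.com/data?page=1' -H 'Authorization: Bearer abc123'` (URL and token here are hypothetical). Running __Requester: Import From cURL__ on it yields an editable request in Requests syntax, roughly along these lines:

```python
# Sketch of the imported request (exact formatting may differ);
# the URL and Authorization header are made-up values for illustration.
requests.get(
    'https://api.example.com/data?page=1',
    headers={'Authorization': 'Bearer abc123'},
)
```

From there you can tweak params or headers and rerun the request without leaving Sublime.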
2 changes: 1 addition & 1 deletion commands/graphql.py
@@ -175,7 +175,7 @@ def get_completions(gql, idx, schema):
 except ImportError:
     raise Exception('Install graphql-py with pip for GraphQL autocomplete')
 
-try:
+try:  # monkey-patch this class, the `t_NULL` method breaks parsing
     delattr(GraphQLLexer, 't_NULL')
 except AttributeError:
     pass
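The guarded `delattr` in the hunk above is a standard defensive monkey-patching pattern: deleting the attribute mutates the class for the whole process, so running the patch a second time (e.g. after a plugin reload) would raise `AttributeError` if it weren't caught. A minimal standalone sketch of the same pattern, using a stand-in class rather than graphql-py's `GraphQLLexer`:

```python
# Stand-in for GraphQLLexer; in the commit, its t_NULL token rule is
# what breaks parsing and gets removed.
class Lexer:
    def t_NULL(self, t):
        return t

# Remove the offending method once; swallow the error if it's already
# gone, so the patch is safe to run repeatedly (e.g. on module reload).
try:
    delattr(Lexer, 't_NULL')
except AttributeError:
    pass

assert not hasattr(Lexer, 't_NULL')
```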
2 changes: 1 addition & 1 deletion commands/import_export.py
@@ -152,7 +152,7 @@ def run(self, edit):
 view = self.view.window().new_file()
 view.run_command('requester_replace_view_text',
                  {'text': header + '\n\n\n' + '\n\n\n'.join(requests) + '\n', 'point': 0})
-view.set_syntax_file('Packages/Python/Python.sublime-syntax')
+view.set_syntax_file('Packages/Requester/syntax/requester-source.sublime-syntax')
 view.set_name('requests')
 view.set_scratch(True)

2 changes: 1 addition & 1 deletion commands/other.py
@@ -56,7 +56,7 @@ def run(self, online=False):
     webbrowser.open_new_tab('http://requester.org')
     return
 show_read_only_doc_view(self.window.new_file(),
-                        sublime.load_resource('Packages/Requester/README.md'),
+                        sublime.load_resource('Packages/Requester/docs/_content/body.md'),
                         'Requester Documentation')


13 changes: 8 additions & 5 deletions commands/request.py
@@ -21,7 +21,7 @@
 platform = sublime.platform()
 
 
-def response_tab_bindings(can_save=False):
+def response_tab_bindings():
     """Returns string with special key bindings for response tab commands.
     """
     replay = '[cmd+r]' if platform == 'osx' else '[ctrl+r]'
@@ -122,9 +122,12 @@ def set_response_view_name(view, res=None):
 config = sublime.load_settings('Requester.sublime-settings')
 max_len = int(config.get('response_tab_name_length', 32))
 try:  # short but descriptive, to facilitate navigation between response tabs, e.g. using Goto Anything
-    path = parse.urlparse(res.url).path
+    parsed = parse.urlparse(res.url)
+    path = parsed.path
     if path and path[-1] == '/':
         path = path[:-1]
+    if not path:
+        path = parsed.netloc
     name = '{}: {}'.format(res.request.method, path)
 except:
     name = view.settings().get('requester.name')
@@ -535,14 +538,16 @@ def run(self):
 binding_info = view.settings().get('requester.binding_info', None)
 if binding_info is None:
     return
+file, old_request = binding_info
+if not file or not old_request:
+    return
 
 try:
     request = parse_requests(view.substr(sublime.Region(0, view.size())), n=1)[0]
 except Exception as e:
     sublime.error_message('Save Error: there are no valid requests in your response view: {}'.format(e))
 if request.startswith('requests.'):
     request = request[len('requests.'):]
-file, old_request = binding_info
 
 if not os.path.isfile(file):
     sublime.error_message('Save Error: requester file\n"{}"\nno longer exists'.format(file))
@@ -587,8 +592,6 @@ def set_save_info_on_view(view, request):
"""Set file name and request string on view.
"""
file = view.settings().get('requester.file', None)
if file is None:
return
if request.startswith('requests.'):
request = request[len('requests.'):]
view.settings().set('requester.binding_info', (file, request))
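The naming change in `set_response_view_name` leans on how `urllib.parse.urlparse` splits a URL: for a bare domain, `path` is empty and the host lives in `netloc`. A quick standalone check (example URLs are hypothetical):

```python
from urllib.parse import urlparse

# Bare domain: path is '', so the new fallback names the tab after netloc.
parsed = urlparse('https://example.com')
print(repr(parsed.path), repr(parsed.netloc))  # '' 'example.com'

# URL with a trailing slash: the hunk strips it before building the name.
parsed = urlparse('https://example.com/api/items/')
path = parsed.path
if path and path[-1] == '/':
    path = path[:-1]
print(path or parsed.netloc)  # '/api/items'
```

This is why a tab for `requests.get('https://example.com')` is now named `GET: example.com` instead of `GET: `.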
8 changes: 8 additions & 0 deletions docs/_content/body.md
@@ -571,6 +571,14 @@ Prefer [HTTPie](https://httpie.org/) instead of cURL? You can also export reques
Exporting works seamlessly with env vars. Just highlight a group of requests and look for __Requester: Export To cURL__ or __Requester: Export To HTTPie__ in the command palette. For importing it's __Requester: Import From cURL__. Exporting to HTTPie supports a bunch of features, including basic and digest authentication, file downloads, and even sessions. For sessions, just highlight your env block along with the requests you want to export.


### Debugging/Exploring Network Activity
cURL is the lingua franca of HTTP clients. Want to explore requests made by some site you're trying to scrape or reverse-engineer?

Open your browser's developer tools, go to the network tab, filter on `XHR` if you want, and refresh the page. Find the request you're looking for and copy it as cURL. Bring it into Sublime, run __Requester: Import From cURL__, and run your request.

In less than a minute you can pull requests, credentials and all, out of your browser and into your laboratory.


## Special Keyword Arguments
Requester's syntax is basically identical to Requests' syntax, but it adds support for the following special `kwargs`.

1 change: 1 addition & 0 deletions docs/_content/toc.md
@@ -36,6 +36,7 @@
+ [Export Tests to Runnable Script](#export-tests-to-runnable-script)
- [Benchmarking Tool](#benchmarking-tool)
- [Export/Import with cURL, HTTPie](#exportimport-with-curl-httpie)
+ [Debugging/Exploring Network Activity](#debuggingexploring-network-activity)
- [Special Keyword Arguments](#special-keyword-arguments)
- [Commands](#commands)
- [Response Tab Commands](#response-tab-commands)
3 changes: 2 additions & 1 deletion messages.json
@@ -27,5 +27,6 @@
"2.18.0": "messages/2.18.0.txt",
"2.19.0": "messages/2.19.0.txt",
"2.19.1": "messages/2.19.1.txt",
"2.19.2": "messages/2.19.2.txt"
"2.19.2": "messages/2.19.2.txt",
"2.20.0": "messages/2.20.0.txt"
}
8 changes: 8 additions & 0 deletions messages/2.20.0.txt
@@ -0,0 +1,8 @@
# New Features
Response tab name: if the parsed URL has no `path`, display the `netloc` (the host) instead of displaying nothing.


Also, the length of response tab names is now limited to 32 characters by default. You can change this limit in settings.

# Bug Fixes
Fixed a bug in saving requests back to the requester file when paging through request history.
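
The 32-character limit corresponds to the `response_tab_name_length` key read in `commands/request.py` above. Overriding it should just be a matter of adding the key to your Requester user settings, e.g. (file location assumed):

```
// Packages/User/Requester.sublime-settings
{
    "response_tab_name_length": 64
}
```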
