Paster command that can be run in a cron #3

Closed
wants to merge 18 commits

4 participants

@nigelbabu
Owner

This is definitely not perfect, but it will do the job. There are a few things that need to be fixed before we merge this into the datastore extension, but this should do for now.

  • Use the get_action commands for the datastore API. I haven't done this yet, because it was useful for debugging not to do that.
  • Check existence of site_url and fail if it isn't set. Only ever a problem with developer sites, I'd think, so it's not a big deal for SA. (This is no longer required since I use get_action() directly.)
ckanext/sa/commands.py
((144 lines not shown))
+ [
+ messytables.types.StringType,
+ messytables.types.IntegerType,
+ messytables.types.FloatType,
+ messytables.types.DecimalType,
+ messytables.types.DateUtilType
+ ],
+ strict=True
+ )
+ logger.info('Guessed types: {0}'.format(guessed_types))
+ row_set.register_processor(types_processor(guessed_types, strict=True))
+ row_set.register_processor(stringify_processor())
+
+ ckan_url = context['site_url'].rstrip('/')
+
+ datastore_create_request_url = '%s/api/3/action/datastore_create' % (ckan_url)

Why not use the API directly? I know we do this in the datastorer, but only because the old datastore had just an HTTP interface and I reused the code. We should use the API now that we're rewriting it anyway :smile:
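
For reference, a minimal sketch of the direct get_action() call, roughly what push_to_datastore's send_request ends up doing in the later revision; the resource id, fields and records below are placeholder values:

import ckan.logic as logic
import ckan.model as model

# Call the action function in-process instead of POSTing to
# {site_url}/api/3/action/datastore_create over HTTP.
context = {'model': model, 'ignore_auth': True}
data_dict = {
    'resource_id': 'some-resource-id',  # placeholder id
    'fields': [{'id': 'name', 'type': 'text'}],
    'records': [{'name': 'example'}],
}
logic.get_action('datastore_create')(context, data_dict)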

@nigelbabu I should have read your comments on the PR. So you are already aware of this.

@nigelbabu Owner

Yeah, I'll remove it before I put this into the datastore. This was an "as soon as possible" thing, so I copied as much from the datastore as I could. I managed to change everything but the code in push_to_datastore to use the API :)

@johnglover

Cheers for this Nigel. Here are a few initial comments:

  • I think that this should be in ckanext-datastorer instead of ckanext-sa. At a later date it might be moved into the datastore extension in core.
  • You can then reuse a lot of code from ckanext-datastorer directly such as DATA_FORMATS, stringify, etc.

commands.py

  • There are quite a lot of PEP8 violations that should be tidied up as much as possible (I know some of our older files have these as well, but we should be cleaning up as we go and not introducing additional violations if we can avoid it).
  • The command class should not be called DataStore, which is potentially confusing; I would suggest something like AddToDatastore.
  • Typo on line 88.
  • Lines 91-92 don't seem to do anything.
  • In some places in the code errors are printed, whereas in others the logger is used; it would be good to be consistent about this.
  • DATA_FORMATS can probably live at the top level along side TYPE_MAPPING.
  • The values returned on lines 165 and 203 aren't used anywhere.

fetch_resource.py

  • The fetch_resource.py file needs some cleanup. This looks to be taken from the archiver, but I don't think this code needs to write to the task_status table as it is not a Celery task. Also, if we are using logic layer functions in commands.py, then this should happen in fetch_resource.py as well, instead of making HTTP requests to the API.
  • I'm not sure about fetch_resource.py in general; it would probably be better to refactor this code in the archiver so that it can be used here, and then just depend on the archiver. But maybe that is better left for a later revision, as it would take a bit of work.
@nigelbabu
Owner

I've pushed a bunch of changes after your review; could you take a look again? The only thing left is the logging, which I'd like to have a chat about when you're around.

@domoritz referenced this pull request in ckan/ckan
Closed

Datastore paster commands for importing #676

ckanext/sa/commands.py
@@ -0,0 +1,248 @@
+import datetime
+import itertools
+import messytables
+from messytables import (AnyTableSet, types_processor, headers_guess,
+ headers_processor, type_guess, offset_processor)
+from pylons import config
+from ckan.lib.cli import CkanCommand
+import ckan.logic as logic
+import ckan.model as model
+from fetch_resource import download
+
+import logging as log

I think we should follow the convention used in other CKAN extensions here:

import logging
log = logging.getLogger()
ckanext/sa/commands.py
((37 lines not shown))
+ 'application/vnd.ms-excel',
+ 'application/xls',
+ 'application/octet-stream',
+ 'text/comma-separated-values',
+ 'application/x-zip-compressed',
+ 'application/zip',
+]
+
+
+class DatastorerException(Exception):
+ pass
+
+
+class AddToDataStore(CkanCommand):
+ """
+ Upload all resources from the FileStore to the DataStore

This is not strictly true; the resources do not have to be in the FileStore (they can be standard CKAN external resources).

ckanext/sa/commands.py
((41 lines not shown))
+ 'application/x-zip-compressed',
+ 'application/zip',
+]
+
+
+class DatastorerException(Exception):
+ pass
+
+
+class AddToDataStore(CkanCommand):
+ """
+ Upload all resources from the FileStore to the DataStore
+
+ Usage:
+
+ paster datastore [package-id]

The current command is paster datastore_upload

@johnglover commented on the diff
ckanext/sa/commands.py
((174 lines not shown))
+ in zip(headers, guessed_type_names)],
+ 'records': data
+ }
+ response = logic.get_action('datastore_create')(
+ context,
+ data_dict
+ )
+ return response
+
+ # Delete any existing data before proceeding. Otherwise
+ # 'datastore_create' will append to the existing datastore. And if the
+ # fields have significantly changed, it may also fail.
+ log.info('Deleting existing datastore (it may not exist): '
+ '{0}.'.format(resource['id']))
+ try:
+ logic.get_action('datastore_delete')(

Is there a nice way to check that the resource exists before calling delete? It's a bit confusing to get error messages about resources not being found here. If not, we can ignore this for now.

@nigelbabu Owner

I don't think there's a nice way. The only way I can think of is to do a search on the datastore.
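
A minimal sketch of that idea, assuming datastore_search raises NotFound for resources that have no datastore table (the helper name and the limit=0 trick are just illustrative):

import ckan.logic as logic

def datastore_table_exists(context, resource_id):
    # Probe the datastore with a zero-row search; NotFound means
    # there is no datastore table for this resource yet.
    try:
        logic.get_action('datastore_search')(
            context, {'resource_id': resource_id, 'limit': 0})
        return True
    except logic.NotFound:
        return False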

@johnglover

@nigelbabu Thanks Nigel. It seems quite close now, and it worked for me in terms of getting resources into the datastore. I have added a few comments inline.

The only other thing is that there is a dependency on messytables that is not declared anywhere. Other extensions (e.g. ckanext-harvest) have gone the route of adding a pip-requirements.txt to the extension root; this is probably a good idea in this case as well.

@nigelbabu
Owner

@johnglover Do you have any more comments?

@johnglover

@nigelbabu No, if you add this to the datastorer I'll merge it.

@adamamyl

What's the status of this PR?

@nigelbabu
Owner

This was merged into datastorer.

@nigelbabu nigelbabu closed this
248 ckanext/sa/commands.py
@@ -0,0 +1,248 @@
+import datetime
+import itertools
+import messytables
+from messytables import (AnyTableSet, types_processor, headers_guess,
+ headers_processor, type_guess, offset_processor)
+from pylons import config
+from ckan.lib.cli import CkanCommand
+import ckan.logic as logic
+import ckan.model as model
+from fetch_resource import download
+
+import logging
+log = logging.getLogger()
+
+TYPE_MAPPING = {
+ messytables.types.StringType: 'text',
+ # 'int' may not be big enough,
+ # and type detection may not realize it needs to be big
+ messytables.types.IntegerType: 'numeric',
+ messytables.types.FloatType: 'float',
+ messytables.types.DecimalType: 'numeric',
+ messytables.types.DateType: 'timestamp',
+ messytables.types.DateUtilType: 'timestamp'
+}
+
+
+DATA_FORMATS = [
+ 'csv',
+ 'tsv',
+ 'text/csv',
+ 'txt',
+ 'text/plain',
+ 'text/tsv',
+ 'text/tab-separated-values',
+ 'xls',
+ 'application/ms-excel',
+ 'application/vnd.ms-excel',
+ 'application/xls',
+ 'application/octet-stream',
+ 'text/comma-separated-values',
+ 'application/x-zip-compressed',
+ 'application/zip',
+]
+
+
+class DatastorerException(Exception):
+ pass
+
+
+class AddToDataStore(CkanCommand):
+ """
+ Upload all resources with a url and a mimetype/format matching allowed
+ formats to the DataStore
+
+ Usage:
+
+ paster datastore_upload
+ - Update all resources.
+ """
+ summary = __doc__.split('\n')[0]
+ usage = __doc__
+ min_args = 0
+ max_args = 1
+ MAX_PER_PAGE = 50
+ max_content_length = int(config.get('ckanext-archiver.max_content_length',
+ 50000000))
+
+ def _get_all_packages(self):
+ page = 1
+ context = {
+ 'model': model,
+ }
+ while True:
+ data_dict = {
+ 'page': page,
+ 'limit': self.MAX_PER_PAGE,
+ }
+ packages = logic.get_action('current_package_list_with_resources')(
+ context, data_dict)
+ if not packages:
+ raise StopIteration
+ for package in packages:
+ yield package
+ page += 1
+
+ def command(self):
+ """
+ Parse command line arguments and call the appropriate method
+ """
+ if self.args and self.args[0] in ['--help', '-h', 'help']:
+ print self.__doc__
+ return
+
+ self._load_config()
+ user = logic.get_action('get_site_user')({'model': model,
+ 'ignore_auth': True}, {})
+ packages = self._get_all_packages()
+ context = {
+ 'username': user.get('name'),
+ 'user': user.get('name'),
+ 'model': model
+
+ }
+ for package in packages:
+ for resource in package.get('resources', []):
+ mimetype = resource['mimetype']
+ if mimetype and not(mimetype in DATA_FORMATS or
+ resource['format'].lower()
+ in DATA_FORMATS):
+ log.warn('Skipping resource {0} from package {1} because '
+ 'MIME type {2} or format {3} is '
+ 'unrecognized'.format(resource['url'],
+ package['name'],
+ mimetype,
+ resource['format'])
+ )
+ continue
+ log.info('Datastore resource from resource {0} from '
+ 'package {1}'.format(resource['url'],
+ package['name']))
+ self.push_to_datastore(context, resource)
+
+ def push_to_datastore(self, context, resource):
+ try:
+ result = download(
+ context,
+ resource,
+ self.max_content_length,
+ DATA_FORMATS
+ )
+ except Exception as e:
+ log.exception(e)
+ return
+ content_type = result['headers'].get('content-type', '')\
+ .split(';', 1)[0] # remove parameters
+
+ f = open(result['saved_file'], 'rb')
+ table_sets = AnyTableSet.from_fileobj(
+ f,
+ mimetype=content_type,
+ extension=resource['format'].lower()
+ )
+
+ ##only first sheet in xls for time being
+ row_set = table_sets.tables[0]
+ offset, headers = headers_guess(row_set.sample)
+ row_set.register_processor(headers_processor(headers))
+ row_set.register_processor(offset_processor(offset + 1))
+ row_set.register_processor(datetime_procesor())
+
+ log.info('Header offset: {0}.'.format(offset))
+
+ guessed_types = type_guess(
+ row_set.sample,
+ [
+ messytables.types.StringType,
+ messytables.types.IntegerType,
+ messytables.types.FloatType,
+ messytables.types.DecimalType,
+ messytables.types.DateUtilType
+ ],
+ strict=True
+ )
+ log.info('Guessed types: {0}'.format(guessed_types))
+ row_set.register_processor(types_processor(guessed_types, strict=True))
+ row_set.register_processor(stringify_processor())
+
+ guessed_type_names = [TYPE_MAPPING[type(gt)] for gt in guessed_types]
+
+ def send_request(data):
+ data_dict = {
+ 'resource_id': resource['id'],
+ 'fields': [dict(id=name, type=typename) for name, typename
+ in zip(headers, guessed_type_names)],
+ 'records': data
+ }
+ response = logic.get_action('datastore_create')(
+ context,
+ data_dict
+ )
+ return response
+
+ # Delete any existing data before proceeding. Otherwise
+ # 'datastore_create' will append to the existing datastore. And if the
+ # fields have significantly changed, it may also fail.
+ log.info('Deleting existing datastore (it may not exist): '
+ '{0}.'.format(resource['id']))
+ try:
+ logic.get_action('datastore_delete')(
+ context,
+ {'resource_id': resource['id']}
+ )
+ except Exception as e:
+ log.exception(e)
+
+ log.info('Creating: {0}.'.format(resource['id']))
+
+ # generates chunks of data that can be loaded into ckan
+ # n is the maximum size of a chunk
+ def chunky(iterable, n):
+ it = iter(iterable)
+ while True:
+ chunk = list(
+ itertools.imap(
+ dict, itertools.islice(it, n)))
+ if not chunk:
+ return
+ yield chunk
+
+ count = 0
+ for data in chunky(row_set.dicts(), 100):
+ count += len(data)
+ send_request(data)
+
+ log.info("There should be {n} entries in {res_id}.".format(
+ n=count,
+ res_id=resource['id']
+ ))
+
+ resource.update({
+ 'webstore_url': 'active',
+ 'webstore_last_updated': datetime.datetime.now().isoformat()
+ })
+
+ logic.get_action('resource_update')(context, resource)
+
+
+def stringify_processor():
+ def to_string(row_set, row):
+ for cell in row:
+ if not cell.value:
+ cell.value = None
+ else:
+ cell.value = unicode(cell.value)
+ return row
+ return to_string
+
+
+def datetime_procesor():
+ ''' Stringifies dates so that they can be parsed by the db
+ '''
+ def datetime_convert(row_set, row):
+ for cell in row:
+ if isinstance(cell.value, datetime.datetime):
+ cell.value = cell.value.isoformat()
+ cell.type = messytables.StringType()
+ return row
+ return datetime_convert
307 ckanext/sa/fetch_resource.py
@@ -0,0 +1,307 @@
+from datetime import datetime
+import hashlib
+import httplib
+import json
+import logging as log
+import os
+import requests
+import tempfile
+import urllib
+import urlparse
+import ckan.logic as logic
+
+
+HTTP_ERROR_CODES = {
+ httplib.MULTIPLE_CHOICES: "300 Multiple Choices not implemented",
+ httplib.USE_PROXY: "305 Use Proxy not implemented",
+ httplib.INTERNAL_SERVER_ERROR: "Internal server error on the remote "
+ "server",
+ httplib.BAD_GATEWAY: "Bad gateway",
+ httplib.SERVICE_UNAVAILABLE: "Service unavailable",
+ httplib.GATEWAY_TIMEOUT: "Gateway timeout",
+ httplib.METHOD_NOT_ALLOWED: "405 Method Not Allowed"
+}
+
+
+class DownloadError(Exception):
+ pass
+
+
+class ChooseNotToDownload(Exception):
+ pass
+
+
+class LinkCheckerError(Exception):
+ pass
+
+
+class LinkInvalidError(LinkCheckerError):
+ pass
+
+
+class LinkHeadRequestError(LinkCheckerError):
+ pass
+
+
+class CkanError(Exception):
+ pass
+
+
+def _clean_content_type(ct):
+ # For now we should remove the charset from the content type and
+ # handle it better, differently, later on.
+ if 'charset' in ct:
+ return ct[:ct.index(';')]
+ return ct
+
+
+def download(context, resource, max_content_length, data_formats,
+ url_timeout=30):
+ '''Given a resource, tries to download it.
+
+ If the size or format is not acceptable for download then
+ ChooseNotToDownload is raised.
+
+ If there is an error performing the download then
+ DownloadError is raised.
+ '''
+
+ url = resource['url']
+
+ if (resource.get('resource_type') == 'file.upload' and
+ not url.startswith('http')):
+ url = context['site_url'].rstrip('/') + url
+
+ link_context = "{}"
+ link_data = json.dumps({
+ 'url': url,
+ 'url_timeout': url_timeout
+ })
+
+ headers = json.loads(link_checker(link_context, link_data))
+
+ resource_format = resource['format'].lower()
+ ct = _clean_content_type(headers.get('content-type', '').lower())
+ cl = headers.get('content-length')
+
+ resource_changed = False
+
+ if resource.get('mimetype') != ct:
+ resource_changed = True
+ resource['mimetype'] = ct
+
+ # this is to store the size in case there is an error, but the real size
+ # check is done after downloading the data file, with its real length
+ if cl is not None and (resource.get('size') != cl):
+ resource_changed = True
+ resource['size'] = cl
+
+ # make sure resource content-length does not exceed our maximum
+ if cl and int(cl) >= max_content_length:
+ if resource_changed:
+ _update_resource(context, resource)
+ # record fact that resource is too large to archive
+ log.warning('Resource too large to download: %s > max (%s). '
+ 'Resource: %s %r', cl, max_content_length, resource['id'],
+ url)
+ raise ChooseNotToDownload("Content-length %s exceeds maximum allowed "
+ "value %s" % (cl, max_content_length))
+
+ # check that resource is a data file
+ if data_formats != 'all' and not (resource_format in data_formats or
+ ct.lower() in data_formats):
+ if resource_changed:
+ _update_resource(context, resource)
+ log.warning('Resource wrong type to download: %s / %s. Resource: %s '
+ '%r', resource_format, ct.lower(), resource['id'], url)
+ raise ChooseNotToDownload('Of content type "%s" which is not a '
+ 'recognised data file for download' % ct)
+
+ # get the resource and archive it
+ try:
+ res = requests.get(url, timeout=url_timeout)
+ except requests.exceptions.ConnectionError, e:
+ raise DownloadError('Connection error: %s' % e)
+ except requests.exceptions.HTTPError, e:
+ raise DownloadError('Invalid HTTP response: %s' % e)
+ except requests.exceptions.Timeout, e:
+ raise DownloadError('Connection timed out after %ss' % url_timeout)
+ except requests.exceptions.TooManyRedirects, e:
+ raise DownloadError('Too many redirects')
+ except requests.exceptions.RequestException, e:
+ raise DownloadError('Error downloading: %s' % e)
+ except Exception, e:
+ raise DownloadError('Error with the download: %s' % e)
+
+ length, hash, saved_file = _save_resource(resource, res,
+ max_content_length)
+
+ # check if resource size changed
+ if unicode(length) != resource.get('size'):
+ resource_changed = True
+ resource['size'] = unicode(length)
+
+ # check that resource did not exceed maximum size when being saved
+ # (content-length header could have been invalid/corrupted, or not accurate
+ # if resource was streamed)
+ #
+ # TODO: remove partially archived file in this case
+ if length >= max_content_length:
+ if resource_changed:
+ _update_resource(context, resource)
+ # record fact that resource is too large to archive
+ log.warning('Resource found to be too large to archive: %s > max (%s).'
+ ' Resource: %s %r', length, max_content_length,
+ resource['id'], url)
+ raise ChooseNotToDownload("Content-length after streaming reached "
+ "maximum allowed value of %s"
+ % max_content_length)
+
+ # zero length usually indicates a problem too
+ if length == 0:
+ if resource_changed:
+ _update_resource(context, resource)
+ # record fact that resource is zero length
+ log.warning('Resource found was zero length - not archiving. '
+ 'Resource: %s %r', resource['id'], url)
+ raise DownloadError("Content-length after streaming was zero")
+
+ # update the resource metadata in CKAN if the resource has changed
+ if resource.get('hash') != hash:
+ resource['hash'] = hash
+ try:
+ # This may fail for archiver.update() as a result of the resource
+ # not yet existing, but is necessary for dependent extensions.
+ _update_resource(context, resource)
+ except:
+ pass
+
+ log.warning('Resource downloaded: id=%s url=%r cache_filename=%s length=%s'
+ ' hash=%s', resource['id'], url, saved_file, length, hash)
+
+ return {'length': length,
+ 'hash': hash,
+ 'headers': headers,
+ 'saved_file': saved_file}
+
+
+def link_checker(context, data):
+ """
+ Check that the resource's url is valid, and accepts a HEAD request.
+
+ Raises LinkInvalidError if the URL is invalid
+ Raises LinkHeadRequestError if HEAD request fails
+
+ Returns a json dict of the headers of the request
+ """
+ data = json.loads(data)
+ url_timeout = data.get('url_timeout', 30)
+
+ error_message = ''
+ headers = {}
+
+ # Find out if it has unicode characters, and if it does, quote them
+ # so we are left with an ascii string
+ url = data['url']
+ try:
+ url = url.decode('ascii')
+ except:
+ parts = list(urlparse.urlparse(url))
+ parts[2] = urllib.quote(parts[2].encode('utf-8'))
+ url = urlparse.urlunparse(parts)
+ url = str(url)
+
+ # parse url
+ parsed_url = urlparse.urlparse(url)
+ # Check we aren't using any schemes we shouldn't be
+ allowed_schemes = ['http', 'https', 'ftp']
+ if not parsed_url.scheme in allowed_schemes:
+ raise LinkInvalidError("Invalid url scheme")
+ # check that query string is valid
+ # see: http://trac.ckan.org/ticket/318
+ # TODO: check urls with a better validator?
+ # eg: ll.url (http://www.livinglogic.de/Python/url/Howto.html)?
+ elif any(['/' in parsed_url.query, ':' in parsed_url.query]):
+ raise LinkInvalidError("Invalid URL")
+ else:
+ # Send a head request
+ try:
+ res = requests.head(url, timeout=url_timeout)
+ headers = res.headers
+ except httplib.InvalidURL, ve:
+ log.warning("Could not make a head request to %r, error is: %s. "
+ "Package is: %r. This sometimes happens when using an "
+ "old version of requests on a URL which issues a 301 "
+ "redirect. Version=%s", url, ve, data.get('package'),
+ requests.__version__)
+ raise LinkHeadRequestError("Invalid URL or Redirect Link")
+ except ValueError, ve:
+ log.warning("Could not make a head request to %r, error is: %s. "
+ "Package is: %r.", url, ve, data.get('package'))
+ raise LinkHeadRequestError("Could not make HEAD request")
+ except requests.exceptions.ConnectionError, e:
+ raise LinkHeadRequestError('Connection error: %s' % e)
+ except requests.exceptions.HTTPError, e:
+ raise LinkHeadRequestError('Invalid HTTP response: %s' % e)
+ except requests.exceptions.Timeout, e:
+ raise LinkHeadRequestError('Connection timed out after %ss'
+ % url_timeout)
+ except requests.exceptions.TooManyRedirects, e:
+ raise LinkHeadRequestError('Too many redirects')
+ except requests.exceptions.RequestException, e:
+ raise LinkHeadRequestError('Error during request: %s' % e)
+ except Exception, e:
+ raise LinkHeadRequestError('Error with the request: %s' % e)
+ else:
+ if not res.ok or res.status_code >= 400:
+ if res.status_code in HTTP_ERROR_CODES:
+ error_message = ('Server returned error: %s'
+ % HTTP_ERROR_CODES[res.status_code])
+ else:
+ error_message = ("URL unobtainable: Server returned "
+ "HTTP %s" % res.status_code)
+ raise LinkHeadRequestError(error_message)
+ return json.dumps(headers)
+
+
+def _update_resource(context, resource):
+ """
+ Use CKAN API to update the given resource.
+ Returns the content of the response.
+
+ """
+ resource['last_modified'] = datetime.now().isoformat()
+ try:
+ logic.get_action('resource_update')(context, resource)
+ except Exception as e:
+ log.exception(e)
+ raise CkanError('ckan failed to update resource')
+
+
+def _save_resource(resource, response, max_file_size, chunk_size=1024*16):
+ """
+ Write the response content to disk.
+
+ Returns a tuple:
+
+ (file length: int, content hash: string, saved file path: string)
+ """
+ resource_hash = hashlib.sha1()
+ length = 0
+
+ fd, tmp_resource_file_path = tempfile.mkstemp()
+
+ with open(tmp_resource_file_path, 'wb') as fp:
+ for chunk in response.iter_content(chunk_size=chunk_size,
+ decode_unicode=False):
+ fp.write(chunk)
+ length += len(chunk)
+ resource_hash.update(chunk)
+
+ if length >= max_file_size:
+ break
+
+ os.close(fd)
+
+ content_hash = unicode(resource_hash.hexdigest())
+ return length, content_hash, tmp_resource_file_path
2  pip-requirements.txt
@@ -0,0 +1,2 @@
+# Requirements to run commands.py
+messytables>=0.5.0
7 setup.py
@@ -24,8 +24,11 @@
],
entry_points=\
"""
- [ckan.plugins]
+ [paste.paster_command]
+ datastore_upload = ckanext.sa.commands:AddToDataStore
+
+ [ckan.plugins]
# Add plugins here
- sa_customizations=ckanext.sa.plugin:SACustomizations
+ sa_customizations=ckanext.sa.plugin:SACustomizations
""",
)