Feature tests working with fake Stagecraft server
Backdrop now gets its config from Stagecraft rather than storing it
itself. We need a fake Stagecraft server to return config for feature
tests where we cannot stub out the Stagecraft client interface.
nick-gravgaard committed Mar 7, 2014
1 parent efe8412 commit d0c62d1
Showing 23 changed files with 261 additions and 99 deletions.
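
The commit message above describes Backdrop fetching its data-set (bucket) configuration from Stagecraft over HTTP rather than keeping it locally. As a rough sketch of the kind of call involved (the endpoint path, auth header and JSON handling are assumptions, not taken from this commit; the real client lives in backdrop/core/repository.py, of which only a fragment appears below):

import json
import requests

# Values taken from the development configs changed in this commit.
STAGECRAFT_URL = 'http://localhost:8080'
STAGECRAFT_DATA_SET_QUERY_TOKEN = 'stagecraft-data-set-query-token-fake'


def get_bucket_config(name):
    # Hypothetical endpoint path; the path Backdrop really uses is not shown here.
    response = requests.get(
        '{0}/data-sets/{1}'.format(STAGECRAFT_URL, name),
        headers={'Authorization': 'Bearer {0}'.format(
            STAGECRAFT_DATA_SET_QUERY_TOKEN)},
    )
    try:
        response.raise_for_status()
    except requests.HTTPError as e:
        if e.response.status_code == 404:
            return None  # unknown bucket
        raise
    return json.loads(response.content)

The feature tests cannot make this kind of call against a real Stagecraft, which is why the commit adds a fake Stagecraft server, presumably listening on localhost:8080, which is where the changed configs below now point.
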
2 changes: 1 addition & 1 deletion backdrop/admin/config/development.py
@@ -12,5 +12,5 @@
except ImportError:
from development_environment_sample import *

STAGECRAFT_URL = 'http://stagecraft.perfplat.dev:3204'
STAGECRAFT_URL = 'http://localhost:8080'
STAGECRAFT_DATA_SET_QUERY_TOKEN = 'stagecraft-data-set-query-token-fake'
1 change: 1 addition & 0 deletions backdrop/admin/config/test.py
@@ -9,4 +9,5 @@
MONGO_PORT = 27017

from test_environment import *

from development import STAGECRAFT_URL, STAGECRAFT_DATA_SET_QUERY_TOKEN
4 changes: 2 additions & 2 deletions backdrop/core/repository.py
@@ -97,9 +97,9 @@ def _get_url(url):
try:
response.raise_for_status()
except requests.HTTPError as e:
if e.code == 404:
if e.response.status_code == 404:
return None
raise
raise e

Comment from @robyoung (Contributor), Mar 12, 2014:

Doesn't explicitly re-raising the exception with "raise e", rather than just calling "raise", trash the stack trace?


return response.content

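On robyoung's question above: under Python 2, which Backdrop targeted at the time, a bare raise re-raises the active exception with its original traceback, while raise e starts a new traceback at that statement, so the explicit form does lose the frames below the except block. A minimal illustration:

def fails():
    raise ValueError("boom")


def bare_reraise():
    try:
        fails()
    except ValueError:
        raise      # traceback still points into fails()


def explicit_reraise():
    try:
        fails()
    except ValueError as e:
        raise e    # under Python 2 the traceback now starts at this line
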
2 changes: 1 addition & 1 deletion backdrop/read/config/development.py
@@ -45,5 +45,5 @@
"lpa_journey": True,
}

STAGECRAFT_URL = 'http://stagecraft.perfplat.dev:3204'
STAGECRAFT_URL = 'http://localhost:8080'
STAGECRAFT_DATA_SET_QUERY_TOKEN = 'stagecraft-data-set-query-token-fake'
2 changes: 1 addition & 1 deletion backdrop/write/config/development.py
@@ -13,5 +13,5 @@
except ImportError:
from development_environment_sample import *

STAGECRAFT_URL = 'http://stagecraft.perfplat.dev:3204'
STAGECRAFT_URL = 'http://localhost:8080'
STAGECRAFT_DATA_SET_QUERY_TOKEN = 'stagecraft-data-set-query-token-fake'
15 changes: 9 additions & 6 deletions features/admin/csv_upload.feature
@@ -2,8 +2,9 @@
Feature: CSV Upload

Scenario: Upload CSV data
Given I have a bucket named "my_bucket"
And bucket setting upload_format is "csv"
Given I have a bucket named "my_bucket" with settings

Comment from @robyoung (Contributor), Mar 12, 2014:

Yes! This is a really nice way of doing this.

| key | value |
| upload_format | "csv" |
And I am logged in
And I can upload to "my_bucket"
And a file named "data.csv"
@@ -28,8 +29,9 @@ Feature: CSV Upload
city,città
coffee,caffè
"""
And I have a bucket named "my_bucket"
And bucket setting upload_format is "csv"
And I have a bucket named "my_bucket" with settings
| key | value |
| upload_format | "csv" |
And I am logged in
And I can upload to "my_bucket"
When I go to "/my_bucket/upload"
@@ -59,8 +61,9 @@
2013-01-01,2013-01-07,abc,287
2013-01-01,2013-01-07,def,425
"""
And I have a bucket named "bucket_with_auto_id"
And bucket setting upload_format is "csv"
And I have a bucket named "bucket_with_auto_id" with settings
| key | value |
| upload_format | "csv" |
And I am logged in
And I can upload to "bucket_with_auto_id"
When I go to "/bucket_with_auto_id/upload"
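
The feature files in this commit replace individual "bucket setting … is …" steps with a single "with settings" step that takes a table. The step definitions themselves are not part of this excerpt; a minimal sketch of how such a step might consume the table with behave (names and the bucket-registration detail are illustrative assumptions):

import json

from behave import given


@given('I have a bucket named "{bucket_name}" with settings')
def step_bucket_with_settings(context, bucket_name):
    settings = {}
    for row in context.table:
        # Table values are written as JSON ("csv", true, ["a.b.c", ...]),
        # so they can be decoded directly.
        settings[row['key']] = json.loads(row['value'])
    # Registering the bucket and its settings (for example with the fake
    # Stagecraft server) is elided here.
    context.bucket_settings = settings
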
20 changes: 12 additions & 8 deletions features/admin/csv_upload_validation.feature
@@ -8,8 +8,9 @@ Feature: csv upload validation
Pawel,27,Polish,male
Max,35,Italian,male
"""
And I have a bucket named "foo"
And bucket setting upload_format is "csv"
And I have a bucket named "foo" with settings
| key | value |
| upload_format | "csv" |
And I am logged in
And I can upload to "foo"
When I go to "/foo/upload"
@@ -25,8 +26,9 @@
Pawel,27,Polish,male
Max,35,Italian
"""
And I have a bucket named "foo"
And bucket setting upload_format is "csv"
And I have a bucket named "foo" with settings
| key | value |
| upload_format | "csv" |
And I am logged in
And I can upload to "foo"
When I go to "/foo/upload"
@@ -37,8 +39,9 @@

Scenario: file too large
Given a file named "data.csv" of size "1000000" bytes
And I have a bucket named "foo"
And bucket setting upload_format is "csv"
And I have a bucket named "foo" with settings
| key | value |
| upload_format | "csv" |
And I am logged in
And I can upload to "foo"
When I go to "/foo/upload"
@@ -49,8 +52,9 @@

Scenario: non UTF8 characters
Given a file named "data.csv" with fixture "bad-characters.csv"
And I have a bucket named "foo"
And bucket setting upload_format is "csv"
And I have a bucket named "foo" with settings
| key | value |
| upload_format | "csv" |
And I am logged in
And I can upload to "foo"
When I go to "/foo/upload"
10 changes: 6 additions & 4 deletions features/admin/excel_upload.feature
@@ -3,8 +3,9 @@ Feature: excel upload

Scenario: Upload XLSX file
Given a file named "data.xlsx" with fixture "data.xlsx"
And I have a bucket named "my_xlsx_bucket"
And bucket setting upload_format is "excel"
And I have a bucket named "my_xlsx_bucket" with settings
| key | value |
| upload_format | "excel" |
And I am logged in
And I can upload to "my_xlsx_bucket"
When I go to "/my_xlsx_bucket/upload"
@@ -18,8 +19,9 @@

Scenario: using _timestamp for an auto id
Given a file named "LPA_MI_EXAMPLE.xls" with fixture "LPA_MI_EXAMPLE.xls"
And I have a bucket named "bucket_with_timestamp_auto_id"
And bucket setting upload_format is "excel"
And I have a bucket named "bucket_with_timestamp_auto_id" with settings
| key | value |
| upload_format | "excel" |
And I am logged in
And I can upload to "bucket_with_timestamp_auto_id"
When I go to "/bucket_with_timestamp_auto_id/upload"
42 changes: 24 additions & 18 deletions features/contrib/evl_upload.feature
@@ -3,9 +3,10 @@ Feature: EVL Upload

Scenario: Upload call center volumes
Given a file named "CEG Data.xlsx" with fixture "contrib/CEG Transaction Tracker.xlsx"
and I have a bucket named "evl_ceg_data"
and bucket setting upload_format is "excel"
and bucket setting upload_filters is ["backdrop.core.upload.filters.first_sheet_filter","backdrop.contrib.evl_upload_filters.ceg_volumes"]
and I have a bucket named "evl_ceg_data" with settings
| key | value |
| upload_format | "excel" |
| upload_filters | ["backdrop.core.upload.filters.first_sheet_filter","backdrop.contrib.evl_upload_filters.ceg_volumes"] |
and I am logged in
and I can upload to "evl_ceg_data"
when I go to "/evl_ceg_data/upload"
@@ -19,9 +20,10 @@

Scenario: Upload services volumetrics
Given a file named "EVL Volumetrics.xlsx" with fixture "contrib/EVL Services Volumetrics Sample.xls"
and I have a bucket named "evl_services_volumetrics"
and bucket setting upload_format is "excel"
and bucket setting upload_filters is ["backdrop.core.upload.filters.first_sheet_filter","backdrop.contrib.evl_upload_filters.service_volumetrics"]
and I have a bucket named "evl_services_volumetrics" with settings
| key | value |
| upload_format | "excel" |
| upload_filters | ["backdrop.core.upload.filters.first_sheet_filter","backdrop.contrib.evl_upload_filters.service_volumetrics"] |
and I am logged in
and I can upload to "evl_services_volumetrics"
when I go to "/evl_services_volumetrics/upload"
@@ -35,9 +37,10 @@

Scenario: Upload service failures
Given a file named "EVL Volumetrics.xlsx" with fixture "contrib/EVL Services Volumetrics Sample.xls"
and I have a bucket named "evl_services_failures"
and bucket setting upload_format is "excel"
and bucket setting upload_filters is ["backdrop.contrib.evl_upload_filters.service_failures"]
and I have a bucket named "evl_services_failures" with settings
| key | value |
| upload_format | "excel" |
| upload_filters | ["backdrop.contrib.evl_upload_filters.service_failures"] |
and I am logged in
and I can upload to "evl_services_failures"
when I go to "/evl_services_failures/upload"
@@ -53,9 +56,10 @@

Scenario: Upload channel volumetrics
Given a file named "EVL Volumetrics.xlsx" with fixture "contrib/EVL Channel Volumetrics Sample.xls"
and I have a bucket named "evl_channel_volumetrics"
and bucket setting upload_format is "excel"
and bucket setting upload_filters is ["backdrop.core.upload.filters.first_sheet_filter","backdrop.contrib.evl_upload_filters.channel_volumetrics"]
and I have a bucket named "evl_channel_volumetrics" with settings
| key | value |
| upload_format | "excel" |
| upload_filters | ["backdrop.core.upload.filters.first_sheet_filter","backdrop.contrib.evl_upload_filters.channel_volumetrics"] |
and I am logged in
and I can upload to "evl_channel_volumetrics"
when I go to "/evl_channel_volumetrics/upload"
@@ -70,9 +74,10 @@

Scenario: Upload customer satisfaction
Given a file named "EVL Satisfaction.xlsx" with fixture "contrib/EVL Customer Satisfaction.xlsx"
and I have a bucket named "evl_customer_satisfaction"
and bucket setting upload_format is "excel"
and bucket setting upload_filters is ["backdrop.core.upload.filters.first_sheet_filter","backdrop.contrib.evl_upload_filters.customer_satisfaction"]
and I have a bucket named "evl_customer_satisfaction" with settings
| key | value |
| upload_format | "excel" |
| upload_filters | ["backdrop.core.upload.filters.first_sheet_filter","backdrop.contrib.evl_upload_filters.customer_satisfaction"] |
and I am logged in
and I can upload to "evl_customer_satisfaction"
when I go to "/evl_customer_satisfaction/upload"
@@ -87,9 +92,10 @@

Scenario: Upload evl volumetrics
Given a file named "evl-volumetrics.xls" with fixture "contrib/evl-volumetrics.xls"
and I have a bucket named "evl_volumetrics"
and bucket setting upload_format is "excel"
and bucket setting upload_filters is ["backdrop.contrib.evl_upload_filters.volumetrics"]
and I have a bucket named "evl_volumetrics" with settings
| key | value |
| upload_format | "excel" |
| upload_filters | ["backdrop.contrib.evl_upload_filters.volumetrics"] |
and I am logged in
and I can upload to "evl_volumetrics"
when I go to "/evl_volumetrics/upload"
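
The upload_filters values above are lists of dotted-path function names. How Backdrop resolves and chains them is not shown in this diff; a hedged sketch of one way such a chain could be loaded and applied to parsed rows:

import importlib


def load_filter(dotted_path):
    # "backdrop.contrib.evl_upload_filters.ceg_volumes" ->
    # the ceg_volumes function from that module.
    module_name, _, func_name = dotted_path.rpartition('.')
    return getattr(importlib.import_module(module_name), func_name)


def apply_filters(rows, dotted_paths):
    for path in dotted_paths:
        rows = load_filter(path)(rows)
    return rows
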
10 changes: 6 additions & 4 deletions features/end_to_end.feature
@@ -3,17 +3,19 @@ Feature: end-to-end platform test

Scenario: write data to platform
Given I have the data in "dinosaurs.json"
and I have a bucket named "reptiles"
and I have a bucket named "reptiles" with settings
| key | value |
| raw_queries_allowed | true |
and I use the bearer token for the bucket
and bucket setting raw_queries_allowed is true
when I post the data to "/reptiles"
then I should get back a status of "200"

Scenario: write and retrieve data from platform
Given I have the data in "dinosaurs.json"
and I have a bucket named "reptiles"
and I have a bucket named "reptiles" with settings
| key | value |
| raw_queries_allowed | true |
and I use the bearer token for the bucket
and bucket setting raw_queries_allowed is true
when I post the data to "/reptiles"
and I go to "/reptiles?filter_by=size:big"
then I should get back a status of "200"
11 changes: 11 additions & 0 deletions features/environment.py
@@ -4,6 +4,8 @@
from backdrop.core.log_handler import get_log_file_handler
from features.support.splinter_client import SplinterClient

from features.support.stagecraft import StagecraftService

sys.path.append(
os.path.join(os.path.dirname(__file__), '..')
)
@@ -46,6 +48,15 @@ def after_scenario(context, scenario):
handler()
except Exception as e:
log.exception(e)
if server_running(context):
context.mock_stagecraft_server.stop()
context.mock_stagecraft_server = None


def server_running(context):
return 'mock_stagecraft_server' in context and \
context.mock_stagecraft_server and \
context.mock_stagecraft_server.running


def after_feature(context, _):
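
after_scenario() above stops a mock_stagecraft_server and server_running() checks its running flag, but the StagecraftService implementation in features/support/stagecraft.py is not included in this excerpt. A minimal sketch consistent with that usage (start(), stop(), running), assuming a simple threaded WSGI stub that serves canned JSON responses:

import json
import threading
from wsgiref.simple_server import make_server


class StagecraftService(object):
    def __init__(self, port, path_to_response):
        # path_to_response maps request paths to the JSON-serialisable
        # config each scenario wants Stagecraft to return.
        self._responses = path_to_response
        self._server = make_server('localhost', port, self._app)
        self._thread = None
        self.running = False

    def _app(self, environ, start_response):
        body = json.dumps(self._responses.get(environ['PATH_INFO'], {}))
        start_response('200 OK', [('Content-Type', 'application/json')])
        return [body.encode('utf-8')]

    def start(self):
        self._thread = threading.Thread(target=self._server.serve_forever)
        self._thread.start()
        self.running = True

    def stop(self):
        self._server.shutdown()
        self._thread.join()
        self._server.server_close()
        self.running = False

A scenario (or a before_scenario hook) would construct this with whatever responses it needs, call start(), and rely on the configs above pointing STAGECRAFT_URL at http://localhost:8080 so Backdrop talks to the stub instead of a real Stagecraft.
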
6 changes: 3 additions & 3 deletions features/read_api/cache_control.feature
@@ -5,9 +5,9 @@ Feature: the read api should provide cache control headers
then the "Cache-Control" header should be "no-cache"

Scenario: query returns an etag
Given "licensing.json" is in "foo" bucket
and I have a bucket named "foo"
and bucket setting raw_queries_allowed is true
Given "licensing.json" is in "foo" bucket with settings
| key | value |
| raw_queries_allowed | true |
when I go to "/foo"
then the "ETag" header should be ""7c7cec78f75fa9f30428778f2b6da9b42bd104d0""

5 changes: 3 additions & 2 deletions features/read_api/combination_query.feature
@@ -3,8 +3,9 @@ Feature: more complex combination of parameters that are used by clients


Background:
Given "licensing_preview.json" is in "licensing" bucket
and bucket setting raw_queries_allowed is true
Given "licensing_preview.json" is in "licensing" bucket with settings
| key | value |
| raw_queries_allowed | true |


Scenario: for an authority get weekly data for the top 3 licences between two points in time
20 changes: 12 additions & 8 deletions features/read_api/filter.feature
@@ -12,17 +12,19 @@ Feature: filtering queries for read api
then I should get back a status of "400"

Scenario: querying for data between two points
Given "licensing.json" is in "foo" bucket
and bucket setting raw_queries_allowed is true
Given "licensing.json" is in "foo" bucket with settings
| key | value |
| raw_queries_allowed | true |
when I go to "/foo?start_at=2012-12-12T01:01:02%2B00:00&end_at=2012-12-14T00:00:00%2B00:00"
then I should get back a status of "200"
and the JSON should have "1" results
and the "1st" result should be "{"_timestamp": "2012-12-13T01:01:01+00:00", "licence_name": "Temporary events notice", "interaction": "success", "authority": "Westminster", "type": "success", "_id": "1236"}"


Scenario: filtering by a key and value
Given "licensing.json" is in "foo" bucket
and bucket setting raw_queries_allowed is true
Given "licensing.json" is in "foo" bucket with settings
| key | value |
| raw_queries_allowed | true |
when I go to "/foo?filter_by=authority:Camden"
then I should get back a status of "200"
and the JSON should have "2" results
@@ -39,16 +41,18 @@


Scenario: querying for data between two points and filtered by a key and value
Given "licensing.json" is in "foo" bucket
and bucket setting raw_queries_allowed is true
Given "licensing.json" is in "foo" bucket with settings
| key | value |
| raw_queries_allowed | true |
when I go to "/foo?start_at=2012-12-13T00:00:02%2B00:00&end_at=2012-12-19T00:00:00%2B00:00&filter_by=type:success"
then I should get back a status of "200"
and the JSON should have "1" results
and the "1st" result should be "{"_timestamp": "2012-12-13T01:01:01+00:00", "licence_name": "Temporary events notice", "interaction": "success", "authority": "Westminster", "type": "success", "_id": "1236"}"

Scenario: querying for boolean kind of data
Given "dinosaurs.json" is in "lizards" bucket
and bucket setting raw_queries_allowed is true
Given "dinosaurs.json" is in "lizards" bucket with settings
| key | value |
| raw_queries_allowed | true |
when I go to "/lizards?filter_by=eats_people:true"
then I should get back a status of "200"
and the JSON should have "3" results
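
These scenarios exercise filter_by=key:value pairs, including a boolean value (eats_people:true) matched against boolean fields. Backdrop's actual query-string parsing is not part of this diff; a rough, assumed sketch of the kind of parsing involved:

def parse_filter_by(argument):
    # "authority:Camden" -> ("authority", "Camden")
    # "eats_people:true" -> ("eats_people", True)
    key, _, value = argument.partition(':')
    if value in ('true', 'false'):
        value = (value == 'true')
    return key, value
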
2 changes: 0 additions & 2 deletions features/read_api/group.feature
@@ -17,8 +17,6 @@ Feature: grouping queries for read api
then I should get back a status of "200"
and the JSON should have "2" results
and the "2nd" result should be "{"authority": "Westminster", "_count": 3}"

Given "licensing_2.json" is in "foo" bucket
when I go to "/foo?group_by=licence_name&filter_by=authority:Westminster"
then I should get back a status of "200"
and the JSON should have "2" results
9 changes: 5 additions & 4 deletions features/read_api/querying_from_service_data_endpoint.feature
@@ -2,10 +2,11 @@
Feature: Querying data from service-data endpoint

Scenario: querying data
Given "dinosaurs.json" is in "rawr" bucket
and bucket setting data_group is "dinosaurs"
and bucket setting data_type is "taxonomy"
and bucket setting raw_queries_allowed is true
Given "dinosaurs.json" is in "rawr" bucket with settings
| key | value |
| data_group | "dinosaurs" |
| data_type | "taxonomy" |
| raw_queries_allowed | true |
when I go to "/data/dinosaurs/taxonomy?filter_by=eats_people:true"
then I should get back a status of "200"
and the JSON should have "3" results
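
This scenario relies on the bucket's data_group and data_type settings to expose it at /data/<data_group>/<data_type>. The read API's routing code is outside this diff; a hypothetical Flask-style sketch of the lookup, with an invented in-memory stand-in for the Stagecraft-supplied settings:

from flask import Flask, jsonify

app = Flask(__name__)

# Invented stand-in: (data_group, data_type) -> bucket name.
BUCKETS = {
    ('dinosaurs', 'taxonomy'): 'rawr',
}


@app.route('/data/<data_group>/<data_type>')
def query_service_data(data_group, data_type):
    bucket_name = BUCKETS.get((data_group, data_type))
    if bucket_name is None:
        return jsonify(status='error', message='data_set not found'), 404
    # Querying the named bucket with the request's filter parameters is
    # elided here.
    return jsonify(status='ok', bucket=bucket_name)
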
