
Use Mixer for reimbursement view tests #171

Merged
merged 6 commits into from May 28, 2017

Conversation

@cacarrara
Contributor

commented May 26, 2017

This change makes use of mixer[1] for fixtures in Reimbursement view tests.

[1] - https://github.com/klen/mixer/
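For context, the core idea: instead of hand-writing fixture dicts, mixer generates model instances with random but valid field values. A minimal sketch (using mixer's documented Django backend and the Reimbursement model this PR touches):

from mixer.backend.django import mixer

from jarbas.core.models import Reimbursement

# one instance, with every required field filled automatically
reimbursement = mixer.blend(Reimbursement)

# several instances at once
reimbursements = mixer.cycle(5).blend(Reimbursement)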

@cacarrara

Contributor Author

commented May 26, 2017

@cuducos take a look. More than a pull request, this is a proposal to improve the tests.

@coveralls


commented May 26, 2017

Coverage Status

Changes Unknown when pulling a35b6d9 on cacarrara:use-mixer-for-fixtures into datasciencebr:master.

@cacarrara cacarrara changed the title Use mixer for reimbursement view tests Use Mixer for reimbursement view tests May 26, 2017

@jtemporal jtemporal requested a review from cuducos May 26, 2017

@cuducos
Member

left a comment

Hi @cacarrara — that's awesome! As I told @berinhard elsewhere, I was kind of lazy/complacent about addressing fixtures with a proper tool. mixer sounds like a really concise and straightforward way to address it! Many thanks for that.

I made some comments. Most of them concern this PR; some are more overarching, about the project as a whole (say, using mixer everywhere). Please let me know if you want to handle these overarching edits; otherwise I'll wait for the changes strictly related to this PR, then merge them to a secondary branch where I can cover the other tests before bringing it to master.

.travis.yml Outdated
@@ -37,6 +37,7 @@ install:
- psql -U postgres -c 'create database "jarbas";'
- yarn install
- python -m pip install -r requirements.txt coveralls
- python -m pip install -r requirements-test.txt coveralls

@cuducos

cuducos May 26, 2017

Member

We're trying to install coveralls twice. I'd change the strategy here (see the sketch below):

  • Rename requirements-test.txt to requirements-dev.txt
  • Include -r requirements.txt as the first line of the new requirements-dev.txt
  • Move django-test-without-migrations from requirements.txt to requirements-dev.txt
  • Use only requirements-dev.txt in .travis.yml
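Roughly, the result would look like this (a sketch based on the bullets above; the exact package list is assumed):

# requirements-dev.txt (renamed from requirements-test.txt)
-r requirements.txt
django-test-without-migrations
mixer

# .travis.yml install step, replacing the two pip install lines
- python -m pip install -r requirements-dev.txt coveralls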

from django.core.cache import cache
from django.shortcuts import resolve_url
from django.test import TestCase

@cuducos

cuducos May 26, 2017

Member

Unnecessary line: mixer can be imported in the same block as the Django imports (third-party imports, according to PEP 8)
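That is, something like this (assuming mixer's Django backend is what's imported):

from django.core.cache import cache
from django.shortcuts import resolve_url
from django.test import TestCase
from mixer.backend.django import mixer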

for fixture in data:
    Reimbursement.objects.create(**fixture)

mixer.cycle(5).blend(Reimbursement)

@cuducos

cuducos May 26, 2017

Member

Actually the 5 reimbursements created before had controlled differences to make the tests more precise. Surely the previous code was poor and yours is way better. So… just to confirm: 5 here is quite arbitrary (before, we needed five due to the combination of differences to properly test API requests), right? I mean… when we actually need something specific, we now create it in the test itself, not during setUp, is that right? If so we might reduce from 5 to… whatever, 3. What do you think?
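In other words, the setUp would presumably become just:

mixer.cycle(3).blend(Reimbursement)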

)
url = '{}?{}'.format(self.url, urlencode(search_data))

@cuducos

cuducos May 26, 2017

Member

❤️

from jarbas.core.models import Reimbursement
from jarbas.core.tests import sample_reimbursement_data, suspicions

@cuducos

cuducos May 26, 2017

Member

If we're replacing this fixture, we can clean up jarbas/core/tests/__init__.py and use mixer in the other tests that depend on them… using mixer only in some places IMHO tends to lead to a bigger mess ; )

if not hasattr(self.reimbursement, result_attr):
    continue
expected_value = getattr(self.reimbursement, result_attr)
self.assertEqual(str(result_value), str(expected_value))

@cuducos

cuducos May 26, 2017

Member

This logic is pretty clever, but I think it makes it less precise what we're actually testing… IMHO it's a good idea to keep logic extremely simple in tests to make them obvious. That's why here I prefer the long dict: at a glance one can know what to expect as the result of test_contents, the proper types, etc. What do you think?
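For the record, the "long dict" style looks roughly like this (a sketch; field names and values are illustrative, not the actual model's):

# expected JSON spelled out explicitly, so any contract change fails loudly
expected = {
    'document_id': 42,
    'total_net_value': '4.56',
}
content = loads(resp.content.decode('utf-8'))
self.assertEqual(expected, content)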

@cacarrara

cacarrara May 27, 2017

Author Contributor

@cuducos, good points. I'll try not to be too long-winded.

I don't completely agree with that. The test here should check that each API response attribute equals the corresponding object attribute in the database, and that's what is being done. Of course:

response = load(api_response)
mydict = dict(one=1, two=2)
assert response == mydict

is at first glance much more obvious than:

response = load(api_response)
mydict = dict(one=1, two=2)
for key, value in mydict.items():
    assert response[key] == value

However, I still think the positive consequences outweigh the negative ones. Think about changing (for any reason) the response content structure, not the values: if we change an attribute name or even an attribute's data type, we'll have to adapt the tests only because we changed some implementation detail, not a business rule or an application requirement. On the other hand, if we change some business logic that makes an API response attribute differ from the object attribute value, the test will still fail.

I think the tests should be abstract and resilient enough to test requirements properly without being a pain for developers.

These are just my thoughts; let me know what you think.

ps.: about this topic, I would recommend this post: http://blog.cleancoder.com/uncle-bob/2017/05/05/TestDefinitions.html

@cuducos

cuducos May 27, 2017

Member

If we change an attribute name or even an attribute's data type, we'll have to adapt the tests only because we changed some implementation detail, not a business rule or an application requirement.

If we change an attribute name, we break the JSON API. So… yep, I think a failing test would be welcome in that scenario ; )

@cuducos

cuducos May 27, 2017

Member

In addition to my former comment:

The test you're suggesting is indeed interesting, but as a test_context (i.e. testing the data we're getting from the model and passing to the template). But with the class-based views from DRF this step is abstracted away (and consequently not tested).

What test_contents is actually testing is whether the JSON output is what our frontend expects. So even if we change some attribute name, we can do some tweaks to avoid breaking compatibility with the frontend (i.e. tests will pass because the JSON won't change) — or, surely better, have a failing test to remind us we've broken something and the JSON is no longer what we expected, so we can adapt the frontend and our expectations…

@cacarrara

cacarrara May 27, 2017

Author Contributor

I got your point and you're right. I was considering test_contents a unit test, but it's actually an integration test. I'm going to change it.

content = loads(resp.content.decode('utf-8'))
self.assertEqual(expected, content)

@patch('jarbas.core.models.head')
def test_fetch_non_existing_receipt(self, mocked_head):
    mocked_head.return_value.status_code = 404
    cache.clear()

@cuducos

cuducos May 26, 2017

Member

Any specific reason to remove that?

@cacarrara

cacarrara May 27, 2017

Author Contributor

I understood it was only preparing the test to fetch a non-existing receipt from a reimbursement without a receipt. So I did the proper preparation instead of clearing the cache, which seemed like a workaround. Since it's now easy and simple to create new objects, I did it.
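Something like this (field names approximate, from memory):

# instead of cache.clear(), create a reimbursement that never had
# its receipt URL fetched, so the test starts from a clean state
reimbursement = mixer.blend(Reimbursement, receipt_fetched=False, receipt_url=None)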

Was there any other reason to clear the cache?

@cuducos

cuducos May 27, 2017

Member

Hum… if for some reason the URL was accessed before, the cache might fool the test. As we have no control over test ordering, this might happen… but it's not a sure thing. Let's remove it, and if this becomes a problem we put it back ; )

    return list(map(lambda x: cast(x), parts)) if cast else parts
except ValueError:
    # this is far from good
    return list()

@cuducos

cuducos May 26, 2017

Member

I'm assuming the except ValueError might happen in cast, right? If so, I don't think we should silence this error. If the user is trying to convert something unconvertible, it's better to stop them than to lose data silently.
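In other words, the suggestion is roughly (a sketch, not the final code):

# drop the try/except: an unconvertible value raises ValueError loudly
# instead of the whole list being silently discarded
return list(map(cast, parts)) if cast else parts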

@cacarrara

Contributor Author

commented May 27, 2017

@cuducos do you think we should change all the tests within only one PR? I think we could do it one PR per test file. That would be easier to review and approve, minimizing the chance of errors.

Having only some tests temporarily running with mixer doesn't seem to raise any issue.

I can handle the test refactoring.

@cuducos
Member

left a comment

@cuducos do you think we should change all the tests within only one PR?

That possibility crossed my mind… but we can split that into different PRs, no problem.


@coveralls


commented May 28, 2017

Coverage Status

Changes Unknown when pulling 5268291 on cacarrara:use-mixer-for-fixtures into datasciencebr:master.

@cacarrara

Contributor Author

commented May 28, 2017

@cuducos and @jtemporal, the changes are done. Take a look 😜

@cuducos

Member

commented May 28, 2017

Wow! Many thanks, @cacarrara <3

@cuducos cuducos merged commit 92fb725 into okfn-brasil:master May 28, 2017

2 checks passed

continuous-integration/travis-ci/pr The Travis CI build passed
coverage/coveralls First build on master at 99.535%

@cuducos cuducos referenced this pull request May 28, 2017

Open

Use Mixer for tests #178

@cacarrara cacarrara deleted the cacarrara:use-mixer-for-fixtures branch May 29, 2017

@jtemporal jtemporal referenced this pull request Jun 2, 2017

Closed

Document requirements-dev #182
