Added DB backend for elasticsearch database#97
Conversation
force-pushed from 39ff222 to ab85a08
force-pushed from ab85a08 to 0ff06e7
force-pushed from 0ff06e7 to 9be42e4
| - pip install Django~=$DJANGO_VERSION |
| before_script: |
| - sleep 10 |
From the Travis documentation.
| services: |
| - elasticsearch |
| before_install: |
I need a specific version; the default is too old.
| return a + b |
| Task result will be automatically logged to the ``security.models.CeleryTaskLog``. |
This method was moved to django-celery-extension.
| with assert_raises(CommandError): |
| call_command('celery_health_check', max_created_at_diff=max_created_at_diff) |
| from .celery_log import CeleryLogTestCase |
All tests were rewritten.
| @@ -0,0 +1,69 @@ |
| from io import StringIO |
Several test helpers.
| from .models import CommandLog, CeleryTaskRunLog, CeleryTaskInvocationLog, InputRequestLog, OutputRequestLog |
| class store_elasticsearch_log(override_settings): |
A helper for project testing. We need a separate index for parallel test processes, so this context manager creates a new index for every log and removes it at the end of the test/block.
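A minimal sketch of the per-test-index idea, assuming the low-level elasticsearch-py client API (`indices.create`/`indices.delete`). The class name and index prefix are illustrative, and the real helper also subclasses `override_settings`; that part is omitted here:

```python
import uuid
from contextlib import ContextDecorator


class StoreElasticsearchLogSketch(ContextDecorator):
    """Hypothetical helper: give each test its own index so parallel
    test processes never collide, and clean the index up afterwards."""

    def __init__(self, client):
        self.client = client  # an elasticsearch.Elasticsearch instance
        self.index_name = None

    def __enter__(self):
        # A unique index per test/block keeps parallel runs isolated.
        self.index_name = 'test-security-log-{}'.format(uuid.uuid4().hex)
        self.client.indices.create(index=self.index_name)
        return self

    def __exit__(self, *exc):
        # Always drop the index, even if the test failed.
        self.client.indices.delete(index=self.index_name)
        return False
```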
| def backend_receiver(signal): |
| def _decorator(func): |
| def _wrapper(*args, **kwargs): |
| if settings.BACKENDS is None or backend_name in settings.BACKENDS: |
We can turn off a backend for storing, so you can read from the log but not write to it (useful for tests).
| ) |
| input_request_logger = getattr(request, 'input_request_logger', None) |
| if input_request_logger: |
| input_request_logger.update_extra_data({'debug_toolbar': toolbar.render_toolbar()}) |
The toolbar may still be broken; I will check it in the next pull request.
| from security.config import settings |
| class SecurityLogger(ContextDecorator, local): |
Every logger extends this class. Active loggers form a tree (the loggers property), so you know the parent logger (for example, an output request inside an input request).
| self.id = id or (uuid4() if self.name else None) |
| self.parent = SecurityLogger.loggers[-1] if SecurityLogger.loggers else None |
| self.related_objects = set(related_objects) if related_objects else set() |
| self.slug = slug |
related_objects, slug and extra_data are inherited from the parent logger.
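The logger stack and parent inheritance described above can be sketched as follows. This is illustrative only: the real class also mixes in `threading.local` so the stack is per-thread, which is dropped here for brevity:

```python
import uuid
from contextlib import ContextDecorator


class SecurityLoggerSketch(ContextDecorator):
    """Sketch: active loggers form a stack, so each new logger knows its
    parent and extends its related_objects, slug and extra_data."""

    loggers = []  # stack of currently active loggers

    def __init__(self, name=None, slug=None, related_objects=None, extra_data=None):
        self.name = name
        self.id = uuid.uuid4() if name else None
        self.parent = SecurityLoggerSketch.loggers[-1] if SecurityLoggerSketch.loggers else None
        self.related_objects = set(related_objects or ())
        self.slug = slug
        self.extra_data = dict(extra_data or {})
        if self.parent:
            # Values are extended from the parent logger.
            self.related_objects |= self.parent.related_objects
            self.slug = self.slug or self.parent.slug
            merged = dict(self.parent.extra_data)
            merged.update(self.extra_data)
            self.extra_data = merged

    def __enter__(self):
        SecurityLoggerSketch.loggers.append(self)
        return self

    def __exit__(self, *exc):
        SecurityLoggerSketch.loggers.pop()
        return False
```

For example, an output-request logger opened inside an input-request logger sees the input request as its parent and inherits its slug.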
b8a7e17 to
acf5680
Compare
docs/installation.rst
| .. attribute:: SECURITY_BACKENDS |
| With this setting you can select which backends will be used to store logs. Default value is ``None`` which means all installed logs are used. |
Maybe you meant all installed backends are used?
docs/installation.rst
| .. attribute:: SECURITY_ELASTICSEARCH_DATABASE |
| Setting can be used to set ElasticSearch database configuration. |
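For context, a hypothetical Django settings fragment using these attributes might look like this. Only the two setting names come from the diff above; the values and the inner keys are assumptions, not the documented format:

```python
# Illustrative values only; consult the package docs for the real shape.
SECURITY_BACKENDS = ['sql', 'elasticsearch']  # None would mean all installed backends
SECURITY_ELASTICSEARCH_DATABASE = {
    'host': 'localhost',  # hypothetical key
}
```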
docs/installation.rst
| .. attribute:: SECURITY_ELASTICSEARCH_AUTO_REFRESH |
| Every write to the elasticsearch database will automatically call auto refresh. |
docs/installation.rst
| Every write to the elasticsearch database will automatically call auto refresh. |
| .. attribute:: SECURITY_LOG_STING_IO_FLUSH_TIMEOUT |
Do you really mean STING and not STRING?
Yes, string. Thanks.
| class Command(BaseCommand): |
| def handle(self, **options): |
| requests.post('http://test.cz/test') |
You shouldn't try to access live servers in tests, no matter what. Why not http://localhost?
But the tests should mock requests.
Yeah, but that cannot be guaranteed, and in case of an error you might be hitting a live server.
Sure, I will change it.
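A sketch of the safety net being discussed: patch out the HTTP call in the test so the command can never touch a live server even if the production code points at a real URL. A thin `HttpClient` wrapper stands in here for the `requests` library so the example is self-contained:

```python
from unittest import mock


class HttpClient:
    """Hypothetical wrapper; in the real code this would be requests."""

    def post(self, url):
        raise RuntimeError('network access attempted in a test!')


client = HttpClient()


def run_command():
    # Stand-in for the management command body under review.
    return client.post('http://localhost/test')


def test_command_does_not_hit_network():
    # The mock intercepts the call, so no socket is ever opened.
    with mock.patch.object(client, 'post') as mocked_post:
        mocked_post.return_value = mock.Mock(status_code=200)
        response = run_command()
    mocked_post.assert_called_once_with('http://localhost/test')
    return response.status_code
```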
| databases = ['default', 'security'] |
| @data_provider |
| def create_user(self, username='test', email='test@test.cz'): |
| expected_run_succeeded_data) as run_succeeded_receiver, \ |
| set_signal_receiver(celery_task_run_output_updated) as run_output_updated_receiver, \ |
| set_signal_receiver(celery_task_run_failed) as run_failed_receiver, \ |
| set_signal_receiver(celery_task_run_retried) as run_retried_receiver: |
This is sort of clumsy. Wouldn't it be better to create a helper class that automatically registers for all these signals and keeps internal counters that increment when a signal fires? I believe this repeated initialization (signal registering) is not necessary. Something like:
with TestSignalReceiver() as receiver:
... # do your stuff
assert_equal(receiver.calls['celery_task_run_succeeded'], 1)
or
assert_equal(receiver.calls, {
'invocation_started_receiver' : 1,
'run_output_updated_receiver' : 6,
...
})
The second way of asserting is even better, because it will fail if any unexpected signal is fired, so you don't have to explicitly assert signals that are not supposed to be sent. (the dict will only contain signals that were fired at least once)
Yes, you are right. I will try to use the decorator I wrote for project log testing, and I should rewrite set_signal_receiver into that decorator. Thanks.
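The reviewer's `TestSignalReceiver` idea could be fleshed out like this. A tiny stand-in `Signal` class replaces `django.dispatch.Signal` so the sketch is self-contained; the `calls` dict only ever contains signals that fired at least once, as suggested:

```python
from collections import Counter


class Signal:
    """Tiny stand-in for django.dispatch.Signal."""

    def __init__(self, name):
        self.name = name
        self._receivers = []

    def connect(self, receiver):
        self._receivers.append(receiver)

    def disconnect(self, receiver):
        self._receivers.remove(receiver)

    def send(self, **kwargs):
        for receiver in self._receivers:
            receiver(signal=self, **kwargs)


class TestSignalReceiver:
    """Registers for every given signal and counts how often each fires."""

    def __init__(self, *signals):
        self.signals = signals
        self.calls = Counter()  # holds only signals fired at least once

    def _receiver(self, signal, **kwargs):
        self.calls[signal.name] += 1

    def __enter__(self):
        for signal in self.signals:
            signal.connect(self._receiver)
        return self

    def __exit__(self, *exc):
        for signal in self.signals:
            signal.disconnect(self._receiver)
        return False
```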
| from .models import InputRequestLog |
| class PerRequestThrottlingValidator(ThrottlingValidator): |
For these throttling classes, I think they should not be imported directly; there should be some factory function/class that returns the right validator according to the backend in use. From the perspective of project code, you shouldn't need to worry which log backend is configured in settings. Also, how do you determine which validator to use when multiple backends are in use (for example Elasticsearch and SQL)?
Yes, you are right. This is the next step; I was thinking about the same thing. But I cannot do the whole change in one pull request, so I have follow-up tasks that will solve it. But yes, this is a very good point.
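The suggested factory could look something like this registry sketch. All names are illustrative, not the library's API; the point is that project code asks only for "a validator" and the factory resolves the backend-specific class:

```python
# Registry mapping backend names to their validator classes.
VALIDATORS = {}


def register_validator(backend_name):
    def _decorator(cls):
        VALIDATORS[backend_name] = cls
        return cls
    return _decorator


@register_validator('sql')
class SQLPerRequestThrottlingValidator:
    pass


@register_validator('elasticsearch')
class ElasticsearchPerRequestThrottlingValidator:
    pass


def get_throttling_validator(backend_name):
    # Callers never import a backend-specific class directly.
    return VALIDATORS[backend_name]()
```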
| & Q('range', start={'gte': timezone.now() - timedelta(seconds=self.timeframe)}) |
| & Q('slug', slug=slug) |
| ).count() |
| return count_same_requests < self.throttle_at |
I think this decision logic should be implemented in the parent. The child classes should only implement counting of the requests, because that is the only thing that differs across the backends.
Yes, you are right. I will solve this in the next PR. I want to add some API functions to the backends that return the number of requests for a given input (a universal filter API), and then there will be only one throttling validator. This is a temporary solution.
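The refactor the review suggests is a template method: the parent owns the throttling decision and each backend only implements counting. A minimal sketch with a toy in-memory backend (class names are illustrative):

```python
class ThrottlingValidatorSketch:
    """Parent owns the decision; children only count requests."""

    def __init__(self, timeframe, throttle_at):
        self.timeframe = timeframe
        self.throttle_at = throttle_at

    def validate(self, request):
        # Single place for the comparison, identical for every backend.
        return self._count_same_requests(request) < self.throttle_at

    def _count_same_requests(self, request):
        raise NotImplementedError


class InMemoryValidator(ThrottlingValidatorSketch):
    """Toy backend: counts matching requests from a shared list."""

    log = []

    def _count_same_requests(self, request):
        return sum(1 for logged in self.log if logged == request)
```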
| start__gte=timezone.now() - timedelta(seconds=self.timeframe), |
| slug=self.slug |
| ).count() |
| return count_same_requests <= self.throttle_at |
In the Elasticsearch validators you have just the < operator here, why? That's why I suggested unifying this in the parent class.
It is a problem with asynchronous saving to Elasticsearch. In the RDS, the count will contain the currently logged request; in the Elasticsearch DB, the current request will not be returned yet.
force-pushed from 6a98de3 to ed9addd
force-pushed from ed9addd to e3ee0e5
No description provided.