diff --git a/_posts/en/2016-07-13-how-to-be-best-friend-with-the-http-cache.md b/_posts/en/2016-07-13-how-to-be-best-friend-with-the-http-cache.md index be43b465f..3ee8d408e 100644 --- a/_posts/en/2016-07-13-how-to-be-best-friend-with-the-http-cache.md +++ b/_posts/en/2016-07-13-how-to-be-best-friend-with-the-http-cache.md @@ -11,7 +11,7 @@ slug: how-to-be-best-friend-with-the-http-cache oldCategoriesAndTags: [] permalink: /en/how-to-be-best-friend-with-the-http-cache/ --- -I am currently lead developer for a French press website with very high traffic. Along my previous work experiences, I was able to perform development on several other high volumetry sites as well. +I am currently the lead developer for a high-traffic French press website. In my previous positions, I also worked on several other high-volume sites.

When you need to contain traffic peaks of 100,000 to 300,000 short-term visitors with only a dozen or so servers, the cache ceases to be optional: it becomes an absolute necessity! Your application may have the best performance possible, but you will always be limited by your physical machines (even if this is no longer true in the cloud, given an unlimited budget). Consequently, you must make friends with the cache.

@@ -27,23 +27,23 @@ You must have often heard, or said yourself, "empty your cache", when testing th

Firefox

-Ctrl + F5
+Ctrl + F5

Chrome

-Ctrl + F5 ou Shift + F5 ou Ctrl + Shift + 
+Ctrl + F5 or Shift + F5 or Ctrl + Shift + N

Safari

-Ctrl + Alt + E
+Ctrl + Alt + E

Internet Explorer

-Ctrl + F5
+Ctrl + F5

Opera

-Ctrl + F12
+Ctrl + F12

@@ -122,7 +122,7 @@ This also allows you to manage response 304 which allows the server to send only

There is another header based on the same principle: the Etag header, which is set to a string generated by the server that changes according to the content of the page.

```sh
-Etag: home560
+Etag: home560
```

Be careful: the calculation of the ETag must be very carefully considered, because it governs the cache time of the page, so it must be computed from the dynamic data of the page.
diff --git a/_posts/en/2016-07-19-behat-structure-functional-tests.md b/_posts/en/2016-07-19-behat-structure-functional-tests.md index 5edacbf5c..b40ec2e45 100644 --- a/_posts/en/2016-07-19-behat-structure-functional-tests.md +++ b/_posts/en/2016-07-19-behat-structure-functional-tests.md @@ -21,7 +21,7 @@ In order to ensure that your application is running well, it's important to writ

Behat is the most used tool with Symfony to handle your functional tests, and that's great because it's a really complete suite.

-You should nevertheless know how to use it wisely in order to cover useful and complete test cases and that's the goal of this blog post.
+You should nevertheless know how to use it wisely in order to cover useful and complete test cases, and that's the goal of this blog post.

# Introduction

@@ -33,14 +33,14 @@ However, it is important to write the following test types to cover the function

* `Integration tests`: The goal of these tests is to ensure that the source code (already unit-tested) that makes the application run behaves as it should when all components are linked together. The idea is to develop and run both integration tests and interface tests with Behat.

-Before we can go, please note that we will use a `Selenium` server which will receive orders by `Mink` (a Behat extension) and will pilot our browser (Chrome in our configuration). 
+Before we go any further, please note that we will use a `Selenium` server, which will receive orders from `Mink` (a Behat extension) and will pilot our browser (Chrome in our configuration). To be clear on the architecture we will use, here is a diagram that sums up the role of each element:

!["Behat architecture schema"](/_assets/posts/2016-07-19-behat-structure-functional-tests/behat_en.jpg)

## Behat set up

-First step is to install Behat and its extensions as dependencies in our `composer.json` file: 
+The first step is to install Behat and its extensions as dependencies in our `composer.json` file:

```json
"require-dev": {
@@ -53,7 +53,7 @@ First step is to install Behat and its extensions as dependencies in our `compos
}
```

-In order to make your future contexts autoloaded, you also have to add this little `PSR-4` section: 
+In order to have your future contexts autoloaded, you also have to add this little `PSR-4` section:

```json
"autoload-dev": {
@@ -63,7 +63,7 @@ In order to make your future contexts autoloaded, you also have to add this litt
    }
}
```

-Now, let's create our **behat.yml** file in our project root directory in order to define our tests execution. 
+Now, let's create our **behat.yml** file in our project root directory in order to define our test execution.

Here is the configuration file we will start with:

```yaml
default:
@@ -91,20 +91,20 @@ default:
    output_path: %paths.base%/web/reports/behat
```

-We will talk of all of these sections in their defined order so let's start with the **suites** section which is empty at this time but we will implement it later when we will have some contexts to add into it. 
+We will go through these sections in order, starting with the **suites** section, which is empty for now but which we will fill in later, once we have some contexts to add to it.

Then, we load some Behat extensions:

* `Behat\Symfony2Extension` will allow us to inject Symfony services into our contexts (useful mostly for integration tests),
-* `Behat\MinkExtension` will allow us to pilot Selenium (drive itself the Chrome browser) so we fill in all the necessary information like the hostname, the Selenium server port number and the base URL we will use for testing, 
-* `emuse\BehatHTMLFormatter\BehatHTMLFormatterExtension` will generate a HTML report during tests execution (which is great to show to our customer for instance). 
+* `Behat\MinkExtension` will allow us to pilot Selenium (which itself drives the Chrome browser), so we fill in all the necessary information such as the hostname, the Selenium server port number and the base URL we will use for testing,
+* `emuse\BehatHTMLFormatter\BehatHTMLFormatterExtension` will generate an HTML report during test execution (which is great to show to our customer, for instance).

-Finally, in the `formatters` section we keep the `pretty` formatter in order to keep an output in our terminal and the HTML reports will be generated at the same time in the `web/reports/behat` directory in order to make them available over HTTP (it should not be a problem as you should not execute functional tests in production, be careful to restrict access in this case). 
-Now that Behat is ready and configured we will prepare our functional tests that we will split into two distinct Behat suites: `integration` and `interface`. 
+Finally, in the `formatters` section we keep the `pretty` formatter in order to keep an output in our terminal, and the HTML reports will be generated at the same time in the `web/reports/behat` directory in order to make them available over HTTP (this should not be a problem since you should not execute functional tests in production; be careful to restrict access if you do).
+Now that Behat is ready and configured, we will prepare our functional tests, which we will split into two distinct Behat suites: `integration` and `interface`.

# Writing functional tests (features)

In our example, we will write tests in order to ensure that a new user can register over a registration page.

-We will have to start by writing our tests scenarios (in a `.feature` file) that we will put into a `features/` directory located at the project root directory. 
+We will have to start by writing our test scenarios (in a `.feature` file) that we will put into a `features/` directory located at the project root.

So for instance, we will have the following scenario:

@@ -278,7 +278,7 @@ The only difference here is that in this context we ask Mink to ask Selenium

# Context definition

-One more thing now, we have to add previously created contexts in our `suites` section in the `behat.yml` configuration file. 
+One more thing now: we have to add the previously created contexts to the `suites` section of the `behat.yml` configuration file.

{% raw %}
```yaml
suites:
@@ -298,19 +298,19 @@ suites:
```
{% endraw %}

-It is important to see here that we can clearly split these kind of tests into two distinct parts `integration` and `interface`: each one will be executed with its own contexts. 
+It is important to see here that we can clearly split these kinds of tests into two distinct parts, `integration` and `interface`: each one will be executed with its own contexts.

-Also, as we have loaded the Symfony2 extension during the Behat set up, we have the possibility to inject Symfony services in our contexts and that case occurs here with the `acme.registration.registerer` service. 
+Also, as we have loaded the Symfony2 extension during the Behat set up, we have the possibility to inject Symfony services into our contexts, and that is the case here with the `acme.registration.registerer` service.

# Tests execution

In order to run all tests, simply execute in the project root directory: `bin/behat -c behat.yml`. If you want to run the integration tests only: `bin/behat -c behat.yml --suite=integration`.

-HTML report will be generated under the `web/reports/behat/` as specified in the configuration that will allow you to have a quick overview of failed tests which is cool when you have a lot of tests. 
+The HTML report will be generated under `web/reports/behat/` as specified in the configuration, which will allow you to have a quick overview of failed tests; this is handy when you have a lot of tests.

# Link multiple contexts together

-At last, sometime you could need information from another context. For instance, imagine that you have a second step just after the register step. You will have to create two new `IntegrationProfileContext` and `MinkProfileContext` contexts. 
+Finally, sometimes you may need information from another context. For instance, imagine that you have a second step just after the register step. You will have to create two new contexts, `IntegrationProfileContext` and `MinkProfileContext`. We will only talk about the integration context in the following, to simplify understanding. 
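As a rough idea of what this linking relies on before the patched snippet below, here is a minimal sketch using Behat 3's `@BeforeScenario` hook (the `IntegrationRegisterContext` class name is an assumption based on the contexts discussed above):

```php
<?php

use Behat\Behat\Context\Context;
use Behat\Behat\Hook\Scope\BeforeScenarioScope;

class IntegrationProfileContext implements Context
{
    /** @var Context */
    private $registerContext;

    /**
     * @BeforeScenario
     */
    public function gatherContexts(BeforeScenarioScope $scope)
    {
        // Fetch the already-instantiated register context from the suite environment
        $environment = $scope->getEnvironment();

        $this->registerContext = $environment->getContext('IntegrationRegisterContext');
    }
}
```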
@@ -352,7 +352,7 @@ class IntegrationProfileContext implements Context
```
{% endraw %}

-You now have an accessible property **$registerContext** and can access informations from this context. 
+You now have an accessible property **$registerContext** and can access information from this context.

# Conclusion

diff --git a/_posts/en/2016-09-08-php-7-1-dummies-candidates.md b/_posts/en/2016-09-08-php-7-1-dummies-candidates.md index fd28d7875..ffc756e92 100644 --- a/_posts/en/2016-09-08-php-7-1-dummies-candidates.md +++ b/_posts/en/2016-09-08-php-7-1-dummies-candidates.md @@ -18,7 +18,7 @@ permalink: /en/php-7-1-dummies-candidates/ --- Some time ago, well almost one year ago (time just flies!), I wrote about PHP 7.0. Ten months later, things are moving again: PHP 7.1 is in RC1 stage.

-This article doesn't pretend to be a list of all the modifications, but points out the new interesting features (you'll find at the bottom a link to all PHP 7.1 RFC's which have been used to write this article). Moreover, you need to know and understand all features introduced in PHP 7.0. 
+This article doesn't pretend to be a list of all the modifications, but points out the interesting new features (you'll find at the bottom a link to all the PHP 7.1 RFCs which have been used to write this article). Moreover, you need to know and understand the features introduced in PHP 7.0.

# RC1?

@@ -30,9 +30,9 @@ As far as we know, PHP 7.1 should be released anytime soon, at least before the

## Nullable types

-In my opinion, it's the most interesting feature of PHP 7.1\. As you might know (I hope so!), PHP 7.0 allowed to type hint scalar in parameters of functions, but also type hint returns (both classes and scalars). However, there was something missing: the ability to pass or return null when using type hinting. 
+In my opinion, it's the most interesting feature of PHP 7.1. As you might know (I hope so!), PHP 7.0 made it possible to type hint scalars in function parameters, but also to type hint returns (both classes and scalars). However, there was something missing: the ability to pass or return null when using type hinting.

-Since an image (ok it's a video) is worth a thousand words, you can see above the behavior of PHP 7.0 when giving null to type hinted methods or functions (it's also the case with PHP5): 
+Since an image (ok, it's a video) is worth a thousand words, you can see below the behavior of PHP 7.0 when giving null to type hinted methods or functions (it's also the case with PHP5):

[![](https://asciinema.org/a/84925.png)](https://asciinema.org/a/84925){:rel="nofollow"}

@@ -48,7 +48,7 @@ As you can see, we can now, without using default parameters (such as = null), g

## Multi-Catch

-It has long been possible to do multi-catching with the use of multiple catch blocks, one by one. Yet, it can be redundant, especially when we want to handle the same way two exceptions which have nothing in common. Here is how you should use it: 
+It has long been possible to catch multiple exceptions with the use of multiple catch blocks, one by one. Yet, it can be redundant, especially when we want to handle two exceptions which have nothing in common in the same way. Here is how you should use it:

[![](https://asciinema.org/a/84954.png)](https://asciinema.org/a/84954){:rel="nofollow"}

@@ -56,17 +56,17 @@ As you can see, I only used two exceptions, but I could have used much more if n

## Void type

-Another new type has been introduced, the void type. Here is its behavior: 
+Another new type has been introduced, the void type. 
Here is its behavior:

[![](https://asciinema.org/a/84952.png)](https://asciinema.org/a/84952){:rel="nofollow"}

-As shown in this video, it's okay to use a return with nothing behind, but it's strictly forbidden to return null. From this previous test, I asked myself a weird and useless question: is it possible to prefix our void type with our nullable operator? The video proves that it luckily can't be. 
+As shown in this video, it's okay to use a bare return with nothing behind it, but it's strictly forbidden to return null. From this test, I asked myself a weird and useless question: is it possible to prefix our void type with our nullable operator? The video proves that luckily it can't be done.

At first sight, the use of void may seem useless (mainly in this precise example), but it is not. Used in an interface, it ensures that implementations don't deflect too much from the original purpose of the interface.

## Iterable type

-Following the same pattern of void, an iterable type has also been introduced. Again, its use might not be obvious at first sight because we have the native Traversable interface. Since we move forward type hinting (and embrace it right ? RIGHT ?!), we had no solution to represent both scalar arrays and traversable objects. It was inconsistent since then, because we could pass arrays or traversable objects the same way before type hinting. 
+Following the same pattern as void, an iterable type has also been introduced. Again, its use might not be obvious at first sight because we have the native Traversable interface. But as we move towards type hinting (and embrace it, right? RIGHT?!), we had no way to represent both scalar arrays and traversable objects. This was inconsistent, because before type hinting we could pass arrays or traversable objects in exactly the same way.

It's usable in type hinting of parameters and returns.

@@ -76,7 +76,7 @@ Something I found missing the whole time, and which is now solved. Class constan

You might want to know that if you don't indicate visibility, it will be public by default, to stay compliant with the behavior of older PHP versions.

-##  Miscellaneous 
+## Miscellaneous

We can also randomly add the following to the list of interesting features:

@@ -90,7 +90,7 @@ We can also add randomly in the list of interesting features the following:

First, you should know that you shouldn't use it in production environments! It's an RC for duck's sake! And to answer:

-* compile it from source, this [guide](http://php.net/manual/fr/install.windows.building.php){:rel="nofollow"} explains it very clearly; 
+* compile it from source: this [guide](http://php.net/manual/fr/install.windows.building.php){:rel="nofollow"} explains it very clearly;
* use phpenv, which is basically compiling it from source in an automated way.

I recommend using the second solution on dev environments, since professionally it's not rare to handle projects that don't use the same PHP version. PHPEnv allows you to run multiple versions of PHP in CLI, based on the project. I'll certainly do a post to explain how to plug Nginx, PHP-FPM and PHPEnv together to have multiple versions of PHP in an HTTP way (on dev env, right? RIGHT?!).

@@ -99,7 +99,7 @@ I recommend using the second solution on dev environments since it's not rare pr

This version, despite being minor, comes with a lot of changes.

-I'm aware that PHP is not the perfect language, and has many missing features. But we can hope that, one day, we will have native annotations or enumerations for example. 
The community is constantly moving, and tries really hard to improve PHP and its reputation. 
+I'm aware that PHP is not the perfect language, and that it is missing many features. But we can hope that, one day, we will have native annotations or enumerations, for example. The community is constantly moving, and tries really hard to improve PHP and its reputation.

If you want to know more about PHP 7.1 features, I invite you to read the [RFC's](https://wiki.php.net/rfc#php_71){:rel="nofollow"}.

diff --git a/_posts/en/2016-09-16-pattern-specification.md b/_posts/en/2016-09-16-pattern-specification.md index 6161939e8..63a751a6e 100644 --- a/_posts/en/2016-09-16-pattern-specification.md +++ b/_posts/en/2016-09-16-pattern-specification.md @@ -13,18 +13,18 @@ oldCategoriesAndTags: permalink: /en/pattern-specification/ ---

-Through my different professional experiences, I had to set a lot of business rules in rich web apps. One day, I stumbled upon a different way to deal with those: using the specification pattern. This method has proven to be structuring and deserves some attention if you do not know what it is. 
+Through my different professional experiences, I have had to implement a lot of business rules in rich web apps. One day, I stumbled upon a different way to deal with those: using the specification pattern. This method has proven to bring structure, and it deserves some attention if you do not know what it is.

### Let's dig in

Imagine a simple banking application for instance. This app only has clients and bank accounts. A client can have one or multiple accounts, and your job is to create a very simple system of wire transfer between accounts of the same client, with these business rules:

-* A client cannot transfer money if his account has a balance equals to 0 or less. 
+* A client cannot transfer money if his account has a balance equal to 0 or less.
* The client associated with the debiting account must be active.

-You can clearly see the condition which would prevent a transfer from happening. 
+You can clearly see the conditions which would prevent a transfer from happening.

-In a simple implementation, you would write it this way: 
+In a simple implementation, you would write it this way:

```php

@@ -130,7 +130,7 @@ The script installation is done in the HTML code, in files public/home.html and

```

-The initialization goes to public/register.js. 
+The initialization goes into public/register.js.

```javascript
// Initialize Firebase
@@ -206,11 +206,11 @@ Before starting the server, you have to open permissions to Firebase in order fo

![](/_assets/posts/2016-11-21-push-notification-website/rules.png)

-If you restart the server, you will see a token stored in the DB in the "Database" tab of Firebase. 
+If you restart the server, you will see a token stored in the database, in the "Database" tab of Firebase.

![](/_assets/posts/2016-11-21-push-notification-website/capture-decran-2016-10-26-a-16.24.58.png)

-Now that the tokens are stored in the database, we are going to prepare a message that will appear when a push notification occurs. Let's add the following code to the file public/sw.js: 
+Now that the tokens are stored in the database, we are going to prepare the message that will appear when a push notification occurs. Let's add the following code to the file public/sw.js:

```javascript
console.log('Started', self);
@@ -238,7 +238,7 @@ self.addEventListener('push', function(event) {
});
```

-It's almost ready! 
We are going to create a "/sender" url that will allow us to send notifications to all the tokens that we have in the database. To do so, we are going to use the request and firebase modules (npm version). Here is the new package.json: +It's almost ready! We are going to create a "/sender" url that will allow us to send notifications to all the tokens that we have in the database. To do so, we are going to use the request and firebase modules (npm version). Here is the new package.json: ```json { @@ -262,11 +262,11 @@ It's almost ready! We are going to create a "/sender" url that will allow us t } ``` -In the app.js file, we initialize Firebase. You are going to need a server key file. Click the wheel in Firebase, and then "Permissions". You are now taken to another console. +In the app.js file, we initialize Firebase. You are going to need a server key file. Click the wheel in Firebase, and then "Permissions". You are now taken to another console. ![](/_assets/posts/2016-11-21-push-notification-website/permissions.png) -In "Service accounts", create a new account. +In "Service accounts", create a new account. ![](/_assets/posts/2016-11-21-push-notification-website/account.png) @@ -274,7 +274,7 @@ A json file will be downloaded, you need to add it to your project folder. ![JsonFile - Racine](/_assets/posts/2016-11-21-push-notification-website/capture-decran-2016-10-26-a-17.22.04.png) -In the app.js file, we are going to add the route /sender that will send a request of a push notification with all the tokens. +In the app.js file, we are going to add the route /sender that will send a request of a push notification with all the tokens. ```javascript var path = require('path'); diff --git a/_posts/en/2016-12-05-create-atom-package.md b/_posts/en/2016-12-05-create-atom-package.md index afa00b23e..4d1774216 100644 --- a/_posts/en/2016-12-05-create-atom-package.md +++ b/_posts/en/2016-12-05-create-atom-package.md @@ -17,23 +17,23 @@ oldCategoriesAndTags: - package permalink: /en/create-atom-package/ --- -# Introduction to Atom -[Atom](https://atom.io) is an open-source text editor (mostly used by developers){:rel="nofollow"} which is multi-platform and developed by GitHub company. It is based on Electron, the Github-developed framework, which allows developers to build native desktop applications for any operating systems by writing Javascript code. +# Introduction to Atom +[Atom](https://atom.io) is an open-source text editor (mostly used by developers){:rel="nofollow"} which is multi-platform and developed by GitHub company. It is based on Electron, the Github-developed framework, which allows developers to build native desktop applications for any operating systems by writing Javascript code. -The main interesting feature of Atom is that it also has a great package management tool and packages are also written in Javascript so it's quite easy for anyone to create one. This article aims to talk about it. -Finally, its community is also active as it already has a lot of available packages: `5 285` at this time. +The main interesting feature of Atom is that it also has a great package management tool and packages are also written in Javascript so it's quite easy for anyone to create one. This article aims to talk about it. +Finally, its community is also active as it already has a lot of available packages: `5 285` at this time. You can browse all packages by going to the following address: [https://atom.io/packages](https://atom.io/packages){:rel="nofollow"}. 
-However, if you cannot find a package that fits your needs you can start creating your own and we will see how simple it is. +However, if you cannot find a package that fits your needs you can start creating your own and we will see how simple it is. # Generate your first package -In order to create your own package, don't worry, you will not start from scratch. Indeed, we will use the `Package Generator`  command which is brought to us by Atom core. -To do that, you will just have to navigate into  `Packages` -> `Package Generator` -> `Generate Atom Package`. +In order to create your own package, don't worry, you will not start from scratch. Indeed, we will use the `Package Generator` command which is brought to us by Atom core. +To do that, you will just have to navigate into `Packages` -> `Package Generator` -> `Generate Atom Package`. -In order to generate your package, you can choose the language between `Javascript`  and `Coffeescript` . This article will use Javascript. +In order to generate your package, you can choose the language between `Javascript` and `Coffeescript` . This article will use Javascript. -When the command is executed, Atom will open a new window into your package project, by default named `my-package`. +When the command is executed, Atom will open a new window into your package project, by default named `my-package`. # Package structure @@ -58,11 +58,11 @@ We will now see in details what's inside our package project directory: └── my-package.less ``` -The first element to add to your package is the `package.json`  file which has to contain all information of your package such as its name, version, license type, keywords that will enable you to find your package into Atom registry and also your package dependancies. +The first element to add to your package is the `package.json` file which has to contain all information of your package such as its name, version, license type, keywords that will enable you to find your package into Atom registry and also your package dependancies. -Please also note that there is a section called `activationCommands`  which allows to define the executed command when your package is loaded. +Please also note that there is a section called `activationCommands` which allows to define the executed command when your package is loaded. -Next, we have the `keymaps/my-package.json`  file which allows you to define shortcuts into your package very easily. Here is the default example: +Next, we have the `keymaps/my-package.json` file which allows you to define shortcuts into your package very easily. Here is the default example: ```json { @@ -72,10 +72,10 @@ Next, we have the `keymaps/my-package.json`  file which allows you to define s } ``` -Next, we will go into your package entry point. It is located into `lib/my-package.js` file. -This file exports a default object which contains a `subscriptions`  property and also `activate()`  and `deactivate()`  methods. +Next, we will go into your package entry point. It is located into `lib/my-package.js` file. +This file exports a default object which contains a `subscriptions` property and also `activate()` and `deactivate()` methods. 
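To picture the whole file at once, here is a rough sketch of that entry point (simplified from the generated boilerplate, with the command registration details left out):

```js
'use babel';

import { CompositeDisposable } from 'atom';

export default {
  // Holds every disposable (commands, listeners) we need to clean up later
  subscriptions: null,

  activate(state) {
    // Commands offered by the package get registered here (see below)
    this.subscriptions = new CompositeDisposable();
  },

  deactivate() {
    // Dispose of all registered commands and listeners at once
    this.subscriptions.dispose();
  },
};
```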
-During package activation (inside `activate()` method), we will register a CompositeDisposable type object inside our `subscriptions`  property and that will allow us to add and maybe later remove some commands offered by our package: +During package activation (inside `activate()` method), we will register a CompositeDisposable type object inside our `subscriptions` property and that will allow us to add and maybe later remove some commands offered by our package: ```js activate(state) { @@ -87,14 +87,14 @@ activate(state) { ``` Now that our command is registered, we can test it by simply typing the following words, into the Atom command palette: `My Package: Toggle`. -This command will execute the code contained in the `toggle()`  method of the class and will display a little modal at the top of the window. +This command will execute the code contained in the `toggle()` method of the class and will display a little modal at the top of the window. You can add as many commands as you want and I really encourage you to decouple your code. -# Add settings for your package +# Add settings for your package -The [Config](https://atom.io/docs/api/latest/Config){:rel="nofollow"} component allows your package to have some settings. +The [Config](https://atom.io/docs/api/latest/Config){:rel="nofollow"} component allows your package to have some settings. -To add a new setting, you just have to define a `config`  property into your package's class which is an object containing each settings definition, as follows: +To add a new setting, you just have to define a `config` property into your package's class which is an object containing each settings definition, as follows: ```json config: { @@ -106,7 +106,7 @@ config: { } ``` -Atom settings allow multiple setting types (`boolean` , `color` , `integer` , `string` , ...) so it can fit your needs on setting values by your users. +Atom settings allow multiple setting types (`boolean` , `color` , `integer` , `string` , ...) so it can fit your needs on setting values by your users. Once it is added, if you reload your package, you will see your package settings appearing into Atom settings: @@ -120,13 +120,13 @@ let gitlabUrl = atom.config.get('gitlabUrl'); # Components overview -So you are now ready to develop your package. We will have a quick overview of some interesting components that Atom brings to you and allows you to use in your package. +So you are now ready to develop your package. We will have a quick overview of some interesting components that Atom brings to you and allows you to use in your package. -## TextEditor: Interact with the text editor +## TextEditor: Interact with the text editor -With the `TextEditor` component, you will be able to insert some text into user's text editor, to save the current file, to go back and forth the history, to move the cursor into editor, to copy/paste into clipboard, to play with line indentation, to scroll, and to do so much more... +With the `TextEditor` component, you will be able to insert some text into user's text editor, to save the current file, to go back and forth the history, to move the cursor into editor, to copy/paste into clipboard, to play with line indentation, to scroll, and to do so much more... 
-Here are some examples to insert text in a specific position and to save the file automatically: +Here are some examples to insert text in a specific position and to save the file automatically: ```js editor.setCursorBufferPosition([row, column]); @@ -134,11 +134,11 @@ editor.insertText('foo'); editor.save(); ``` -## ViewRegistry & View: Create and display your own window +## ViewRegistry & View: Create and display your own window -These components allow you to create views (modals / windows) inside Atom and display them. +These components allow you to create views (modals / windows) inside Atom and display them. -You have an example of a modal `View` into the default package: +You have an example of a modal `View` into the default package: ```js export default class MyPackageView { @@ -166,7 +166,7 @@ visible: false; modalPanel.show(); ``` -## NotificationManager & Notification: Alert your users with notifications +## NotificationManager & Notification: Alert your users with notifications Your package can also display a variety of notifications from "success" to "fatal error": @@ -197,7 +197,7 @@ We just made a review of the components that I played with but I invite you to r ## Test your package with specs -Our package is now developed but we don't have to forget about the tests. To do that, Atom uses [Jasmine](https://jasmine.github.io){:rel="nofollow"}. +Our package is now developed but we don't have to forget about the tests. To do that, Atom uses [Jasmine](https://jasmine.github.io){:rel="nofollow"}. Your default package already has a prepared test file: @@ -211,22 +211,22 @@ describe('MyPackageView', () => { }); ``` -Jasmine specs tests are written in the following way: +Jasmine specs tests are written in the following way: -* `describe()` : A Jasmine test suite starts with a "describe" function which takes a name as the first argument and a function as the second, -* `it()` : A specification is added by using this function, "it" has to be contained into a specification, -* `expect()` : This one is an assertion, when we expect something to happen. +* `describe()` : A Jasmine test suite starts with a "describe" function which takes a name as the first argument and a function as the second, +* `it()` : A specification is added by using this function, "it" has to be contained into a specification, +* `expect()` : This one is an assertion, when we expect something to happen. This is now your turn to play with Jasmine and test your package logic. -In order to run the specs tests, you just have to navigate into the following menu: `View`  -> `Packages`  -> `Run Package Specs`. +In order to run the specs tests, you just have to navigate into the following menu: `View` -> `Packages` -> `Run Package Specs`. -# Publish your package +# Publish your package Our package is now ready to be deployed! Let's send it. ![Publish](/_assets/posts/2016-12-05-create-atom-package/publish.gif) -To do that, we will use the `apm`  CLI tool which comes with Atom when installing it. +To do that, we will use the `apm` CLI tool which comes with Atom when installing it. After pushing your code into a Github repository, simply go into your package directory and type the following command: @@ -238,12 +238,12 @@ Pushing v0.0.1 tag ✓ ... ``` -This command will be in charge of creating the new version tag into repository and publish this version into the Atom registry. +This command will be in charge of creating the new version tag into repository and publish this version into the Atom registry. 
Congratulations, your package is now published and available on the following URL: `https://atom.io/packages/`!

# Continuous Integration

-The final step is to ensure that your package will continue to work in the future when you or your contributors will add new features but also when Atom releases a new beta version. To do that, you can use [Travis-CI](https://travis-ci.org){:rel="nofollow"} on your repository with the following configuration: 
+The final step is to ensure that your package will continue to work in the future, when you or your contributors add new features, but also when Atom releases a new beta version. To do that, you can use [Travis-CI](https://travis-ci.org){:rel="nofollow"} on your repository with the following configuration:

```yaml
language: objective-c
@@ -268,5 +268,5 @@ env:

I personally think that this is a little revolution: it allows developers to make the editor their own and bring in the features they want.

-Moreover, the Atom API is already very rich and very simple to use and this is certainly the main reason why the community offers a large number of packages. 
+Moreover, the Atom API is already very rich and very simple to use, and this is certainly the main reason why the community offers such a large number of packages.

To conclude, as for all libraries, it is not useful to reinvent the wheel by creating already existing packages. The idea is to add features if they don't already exist, in order to enrich the user experience.
diff --git a/_posts/en/2016-12-21-understanding-ssltls-part-1.md b/_posts/en/2016-12-21-understanding-ssltls-part-1.md index 7a3f1a3b5..87ad9ec8c 100644 --- a/_posts/en/2016-12-21-understanding-ssltls-part-1.md +++ b/_posts/en/2016-12-21-understanding-ssltls-part-1.md @@ -34,7 +34,7 @@ First of all, a little history lesson: fasten your seat belt, let's start with p

### **So, what is it?**

-SSL and TLS are cryptographic protocols that provide communications security. 
+SSL and TLS are cryptographic protocols that provide communications security.

They behave like an additional intermediate layer between the transport layer (TCP) and the applicative one (HTTP, FTP, SMTP...) (see diagram below)

@@ -44,7 +44,7 @@ Until now, everything's right!

SSL and TLS are invisible to the user, and don't require the use of a specific application protocol.

-_OSI Model with SSL/TLS_ 
+_OSI Model with SSL/TLS_

![tls-in-osi](/_assets/posts/2016-12-21-understanding-ssltls-part-1/tls-in-osi.png)

@@ -54,53 +54,53 @@ SSL and TLS protocols allow to exchange secure information between two computers.

They are responsible for the following three things:

-1. **Confidentiality**: it's impossible to spy on exchanged information. Client and server must have the insurance that their conversation can't be listened to by someone else. This is ensured by an **encryption ****algorithm**. 
-2. **Integrity**: it's impossible to falsify exchanged information. A client and a server must ensure that transmitted messages are neither truncated nor modified (integrity), and that they come from an expected sender. These functionalities are done by **signature of data**. 
-3. **Authentication**: it allows to be sure of the software identity, the person or corporation with which we communicate. Since SSL **3.0**, the server can also ask the client to authenticate, ensured by the use of **certificates**. 
+1. **Confidentiality**: it's impossible to spy on exchanged information. The client and server must have the assurance that their conversation can't be listened to by someone else. 
This is ensured by an **encryption algorithm**.
+2. **Integrity**: it's impossible to falsify exchanged information. A client and a server must ensure that transmitted messages are neither truncated nor modified (integrity), and that they come from the expected sender. These functions are provided by **data signature**.
+3. **Authentication**: it allows to be sure of the identity of the software, person or corporation with which we communicate. Since SSL **3.0**, the server can also ask the client to authenticate, ensured by the use of **certificates**.

-TLS and SSL protocols are based on a combination of several cryptographic concepts, dealing with both **asymmetrical** and **symmetrical encryption** (we'll discuss about this in a related part of this article). 
+TLS and SSL protocols are based on a combination of several cryptographic concepts, dealing with both **asymmetrical** and **symmetrical encryption** (we'll discuss this in a dedicated part of this article).

-Moreover, these protocols are bound to evolve, independent from cryptographic algorithm and authentication set in a transaction. This allows them to adapt to users needs and have better security because those protocols are not impacted by technical evolution of cryptography (if an encryption becomes obsolete, the protocol can still be exploited by choosing a more secure encryption). 
+Moreover, these protocols are designed to evolve, independently from the cryptographic and authentication algorithms set in a transaction. This allows them to adapt to users' needs and offer better security, because the protocols are not impacted by the technical evolution of cryptography (if a cipher becomes obsolete, the protocol can still be used by choosing a more secure one).

**History:**

**A - SSL**:

-SSL means **Secure Socket Layer.** 
+SSL means **Secure Socket Layer.**

-* Developed by Netscape in **1994,** version **1.0** stayedinternal and had never been released ; 
-* The first real SSL version is **2.0**, released in **February, **1995**. It's the first implementation of SSL that was banned in march 2011 ([RFC 6176](https://tools.ietf.org/html/rfc6176)){:rel="nofollow"} ; 
-* In **November, 1996** SSL releases **3.0**, last version to this day, which will inspire **TLS**, its successor. Its specifications were re-edited in august, 2008 in [RFC 6101](https://tools.ietf.org/html/rfc6101)[4](https://fr.wikipedia.org/wiki/Transport_Layer_Security#cite_note-4). The protocol was banned in 2014, following the [POODLE](https://fr.wikipedia.org/wiki/POODLE) breach. The banishment was definitely ratified in June of 2015 ([RFC 7568](https://tools.ietf.org/html/rfc7568)){:rel="nofollow"}. 
+* Developed by Netscape in **1994**, version **1.0** stayed internal and was never released;
+* The first real SSL version is **2.0**, released in **February 1995**. It's the first implementation of SSL, and it was banned in March 2011 ([RFC 6176](https://tools.ietf.org/html/rfc6176)){:rel="nofollow"};
+* In **November 1996**, SSL released **3.0**, the last version to this day, which would inspire **TLS**, its successor. Its specifications were re-edited in August 2008 in [RFC 6101](https://tools.ietf.org/html/rfc6101). The protocol was banned in 2014, following the [POODLE](https://fr.wikipedia.org/wiki/POODLE) breach. The ban was definitively ratified in June 2015 ([RFC 7568](https://tools.ietf.org/html/rfc7568)){:rel="nofollow"}. 
**B - TLS:**

-TLS means **Transport Layer Security**. 
+TLS means **Transport Layer Security**.

The development of this protocol has been continued by the [IETF](https://www.ietf.org/){:rel="nofollow"}. The TLS protocol is not structurally different from version 3 of SSL, but modifications in the use of hash functions make the two protocols non-interoperable.

-Although TLS, like SSLv3, has an ascending compatibility with previous versions, meaning that at the beginning of the **negotiation **phase, client and server negotiate the best version of the protocol available in common. For security reasons (mentioned above), TLS compatibility with SSL v2 has been dropped. 
+TLS, like SSLv3, is backward compatible with previous versions, meaning that at the beginning of the **negotiation** phase, the client and server negotiate the best protocol version they have in common. For security reasons (mentioned above), TLS compatibility with SSL v2 has been dropped.

-What also differentiates TLS from SSL is that **asymmetrical keys **generation is a little more secured in SSL than in SSLV3, where not one step is uniquely based on MD5 (where weaknesses have appeared in [cryptanalysis](https://en.wikipedia.org/wiki/Cryptanalysis)){:rel="nofollow"}. 
+What also differentiates TLS from SSL is that **asymmetrical key** generation is a little more secure in TLS than in SSLv3, as no single step relies solely on MD5 (in which weaknesses have appeared through [cryptanalysis](https://en.wikipedia.org/wiki/Cryptanalysis){:rel="nofollow"}).

-* In **January 1993**: IETF publishes **TLS 1.0**. Lots of improvements are then brought: 
- * October 1999 ([RFC 2712](https://tools.ietf.org/html/rfc2712)) : Added protocol [Kerberos](https://en.wikipedia.org/wiki/Kerberos_(protocol)){:rel="nofollow"} to TLS 
- * May 2000 ([RFC 2817](https://tools.ietf.org/html/rfc2817) and [RFC 2818](https://tools.ietf.org/html/rfc2818)){:rel="nofollow"} : Migration  to TLS during a HTTP 1.1 session 
- * June 2002 ([RFC 3268](https://tools.ietf.org/html/rfc3268)) : Support of [AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard){:rel="nofollow"} encryption system via TLS 
-* April 2006 ([RFC 4346](https://tools.ietf.org/html/rfc4346)){:rel="nofollow"} : Publication of **TLS 1.1**. 
-* August 2008 ([RFC 5246](https://tools.ietf.org/html/rfc5246)){:rel="nofollow"} : Publication of **TLS 1.2**. 
-* March 2011 ([RFC 6176](https://tools.ietf.org/html/rfc6176)){:rel="nofollow"} : SSLv2 compatibility of all TLS versions dropped. 
-* April 2014: first draft of **TLS 1.3**. 
-* June 2015 ([RFC 7568](https://tools.ietf.org/html/rfc7568)){:rel="nofollow"} : compatibility with SSLv2 and SSLv3 dropped. 
-* October 2015: new draft of **TLS 1.3** 
+* In **January 1999**: the IETF publishes **TLS 1.0**. Lots of improvements are then brought:
  * October 1999 ([RFC 2712](https://tools.ietf.org/html/rfc2712)): Added the [Kerberos](https://en.wikipedia.org/wiki/Kerberos_(protocol)){:rel="nofollow"} protocol to TLS
  * May 2000 ([RFC 2817](https://tools.ietf.org/html/rfc2817) and [RFC 2818](https://tools.ietf.org/html/rfc2818)){:rel="nofollow"}: Migration to TLS during an HTTP 1.1 session
  * June 2002 ([RFC 3268](https://tools.ietf.org/html/rfc3268)): Support of the [AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard){:rel="nofollow"} encryption system via TLS
+* April 2006 ([RFC 4346](https://tools.ietf.org/html/rfc4346)){:rel="nofollow"}: Publication of **TLS 1.1**. 
+* August 2008 ([RFC 5246](https://tools.ietf.org/html/rfc5246)){:rel="nofollow"}: Publication of **TLS 1.2**.
+* March 2011 ([RFC 6176](https://tools.ietf.org/html/rfc6176)){:rel="nofollow"}: SSLv2 compatibility of all TLS versions dropped.
+* April 2014: first draft of **TLS 1.3**.
+* June 2015 ([RFC 7568](https://tools.ietf.org/html/rfc7568)){:rel="nofollow"}: compatibility with SSLv2 and SSLv3 dropped.
+* October 2015: new draft of **TLS 1.3**

**Browsers:**

Most browsers support TLS 1.0. Browsers supporting TLS 1.1 and 1.2 by default are:

* Apple Safari 7 and later;
-* Google Chrome and Chromium 30 and next; 
+* Google Chrome and Chromium 30 and later;
* Microsoft Internet Explorer 11 and later;
* Mozilla Firefox 27 and later;
* Opera 17 and later.
diff --git a/_posts/en/2017-01-20-redux-structure-frontend-applications.md b/_posts/en/2017-01-20-redux-structure-frontend-applications.md index 6badd9434..e4b71c2cb 100644 --- a/_posts/en/2017-01-20-redux-structure-frontend-applications.md +++ b/_posts/en/2017-01-20-redux-structure-frontend-applications.md @@ -20,73 +20,73 @@ permalink: /en/redux-structure-frontend-applications/ --- The Javascript ecosystem is really rich: full of developers, but also full of frameworks and libraries.

-When you want to develop a frontend application, whatever its rendering framework, you will have to structure things into your project in order to organize the data management with views. This case occurs particularly when you use component rendering frameworks like `React` or `VueJS`. 
-Historically, this has been needed by [React](https://facebook.github.io/react/) so that's why Facebook has open sourced its tool named [Flux](http://facebook.github.io/flux/){:rel="nofollow"}. 
+When you want to develop a frontend application, whatever its rendering framework, you will have to structure your project in order to organize how data flows into views. This is particularly the case when you use component rendering frameworks like `React` or `VueJS`.
+Historically, this need arose with [React](https://facebook.github.io/react/), which is why Facebook open-sourced its tool named [Flux](http://facebook.github.io/flux/){:rel="nofollow"}.

Here is the philosophy:

![Flux Diagram](/_assets/posts/2017-01-20-redux-structure-frontend-applications/flux-diagram.png)

-Your application declare `actions`  for each components. These actions allow you to define the state of your data which is stored in a `store` . This stores continually maintains your `view`  up-to-date. 
-We have a drawback in this case because you have to define one store per component. This is working but on large applications you can feel limited with it. 
-In June 2015, Dan Abramov has launched [Redux](http://redux.js.org/){:rel="nofollow"} which simplify store management because you only have one store for all your application. 
+Your application declares `actions` for each component. These actions allow you to define the state of your data, which is kept in a `store`. This store continually keeps your `view` up-to-date.
+There is a drawback here, because you have to define one store per component. This works, but on large applications you can feel limited by it.
+In June 2015, Dan Abramov launched [Redux](http://redux.js.org/){:rel="nofollow"}, which simplifies store management because you only have one store for your whole application.

-All of your application components can access to the whole state. 
+All of your application components can access the whole state. 
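Before building it step by step, here is a minimal sketch of that single-store idea (with a deliberately trivial reducer, just for illustration):

```js
import { createStore } from 'redux';

// A deliberately trivial reducer: the real building blocks are detailed below
const counter = (state = 0, action) =>
  action.type === 'INCREMENT' ? state + 1 : state;

// One store for the whole application...
const store = createStore(counter);

// ...that any component can read from, subscribe to and dispatch against
store.subscribe(() => console.log(store.getState()));
store.dispatch({ type: 'INCREMENT' }); // logs 1
```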
-For more information about Redux/Flux differences I encourage you to have a look at [Dan's answer](http://stackoverflow.com/questions/32461229/why-use-redux-over-facebook-flux/32920459#32920459){:rel="nofollow"} on this subject. 
+For more information about the differences between Redux and Flux, I encourage you to have a look at [Dan's answer](http://stackoverflow.com/questions/32461229/why-use-redux-over-facebook-flux/32920459#32920459){:rel="nofollow"} on this subject.

# Installation

This article will deal with how to install and use Redux in your own projects.

-Please keep in mind that Redux can be used with multiple rendering frameworks like React or VueJS. 
-To install Redux, you will just need the `redux` npm (or yarn) package. 
+Please keep in mind that Redux can be used with multiple rendering frameworks like React or VueJS.
+To install Redux, you will just need the `redux` npm (or yarn) package.

-If you use Redux into a React application, you will also need the `react-redux`  package or even the `vue-redux`  if you want to use it on a VueJS project. 
+If you use Redux in a React application, you will also need the `react-redux` package, or `vue-redux` if you want to use it in a VueJS project.

```bash
$ yarn add redux
```

-Nothing more, you can now start to use Redux. 
+Nothing more: you can now start using Redux.

-# Basic usage 
+# Basic usage

-As previously described, you will have to instanciate a new `store`  that will allow to store the state of all your application. 
-In order to instanciate this store, you will have to give to it some `reducers` . Reducers contain methods that change the state of your application. 
-These state changes occur when an `action`  is dispatched by your application. 
+As previously described, you will have to instantiate a new `store` that will hold the state of your whole application.
+In order to instantiate this store, you will have to give it some `reducers`. Reducers contain methods that change the state of your application.
+These state changes occur when an `action` is dispatched by your application.

-Here we are, we have the 3 things needed by a Redux application: `actions`, `reducers` and a `store`. 
-We will use a simple practical case: a counter that we can increment or decrement with a given value. 
+Here we are: we have the 3 things needed by a Redux application: `actions`, `reducers` and a `store`.
+We will use a simple practical case: a counter that we can increment or decrement by a given value.

Here is our target directory tree:

```
src/
├── actions
-│   └── counter.js
+│   └── counter.js
├── constants
-│   └── ActionTypes.js
+│   └── ActionTypes.js
├── reducers
-│   ├── another.js
-│   ├── counter.js
-│   └── index.js
+│   ├── another.js
+│   ├── counter.js
+│   └── index.js
└── store
    └── configureStore.js
```

## Actions

-Let's write an actions containing file that will implement our 2 actions: increment and decrement. 
-Before all, we will store these actions names into constants in order to keep our code clear and comprehensible as we will always call these constants in all of our code. 
+Let's write a file containing our 2 actions: increment and decrement.
+First of all, we will store these action names in constants, in order to keep our code clear and comprehensible, as we will always refer to these constants throughout our code. 
-Start by creating a `src/constants/ActionTypes.js`  file with the following content: 
+Start by creating a `src/constants/ActionTypes.js` file with the following content:

```js
export const INCREMENT = 'INCREMENT';
export const DECREMENT = 'DECREMENT';
```

-Great, we will now write actions that correspond to these constants in a `src/actions/counter.js` file: 
+Great, we will now write the actions that correspond to these constants in a `src/actions/counter.js` file:

```js
import * as types from '../constants/ActionTypes';

export const increment = (value) => ({ type: types.INCREMENT, value });
export const decrement = (value) => ({ type: types.DECREMENT, value });
```

-You have just created your 2 actions (`increment`  and `decrement`) which each have a type property (required) and a value to add or remove to the current counter value. 
+You have just created your 2 actions (`increment` and `decrement`), which each have a type property (required) and a value to add to or remove from the current counter value.

## Reducers

We will now write the reducer functions that correspond to the actions we previously wrote, in order to update the value in our application state.

-This will be written in the `src/reducers/counter.js` file: 
+This will be written in the `src/reducers/counter.js` file:

```js
import { INCREMENT, DECREMENT } from '../constants/ActionTypes';
@@ -128,12 +128,12 @@ export default function counter(state = initialState, action) {
}
```

-You got the idea, we have our actions wrapped into a `switch() { case ... }`  and directly return the store updated with new values. 
-You can also observe that we have initialized an initial state (initialState) in order to prepare our application state with some default values. 
+You get the idea: we have our actions wrapped in a `switch() { case ... }` and we directly return the state updated with the new values.
+You can also observe that we have defined an initial state (initialState) in order to prepare our application state with some default values.

-`Note:` You can write as many reducers as you need in your application so you can clearly split your code application. 
+`Note:` You can write as many reducers as you need in your application, so you can clearly split your application code.

-Only point if you declare multiple reducers into your application is that you will have to combine them here in a file named `src/reducers/index.js`  as follows: 
+The only point, if you declare multiple reducers in your application, is that you will have to combine them in a file named `src/reducers/index.js`, as follows:

```js
import { combineReducers } from 'redux';
@@ -151,8 +151,8 @@ export default reducers;

## Store

-You have your actions and your reducers so let's dive into the final step: store creation! 
-Store will be created in a `src/store/configureStore.js`  file with only these couple of lines: 
+You have your actions and your reducers, so let's dive into the final step: store creation!
+The store will be created in a `src/store/configureStore.js` file with only a couple of lines:

```js
import { createStore } from 'redux';
@@ -167,14 +167,14 @@ const configureStore = () => {
export default configureStore;
```

-You just have to call the Redux's `createStore()`  API function in order to create your store. 
+You just have to call Redux's `createStore()` API function in order to create your store.
In order to go further, please note that this function can take a maximum of 3 arguments:

* one or many combined reducers,
-* a pre-loaded state (*optional*), corresponding to an initial state, 
-* some "enhancers" (*optionals*), which are some callbacks such as middlewares. 
+* a pre-loaded state (*optional*), corresponding to an initial state,
+* some "enhancers" (*optional*), which are callbacks such as middlewares.

-A middleware is a callback that is executed each time Redux can the `dispatch()`  function so each time an action is triggered. 
+A middleware is a callback that is executed each time Redux calls the `dispatch()` function, so each time an action is triggered.

Here is a simple middleware that logs each dispatched action:

```js
const logger = store => next => action => {
@@ -199,14 +199,14 @@ const configureStore = () => {
export default configureStore;
```

-Do not forget to call the `applyMiddleware()` function when you pass your function to the store argument. 
+Do not forget to call the `applyMiddleware()` function when you pass your middleware to the store's enhancer argument.

# React use case

-Principles are exactly the same when you want to use Redux on a React application. However, the `react-redux`  library brings some cool additional features to fit with React. 
-Indeed, thanks to this library, you will be able to map your React components `props`  with the Redux state and actions. 
+The principles are exactly the same when you want to use Redux in a React application. However, the `react-redux` library brings some cool additional features to fit with React.
+Indeed, thanks to this library, you will be able to map your React components' `props` to the Redux state and actions.

-Let's take a concrete case: a `Counter`  component which could be a component for our previous use case: 
+Let's take a concrete case: a `Counter` component which could be a component for our previous use case:

```js
import React, { PropTypes } from 'react';
@@ -242,18 +242,18 @@ export default connect(
)(Counter);
```

-This way, we are able to retrieve our props values which came from the Redux store but also an `actions` property that will allow us to dispatch Redux events when we will call it. 
+This way, we are able to retrieve our props values coming from the Redux store, but also an `actions` property that will allow us to dispatch Redux events when we call it.

Main things to note here are:

-* `mapStateToProps`  is a function that allows to map our Redux **state properties** with `React properties`, 
-* `mapDispatchToProps`  is a function that allows to map Redux `actions` with `React properties`. 
+* `mapStateToProps` is a function that allows us to map our Redux **state properties** to `React properties`,
+* `mapDispatchToProps` is a function that allows us to map Redux `actions` to `React properties`.

-These two functions are applied thanks to the `connect()`  function brought by the `react-redux` library. 
+These two functions are applied thanks to the `connect()` function brought by the `react-redux` library.

-`Note:` We have to use the `bindActionCreators()`  function over our `CounterActions`  because this is an object that contains actions functions so this function will allows React to call the `dispatch()`  Redux function when React will call the functions in order to have them correctly triggered. 
+`Note:` We have to use the `bindActionCreators()` function over our `CounterActions` because it is an object that contains action functions; this wrapper allows React to call Redux's `dispatch()` function whenever one of these functions is invoked, so that the actions are correctly triggered.

# Conclusion

-If we put in parallel the download numbers of Redux (`1 303 720 download over the previous month)` with the `2 334 221 downloads of React`, we can conclude that Redux is today `very used` and seems very much `appreciated` by developers because it's a `simple` solution that can greatly help you to structure your application.
-Redux brings, in my opinion, a `real solution` to structure complex (or large) business applications and bring that to the React and VueJS (and others) communities.
+If we put the download numbers of Redux (`1 303 720 downloads over the previous month`) in parallel with the `2 334 221 downloads of React`, we can conclude that Redux is today `widely used` and seems very much `appreciated` by developers, because it's a `simple` solution that can greatly help you structure your application.
+Redux brings, in my opinion, a `real solution` to structure complex (or large) business applications, and brings that to the React and VueJS (and other) communities.

diff --git a/_posts/en/2017-01-31-rabbitmq-publish-consume-retry-messages.md b/_posts/en/2017-01-31-rabbitmq-publish-consume-retry-messages.md
index dce0a4514..83fa37972 100644
--- a/_posts/en/2017-01-31-rabbitmq-publish-consume-retry-messages.md
+++ b/_posts/en/2017-01-31-rabbitmq-publish-consume-retry-messages.md
@@ -20,7 +20,7 @@ permalink: /en/rabbitmq-publish-consume-retry-messages/
---
![Swarrot Logo](/_assets/posts/2017-01-23-publier-consommer-reessayer-des-messages-rabbitmq/logo.png)

-RabbitMQ is a message broker, allowing to process things asynchronously. There's already an [article](https://blog.eleven-labs.com/fr/creer-rpc-rabbitmq/){:rel="nofollow"} written about it, if you're not familiar with RabbitMQ.
+RabbitMQ is a message broker that allows us to process things asynchronously. There's already an [article](https://blog.eleven-labs.com/fr/creer-rpc-rabbitmq/){:rel="nofollow"} written about it, if you're not familiar with RabbitMQ.

What I'd like to talk to you about is the lifecycle of a message, with error handling. Everything in a few lines of code.

@@ -30,17 +30,17 @@ Therefore, we're going to configure a RabbitMQ virtual host, publish a message,

The technical solution is based on two libraries:

-* [RabbitMQ Admin Toolkit](https://github.com/odolbeau/rabbit-mq-admin-toolkit){:rel="nofollow"} : PHP library the interacts with the HTTP API of our RabbitMQ server, to create exchanges, queues...
-* [Swarrot](https://github.com/swarrot/swarrot){:rel="nofollow"} : PHP library to consume and publish our messages.
+* [RabbitMQ Admin Toolkit](https://github.com/odolbeau/rabbit-mq-admin-toolkit){:rel="nofollow"}: a PHP library that interacts with the HTTP API of our RabbitMQ server, to create exchanges, queues...
+* [Swarrot](https://github.com/swarrot/swarrot){:rel="nofollow"}: a PHP library to consume and publish our messages.

-Swarrot is compatible with the amqp extension of PHP, as well as the [php-amqplib](https://github.com/php-amqplib/php-amqplib) library. The PHP extension has a certain advantage on performance (written in C) over the library, based on [benchmarks](https://odolbeau.fr/blog/benchmark-php-amqp-lib-amqp-extension-swarrot.html).
To install the extension, click [here](https://serverpilot.io/community/articles/how-to-install-the-php-amqp-extension.html){:rel="nofollow"}.
-The main adversary to Swarrot, [RabbitMqBundle](https://github.com/php-amqplib/RabbitMqBundle){:rel="nofollow"}, is not compatible with the PHP extension, and is not as simple in both configuration and usage.
+Swarrot is compatible with the amqp extension of PHP, as well as the [php-amqplib](https://github.com/php-amqplib/php-amqplib) library. The PHP extension has a certain performance advantage over the library (it is written in C), based on [benchmarks](https://odolbeau.fr/blog/benchmark-php-amqp-lib-amqp-extension-swarrot.html). To install the extension, click [here](https://serverpilot.io/community/articles/how-to-install-the-php-amqp-extension.html){:rel="nofollow"}.
+The main competitor to Swarrot, [RabbitMqBundle](https://github.com/php-amqplib/RabbitMqBundle){:rel="nofollow"}, is not compatible with the PHP extension, and is not as simple in either configuration or usage.

## Configuration

Our first step will be to create our RabbitMQ configuration: our exchange and our queue.

-The RabbitMQ Admin Toolkit library, developed by _[odolbeau](https://github.com/odolbeau){:rel="nofollow"},_ allows us to configure our vhost very easily. Here is a basic configuration declaring an exchange and a queue, allowing us to send our mascot Wilson and his fellow friends to space:
+The RabbitMQ Admin Toolkit library, developed by _[odolbeau](https://github.com/odolbeau){:rel="nofollow"}_, allows us to configure our vhost very easily. Here is a basic configuration declaring an exchange and a queue, allowing us to send our mascot Wilson and his fellow friends to space:

```yaml
# default_vhost.yml
@@ -62,7 +62,7 @@ The RabbitMQ Admin Toolkit library, developed by _[odolbeau](https://github.co
    routing_key: send_astronaut_to_space
```

-Here, we ask the creation of an exchange named "default", and a queue named "send_astronaut_to_space", bound to our exchange via a homonym routing key.
+Here, we ask for the creation of an exchange named "default", and a queue named "send_astronaut_to_space", bound to our exchange via a routing key of the same name.

A binding represents a relation between a queue and an exchange.

Let's launch the command to create our vhost:

@@ -81,11 +81,11 @@ If you connect to the RabbitMQ management interface (ex: http://127.0.0.1:15672/

![Capture of exchanges created](/_assets/posts/2017-01-23-publier-consommer-reessayer-des-messages-rabbitmq/create_exchanges.png)

-Click on the _Exchanges_ tab: an exchange named _default_ has been created, with a binding to our queue as indicated in our terminal.
+Click on the _Exchanges_ tab: an exchange named _default_ has been created, with a binding to our queue, as indicated in our terminal.

![Capture of queues created](/_assets/posts/2017-01-23-publier-consommer-reessayer-des-messages-rabbitmq/create_queues.png)

-Now click on the _Queues_ tab: _send_astronaut_to_space_ is also here.
+Now click on the _Queues_ tab: _send_astronaut_to_space_ is also here.

Let's take a look at the publication and consumption of messages.

@@ -100,7 +100,7 @@ After installing the bundle, we have to configure it:

```yaml
# app/config/config.yml
swarrot:
-    provider: pecl # pecl or amqp_lib
+    provider: pecl # pecl or amqp_lib
    connections:
        rabbitmq:
            host: '%rabbitmq_host%'

@@ -123,11 +123,11 @@ This is a configuration example.
The interesting part comes around the "consumer Every message published in an exchange will be routed to a queue according to its routing jey. Therefore, we need to process a message stored in a queue. Using Swarrot, special things called _processors_ are in charge of this. -To consume a message, we need to create our own processor. As indicated in the documentation, a processor is just a Symfony service who needs to implement the _ProcessInterface_ interface. +To consume a message, we need to create our own processor. As indicated in the documentation, a processor is just a Symfony service who needs to implement the _ProcessInterface_ interface. ![Swarrot - Middleware stack](https://camo.githubusercontent.com/8ac89cd415aebfb1026b2278093dbcc986b126da/68747470733a2f2f646f63732e676f6f676c652e636f6d2f64726177696e67732f642f3145615f514a486f2d3970375957386c5f62793753344e494430652d41477058527a7a6974416c59593543632f7075623f773d39363026683d373230){:rel="nofollow"} -The particularity of processors is that they work using middlewares, allowing to add behavior before and/or after the processing of our message (our processor). That's why there is a _middleware_stack_ parameter, that holds two things: _swarrot.processor.exception_catcher_ and _swarrot.processor.ack_. Although optional, these middlewares bring nice flexibility. We'll come back on this later on. +The particularity of processors is that they work using middlewares, allowing to add behavior before and/or after the processing of our message (our processor). That's why there is a _middleware_stack_ parameter, that holds two things: _swarrot.processor.exception_catcher_ and _swarrot.processor.ack_. Although optional, these middlewares bring nice flexibility. We'll come back on this later on. ```php get('swarrot.publisher')->publish('send_astronaut_to_space_publisher', $message); ``` -The service _swarrot.publisher_ deals with publishing our message. Simple right? +The service _swarrot.publisher_ deals with publishing our message. Simple right? -After setting up _queues_, published and consumed a message, we now have a good view of the life-cycle of a message. +After setting up _queues_, published and consumed a message, we now have a good view of the life-cycle of a message. ## Handling errors @@ -189,11 +189,11 @@ One last aspect I'd like to share with you today is about errors while consuming Setting aside implementation problems in your code, it's possible that you encounter exceptions, due to external causes. For instance, you have a processor that makes HTTP calls to an outside service. The said service can be temporarily down, or returning an error. You need to publish a message and make sure that this one is not lost. Wouldn't it be great to publish this message again if the service does not respond? And do so after a certain amount of time? Somewhere along the way, I've been confronted to this problem. We knew such things could happen and we needed to automatically "retry" our messages publication. -I'm going to show you how to proceed, keeping our example _send_astronaut_to_space._ Let's decide that we're going to retry the publication of our message 3 times maximum. To do that, we need 3 retry queues. Fortunately, configuration of retry queues and exchanges is so easy with [RabbitMQ Admin Toolkit](https://github.com/odolbeau/rabbit-mq-admin-toolkit){:rel="nofollow"}: we only need one line! 
Let's see this more closely:
+I'm going to show you how to proceed, keeping our example _send_astronaut_to_space_. Let's decide that we're going to retry the publication of our message 3 times maximum. To do that, we need 3 retry queues. Fortunately, configuration of retry queues and exchanges is so easy with [RabbitMQ Admin Toolkit](https://github.com/odolbeau/rabbit-mq-admin-toolkit){:rel="nofollow"}: we only need one line! Let's see this more closely:

```yaml
# default_vhost.yml
-# ...
+# ...
    queues:
        send_astronaut_to_space:
            durable: true

@@ -228,8 +228,8 @@ Create binding between exchange default and queue send_astronaut_to_space (with

We still create a default exchange. Then, many things are done:

-* Creation of an exchange called _dl_ and queues _queues_ _send_astronaut_to_space and_ _send_astronaut_to_space_dl_ : we'll come back on this later on.
-* Creation of an exchange called _retry_ and queues _send_astronaut_to_space_retry_1_, _send_astronaut_to_space_retry_2_ and _send_astronaut_to_space_retry_3_: here is the interesting part, all queues that will be used to do a retry of our message.
+* Creation of an exchange called _dl_ and the queues _send_astronaut_to_space_ and _send_astronaut_to_space_dl_: we'll come back to this later on.
+* Creation of an exchange called _retry_ and the queues _send_astronaut_to_space_retry_1_, _send_astronaut_to_space_retry_2_ and _send_astronaut_to_space_retry_3_: here is the interesting part, all the queues that will be used to retry our message.

Now let's configure our consumer.

@@ -238,7 +238,7 @@ With Swarrot, handling of retries is very easy to configure. Do you remember tho

```yaml
# app/config/config.yml
consumers:
-# ...
+# ...
    middleware_stack:
        - configurator: swarrot.processor.exception_catcher
        - configurator: swarrot.processor.ack
@@ -255,13 +255,13 @@ With Swarrot, handling of retries is very easy to configure. Do you remember tho
            routing_key: send_astronaut_to_space
```

-The main difference with our previous configuration is located around the parameter _middleware_stack_: we need to add the processor _swarrot.processor.retry_, with its retry strategy:
+The main difference with our previous configuration is located around the _middleware_stack_ parameter: we need to add the processor _swarrot.processor.retry_, with its retry strategy:

* the name of the retry exchange (defined above)
* the number of publishing attempts
* the pattern of retry queues

-The workflow works this way: if the message is not _acknowledged_ followingan exception the first time, it will be published in the _retry_ exchange_,_ with routing key_send_astronaut_to_space_retry_1\._ Then, 5 seconds later, the message is published back in our main queue _send_astronaut_to_space_. If another error is encountered, it will be republished in the retry exchange, with the routing key _send_astronaut_to_space_retry_2_, and 25 seconds later the message will be back on our main queue. Same thing one last time with 100 seconds.
+The workflow works this way: if the message is not _acknowledged_ following an exception the first time, it will be published in the _retry_ exchange, with the routing key _send_astronaut_to_space_retry_1_. Then, 5 seconds later, the message is published back in our main queue _send_astronaut_to_space_. If another error is encountered, it will be republished in the retry exchange, with the routing key _send_astronaut_to_space_retry_2_, and 25 seconds later the message will be back on our main queue. Same thing one last time with 100 seconds.
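To see this retry workflow in action, it helps to look at the consumer side first. Here is a minimal, hypothetical sketch of the _SendAstronautToSpace_ processor used in the demo below (the class body is an assumption inferred from the log output, not the article's actual code); throwing an exception is what makes the retry middleware republish the message:

```php
<?php

namespace AppBundle\Processor;

use Swarrot\Broker\Message;
use Swarrot\Processor\ProcessorInterface;

class SendAstronautToSpace implements ProcessorInterface
{
    public function process(Message $message, array $options)
    {
        // Simulate a failing external service: the exception bubbles up to
        // the retry middleware, which republishes the message on the retry
        // exchange (3 attempts maximum with the configuration above).
        throw new \Exception('An error occurred while consuming hello');
    }
}
```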
```bash
bin/console swarrot:consume:send_astronaut_to_space send_astronaut_to_space

@@ -273,10 +273,10 @@ bin/console swarrot:consume:send_astronaut_to_space send_astronaut_to_space
[2017-01-12 12:55:51] app.ERROR: [ExceptionCatcher] An exception occurred. This exception has been caught. {"swarrot_processor":"exception","exception":"[object] (Exception(code: 0): An error occurred while consuming hello at /home/gus/dev/swarrot/src/AppBundle/Processor/SendAstronautToSpace.php:12)"}
```

-When creating our virtual host, we saw that an exchange called _dl ,_ associated to a queue _send_astronaut_to_space_dl_ has been created. This queue is our message's last stop if the retry mechanism is not able to successfully publish our message (an error is still encountered after each retry).
-If we look closely the details of queue _send_astronaut_to_space_, we see that "_x-dead-letter-exchange_" is equal to"_dl_", and that "_x-dead-letter-routing-key_" is equal to "_send_astronaut_to_space_", corresponding to our binding explained previously.
+When creating our virtual host, we saw that an exchange called _dl_, associated with a queue _send_astronaut_to_space_dl_, has been created. This queue is our message's last stop if the retry mechanism is not able to successfully publish our message (an error is still encountered after each retry).
+If we look closely at the details of the queue _send_astronaut_to_space_, we see that "_x-dead-letter-exchange_" is equal to "_dl_", and that "_x-dead-letter-routing-key_" is equal to "_send_astronaut_to_space_", corresponding to the binding explained previously.

-On every error in our processor, the _retryProcessor_ will catch this error, and republish our message in the retry queue as many times as we've configured it. Then Swarrot will hand everything to RabbitMQ to route our message to the queue queue _send_astronaut_to_space_dl._
+On every error in our processor, the _retryProcessor_ will catch this error, and republish our message in the retry queue as many times as we've configured it. Then Swarrot will hand everything over to RabbitMQ to route our message to the queue _send_astronaut_to_space_dl_.

## Conclusion

diff --git a/_posts/en/2017-02-22-consul-service-discovery-failure-detection.md b/_posts/en/2017-02-22-consul-service-discovery-failure-detection.md
index 6140e33f6..8b3f6caa3 100644
--- a/_posts/en/2017-02-22-consul-service-discovery-failure-detection.md
+++ b/_posts/en/2017-02-22-consul-service-discovery-failure-detection.md
@@ -20,7 +20,7 @@ oldCategoriesAndTags:
permalink: /en/consul-service-discovery-failure-detection/
---
# Introduction
-Consul is a product developed in Go language by the HashiCorp company and was born in 2013.
+Consul is a product developed in the Go language by HashiCorp, born in 2013.

Consul has multiple components, but its main goal is to manage the knowledge of the services in an architecture (which is service discovery); it also allows us to ensure that all contacted services are always available, by adding health checks on them.

Basically, Consul will bring us a DNS server that will resolve the IP addresses of a host's services, depending on which ones are healthy.

@@ -46,7 +46,7 @@ As you can see, we'll have 3 Docker machines:

* A machine that will act as our "`node 01`" with an HTTP service that will run on it (Swarm),
* A machine that will act as our "`node 02`" with an HTTP service that will run on it (Swarm).
-We'll also add on our two nodes a Docker container with Registrator image that will facilitate the discovery of Docker containers into Consul.
+We'll also add, on each of our two nodes, a Docker container running the Registrator image, which will facilitate the discovery of Docker containers by Consul.

For more information about Registrator, you can visit: [https://gliderlabs.com/registrator/](https://gliderlabs.com/registrator/){:rel="nofollow"}.

Let's start installing our architecture!

@@ -89,7 +89,7 @@ Then, open in your browser the following URL: `http://:8500`.

## Second machine: Node 01

-Now, it's time to create t`he machine that corresponds to `our first Docker Swarm cluster node and that will also receive the master role for our cluster (we need one...):
+Now, it's time to create the machine that corresponds to our first Docker Swarm cluster node and that will also receive the master role for our cluster (we need one...):

```bash
$ docker-machine create -d virtualbox \
@@ -100,8 +100,8 @@ $ docker-machine create -d virtualbox \
    --engine-opt="cluster-advertise=eth1:2376" swarm-node-01
```

-As you can see, we've added the `--swarm-discovery`  option with our Consul machine IP address and port 8500 that corresponds to the Consul API. This way, Docker Swarm will use the Consul API to store machine information with the rest of the cluster.
-We'll now configure our environment to use this machine and install a `Registrator`  container on top of it in order to auto-register our new services (Docker containers).
+As you can see, we've added the `--swarm-discovery` option with our Consul machine IP address and port 8500, which corresponds to the Consul API. This way, Docker Swarm will use the Consul API to store machine information with the rest of the cluster.
+We'll now configure our environment to use this machine and install a `Registrator` container on top of it in order to auto-register our new services (Docker containers).

To do that, type the following:

@@ -119,12 +119,12 @@ $ docker run -d \
    consul://$(docker-machine ip consul):8500
```

-Here, you can notice that we share the host Docker socket in the container. This solution could be a [controversial](https://www.lvh.io/posts/dont-expose-the-docker-socket-not-even-to-a-container.html) solution but in our example case, forgive me about that ;){:rel="nofollow"}
+Here, you can notice that we share the host Docker socket with the container. This could be a [controversial](https://www.lvh.io/posts/dont-expose-the-docker-socket-not-even-to-a-container.html){:rel="nofollow"} solution, but in our example case, forgive me for that ;)

-If you want to register services to Consul I recommend to register them using the Consul API in order to keep control on what's added in your Consul.
+If you want to register services to Consul, I recommend registering them using the Consul API in order to keep control over what's added to your Consul (a minimal example is sketched below).

-The `-ip`  option allows to precise to Registrator that we want to register our services with the given IP address: so here the Docker machine IP address and not the Docker container internal IP address.
-We are now ready to start our HTTP service. This one is located under the "ekofr/http-ip" Docker image which is a simple Go HTTP application that renders "hello, " with the IP address of the current container.
+The `-ip` option allows us to tell Registrator that we want to register our services with the given IP address: here, the Docker machine IP address and not the Docker container's internal IP address.
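If you prefer registering a service yourself through the Consul agent HTTP API, as recommended above, here is a minimal sketch (the service name, address and port are placeholder values for this example):

```php
<?php

// Register a service against the local Consul agent
// (PUT /v1/agent/service/register).
$service = [
    'Name'    => 'http-ip',
    'Address' => '192.168.99.101', // Docker machine IP, not the container IP
    'Port'    => 80,
];

$context = stream_context_create([
    'http' => [
        'method'  => 'PUT',
        'header'  => "Content-Type: application/json\r\n",
        'content' => json_encode($service),
    ],
]);

file_get_contents('http://127.0.0.1:8500/v1/agent/service/register', false, $context);
```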
+We are now ready to start our HTTP service. It is available as the "ekofr/http-ip" Docker image, a simple Go HTTP application that renders "hello," followed by the IP address of the current container.

In order to fit this article's needs, we will also create a different network for the two Docker machines, in order to clearly identify the IP addresses of our two services.

@@ -153,7 +153,7 @@ hello from 172.18.0.X

## Third machine: Node 02

-We'll now repeat most steps we've ran for our first node but we'll change some values. First, create the Docker machine:
+We'll now repeat most of the steps we ran for our first node, but we'll change some values. First, create the Docker machine:

```bash
$ docker-machine create -d virtualbox \
@@ -211,9 +211,9 @@ http-ip.service.consul. 0 IN A 192.168.99.102

In other words, a kind of load balancing will be done across these services when you try to reach them at `http://http-ip.service.consul`. OK, but what about the load balancing repartition? Are we able to define a priority and/or a weight for each service?

-Sadly, the answer is no, actually. An issue is currently opened about that on Github in order to bring this support. You can find it here: [https://github.com/hashicorp/consul/issues/1088](https://github.com/hashicorp/consul/issues/1088){:rel="nofollow"}.
+Sadly, the answer is currently no. An issue is open about that on GitHub in order to bring this support. You can find it here: [https://github.com/hashicorp/consul/issues/1088](https://github.com/hashicorp/consul/issues/1088){:rel="nofollow"}.

-Indeed, if we look in details about `SRV`  DNS record type, here is what we get:
+Indeed, if we look in detail at the `SRV` DNS record type, here is what we get:

```bash
$ dig @$(docker-machine ip consul) http-ip.service.consul SRV
@@ -240,7 +240,7 @@ Here, we have an IP address that corresponds to each HTTP service that we have r

We'll now add a Health Check to our service in order to ensure that it can be used safely by our users.

-In this case we'll start to return on our node 01 and suppress the container named `ekofr/http-ip`  in order to recreate it with a Health Check:
+In this case we'll first go back to our node 01 and remove the container named `ekofr/http-ip` in order to recreate it with a Health Check:

```bash
$ eval $(docker-machine env swarm-node-01)
@@ -264,7 +264,7 @@ $ docker run -d \
    ekofr/http-ip
```

-You can do the same thing on your node 02 (by paying attention to modify the `node-01`  values to `node-02` ) and you should now visualize these checks on the Consul web UI:
+You can do the same thing on your node 02 (paying attention to change the `node-01` values to `node-02`) and you should now see these checks on the Consul web UI:

![Consul Infrastructure Schema](/_assets/posts/2017-02-22-consul-service-discovery-failure/checks.png)

diff --git a/_posts/en/2017-03-15-cqrs-pattern-2.md b/_posts/en/2017-03-15-cqrs-pattern-2.md
index 50f1af803..df5003f9f 100644
--- a/_posts/en/2017-03-15-cqrs-pattern-2.md
+++ b/_posts/en/2017-03-15-cqrs-pattern-2.md
@@ -16,7 +16,7 @@ oldCategoriesAndTags:
permalink: /en/cqrs-pattern-2/
---

-CQRS, which means _Command_ _Query Responsibility Segregation_, comes from CQS (_Command Query Separation_) introduced by Bertrand Meyer in _Object Oriented Software Construction_. Meyer states that every method should be either a _query_ or a _command_.
+CQRS, which means _Command Query Responsibility Segregation_, comes from CQS (_Command Query Separation_) introduced by Bertrand Meyer in _Object Oriented Software Construction_. Meyer states that every method should be either a _query_ or a _command_.

The difference between CQS and CQRS is that every CQRS object is divided in two objects: one for the query and one for the command.

@@ -31,59 +31,59 @@ We can clearly see the separation between writing parts and reading ones: the us

## Commands

-A command tells our application to do something. Its name always uses the indicative tense, like _TerminateBusiness_ or _SendForgottenPasswordEmail_. It's very important not to confine these names to _create, change, delete..._ and to really focus on the use cases (see _CQRS Documents_ at the end of this document for more information).
+A command tells our application to do something. Its name always uses the imperative mood, like _TerminateBusiness_ or _SendForgottenPasswordEmail_. It's very important not to confine these names to _create, change, delete..._ and to really focus on the use cases (see _CQRS Documents_ at the end of this document for more information).

A command captures the intent of the user. No content is returned by the server in the response; only queries are in charge of retrieving data.

## Queries

-Using different _data stores_ in our application for the command and query parts seems to be a very interesting idea. As Udi Dahan explains very well in his article _Clarified CQRS_, we could create a user interface oriented database, which would reflect what we need to display to our user. We would gain in both performance and speed.
-Dissociating our data stores (one for the modification of data and one for reading) does not imply the use of relational databases for both of them for instance. Therefore, it would be more thoughtful to use a database that can read our queries fastly.
+Using different _data stores_ in our application for the command and query parts seems to be a very interesting idea. As Udi Dahan explains very well in his article _Clarified CQRS_, we could create a user-interface-oriented database, which would reflect what we need to display to our user. We would gain in both performance and speed.
+Dissociating our data stores (one for the modification of data and one for reading) does not imply the use of relational databases for both of them, for instance. Therefore, it would be more thoughtful to use a database that can answer our queries quickly.

If we separate our data sources, how can we still keep them synchronized? Indeed, our "read" data store is not supposed to be aware that a command has been sent! This is where events come into play.

## Events

-An event is a notification of something that _has_ happened. Like a command, an event must respect a rule regarding names. Indeed, the name of an event always needs to be in the past tense, because we need to notify other parties listening on our event that a command has been completed. For instance, _UserRegistered_ is a valid event name.
+An event is a notification of something that _has_ happened. Like a command, an event must respect a rule regarding names. Indeed, the name of an event always needs to be in the past tense, because we need to notify the other parties listening to our event that a command has been completed. For instance, _UserRegistered_ is a valid event name.

Events are processed by one or more consumers. Those consumers are in charge of keeping things synchronized in our query store.
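As an illustration of these naming rules, commands and events are usually plain, immutable message objects. The following classes are a hypothetical sketch, not code from the article:

```php
<?php

// A command uses the imperative mood: it expresses an intent,
// something that MUST happen.
final class SendForgottenPasswordEmail
{
    private $email;

    public function __construct(string $email)
    {
        $this->email = $email;
    }

    public function getEmail(): string
    {
        return $this->email;
    }
}

// An event uses the past tense: it states a fact,
// something that HAS happened.
final class UserRegistered
{
    private $userId;

    public function __construct(string $userId)
    {
        $this->userId = $userId;
    }

    public function getUserId(): string
    {
        return $this->userId;
    }
}
```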
-Very much like commands, events are messages. The difference with a command is summed up here: a command is characterized by an action that _must_ happen, and an event by something that _has_ happened.
+Very much like commands, events are messages. The difference with a command can be summed up like this: a command is characterized by an action that _must_ happen, whereas an event is characterized by something that _has_ happened.

-## Pros and Cons
+## Pros and Cons

The benefits of this segregation are numerous. Here are a few:

-* _Scalability :_ The number of reads being far higher than the number of modifications inside an application, applying the CQRS pattern allows to focus independently on both concerns. One main advantage of this separation is scalability: we can scale our reading part differently from our writing part (allocate more resources, different types of database).
-* _Flexibility_ : it's easy to update or add on the reading side, without changing anything on the writing side. Data consistency is therefore not altered.
+* _Scalability:_ The number of reads being far higher than the number of modifications inside an application, applying the CQRS pattern allows us to focus on each concern independently. One main advantage of this separation is scalability: we can scale our reading part differently from our writing part (allocating more resources, or using different types of databases).
+* _Flexibility:_ It's easy to update or add features on the reading side, without changing anything on the writing side. Data consistency is therefore not altered.

One of the main cons is convincing the members of a team that the benefits justify the additional complexity of this solution, as the excellent article called _CQRS Journey_ underlines (see below).

## Usage

-The CQRS pattern must be used inside a _bounded context_ (key notion of _Domain Driven Development_), or a business component of your application. Indeed, although this pattern has an impact on your code at several places, it must not be at the higher level of your application.
+The CQRS pattern must be used inside a _bounded context_ (a key notion of _Domain-Driven Design_), or a business component of your application. Indeed, although this pattern has an impact on your code in several places, it must not sit at the highest level of your application.

## Event Sourcing

-What's interesting with CQRS is _event sourcing_. It can be used without application of the CQRS pattern, but if we use CQRS, _event sourcing_ appears like a must-have.
-_Event sourcing_ consists in saving every event that is occurring, in a database, and therefore having a back-up of facts. In a design of _event sourcing_, you cannot delete or modify events, you can only add more. This is beneficial for our business and our information system, because we can know at a specific time what is the status of a command, a user or something else. Also, saving events allows us to rebuild the series of events and gain time in analysis.
-One of the examples given by Greg Young in the conference _Code on the Beach_, is the balance in a bank account. It can be considered like a column in a table that we update every time money is debited or credited on the account. One other approach would be to store in our database all transactions that enabled us to get to this balance. It then becomes easier to be sure that the indicated amount is correct, because we keep a trace of events that have occurred, without being able to change them.
+What's interesting with CQRS is _event sourcing_.
It can be used without applying the CQRS pattern, but if we use CQRS, _event sourcing_ appears like a must-have.
+_Event sourcing_ consists in saving, in a database, every event that occurs, and therefore having a back-up of facts. In an _event sourcing_ design, you cannot delete or modify events, you can only add more. This is beneficial for our business and our information system, because we can know, at a specific time, what the status of a command, a user or something else is. Also, saving events allows us to rebuild the series of events and gain time in analysis.
+One of the examples given by Greg Young at the _Code on the Beach_ conference is the balance of a bank account. It can be considered like a column in a table that we update every time money is debited or credited on the account. Another approach would be to store in our database all the transactions that enabled us to get to this balance. It then becomes easier to be sure that the indicated amount is correct, because we keep a trace of the events that have occurred, without being able to change them.

-We will not go into more details on this principle in this article. A very interesting in-depth study is available in the article _CQRS Journey_ (see below).
+We will not go into more detail on this principle in this article. A very interesting in-depth study is available in the article _CQRS Journey_ (see below).

## Recap

-CQRS is a simple pattern, which enables fantastic possibilities. It consists in separating the reading part from the writing part, with _queries_ and _commands._
+CQRS is a simple pattern, which enables fantastic possibilities. It consists in separating the reading part from the writing part, with _queries_ and _commands_.

Many advantages result from its usage, especially in terms of flexibility and scaling.

-_Event sourcing_ completes theCQRS pattern by saving the history that determines the current state of our application. This is very useful in domains like accounting, because you get in your database a series of events (like financial transactions for instance) that cannot be modified or deleted.
+_Event sourcing_ completes the CQRS pattern by saving the history that determines the current state of our application. This is very useful in domains like accounting, because you get in your database a series of events (like financial transactions, for instance) that cannot be modified or deleted.
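To make Greg Young's bank account example concrete, here is a minimal sketch (hypothetical code, assuming two simple event types) of how the current balance is derived by replaying the stored events instead of being stored itself:

```php
<?php

// Each stored event is an immutable fact; the balance is never stored
// directly, it is always derived by replaying the whole history.
$events = [
    ['type' => 'MoneyDeposited', 'amount' => 100],
    ['type' => 'MoneyWithdrawn', 'amount' => 30],
    ['type' => 'MoneyDeposited', 'amount' => 50],
];

$balance = array_reduce($events, function (int $balance, array $event) {
    return $event['type'] === 'MoneyDeposited'
        ? $balance + $event['amount']
        : $balance - $event['amount'];
}, 0);

echo $balance; // 120
```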
### To go further

[CQRS Documents](https://cqrs.files.wordpress.com/2010/11/cqrs_documents.pdf "CQRS Documents"){:rel="nofollow"}, _Greg Young_

-[CQRS Journey](https://msdn.microsoft.com/en-us/library/jj554200.aspx "Exploring CQRS and Event Sourcing"){:rel="nofollow"}, _Dominic Betts, Julián Domínguez, Grigori Melnik, Fernando Simonazzi, Mani Subramanian_
+[CQRS Journey](https://msdn.microsoft.com/en-us/library/jj554200.aspx "Exploring CQRS and Event Sourcing"){:rel="nofollow"}, _Dominic Betts, Julián Domínguez, Grigori Melnik, Fernando Simonazzi, Mani Subramanian_

-[Clarified CQRS](http://www.udidahan.com/2009/12/09/clarified-cqrs/){:rel="nofollow"}, _Udi Dahan_
+[Clarified CQRS](http://www.udidahan.com/2009/12/09/clarified-cqrs/){:rel="nofollow"}, _Udi Dahan_

-[CQRS and Event Sourcing - Code on the Beach 2014](https://www.youtube.com/watch?v=JHGkaShoyNs){:rel="nofollow"}, _Greg Young_
+[CQRS and Event Sourcing - Code on the Beach 2014](https://www.youtube.com/watch?v=JHGkaShoyNs){:rel="nofollow"}, _Greg Young_

diff --git a/_posts/en/2017-04-12-http2-future-present.md b/_posts/en/2017-04-12-http2-future-present.md
index f587e8334..b078780f1 100644
--- a/_posts/en/2017-04-12-http2-future-present.md
+++ b/_posts/en/2017-04-12-http2-future-present.md
@@ -21,58 +21,58 @@ oldCategoriesAndTags:
- tls
permalink: /en/http2-future-present/
---
-Remember, in `may 1996`, the very first HTTP protocol version (HTTP/1.0) was born.
-This protocol is described as a [RFC 1945](https://tools.ietf.org/html/rfc1945){:rel="nofollow"}.
+Remember: in `May 1996`, the very first HTTP protocol version (HTTP/1.0) was born.
+This protocol is described in [RFC 1945](https://tools.ietf.org/html/rfc1945){:rel="nofollow"}.

-But time as passed and web applications has evolved a lot. We are now creating web applications that brings more and more logic in the browser and for that we need to load more and more assets: this means that we have to load multiple CSS which sometimes are making animations in the browser or also complex operations, more and more Javascript and images too.
+But time has passed and web applications have evolved a lot. We are now creating web applications that bring more and more logic into the browser, and for that we need to load more and more assets: multiple CSS files (which sometimes handle animations or other complex operations in the browser), more and more JavaScript, and images too.

-`HTTP/1.1` release has offered and allowed us to use a kind of new web technologies we've known these last years, but web application are more and more done on smartphones and other connected devices, which is why the needs are now focused on web browsing performances.
+The `HTTP/1.1` release enabled the new kinds of web technologies we've known these last years, but web applications are more and more used on smartphones and other connected devices, which is why the needs are now focused on web browsing performance.

-After a first step made by Google in 2009 with the `SPDY` protocol, `HTTP/2` is finally going in the same direction with the [RFC 7540](https://tools.ietf.org/html/rfc7540){:rel="nofollow"}.
+After a first step made by Google in 2009 with the `SPDY` protocol, `HTTP/2` is finally going in the same direction with [RFC 7540](https://tools.ietf.org/html/rfc7540){:rel="nofollow"}.
# HTTP/2 Introduction

-Nowadays, HTTP/2 protocol is supported by most browsers and it's important to point out. While writing this blog post, only Opera Mini does not implement the new protocol, as shown on the following table:
+Nowadays, the HTTP/2 protocol is supported by most browsers, and that's important to point out. At the time of writing this blog post, only Opera Mini does not implement the new protocol, as shown in the following table:

![Can I use HTTP/2?](/_assets/posts/2017-04-12-http2-future-present/caniuse.jpg)

-That being said, you can consider upgrading your own web applications to HTTP/2 as soon as possible and thus offer high browsing performances to your visitors.
+That being said, you can consider upgrading your own web applications to HTTP/2 as soon as possible, and thus offer high browsing performance to your visitors.

-Indeed, HTTP/2 will bring to you new features which will help a lot on improving your applications. We will describe them in the rest of this article.
+Indeed, HTTP/2 brings you new features which will help a lot in improving your applications. We will describe them in the rest of this article.

## TLS native support

-Even if encryption is` not mandatory`, most of browsers today only support HTTP/2 using TLS associated encryption and this is not really a problem because it's highly recommended to obtain a SSL certificate, which can be done easily and for free with services like Let's Encrypt. This will also help you to secure your application if they are not already.
+Even if encryption is `not mandatory`, most browsers today only support HTTP/2 with TLS encryption, and this is not really a problem because it's highly recommended to obtain an SSL certificate anyway, which can be done easily and for free with services like Let's Encrypt. This will also help you secure your applications if they are not already.

-For information, if you choose to not use encryption, the protocol diminutive will be `h2c` , where it will be `h2`  if you do.
+For information, if you choose not to use encryption, the protocol identifier will be `h2c`, whereas it will be `h2` if you do.

-If you want more information about how to [improve SSL exchanges security](https://vincent.composieux.fr/article/improve-ssl-exchanges-safety-made-by-your-web-server){:rel="nofollow"}, I highly invite you to read my blog post on this topic.
+If you want more information about how to [improve SSL exchanges security](https://vincent.composieux.fr/article/improve-ssl-exchanges-safety-made-by-your-web-server){:rel="nofollow"}, I highly invite you to read my blog post on this topic.

## Stream Multiplexing

-HTTP/1 resources were loaded one by one as you can see below on a HTTP/1 application waterfall. HTTP/2 will allow to gain a lot of time on "waiting time" because multiple resources could be sent/downloaded by the client using the same HTTP stream (which is often called binary stream).
+With HTTP/1, resources are loaded one by one, as you can see below on an HTTP/1 application waterfall. HTTP/2 allows us to save a lot of "waiting time" because multiple resources can be sent/downloaded by the client using the same HTTP stream (often called a binary stream).

![Waterfall HTTP?](/_assets/posts/2017-04-12-http2-future-present/waterfall_http.jpg)

-Here, time passed and displayed in green color is corresponding to wait time before resource loading.
Purple time is corresponding to resource loading time (TTFB - Time To First Byte) and finally the grey time is corresponding on the resource reception to the client.
+Here, the time displayed in green corresponds to the wait time before resource loading. The purple time corresponds to the resource loading time (TTFB - Time To First Byte), and finally the grey time corresponds to the reception of the resource by the client.

-Here is a waterfall of resources loading using the HTTP/2 protocol:
+Here is a waterfall of resources loading using the HTTP/2 protocol:

![Waterfall HTTP/2?](/_assets/posts/2017-04-12-http2-future-present/waterfall_http2.jpg)

You can clearly see here that the time allocated to waiting on resources (the green time above) has disappeared completely, and all resources are loaded at the same time because they share the same stream.

-Moreover, given that search engines are taking the page load time as an important metric for upgrading rank of websites, this is a great reason to go on HTTP/2 protocol: it will be a great improvement for your SEO.
+Moreover, given that search engines take page load time as an important metric for ranking websites, this is a great reason to move to the HTTP/2 protocol: it will be a great improvement for your SEO.

-In order to help you to visualize the resource loading speed time, here is a demo comparing HTTP/1 and HTTP/2 made by Golang:
+In order to help you visualize the resource loading speed, here is a demo comparing HTTP/1 and HTTP/2 made with Golang:

* Loading with `HTTP/1.1`: [http://http2.golang.org/gophertiles?latency=0](http://http2.golang.org/gophertiles?latency=0){:rel="nofollow"}
* Loading with `HTTP/2`: [https://http2.golang.org/gophertiles?latency=0](https://http2.golang.org/gophertiles?latency=0){:rel="nofollow"}

-## HPACK: Headers compression
+## HPACK: Header compression

This new protocol version also comes with compression of the headers sent by the server, in order to optimize stream exchanges.

@@ -87,10 +87,10 @@ accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0
accept-encoding: gzip, deflate, sdch, br
```

-On my next request, headers `:authority` , `:method` , `:scheme` , `accept`  and `accept-encoding`  will probably not change.
+On my next request, the headers `:authority`, `:method`, `:scheme`, `accept` and `accept-encoding` will probably not change.
HTTP/2 will be able to compress them to gain some space in the response.

-In order to let you test header compression by yourself, I invite you to use the [h2load](https://nghttp2.org/documentation/h2load-howto.html){:rel="nofollow"} tool, which you can use to make a benchmark on HTTP/2 calls, by making here two requests:
+In order to let you test header compression by yourself, I invite you to use the [h2load](https://nghttp2.org/documentation/h2load-howto.html){:rel="nofollow"} tool to benchmark HTTP/2 calls, here by making two requests:

```bash
$ h2load https://vincent.composieux.fr -n 2 | grep traffic

@@ -102,7 +102,7 @@ We can see here that my compressed headers `saved me 25.29%` of exchange size.

This is a little revolution in browser resource loading.

-Indeed, your web server will be able to push assets in your browser's cache that your application will need or oculd have needed, and this, only when you detect that your client will need that resource.
+Indeed, your web server will be able to push assets into your browser's cache that your application will need or could have needed, and this, only when you detect that your client will need that resource.

In order to preload a resource, you just have to send an HTTP header formatted in the following way:

@@ -110,7 +110,7 @@ In order to preload a resource, you just have to send a HTTP header formatted in
Link: </fonts/font.woff2>;rel="preload";as="font"
```

-You can of course define multiple `Link` headers, and `as`  attributes can also take the following values: `font` , `image` , `style`  or `script`.
+You can of course define multiple `Link` headers, and the `as` attribute can also take the following values: `font`, `image`, `style` or `script`.

It's also possible to use the HTML markup to preload your resources:

```
<link rel="preload" href="/fonts/font.woff2" as="font">
```

-Also, if you use the Symfony PHP framework, please note that this one has implemented assets preloading in version 3.3. To use it, you just have to use the preload wrapper this way:
+Also, if you use the Symfony PHP framework, please note that it has implemented asset preloading since version 3.3. To use it, you just have to use the preload wrapper this way:

{% raw %}
```
@@ -128,15 +128,15 @@ Also, if you use the Symfony PHP framewor
```

For more information about this feature, you can visit: [http://symfony.com/blog/new-in-symfony-3-3-asset-preloading-with-http-2-push](http://symfony.com/blog/new-in-symfony-3-3-asset-preloading-with-http-2-push){:rel="nofollow"}

-Also note that a new Symfony component is currently in review [on this Pull Request](https://github.com/symfony/symfony/pull/22273) in order to manage all available links that allow to preload or push (preload, preset, prerender, ...){:rel="nofollow"}.
+Also note that a new Symfony component is currently in review [on this Pull Request](https://github.com/symfony/symfony/pull/22273){:rel="nofollow"} in order to manage all the available link relations that allow preloading or pushing (preload, prefetch, prerender, ...).

## Server Hints (prefetch)

-Please note that this method is not related to HTTP/2 because it's available for a long time but it's interesting to compare it with `preload`.
+Please note that this method is not related to HTTP/2, because it has been available for a long time, but it's interesting to compare it with `preload`.

-So, preload will load a resource `into your browser's cache` but prefetch will ensure that the client doesn't already have this resource and will retrieve it `only if the client needs it`.
+So, preload will load a resource `into your browser's cache`, but prefetch will ensure that the client doesn't already have this resource and will retrieve it `only if the client needs it`.

-Its use case is quite the same depending on you not having have to specify the `as` attribute:
+Its usage is almost the same, except that you do not have to specify the `as` attribute:

```
Link: </js/script.js>; rel=prefetch
```

# Use the HTTP/2 protocol

-In the case of your web server being nginx, HTTP/2 protocol is supported since the `1.9.5 version` and you just have to specify in the `listen`  attribute that you want to use http2:
+If your web server is nginx, the HTTP/2 protocol is supported since `version 1.9.5`, and you just have to specify in the `listen` directive that you want to use http2:

```
server {
@@ -152,21 +152,21 @@ server {
...
```

-In order to ensure that HTTP/2 is fully enabled on your server, I invite you to type `nginx -V`  and check if you have the `--with-http_v2_module`  compilation option enabled. Additionally, you can also check that your OpenSSL version used by nginx is recent.
+In order to ensure that HTTP/2 is fully enabled on your server, I invite you to type `nginx -V` and check that the `--with-http_v2_module` compilation option is enabled. Additionally, you can also check that the OpenSSL version used by nginx is recent.

Starting from here, you just have to restart your web server in order to enable the new protocol.

-`Note:` For your information if http/2 is not supported by the client's browser, your web server will automatically fallback on http/1.1 protocol.
+`Note:` For your information, if HTTP/2 is not supported by the client's browser, your web server will automatically fall back on the HTTP/1.1 protocol.

-On Apache web server side, `versions 2.4.12` and greater also support HTTP/2.
+On the Apache web server side, `versions 2.4.12` and greater also support HTTP/2.

-Globally, HTTP/2 protocol activation is quite simple. If you come from Javascript world, you have to know that a [http2](https://www.npmjs.com/package/http2){:rel="nofollow"} package is available in order to instanciate an `express` server with the new protocol version.
+Globally, HTTP/2 protocol activation is quite simple. If you come from the JavaScript world, you should know that an [http2](https://www.npmjs.com/package/http2){:rel="nofollow"} package is available in order to instantiate an `express` server with the new protocol version.

# Conclusion

-HTTP/2 can be used starting today on your web applications and can only be good (for your users mostly) but also for your application on multiple things: performances, SEO, encryption.
+HTTP/2 can be used on your web applications starting today, and can only be a good thing, mostly for your users, but also for your application, on multiple fronts: performance, SEO, encryption.

-It's also really easy to setup and support a large panel of languages which should really help you to go forward.
+It's also really easy to set up and is supported by a large panel of languages, which should really help you move forward.

-So, what's next? No work has begun until now on a HTTP/3 or a new version after HTTP/2 but future and technologies should reserve us a third version of this highly used protocol!
+So, what's next? No work has begun so far on an HTTP/3 or any new version after HTTP/2, but the future and its technologies should hold a third version of this highly used protocol in store for us!

diff --git a/_posts/en/2017-04-20-upload-file-ajax.md b/_posts/en/2017-04-20-upload-file-ajax.md
index 375925ec8..170446904 100644
--- a/_posts/en/2017-04-20-upload-file-ajax.md
+++ b/_posts/en/2017-04-20-upload-file-ajax.md
@@ -181,7 +181,7 @@ Now let's move to the client side with JavaScript implementation.

### Client-side implementation with JavaScript

-As a PHP developer, I think this part is the most interesting. This is where the magic of **[AJAX](https://fr.wikipedia.org/wiki/Ajax_(informatique)){:rel="nofollow"}** will take place. As a reminder, AJAX  stands for **Asynchronous JavaScript XML**, and allows the browser to interact with the server asynchronously.
+As a PHP developer, I think this part is the most interesting. This is where the magic of **[AJAX](https://fr.wikipedia.org/wiki/Ajax_(informatique)){:rel="nofollow"}** will take place.
As a reminder, AJAX stands for **Asynchronous JavaScript and XML**, and allows the browser to interact with the server asynchronously.

[XMLHttpRequest](https://developer.mozilla.org/fr/docs/Web/API/XMLHttpRequest/Utiliser_XMLHttpRequest){:rel="nofollow"} is a browser-accessible JavaScript object that allows you to create AJAX requests.

@@ -276,7 +276,7 @@ Small demo in GIF and full code here [https://github.com/lepiaf/file-upload](ht

### To conclude

-We have seen together how to implement the upload of a file asynchronously with AJAX and Symfony. This method allows you to encode and send the file as a binary data stream. Unlike a base64 file encoding, it does not inflate the file's weight on the network. The representation of the file in base64 increases the weight of the file by **~33%**. For a few kilobytes file this increase in weight is not significant, but with a file of several megabytes, this has a significant impact. In addition, the file is properly managed by the browser and the server. This makes the upload more efficient and allows the use of a file resource representation on the server-side  (**$_FILES** on the PHP side).
+We have seen together how to implement the asynchronous upload of a file with AJAX and Symfony. This method allows you to encode and send the file as a binary data stream. Unlike base64 file encoding, it does not inflate the file's weight on the network. The base64 representation of a file increases its weight by **~33%**. For a file of a few kilobytes this increase in weight is not significant, but for a file of several megabytes, it has a significant impact. In addition, the file is properly managed by the browser and the server. This makes the upload more efficient and allows the use of a file resource representation on the server side (**$_FILES** on the PHP side).

References:

diff --git a/_posts/en/2017-06-14-amp-web-3-0.md b/_posts/en/2017-06-14-amp-web-3-0.md
index 856c650ec..072297d4b 100644
--- a/_posts/en/2017-06-14-amp-web-3-0.md
+++ b/_posts/en/2017-06-14-amp-web-3-0.md
@@ -40,11 +40,11 @@ The project relies on 3 components allowing to propose this new user experience:

- AMP HTML

-AMP is HTML by default, but in some cases new tags are set up to allow an increased speed of execution. For example `````` , `````` , etc...
+AMP is HTML by default, but in some cases new tags are set up to allow an increased speed of execution. For example `amp-img`, `amp-video`, etc.

- AMP JS

-AMP proposes a new use of javascript in your web pages, by the mean of an exhaustive list of optimized javascript libraries. You can find it [here](https://github.com/ampproject/amphtml/tree/master/src){:rel="nofollow"}. But the main optimization lies in the desynchronization of the external javascript libraries to your site. That's why most AMP pages do not display any advertising. The latter remains the obsession of the web performance.
+AMP proposes a new use of JavaScript in your web pages, by means of an exhaustive list of optimized JavaScript libraries, which you can find [here](https://github.com/ampproject/amphtml/tree/master/src){:rel="nofollow"}. But the main optimization lies in loading external JavaScript libraries asynchronously from your site. That's why most AMP pages do not display any advertising, which remains the bane of web performance.
- AMP Cache

@@ -59,7 +59,7 @@ To go further in the description of the principles of AMP I invite you to read t

As AMP is nothing more than some HTML, we will start with the mandatory phase of setting up the metas of our page.

-Begin by create file index.amp, And then put the tag indicating the type of document.
+Begin by creating an index.amp file, and then put in the tag indicating the type of document.

```html
<!doctype html>
<html ⚡>
```

@@ -128,9 +128,9 @@ To be present in Google's AMP search as seen above, you need to set up the [sche
```

-We now have a basic page. But is it really AMP? In order to perform an audit, AMP has implemented two systems. The simplest is directly online, at this url https://validator.ampproject.org/. Your second option is to display your page in the browser by adding ```#development=1```  to your url. In your chrome console you should now see the validation errors.
+We now have a basic page. But is it really AMP? In order to perform an audit, AMP has implemented two systems. The simplest is directly online, at this URL: https://validator.ampproject.org/. Your second option is to display your page in the browser by adding ```#development=1``` to your URL. In your Chrome console you should now see the validation errors.

-A web page is good, but you must put some CSS. This is also where AMP is quite restrictive. You can use only one external CSS file, otherwise the CSS must be in a specific tag. You must also explicitly set the size of each item.
+A web page is good, but you need to add some CSS. This is also where AMP is quite restrictive. You can use only one external CSS file, otherwise the CSS must be in a specific tag. You must also explicitly set the size of each item.

```html

@@ -194,4 +194,4 @@ If you only have one page and nothing more, you have to put the canonical link o

Congratulations, you've made a big step in the web performance!

-There are many other features available, such as setting up a [login system](https://www.ampproject.org/docs/tutorials/login_requiring), a [live blog](https://www.ampproject.org/docs/tutorials/live_blog){:rel="nofollow"} but also advertising, analytics ... I invite you to look at the page https://www.ampproject.org/docs/guides/ which will inspire you for your web application.
+There are many other features available, such as setting up a [login system](https://www.ampproject.org/docs/tutorials/login_requiring){:rel="nofollow"}, a [live blog](https://www.ampproject.org/docs/tutorials/live_blog){:rel="nofollow"}, but also advertising, analytics... I invite you to look at the page https://www.ampproject.org/docs/guides/ which will inspire you for your web application.

diff --git a/_posts/en/2017-06-15-construct-structure-go-graphql-api.md b/_posts/en/2017-06-15-construct-structure-go-graphql-api.md
index 1a363426b..d76ebfda4 100644
--- a/_posts/en/2017-06-15-construct-structure-go-graphql-api.md
+++ b/_posts/en/2017-06-15-construct-structure-go-graphql-api.md
@@ -35,27 +35,27 @@ Here is the file structure I propose to create in this blog post that seems to m

```bash
.
├── app
-│   ├── config.go
-│   ├── config.json
-│   └── config_test.go
+│ ├── config.go
+│ ├── config.json
+│ └── config_test.go
├── security
-│   ├── security.go
-│   └── security_test.go
+│ ├── security.go
+│ └── security_test.go
├── mutations
-│   ├── mutations.go
-│   ├── mutations_test.go
-│   ├── user.go
-│   └── user_test.go
+│ ├── mutations.go
+│ ├── mutations_test.go
+│ ├── user.go
+│ └── user_test.go
├── queries
-│   ├── queries.go
-│   ├── queries_test.go
-│   ├── user.go
-│   └── user_test.go
+│ ├── queries.go
+│ ├── queries_test.go
+│ ├── user.go
+│ └── user_test.go
├── types
-│   ├── role.go
-│   ├── role_test.go
-│   ├── user.go
-│   └── user_test.go
+│ ├── role.go
+│ ├── role_test.go
+│ ├── user.go
+│ └── user_test.go
└── main.go
```

diff --git a/_posts/en/2017-07-05-take-care-your-mails.md b/_posts/en/2017-07-05-take-care-your-mails.md
index bb67d3646..961f2a049 100644
--- a/_posts/en/2017-07-05-take-care-your-mails.md
+++ b/_posts/en/2017-07-05-take-care-your-mails.md
@@ -46,7 +46,7 @@ The only way to communicate with your mailbox and especially Gmail or Inbox is H

We all know that developing e-mails is not the simplest thing in the world. Setting up a development environment can be very difficult and time consuming. But no worries, Google thought about it!

-Just go to [https://script.google.com](https://script.google.com){:rel="nofollow noreferrer"} and put the following script in the Code.gs  file
+Just go to [https://script.google.com](https://script.google.com){:rel="nofollow noreferrer"} and put the following script in the Code.gs file:

```javascript


@@ -60,7 +60,7 @@ function testSchemas() {
  });
}
```
-Then add a ```TEMPLATE NAME```  file containing the html of your email.
+Then add a ```TEMPLATE NAME``` file containing the HTML of your email.

```html


@@ -147,7 +147,7 @@ Now if you "run" your script, Google will ask you the right to send you an email

```

-#### A call-to-action
+#### A call-to-action

```html

diff --git a/_posts/en/2017-08-03-continuous-improvement-how-to-run-agile-retrospective.md b/_posts/en/2017-08-03-continuous-improvement-how-to-run-agile-retrospective.md
index c91e393b3..fdee2287b 100644
--- a/_posts/en/2017-08-03-continuous-improvement-how-to-run-agile-retrospective.md
+++ b/_posts/en/2017-08-03-continuous-improvement-how-to-run-agile-retrospective.md
@@ -20,11 +20,11 @@ oldCategoriesAndTags:
 permalink: /en/continuous-improvement-how-to-run-agile-retrospective/
 ---

-We usually share on this blog our technical expertise around web and mobile development or architecture. Today, I would like to address another expertise, equally important: **our methodology**.
+We usually share on this blog our technical expertise around web and mobile development or architecture. Today, I would like to address another, equally important, area of expertise: **our methodology**.

-At Eleven Labs, our agile methodology is mainly based on the SCRUM framework. I will have the opportunity to share other blog posts to describe more precisely our methods and tools. In this article, we are going to talk about a must-have of all the agile methods, and perhaps the most widely used agile ceremony: the retrospective meeting.
+At Eleven Labs, our agile methodology is mainly based on the SCRUM framework. I will have the opportunity to describe our methods and tools more precisely in other blog posts. In this article, we are going to talk about a must-have of all the agile methods, and perhaps the most widely used agile ceremony: the retrospective meeting.
-We will focus on the SCRUM sprint retrospective in the context of a web development project, but most of the tools mentioned here could be used with any other agile frameworks and almost all types of projects.
+We will focus on the SCRUM sprint retrospective in the context of a web development project, but most of the tools mentioned here could be used with any other agile framework and almost all types of projects.

[![SCRUM Sprint Retrospective](/_assets/posts/2017-02-16-amelioration-continue-comment-animer-vos-retrospectives-agiles/eleven-labs-scrum-sprint-focus-retrospective.png)](/_assets/posts/2017-02-16-amelioration-continue-comment-animer-vos-retrospectives-agiles/eleven-labs-scrum-sprint-focus-retrospective.png){: .center-image .no-link-style}

@@ -33,40 +33,40 @@ We will focus on the SCRUM sprint retrospective in the context of a web develo

Goal?
=====

-At the end of the sprint, or the end of any other development cycle, the purpose of the sprint review is to analyze the product increment developed during this iteration, i.e. 'what' was built, whereas **the purpose of the retrospective is to look at the processes and tools** used to achieve this, i.e. **'how' it was built**.
+At the end of the sprint, or of any other development cycle, the purpose of the sprint review is to analyze the product increment developed during this iteration, i.e. 'what' was built, whereas **the purpose of the retrospective is to look at the processes and tools** used to achieve this, i.e. **'how' it was built**.

-The ultimate goal is to make the team even **more productive** for the next cycle. In other words, the goal is to **increase the velocity** (number of points delivered by the team during each iteration) **and the quality** of the product.
+The ultimate goal is to make the team even **more productive** for the next cycle. In other words, the goal is to **increase the velocity** (the number of points delivered by the team during each iteration) **and the quality** of the product.

-In concrete terms, the discussions during this meeting should make it possible to write an **improvement action plan**. The team must commit to follow these actions during the next iteration. These actions can be described as technical 'stories' or 'improvements' prioritized within a dedicated improvement backlog, and each can be assigned to a particular member who then commits to carry out this task. Therefore a dedicated capacity of the team must be reserved to be able to handle these actions in parallel of other functional 'user stories'.
+In concrete terms, the discussions during this meeting should make it possible to write an **improvement action plan**. The team must commit to following these actions during the next iteration. These actions can be described as technical 'stories' or 'improvements' prioritized within a dedicated improvement backlog, and each can be assigned to a particular member who then commits to carrying out this task. Therefore a dedicated share of the team's capacity must be reserved to handle these actions in parallel with other functional 'user stories'.

When?
=====

-Each day we obviously have many opportunities to improve our tools or processes. However the team usually tends to rush because of day-to-day emergencies. It's therefore decisive to **schedule regularly a dedicated time** for the team to meet, step back, and find ways to improve on the long run.
+Each day we obviously have many opportunities to improve our tools or processes. However, the team usually tends to rush because of day-to-day emergencies. It's therefore essential to **regularly schedule a dedicated time** for the team to meet, step back, and find ways to improve in the long run.
This meeting usually takes place **at the end of each iteration** (sprint or release for example), and allows the team to build a strong base for the next ones. That's why we talk about **continuous improvement**: at the end of each iteration, we **learn** from previous experiences and **adapt** our processes step by step.

-Whether it be at the beginning of the project or years later, a team can always get better. It's best to be in a team that is able to learn and improve than being in one always stuck at the same stage.
+Whether it be at the beginning of the project or years later, a team can always get better. It's better to be in a team that is able to learn and improve than in one that is always stuck at the same stage.

How much time?
==============

Like every agile meeting, and in order to be more efficient, the retrospective must always be **time-boxed**.

-To ensure this maximum duration is never exceeded, this timebox has to be **clearly announced before** the meeting even starts. In addition, a member of the team can be entitled with the role of '**Timekeeper**'. His role would be to regularly inform the team about the time spent and still to spend. To be noted: his role is only to notify the team about the time. He should not interrupt the meeting and discussions by all means or urge everyone with this timing.
+To ensure this maximum duration is never exceeded, the timebox has to be **clearly announced before** the meeting even starts. In addition, a member of the team can be given the role of '**Timekeeper**': he regularly informs the team of the time already spent and the time remaining. Note that his role is only to keep the team aware of the time; he should not interrupt the meeting and its discussions, nor rush everyone because of the timing.

We commonly agree on the fact that this ceremony shouldn't last more than 45 minutes per week of sprint, i.e. **1 hour 30 minutes for 2-week iterations**. It may even be possible to reduce this maximum duration to 1 hour with a more experienced team.

-For larger teams, if we want to decrease this total duration along with the goal of letting everyone speak, it may be a good thing to timebox every member's speaking time, or to split the team in smaller groups. We will focus on these techniques in the next parts.
+For larger teams, if we want to decrease this total duration while still letting everyone speak, it may be a good idea to timebox every member's speaking time, or to split the team into smaller groups. We will focus on these techniques in the next parts.

Who?
====

-Every member of the **development team (developers, testers...) along with the Product Owner** (or Product Manager) participate in this retrospective. Thus, thanks to this ceremony, the team can improve its development processes, and the Product Owner can focus on improving his communication or his specifications, so that the whole organization becomes more productive in the end.
+Every member of the **development team (developers, testers...) along with the Product Owner** (or Product Manager) participates in this retrospective. Thus, thanks to this ceremony, the team can improve its development processes, and the Product Owner can focus on improving his communication and his specifications, so that the whole organization becomes more productive in the end.
-Generally, every **stakeholder** working on a daily basis and in touch with the development team can participate. For example, it can be interesting to make the ops team supervisor -who is delivering the team's product to production- participate if he's not a part of the development team already. It's crucial to involve him as much as possible in this continuous improvement cycle because, if the team aims to be more productive, not only it has to produce more, but also to deliver its products on the final environment.
+Generally, every **stakeholder** working on a daily basis and in touch with the development team can participate. For example, it can be interesting to involve the ops team supervisor (who delivers the team's product to production) if he's not a part of the development team already. It's crucial to involve him as much as possible in this continuous improvement cycle because, if the team aims to be more productive, not only does it have to produce more, but it also has to deliver its product to the final environment.

On his side, the **SCRUM Master** acts as a **facilitator**: he makes the tools available for everyone and helps the team to self-manage and plan its improvement actions. His role is **more important when the project is starting**, but with experience, the **self-managed team** will be able to lead this retrospective on its own, without him. This will allow the SCRUM Master to focus on external disturbances instead of following the team's internal improvements.

@@ -91,7 +91,7 @@ Before starting to talk, it may be good to have a global vision on how every tea

- 2 : 'I won't say much, and will let the others bring up the issues'
- 1 : 'I can't discuss anything, and will agree with anything said or commanded'

-For these grades to be objective, they have to be given anonymously. The facilitator can then gather the results of the vote and communicate them to the team who could then analyze them.
+For these grades to be objective, they have to be given anonymously. The facilitator can then gather the results of the vote and communicate them to the team, which can then analyze them.

The goal is of course to reach (on average) the highest possible grade, to bring the discussion as far as possible.

@@ -109,7 +109,7 @@ In the same way, the 5 grades of the Safety Check described above can be display

In the case of a serious issue, a single word can be enough to initiate the discussion. With experience, the facilitator will be able to sense this kind of situation even before starting the retrospective, and will be able to propose to the team members to write one word only on a post-it, within a limited amount of time, before each member explains his word in front of the whole team.

-- **Post-it split into several categories**
+- **Post-its split into several categories**

The most common practice, and usually the most efficient, is to ask everyone to summarize their ideas on post-its according to different categories:

@@ -124,7 +124,7 @@ The most common practice, and usually the most efficient, is to ask to everyone

*Post-its, as used during our retrospectives*

-No matter the categories, the idea is to define a short time (5 to 10 minutes) during which everyone puts their ideas on post-its. Then every member takes turn and goes to pin it on the board, in the right category, while explaining his point to the team. The last person to go, or the one who volunteers, could then gather the similar ideas to help the team achieve a global view of what has been pointed out.
+No matter the categories, the idea is to define a short time (5 to 10 minutes) during which everyone puts their ideas on post-its. Then every member takes his turn and pins them on the board, in the right category, while explaining his point to the team. The last person to go, or the one who volunteers, can then gather the similar ideas to help the team achieve a global view of what has been pointed out.

For larger teams, this kind of exercise may take too much time. A solution can be to limit the number of post-its per person, to make sure everyone stays concise and focused on what he has to say.

@@ -138,7 +138,7 @@ In the same perspective of walking in other team members' shoes, a variation of

- **Timeline**

-To gather all data of an iteration, we can also ask the team to draw the sprint history, from the planning to the retrospective. This way everyone comes and adds their missing facts on the chronological line drawn on the board, explaining each fact, positive or negative.
+To gather all the data of an iteration, we can also ask the team to draw the sprint history, from the planning to the retrospective. This way everyone comes and adds the facts they find missing on the chronological line drawn on the board, explaining each fact, positive or negative.

An iteration of the SCRUM sprint kind being short, this technique doesn't always make sense, because it's not always easy to put a specific fact at its right place on the timeline. So this exercise is usually reserved for long timelines, for a release retrospective, or a whole project.

@@ -150,7 +150,7 @@ The previous techniques allow to have a good insight on the problems encountered

On the previous step, we've seen that anyone could express, in many ways, both positive and negative points in front of the rest of the team.

-When it's essential to stimulate the team, the facilitator can ask anyone to dig a bit deeper into their personal analysis, by asking everyone to work on each negative point and turn it into a positive one along with an idea of improvement. This way everyone will share only positive points or improvements in front of the team.
+When it's essential to stimulate the team, the facilitator can ask everyone to dig a bit deeper into their personal analysis, working on each negative point to turn it into a positive one along with an idea of improvement. This way everyone will share only positive points and improvements in front of the team.

Not only does it help boost the team's spirit, but this technique also makes it quicker to find the solutions which will help establish the action plan.

@@ -158,7 +158,7 @@

This method aims to guide the team, by walking it through 4 successive questions, so it can decide on the actions to run:

-- **O** : 'Objective': We focus on the unbiased **facts**: 'What happened?'
+- **O** : 'Objective': We focus on the unbiased **facts**: 'What happened?'
- **R** : 'Reflective': 'How do you **feel** about these facts?'
- **I** : 'Interpretive': 'What is the **meaning** of this? How can we **interpret** it? What does it mean?'
- **D** : 'Decisional': At last, we get to the **decisions**: 'What actions could be considered in order to prevent that in the future? What commitments can we make?'

diff --git a/_posts/en/2017-08-24-api-platform-en.md b/_posts/en/2017-08-24-api-platform-en.md
index c0b424eb3..44f498071 100644
--- a/_posts/en/2017-08-24-api-platform-en.md
+++ b/_posts/en/2017-08-24-api-platform-en.md
@@ -15,7 +15,7 @@ oldCategoriesAndTags:
 permalink: /en/build-an-api-with-api-platform/
 ---

-API Platform defines itself as a « PHP framework to build modern web APIs ». Indeed, this tool will help us rapidly build a rich and easily usable API.
+API Platform defines itself as a « PHP framework to build modern web APIs ». Indeed, this tool will help us rapidly build a rich and easy-to-use API.

Why reinvent the wheel? API Platform comes with a bunch of _features_ like automatic documentation, filters, sorting, and many more. In this article, we're going to build an API using API Platform, and talk about some of the features that come with it. I will assume that API Platform has already been installed in your project.

diff --git a/_posts/en/2020-10-13-anemic-domain.md b/_posts/en/2020-10-13-anemic-domain.md
index 48de37cab..87f39046d 100644
--- a/_posts/en/2020-10-13-anemic-domain.md
+++ b/_posts/en/2020-10-13-anemic-domain.md
@@ -115,7 +115,7 @@ class ArticleService
 }
 ```

-Looking back at our code, you might be thinking « it looks pretty standard to me, what’s wrong with it? ». Well, if you look at it conceptually, does it make sense? Is it logical to create an empty shell `new Article()` with no properties at all at first? Then setting a title? Then a content? I doubt that you'd be comfortable reading a blank page with nothing in it.
+Looking back at our code, you might be thinking « it looks pretty standard to me, what’s wrong with it? ». Well, if you look at it conceptually, does it make sense? Is it logical to first create an empty shell `new Article()` with no properties at all, then to set a title, then a content? I doubt you'd be comfortable reading a blank page with nothing in it.
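To make the contrast concrete, here is a minimal sketch (an illustration, not the post's actual code) of a richer `Article` that cannot exist as a blank page, because its constructor enforces the invariant:

```php
<?php

class Article
{
    private $title;
    private $content;

    // An Article cannot exist without a title and a content:
    // the invariant is enforced at construction time.
    public function __construct(string $title, string $content)
    {
        if ('' === $title || '' === $content) {
            throw new \InvalidArgumentException('An article needs a title and a content.');
        }

        $this->title = $title;
        $this->content = $content;
    }
}
```

Any `Article` that exists is now guaranteed to be readable, and the service layer no longer needs its chain of setters.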
### Time goes by

diff --git a/_posts/fr/2011-11-01-authentification-sur-symfony2.md b/_posts/fr/2011-11-01-authentification-sur-symfony2.md
index 3c40c40d3..bdd2a3c25 100644
--- a/_posts/fr/2011-11-01-authentification-sur-symfony2.md
+++ b/_posts/fr/2011-11-01-authentification-sur-symfony2.md
@@ -37,7 +37,7 @@ Then comes the user authorization step, which can also be performed in several w

- secured objects

In our example, we will create an admin section in the project; this section will only be available to users with the administrator role.
-Everything happens in the app/config/security.yml file, which will let us set up the pages we want to protect as well as the login page.
+Everything happens in the app/config/security.yml file, which will let us set up the pages we want to protect as well as the login page.
First of all, we will add a firewall specifying the authentication type we want; here it is a login form, reachable at the /login URL, using /login_check to validate the form, and finally the logout URL.
{% raw %}

@@ -127,7 +127,7 @@
 logout:
     defaults: { _controller: ClycksBundle:Default:logout }
 ```

-For login_check, there is no need for a controller, Symfony does it for you :).
+For login_check, there is no need for a controller, Symfony does it for you :).
All that's left is to fill in the controller that displays the login form.
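As an illustration, a minimal sketch of such a controller in Symfony 2, following the pattern of the official documentation (the `Clycks` namespace and the template name are assumptions based on the routing above):

```php
<?php

namespace Clycks\ClycksBundle\Controller;

use Symfony\Bundle\FrameworkBundle\Controller\Controller;
use Symfony\Component\Security\Core\SecurityContext;

class SecurityController extends Controller
{
    public function loginAction()
    {
        $request = $this->getRequest();
        $session = $request->getSession();

        // Grab the last authentication error, if any, from the request or the session.
        if ($request->attributes->has(SecurityContext::AUTHENTICATION_ERROR)) {
            $error = $request->attributes->get(SecurityContext::AUTHENTICATION_ERROR);
        } else {
            $error = $session->get(SecurityContext::AUTHENTICATION_ERROR);
        }

        return $this->render('ClycksBundle:Default:login.html.twig', array(
            'last_username' => $session->get(SecurityContext::LAST_USERNAME),
            'error'         => $error,
        ));
    }
}
```

The action only renders the form and the last error; the actual credential check is handled by the firewall configured in security.yml.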
diff --git a/_posts/fr/2014-01-06-creer-rpc-rabbitmq.md b/_posts/fr/2014-01-06-creer-rpc-rabbitmq.md
index 8d451a2d3..17fd6293f 100644
--- a/_posts/fr/2014-01-06-creer-rpc-rabbitmq.md
+++ b/_posts/fr/2014-01-06-creer-rpc-rabbitmq.md
@@ -20,8 +20,8 @@ permalink: /fr/creer-rpc-rabbitmq/
 ---
RabbitMQ is a queue manager: it stores messages so that another task can read them later. A more thorough presentation will be given in another article. In this article, we will focus on an important RabbitMQ concept: RPC.

-An RPC (remote procedure call) allows you to send a message to a queue and wait for its response. To better understand this concept, let's start from a simple example: generating a URL for externalized content.
-So there is a client that sends a content to a RabbitMQ queue in order to get the generated URL. The client then only needs a "call" method.
+An RPC (remote procedure call) allows you to send a message to a queue and wait for its response. To better understand this concept, let's start from a simple example: generating a URL for externalized content.
+So there is a client that sends a content to a RabbitMQ queue in order to get the generated URL. The client then only needs a "call" method.
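As an illustration, a minimal sketch of such a client with php-amqplib could look like the following (class, queue and credential names are assumptions, not the post's actual code):

```php
<?php

use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

class UrlGeneratorClient
{
    private $channel;
    private $callbackQueue;
    private $response;
    private $correlationId;

    public function __construct()
    {
        $connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
        $this->channel = $connection->channel();

        // Exclusive anonymous queue on which the server will publish its reply.
        list($this->callbackQueue) = $this->channel->queue_declare('', false, false, true, false);
        $this->channel->basic_consume($this->callbackQueue, '', false, true, false, false, array($this, 'onResponse'));
    }

    public function onResponse(AMQPMessage $message)
    {
        // Only accept the reply matching our pending request.
        if ($message->get('correlation_id') === $this->correlationId) {
            $this->response = $message->body;
        }
    }

    // Publishes the content and blocks until the generated URL comes back.
    public function call($content)
    {
        $this->response = null;
        $this->correlationId = uniqid();

        $message = new AMQPMessage($content, array(
            'correlation_id' => $this->correlationId,
            'reply_to'       => $this->callbackQueue,
        ));
        $this->channel->basic_publish($message, '', 'url_generator');

        while (null === $this->response) {
            $this->channel->wait();
        }

        return $this->response;
    }
}
```

The client publishes the content with a `correlation_id` and a `reply_to` queue, then blocks until the server pushes the generated URL back on that queue.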
> There is more to life than Symfony — A colleague

-> There is more to life than Symfony — A colleague
+> There is more to life than Symfony — A colleague

-Frameworks are essential to the corporate world, but they sometimes overshadow the evolution of a language itself. This is the case with PHP 7: even though its release has been widely covered, it stays hidden behind other projects carried by the language. On the eve of a potentially radical change in the way we develop in PHP, it is important to highlight the evolutions it brings and their consequences.
+Frameworks are essential to the corporate world, but they sometimes overshadow the evolution of a language itself. This is the case with PHP 7: even though its release has been widely covered, it stays hidden behind other projects carried by the language. On the eve of a potentially radical change in the way we develop in PHP, it is important to highlight the evolutions it brings and their consequences.

# PHP 6

@@ -29,21 +29,21 @@ Among the features planned for this version, we can mention:

* Native annotation support
* Multi-threading & better 64-bit support

-Nevertheless, no stable version was ever released, even though many books on the subject were published over those years. To avoid any confusion with PHP 6, the new version of PHP therefore jumped straight to 7.
+Nevertheless, no stable version was ever released, even though many books on the subject were published over those years. To avoid any confusion with PHP 6, the new version of PHP therefore jumped straight to 7.

# The origins of PHP 7

-To understand where PHP 7 comes from, we need to talk about the performance problems of the PHP interpreter. Clearly web-oriented, the language nevertheless suffers from many flaws, especially when it comes to performance and execution speed.
+To understand where PHP 7 comes from, we need to talk about the performance problems of the PHP interpreter. Clearly web-oriented, the language nevertheless suffers from many flaws, especially when it comes to performance and execution speed.

-Faced with these problems, Facebook (whose platform relies on PHP) launched in 2008 a project based on PHP with several improvements, both in the language's paradigms and in its execution. The project finally came to fruition under the name HHVM, and has been used in production by the company, doubling the language's execution speed by transforming the source code into [bytecode](https://en.wikipedia.org/wiki/Bytecode){:rel="nofollow noreferrer"}.
+Faced with these problems, Facebook (whose platform relies on PHP) launched in 2008 a project based on PHP with several improvements, both in the language's paradigms and in its execution. The project finally came to fruition under the name HHVM, and has been used in production by the company, doubling the language's execution speed by transforming the source code into [bytecode](https://en.wikipedia.org/wiki/Bytecode){:rel="nofollow noreferrer"}.

-Being freely distributed, HHVM has made its way over the last few years as an unofficial alternative to the PHP engine, used here and there by a few companies, and also cited in many benchmarks.
+Being freely distributed, HHVM has made its way over the last few years as an unofficial alternative to the PHP engine, used here and there by a few companies, and also cited in many benchmarks.

-To stem the rise of HHVM, the community of PHP engine developers had to respond with an official solution. What started as an API cleanup quickly drifted into an overhaul of the engine, named "PHP-NG" (New Generation). This branch was later merged back into the project's main branch in 2014. At the same time, PHP 6 was officially cancelled, and the integration of this new engine paved the way for PHP 7.
+To stem the rise of HHVM, the community of PHP engine developers had to respond with an official solution. What started as an API cleanup quickly drifted into an overhaul of the engine, named "PHP-NG" (New Generation). This branch was later merged back into the project's main branch in 2014. At the same time, PHP 6 was officially cancelled, and the integration of this new engine paved the way for PHP 7.

# What's new

-The engine overhaul is one of the major novelties of PHP 7, since it doubles the execution speed of the source code. But many features have been proposed, sometimes accepted, and sometimes rejected. This article is meant to be a summary of the major changes, not an exhaustive list.
+The engine overhaul is one of the major novelties of PHP 7, since it doubles the execution speed of the source code. But many features have been proposed, sometimes accepted, and sometimes rejected. This article is meant to be a summary of the major changes, not an exhaustive list.

## Spaceship operator

@@ -67,11 +67,11 @@
usort($r, function($a, $b) {
});
```

-An operator that clearly simplifies developers' lives. However, this operator matters little in object-oriented code, as it merely compares attribute values. It would have been interesting to create a Comparable-style interface, as exists in Java, to better handle comparison between objects.
+An operator that clearly simplifies developers' lives. However, this operator matters little in object-oriented code, as it merely compares attribute values. It would have been interesting to create a Comparable-style interface, as exists in Java, to better handle comparison between objects.

-##  Null coalesce operator
+## Null coalesce operator

-Another new operator, it serves two purposes: testing and assignment. Until now, you had to test a variable's existence before assigning it to another one through a condition (usually a ternary). Here again, the operator simplifies the developer's work:
+Another new operator, it serves two purposes: testing and assignment. Until now, you had to test a variable's existence before assigning it to another one through a condition (usually a ternary). Here again, the operator simplifies the developer's work:

```php
<?php
// Illustrative example of the null coalesce operator:
$username = $_GET['user'] ?? 'nobody';

// PHP 7 also accepts anonymous classes, which can be used to inline
// a one-off implementation of an interface (illustrative example):
$app->setLogger(new class implements LoggerInterface {
    public function log($level, $message, array $context = [])
    {
        echo $message;
    }

    // ...the other LoggerInterface methods are omitted here for brevity.
});
```

## Scalar Type Hinting

-PHP has always been known for its weak typing and its sometimes extreme permissiveness, which can lead to inconsistencies and many hours of debugging. In this new version of PHP, strong typing is probably one of the most important evolutions of the language, and it was not integrated without debate. No fewer than 5 proposals were needed to get this feature accepted.
+PHP has always been known for its weak typing and its sometimes extreme permissiveness, which can lead to inconsistencies and many hours of debugging. In this new version of PHP, strong typing is probably one of the most important evolutions of the language, and it was not integrated without debate. No fewer than 5 proposals were needed to get this feature accepted.

-The goal is to allow type hints on primitive (scalar) types in method and function arguments, as is already the case for objects, arrays and anonymous functions. Given the magnitude of the change, it was decided that this strong typing would be optional. To enable it, you use the statement "declare(strict_types=1);", which must moreover be the first statement after the opening "<?php" tag.
+The goal is to allow type hints on primitive (scalar) types in method and function arguments, as is already the case for objects, arrays and anonymous functions. Given the magnitude of the change, it was decided that this strong typing would be optional. To enable it, you use the statement "declare(strict_types=1);", which must moreover be the first statement after the opening "<?php" tag.

It is important to note that the engine can still auto-cast in some cases, and that it remains possible to force a cast manually when calling a function or method.

-An example of a workaround:
+An example of a workaround:
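By way of illustration, here is a minimal sketch of such a workaround, assuming a strictly typed `add()` function:

```php
<?php
declare(strict_types=1);

function add(int $a, int $b): int
{
    return $a + $b;
}

// add('5', '2'); // would throw a TypeError in strict mode.
var_dump(add((int) '5', (int) '2')); // int(7): a manual cast at call time works.
```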

Not having the internal dependencies in #composer anymore sounds pretty good to me! #Symfony_Live

— Jonathan (@CaptainJojo42) April 7, 2016

-But as Fabien Potencier says, the best is to have both repository setups: a monolithic one, and one repository per project you want to track. The goal is to be able to share pieces of the project without handing over the whole project, which is especially interesting when working on projects with external contractors.
+But as Fabien Potencier says, the best is to have both repository setups: a monolithic one, and one repository per project you want to track. The goal is to be able to share pieces of the project without handing over the whole codebase, which is especially interesting when working on projects with external contractors.

-The difficulty with this repository setup is building the tooling to keep track of all the projects. During the conference, Fabien Potencier showed us how he manages this rather complicated set-up using "git sub-tree", which had to be rewritten because of its slow execution (up to one full day).
+The difficulty with this repository setup is building the tooling to keep track of all the projects. During the conference, Fabien Potencier showed us how he manages this rather complicated set-up using "git sub-tree", which had to be rewritten because of its slow execution (up to one full day).