diff --git a/Early Returns/Baby Steps/task.md b/Early Returns/Baby Steps/task.md index 0cd46df..0b2c0ff 100644 --- a/Early Returns/Baby Steps/task.md +++ b/Early Returns/Baby Steps/task.md @@ -2,21 +2,21 @@ First, let's consider a concrete example of a program in need of early returns. Let's assume we have a database of user entries. -The access to the database is resource-heavy, and the user data is large. -Because of this, we only operate on user identifiers and retrieve the user data from the database only if needed. +Accessing this database is resource-intensive, and the user data is extensive. +As a result, we only operate on user identifiers and retrieve the user data from the database only when necessary. -Now, imagine that many of those user entries are invalid in one way or the other. +Now, imagine that many of those user entries are invalid in some way. For the brevity of the example code, we'll confine our attention to incorrect emails: those that either -contain a space character or have the number of `@` symbols which is different from `1`. -In the latter tasks, we'll also discuss the case when the user with the given ID does not exist in the database. +contain a space character or have a count of `@` symbols different from `1`. +In subsequent tasks, we'll also discuss the case when the user with the given ID does not exist in the database. We'll start with a sequence of user identifiers. Given an identifier, we first retrieve the user data from the database. This operation corresponds to the *conversion* in the previous lesson: we convert an integer number into an -instance of class `UserData`. +instance of the `UserData` class. Following this step, we run *validation* to check if the email is correct. -Once we found the first valid instance of `UserData`, we should return it immediately without processing -of the rest of the sequence. +Upon locating the first valid instance of `UserData`, we should return it immediately, ceasing any further processing +of the remaining sequence. ```scala 3 object EarlyReturns: @@ -60,8 +60,8 @@ object EarlyReturns: ``` The typical imperative approach is to use an early return from a `for` loop. -We perform the conversion followed by validation and, if the data is valid, we return the data, wrapped in `Some`. -If no valid user data has been found, then we return None after going through the whole sequence of identifiers. +We perform the conversion, followed by validation, and if the data is found valid, we return it, wrapped in `Some`. +If no valid user data has been found, we return `None` after traversing the entire sequence of identifiers. ```scala 3 /** @@ -74,12 +74,12 @@ If no valid user data has been found, then we return None after going through th None ``` -This solution is underwhelming because it uses `return` which is not idiomatic in Scala. +This solution is underwhelming because it uses `return`, which is not idiomatic in Scala. A more functional approach is to use higher-order functions over collections. -We can `find` a `userId` in the collection, for which `userData` is valid. -But this necessitates calling `complexConversion` twice, because `find` returns the original identifier instead -of the `userData`. +We can `find` a `userId` in the collection for which the `userData` is valid. +However, this necessitates calling `complexConversion` twice, as `find` returns the original identifier rather +than the `userData`. ```scala 3 /** @@ -92,12 +92,12 @@ of the `userData`. 
 ```
 
-Or course, we can run `collectFirst` instead of `find` and `map`.
-This implementation is more concise than the previous, but we still cannot avoid running the conversion twice.
-In the next lesson, we'll use a custom `unapply` method to get rid of the repeated computations.
+Of course, we can run `collectFirst` instead of `find` and `map`.
+This implementation is more concise than the previous one, but it still doesn't allow us to avoid running the conversion twice.
+In the next lesson, we'll use a custom `unapply` method to eliminate the need for these repeated computations.
 
 ```scala 3
 /**
- * A more concise implementation which uses `collectFirst`.
+ * A more concise implementation, which uses `collectFirst`.
  */
 def findFirstValidUser3(userIds: Seq[UserId]): Option[UserData] =
   userIds.collectFirst {
@@ -108,23 +108,23 @@ In the next lesson, we'll use a custom `unapply` method to get rid of the repeat
 
 ### Exercise
 
-Let's come back to one of our examples from an earlier module.
-You are managing a cat shelter and keeping track of cats, their breeds and coats in a database.
+Let's revisit one of our examples from an earlier module.
+You are managing a cat shelter and keeping track of cats, their breeds, and coat types in a database.
 
-You notice that there are a lot of mistakes in the database introduced by a previous employee: there are short-haired mainecoons, long-haired sphynxes, and other inconsistensies.
-You don't have time to fix the database right now, because you see a potential adopter coming into the shelter.
-Your task is to find the first valid entry in the database to present the potential adopter with a cat.
+You notice numerous mistakes in the database made by a previous employee: there are short-haired Maine Coons, long-haired Sphynxes, and other inconsistencies.
+You don't have the time to fix the database right now because you see a potential adopter coming into the shelter.
+Your task is to find the first valid entry in the database and present the potential adopter with a cat.
 
-Implement `catConversion` method that fetches a cat from the `catDatabase` in the `Database.scala` file by its identifier.
-To do so, you will first need to consult another database "table" `adoptionStatusDatabase` to find out the name of a cat.
+Implement the `catConversion` method, which fetches a cat from the `catDatabase` in the `Database.scala` file by its identifier.
+To do this, you will first need to consult another database "table", `adoptionStatusDatabase`, to find out the cat's name.
 
-Then implement `furCharacteristicValidation` that checks that the fur characteristics in the database entry makes sense for the cat's particular breed.
-Consult the map `breedCharacteristics` for the appropriate fur characteristics for each bread.
+Then, implement `furCharacteristicValidation`, which checks whether the fur characteristics in the database entry make sense for the cat's particular breed.
+Consult the `breedCharacteristics` map for the appropriate fur characteristics for each breed.
 
 Finally, implement the search using the conversion and validation methods:
-* `imperativeFindFirstValidCat` that works in the imperative fashion.
-* `functionalFindFirstValidCat`, in the functional style.
-* `collectFirstFindFirstValidCat` using the `collectFirst` method.
+* `imperativeFindFirstValidCat`, which works in an imperative fashion.
+* `functionalFindFirstValidCat`, utilizing a functional style.
+* `collectFirstFindFirstValidCat`, using the `collectFirst` method.
 
-Ensure that your search does not traverse the whole database.
-We put some simple logging in the conversion and validation methods so that you could make sure of that. \ No newline at end of file +Ensure that your search does not traverse the entire database. +We've put some simple logging within the conversion and validation methods so that you can verify this. diff --git a/Early Returns/Breaking Boundaries/task.md b/Early Returns/Breaking Boundaries/task.md index 8d4a5ee..3625cd1 100644 --- a/Early Returns/Breaking Boundaries/task.md +++ b/Early Returns/Breaking Boundaries/task.md @@ -1,13 +1,13 @@ ## Breaking Boundaries Similarly to Java and other popular languages, Scala provides a way to break out of a loop. -Since Scala 3.3, it's achieved with a composition of boundaries and breaks which provides a cleaner alternative to +Since Scala 3.3, it's achieved with a composition of boundaries and breaks, which provides a cleaner alternative to non-local returns. With this feature, a computational context is established with `boundary:`, and `break` returns a value from within the enclosing boundary. Check out the [implementation](https://github.com/scala/scala3/blob/3.3.0/library/src/scala/util/boundary.scala) if you want to know how it works under the hood. -One important thing is that it ensures that the users never call `break` without an enclosing `boundary` thus making +One important thing is that it ensures that the users never call `break` without an enclosing `boundary`, thus making the code much safer. The following snippet showcases the use of boundary/break in its simplest form. @@ -24,15 +24,15 @@ Since it's the end of the method, it immediately returns `Some(userData)`. None ``` -Sometimes there are multiple boundaries, in this case one can add labels to `break` calls. +Sometimes, there are multiple boundaries, and in such cases, one can add labels to `break` calls. This is especially important when there are embedded loops. -One example of using labels can be found [here](https://gist.github.com/bishabosha/95880882ee9ba6c53681d21c93d24a97). +An example of using labels can be found [here](https://gist.github.com/bishabosha/95880882ee9ba6c53681d21c93d24a97). ### Exercise Finally, let's use boundaries to achieve the same result. -Let's try using lazy collection to achieve the same goal as in the previous tasks. +Let's try using a lazy collection to achieve the same goal as in the previous tasks. * Use a boundary to implement `findFirstValidCat`. * Copy the implementations of `furCharacteristicValidation` and `nonAdoptedCatConversion` from the previous task. diff --git a/Early Returns/Lazy Collection to the Rescue/task.md b/Early Returns/Lazy Collection to the Rescue/task.md index fe97ce0..817e471 100644 --- a/Early Returns/Lazy Collection to the Rescue/task.md +++ b/Early Returns/Lazy Collection to the Rescue/task.md @@ -1,14 +1,14 @@ ## Lazy Collection to the Resque One more way to achieve the same effect of an early return is to use the concept of a lazy collection. -A lazy collection doesn't store all its elements computed and ready to access. +A lazy collection doesn't store all its elements computed and ready for access. Instead, it stores a way to compute an element once it's needed somewhere. -This makes it possible to simply traverse the collection until we encounter the element which fulfills the conditions. -Since we aren't interested in the rest of the collection, its elements won't be computed. +This makes it possible to simply traverse the collection until we encounter an element that fulfills the conditions. 
+Since we aren't interested in the rest of the collection, those elements won't be computed. -As we've already seen a couple of modules ago, there are several ways to make a collection lazy. -The first one is by using [iterators](https://www.scala-lang.org/api/current/scala/collection/Iterator.html): we can call the `iterator` method on our sequence of identifiers. -Another way is to use [views](https://www.scala-lang.org/api/current/scala/collection/View.html) as we've done in one of the previous modules. +As we've already seen a couple of modules ago, there are several ways to convert a collection into a lazy one. +The first is by using [iterators](https://www.scala-lang.org/api/current/scala/collection/Iterator.html): we can call the `iterator` method on our sequence of identifiers. +Another way is to use [views](https://www.scala-lang.org/api/current/scala/collection/View.html), as we've done in one of the previous modules. Try comparing the two approaches on your own. ```scala 3 @@ -22,7 +22,7 @@ Try comparing the two approaches on your own. ### Exercise -Let's try using lazy collection to achieve the same goal as in the previous tasks. +Let's try using a lazy collection to achieve the same goal as in the previous tasks. * Use a lazy collection to implement `findFirstValidCat`. * Copy the implementations of `furCharacteristicValidation` and `nonAdoptedCatConversion` from the previous task. diff --git a/Early Returns/The Problem/task.md b/Early Returns/The Problem/task.md index 8b21e4f..71b8e6d 100644 --- a/Early Returns/The Problem/task.md +++ b/Early Returns/The Problem/task.md @@ -1,23 +1,23 @@ ## The Problem It is often the case that we do not need to go through all the elements in a collection to solve a specific problem. -For example, in the Recursion chapter of the previous module we saw a function to search for a key in a box. -It was enough to find a key, any key, and there wasn't any point continuing the search in the box after one had been found. +For example, in the Recursion chapter of the previous module, we saw a function to search for a key in a box. +It was enough to find a single key, and there was no point in continuing the search in the box after one had been found. -The problem might get trickier the more complex data is. -Consider an application designed to track the members of your team, detailing which projects they worked on and the +The problem might get trickier as data becomes more complex. +Consider an application designed to track your team members, detailing the projects they worked on and the specific days they were involved. -Then the manager of the team may use the application to run complicated queries such as the following: -* Find an occurrence of a day when the team worked more person-hours than X. -* Find an example of a bug which took more than Y days to fix. - -It's common to run some kind of conversion on an element of the original data collection into a derivative entry which -describes the problem domain better. -Then this converted entry is validated with a predicate to decide whether it's a suitable example. -Both the conversion and the verification may be expensive, which makes the naive implementation such as we had for the -key searching problem inefficient. -In languages such as Java you can use `return` to stop the exploration of the collection once you've found your answer. 
-You would have an implementation which looks somewhat like this: +Then, the team manager could use the application to run complicated queries such as the following: +* Find an instance when the team worked more person-hours than X in a day. +* Find an example of a bug that took longer than Y days to fix. + +It's common to run some kind of conversion on an element of the original data collection into a derivative entry that +better describes the problem domain. +Then, this converted entry is validated with a predicate to decide whether it's a suitable example. +Both the conversion and the verification may be expensive, which makes a naive implementation, like our +key search example, inefficient. +In languages such as Java, you can use `return` to stop the exploration of the collection once you've found your answer. +The implementation might look something like this: ```java Bar complexConversion(Foo foo) { @@ -37,14 +37,14 @@ Bar findFirstValidBar(Collection foos) { } ``` -Here we enumerate the elements of the collection `foos` in order, running the `complexConversion` on them followed by -the `complexValidation`. -If we find the element for which `complexValidation(bar)` succeeds, than the converted entry is immediately returned +Here, we enumerate the elements of the collection `foos` sequentially, running `complexConversion` on them, followed by +`complexValidation`. +If we find the element for which `complexValidation(bar)` succeeds, the converted entry is immediately returned, and the enumeration is stopped. -If there was no such element, then `null` is returned after all the elements of the collection are explored in vain. +If there was no such element, then `null` is returned after the entire collection has been explored without success. How do we apply this pattern in Scala? -It's tempting to translate this code line-by-line in Scala: +It's tempting to translate this code line-by-line directly into Scala: ```scala 3 def complexConversion(foo: Foo): Bar = ... @@ -59,16 +59,16 @@ def findFirstValidBar(seq: Seq[Foo]): Option[Bar] = { } ``` -We've replaced `null` with the more appropriate `None`, but otherwise the code stayed the same. +We've replaced `null` with the more appropriate `None`, but otherwise, the code remains the same. However, this is not good Scala code, where the use of `return` is not idiomatic. Since every block of code in Scala is an expression, the last expression within the block is what is returned. You can write `x` instead of `return x` for the last expression, and it would have the same semantics. -Once `return` is used in the middle of a block, the programmer can no longer rely on that the last statement is the one +Once `return` is used in the middle of a block, the programmer can no longer rely on the last statement as the one returning the result from the block. -This makes the code less readable, makes it harder to inline code and ruins referential transparency. +This makes the code less readable, makes it harder to inline code, and ruins referential transparency. Thus, using `return` is considered a code smell and should be avoided. -In this module we'll explore more idiomatic ways to do early returns in Scala. +In this module, we'll explore more idiomatic ways to do early returns in Scala. diff --git a/Early Returns/Unapply/task.md b/Early Returns/Unapply/task.md index d89ff8e..45ba11e 100644 --- a/Early Returns/Unapply/task.md +++ b/Early Returns/Unapply/task.md @@ -1,8 +1,8 @@ ## Unapply -Unapply methods form a basis of pattern matching. 
-Its goal is to extract data compacted in objects.
-We can create a custom extractor object for user data validation with the suitable unapply method, for example:
+Unapply methods form the basis of pattern matching.
+Their goal is to extract data encapsulated in objects.
+We can create a custom extractor object for user data validation with a suitable unapply method, for example:
 
 ```scala 3
 object ValidUser:
@@ -25,11 +25,11 @@ As a result, we get this short definition of our search function.
 }
 ```
 
-It's at this point that an observant reader is likely to protest.
+At this point, an observant reader is likely to protest.
 This solution is twice as long as the imperative one we started with, and it doesn't seem to do anything extra!
 One thing to notice here is that the imperative implementation is only concerned with the "happy" path.
-What if there are no records in the database for some of the user identifiers?
-The conversion function becomes partial, and, being true to the functional method, we need to return optional value:
+But what if there are no records in the database for some of the user identifiers?
+The conversion function becomes partial, and, adhering to the functional method, we need to return an optional value:
 
 ```scala 3
 /**
@@ -39,8 +39,8 @@ The partiality of the conversion will unavoidably complicate the imperative search function.
 ```
 
 The partiality of the conversion will unavoidably complicate the imperative search function.
 The code still has the same shape, but it has to go through additional hoops to accommodate partiality.
-Note that every time a new complication arises in the business logic, it has to be reflected inside
+Note that every time a new complication arises in the business logic, it has to be reflected within
 the `for` loop.
 
 ```scala 3
@@ -74,15 +74,15 @@ search function stays the same.
 }
 ```
 
-In general, there might be several ways in which user data might be valid.
+In general, there might be several ways in which user data could be valid.
 Imagine that there is a user who doesn't have an email.
-In this case `complexValidation` returns `false`, but the user may still be valid.
+In this case, `complexValidation` returns `false`, but the user might still be valid.
 For example, it may be an account that belongs to a child of another user.
-We don't need to message the child, instead it's enough to reach out to their parent.
+We don't need to message the child; instead, it's enough to reach out to their parent.
-Even though this case is less common than the one we started with, we still need to keep it mind.
+Even though this case is less common than the one we started with, we still need to keep it in mind.
-To do it, we can create a different extractor object with its own `unapply` and pattern match against it
-if the first validation failed.
-We do run the conversion twice in this case, but it is less important because of how rare this case is.
+To account for it, we can create a different extractor object with its own `unapply` and pattern match against it
+if the first validation fails.
+We do run the conversion twice in this case, but its impact is less significant due to the rarity of this scenario.
 
 ```scala 3
 object ValidUserInADifferentWay:
@@ -98,10 +98,10 @@ We do run the conversion twice in this case, but it is less important because of
 
 Both extractor objects work in the same way.
 They run a conversion method, which may or may not succeed.
-If conversion succeeds, its result is validated and returned when valid.
-All this is done with the `unapply` method whose implementation stays the same regardless of the other methods. +If the conversion succeeds, its result is validated and returned when it is valid. +All of this is done with the `unapply` method, whose implementation stays the same regardless of the other methods. This forms a nice framework which can be abstracted as a trait we call `Deconstruct`. -It has the `unapply` method which calls two abstract methods `convert` and `validate` that operate on generic +It has the `unapply` method that calls two abstract methods, `convert` and `validate`, which operate on generic types `From` and `To`. ```scala 3 @@ -126,7 +126,7 @@ It uses `safeComplexConversion` and `complexValidation` respectively. ``` Finally, the search function stays the same, but now it uses the `unapply` method defined in -the `Deconstruct` trait while pattern matching: +the `Deconstruct` trait during pattern matching: ```scala 3 def findFirstValidUser8(userIds: Seq[UserId]): Option[UserData] = @@ -137,16 +137,16 @@ the `Deconstruct` trait while pattern matching: ### Exercise -You have noticed that the first cat with a valid fur pattern you found had already been adopted. -Now you need to include the check whether a cat is still in the shelter in the validation. +You have noticed that the first cat found with a valid fur pattern has already been adopted. +Now you need to include a check in the validation to ensure that the cat is still in the shelter. -* Implement `nonAdoptedCatConversion` to only select the cats that are still up for adoption +* Implement `nonAdoptedCatConversion` to only select cats that are still up for adoption. * Copy your implementation of the `furCharacteristicValidation` function from the previous task. -* Implement your custom `unapply` method for the `ValidCat` object, and use it to write the `unapplyFindFirstValidCat` function. Validation of the fur characteristics should not be run on cats who have been adopted. +* Implement your custom `unapply` method for the `ValidCat` object, and use it to write the `unapplyFindFirstValidCat` function. The validation of fur characteristics should not be run on cats who have been adopted. -Next, you notice that there are some inaccuracies in coat patterns: no bengal cat can be of solid color! +Next, you notice that there are some inaccuracies in the coat patterns: no Bengal cat can be of a solid color! * Implement the validation of the coat pattern using a custom `unapply` method. -* Use `ValidPattern` object that extends the `Deconstruct` trait. +* Use the `ValidPattern` object that extends the `Deconstruct` trait. * Use the custom `unapply` method in the `findFirstCatWithValidPattern` function. diff --git a/Expressions over Statements/Pure vs Impure Functions/task.md b/Expressions over Statements/Pure vs Impure Functions/task.md index 4938b11..52942be 100644 --- a/Expressions over Statements/Pure vs Impure Functions/task.md +++ b/Expressions over Statements/Pure vs Impure Functions/task.md @@ -25,13 +25,13 @@ Its performance should neither be influenced by the external world nor impact it You might argue that pure functions seem entirely useless. If they cannot interact with the outer world or mutate anything, how is it possible to derive any value from them? -Why even use pure functions? -The fact is, they conform much better than impure counterparts. +Why should we even use pure functions? +The fact is, they conform much better than their impure counterparts. 
Since there are no hidden interactions, it's much easier to verify that your pure function does what it is supposed to do and nothing more. Moreover, they are much easier to test, as you do not need to mock a database if the function never interacts with one. -Some programming languages, such as Haskell, restrict impurity and reflect any side effects in types. +Some programming languages, such as Haskell, limit impurity and reflect any side effects in their types. However, it can be quite restricting and is not an approach utilized in Scala. The idiomatic method is to write your code in such a way that the majority of it is pure, and impurity is only used where it is absolutely necessary, similar to what we did with mutable data. @@ -49,5 +49,5 @@ def g(x: Int): Int = ### Exercise -Implement the pure function `calculateAndLogPure` which does the same thing as `calculateAndLogImpure`, but without -using global variable. +Implement the pure function `calculateAndLogPure`, which does the same thing as `calculateAndLogImpure`, but without +using a global variable. diff --git a/Expressions over Statements/Tail Recursion/task.md b/Expressions over Statements/Tail Recursion/task.md index 9127a62..b5cc683 100644 --- a/Expressions over Statements/Tail Recursion/task.md +++ b/Expressions over Statements/Tail Recursion/task.md @@ -7,7 +7,7 @@ Each time a function is called, some information regarding the call is placed on allocated. This information is kept there until all computations within the function are completed, after which the stack is deallocated (the information about the function call is removed from the stack), and the computed value is returned. -If a function calls another function, the stack is allocated again before deallocating the previous function's call. What is worse, we wait until the inner call is complete, its stack frame is deallocated, and its value returned to compute +If a function calls another function, the stack is allocated again before deallocating the previous function's call. What is worse, we wait until the inner call is complete, its stack frame is deallocated, and its value returned before we can compute the result of the caller function. This is especially significant for recursive functions because the depth of the call stack can be astronomical. @@ -37,7 +37,7 @@ Calling `factorial` with a large enough argument (like `10000` on my computer) r computation doesn't produce any result. Don't get discouraged! -There is a well-known optimisation technique capable of mitigating this issue. +There is a well-known optimization technique capable of mitigating this issue. It involves rewriting your recursive function into a tail-recursive form. In this form, the recursive call should be the last operation the function performs. For example, `factorial` can be rewritten as follows: @@ -50,13 +50,13 @@ def factorial(n: BigInt): BigInt = go(n, 1) ``` -We add a new parameter `accumulator` to the recursive function where we keep track of the partially computed +We add a new parameter `accumulator` to the recursive function to keep track of the partially computed multiplication. -Notice that the recursive call to `go` is the last operation that happens in the `else` branch of the `if` condition. +Notice that the recursive call to `go` is the last operation in the `else` branch of the `if` condition. Whatever value the recursive call returns is simply returned by the caller. 
There is no reason to allocate any stack frames because nothing is awaiting the result of the recursive call to enable further computation. -Smart enough compilers (and the Scala compiler is one of them) is capable to optimize away the unnecessary stack +Smart enough compilers (and the Scala compiler is one of them) can optimize away the unnecessary stack allocations in this case. Go ahead and try to find an `n` such that the tail-recursive `factorial` results in a stack overflow. Unless something goes horribly wrong, you should not be able to find such an `n`. @@ -64,12 +64,12 @@ Unless something goes horribly wrong, you should not be able to find such an `n` By the way, do you remember the key searching function we implemented in the previous task? Have you wondered how we got away not keeping track of a collection of boxes to look through? The trick is that the stack replaces that collection. -All the boxes to be considered are somewhere on the stack, patiently waiting their turn. +All the boxes to be considered are somewhere on the stack, patiently awaiting their turn. Is there a way we can make that function tail-recursive? Yes, of course, there is! Similar to the `factorial` function, we can create a helper function `go` with an extra parameter `boxesToLookIn` -to keep track of the boxes to search the key in. +to keep track of the boxes to search for the key in. This way, we can ensure that `go` is tail-recursive, i.e., either returns a value or calls itself as its final step. ```scala 3 @@ -90,10 +90,10 @@ def lookForKey(box: Box): Option[Key] = In Scala, there is a way to ensure that your function is tail-recursive: the `@tailrec` annotation from `scala.annotation.tailrec`. It checks if your implementation is tail-recursive and triggers a compiler error if it is not. -We recommend using this annotation to ensure that the compiler is capable of optimizing your code, even through its +We recommend using this annotation to ensure that the compiler is capable of optimizing your code, even through future changes. ### Exercise Implement tail-recursive functions for reversing a list and finding the sum of digits in a non-negative number. -We annotated the helper functions with `@tailrec` so that the compiler can verify this property for us. +We've annotated the helper functions with `@tailrec` so that the compiler can verify this property for us. diff --git a/Expressions over Statements/What is an Expression/task.md b/Expressions over Statements/What is an Expression/task.md index 1d8831f..8f68486 100644 --- a/Expressions over Statements/What is an Expression/task.md +++ b/Expressions over Statements/What is an Expression/task.md @@ -57,7 +57,7 @@ Depending on the condition, we execute one of the two `println` statements. Notice that no value is returned. Instead, everything the function does is a side effect of printing to the console. This style is not considered idiomatic in Scala. -Instead, it's preferably for a function to return a string value, which is then printed, like so: +Instead, it's preferable for a function to return a string value, which is then printed, like so: ```scala 3 def even(number: Int): String = { @@ -72,7 +72,7 @@ def main(): Unit = { } ``` -This way, you separate the logic of computing the values from outputting them. +This way, you separate the logic of computing values from outputting them. It also makes your code more readable. 
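+
+As one more small illustration (a sketch that is not part of the original lesson — the exact messages below are invented), note that `if` itself is an expression in Scala 3, so the body of `even` can be a single expression whose value is returned:
+
+```scala 3
+// Hypothetical one-expression variant: the `if` expression produces the String directly,
+// so there is no separate statement and no printing inside the function.
+def even(number: Int): String =
+  if number % 2 == 0 then s"$number is even" else s"$number is odd"
+```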
### Exercise diff --git a/Functions as Data/anonymous_functions/task.md b/Functions as Data/anonymous_functions/task.md index ae71bf1..67dbd6b 100644 --- a/Functions as Data/anonymous_functions/task.md +++ b/Functions as Data/anonymous_functions/task.md @@ -1,7 +1,7 @@ # Anonymous functions -An anonymous function is a function that, quite literally, does not have a name. I -t is defined only by its arguments list and computations. +An anonymous function is a function that, quite literally, does not have a name. It +is defined only by its argument list and computations. Anonymous functions are also known as lambda functions, or simply lambdas. Anonymous functions are particularly useful when we need to pass a function as an argument to another function, or when we want to create a function that is only used once and is not worth defining separately. @@ -21,7 +21,7 @@ To do that, we use the `map` method. We define an anonymous function `x => x * 2` and give it to the `map` method as its only argument. The `map` method applies this anonymous function to each element of `numbers` and returns a new list, which we call `doubled`, containing the doubled values. -Anonymous functions can access variables that are in scope at their definition. +Anonymous functions can access variables that are within scope at the time of their definition. Consider the `multiplyList` function, which multiplies every number in a list by a `multiplier`. The parameter `multiplier` can be used inside `map` without any issues. @@ -32,7 +32,7 @@ def multiplyList(multiplier: Int, numbers: List[Int]): List[Int] = ``` -When a parameter is only used once in the anonymous function, Scala allows omitting the argument's name by using `_` instead. +When a parameter is used only once in the anonymous function, Scala allows omitting the argument's name and using `_` instead. However, note that if a parameter is used multiple times, you must use names to avoid confusion. The Scala compiler will report an error if you fail to adhere to this rule. @@ -44,15 +44,15 @@ def multiplyPairs(numbers: List[(Int, Int)]): List[Int] = numbers.map((x, y) => // Scala associates the wildcards with the parameters in the order they are passed. def multiplyPairs1(numbers: List[(Int, Int)]): List[Int] = numbers.map(_ * _) -// We compute a square of each element of the list using an anonymous function. +// We compute the square of each element in the list using an anonymous function. def squareList(numbers: List[Int]): List[Int] = numbers.map(x => x * x) // In this case, omitting parameters' names is disallowed. -// You can see how it can be confusing, if you compare it with `multiplyPairs1`. +// You can see how it can be confusing if you compare it with `multiplyPairs1`. def squareList1(numbers: List[Int]): List[Int] = numbers.map(_ * _) ``` ## Exercise -Implement the `multiplyAndOffsetList` function that multiplies and offsets each element of the list. +Implement the `multiplyAndOffsetList` function, which multiplies and offsets each element in the list. Use `map` and an anonymous function. 
diff --git a/Functions as Data/filter/task.md b/Functions as Data/filter/task.md index 454fa54..7811e03 100644 --- a/Functions as Data/filter/task.md +++ b/Functions as Data/filter/task.md @@ -20,7 +20,7 @@ case class Cat(name: String, color: Color) // Let’s import the Color enum values for better readability import Color._ -// We create four cats, two black, one white, and one ginger +// We create four cats: two black, one white, and one ginger val felix = Cat("Felix", Black) val snowball = Cat("Snowball", White) val garfield = Cat("Garfield", Ginger) @@ -48,6 +48,6 @@ There are multiple cats available, and you wish to adopt a cat with one of the f * The cat is calico. * The cat is fluffy. -* The cat's breed is Abyssinian. +* The cat is of the Abyssinian breed. -To simplify decision making, you first identify all cats which possess at least one of the characteristics above. Your task is to implement the necessary functions and then apply the filter. +To simplify decision making, you first identify all the cats which possess at least one of the characteristics above. Your task is to implement the necessary functions and then apply the filter. diff --git a/Functions as Data/foldLeft/task.md b/Functions as Data/foldLeft/task.md index 515921f..6826ba2 100644 --- a/Functions as Data/foldLeft/task.md +++ b/Functions as Data/foldLeft/task.md @@ -2,23 +2,23 @@ `def foldLeft[B](acc: B)(f: (B, A) => B): B` -The `foldLeft` method is another method in Scala collections that can be percieved as a generalized version of `map`, but generalized differently than `flatMap`. +The `foldLeft` method is another method in Scala collections that can be perceived as a generalized version of `map`, but generalized differently than `flatMap`. Let's say that we call `foldLeft` on a collection of elements of type `A`. `foldLeft` takes two arguments: the initial "accumulator" of type `B` (usually different from `A`) and a function `f`, which again takes two arguments: the accumulator (of type `B`) and the element from the original collection (of type `A`). `foldLeft` starts its work by taking the initial accumulator and the first element of the original collection and assigning them to `f`. The `f` function uses these two to produce a new version of the accumulator — i.e., a new value of type `B`. -This new value, the new accumulator, is again provided to `f`, this time together with the second element in the collection. -The process repeats itself until all elements of the original collection have been iterated over. +This new value, the updated accumulator, is again provided to `f`, this time together with the second element in the collection. +The process repeats until all elements of the original collection have been iterated over. The final result of `foldLeft` is the accumulator value after processing the last element of the original collection. -The "fold" part of the `foldLeft` method's name derives from the idea that `foldLeft`'s operation might be viewed as "folding" a collection of elements, one into another, until ultimately, a single value — the final result. -The suffix "left" is there to indicate that in the case of ordered collections, we are proceeding from the beginning of the collection (left) to its end (right). +The "fold" part of the `foldLeft` method's name derives from the idea that `foldLeft`'s operation might be viewed as "folding" a collection of elements, one into another, until ultimately, a single value — the final result — is produced. 
+The suffix "left" indicates that in the case of ordered collections, we are proceeding from the beginning of the collection (left) to its end (right). There is also `foldRight`, which works in the reverse direction. Let's see how we can implement a popular coding exercise, *FizzBuzz*, using `foldLeft`. In *FizzBuzz*, we are supposed to print out a sequence of numbers from 1 to a given number (let's say 100). However, each time the number we are to print out is divisible by 3, we print "Fizz"; if it's divisible by 5, we print "Buzz"; and if it's divisible by 15, we print "FizzBuzz". -Here is how we can accomplish this with foldLeft in Scala 3: +Here is how we can accomplish this with `foldLeft` in Scala 3: ```scala def fizzBuzz(n: Int): Int | String = n match @@ -36,9 +36,9 @@ val fizzBuzzList = numbers.foldLeft[List[Int | String]](Nil) { (acc, n) => acc : println(fizzBuzzList) ``` -First, we write the `fizzBuzz` method, which takes an `Int` and returns either an `Int` (the number that it received) or a `String: "Fizz", "Buzz", or "FizzBuzz". +First, we write the `fizzBuzz` method, which takes an `Int` and returns either an `Int` (the number that it received) or a `String`: "Fizz", "Buzz", or "FizzBuzz". With the introduction of union types in Scala 3, -we can declare that our method can return any of two or more unrelated types, but it will definitely be one of them. +we can declare that our method can return any one of two or more unrelated types. However, it is guaranteed that the return will be one of them. Next, we create a range of numbers from 1 to 100 using `1 to 100`. @@ -47,7 +47,7 @@ We call the `foldLeft` method on the numbers range, stating that the accumulator The second argument to `foldLeft` is a function that takes the current accumulator value (`acc`) and an element from the numbers range (`n`). This function calls our `fizzBuzz` method with the number and appends the result to the accumulator list using the `:+` operator. -Once all the elements have been processed, `foldLeft returns the final accumulator value, which is the complete list of numbers and strings "Fizz", "Buzz", and "FizzBuzz", replaceing numbers that were divisible by 3, 5, and 15, respectively. +Once all the elements have been processed, `foldLeft returns the final accumulator value, which is the complete list of numbers and strings "Fizz", "Buzz", and "FizzBuzz", replacing the numbers that were divisible by 3, 5, and 15, respectively. Finally, we print out the results. diff --git a/Functions as Data/foreach/task.md b/Functions as Data/foreach/task.md index 96d14d6..32d1ad7 100644 --- a/Functions as Data/foreach/task.md +++ b/Functions as Data/foreach/task.md @@ -8,7 +8,7 @@ We assume that `f` performs side effects (we can ignore the `U` result type of t You can think of the `foreach` method as a simple for-loop that iterates over a collection of elements without altering them. Note that in functional programming, we try to avoid side effects. -In this course, you will learn how to achieve the same results functionally, but in the beginning, `foreach` can be helpful display computing results, debug, and experiment. +In this course, you will learn how to achieve the same results functionally, but in the beginning, `foreach` can be helpful for displaying computing results, debugging, and experimentation. In the following example, we will use `foreach` to print out the name and color of each of our four cats. 
diff --git a/Functions as Data/functions_returning_functions/task.md b/Functions as Data/functions_returning_functions/task.md index 4cda4c9..5140fe7 100644 --- a/Functions as Data/functions_returning_functions/task.md +++ b/Functions as Data/functions_returning_functions/task.md @@ -17,7 +17,7 @@ val calc = new CalculatorPlusN(3) calc.add(1 , 2) ``` -Now, instead of having a class that stores this additional number `n`, we can create and return the adder function to achieve the same result: +Now, instead of having a class that stores this additional number `n`, we can create and return the `adder` function to achieve the same result: ```scala // Define a function that takes a fixed number and returns a new function that adds it to its input @@ -30,12 +30,12 @@ val add = addFixedNumber(3) add(1, 2) ``` -In the above example, we define a function `addFixedNumber` that takes an integer `n` and returns a new function that takes two integers, `x` and `y`, and returns the sum of `n` and `x` and `y`. +In the above example, we define a function `addFixedNumber` that takes an integer `n` and returns a new function, which takes two integers, `x` and `y`, and returns the sum of `n`, `x`, and `y`. Note the return type of `addFixedNumber` — it's a function type `(Int, Int) => Int`. -Then, we define the new function adder inside `addFixedNumber`, which captures the value of `n` and adds it to its own two arguments, `x` and `y`. +Then, we define the new function, `adder`, inside `addFixedNumber`, which captures the value of `n` and adds it to its own two arguments, `x` and `y`. The `adder` function is then returned as the result of `addFixedNumber`. -We then construct a specialized function add by calling `addFixedNumber(n: Int)` with `n` equal to `3`. +We then construct a specialized function `add` by calling `addFixedNumber(n: Int)` with `n` equal to `3`. Now, we can call `add` on any two integers; as a result, we will get the sum of these integers plus `3`. Scala provides special syntax for functions returning functions, as shown below: @@ -49,7 +49,7 @@ val add = addFixedNumber(3) The first argument of the function `addFixedNumber` is enclosed within its own set of parentheses, while the second and third arguments are enclosed within another pair of parentheses. The function `addFixedNumber` can then be supplied with only the first argument, which creates a function expecting the next two arguments: `x` and `y`. -You can also call the function with all three arguments, but they should be enclosed in separate parentheses: `addFixedNumber1(3)(4, 5)` instead of `addFixedNumber(3,4,5)`. +You can also call the function with all three arguments, but they should be enclosed in separate parentheses: `addFixedNumber1(3)(4, 5)` rather than `addFixedNumber(3,4,5)`. Notice that you cannot pass only two arguments into the function written in this syntax: `addFixedNumber1(3)(4)` is not allowed. diff --git a/Functions as Data/map/task.md b/Functions as Data/map/task.md index 1c5a753..cccf643 100644 --- a/Functions as Data/map/task.md +++ b/Functions as Data/map/task.md @@ -3,11 +3,11 @@ `def map[B](f: A => B): Iterable[B]` The `map` method works on any Scala collection that implements `Iterable`. -It takes a function `f` and applies it to each element in the collection, similar to `foreach`. However, in the case of `map`, we are more interested in the results of `f` and not its side effects. +It takes a function `f` and applies it to each element in the collection, similar to `foreach`. 
However, in the case of `map`, we are more interested in the results of `f` than its side effects. As you can see from the declaration of `f`, it takes an element of the original collection of type `A` and returns a new element of type `B`. Finally, the map method returns a new collection of elements of type `B`. -In a special case, `B` can be the same as `A`, so for example, we use the `map` method to take a collection of cats of certain colors and create a new collection of cats of different colors. -But, we can also, for example, take a collection of cats and create a collection of cars with colors that match the colors of our cats. +In a special case, `B` can be the same as `A`. So, for example, we could use the `map` method to take a collection of cats of certain colors and create a new collection of cats of different colors. +But, we could also take a collection of cats and create a collection of cars with colors that match the colors of our cats. ```scala // We define the Color enum @@ -46,6 +46,6 @@ Therefore, instead of a `Set`, we need a collection that allows multiple identic ## Exercise -In functional programming, we usually separate performing side effects from computations. +In functional programming, we usually separate side effects from computations. For example, if we want to print all fur characteristics of a cat, we first transform each characteristic into a `String`, and then output each one in a separate step. Implement the `furCharacteristicsDescription` function, which completes this transformation using `map`. diff --git a/Functions as Data/partial_fucntion_application/task.md b/Functions as Data/partial_fucntion_application/task.md index 97adb5f..5cdafd9 100644 --- a/Functions as Data/partial_fucntion_application/task.md +++ b/Functions as Data/partial_fucntion_application/task.md @@ -1,11 +1,11 @@ # Partial function application Returning functions from functions is related to, but not the same as, [partial application](https://en.wikipedia.org/wiki/Partial_application). -The former allows you create functions that behave as though they have a "hidden" list of arguments that you provide at the moment of creation, rather than at the moment of usage. -Each function returns a new function that accepts the next argument until all arguments are accounted for, and the final function returns the result. +The former allows you to create functions that behave as though they have a "hidden" list of arguments provided at the moment of creation, rather than at the moment of use. +Each function returns a new function that accepts the next argument until all arguments have been processed. The final function then returns the result. -On the other hand, partial function application refers the process of assigning fixed values to some of the arguments of a function and returning a new function that only takes the remaining arguments. +On the other hand, partial function application refers to the process of assigning fixed values to some of a function's arguments and returning a new function that only takes the remaining arguments. The new function is a specialized version of the original function with some arguments already provided. -This technique enables code reuse — we can write a more generic function and then construct its specialized versions to use in various contexts. +This technique enables code reuse — we can write a more generic function and then construct its specialized versions for use in various contexts. 
 Here's an example:
 
 ```scala
@@ -24,6 +24,6 @@ Finally, we call `add3` with only two arguments, obtaining the same result as wi
 
 ## Exercise
 
-Implement a function `filterList` that can then be partially applied.
+Implement a function `filterList`, which can then be partially applied.
 
 You can use the `filter` method in the implementation.
 
diff --git a/Functions as Data/passing_functions_as_arguments/task.md b/Functions as Data/passing_functions_as_arguments/task.md
index ababb5d..addc8f8 100644
--- a/Functions as Data/passing_functions_as_arguments/task.md
+++ b/Functions as Data/passing_functions_as_arguments/task.md
@@ -3,7 +3,7 @@
 We can pass a named function as an argument to another function just as we would pass any other value.
 This is useful, for example, when we want to manipulate data in a collection.
 There are many methods in Scala collections classes that operate by accepting a function as an argument and applying it in some way to each element of the collection.
-In the previous chapter, we saw how we can use the map method on a sequence of numbers to double them.
+In the previous chapter, we saw how we can use the `map` method on a sequence of numbers to double them.
 
 Now let's try something different.
 Imagine that you have a bag of cats with different colors, and you want to separate out only the black cats.
@@ -25,11 +25,11 @@ val bagOfBlackCats = bagOfCats.filter(cat => cat.color == Color.Black)
 ```
 
 In Scala 3, we can use enums to define colors.
-Then, we create a class `Cat`, which has a value for the color of the cat. Next, we create a "bag" of cats, which is a set with three cats: one black, one white, and one ginger.
-Finally, we use the `filter` method and provide it with an anonymous function as an argument. This function takes an argument of the class `Cat` and will return `true` if the color of the cat is black.
-The `filter` method will apply this function to each cat in the original set and create a new set with only those cats for which the function returns `true`.
+Then, we create a class `Cat`, which includes a value for the color of the cat. Next, we create a "bag" of cats, which is a set containing three cats: one black, one white, and one ginger.
+Finally, we use the `filter` method and provide it with an anonymous function as an argument. This function takes an argument of the `Cat` class and returns `true` if the cat's color is black.
+The `filter` method will apply this function to each cat in the original set and create a new set containing only those cats for which the function returns `true`.
 
-However, our function that checks if the cat is black doesn't have to be anonymous. The `filter method will work with a named function just as well.
+However, our function that checks if the cat is black doesn't have to be anonymous. The `filter` method will work just as well with a named function.
 
 ```scala
 def isCatBlack(cat: Cat): Boolean = cat.color == Color.Black
@@ -42,4 +42,4 @@ So far, you've seen examples of how this is done with `map` and `filter` — two
 
 ## Exercise
 
-Implement a function to check whether the cat is white or ginger and pass it as an argument to `filter` to create a bag of white or ginger cats.
+Implement a function to check whether the cat is white or ginger and pass it as an argument to `filter` to create a bag containing only white or ginger cats.
diff --git a/Functions as Data/scala_collections_overview/task.md b/Functions as Data/scala_collections_overview/task.md index 3899fd5..f0521b2 100644 --- a/Functions as Data/scala_collections_overview/task.md +++ b/Functions as Data/scala_collections_overview/task.md @@ -10,11 +10,11 @@ By default, Scala encourages the use of immutable collections because they are s Here's an overview of the main traits and classes: 1. `Iterable`: All collections that can be traversed in a linear sequence extend `Iterable`. It provides methods like `iterator`, `map`, `flatMap`, `filter`, and others, which we will discuss shortly. -2. `Seq`: This trait represents sequences, i.e., ordered collections of elements. It extends `Iterable` and provides methods like `apply(index: Int): T` (which allows you access an element at a specific index) and `indexOf(element: T): Int` (which returns the index of the first occurrence in the sequence that matches the provided element, or -1 if the element can't be found). Some essential classes implementing the `Seq` trait include `List`, `Array`, `Vector`, and `Queue`. -3. `Set`: Sets are unordered collections of unique elements. It extends Iterable but not `Seq` — you can't assign fixed indices to its elements. The most common implementation of `Set` is `HashSet`. -4. `Map`: A map is a collection of key-value pairs. It extends Iterable and provides methods like `get`, `keys`, `values`, `updated`, and more. It's unordered, similar to `Set`. The most common implementation of `Map` is `HashMap`. +2. `Seq`: This trait represents sequences, i.e., ordered collections of elements. It extends `Iterable` and provides methods like `apply(index: Int): T` (which allows you to access an element at a specific index) and `indexOf(element: T): Int` (which returns the index of the first occurrence in the sequence that matches the provided element, or -1 if the element can't be found). Some essential classes implementing the `Seq` trait include `List`, `Array`, `Vector`, and `Queue`. +3. `Set`: Sets are unordered collections of unique elements. It extends `Iterable` but not `Seq` — you can't assign fixed indices to its elements. The most common implementation of `Set` is `HashSet`. +4. `Map`: A map is a collection of key-value pairs. It extends `Iterable` and provides methods like `get`, `keys`, `values`, `updated`, and more. It's unordered, similar to `Set`. The most common implementation of `Map` is `HashMap`. We will now quickly review some of the most frequently used methods of Scala collections: `filter`, `find`, `foreach`, `map`, `flatMap`, and `foldLeft`. In each case, you will see a code example and be asked to do an exercise using the given method. -Please note that many other methods exist. We encourage you to consult the [Scala collections documentation](https://www.scala-lang.org/api/current/scala/collection/index.html) and browse through them. Being aware of their existence and realizing that you can use them instead of constructing some logic yourself may save you a substantial amount of effort. +Please note that there are many other methods available. We encourage you to consult the [Scala collections documentation](https://www.scala-lang.org/api/current/scala/collection/index.html) and browse through them. Being aware of their existence and realizing that you can use them instead of constructing your own logic may save a substantial amount of effort. 
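+
+As a hedged, illustrative sketch (these snippets are not taken from the course files), here is how a few of the methods mentioned above behave:
+
+```scala 3
+val names: Seq[String] = Seq("Felix", "Snowball", "Garfield")
+names(1)                    // apply: access by index -> "Snowball"
+names.indexOf("Garfield")   // -> 2
+
+val colors: Set[String] = Set("black", "white", "ginger")
+colors.contains("black")    // -> true; sets only keep unique elements
+
+val ages: Map[String, Int] = Map("Felix" -> 3, "Garfield" -> 5)
+ages.get("Snowball")        // -> None, because the key is absent
+ages.updated("Snowball", 1) // returns a new Map that also contains "Snowball" -> 1
+```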
diff --git a/Functions as Data/total_and_partial_functions/task.md b/Functions as Data/total_and_partial_functions/task.md index e7856a1..bf4cafe 100644 --- a/Functions as Data/total_and_partial_functions/task.md +++ b/Functions as Data/total_and_partial_functions/task.md @@ -58,15 +58,15 @@ val blackCats: Seq[Cat] = animals.collect { } ``` In this example, we first create an enum `Color` with three values: `Black`, `White`, and `Ginger`. -We define a trait, `Animal`, with two abstract methods: `name` and `color`. -We create case classes, `Cat` and `Dog`, that extend the `Animal` trait, and override the name and color methods with respective values. +We define a trait `Animal` with two abstract methods: `name` and `color`. +We create case classes `Cat` and `Dog` that extend the `Animal` trait, and override the `name` and `color` methods with respective values. Then, we create three instances of `Cat` (two black and one ginger) and two instances of `Dog` (one black and one white). -We consolidate them all into a sequence of type `Seq[Animal]`. +We consolidate all these instances into a sequence of type `Seq[Animal]`. Ultimately, we use the `collect` method on the sequence to create a new sequence containing only black cats. -The collect method applies a partial function to the original collection and constructs a new collection with only the elements for which the partial function is defined. -You can perceive it as combibibg the filter and map methods. -In the above example, we provide collect with the following partial function: +The `collect` method applies a partial function to the original collection and constructs a new collection containing only the elements for which the partial function is defined. +You can perceive it as the combination of the `filter` and `map` methods. +In the above example, we provide `collect` with the following partial function: ```scala case cat: Cat if cat.color == Black => cat @@ -74,8 +74,8 @@ case cat: Cat if cat.color == Black => cat The `case` keyword at the beginning tells us that the function will provide a valid result only in the following case: the input value needs to be of the type `Cat` (not just any `Animal` from our original sequence), and the color of that cat needs to be `Black`. -If these conditions are met, the function will return the cat, however, as an instance of the type `Cat`, not just `Animal`. -Thanks to this, we can specify that the new collection created by the collect method is a sequence of the type `Seq[Cat]`. +If these conditions are met, the function will return the cat, but as an instance of the type `Cat`, not just `Animal`. +As a result, we can specify that the new collection created by the `collect` method is a sequence of the type `Seq[Cat]`. ## Exercise diff --git a/Functions as Data/what_is_a_function/task.md b/Functions as Data/what_is_a_function/task.md index 34ddb05..1f08a67 100644 --- a/Functions as Data/what_is_a_function/task.md +++ b/Functions as Data/what_is_a_function/task.md @@ -1,12 +1,12 @@ # What is a function? A function is a standalone block of code that takes arguments, performs some calculations, and returns a result. -It may or may not have side effects; that is, it may have access to the data in the program, and should the data be modifiable, the function might alter it. +It may or may not have side effects; that is, it may access the data in the program, and if the data is modifiable, the function might alter it. 
If it doesn't — meaning, if the function operates solely on its arguments — we state that the function is pure. -In functional programming, we use pure functions whenever possible, although this rule does have important exceptions, -which we will discuss them later. +In functional programming, we use pure functions whenever possible, although this rule does have important exceptions, +which we will discuss later. The main difference between a function and a method is that a method is associated with a class or an object. -On the other hand, a function is treated just like any other value in the program: it can be created in any place in the code, passed as an argument, returned from another function or method, etc. +On the other hand, a function is treated just like any other value in the program: it can be created anywhere in the code, passed as an argument, returned from another function or method, etc. Consider the following code: ```Scala @@ -26,18 +26,18 @@ class Calculator: Both `add` functions take two input parameters, `x` and `y`, perform a pure computation of adding them together, and return the result. They do not alter any external state. In the first case, we define a function with the `def` keyword. -After def comes the function's name, then the list of arguments with their types, then the result type of the function, and then the function's calculations, that is, `x + y`. +After `def` comes the function's name, then the list of arguments with their types, then the result type of the function, and then the function's calculations, that is, `x + y`. -Compare this with the second approach to define a function, with the `val` keyword, which we also use for all other kinds of data. -Here, after `val` comes the function's name, then the type of the function, `(Int, Int) => Int`, -which consists of both the argument types and the result type, then come the arguments (this time without the types), and finally the implementation. +Compare this with the second approach to define a function using the `val` keyword, which we also use for all other kinds of data. +Here, after `val`, comes the function's name, followed by the type of the function, `(Int, Int) => Int`. +This consists of both the argument types and the result type. Next come the arguments (this time without the types), and finally the implementation. You will probably find the first way to define functions more readable, and you will use it more often. -However, it is important to remember that in Scala, a function is data, just like integers, strings, and instances of case classes — and it can be defined as data if needed. +However, it is important to remember that in Scala, a function is treated as data, just like integers, strings, and instances of case classes — and it can be defined as data if needed. -The third example illustrates a method. -We simply call it `add`. -Its definition appears the same as the definition of the function `addAsFunction`, but we refer to add as a method because it is associated with the class `Calculator`. -In this way, if we create an instance of `Calculator`, we can call `add` on it, and it will have access to the internal state of the instance. +The third example illustrates a method, +which we simply call `add`. +Its definition mirrors that of `addAsFunction`; however, we refer to `add` as a method because it is associated with the `Calculator` class.
+In this way, if we create an instance of `Calculator`, we can call `add` on it, granting us access to the internal state of the instance. It is also possible, for example, to override it in a subclass of `Calculator`. ```scala @@ -68,6 +68,6 @@ A video: ## Exercise -Implement multiplication as both a function and as a value; additionally, implement multiplication as a method of a class. +Implement multiplication both as a function and a value; additionally, implement multiplication as a method within a class. diff --git a/Immutability/A View/task.md b/Immutability/A View/task.md index 4413d66..91732db 100644 --- a/Immutability/A View/task.md +++ b/Immutability/A View/task.md @@ -1,16 +1,16 @@ ## A View A view in Scala collections is a lazy rendition of a standard collection. -While a lazy list needs to be constructed as such, you can create a view from any "eager" Scala collection by calling `.view` on it. +While a lazy list needs intentional construction, you can create a view from any "eager" Scala collection simply by calling `.view` on it. A view computes its transformations (like map, filter, etc.) in a lazy manner, -meaning these operations are not immediately executed; instead, they are computed on the fly each time a new element is requested, -which can enhabce both performance and memory usage. +meaning these operations are not immediately executed; instead, they are computed on the fly each time a new element is requested. +This can enhance both performance and memory usage. On top of that, with a view, you can chain multiple operations without the need for intermediary collections — the operations are applied to the elements of the original "eager" collection only when requested. This can be particularly beneficial in scenarios where operations like map and filter are chained, so a significant number of elements can be filtered out, eliminating the need for subsequent operations on them. -Let's consider an example where we use a view to find the first even number squared that is greater than 100 from a list of numbers. +Let's consider an example where we use a view to find the first squared even number greater than 100 in a list of numbers. ```scala 3 val numbers = (1 to 100).toList @@ -38,7 +38,7 @@ println(firstEvenSquareGreaterThan100_View) Without using a view, all the numbers in the list are initially squared and then filtered, even though we are only interested in the first square that satisfies the condition. With a view, transformation operations are computed lazily. -Therefore, squares are calculated, and conditions are checked for each element sequentially until the first match is found. +Therefore, squares are calculated and conditions are checked sequentially for each element until the first match is found. This avoids unnecessary calculations and hence is more efficient in this scenario. To learn more about the methods of Scala View, read its [documentation](https://www.scala-lang.org/api/current/scala/collection/View.html). @@ -47,5 +47,5 @@ To learn more about the methods of Scala View, read its [documentation](https:// Consider a simplified log message: it is a comma-separated string where the first substring before the comma specifies its severity, the second substring is the numerical error code, and the last one is the message itself. -Implement the function `findLogMessage`, which searches for the first log message with the given `severity` and `errorCode` within a list.
+Implement the function `findLogMessage`, which searches for the first log message matching a given `severity` and `errorCode` within a list. As the list is assumed to be large, utilize `view` to avoid creating intermediate data structures. diff --git a/Immutability/Berliner Pattern/task.md b/Immutability/Berliner Pattern/task.md index eaaf2a2..f1b9941 100644 --- a/Immutability/Berliner Pattern/task.md +++ b/Immutability/Berliner Pattern/task.md @@ -7,29 +7,29 @@ Thankfully, you can get the best of both worlds with the languages that combine In particular, Scala was specifically designed with this fusion in mind. The Berliner Pattern is an architectural pattern introduced by Bill Venners and Frank Sommers at Scala Days 2018 in Berlin. -Its goal is to restrict mutability to only those parts of a program for where it is unavoidable. -The application can be thought of as divided into three layers: +Its goal is to restrict mutability to only those parts of a program where it is unavoidable. +The application can be thought of as being divided into three layers: -* The external layer, which has to interact with the outside world. - This layer enables the application to communicate with other programs, services, or the operating system. +* The external layer, which has to interact with the outside world, + enabling the application to communicate with other programs, services, or the operating system. It's practically impossible to implement this layer in a purely functional way, but the good news is that there is no need to do so. -* The internal layer, where we connect to databases or write into files. +* The internal layer, where we connect to databases or write to files. This part of the application is usually performance-critical, so it's only natural to use mutable data structures here. * The middle layer, which connect the previous two. This is where our business logic resides and where functional programming shines. -Pushing mutability to the thin inner and outer layers offers its advantages. -First of all, the more we restrict the data, the more future-proof our code becomes. -We not only provide more information to the compiler, but also signal future developers that some data ought not to be modified. +Pushing mutability to the thin inner and outer layers offers several benefits. +First of all, the more we restrict data, the more future-proof our code becomes. +We not only provide more information to the compiler, but we also signal to future developers that some data should not be modified. Secondly, it simplifies the writing of concurrent code. When multiple threads can modify the same data, we may quickly end up in an invalid state, making it complicated to debug. -There is no need to resort to mutexes, monitors, or other patterns when there is no actual way to modify data. +There is no need to resort to mutexes, monitors, or other such patterns when there is no actual way to modify the data. -Finally, the common pattern in imperative programming with mutable data involves first assigning some default value to a variable, +Finally, a common pattern in imperative programming with mutable data involves first assigning some default value to a variable, and then modifying it. -For example, you start with an empty collection and then populate it with some specific values. +For example, you might start with an empty collection and then populate it with some specific values. However, default values are evil. 
Coders often forget to change them into something meaningful, leading to many bugs, such as the billion-dollar mistake caused by using `null`. @@ -38,14 +38,14 @@ We encourage you to familiarize yourself with this pattern by watching the [orig ### Exercise -We provide you with a sample implementation of the application that handles creating, modifying, and deleting users in a database. -We mock the database and http layers, and your task will be to implement methods processing requests following the Berliner pattern. +We provide you with a sample implementation of an application that handles creating, modifying, and deleting users in a database. +We mock the database and HTTP layers, and your task is to implement methods for processing requests following the Berliner pattern. Start by implementing the `onNewUser` and `onVerification` methods in `BerlinerPatternTask.scala`. -We provide the implementations for the database and the client for these methods, so you could familiarize yourself +We provide the implementations for the database and client for these methods so you can familiarize yourself with the application. Execute the `run` script in `HttpClient.scala` to make sure your implementation works correctly. -Then implement the functionality related to changing the password as well as removing users. +Then, implement the functionality related to password changes and user removals. You will need to implement all layers for these methods, so check out `Database.scala` and `HttpClient.scala`. Don't forget to uncomment the last several lines in the `run` script for this task. diff --git a/Immutability/Case Class Copy/task.md b/Immutability/Case Class Copy/task.md index 9cb36a6..dc23fe2 100644 --- a/Immutability/Case Class Copy/task.md +++ b/Immutability/Case Class Copy/task.md @@ -1,25 +1,25 @@ ## Case Class Copy In Scala, case classes automatically come equipped with a few handy methods upon declaration, one of which is the `copy` method. -The `copy` method is used to create a new instance of the case class, which is a copy of the original instance; however, you can also +The `copy` method is used to create a new instance of the case class, which is a copy of the original one; however, you can also modify some (or none) of the fields during the copying process. This feature adheres to functional programming principles, where immutability is often favored. -You can derive new instances while maintaining the immutability of existing instances, and so, for example, -avoid bugs where two threads work on the same data structure, each assuming that it is the sole modifier of it. +You can derive new instances while maintaining the immutability of existing ones. Consequently, this helps +prevent bugs that may occur when two threads work on the same data structure, each assuming that it is the sole modifier of it. Another valuable characteristic of the `copy` method is that it’s a convenient and readable means of creating new instances of the same case class. Instead of building one from scratch, you can grab an existing instance and make a copy modified to your liking. Below, you will find a Scala example using a User case class with mandatory `firstName` and `lastName` fields, along with optional `email`, `twitterHandle`, and `instagramHandle` fields. -We will first create one user with its default constructor and then another with the `copy` method of the first one. 
+We will first create a user with its default constructor and then generate another user with the `copy` method from the first one. Note that: * `originalUser` is initially an instance of `User` with `firstName = "Jane"`, `lastName = "Doe"`, and `email = "jane.doe@example.com"`. The other fields use their default values (i.e., `None`). -* `updatedUser` is created using the copy method on `originalUser`. - This creates a new instance with the same field values as `originalUser`, except for the fields provided as parameters to `copy`: +* `updatedUser` is created using the `copy` method on `originalUser`. + This creates a new instance with the same field values as `originalUser`, except for those provided as parameters to `copy`: * `email` is updated to `"new.jane.doe@example.com"` * `twitterHandle` is set to `"@newJaneDoe"` * `originalUser` remains unmodified after the `copy` method is used, adhering to the principle of immutability. @@ -51,6 +51,6 @@ println(s"Updated user: $updatedUser") ### Exercise Let's unravel the `copy` function. -Implement your own function `myCopy` that operates in exactly the same way as `copy` does. -You should be able to pass values only for those fields you wish to modify. +Implement your own function, `myCopy`, which operates identically to `copy`. +You should be able to pass values only for those fields that you wish to modify. As a result, a new copy of the instance should be created. diff --git a/Immutability/Comparison of View and Lazy Collection/task.md b/Immutability/Comparison of View and Lazy Collection/task.md index f9d239c..d2d40e0 100644 --- a/Immutability/Comparison of View and Lazy Collection/task.md +++ b/Immutability/Comparison of View and Lazy Collection/task.md @@ -1,19 +1,19 @@ ## Comparison of View and Lazy List -Now you may wonder why Scala has both lazy lists and views, and when to use which one. -Here's a short list of key differences of both approaches to lazy computation: +Now you may be wondering why Scala has both lazy lists and views, and when to use which one. +Here's a short list highlighting the key differences between these two approaches to lazy computation: * Construction: - * View: You can create a view of any Scala collection by calling `.view` on it. - * Lazy List: You must create it from scratch with the `#::` operator or other methods. + * View: You can create a view from any Scala collection by calling `.view` on it. + * Lazy List: You must create it from scratch with the `#::` operator or other specific methods. * Caching: - * View: Does not cache results. Each access recomputes values through the transformation pipeline unless forced into + * View: It does not cache results. Each access recomputes values through the transformation pipeline unless forced into a concrete collection. - * Lazy List: Once an element is computed, it is cached for future access, preventing recomputation. + * Lazy List: Once an element is computed, it is cached for future access to prevent unnecessary recomputation. * Commonly used for: - * View: Chain transformations on collections when we want to avoid the creation of intermediate collections. - * Lazy List: Ideal when working with potentially infinite sequences and when previously computed results might be + * View: Perfect for chaining transformations on collections when we want to avoid creating intermediate collections. + * Lazy List: Ideal for working with potentially infinite sequences and when previously computed results might be accessed multiple times. 
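To make the caching difference above concrete, here is a small sketch (not taken from the course files; `expensive`, `viaView`, and `viaLazy` are illustrative names) that pushes the same costly computation through a view and through a lazy list.

```scala 3
// Contrast a non-caching view with a caching lazy list.
@main def viewVsLazyList(): Unit =
  def expensive(n: Int): Int =
    println(s"computing $n")
    n * n

  val viaView = (1 to 3).view.map(expensive)    // nothing is computed yet
  val viaLazy = LazyList.from(1).map(expensive) // nothing is computed yet

  viaView.foreach(_ => ())  // prints "computing 1..3"
  viaView.foreach(_ => ())  // prints "computing 1..3" again: views do not cache

  println(viaLazy(2))       // prints "computing 1..3" once, then 9
  println(viaLazy(2))       // prints only 9: the elements are now cached
```

Re-traversing the view repeats the work, while the lazy list pays for each element only once, which is the trade-off summarized in the list above.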
diff --git a/Immutability/Lazy List/task.md b/Immutability/Lazy List/task.md index 6cb1f65..e4e26fe 100644 --- a/Immutability/Lazy List/task.md +++ b/Immutability/Lazy List/task.md @@ -1,16 +1,16 @@ ## Lazy List -A lazy list in Scala is a collection that evaluates its elements lazily: each element is computed only once, +A lazy list in Scala is a collection that evaluates its elements lazily, with each element computed just once, the first time it is needed, and then stored for subsequent access. -Lazy lists can be infinite: their elements are computed on-demand. Hence, if your program keeps accessing the next element +Lazy lists can be infinite, with their elements computed on-demand. Hence, if your program keeps accessing the next element in a loop, the lazy list will inevitably grow until the program fails with an out-of-memory error. In practice, however, you will likely need only a finite number of elements. -While this number might be large and unknown from the start, since the lazy list will compute only -explicitly requested values, it allows developers to work with large datasets or sequences in a memory-efficient manner. -In such cases, a lazy list provides a convenient method to implement the logic for computing the consecutive elements +While this number might be large and unknown from the start, the lazy list will compute only +explicitly requested values, enabling developers to work with large datasets or sequences in a memory-efficient manner. +In such cases, a lazy list provides a convenient method to implement the logic for computing consecutive elements until you decide to stop. -You can use it in certain specific cases where otherwise, you would need to code an elaborate data structure with mutable fields -and a method that would compute new values for those fields. +You can use it in certain specific cases where you would otherwise need to code an elaborate data structure with mutable fields +and a method to compute new values for those fields. Below is an example of how to generate a Fibonacci sequence using a lazy list in Scala: @@ -35,18 +35,18 @@ In the above code: immediately upon the lazy list's construction, but only later when `fib` already exists and we want to access one of its elements. * `fib.zip(fib.tail)` takes two sequences, `fib` and its tail (i.e., `fib` without its first element), and zips them together into pairs. The Fibonacci sequence is generated by summing each pair `(a, b) => a + b` of successive Fibonacci numbers. -* `take(10)` is used to fetch the first 10 Fibonacci numbers from the lazy list, and `foreach(println)` prints them. - Note that the Fibonacci sequence is theoretically infinite, but it doesn't cause any issues or out-of-memory errors +* `take(10)` is used to fetch the first 10 Fibonacci numbers from the lazy list, and `foreach(println)` prints them out. + Note that the Fibonacci sequence is theoretically infinite, but this doesn't cause any issues or out-of-memory errors (at least not yet), thanks to lazy evaluation. * Alternatively, you can use `takeWhile` to compute consecutive elements of the lazy list until a certain requirement is fulfilled. * The methods opposite to `take` and `takeWhile` — `drop` and `dropWhile` — can be used to compute and then ignore - a certain number of elements in the lazy list or compute and ignore elements until a certain requirement is met. - These methods can be chained. 
- For example, `fib.drop(5).take(5)` will compute the first 10 elements of the Fibonacci sequence but will ignore the first 5 of them. + a certain number of elements in the lazy list or to compute and ignore elements until a certain requirement is met. + These methods can be chained together. + For example, `fib.drop(5).take(5)` will compute the first 10 elements of the Fibonacci sequence but will disregard the first 5. -To learn more about the methods of Scala's `LazyList`, read its [documentation](https://www.scala-lang.org/api/current/scala/collection/immutable/LazyList.html). +To learn more about the methods of Scala's `LazyList`, read the [documentation](https://www.scala-lang.org/api/current/scala/collection/immutable/LazyList.html). ### Exercise -Implement the function that generates an infinite lazy list of prime numbers in ascending order. +Implement a function that generates an infinite lazy list of prime numbers in ascending order. Use the Sieve of Eratosthenes algorithm. diff --git a/Immutability/Lazy Val/task.md b/Immutability/Lazy Val/task.md index 771849f..2014ce2 100644 --- a/Immutability/Lazy Val/task.md +++ b/Immutability/Lazy Val/task.md @@ -2,20 +2,20 @@ **Laziness** refers to the deferral of computation until it is necessary. This strategy can enhance performance and allow programmers to work with infinite data structures, among other benefits. -In a lazy evaluation strategy, expressions are not evaluated when bound to a variable but when used for the first time. +With a lazy evaluation strategy, expressions are not evaluated when bound to a variable, but rather when used for the first time. If they are never used, they are never evaluated. -In some contexts, lazy evaluation can also avert exceptions, since it can prevent the evaluation of erroneous computations. +In some contexts, lazy evaluation can also prevent exceptions by avoiding the evaluation of erroneous computations. In Scala, the keyword `lazy` is used to implement laziness. -When `lazy` is used in a `val` declaration, the initialization of that `val` is deferred until its first access. +When `lazy` is used in a `val` declaration, the initialization of that `val` is deferred until it's first accessed. Here’s a breakdown of how `lazy val` works internally: * **Declaration**: When a `lazy val` is declared, no memory space is allocated for the value, and no initialization code is executed. * **First access**: Upon the first access of the `lazy val`, the expression on the right-hand side of the `=` operator is evaluated, and the resultant value is stored. - This computation generally happens thread-safe to avoid potential issues in a multi-threaded context - (there’s a check-and-double-check mechanism to ensure that the value is only computed once, even in a concurrent environment). -* **Subsequent accesses**: For any subsequent accesses, the previously computed and stored value is returned directly + This computation generally happens in a thread-safe manner to avoid potential issues in a multi-threaded context. + There’s a check-and-double-check mechanism to ensure the value is computed only once, even in a concurrent environment. +* **Subsequent accesses**: During any subsequent accesses, the previously computed and stored value is returned directly, without re-evaluating the initializing expression. 
Consider the following example: @@ -43,7 +43,7 @@ println(s"time now is $now") // should take only a few milliseconds at most In the above code: * The `lazy val lazyComputedValue` is declared but not computed immediately upon declaration. -* Once it is accessed in the first `println` that includes it, the computation is executed, `"Computing..."` is printed to the console, - and the computation (here simulated with `Thread.sleep(1000)`) takes place before returning the value `42`. -* Any subsequent accesses to `lazyComputedValue`, like the second `println`, do not trigger the computation again. +* Once it is accessed in the first `println` statement that includes it, the computation is executed, `"Computing..."` is printed to the console, + and the computation (here simulated with `Thread.sleep(1000)`) takes place before the value `42` is returned. +* Any subsequent accesses to `lazyComputedValue`, like in the second `println` statement, do not trigger the computation again. The stored value (`42`) is used directly. diff --git a/Immutability/Scala Collections instead of Imperative Loops/task.md b/Immutability/Scala Collections instead of Imperative Loops/task.md index 2daa92a..66dabb4 100644 --- a/Immutability/Scala Collections instead of Imperative Loops/task.md +++ b/Immutability/Scala Collections instead of Imperative Loops/task.md @@ -1,20 +1,20 @@ ## Scala Collections instead of Imperative Loops In the imperative programming style, you will often find the following pattern: a variable is initially set to some -default value, such as an empty collection, an empty string, a zero, or null. -Then, step-by-step, initialization code runs in a loop to create the proper value . -After this process, the value assigned to the variable does not change anymore — or if it does, +default value, such as an empty collection, an empty string, zero, or null. +Then, step-by-step, initialization code runs in a loop to create the proper value. +Beyond this process, the value assigned to the variable does not change anymore — or if it does, it’s done in a way that could be replaced by resetting the variable to its default value and rerunning the initialization. However, the potential for modification remains, despite its redundancy. Throughout the whole lifespan of the program, it hangs like a loose end of an electric cable, tempting everyone to touch it. -Functional Programming, on the other hand, allows us to build useful values without the need for initial default values and temporary mutability. -Even a highly complex data structure can be computed using a higher-order function extensively and then -assigned to a constant, preventing future modifications. -If we need an updated version, we can create a new data structure instead of modifying the old one. +Functional programming, on the other hand, allows us to build useful values without the need for initial default values or temporary mutability. +Even a highly complex data structure can be computed extensively using a higher-order function before being +assigned to a constant, thus preventing future modifications. +If we need an updated version, we can create a new data structure rather than modifying the old one. Scala provides a rich library of collections — `Array`, `List`, `Vector`, `Set`, `Map`, and many others — and includes methods for manipulating these collections and their elements. -You have already learned about some of those methods in the first chapter. +You have already learned about some of these methods in the first chapter. 
In this chapter, you will learn more about how to avoid mutability and leverage immutability to write safer and sometimes even more performant code. diff --git a/Immutability/The Builder Pattern/task.md b/Immutability/The Builder Pattern/task.md index 7515242..2aaf21f 100644 --- a/Immutability/The Builder Pattern/task.md +++ b/Immutability/The Builder Pattern/task.md @@ -1,28 +1,28 @@ ## The Builder Pattern -The Builder pattern is a design pattern often used in object-oriented programming to provide +The builder pattern is a design pattern often used in object-oriented programming to provide a flexible solution for constructing complex objects. It's especially handy when an object needs to be created with numerous possible configuration options. The pattern involves separating the construction of a complex object from its representation -so that the same construction process can make different representations. +so that the same construction process can yield different representations. -Here's why the Builder Pattern is used: +Here's why the builder pattern is used: * To encapsulate the construction logic of a complex object. * To allow an object to be constructed step by step, often through method chaining. * To avoid having constructors with many parameters, which can be confusing and error-prone (often referred to as the telescoping constructor anti-pattern). -Below is a Scala example using the Builder Pattern to create instances of a `User` case class, with mandatory `firstName` +Below is a Scala example using the builder pattern to create instances of a `User` case class, with mandatory `firstName` and `lastName` fields and optional `email`, `twitterHandle`, and `instagramHandle` fields. Note that: -* The `User` case class defines a user with mandatory `firstName` and `lastName`, optional `email`, `twitterHandle`, and `instagramHandle`. -* `UserBuilder` facilitates the creation of a `User` object. - The mandatory parameters are specified in the builder's constructor, while methods like `setEmail`, `setTwitterHandle`, +* The `User` case class defines a user with mandatory `firstName` and `lastName` fields, along with optional `email`, `twitterHandle`, and `instagramHandle` fields. +* `UserBuilder` facilitates the creation of a `User` object, with + mandatory parameters specified in the builder's constructor. Methods like `setEmail`, `setTwitterHandle`, and `setInstagramHandle` are available to set optional parameters. Each of these methods returns the builder itself, enabling method chaining. * Finally, the execution of the `build` method employs all specified parameters (whether default or set) to construct a `User` object. -This pattern keeps object creation understandable and clean, mainly when dealing with objects that can have multiple optional parameters. +This pattern keeps the process of object creation clear and straightforward, particularly when dealing with objects possessing multiple optional parameters. diff --git a/Pattern Matching/Case Class/task.md b/Pattern Matching/Case Class/task.md index b373b81..c0036ba 100644 --- a/Pattern Matching/Case Class/task.md +++ b/Pattern Matching/Case Class/task.md @@ -9,15 +9,15 @@ otherwise we would have to code manually: unidiomatic in Scala. Instances of case classes should serve as immutable data structures, as modifying them can result in less intuitive and readable code. -2. A case class provides a default constructor with public, read-only parameters, thus reducing theboiler-plate associated with case class instantiation. +2. 
A case class provides a default constructor with public, read-only parameters, thus reducing the boilerplate associated with case class instantiation. 3. Scala automatically defines some useful methods for case classes, such as `toString`, `hashCode`, and `equals`. The `toString` method gives a string representation of the object, `hashCode` is used for hashing collections like `HashSet` and `HashMap`, - and `equals` checks structural equality, rather than reference equality - (i.e., checks the equality of the respective fields of the case class, - rather than verifying if the two references point to the same object). -4. Case classes come with the `copy` method that can be used to create a copy of the case class instance: - exactly the same as the original or with some parameters modified + and `equals` checks structural equality rather than reference equality. + In other words, it checks the equality of the respective fields of the case class, + rather than verifying if the two references point to the same object. +4. Case classes come with a `copy` method that can be used to create a copy of the case class instance. + This can be exactly the same as the original or with some parameters modified (the signature of the `copy` method mirrors that of the default constructor). 5. Scala automatically creates a companion object for the case class, which contains factory `apply` and `unapply` methods. @@ -28,13 +28,13 @@ otherwise we would have to code manually: 7. On top of that, case classes are conventionally not extended. They can extend traits and other classes, but they shouldn't be used as superclasses for other classes. Technically though, extending case classes is not forbidden by default. - If you want to ensure that a case class is be extended, mark it with the `final` keyword. + If you want to ensure that a case class isn't extended, mark it with the `final` keyword. You should already be familiar with some of these features, as we used them in the previous module. The difference here is that we want you to focus on distinct aspects that you'll see in the examples and exercises. Below is a simple example of a case class that models cats. -We create a `Cat` instance called `myCat` and then use pattern matching against `Cat` to access its name and color. +We create a `Cat` instance called `myCat` and then use pattern matching on `Cat` to access its name and color. ```scala 3 case class Cat(name: String, color: String) diff --git a/Pattern Matching/Case Objects/task.md b/Pattern Matching/Case Objects/task.md index 593f664..95ea2dd 100644 --- a/Pattern Matching/Case Objects/task.md +++ b/Pattern Matching/Case Objects/task.md @@ -1,16 +1,16 @@ # Case Objects -You might have noticed in the example of a binary tree implemented with sealed trait hierarchies, +You might have noticed in the example of a binary tree implemented with sealed trait hierarchies that we used a *case object* to introduce the `Stump` type. In Scala, a case object is a special type of object that combines characteristics and benefits of both a case class and an object. Similar to a case class, a case object comes equipped with a number of auto-generated methods like `toString`, `hashCode`, and `equals`, and they can be directly used in pattern matching. On the other hand, just like any regular object, a case object is a singleton, i.e., there's exactly one instance of it in the entire JVM. 
-Case objects are used in place of case classes when there's no need for parametrization — when you don't need to carry data, -but you still want to benefit from pattern matching capabilities of case classes. -In Scala 2, case objects implementing a common trait were the default way of achieving enum functionality. +Case objects are used in place of case classes when there's no need for parametrization — when you don't need to carry data +yet still want to benefit from the pattern matching capabilities of case classes. +In Scala 2, implementing a common trait using case objects was the default way of achieving enum functionality. This is no longer necessary in Scala 3, which introduced enums, but case objects are still useful in more complex situations. For example, you may have noticed that to use case objects as enums, we make them extend a shared sealed trait. @@ -24,11 +24,11 @@ case object Unauthorized extends AuthorizationStatus def authorize(userId: UserId): AuthorizationStatus = ... ``` -Here, `AuthorizationStatus` is a sealed trait and `Authorized` and `Unauthorized` are the only two case objects extending it. -This means that the result of calling the authorize method can only ever be either `Authorized` or `Unauthorized`. +Here, `AuthorizationStatus` is a sealed trait, and `Authorized` and `Unauthorized` are the only two case objects extending it. +This means that the result of calling the authorize method can be either `Authorized` or `Unauthorized`. There is no other response possible. -However, imagine that you're working on code which uses a library or a module you no longer want to modify. +However, imagine that you're working on code that uses a library or module you no longer want to modify. In that case, the initial author of that library or module might have used case objects extending a non-sealed trait to make it easier for you to add your own functionality: @@ -52,16 +52,16 @@ override def authorize(userId: UserId): AuthorizationStatus = ``` Here, we extend the functionality of the original code by adding a possibility that the user, despite being authorized to perform a given operation, -encountered an issue and was logged out. +encounters an issue and is logged out. Now they need to log in again before they are able to continue. This is not the same as simply being `Unauthorized`, so we add a third case object to the set of those extending `AuthorizationStatus`: we call it `LoggedOut`. -If the original author had used a sealed trait to define `AuthorizationStatus`, or if they had used an enum, we wouldn't have been able to do that. +If the original author had used a sealed trait to define `AuthorizationStatus` or had used an enum, we wouldn't have been able to do that. ### Exercise We're modeling bots that move on a 2D plane (see the `Coordinates` case class). -There are various kinds of bots (see the `Bot` trait), which move a distinct number of cells at a time. +There are various kinds of bots (see the `Bot` trait), each moving a distinct number of cells at a time. Each bot moves in one of four directions (see the `Direction` trait). Determine whether the traits should be sealed or not and modify them accordingly. Implement the `move` function. 
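As a compact illustration of the points above (auto-generated `toString` and `equals`, plus pattern matching with exhaustiveness checking), here is a hedged sketch in an unrelated domain; the `TrafficLight` hierarchy is an assumption made for illustration and is not part of the exercise code.

```scala 3
// Case objects extending a sealed trait behave like lightweight enum values.
sealed trait TrafficLight
case object Red extends TrafficLight
case object Yellow extends TrafficLight
case object Green extends TrafficLight

// The compiler checks that this match covers every TrafficLight.
def next(light: TrafficLight): TrafficLight = light match
  case Red    => Green
  case Green  => Yellow
  case Yellow => Red

@main def trafficLightDemo(): Unit =
  println(Red)                 // auto-generated toString prints "Red"
  println(next(Red) == Green)  // auto-generated equals prints "true"
```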
diff --git a/Pattern Matching/Destructuring/task.md b/Pattern Matching/Destructuring/task.md index 6c0609f..5ece98b 100644 --- a/Pattern Matching/Destructuring/task.md +++ b/Pattern Matching/Destructuring/task.md @@ -2,19 +2,19 @@ Destructuring in Scala refers to the practice of breaking down an instance of a given type into its constituent parts. You can think of it as the inversion of construction. -In a constructor, or an `apply` method, we use a collection of parameters to create a new instance of a given type. +In a constructor or an `apply` method, we use a collection of parameters to create a new instance of a given type. When destructuring, we start with an instance of a given type and decompose it into values that, at least in theory, could be used again to create an exact copy of the original instance. -Additionally, similar to how an apply method can serve as a smart constructor that performs certain complex operations before creating an instance, +Additionally, just as an `apply` method can serve as a smart constructor that performs certain complex operations before creating an instance, we can implement a custom method, called `unapply`, that intelligently deconstructs the original instance. It's a very powerful and expressive feature of Scala, often seen in idiomatic Scala code. The `unapply` method should be defined in the companion object. -It usually takes the instance of the associated class as its only argument, and returns an option of what’s contained within the instance. -In the simplest case, this will just be the class's fields: one if there is only one field in the class, -otherwise a pair, triple, quadruple, and so on. +It usually takes the instance of the associated class as its only argument and returns an option of what’s contained within the instance. +In the simplest case, this will just be the class's fields: one if there is only one field, +or otherwise a pair, triple, quadruple, and so on. Scala automatically generates simple `unapply` methods for case classes. -In such case, unapply will just break the given instance into a collection of its fields, as shown in the following example: +In such cases, `unapply` just breaks the given instance into a collection of its fields, as shown in the following example: ```scala 3 case class Person(name: String, age: Int) @@ -24,12 +24,12 @@ val Person(johnsName, johnsAge) = john println(s"$johnsName is $johnsAge years old.") ``` -As you can notice, similarly to how we don't need to explicitly write `apply` to create an instance of the `Person` case class, +As you can notice, just as we don't need to explicitly write `apply` to create an instance of the `Person` case class, we also don't need to explicitly write `unapply` to break an instance of the `Person` case class back into its fields: `johnsName` and `johnsAge`. -However, you will not see this way of using destructuring very often in Scala. -After all, if you already know exactly what case class you have, and you only need to read its public fields, +However, you will not often see this way of using destructuring in Scala. +After all, if you already know exactly which case class you have and you only need to read its public fields, you can do so directly — in this example, with `john.name` and `john.age`. Instead, `unapply` becomes much more valuable when used together with pattern matching. 
@@ -53,7 +53,7 @@ val snowy = Cat("Snowy", White, 1) val midnight = Cat("Midnight", Black, 4) ``` -We have two cats (Fluffy and Snowy) that are one year old, and three cats (Mittens, Ginger, and Midnight) that are older than one year. +We have two cats (Fluffy and Snowy) who are one year old, and three cats (Mittens, Ginger, and Midnight) who are older than one year. Next, let's put these cats in a Seq: ```scala 3 @@ -72,15 +72,15 @@ cats.foreach { ``` In this code, we're using pattern matching to destructure each Cat object. -We're also using a guard `if age > 1` to check the age of the cat. +We're also using a guard, `if age > 1`, to check the age of the cat. If the age is more than one, we print out the message for adult cats. -If the age is not more than one (i.e., it's one or less), we print out the message for kittens. +If the age is one or less, we print out the message for kittens. Note that in the second case expression, we're using the wildcard operator `_` to ignore the age value, -because we don't need to check it — if a cat instance is destructured in the second case, +since we don't need to check it — if a cat instance is destructured in the second case, it means that the cat's age was already checked in the first case and failed that test. -Also, if we wanted to handle a case where one of the fields has a single constant value -(unlike in the first case above, where any age larger than `1` suits as well), we can simply substitute it for the field: +Also, if we need to handle a case where one of the fields has a specific constant value +(unlike in the first case above, where any age greater than `1` is suitable), we can directly specify that value in place of the field: ```scala 3 cats.foreach { @@ -96,15 +96,15 @@ cats.foreach { ### Exercise RGB stands for Red, Green, and Blue. It is a color model used in digital imagining -that represents colors by combining intensities of these three primary colors. This allowing electronic devices +that represents colors by combining intensities of these three primary colors. This allows electronic devices to create a wide spectrum of colors. Sometimes, a fourth component called Alpha is also used to describe the transparency. -Each component can be any integer number withing the range `0 .. 255`, with `0` meaning no color, +Each component can be any integer within the range `0 .. 255`, with `0` meaning no color, and `255` representing the maximum color intensity. For example, the color red is represented when Red is `255`, while Green and Blue are `0`. -In this exercise, implement the function `colorDescription`, which transforms the given RGB color into a string. -It should pattern destruct the color, examine the RGB components, and return the name of the color in case it is one of +In this exercise, implement the function `colorDescription`, which transforms a given RGB color into a string. +It should deconstruct the color, examine the RGB components, and return the name of the color in case it is one of the following: `"Black", "Red", "Green", "Blue", "Yellow", "Cyan", "Magenta", "White"`. -Otherwise, it should just return the result of the `toString()` application. +Otherwise, it should just return the result of the `toString()` method. Please ignore the alpha channel when determining the color name.
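Since the destructuring lesson above centers on hand-written extractors, here is a minimal sketch of a custom `unapply` placed in a companion object; the `Email` class and its fields are illustrative assumptions, not part of the course files.

```scala 3
// A plain (non-case) class with a companion that provides both a constructor
// via apply and a hand-written unapply for destructuring.
final class Email(val local: String, val domain: String)

object Email:
  def apply(local: String, domain: String): Email = new Email(local, domain)

  // Returns an Option of the parts the instance was built from, as a pair.
  def unapply(e: Email): Option[(String, String)] = Some((e.local, e.domain))

@main def unapplyDemo(): Unit =
  Email("info", "example.com") match
    case Email(_, "example.com") => println("company address")
    case Email(local, domain)    => println(s"$local at $domain")
```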
diff --git a/Pattern Matching/Enums/task.md b/Pattern Matching/Enums/task.md index e5b794f..356806a 100644 --- a/Pattern Matching/Enums/task.md +++ b/Pattern Matching/Enums/task.md @@ -1,16 +1,16 @@ # Enum An enumeration (or enum) is a type that represents a finite set of distinct values. -Enumerations are commonly used to limit the set of possible values of a field, +Enumerations are commonly used to limit the set of possible values for a field, thus improving code clarity and reliability. Since a field cannot be set to something outside a small set of well-known values, -we can make sure that the logic we implement handles all possibile options and -there are no unconsidered scenarios. +we can make sure that the logic we implement handles all possible options and +that there are no unconsidered scenarios. In Scala 3, enumerations are created using the `enum` keyword. Each value of the enum is an object of the *enumerated type*. -Scala 3 enums can also have parameterized values and methods. -You have already seen this in our previous examples where we used enums to define the colors for cat fur: +Scala 3 enums can also have parameterized values and methods. +You have already seen this in our previous examples, where we used enums to define the colors for cat fur: ```scala 3 enum Color: @@ -21,7 +21,7 @@ However, Scala 3 enums are even more powerful than that. In fact, they are more versatile than their counterparts in many other programming languages. Enums in Scala 3 can also be used as algebraic data types (also known as sealed trait hierarchies in Scala 2). -You can have an enum with cases that carry different data. +You can have an enum with cases that carry different types of data. Here's an example: ```scala 3 @@ -50,7 +50,7 @@ val tree: Tree[Int] = In this example, `Tree` is an enum that models a binary tree data structure. Binary trees are used in many areas of computer science, including sorting, searching, and efficient data access. -They consist of nodes where each node can have at most two subtrees. +They consist of nodes, each of which can have at most two subtrees. Here, we implement a binary tree with an enum `Tree[A]`, which allows the nodes of the tree to be one of three possible kinds: * a `Branch`, which has a value of type `A` and two subtrees, `left` and `right`, @@ -59,17 +59,17 @@ to be one of three possible kinds: Please note that our implementation of a binary tree is slightly different from the classic one. You may notice that ours is a bit redundant: -a `Leaf` is, in all practical sense, the same as a `Branch` where both subtrees are stumps. -But having `Leaf` as a separate enum case allows us to write the code for building +a `Leaf` is, in all practical senses, the same as a `Branch` where both subtrees are stumps. +However, having `Leaf` as a separate enum case allows us to write the code for building the tree in a more concise way. ## Exercise -Implement a function that checks if the tree is balanced. +Implement a function that checks if a tree is balanced. A balanced binary tree meets the following conditions: * The absolute difference between the heights of the left and right subtrees at any node is no greater than 1. -* For each node, its left subtree is a balanced binary tree -* For each node, its right subtree is a balanced binary tree +* For each node, its left subtree is a balanced binary tree. +* For each node, its right subtree is a balanced binary tree. -For an extra challenge, try to accomplish this in one pass. 
+For an extra challenge, try to accomplish this in a single pass. diff --git a/Pattern Matching/Pattern Matching/task.md b/Pattern Matching/Pattern Matching/task.md index c45794e..6931d2f 100644 --- a/Pattern Matching/Pattern Matching/task.md +++ b/Pattern Matching/Pattern Matching/task.md @@ -1,11 +1,11 @@ # Pattern Matching Pattern matching is one of the most important features in Scala. -It’s so vital that we might risk saying that it’s not *a* feature of Scala, but *the* defining feature. -It affects every other part of the programming language to the point where it’s difficult to talk about anything in Scala +It’s so vital that we might risk saying it’s not just *a* feature of Scala, but *the* defining feature. +It affects every other part of the programming language to the extent that it’s difficult to discuss any aspect of Scala without at least mentioning or using pattern matching in a code example. You have already seen it — the match/case statements, the partial functions, and the destructuring of instances of case classes. -In this lesson, we will touch upon case classes and objects, ways to construct and deconstruct them, enums, and a neat +In this lesson, we will explore case classes and objects, ways to construct and deconstruct them, enums, and a neat programming trick called the `newtype` pattern. diff --git a/Pattern Matching/Sealed Traits Hierarchies/task.md b/Pattern Matching/Sealed Traits Hierarchies/task.md index 0649465..1f820bb 100644 --- a/Pattern Matching/Sealed Traits Hierarchies/task.md +++ b/Pattern Matching/Sealed Traits Hierarchies/task.md @@ -1,13 +1,13 @@ # Sealed Traits Hierarchies -Sealed traits in Scala are used to represent restricted class hierarchies that provide exhaustive type checking. +Sealed traits in Scala are used to represent restricted class hierarchies, providing exhaustive type checking. When a trait is declared as sealed, it can only be extended within the same file. -This allows the compiler to know all the subtypes, which allows for more precise compile-time checking. +This restriction enables the compiler to identify all subtypes, allowing for more precise compile-time checking. -With the introduction of enums in Scala 3, many use cases of sealed traits are now covered by them, and their syntax is more concise. -However, sealed traits are more flexible than enums — they allow for the addition of new behavior in each subtype. +With the introduction of enums in Scala 3, many use cases of sealed traits are now covered by enums, and their syntax is more concise. +However, sealed traits are more flexible than enums — they allow for the addition of new behaviors in each subtype. For instance, we can override the default implementation of a given method differently in each case class that extends the parent trait. -In enums, all enum cases share the same methods and fields. +In contrast, in enums, all cases share the same methods and fields. ```scala 3 sealed trait Tree[+A]: @@ -39,7 +39,7 @@ val tree: Tree[Int] = ## Exercise Our trees are immutable, so we can compute their heights and check if they are balanced at the time of creation. -To do this, we added the `height` and `isBalanced` members into the `Tree` trait declaration. +To do this, we added the `height` and `isBalanced` members to the `Tree` trait declaration. The only thing that is left is to override these members in all classes that extend the trait in this exercise. This way, no extra passes are needed to determine whether a tree is balanced. 
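To illustrate the per-subtype flexibility that the sealed-trait lesson contrasts with enums, without touching the exercise's `Tree`, here is a hedged sketch; the `Shape` hierarchy is an illustrative assumption.

```scala 3
// Each subtype of a sealed trait can supply its own implementation of a
// member, computed once at construction time (similar in spirit to the
// precomputed `height` and `isBalanced` members in the exercise).
sealed trait Shape:
  def area: Double

final case class Circle(radius: Double) extends Shape:
  override val area: Double = math.Pi * radius * radius

final case class Rectangle(width: Double, height: Double) extends Shape:
  override val area: Double = width * height

@main def shapeDemo(): Unit =
  val shapes: Seq[Shape] = Seq(Circle(1.0), Rectangle(2.0, 3.0))
  shapes.foreach(s => println(s.area))  // 3.14159..., then 6.0
```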
diff --git a/Pattern Matching/Smart Constructors and the apply Method/task.md b/Pattern Matching/Smart Constructors and the apply Method/task.md index f25e0b0..f8bd253 100644 --- a/Pattern Matching/Smart Constructors and the apply Method/task.md +++ b/Pattern Matching/Smart Constructors and the apply Method/task.md @@ -10,12 +10,12 @@ class Cat: cat() // returns "meow" ``` -Technically this sums it up — you can implement `apply` any way you want, for any reason you want. -However, by convention, the most popular way to use `apply` is as a smart constructor. -This convention is very important, and we would advise you to follow it. +Technically, this sums it up — you can implement `apply` any way you want, for any purpose. +However, by convention, `apply` is most popularly used as a smart constructor. +This convention is very important, and we strongly advise adhering to it. There are a few other ways you can use `apply`. -For example, the Scala collections library often uses it to retrieve data from a collection. This might look +For example, the Scala collections library often employs it to retrieve data from a collection. This usage might appear as if Scala has traded the square brackets, common in more traditional languages, for parentheses: ```scala 3 @@ -37,8 +37,8 @@ This pattern can be especially useful in situations where: * You need to enforce a specific protocol for object creation, such as caching objects, creating singleton objects, or generating objects through a factory. The idiomatic way to use `apply` as a smart constructor is to place it in the companion object of a class -and call it with the name of the class and a pair of parentheses. -For example, let's consider again the `Cat` class with a companion object that has an `apply` method: +and call it by using the name of the class followed by a pair of parentheses. +For example, let's consider the `Cat` class again, which has a companion object that includes an `apply` method: ```scala 3 class Cat private (val name: String, val age: Int) @@ -51,11 +51,11 @@ object Cat: val fluffy = Cat("Fluffy", -5) // the age of Fluffy is set to 0, not -5 ``` -The `Cat` class has a primary constructor that takes a `String` and an `Int` to set the name and age of the new cat, respectivelym. +The `Cat` class has a primary constructor that takes a `String` and an `Int` to set the name and age of the new cat, respectively. Besides, we create a companion object and define the `apply` method in it. -This way, when we later call `Cat("fluffy", -5)`, the `apply` method, not the primary constructor, is invoked. +This way, when we later call `Cat("Fluffy", -5)`, the `apply` method, not the primary constructor, is invoked. In the `apply` method, we check the provided age of the cat, and if it's less than zero, we create a cat instance -with the age set to zero, instead of the input age. +with the age set to zero, instead of using the input age. Please also notice how we distinguish between calling the primary constructor and the `apply` method. When we call `Cat("Fluffy", -5)`, the Scala 3 compiler checks if a matching `apply` method exists. @@ -63,22 +63,22 @@ If it does, the `apply` method is called. Otherwise, Scala 3 calls the primary constructor (again, if the signature matches). This makes the `apply` method transparent to the user. If you need to call the primary constructor explicitly, bypassing the `apply` method, you can use the `new` keyword, -for example, `new Cat(name age)`. +for example, `new Cat(name, age)`. 
We use this trick in the given example to avoid endless recursion — if we didn't, calling `Cat(name, age)` or `Cat(name, 0)` -would again call the `apply` method. +would again trigger the `apply` method. -You might wonder how to prevent the user from bypassing our `apply` method by calling the primary constructor `new Cat("Fluffy", -5)`. +You might wonder how to prevent users from bypassing our `apply` method by calling the primary constructor `new Cat("Fluffy", -5)`. Notice that in the first line of the example, where we define the `Cat` class, there is a `private` keyword between the name of the class and the parentheses. -The `private` keyword in this position means that the primary constructor of the class `Cat` can be called only by -the methods of the class or its companion object. +The `private` keyword in this position means that the primary constructor of the `Cat` class can only be called by +methods within the class or its companion object. This way, we can still use `new Cat(name, age)` in the `apply` method, since it is in the companion object, -but it's unavailable to the user. +but it remains unavailable to the user. ## Exercise Consider the `Dog` class, which contains fields for `name`, `breed`, and `owner`. -Sometimes a dog get lost, and the person who finds it knows as little about the dog as its name on the collar. +Sometimes a dog gets lost, and the person who finds it may know as little about the dog as its name on the collar. Until the microchip is read, there is no way to know who the dog's owner is or what breed the dog is. To allow for the creation of `Dog` class instances in these situations, it's wise to use a smart constructor. We represent the potentially unknown `breed` and `owner` fields with `Option[String]`. diff --git a/Pattern Matching/The Newtype Pattern/task.md b/Pattern Matching/The Newtype Pattern/task.md index 3e936e5..342e101 100644 --- a/Pattern Matching/The Newtype Pattern/task.md +++ b/Pattern Matching/The Newtype Pattern/task.md @@ -2,25 +2,25 @@ The *newtype pattern* in Scala is a way of creating new types from existing ones that are distinct at compile time but share the same runtime representation. -This can be useful for adding more meaning to simple types, to enforce type safety, and to avoid mistakes. +This approach can be useful for adding more meaning to simple types, enforcing type safety, and avoiding mistakes. For example, consider a scenario where you are dealing with user IDs and product IDs in your code. Both IDs are of type `Int`, but they represent completely different concepts. Using `Int` for both may lead to bugs where you accidentally pass a user ID where a product ID was expected, or vice versa. -The compiler wouldn't catch these errors because both IDs are of the same type, Int. +The compiler wouldn't catch these errors because both IDs are of the same type, `Int`. -With the newtype pattern, you can create distinct types for `UserId` and `ProductId` that wrap around Int, providing more safety: +With the newtype pattern, you can create distinct types for `UserId` and `ProductId` that wrap around `Int`, providing more safety: ```scala 3 case class UserId(value: Int) extends AnyVal case class ProductId(value: Int) extends AnyVal ``` -These are called value classes in Scala. `AnyVal` is a special trait in Scala — when you extend it with a case class +These are called value classes in Scala. 
`AnyVal` is a special trait in Scala — when extended by a case class that has only a single field, you're telling the compiler that you want to use the newtype pattern. -The compiler will use this information to catch any bugs that could arise if you were to confuse integers used -for user IDs with integers used for product IDs. But then, at a later phase, it strips the type information from the data, -leaving only a bare `Int`, so that your code incurs no overhead at runtime. +The compiler uses this information to catch any bugs, such as confusing integers used +for user IDs with those used for product IDs. However, at a later phase, it strips the type information from the data, +leaving only a bare `Int`, so that your code incurs no runtime overhead. Now, if you have a function that accepts a `UserId`, you can no longer mistakenly pass a `ProductId` to it: ```scala 3 @@ -36,8 +36,8 @@ val productId = ProductId(456) val user = getUser(userId) // This is fine ``` -In Scala 3, a new syntax has been introduced for creating newtypes using *opaque type aliases*, but the concept remains the same. -The above example would look like as follows in Scala 3: +In Scala 3, a new syntax has been introduced for creating newtypes using *opaque type aliases*, although the concept remains the same. +The above example would look as follows in Scala 3: ```scala 3 object Ids: @@ -61,16 +61,16 @@ val user = getUser(userId) // This is fine ``` As you can see, some additional syntax is required. -Since an opaque type is just a kind of type alias, not a case class, we need to manually define `apply` methods +Since an opaque type is essentially a type alias and not a case class, we need to manually define `apply` methods for both `UserId` and `ProductId`. -Also, it's essential to define them inside an object or a class — they cannot be top-level definitions. -On the other hand, opaque types integrate very well with extension methods, which is another new feature in Scala 3. +Also, it's essential to define these methods within an object or a class — they cannot be top-level definitions. +On the other hand, opaque types integrate very well with extension methods, another new feature in Scala 3. We will discuss this in more detail later. ### Exercise -One application of the opaque types is expressing units of measure. -For example, in a fitness tracker, the distance can be input by the user in either feet or meters, +One application of opaque types is expressing units of measure. +For example, in a fitness tracker, users can input the distance either in feet or meters, based on their preferred measurement system. -Implement functions for tracking the distance in different units and the `show` function to display +Implement functions for tracking distance in different units and a `show` function to display the tracked distance in the preferred units.
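The hunk context above also elides most of the opaque-type version of the `UserId`/`ProductId` example, so here is a minimal sketch of how it might look; the `Ids` object and `getUser` are taken from the surrounding text, while the body of `getUser` and the exact `apply` definitions are assumptions:

```scala 3
object Ids:
  // Outside Ids, UserId and ProductId are distinct, incompatible types;
  // inside Ids, the compiler treats both as plain Int.
  opaque type UserId = Int
  opaque type ProductId = Int

  // Opaque types are just type aliases, so we supply the `apply` methods ourselves.
  object UserId:
    def apply(value: Int): UserId = value

  object ProductId:
    def apply(value: Int): ProductId = value

import Ids.*

def getUser(userId: UserId): String = s"user $userId" // stand-in implementation

val userId = UserId(123)
val productId = ProductId(456)

val user = getUser(userId)        // This is fine
// val oops = getUser(productId)  // Does not compile: ProductId is not a UserId
```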

diff --git a/README.md b/README.md index bf21ef6..be69fa9 100644 --- a/README.md +++ b/README.md @@ -1,9 +1,9 @@ # Functional Programming in Scala [![official JetBrains project](http://jb.gg/badges/official.svg)](https://confluence.jetbrains.com/display/ALL/JetBrains+on+GitHub)
-This is an introductory course to Functional Programming in Scala. The course is designed for learners who already have some basic knowledge of Scala. The course covers the core concepts of functional programming, including functions as data, immutability, and pattern matching. The course also includes hands-on examples and exercises to help students practice their new skills.
+This is an introductory course on Functional Programming in Scala, designed for learners who already have some basic knowledge of Scala. The course covers core concepts of functional programming, including functions as data, immutability, and pattern matching. It also includes hands-on examples and exercises to help students practice their new skills.

Have fun and good luck!

## Want to know more? -If you have questions about the course or the tasks, or if you find any errors, feel free to ask questions and participate in discussions within the repository [issues](https://github.com/jetbrains-academy/Functional_Programming_Scala/issues). +If you have any questions about the course or the tasks, or if you find any errors, feel free to ask them and participate in discussions within the repository [issues](https://github.com/jetbrains-academy/Functional_Programming_Scala/issues). ## Contribution Please be sure to review the [project's contributing guidelines](https://github.com/jetbrains-academy#contribution-guidelines) to learn how to help the project.