lectures/finite_markov.md (10 additions & 10 deletions)
@@ -212,11 +212,11 @@ One natural way to answer questions about Markov chains is to simulate them.
 
 (To approximate the probability of event $E$, we can simulate many times and count the fraction of times that $E$ occurs).
 
-Nice functionality for simulating Markov chains exists in [QuantEcon.py](https://quantecon.org/quantecon-py).
+Nice functionality for simulating Markov chains exists in [QuantEcon.py](https://quantecon.org/quantecon-py/).
 
 * Efficient, bundled with lots of other useful routines for handling Markov chains.
 
-However, it's also a good exercise to roll our own routines --- let's do that first and then come back to the methods in [QuantEcon.py](https://quantecon.org/quantecon-py).
+However, it's also a good exercise to roll our own routines --- let's do that first and then come back to the methods in [QuantEcon.py](https://quantecon.org/quantecon-py/).
 
 In these exercises, we'll take the state space to be $S = 0,\ldots, n-1$.
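The hunks below refer to a homemade `mc_sample_path` routine defined elsewhere in the lecture. A minimal sketch consistent with the later call `mc_sample_path(P, sample_size=1_000_000)` (the body here is an assumption for illustration, not the lecture's actual code) might look like:

```python
import numpy as np

def mc_sample_path(P, init=0, sample_size=1_000):
    # Simulate a path of a finite Markov chain with transition matrix P,
    # starting from state `init`.
    P = np.asarray(P)
    cdfs = np.cumsum(P, axis=1)  # row-wise CDFs for inverse-transform draws
    X = np.empty(sample_size, dtype=np.int64)
    X[0] = init
    for t in range(sample_size - 1):
        # next state: first index where the current row's CDF reaches u
        X[t + 1] = np.searchsorted(cdfs[X[t]], np.random.uniform())
    return X

np.random.seed(0)
X = mc_sample_path([[0.4, 0.6], [0.2, 0.8]], sample_size=10_000)
```

The fraction of time the path spends in state 0 then approximates the stationary probability of that state.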
@@ -231,7 +231,7 @@ The Markov chain is then constructed as discussed above. To repeat:
 
 To implement this simulation procedure, we need a method for generating draws from a discrete distribution.
 
-For this task, we'll use `random.draw` from [QuantEcon](https://quantecon.org/quantecon-py), which works as follows:
+For this task, we'll use `random.draw` from [QuantEcon](https://quantecon.org/quantecon-py/), which works as follows:
 
 ```{code-cell} python3
 ψ = (0.3, 0.7)  # probabilities over {0, 1}
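The code cell above is cut off at the hunk boundary. For reference, a self-contained sketch of drawing from a discrete distribution via its cumulative distribution, mimicking (as an assumption about its internals) the inverse-CDF approach behind `qe.random.draw`, is:

```python
import numpy as np

ψ = (0.3, 0.7)                   # probabilities over {0, 1}
cdf = np.cumsum(ψ)               # cumulative distribution: (0.3, 1.0)
u = np.random.uniform(size=5)    # 5 uniform draws on [0, 1)
draws = np.searchsorted(cdf, u)  # first index where the CDF reaches u
```

Each draw equals 0 with probability 0.3 and 1 with probability 0.7.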
@@ -294,7 +294,7 @@ always close to 0.25, at least for the `P` matrix above.
 
 ### Using QuantEcon's Routines
 
-As discussed above, [QuantEcon.py](https://quantecon.org/quantecon-py) has routines for handling Markov chains, including simulation.
+As discussed above, [QuantEcon.py](https://quantecon.org/quantecon-py/) has routines for handling Markov chains, including simulation.
 
 Here's an illustration using the same P as the preceding example
@@ -306,7 +306,7 @@ X = mc.simulate(ts_length=1_000_000)
 np.mean(X == 0)
 ```
 
-The [QuantEcon.py](https://quantecon.org/quantecon-py) routine is [JIT compiled](https://python-programming.quantecon.org/numba.html#numba-link) and much faster.
+The [QuantEcon.py](https://quantecon.org/quantecon-py/) routine is [JIT compiled](https://python-programming.quantecon.org/numba.html#numba-link) and much faster.
 
 ```{code-cell} ipython
 %time mc_sample_path(P, sample_size=1_000_000) # Our homemade code version
@@ -556,7 +556,7 @@ $$
 It's clear from the graph that this stochastic matrix is irreducible: we can eventually
 reach any state from any other state.
 
-We can also test this using [QuantEcon.py](https://quantecon.org/quantecon-py)'s MarkovChain class
+We can also test this using [QuantEcon.py](https://quantecon.org/quantecon-py/)'s MarkovChain class
 
 ```{code-cell} python3
 P = [[0.9, 0.1, 0.0],
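The `MarkovChain` cell above is truncated at the hunk boundary. As a self-contained illustration of what an irreducibility test verifies (only the first row of `P` appears above; the remaining rows here are hypothetical), one can check that every state is reachable from every other state:

```python
import numpy as np

P = np.array([[0.9, 0.1, 0.0],   # first row as shown in the hunk above
              [0.4, 0.4, 0.2],   # remaining rows: hypothetical, for illustration
              [0.1, 0.3, 0.6]])

# Irreducibility holds iff sum_{k=0}^{n-1} P^k has strictly positive
# entries, i.e. every state reaches every other in at most n-1 steps.
n = P.shape[0]
reach = np.zeros_like(P)
Pk = np.eye(n)
for _ in range(n):
    reach += Pk
    Pk = Pk @ P
irreducible = bool(np.all(reach > 0))
```

With QuantEcon installed, `qe.MarkovChain(P).is_irreducible` reports the same property directly.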
@@ -775,7 +775,7 @@ One option is to regard solving system {eq}`eq:eqpsifixed` as an eigenvector problem:
 $\psi$ such that $\psi = \psi P$ is a left eigenvector associated
 with the unit eigenvalue $\lambda = 1$.
 
-A stable and sophisticated algorithm specialized for stochastic matrices is implemented in [QuantEcon.py](https://quantecon.org/quantecon-py).
+A stable and sophisticated algorithm specialized for stochastic matrices is implemented in [QuantEcon.py](https://quantecon.org/quantecon-py/).
 
 This is the one we recommend:
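The eigenvector characterization above can be sketched directly in NumPy (the 2×2 `P` here is hypothetical, chosen so that state 0 receives mass 0.25, consistent with the fraction reported earlier in the file):

```python
import numpy as np

P = np.array([[0.4, 0.6],
              [0.2, 0.8]])  # hypothetical stochastic matrix

# ψ = ψ P means ψ is a left eigenvector of P (an eigenvector of P.T)
# associated with the unit eigenvalue λ = 1.
vals, vecs = np.linalg.eig(P.T)
i = np.argmin(np.abs(vals - 1))  # locate λ = 1
ψ = np.real(vecs[:, i])
ψ = ψ / ψ.sum()                  # normalize to a probability distribution
```

The specialized routine the lecture recommends is exposed through `qe.MarkovChain(P).stationary_distributions`.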
@@ -1322,7 +1322,7 @@ $$
 
 Tauchen's method {cite}`Tauchen1986` is the most common method for approximating this continuous state process with a finite state Markov chain.
 
-A routine for this already exists in [QuantEcon.py](https://quantecon.org/quantecon-py) but let's write our own version as an exercise.
+A routine for this already exists in [QuantEcon.py](https://quantecon.org/quantecon-py/) but let's write our own version as an exercise.
 
 As a first step, we choose
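A minimal sketch of the `approx_markov(rho, sigma_u, m=3, n=7)` exercise, following one standard formulation of Tauchen's method (this is an assumption about the construction, not the lecture's solution; it uses `scipy.stats.norm` for the Gaussian CDF):

```python
import numpy as np
from scipy.stats import norm

def approx_markov(rho, sigma_u, m=3, n=7):
    # Tauchen (1986): discretize y' = rho * y + u, with u ~ N(0, sigma_u^2).
    std_y = sigma_u / np.sqrt(1 - rho**2)      # unconditional std dev of y
    x = np.linspace(-m * std_y, m * std_y, n)  # evenly spaced state grid
    s = x[1] - x[0]                            # grid step
    P = np.empty((n, n))
    for i in range(n):
        z = x - rho * x[i]  # grid points measured from the conditional mean
        # interior states: mass of N(0, sigma_u^2) within half a step of x[j]
        P[i, :] = norm.cdf((z + s / 2) / sigma_u) - norm.cdf((z - s / 2) / sigma_u)
        # boundary states absorb the tails
        P[i, 0] = norm.cdf((z[0] + s / 2) / sigma_u)
        P[i, -1] = 1 - norm.cdf((z[-1] - s / 2) / sigma_u)
    return x, P
```

Each row of the returned `P` is a valid conditional distribution, so the rows sum to one by construction.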
@@ -1363,13 +1363,13 @@ The exercise is to write a function `approx_markov(rho, sigma_u, m=3, n=7)` that