This repo contains Pythonic versions of exercises based on Stephen Grider's The Coding Interview Bootcamp: Algorithms + Datastructures found on Udemy. The JavaScript repo is here.
I created this because I wanted to apply what I learned in JavaScript from Stephen's course toward learning Python.
This repo is not intended to be a rip-off of Stephen's work or a replacement for his very informative course targeted at JavaScript. Consider it more of a Pythonic ode.
I pinged him about doing this, but didn't get a response.
This repo is also not a substitute for the ton of material he covers, the erudite teaching he provides, and the practical experience he shares throughout the course lectures. In fact, you're not likely to get a whole lot out of these exercises without having access to the course material. So if you really want a good start toward learning the concepts behind these exercises (in JavaScript) and then applying them in Python, go buy his course and enjoy what I've created here!
- Oodles of exercises to practice challenging coding questions
- Extensive unit tests to verify your solutions
- Example solutions (often more than one) for every exercise
- Python 3: some tests won't work if you use Python 2
- A testing framework (e.g. pytest or unittest)
- A text editor: Atom, Sublime Text, Vim, Visual Studio Code, etc.
- A terminal
If you're running Windows 10, try Windows Subsystem for Linux.
There is an excellent Python extension for Visual Studio Code. Check it out here.
- Go to a place on your machine where you want to download this repo (e.g. cd ~/workspace/)
- Run one of the following commands:
git clone git@github.com:kimfucious/pythonic_algocasts.git
git clone https://github.com/kimfucious/pythonic_algocasts.git
This repo has both a requirements.txt file and a Pipfile for making dependency installation easy.
You don't need to install any dependencies if you're planning on just using unittest to test your solutions.
If you want to use pytest, you can install it manually (see below); otherwise it will be installed by pip or pipenv along with the following packages:
- A beta version of HTMLTestRunner (creates HTML reports)
- livereload (a small dev server to display HTML reports)
- pylint (a Python code linter)
- other sub-dependencies
I'm using pip3 throughout this documentation, but pip should also work fine.
You probably already have pip, but here's some info on installation and upgrading.
You probably also want to create a new Python environment. To do that, navigate to this downloaded repo's root directory, and run the following:
python3 -m venv env
source env/bin/activate
The above commands assume you are using bash or zsh. Refer to the table below or see here for using venv in other shells.
Platform | Shell | Command to activate virtual environment
---|---|---
Posix | bash/zsh | `$ source <venv>/bin/activate`
 | fish | `$ . <venv>/bin/activate.fish`
 | csh/tcsh | `$ source <venv>/bin/activate.csh`
Windows | cmd.exe | `C:\> <venv>\Scripts\activate.bat`
 | PowerShell | `PS C:\> <venv>\Scripts\Activate.ps1`
Run pip3 list, and you should see only two packages:
Package Version
---------- -------
pip 10.0.1
setuptools 39.0.1
You'll get prompted to upgrade pip at this point. Go ahead and do that, if you want.
Next, run:
pip3 install -r requirements.txt
Once this completes, run pip3 list
again, and Bob's your uncle.
Pipenv is awesome. You can learn about it here.
After installing Pipenv, navigate to this project's root and run the following:
pipenv install
pipenv shell
Do not remove any of the
__init__.py
files from within the repo, or it will break test discovery.
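By way of illustration, test discovery relies on a layout roughly like this (the file names here are hypothetical; check the repo for the real structure):

```
pythonic_algocasts/
├── exercises/
│   ├── __init__.py
│   └── reversestring.py
├── tests/
│   ├── __init__.py
│   └── test_reversestring.py
└── run_all_tests.py
```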
Starter files for each exercise, with instructions that describe what the solution should do.
While you can, for the most part, approach these exercises in any order, the course does tend to build upon itself, so you may want to follow the order in which the lectures are covered in Stephen's course.
- String Reversal (reversestring)
- Palindromes
- Integer Reversal (reverseint)
- MaxChars
- FizzBuzz
- List Chunking (chunk)
- Anagrams
- Sentence Capitalization (capitalize)
- Printing Steps (steps)
- Two Sided Steps (pyramid)
- Vowels
- Matrix Spiral (matrix)
- Fibonacci (fib and fib_memoized)
- Queue
- Weave
- Stack
- Queue From Stack (qfroms)
- Linked Lists (linkedlist)
- Midpoint
- Circular
- From Last (from last)
- Tree
- Level Width (levelwidth)
- Binary Search Tree (bst)
- Validating Binary Search Trees (validate)
- Events
- Bubble Sort, Selection Sort, and Merge Sort (sorting)
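As a taste of what the solutions look like, here's a hedged sketch of the fib/fib_memoized pair (the actual starter files define their own signatures and instructions, so treat this as illustration only):

```python
from functools import lru_cache

def fib(n):
    """Naive recursive Fibonacci: exponential time, fine for small n."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

@lru_cache(maxsize=None)
def fib_memoized(n):
    """Memoized variant: each n is computed only once, so it runs in linear time."""
    if n < 2:
        return n
    return fib_memoized(n - 1) + fib_memoized(n - 2)

print(fib(10))           # 55
print(fib_memoized(39))  # 63245986, instant -- the naive version would crawl here
```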
Some of the exercises rely on pre-built helpers, such as classes, linked lists, queues, and stacks. You'll build such things as you work through the exercises; however, these helpers are in place for exercises with a different, specific focus.
There are two HTML files in here:
- The events_example.html file is for the events exercise. Frankly, you won't get much out of this unless you have access to Stephen's course.
- The linkedlist_directions.html file is a set of instructions for the fairly long linked list exercise.
This is where HTML test reports will be generated.
I've set up an HTML report generator that will allow you to display the test results in a web browser.
You can get this going by running the following command from the root of this project:
python3 reports.py
To view the reports, open http://localhost:8080 in your browser, after running the above command.
There are a few things happening to make this work.
- I've set up a test suite containing all of the tests in the run_all_tests.py file.
- This file also uses James Sloan's beta version of HTMLTestRunner. I'm using the beta because James has made it possible to combine multiple test cases into one report. If/when this pull request comes through, I'll update this project to use the updated release.
- The reports.py file launches livereload, which is a dev server that serves up the test_results.html file.
- livereload is set up to watch all *.py files in the exercises directory. Any change to these files kicks off the run_all_tests.py process, which overwrites any existing test_results.html file in the reports directory, waits a few seconds, and automatically reloads the browser window.
If you're running into an issue where the browser refreshes before the tests complete, like if you're running fib(39), you can adjust the delay in the reports.py file on the line shown below:
server.watch("./exercises/*.py",
shell("python3 run_all_tests.py --quiet"), delay=3)
This process is not perfect. Admittedly, the console output is not pretty. If you want nice output in the console, check out pytest in Testing. Also, if there's something really wrong (e.g. a SyntaxError) with an exercise solution, the process will crash before any report is generated. This is usually indicated by something red in the console output that might resemble the following, with a bunch of nasty stuff after it, usually ending with the type of error being thrown:
[E 180819 00:44:57 server:75] yadda, yadda, yadda SyntaxError: invalid syntax...
When this does happen, the report will reload in the browser, but it will be using old data, so the results will not be accurate.
At this point, you can try running your tests manually (see Testing below) to see more clearly what's going on and fix your solution.
Lastly, to generate HTML reports manually without livereload, run the following from the root directory:
python3 run_all_tests.py
Here are the example solutions to the exercises, which you can use to compare against your own work or to peek at for inspiration.
Feel free to recommend your solution(s) via a pull request if you come up with anything better, cleaner, clearer, more Pythonic, etc. Please use docstrings to document your solutions, instead of comments.
Example:
def function(x):
"""
My super cool solution is based on using kittens, rainbows, and lambdas.
You can read about those things at http://kittensrainbowsandlambdas.org
"""
pass
Solutions containing advertisements or other forms of spam/self-promotion won't be considered. This is a learning space, not a marketplace.
There is one test file for each exercise. The naming convention should make it evident what tests what.
All of the test files should be considered done, meaning that you should not need to edit any of them, except for commenting out the @unittest.skip()
lines when you're ready to test (see Skipping and Unskipping Tests). They should just work and are just there to test the solutions that you create for the exercises.
If you find any mistakes in the tests or have any recommendations to improve them, please raise an issue.
Some of the test files contain one or more test cases (i.e. classes), and these can contain one to several test methods. See the Testing section for more info on tests.
Note that the __init__.py
files that you see in the directory structure (and the directory structure itself) are there to enable test discovery. Don't move things around or delete any __init__.py
files, unless you know that you really want to.
At the risk of sounding redundant, all tests have been tested using both unittest and pytest.
While unittest is built into Python, pytest requires installation.
Here's some documentation:
If you're using unittest, you can run test discovery by doing the following:
- Navigate to the root directory of this repo in your terminal
- Run
python3 -m unittest discover
As far as I know, pytest doesn't need a separate discovery command; it discovers tests automatically. And, along those lines, python3 -m unittest will also discover tests, so take that for what it is.
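If you're curious what discovery actually does, here's a small self-contained sketch of unittest's discovery machinery. It builds a throwaway directory with one hypothetical test file, then runs the same TestLoader.discover() that python3 -m unittest discover uses, matching the default pattern test*.py:

```python
import pathlib
import tempfile
import unittest

# Build a throwaway directory containing a single (made-up) test file,
# then let TestLoader.discover() find it the same way the CLI does.
with tempfile.TemporaryDirectory() as tmp:
    (pathlib.Path(tmp) / "test_demo.py").write_text(
        "import unittest\n"
        "class DemoTest(unittest.TestCase):\n"
        "    def test_passes(self):\n"
        "        self.assertTrue(True)\n"
    )
    # discover() walks start_dir, importing every file matching "test*.py".
    suite = unittest.TestLoader().discover(start_dir=tmp)
    print(suite.countTestCases())  # 1
```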
To run all unit tests:
- Navigate to the root directory of this repo in your terminal
- Do one of the below:
Remember that we're using Python 3. Don't forget to type python3 (not python), and don't forget the -m bit.
python3 -m pytest
python3 -m unittest
To run an individual unit test:
- Navigate to the root directory of this repo.
- Run either of the below, where test_exercise.py is the name of the thing you're trying to test, like test_anagrams.py, for example.
You can also navigate to the tests directory and run python3 -m pytest test_exercise.py.
python3 -m pytest tests/test_exercise.py
python3 -m unittest tests/test_exercise.py
Adding --verbose to either of the two above commands will give you more output in your test results.
You'll have noticed that all of the tests are skipped when you first try to run them. This is so that you don't see a load of failures and/or errors on exercises that you haven't even attempted to work on yet.
The following line (i.e. a decorator) in a test file will skip an entire test case (multiple methods) or a single test method, depending on where it's located.
@unittest.skip("skip the following stuff")
In most of the test files, there is one skip decorator located just above the class defining the test case. In other, more lengthy test files, like Linked List, there might be more test cases and, thus, more skip decorators, allowing you to open up the test cases gradually as you progress through the exercises.
To allow a test case to run, simply comment out the decorator, like this (or delete it if you're feeling bold):
# @unittest.skip("skip the following stuff")
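To make the two placements concrete, here's a minimal, hypothetical test file (the class and method names are made up, not taken from the repo). Commenting out either decorator un-skips the tests beneath it:

```python
import unittest

@unittest.skip("skip the whole test case")  # class level: skips every method below
class ReverseStringTest(unittest.TestCase):
    def test_basic(self):
        self.assertEqual("cba", "abc"[::-1])

class PalindromeTest(unittest.TestCase):
    @unittest.skip("skip just this method")  # method level: skips only this test
    def test_not_ready(self):
        self.fail("never runs while the decorator is active")

    def test_ready(self):  # no decorator, so this one runs
        self.assertEqual("abba", "abba"[::-1])
```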