
Data Structures and Algorithms Notes


Overview

This repository contains my personal notes on Data Structures and Algorithms, with links to websites, courses, books, and other valuable materials.

Resources

The following is a list of online courses, YouTube videos, and websites to help learn Data Structures & Algorithms. Note that these are not in order of completion.

| Resource | Type |
| --- | --- |
| LeetCode | Website |
| NeetCode | Website |
| AlgoMap | Website |
| TopSWE | Website |
| LeetCode Patterns | Website |
| NeetCode Roadmap | Website |
| VisuAlgo | Website |
| Data Structure Visualizations | Website |
| Big-O CheatSheet | Website |
| Data Structure Visualization | Website |
| NeetCode Courses | Course |
| C++ Data Structures & Algorithms + LEETCODE Exercises | Course |
| The Last Algorithms Course You'll Need | Course |
| Algorithms and Data Structures Tutorial - Full Course for Beginners | YouTube Video |
| A & DS English Course - Pavel Mavrin | YouTube Playlist |
| NeetCode YouTube Videos | YouTube Channel |
| Coding Interview University | GitHub Repository |
| Awesome Algorithms | GitHub Repository |

Notes

Getting Started

What is a Data Structure?

A data structure is a systematic way of organizing, managing, and storing data to enable efficient processing and retrieval. It defines how data elements relate to one another, how they are accessed, which operations can be performed on them, and how efficient those operations are.

It can be compared to a container—just as different containers hold items in various ways, a data structure organizes and stores data in a specific manner according to our needs.

What are the types of data structures?

Data structures are classified into primitive and non-primitive types.

  • Primitive Data Structures:
    • Integer, Float, Character, Boolean.
  • Non-Primitive Data Structures:
    • Linear: Arrays, Linked Lists, Stacks, Queues.
    • Non-Linear: Trees, Graphs.
    • Hash-Based: Hash Tables.

What are Primitive Data Structures?

Primitive Data Structures are the fundamental types of data directly supported by a programming language. They store a single value and are the building blocks for more complex data structures.

The four core primitive types (int, float, char, bool) are supported by most languages, but some languages extend the list with types such as string, byte, short, long, double, and unsigned int. Whether these count as primitive depends on the language.

What are Non-Primitive Data Structures?

Non-primitive data structures are complex structures that store multiple values and are derived from primitive data types. They can store homogeneous or heterogeneous elements, support dynamic size, and perform complex operations.

Common Types of Non-Primitive Data Structures:

  • Arrays: Fixed-size collections of homogeneous elements.
  • Linked Lists: Chain of nodes, each pointing to the next.
  • Stacks: Follow LIFO (Last In First Out) principle.
  • Queues: Follow FIFO (First In First Out) principle.
  • Trees: Hierarchical structure of nodes (e.g., binary tree, AVL tree).
  • Graphs: Non-linear, consisting of vertices and edges.
  • Hash Tables: Key-value pairs with constant time access.
  • Heaps: Tree-based structures with heap property (max or min).

What is a Linear Data Structure?

A linear data structure is one in which data elements are arranged sequentially, with each element attached to its previous and next adjacent elements.

Examples of linear data structures are array, stack, queue, linked list, etc.

What is a Non-Linear Data Structure?

A non-linear data structure is one where elements do not have a sequential relationship. Instead, elements can have multiple connections to other elements. Non-linear structures are used to represent hierarchical or graph-based relationships, such as trees or graphs.

Examples of non-linear data structures are trees and graphs.

What is a Hash-Based Data Structure?

A hash-based data structure uses a hash function to map keys to indices or addresses, enabling fast retrieval, insertion, and deletion of data. The hash function transforms the key into an integer index, which is used to store the corresponding value. Key examples include hash tables and hash sets.

Simpler Analogy: A hash-based data structure is like a library system. Imagine each book in the library has a unique ID (the key), and the hash function is like the system that turns that ID into a specific shelf number (the index). When you want to find or add a book, the system quickly tells you the exact shelf number where it is located. This way, you can easily retrieve, add, or remove books without having to search through all the shelves.

Based on this analogy, hash tables would be where books are stored with their details and hash sets would be where only the books themselves are stored without extra details.
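As a rough sketch of this idea (the bucket count and the modulo step below are illustrative assumptions, not any particular language's implementation):

```python
# Map a string key to one of 8 buckets using Python's built-in hash().
NUM_BUCKETS = 8

def bucket_index(key):
    """Hash the key, then fold it into the bucket range with modulo."""
    return hash(key) % NUM_BUCKETS

# Store and retrieve a value via its computed index.
buckets = [[] for _ in range(NUM_BUCKETS)]
idx = bucket_index("book-1842")
buckets[idx].append(("book-1842", "The Art of Computer Programming"))

# Lookup follows the same key -> index path, so no full scan is needed.
found = [v for k, v in buckets[idx] if k == "book-1842"]
print(found[0])
```

The key is hashed to the same index on insertion and lookup, which is what makes retrieval fast without searching every bucket.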

What is a Static Data Structure?

A static data structure has a fixed size at the time of creation and cannot be resized during the program's execution. Memory allocation is predetermined, making it memory-efficient, but lack of flexibility is its key limitation.

An example: Arrays: Fixed-size, contiguous collection of elements.

(Note: Arrays can be dynamic in some contexts, but as a static structure, their size is fixed at creation.)

What is a Dynamic Data Structure?

A dynamic data structure can grow or shrink in size during the program’s execution. Memory is allocated and deallocated as needed, allowing for flexibility in managing elements. This structure is ideal when the number of elements is not fixed.

Examples:

  • Linked Lists: Collection of nodes that can grow or shrink dynamically.
  • Dynamic Stacks: Stack that resizes as elements are added or removed.
  • Dynamic Queues: Queue that resizes dynamically with elements being added/removed.
  • Hash Tables: Resizes automatically when a capacity threshold is reached.
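A quick sketch of dynamic growth, using Python's built-in list (a dynamic array) as the example:

```python
# A Python list is a dynamic array: it resizes itself as elements arrive.
items = []
for i in range(5):
    items.append(i)   # capacity grows automatically when needed
print(len(items))     # 5

items.pop()           # shrinking is just as transparent
print(len(items))     # 4
```

No size is declared up front; memory management happens behind the scenes as elements are added and removed.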

What is Big O Notation?

Big O notation is a mathematical concept used to describe the efficiency of algorithms in terms of time complexity and space complexity. It measures how an algorithm's runtime or memory usage grows as the input size (usually denoted as n) grows.

Big O analysis covers two dimensions:

  • Time Complexity: Measures how the runtime of an algorithm grows as the input size grows.
  • Space Complexity: Measures how memory usage increases as input size grows.

(Figure: Big-O complexity chart)

Common Big O Notations


O(1): Constant Time

An algorithm has O(1) complexity if its execution time does not depend on the input size (n). It always takes a constant number of operations, regardless of how large the input is.

Short description: Time does not depend on input size - always takes the same time.

Examples:

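A minimal sketch of two classic constant-time operations in Python (list indexing and average-case dictionary lookup):

```python
def get_first(items):
    """O(1): indexing a list does not depend on how long the list is."""
    return items[0]

def lookup(table, key):
    """O(1) on average: dict access hashes the key directly to a slot."""
    return table[key]

print(get_first([7, 8, 9]))           # 7
print(lookup({"a": 1, "b": 2}, "b"))  # 2
```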

O(log n): Logarithmic Time

An algorithm has O(log n) complexity if the number of operations grows only logarithmically as the input size (n) increases. Instead of processing each element one by one, the algorithm repeatedly divides the problem into smaller parts, typically by a factor of 2 (or another base).

This means that the time required to complete the algorithm grows much more slowly compared to linear time complexities, such as O(n). Logarithmic time complexity is particularly efficient for large data sets, making it a desirable characteristic in algorithm design.

Short Description: Reduces problem size each step.

Examples:

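The standard example is binary search on a sorted list, where each comparison halves the remaining search range:

```python
def binary_search(sorted_items, target):
    """O(log n): each comparison halves the remaining search range."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # target not present

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```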

O(n): Linear Time

An algorithm has O(n) complexity if its execution time increases linearly with the input size (n). This means that if the input size doubles, the time taken by the algorithm also roughly doubles.

Short Description: Time increases linearly with the input size.

Examples:

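A simple illustration is linear search, which in the worst case touches every element once:

```python
def linear_search(items, target):
    """O(n): worst case examines every element exactly once."""
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

print(linear_search([4, 2, 9, 1], 9))  # 2
```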

O(n log n): Linearithmic Time

O(n log n), or linearithmic time complexity, represents an algorithm whose execution time grows in proportion to the product of the size of the input data set, denoted as n, and the logarithm of that size.

This means that as the input size increases, the time required to complete the algorithm increases at a rate that is faster than linear (O(n)) but slower than quadratic (O(n²)). Linearithmic time complexity is particularly significant in algorithms that involve both linear iterations and logarithmic divisions.

Short Description: The runtime of the algorithm grows linearly with the input AND with the logarithm of the input size. Common in Sorting Algorithms.

Examples:

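Merge sort is the textbook case: log n levels of splitting, with O(n) merge work at each level. A compact sketch:

```python
def merge_sort(items):
    """O(n log n): log n levels of splitting, O(n) merge work per level."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]
```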

O(n^2): Quadratic Time

O(n²), or quadratic time complexity, describes an algorithm whose execution time grows proportionally to the square of the size of the input data set, denoted as n.

This means that if the input size doubles, the execution time increases by a factor of four (since 2² = 4). Quadratic time complexity is commonly associated with algorithms that involve nested iterations over the input data, where each element is compared or processed with every other element.

Short Description: Typical in Algorithms with Nested Loops.

Examples:

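A typical nested-loop example is the brute-force duplicate check, which compares every pair of elements:

```python
def has_duplicate(items):
    """O(n^2): the nested loops compare every pair of elements."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

print(has_duplicate([3, 1, 4, 1]))  # True
print(has_duplicate([3, 1, 4]))     # False
```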

O(2^n): Exponential Time

An algorithm has O(2ⁿ) complexity if its execution time doubles with each additional input element (n). This means that if the input size increases by 1, the number of operations doubles.

Exponential time complexity typically arises in problems that involve combinatorial growth, such as recursion with multiple branching paths, brute-force approaches, and exhaustive searches. These algorithms become extremely slow for large inputs and are generally impractical beyond small values of n.

Short Description: Doubles with each additional input.

Examples:

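Generating all subsets (the power set) is a clean illustration: each additional element doubles the output, so the work is O(2ⁿ):

```python
def subsets(items):
    """O(2^n): every element doubles the number of subsets to build."""
    if not items:
        return [[]]
    rest = subsets(items[1:])
    # Each subset appears twice: once without the first element, once with it.
    return rest + [[items[0]] + s for s in rest]

print(len(subsets([1, 2, 3])))  # 8 subsets for n = 3
```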

O(n!): Factorial Time

An algorithm has O(n!) complexity if its execution time grows with the factorial of the input size, as happens when generating every permutation of n elements. Factorial growth is extremely fast, even faster than exponential growth (O(2ⁿ)). As a result, O(n!) algorithms become impractical very quickly beyond small values of n.

Short Description: Extremely inefficient! Runtime grows extremely fast!

Examples:

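Generating all permutations is the canonical O(n!) example: n choices for the first position, n - 1 for the next, and so on:

```python
def permutations(items):
    """O(n!): n choices for the first slot, n - 1 for the next, and so on."""
    if len(items) <= 1:
        return [items[:]]
    result = []
    for i in range(len(items)):
        rest = items[:i] + items[i + 1:]
        for perm in permutations(rest):
            result.append([items[i]] + perm)
    return result

print(len(permutations([1, 2, 3])))  # 6 permutations for n = 3
```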

Common Big O Notations Summary

(Figure: Big-O notations summary table)

Additional Big O Notations

O(n * m): Bilinear Time

An algorithm has O(n * m) complexity when its execution time is proportional to the product of two independent input sizes, n and m. This typically occurs when processing two separate data structures or iterating over a 2D matrix where n represents the number of rows and m represents the number of columns.

Unlike O(n²), which is the special case where the two sizes are equal, O(n * m) allows n and m to differ, making it more general than quadratic complexity.

Short Description: This is the time complexity of a nested loop where the inner loop runs m times for each iteration of the outer loop.

Examples:

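A typical case is summing a 2D matrix with n rows and m columns, visiting each cell exactly once:

```python
def matrix_sum(matrix):
    """O(n * m): visits each of the n rows and each of its m columns once."""
    total = 0
    for row in matrix:     # n iterations
        for value in row:  # m iterations per row
            total += value
    return total

print(matrix_sum([[1, 2, 3], [4, 5, 6]]))  # 21
```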

O(n^3): Cubic Time

An algorithm has O(n³) complexity when its execution time grows proportionally to the cube of the input size: if the input size doubles, the runtime increases by a factor of eight (2³ = 8). This type of complexity is typically seen in algorithms with three nested loops, where each loop iterates over the input data and every combination of three elements is processed.

Short Description: The runtime of the algorithm grows cubically with the input size.

Examples:

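As one sketch, a brute-force count of zero-sum triples uses three nested loops over every combination of indices:

```python
def count_zero_triplets(nums):
    """O(n^3): three nested loops examine every triple of indices."""
    n, count = len(nums), 0
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                if nums[i] + nums[j] + nums[k] == 0:
                    count += 1
    return count

print(count_zero_triplets([-1, 0, 1, 2, -2]))  # 2  ({-1, 0, 1} and {0, 2, -2})
```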

O(cⁿ): Exponential Time Complexity

An algorithm has O(cⁿ) complexity when its execution time grows exponentially with the size of the input, where c is a constant greater than 1, and n is the size of the input. This type of growth is extremely fast, making the algorithm impractical for even relatively small values of n (typically, when n > 20).

In this case, for each additional unit of n, the time taken to execute the algorithm grows by a constant factor (c).

The growth rate of O(cⁿ) is extremely steep. For instance, if c = 2, then for an input size of 10 the number of operations is 2¹⁰ = 1,024; if the input size increases to 20, it jumps to 2²⁰ = 1,048,576.

Short Description: The runtime of the algorithm grows exponentially with the input size.

Examples:

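Naive recursive Fibonacci is a familiar sketch of O(cⁿ) growth: each call branches into two more calls, giving roughly cⁿ total calls (here c is the golden ratio, about 1.618):

```python
def fib(n):
    """Naive recursion: two branches per call give roughly O(c^n) growth,
    where c is the golden ratio (about 1.618)."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
```

Memoization collapses this to O(n), which is why exponential algorithms are often the starting point for dynamic-programming optimizations.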

O(sqrt(n)): Square Root Time Complexity

O(√n), or square root time complexity, describes an algorithm whose execution time grows proportionally to the square root of the size of the input data set, denoted as n.

This means that as the input size increases, the time required to complete the algorithm increases at a rate proportional to n^(1/2). Square root time complexity is relatively efficient compared to linear or quadratic complexities, making it suitable for certain types of problems.

Short Description: The runtime of the algorithm grows as the square root of the input size.

Examples:

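Trial-division primality testing is the classic O(√n) example: a composite number n must have a divisor no larger than √n, so the loop can stop there:

```python
def is_prime(n):
    """O(sqrt(n)): a composite n must have a divisor no larger than sqrt(n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:  # the loop runs at most sqrt(n) times
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime(97))  # True
print(is_prime(91))  # False (7 * 13)
```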

Data Structures

Arrays

Linked Lists

Stacks

Queues

Trees

Hash Tables

Heaps / Priority Queues

Graphs

Tries

Algorithms

Sorting

Searching

Two Pointers

Sliding Window

Recursion

Dynamic Programming

Greedy

Backtracking

Bit Manipulation

Divide & Conquer

Graph

🔼 Back to Top
