Programming problems & Data Structures In Rust
A collection of Rust implementations for common programming problems, interview questions, and fundamental data structures. This project serves as a resource for those looking to understand algorithmic problem-solving in Rust with detailed explanations and efficient implementations.
⚡ About the Project
This repository provides well-structured and optimized solutions to classical algorithmic problems using Rust. Each problem includes:
- Clear Problem Descriptions: Understanding the problem statement.
- Rust Implementations: Efficiently written, idiomatic Rust code.
- Algorithmic Insights: Explanation of the approach, trade-offs, and optimizations.
- Test Cases: Ensuring correctness and performance.
✨ Featured Problems & Implementations
The project covers a variety of topics, including:
🔢 Array & Number Problems
- Segregate negatives & positives
- Buy and sell stock once
- Contains duplicate
- Contains nearby duplicate
- Maximum subarray sum
- Two sum
- Maximum product subarray
- Product of array except self
- K nearest points from a given point
🔠 String Problems
- Segregate RGB characters
- Longest common prefix
- Longest common suffix
📈 Dynamic Programming & Recursion
- Longest increasing subsequence
- Number of ways to reach matrix end
- Palindrome partitioning
- N-Queens problem
- Subsets (backtracking, iterative, bit manipulation)
- Combination sum (multiple variants)
🔍 Search & Sorting
- Minimum in sorted rotated array
- Insert a new interval to a list of sorted intervals
- Search in a row and column-wise sorted matrix
- Search a target in a sorted rotated array
🌳 Data Structures Implementations
- Binary Search Tree (BST) with parent pointers
- Trie (Prefix Tree) for efficient string searching
- Min Heap & Max Heap for priority queue operations
🦀 Why Rust?
Rust provides memory safety, performance, and concurrency without the overhead of garbage collection. This makes it an excellent choice for implementing algorithmic solutions that require efficiency and reliability.
📜 Example: Binary Search Tree (BST) with Parent Pointers
A BST (Binary Search Tree) allows efficient searching, insertion, and deletion of elements. Our implementation uses parent pointers to facilitate operations such as node deletion efficiently.
🌲 Node Definition
struct Node<T: Ord + Default + Clone + std::fmt::Debug> {
key: T,
left: Option<Rc<RefCell<Tree<T>>>>,
right: Option<Rc<RefCell<Tree<T>>>>,
parent: Option<Weak<RefCell<Node<T>>>>,
}
🔧 Key Features
- Uses Rc<RefCell<Node<T>>> for shared ownership and interior mutability.
- Weak parent references to prevent memory leaks.
🏔️ Example: Max Heap Implementation
A Max Heap ensures that the parent node is always larger than its children. It is useful for implementing priority queues.
#[derive(Debug)]
pub struct MaxHeap<T: Ord> {
elements: Vec<T>,
}
🔹 Heap Operations
- Insertion: Adds an element and maintains heap property.
- Deletion: Removes the largest element and rebalances the heap.
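As an illustration of the insertion step, here is a minimal sketch of what a sift-up based insert could look like for the MaxHeap struct shown above - it is not necessarily the repository's exact code:
impl<T: Ord> MaxHeap<T> {
    //Push the new element at the end and sift it up until
    //the parent is no longer smaller than it
    pub fn insert(&mut self, value: T) {
        self.elements.push(value);
        let mut idx = self.elements.len() - 1;
        while idx > 0 {
            let parent = (idx - 1) / 2;
            if self.elements[idx] > self.elements[parent] {
                self.elements.swap(idx, parent);
                idx = parent;
            } else {
                break;
            }
        }
    }
}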
💡 Who Is This For?
This project is perfect for:
- Rust developers looking to practice algorithms.
- Engineers preparing for coding interviews.
- Students learning data structures and algorithms.
🤝 Contributing
Contributions are welcome! Feel free to open issues, suggest improvements, or submit pull requests.
Note: The source code of the repository can be found here
A Rust binary search tree with parent pointers
Why a binary search tree?
Binary search trees are useful - very useful. They offer fast insertion and deletion times and, unlike linked lists, provide fast lookups. Entries are kept in order: by convention, for a given entry, smaller entries are always on its left and bigger ones are always on its right.
Binary search tree implementation in Rust
Binary search trees (henceforth called BSTs) can be implemented in Rust without parent pointers, but deletion of an entry becomes somewhat clunky without them.
Here we implement a BST in Rust with parent pointers. Both the left and right child point to their parent. We make use of shared references - the same parent is shared by both children, and we need to access it from multiple contexts. A reference counted pointer alone - Rc - is not very useful by itself, at least in this context, since Rc hands out a mutable reference to its inner content only if there is no other existing reference to it. That is obviously not the case here, because children will be pointing to the parent all the time. We need to be able to mutate the parent (for example, change its value or delete it) while there are outstanding child references to it. We need shared interior mutability. This is where RefCell comes in handy - it provides interior mutability when no existing borrow (mutable or immutable) is active, and multiple shared borrows when no mutable borrow exists.
One last thing though. If a parent (optionally) contains a child and the child points back at its parent, then while deleting the parent its reference count will still be 1 (or 2). Since the parent's reference count is not 0, it would not be dropped - it would linger in memory, leading to a leak. This is certainly undesirable. That is why we want our parent pointers to be weak ones, so that parents can be dropped even when there are outstanding weak references to them.
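As a small illustration of why the weak back-reference matters, the following sketch (not part of the repository) shows that a Weak reference does not keep its target alive:
use std::cell::RefCell;
use std::rc::{Rc, Weak};

fn weak_parent_demo() {
    //The "child" holds only a Weak back-reference to the "parent"
    let parent = Rc::new(RefCell::new(1));
    let child_view: Weak<RefCell<i32>> = Rc::downgrade(&parent);
    assert!(child_view.upgrade().is_some()); //parent is still alive
    drop(parent); //last strong reference gone - the value is dropped
    assert!(child_view.upgrade().is_none()); //the weak reference did not leak it
}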
With this understanding in place, next we define our Tree and Node.
Tree and Node definition
We define our Tree as follows:
#[derive(Debug, Default)]
pub struct Tree<T: Ord + Default + Clone + std::fmt::Debug>(Option<Rc<RefCell<Node<T>>>>);
Our Tree is a tuple struct which may or may not contain a Node. The Node itself is wrapped inside a RefCell for interior mutability. The RefCell, in turn, is wrapped inside an Rc for shared access.
Our tree's keys (aka entries) are a generic T. T must implement Ord because that is how we decide which side of the tree an entry lands in.
The generic type T also implements Default. We use this for flushing out key values while deleting a Node from the tree. This becomes necessary because Rust is grumpy about holes in memory.
Clone is not strictly necessary, but we need it later when we implement Iterator for the tree. Debug is, of course, for printing the tree.
Next, we define our Node.
We define our Node as follows:
struct Node<T: Ord + Default + Clone + std::fmt::Debug> {
key: T,
left: Option<Rc<RefCell<Tree<T>>>>,
right: Option<Rc<RefCell<Tree<T>>>>,
parent: Option<Weak<RefCell<Node<T>>>>,
}
The Node definition is easy to understand. We have a key of type T. Keys are what we insert into the tree.
We have left and right children, which are optional Trees contained within RefCell and Rc. We treat each child as a Tree in its own right.
Our parent is an optional Weak reference to the parent Node's RefCell.
Note: All nodes hold a weak reference to a parent node except for the root, which has no parent.
Next we will look at some basic APIs for Tree and Node.
Basic APIs
Following are some of the basic APIs in the Node construct to start with:
fn new(value: T) -> Self {
Self {
key: value,
left: None,
right: None,
parent: None,
}
}
The API above is internal and is used to new up a bare-bones Node struct with a value of type T.
fn wrapped_node(value: T) -> Option<Rc<RefCell<Self>>> {
Some(Rc::new(RefCell::new(Node::new(value))))
}
We need our Node to be wrapped in RefCell and Rc. Above is a helper that avoids code repetition.
Following are some additional helper APIs that are used internally:
//Get a reference to `Node` key
fn key(&self) -> &T {
&self.key
}
//Does this node have a left child tree?
fn has_left(&self) -> bool {
self.left.is_some()
}
//Does this node have a right child tree?
fn has_right(&self) -> bool {
self.right.is_some()
}
//Get a shared handle to the root of left child tree
fn left_node(&self) -> Option<Rc<RefCell<Node<T>>>> {
self.left
.as_ref()
.and_then(|tree| tree.borrow().0.as_ref().map(Rc::clone))
}
//Does the given key have the same value as the left child tree's root node?
//Used when deleting nodes from the tree
fn is_left_child(&self, key: &T) -> bool {
Self::left_node(self)
.as_ref()
.map(|node| node.borrow().key() == key)
.unwrap_or(false)
}
//Get a shared handle to the right tree's root node
fn right_node(&self) -> Option<Rc<RefCell<Node<T>>>> {
self.right
.as_ref()
.and_then(|tree| tree.borrow().0.as_ref().map(Rc::clone))
}
//Node's parent is a weak reference that we initialize when the node
//is inserted into the tree - we need to upgrade it to a strong reference
//to get to the underlying parent node
fn upgrade_parent(&self) -> Option<Rc<RefCell<Node<T>>>> {
self.parent.as_ref().and_then(|weak| weak.upgrade())
}
//Replace this node's key with the value that might be inside the input
//Used during delete. If this node is being deleted, then this node's key
//is flushed out with the minimum node's value from the right side of
//this node
fn replace_key(&mut self, key: Option<T>) -> Option<T> {
key.map(|k| std::mem::replace(&mut self.key, k))
}
//To avoid an already-borrowed error when two Rc<RefCell>s point to the same location
fn right_parent<'a>(
this: Option<&'a Rc<RefCell<Node<T>>>>,
that: Option<&'a Rc<RefCell<Node<T>>>>,
) -> Option<&'a Rc<RefCell<Node<T>>>> {
match (this, that) {
(None, None) => None,
(Some(_), None) => this,
(None, Some(_)) => that,
(Some(this_one), Some(that_one)) => match Rc::ptr_eq(this_one, that_one) {
true => this,
false => that,
},
}
}
//Clone this parent which is a weak reference
//Used during deletion of a node. When the minimum node is taken
//out from the right side of the node being deleted, the minimum node's
//right tree(if any) - has to be hoisted up to point to the minimum
//node's parent
fn parent(&self) -> Option<Weak<RefCell<Node<T>>>> {
self.parent.as_ref().map(Weak::clone)
}
Following are some helper APIs for the Tree struct:
//Initialize a new tree with the value
pub fn new(value: T) -> Self {
Tree(Some(Rc::new(RefCell::new(Node::new(value)))))
}
//Get a shared handle to the root of the tree
fn root(&self) -> Option<Rc<RefCell<Node<T>>>> {
self.0.as_ref().map(Rc::clone)
}
//Create new tree rooted at the input node
fn new_branch(node: Node<T>) -> Option<Rc<RefCell<Tree<T>>>> {
Some(Rc::new(RefCell::new(Tree(Some(Rc::new(RefCell::new(
node,
)))))))
}
//Find the min - given a node. Result could be given node itself
//if no more left branch is there
fn min(node: &Rc<RefCell<Node<T>>>) -> Option<Rc<RefCell<Node<T>>>> {
match node.borrow().left_node() {
Some(ref left_node) => Self::min(left_node),
None => Some(Rc::clone(node)),
}
}
Now that helper APIs are out of the way - next we will look at how to populate the tree.
Tree population
The insert function in the Tree struct is responsible for populating the tree. It's straightforward - it looks at the current tree's root; if it does not exist, it creates a new Node with the given input key and sets it as the root node of the tree.
If the current tree's root exists, the input key is compared to the root's key and a decision is made as to whether to go to the left or right of the tree. This is done recursively. Once the right spot is found, a new Tree branch is created with the input key, its parent is set, and the newly created tree is added as the left or right child of the parent.
Note: We downgrade the parent's reference because children always maintain a weak pointer to their parent. Also, we don't allow duplicate entries.
Following is the insert function in its entirety:
//Populate tree with a new key
//Duplicate keys are not added
//Recursively calls itself to find place to add the supplied key
pub fn insert(&mut self, value: T) {
match self.0 {
Some(ref mut curr_tree_root) => {
let mut node = curr_tree_root.borrow_mut();
if node.key > value {
match node.left {
Some(ref mut tree) => Self::insert(&mut tree.borrow_mut(), value),
None => {
let parent = Some(Rc::downgrade(&Rc::clone(curr_tree_root)));
let mut left = Node::new(value);
left.parent = parent;
node.left = Tree::new_branch(left);
}
}
} else if node.key < value {
match node.right {
Some(ref mut tree) => Self::insert(&mut tree.borrow_mut(), value),
None => {
let parent = Some(Rc::downgrade(&Rc::clone(curr_tree_root)));
let mut right = Node::new(value);
right.parent = parent;
node.right = Tree::new_branch(right);
}
}
}
}
None => self.0 = Node::wrapped_node(value),
}
}
Note: We are using Node::new(value), Tree::new_branch(right) and Node::wrapped_node(value) from the previous Basic APIs section here.
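For example, assuming the Tree APIs shown above, populating a tree might look like this (a usage sketch, not taken from the repository's tests):
let mut tree = Tree::new(50);
tree.insert(30); //smaller - goes to the left of 50
tree.insert(70); //bigger - goes to the right of 50
tree.insert(20); //recursively lands to the left of 30
tree.insert(70); //duplicate - silently ignored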
Next we present the Tree::find function, which is utilized to find a target Node that has to be deleted.
Tree find
The tree find function is used to find a Node (Option<Rc<RefCell<Node<T>>>>) corresponding to a given key reference. It recursively visits the tree's left or right branches till it finds the node, or returns None otherwise.
Following is the find function definition:
//Find the node containing the supplied key reference
fn find(&self, key: &T) -> Option<Rc<RefCell<Node<T>>>> {
match self.0 {
Some(ref node) if node.borrow().key() == key => Some(Rc::clone(node)),
Some(ref node) if node.borrow().key() > key => match node.borrow().left {
Some(ref left) => Self::find(&left.borrow(), key),
None => None,
},
Some(ref node) if node.borrow().key() < key => match node.borrow().right {
Some(ref right) => Self::find(&right.borrow(), key),
None => None,
},
Some(_) => None, //Make the compiler happy
None => None,
}
}
Now, with the find function in place, next we will look at the tree's delete function. But before that we will look at the Node::delete function, because the tree's delete is heavily dependent on Node::delete.
Node delete (Node has two children)
The Node::delete function is invoked when the target node being deleted has both right and left children.
We find the minimum node on the right of the target. Once we find the minimum, we find its parent and convert the parent's weak reference to a strong reference.
Next we find the right child of the minimum, if any - this right child now has to be hoisted up to point to the minimum's parent, since the minimum is being evicted to replace the deleted node's content with its own content.
Finally, we return the target node's content from this function.
Following is the whole Node::delete function with inline comments:
//Delete a node when it has two children
fn delete(mut target: Option<Rc<RefCell<Node<T>>>>) -> Option<T> {
let min = target
.as_ref()
.and_then(|target| target.borrow().right_node().as_ref().and_then(Tree::min));
let min_parent = min.as_ref().and_then(|min| min.borrow().upgrade_parent());
let right_parent = Node::right_parent(target.as_ref(), min_parent.as_ref());
let left_or_right = right_parent.as_ref().and_then(|parent| {
min.as_ref()
.map(|min| parent.borrow().is_left_child(min.borrow().key()))
});
let min = match right_parent {
None => None,
Some(parent) => match left_or_right {
None => None,
Some(true) => parent.borrow_mut().left.take(),
Some(false) => parent.borrow_mut().right.take(),
},
};
let mut min = min.map(|tree| tree.take()).and_then(|tree| tree.root());
let min_right = min
.as_ref()
.and_then(|min| min.borrow_mut().right.take())
.and_then(|tree| tree.borrow().root());
if let Some(ref min_right) = min_right {
min_right.borrow_mut().parent = right_parent
.as_ref()
.map(|parent| Rc::downgrade(&Rc::clone(parent)));
}
if let Some(parent) = right_parent {
match left_or_right {
None => {}
Some(true) => {
parent.borrow_mut().left = Tree::with_node(min_right.as_ref().cloned())
}
Some(false) => {
parent.borrow_mut().right = Tree::with_node(min_right.as_ref().cloned())
}
}
}
match target {
Some(ref mut target) => target
.borrow_mut()
.replace_key(min.take().map(|min| min.take().key)),
None => None,
}
}
The above function takes care of deleting a target node when it has both children. We have separated out the code to handle the case when the deleted node has only one or no child but has got a parent. Let's take a look at that in the next section.
Node delete (One or no child)
The Node delete_child function gets invoked when the target node being deleted has only a left or right child, or no child at all, but does have a parent. We evict the target node and take out its key, parent and its left or right child in a tuple.
Next we set the deleted node's left or right child's parent to the deleted node's parent, and finally return the deleted node's key.
Following is the snippet for the delete_child function:
//Delete a node with single child or no child but node being deleted has parent
//left: bool -> Should we delete left or right child?
fn delete_child(&mut self, left: bool) -> Option<T> {
//First take out the left or right child based on the flag passed in
let deleted = match left {
true => self.left.take(),
false => self.right.take(),
};
//result is tuple of the form as shown below
//result = (deleted.key, deleted.parent, left or right child of deleted)
let result = match deleted
.and_then(|tree| tree.take().0)
.map(|node| node.take())
.map(|node| (node.key, node.parent, node.left.or(node.right)))
{
//Set deleted node's left or right child's parent to the parent of deleted
Some((key, parent, mut tree)) => {
if let Some(ref mut inner) = tree {
if let Some(ref mut tree_node) = inner.borrow_mut().0 {
tree_node.borrow_mut().parent = parent;
}
}
//(deleted.key, left or right child of deleted)
(Some(key), tree)
}
None => (None, None),
};
//Set self left right to deleted left or right
match left {
true => self.left = result.1,
false => self.right = result.1,
}
//deleted key
result.0
}
Now that we have discussed Node's delete APIs, we can go back to Tree's delete API. Let's do that next.
Tree delete
While deleting a node from the tree, we first find the target node that is getting deleted. If it cannot be found, we have no further business - we return None.
If we find a target node to delete, we need to consider a few scenarios:
- Is it the root node that we are deleting? Then the target will have no parent.
  - It has no child - take the node out, leaving the tree empty.
  - It has a left child - hoist the left child up and take out its parent reference.
  - It has a right child - hoist the right child up and take out its parent reference.
  - It has both children - delegate to Node::delete.
- It is not the root node. Then the target has a parent.
  - It has no child, or only a left or right child - delegate to node.delete_child.
  - It has both children - delegate to Node::delete.
Below is the tree delete function in its entirety:
//Delete a node with key that equals the supplied key
//Returns the deleted key as Some(key) or None otherwise
pub fn delete(&mut self, key: &T) -> Option<T> {
let target = Self::find(self, key);
match target {
None => None,
Some(ref node) => {
let has_left = node.borrow().has_left();
let has_right = node.borrow().has_right();
let has_both = has_left && has_right;
let no_child = !has_left && !has_right;
let has_parent = node.borrow().parent.is_some();
match has_parent {
false => {
//Delete root - root has no parent ref - hence differential treatment
match (no_child, has_left, has_right, has_both) {
(true, false, false, false) => {
self.0.take().map(|root| root.take().key)
}
//Has left child - remove left child's parent ref and set it as
//tree root
(false, true, false, false) => {
let root = self.root().take();
self.0 = root.as_ref().and_then(|root| {
root.borrow().left_node().map(|node| {
node.borrow_mut().parent.take();
node
})
});
//Return root's key
root.map(|root| root.take().key)
}
//Has right child - remove right child's parent ref and set it as
//tree root
(false, false, true, false) => {
let root = self.root().take();
self.0 = root.as_ref().and_then(|root| {
root.borrow().right_node().map(|node| {
node.borrow_mut().parent.take();
node
})
});
//Return root's key
root.map(|root| root.take().key)
}
//Has got both children - delegate to Node::delete
(false, true, true, true) => Node::delete(target),
(_, _, _, _) => None,
}
}
//target node being deleted has got a parent
true => match (no_child, has_left, has_right, has_both) {
(true, false, false, false)
| (false, true, false, false)
| (false, false, true, false) => {
let parent = node.borrow().upgrade_parent();
//Is it left or right? If no child then left is false
let left = parent
.as_ref()
.map_or(false, |parent| parent.borrow().is_left_child(key));
//Delegate to node.delete_child with the boolean flag left
parent.and_then(|parent| parent.borrow_mut().delete_child(left))
}
//Has parent, has two children
//Delegate to Node::delete
(false, true, true, true) => Node::delete(target),
(_, _, _, _) => None,
},
}
}
}
}
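A small usage sketch of delete, assuming the tree built with the insert API above (illustrative only):
let mut tree = Tree::new(50);
tree.insert(30);
tree.insert(70);
assert_eq!(tree.delete(&30), Some(30)); //leaf with a parent - handled via delete_child
assert_eq!(tree.delete(&100), None);    //key not present - nothing to delete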
Next we look at some other APIs that the Tree struct exposes.
Additional APIs
Next we discuss additional public APIs built on top of our core Tree struct.
Tree minimum
The minimum public API returns the minimum entry in the tree if the tree is not empty. It calls another internal API called min. The min function accepts a reference to Rc<RefCell<Node<T>>> and returns Option<Rc<RefCell<Node<T>>>>. It does so by going to the left of the passed-in node reference as far as possible; when there is no further left branch it returns the current node.
The tree minimum just calls the internal min function, passing its root node's reference, and returns either the returned Node's key (if found) or None otherwise.
Below are the public minimum and internal min APIs.
//Returns the minimum key value (if any) or `None` otherwise
//Delegates to internal min function
pub fn minimum(&self) -> Option<T> {
let node = self.root();
match node {
None => None,
Some(ref inner) => Self::min(inner).map(|n| n.borrow().key.clone()),
}
}
Note: This is one place where we are using clone on T, i.e. key.clone() above.
//Find the min - given a node. Result could be given node itself
//if no more left branch is there
fn min(node: &Rc<RefCell<Node<T>>>) -> Option<Rc<RefCell<Node<T>>>> {
match node.borrow().left_node() {
Some(ref left_node) => Self::min(left_node),
None => Some(Rc::clone(node)),
}
}
Tree exists
This API returns true or false based on whether a Node corresponding to an input key reference exists in the tree. It does so by recursively checking the left and/or right tree for the presence of the key.
Following is the function definition:
//Does a key exists in the tree?
pub fn exists(&self, key: &T) -> bool {
match self.0 {
Some(ref node) => {
node.borrow().key() == key || {
let in_left = match node.borrow().left {
Some(ref tree) => Self::exists(&tree.borrow(), key),
None => false,
};
let in_right = match node.borrow().right {
Some(ref tree) => Self::exists(&tree.borrow(), key),
None => false,
};
in_left || in_right
}
}
None => false,
}
}
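A quick usage sketch (illustrative, assuming the insert API above):
let mut tree = Tree::new(50);
tree.insert(30);
assert!(tree.exists(&30));
assert!(!tree.exists(&99));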
Tree identical
The tree is_identical function checks if the current tree is identical to another tree that is passed in as input.
If both trees' root values match, it proceeds to check whether or not the left and right subtrees are also identical. This is done recursively.
Following is the definition:
//Is this tree is identical to other tree?
pub fn is_identical(&self, other: &Self) -> bool {
match self.0 {
Some(ref this) => match other {
Tree(Some(ref that)) => {
if this.borrow().key == that.borrow().key {
let this_left = &this.borrow().left;
let that_left = &that.borrow().left;
let this_right = &this.borrow().right;
let that_right = &that.borrow().right;
let left_matched = match this_left {
Some(ref this_tree) => match that_left {
Some(ref that_tree) => {
Self::is_identical(&this_tree.borrow(), &that_tree.borrow())
}
None => false,
},
None => that_left.is_none(),
};
let right_matched = match this_right {
Some(ref this_tree) => match that_right {
Some(ref that_tree) => {
Self::is_identical(&this_tree.borrow(), &that_tree.borrow())
}
None => false,
},
None => that_right.is_none(),
};
left_matched && right_matched
} else {
false
}
}
Tree(None) => false,
},
None => match other {
Tree(Some(_)) => false,
Tree(None) => true,
},
}
}
Tree contains
The tree contains API tells whether the current tree is exactly similar to the input tree that is passed in, or whether a tree similar to the input tree exists somewhere in the left or right half of the current tree. It makes use of the tree's is_identical function and recursion to compute the result.
Below is the function definition:
//Does this contains the other tree?
pub fn contains(&self, other: &Self) -> bool {
match self {
Tree(None) => match other {
Tree(_) => false,
},
Tree(Some(ref this)) => match other {
Tree(None) => true,
that @ Tree(_) => {
if Self::is_identical(self, that) {
return true;
}
let left_contains = match this.borrow().left {
Some(ref tree) => Self::contains(&tree.borrow(), that),
None => false,
};
let right_contains = match this.borrow().right {
Some(ref tree) => Self::contains(&tree.borrow(), that),
None => false,
};
left_contains || right_contains
}
},
}
}
Tree iterator
Calling tree.iter() returns an iterator whose next() returns Option<T>, yielding the keys level-wise. Also, tree.iter() can be called repeatedly since it does not consume the tree.
Following are the relevant implementation details:
The Iter struct definition:
#[derive(Debug)]
pub struct Iter<T: Ord + Default + Clone + std::fmt::Debug> {
next: Option<VecDeque<Rc<RefCell<Node<T>>>>>,
}
The tree iter function:
//Get an iterator for the tree's keys
//Remember - calling iter on the tree would not consume the tree
//iterator.next would return Option<T>
//T is cloned
//Keys are returned level-wise
pub fn iter(&self) -> Iter<T> {
Iter {
next: self.root().map(|node| {
let mut next = VecDeque::new();
next.push_front(node);
next
}),
}
}
The Iterator trait implementation:
impl<T: Ord + Default + Clone + std::fmt::Debug> Iterator for Iter<T> {
type Item = T;
//Level wise iterator
fn next(&mut self) -> Option<Self::Item> {
match self.next {
None => None,
Some(ref mut queue) => {
let popped = queue.pop_back();
match popped {
None => None,
Some(ref node) => {
let node = node.borrow();
if let Some(ref left) = node.left_node() {
queue.push_front(Rc::clone(left));
}
if let Some(ref right) = node.right_node() {
queue.push_front(Rc::clone(right));
}
Some(node.key.clone())
}
}
}
}
}
}
Note: We are cloning T when calling iterator.next().
Tree into iterator
Unlike tree.iter(), tree.into_iter() returns an iterator that consumes the tree's elements one by one as next is called on it. Also, every time next is called, the root of the tree is popped.
Following are the relevant definitions:
The IntoIter struct definition:
#[derive(Debug)]
pub struct IntoIter<'a, T: Ord + Default + Clone + std::fmt::Debug> {
tree: Option<&'a mut Tree<T>>,
}
The tree into_iter function:
//Returns an iterator that consumes the tree elements one by one
//when calling next on it
//Root of the tree is evicted when next is called on the iterator
pub fn into_iter(&mut self) -> IntoIter<'_, T> {
IntoIter {
tree: match self {
Tree(None) => None,
Tree(_) => Some(self),
},
}
}
Iterator trait implementation for IntoIter:
impl<T: Ord + Default + Clone + std::fmt::Debug> Iterator for IntoIter<'_, T> {
type Item = T;
fn next(&mut self) -> Option<Self::Item> {
match self.tree {
None => None,
Some(ref mut tree) => match tree.0 {
None => None,
Some(ref mut node) => {
let key = node.borrow().key.clone();
tree.delete(&key);
Some(key)
}
},
}
}
}
Tree height
The tree height function:
//Find the height of the tree
pub fn height(&self) -> usize {
let root = self.root();
match root {
None => 0,
Some(ref node)
if node.borrow().left_node().is_none() & node.borrow().right_node().is_none() =>
{
1
}
Some(ref node) => {
let left_tree_height = node
.borrow()
.left
.as_ref()
.map(|tree| Self::height(&tree.borrow()))
.unwrap_or(0);
let right_tree_height = node
.borrow()
.right
.as_ref()
.map(|tree| Self::height(&tree.borrow()))
.unwrap_or(0);
1 + std::cmp::max(left_tree_height, right_tree_height)
}
}
}
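For instance, under the definition above a lone root has height 1 and each extra level adds 1 (illustrative sketch):
let mut tree = Tree::new(2);
assert_eq!(tree.height(), 1); //root only
tree.insert(1);
tree.insert(3);
assert_eq!(tree.height(), 2); //root plus one level of children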
Lowest common ancestor (LCA)
Here we present how to find the lowest common ancestor (LCA) for two given keys. The LCA is the lowest node the given keys share as an ancestor - or it could be one of the keys itself, if that key happens to be an ancestor of the other given key.
The LCA lies on the left or right side of a node if the node's key is bigger or smaller than both the given keys. When that condition no longer holds, the given keys are on either side of the node, or one of the keys is the node itself and the other is beneath it - so the node is the LCA.
Following is the LCA implementation:
//Return the lowest common ancestor for two given keys
pub fn lowest_common_ancestor(&self, this: &T, that: &T) -> Option<T> {
if let Some(ref root) = self.root() {
let root = root.borrow();
if root.key() < this && root.key() < that {
if let Some(ref right) = root.right {
return Self::lowest_common_ancestor(&right.borrow(), this, that);
} else {
return None;
}
} else if root.key() > this && root.key() > that {
if let Some(ref left) = root.left {
return Self::lowest_common_ancestor(&left.borrow(), this, that);
} else {
return None;
}
} else {
return Some(root.key().clone());
}
} else {
None
}
}
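A usage sketch, assuming the insert API above:
//        50
//       /  \
//     30    70
//    /  \
//  20    40
let mut tree = Tree::new(50);
for key in [30, 70, 20, 40] {
    tree.insert(key);
}
assert_eq!(tree.lowest_common_ancestor(&20, &40), Some(30)); //both keys sit under 30
assert_eq!(tree.lowest_common_ancestor(&20, &70), Some(50)); //keys on either side of the root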
Nth smallest
To find the nth smallest key in the tree we go all the way to the left. Once we are there, we ask the question - is the current position the nth position? If so, we return the node's key at that position. Otherwise, we move up and repeat the process.
Following is the routine for finding the nth smallest:
//Find the nth smallest in the binary search tree
pub fn nth_smallest(&self, nth: usize) -> Option<T> {
let mut current_pos = 0;
let mut result = None;
Self::nth_smallest_helper(self.root(), &mut current_pos, nth, &mut result);
result
}
fn nth_smallest_helper(
node: Option<Rc<RefCell<Node<T>>>>,
current_pos: &mut usize,
nth: usize,
result: &mut Option<T>,
) {
if let Some(inner) = node {
Self::nth_smallest_helper(inner.borrow().left_node(), current_pos, nth, result);
*current_pos += 1;
if *current_pos == nth {
*result = Some(inner.borrow().key().clone());
}
Self::nth_smallest_helper(inner.borrow().right_node(), current_pos, nth, result);
}
}
Kth smallest
Here we present the iterative solution to find the kth smallest key. To do that we maintain a stack. We keep going left as long as possible and keep pushing the nodes we encounter onto the stack (so we can step back). When the lowest node is found, we pop from the stack and increment our internal counter. We check if the internal counter equals the kth value we are looking for - if so, we return the node's key. Otherwise we keep going up, either by going to the right or by popping the previous node entry from the stack.
The following snippet of code shows how we get the kth smallest iteratively:
//kth smallest element - iterative
pub fn kth_smallest(&self, k: usize) -> Option<T> {
let mut curr = self.root();
let mut stack = Vec::new();
let mut n = 0;
while curr.is_some() || !stack.is_empty() {
while curr.is_some() {
stack.push(curr.as_ref().cloned());
curr = curr.and_then(|curr| curr.borrow().left_node());
}
curr = stack.pop().flatten();
n += 1;
if n == k {
return curr.map(|curr| curr.borrow().key().clone());
}
curr = curr.and_then(|curr| curr.borrow().right_node());
}
None
}
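A usage sketch (illustrative, assuming the insert API above):
let mut tree = Tree::new(50);
for key in [30, 70, 20, 40, 60, 80] {
    tree.insert(key);
}
assert_eq!(tree.kth_smallest(1), Some(20));  //smallest key
assert_eq!(tree.kth_smallest(4), Some(50));  //4th in sorted order: 20, 30, 40, 50...
assert_eq!(tree.kth_smallest(10), None);     //only 7 keys in the tree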
From sorted array
Here we present how to create a height-balanced binary search tree from a sorted array. A height-balanced BST is one where the heights of the subtrees on the left and right side of the root do not differ by more than 1. The algorithm we use is recursive.
We pick the mid point of the sorted array and make it the root of the tree. Then we take the half from just after the mid point till the end of the array and make a recursive call to build the right subtree of the root. We do the same thing for the left subtree - the only difference is that this time we take the half from the start of the array till the element before the mid point.
Following is the implementation:
//Create a height balanced tree from a sorted array
//The array passed in gets mutated - its elements are replaced with default
//values for type `T`
pub fn from_sorted_array(array: &mut [T]) -> Option<Self> {
fn wrap_tree<T: Ord + Default + Clone + std::fmt::Debug>(
tree: Option<Tree<T>>,
) -> Option<Rc<RefCell<Tree<T>>>> {
match tree {
None => None,
tree @ Some(_) => tree.map(|tree| Rc::new(RefCell::new(tree))),
}
}
fn from_array<T: Ord + Default + Clone + std::fmt::Debug>(
array: &mut [T],
left: i32,
right: i32,
) -> Option<Tree<T>> {
if left > right {
return None;
} else {
let mid = left + (right - left) / 2;
let tree = Tree::new(std::mem::take(&mut array[mid as usize]));
let right = from_array(array, mid + 1, right);
let left = from_array(array, left, mid - 1);
let mut right = wrap_tree(right);
let mut left = wrap_tree(left);
if let Some(ref mut root) = tree.root() {
root.borrow_mut().left = left.as_ref().cloned();
root.borrow_mut().right = right.as_ref().cloned();
}
if let Some(ref mut left) = left {
if let Some(ref mut tree_node) = left.borrow_mut().root() {
tree_node.borrow_mut().parent = tree
.root()
.as_ref()
.map(|root| Rc::downgrade(&Rc::clone(root)));
}
}
if let Some(ref mut right) = right {
if let Some(ref mut tree_node) = right.borrow_mut().root() {
tree_node.borrow_mut().parent = tree
.root()
.as_ref()
.map(|root| Rc::downgrade(&Rc::clone(root)));
}
}
return Some(tree);
}
}
from_array(array, 0, (array.len() - 1) as i32)
}
Note: We are making use of nested functions above.
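A usage sketch of from_sorted_array together with the height and minimum APIs shown earlier (illustrative only):
let mut sorted = [10, 20, 30, 40, 50, 60, 70];
let tree = Tree::from_sorted_array(&mut sorted).expect("input is not empty");
//7 keys fit into 3 levels when the tree is height balanced
assert_eq!(tree.height(), 3);
assert_eq!(tree.minimum(), Some(10));
//Note: `sorted` now holds default values - its elements were moved into the tree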
Tree validation
Below we present the implementation of an iterative algorithm to validate the binary search tree. The idea here is to go to the left-most node and then traverse the tree in order. When traversing the tree in order, we expect every next element to be bigger than the previous element and smaller than the root element. Once we cross the root, we want every element to be greater than the root and its previous element.
The following function achieves this:
//Validate if the tree is valid
//We use an iterative process here
pub fn validate(&self) -> bool {
match self.0 {
None => true,
Some(ref root) => {
let mut current = Some(Rc::clone(root));
let mut stack = Vec::new();
let mut previous = None;
let mut crossed_root = false;
let mut first = Some(false);
while current.is_some() || !stack.is_empty() {
while current.is_some() {
stack.push(current.as_ref().cloned());
current = current.and_then(|inner| inner.borrow().left_node());
}
current = stack.pop().and_then(|popped| popped);
//Has moved to the right side of the tree
if !crossed_root {
crossed_root = current.as_ref().map(|curr| Rc::ptr_eq(curr, root)).unwrap();
if let Some(first) = first.as_mut() {
*first = crossed_root;
}
}
match previous {
None => previous = current.as_ref().map(Rc::clone),
Some(ref prev) => match current.as_ref().map(|curr| {
(
crossed_root,
&mut first,
curr.borrow().key() > prev.borrow().key(),
curr.borrow().key() < root.borrow().key(),
)
}) {
Some((false, Some(false), true, true)) => {
previous = current.as_ref().map(Rc::clone)
}
Some((true, Some(true), true, false)) => {
if let Some(first) = first.as_mut() {
*first = false;
}
previous = current.as_ref().map(Rc::clone)
}
Some((true, Some(false), true, false)) => {
previous = current.as_ref().map(Rc::clone)
}
_ => return false,
},
}
current = current.and_then(|inner| inner.borrow().right_node());
}
true
}
}
}
Tree update
We can update the value of any node in the binary search tree. We search for the key to be updated by using the Iterator find method and mutate its value.
Tree update implementation
//Update a node key in the tree
pub fn update(&mut self, key: &T, new_val: T) -> bool {
let mut node = self.node_iter().find(|node| node.borrow().key() == key);
match node {
None => false,
Some(ref mut inner) => {
inner.borrow_mut().replace_key(Some(new_val));
true
}
}
}
Doubly linked list
Here we implement a doubly linked list with quite a rich set of APIs. We provide the following APIs:
- push_front
- pop_front
- push_back
- pop_back
- delete
- iter - returns a double-ended iterator. It stops returning values when the front and back pointers meet. Immutable - the list remains intact.
- into_iter - returns a double-ended iterator that consumes the elements. The list becomes empty once the iterator is fully traversed.
Let's look at the Node and List definitions next.
Node and List
Below we present the definition of Node. The member definitions are quite intuitive. We maintain a weak prev reference to the preceding Node.
#[derive(Debug, Default)]
struct Node<T: std::fmt::Debug + Default + Clone + PartialEq> {
key: T,
next: Option<Rc<RefCell<Node<T>>>>,
prev: Option<Weak<RefCell<Node<T>>>>,
}
//New up a new node with a value
impl<T: std::fmt::Debug + Default + Clone + PartialEq> Node<T> {
pub fn new(key: T) -> Self {
Self {
key,
next: None,
prev: None,
}
}
}
//We implement `From` trait to wrap up a node in RefCell and Rc
impl<T: std::fmt::Debug + Default + Clone + PartialEq> From<Node<T>>
for Option<Rc<RefCell<Node<T>>>>
{
fn from(node: Node<T>) -> Self {
Some(Rc::new(RefCell::new(node)))
}
}
Following is the definition of List:
#[derive(Debug)]
pub struct List<T: std::fmt::Debug + Default + Clone + PartialEq> {
head: Option<Rc<RefCell<Node<T>>>>,
tail: Option<Rc<RefCell<Node<T>>>>,
}
Push front and push back
Push to the front of the list:
//Push to the front of the list
pub fn push_front(&mut self, key: T) {
let node = Node::new(key).into();
match self.head {
None => {
self.head = node;
self.tail = self.head.as_ref().map(Rc::clone);
}
Some(ref mut head) => {
head.borrow_mut().prev = node.as_ref().map(|node| Rc::downgrade(&Rc::clone(node)));
self.head = node.map(|node| {
node.borrow_mut().next = Some(Rc::clone(head));
node
});
}
}
}
Above, we create a new node and set it as head and tail if the list is empty. Otherwise, we make the new node the head of the list, make the existing head point to it via a weak reference, and make the new node's next point to the existing head.
Push to the back of the list:
Again, we set the newly created node as the head and tail of the list if the list is empty. Otherwise, we make the new node point to the existing tail via a weak reference, make the existing tail point to the new node, and set the new node as the tail of the list.
Code for pushing to the back of the list:
//Push to the back of the list
pub fn push_back(&mut self, key: T) {
let mut node = Node::new(key).into();
match self.tail {
None => {
self.head = node;
self.tail = self.head.as_ref().map(Rc::clone);
}
Some(ref mut tail) => {
self.tail = node.take().map(|node| {
node.borrow_mut().prev = Some(Rc::downgrade(&Rc::clone(tail)));
tail.borrow_mut().next = Some(Rc::clone(&node));
node
});
}
}
}
Next, we will look at pop_front and pop_back.
Pop front and pop back
While popping from the front of the list, we take out the existing head and make its next the new head of the list. While doing so, we also take out the previous pointer of the new head, which was pointing at the outgoing head. If the new head turns out to be None, we also set the list's tail to None. We return the outgoing head's content.
Below is the code for popping from the front:
//Pop out from the front of the list
pub fn pop_front(&mut self) -> Option<T> {
match self.head.take() {
None => None,
Some(ref mut head) => {
self.head = head.borrow_mut().next.take().map(|next| {
next.borrow_mut().prev.take();
next
});
if self.head.is_none() {
self.tail.take();
}
//Use of default
Some(head.take().key)
}
}
}
While popping from the back, we take out the list's tail, convert the tail's previous pointer to a strong reference and make the previous node the new tail of the list. If the new tail turns out to be None, we set the head of the list to None as well. Finally, we return the outgoing tail's content from the call.
Below is the code for popping from the back:
//Pop out from the back of the list
pub fn pop_back(&mut self) -> Option<T> {
match self.tail.take() {
None => None,
Some(ref mut tail) => {
self.tail = tail.borrow().prev.as_ref().and_then(|prev| {
prev.upgrade().map(|prev| {
prev.borrow_mut().next = None;
prev
})
});
if self.tail.is_none() {
self.head.take();
}
//Use of default
Some(tail.take().key)
}
}
}
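Putting push and pop together - a usage sketch assuming an empty-list constructor such as List::new() (the constructor itself is not shown above):
let mut list = List::new();
list.push_front(2);
list.push_front(1);
list.push_back(3); //list is now 1 <-> 2 <-> 3
assert_eq!(list.pop_front(), Some(1));
assert_eq!(list.pop_back(), Some(3));
assert_eq!(list.pop_back(), Some(2));
assert_eq!(list.pop_back(), None); //list is empty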
List delete
While deleting from a list, we need to consider two cases - is the node being deleted the first or last node, or is it an inner node? The way we handle them is different. To make things simple, we have implemented a few internal helper functions. We turn to those helper functions in the next section.
Delete helpers
To figure out whether the target node being deleted is the first or last node in the list, we compare it to the head or the tail of the list. We do this comparison using Rc::ptr_eq.
Below is the code for these checks:
//Is the passed in node reference the first in the list?
fn is_first(&self, node: &Rc<RefCell<Node<T>>>) -> bool {
match self.head {
None => false,
Some(ref head) => Rc::ptr_eq(head, node),
}
}
//Is the passed in node reference the last in the list?
fn is_last(&self, node: &Rc<RefCell<Node<T>>>) -> bool {
match self.tail {
None => false,
Some(ref tail) => Rc::ptr_eq(tail, node),
}
}
Once we have figured out that the target node being deleted is either the first or the last, it's just a matter of calling pop_front or pop_back accordingly.
Deleting an inner target node:
To delete an inner target node we first get handles to its previous and next nodes. Once we find them, we need to rewire the linkages appropriately.
We clone the outgoing target's previous pointer, which is a weak reference, and set it as the previous pointer of the outgoing node's next node.
We upgrade the previous (prev) weak pointer to get a strong reference to the previous node and set next's cloned reference as its next. Then we return the content of the outgoing target node.
Following is the snippet for deleting an inner node:
//Delete a node that has previous and next
fn delete_inner(&mut self, target: &Rc<RefCell<Node<T>>>) -> Option<T> {
let prev = target.borrow_mut().prev.take();
let next = target.borrow_mut().next.take();
if let Some(ref next) = next {
next.borrow_mut().prev = prev.as_ref().cloned();
}
if let Some(ref prev) = prev {
if let Some(prev) = prev.upgrade() {
prev.borrow_mut().next = next.as_ref().map(Rc::clone);
}
}
Some(target.take().key)
}
Now that we know how the helpers work, we present the actual delete function - which is quite simple.
Delete public API:
//Delete a key from the list. We try to find the node by using the iterator's `find` method.
//If found - we check if it is the first or last key in the list. If the found node
//happens to be first or last - we call pop_front or pop_back as required.
//If the key is in between the head and the tail - the deletion is handled accordingly.
pub fn delete(&mut self, key: &T) -> Option<T> {
match self.node_iter().find(|node| node.borrow().key == *key) {
None => None,
Some(ref target) => match (self.is_first(target), self.is_last(target)) {
(true, true) | (true, false) => self.pop_front(),
(false, true) => self.pop_back(),
(_, _) => self.delete_inner(target),
},
}
}
Note: To find the target node that is being deleted, we are making use of Iterator's find API. We have not talked about List's iterator implementation yet - which we will turn to next.
List iterators
We implement two variants of iterators for our list - one that evicts elements from the list while the iterator is being traversed, and one that does not mutate the underlying list. Both variants can be traversed from the front as well as from the back, because we implement a double-ended iterator for them.
First we will look at the mutable variant - because it is pretty simple.
Mutable iterator
We iterate through the iterator's elements by calling next on it - this means we need to maintain the iterator's state. Following is a struct that mutably borrows the underlying list and stores it. The lifetime parameter 'a is there because we are holding a reference to the list - we don't own the list. An IntoIterator instance can exist only as long as the underlying list struct instance is alive. The iterator cannot outlive the list struct instance that it is borrowing.
pub struct IntoIterator<'a, T: std::fmt::Debug + Default + Clone + PartialEq> {
list: &'a mut List<T>,
}
Getting an IntoIterator instance:
We have added the following function to the List struct to return an IntoIterator instance. While returning the IntoIterator struct instance, the list passes it its own mutable self reference. And because we have a mutable reference to the list, we can modify it as long as we return it back in some shape - either intact or altered.
pub fn into_iter(&mut self) -> IntoIterator<'_, T> {
//Taking a mutable reference to the list
IntoIterator { list: self }
}
Note: We are breaking Rust's convention here. Idiomatic Rust consumes the collection as a whole if elements are dropped while iterating. Here we are not destroying the list, though post-iteration it becomes empty. We could have consumed the list instead.
Now that we have a struct that we have named IntoIterator, and it has a mutable list inside it - can we iterate over the elements inside the list? Not yet. We have not added the iterator behaviour to our struct yet. We need to implement the Iterator trait for our IntoIterator struct for that, which is what we will do next.
Note: The Iterator trait has quite a bunch of methods defined in it, but we need to implement only its next method, because the rest are implemented based on next.
Implement Iterator for IntoIterator:
//Iterator that consumes the list elements from the front
impl<'a, T: std::fmt::Debug + Default + Clone + PartialEq> Iterator for IntoIterator<'a, T> {
type Item = T;
fn next(&mut self) -> Option<Self::Item> {
self.list.pop_front()
}
}
Our implementation could not be any simpler than this - we just pop the elements from the front!
Mutable back iterator
Our IntoIterator implements Iterator - hence we can iterate forward, traversing the elements. What if we wanted to traverse the elements backwards? For that we have to implement DoubleEndedIterator. To implement a double-ended iterator it is mandatory that we implement Iterator first for our struct - which we have already done.
To implement DoubleEndedIterator we need to provide an implementation of the next_back method, and our implementation is a no-brainer - we just call pop_back on our list!
Following is our implementation of DoubleEndedIterator:
Following is our implementation of DoubleEndedIterator:
impl<'a, T: std::fmt::Debug + Default + Clone + PartialEq> DoubleEndedIterator
for IntoIterator<'a, T>
{
fn next_back(&mut self) -> Option<Self::Item> {
self.list.pop_back()
}
}
Immutable bi-directional iterator
We want to be able to iterate over the elements of our list as many times as we want - but our list should not change. We take an iterator from our list, traverse the elements, exhaust the iterator, take another one, and so on. This is where an immutable iterator comes in. An immutable iterator generally provides access to Option<&T>. We cannot alter the underlying List with an immutable shared Option<&T>. We can read it, or maybe clone it if Clone is supported for T.
In our case, we are breaking Rust convention again - instead of returning Option<&T>, we are going to return Option<T>. But that does not change anything about the immutability of our underlying List - it still remains intact. We clone our &T and return Option<T>.
The reason for returning Option<T> and not Option<&T> is that our Ts are buried deep inside RefCells, which themselves are inside Rcs. It seems pretty hard to hand a &T out of Rc<RefCell<Node<T>>> - I have not figured it out yet, and I'm not sure it is possible without resorting to unsafe Rust.
Hence we stick to our good friend - Option<T>.
In the previous section we implemented DoubleEndedIterator when we had a mutable reference to our underlying List. It was easy because we made use of pop_front and pop_back, and we were able to get elements from either end in the right order.
In the immutable case, we don't want to maintain a mutable reference to our underlying list. We also want our immutable iterator to be able to traverse forward as well as backward, and while doing so, the two directions should not trample on each other. In other words, while calling next, we don't want our iterator to return elements which we have already seen by calling next_back, and vice versa.
Again, when we call iter on our list to get a bi-directional iterator, the struct that gets returned should be simple and should not need a lot of code to achieve the functionality that we want. The logic that decides whether or not to return an element - because it may already have been returned by next_back or next - should reside in some other struct. Ok, enough talk. Let's get down to code.
Immutable iterator struct definition that the caller gets:
pub struct Iter<T: std::fmt::Debug + Default + Clone + PartialEq> {
nodes: NodeIterator<T>,
}
The struct above holds an instance of NodeIterator, which has the head and tail of the list as members.
Iterator trait implementation for Iter:
//Iterator that returns Option<T>
//Values are cloned
//Underlying list remains intact
impl<T: std::fmt::Debug + Default + Clone + PartialEq> Iterator for Iter<T> {
type Item = T;
fn next(&mut self) -> Option<Self::Item> {
self.nodes
.next()
.as_ref()
.map(|next| next.borrow().key.clone())
}
}
DoubleEndedIterator implementation:
//Iterate back
//Calling next_back should not return elements already seen by calling next, and vice versa
impl<T: std::fmt::Debug + Default + Clone + PartialEq> DoubleEndedIterator for Iter<T> {
fn next_back(&mut self) -> Option<Self::Item> {
self.nodes
.next_back()
.as_ref()
.map(|prev| prev.borrow().key.clone())
}
}
Our Iterator and DoubleEndedIterator implementations are simple enough, and much of the logic is hidden inside NodeIterator. But it turns out our NodeIterator is also not very complex at all.
The NodeIterator struct definition:
//This struct holds head and tail of the list
//ptr_crossed flag indicates whether front and back iterators crossed each other
struct NodeIterator<T: std::fmt::Debug + Default + Clone + PartialEq> {
head: Option<Rc<RefCell<Node<T>>>>,
tail: Option<Rc<RefCell<Node<T>>>>,
ptr_crossed: bool,
}
Iterator trait implementation for NodeIterator:
The NodeIterator struct has references to the head and tail of the list. These references move forward and backward as we call next and next_back. When calling next, we first see if the ptr_crossed flag is set - if set, we return None. If not set, we proceed forward and check if we are returning the same element that the tail is pointing at - if so, the ptr_crossed flag is set so that calling next_back does not return any more elements. That's all there is to it. The DoubleEndedIterator implementation does the same thing, except that it looks at the head pointer.
impl<T: std::fmt::Debug + Default + Clone + PartialEq> Iterator for NodeIterator<T> {
type Item = Rc<RefCell<Node<T>>>;
fn next(&mut self) -> Option<Self::Item> {
match self.head {
Some(_) => {
match self
.head
.as_ref()
.map(|next| (Rc::clone(next), next.borrow().next.as_ref().map(Rc::clone)))
{
None => None,
Some(this_and_next) => match self.ptr_crossed {
true => None,
false => {
let this = this_and_next.0;
let next = this_and_next.1;
self.ptr_crossed = self
.tail
.as_ref()
.map(|tail| Rc::ptr_eq(&this, tail))
.unwrap_or(false);
self.head = next;
Some(this)
}
},
}
}
None => None,
}
}
}
DoubleEndedIterator implementation for NodeIterator:
impl<T: std::fmt::Debug + Default + Clone + PartialEq> DoubleEndedIterator for NodeIterator<T> {
fn next_back(&mut self) -> Option<Self::Item> {
match self.tail {
Some(_) => {
match self.tail.as_ref().map(|tail| {
(
Rc::clone(tail),
tail.borrow()
.prev
.as_ref()
.cloned()
.and_then(|prev| prev.upgrade()),
)
}) {
None => None,
Some(this_and_prev) => match self.ptr_crossed {
true => None,
false => {
let this = this_and_prev.0;
let prev = this_and_prev.1;
self.ptr_crossed = self
.head
.as_ref()
.map(|head| Rc::ptr_eq(&this, head))
.unwrap_or(false);
self.tail = prev;
Some(this)
}
},
}
}
None => None,
}
}
}
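As a usage sketch, assuming the list exposes an iter() method that returns the Iter struct above (the method itself is not listed here):
let mut list = List::new(); //assumed empty-list constructor
for key in [1, 2, 3] {
    list.push_back(key);
}
//iter() does not consume the list, so we can traverse it more than once
let forward: Vec<_> = list.iter().collect();
assert_eq!(forward, vec![1, 2, 3]);
let backward: Vec<_> = list.iter().rev().collect(); //rev() drives next_back
assert_eq!(backward, vec![3, 2, 1]);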
Sorting algorithms
We present sorting algorithms in the following sections.
Bubble sort
The bubble sort algorithm starts with the left-most item in the given array, compares it to the element next to it, and swaps them if the next is smaller than the previous. Then it starts with the next item (irrespective of whether it was swapped or not in the previous step), compares it with its next, and the process continues. In the first pass, the comparison and swapping (if necessary) goes all the way to the end of the array, and the biggest element in the array lands in the right-most position. The second pass again starts with the left-most element and puts the next biggest element just before the biggest element. We maintain a boolean flag to avoid further unnecessary passes if no swapping was done in the previous pass (because the array is already sorted).
Following is the bubble sort implementation:
///Bubble sort implementation
pub fn sort<T: PartialOrd>(items: &mut [T]) {
if items.len() < 2 {
return;
}
let mut swapped = true;
//Number of completed passes - the last `i` elements are already in place
let mut i = 0;
while swapped {
swapped = false;
for j in 0..items.len() - 1 - i {
if items[j] > items[j + 1] {
items.swap(j, j + 1);
swapped = true;
}
}
i += 1;
}
}
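A quick usage sketch of the sort function above:
let mut data = [5, 1, 4, 2, 8];
sort(&mut data);
assert_eq!(data, [1, 2, 4, 5, 8]);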
Insertion sort
In insertion sort we think of the items array as two parts - the left part, which is presumed to be sorted, and the rest, which is unsorted. We pick up the unsorted elements one by one and place them in their appropriate positions in the left, sorted part until the unsorted part becomes empty - at which point the array is fully sorted.
The sorted part starts with just one element, i.e. the first element, which is sorted in itself. Proceeding from the second element onwards, we keep growing the sorted part. We are done when the unsorted part is exhausted.
Following is the insertion sort algorithm implementation:
///Insertion sort implementation
pub fn sort<T: PartialOrd>(items: &mut [T]) {
for pos in 1..items.len() {
let mut i = pos;
while i > 0 && items[i] < items[i - 1] {
items.swap(i, i - 1);
i -= 1;
}
}
}
Selection sort
In selection sort, we start at the beginning of the items array and look at the rest of the array to find the index of the element that is the smallest in the whole array. If found, we swap the smallest with the element at the first position of the array. Then we look at the second element of the array and try to find its replacement, if any. This process continues until we have looked at every element up to the one before the last.
Below is the selection sort implementation:
///Selection sort implementation
pub fn sort<T: PartialOrd>(items: &mut [T]) {
for curr_item_index in 0..items.len() - 1 {
let mut min_index_in_rest = curr_item_index;
for next in (curr_item_index + 1)..items.len() {
if items[next] < items[min_index_in_rest] {
min_index_in_rest = next;
}
}
if min_index_in_rest != curr_item_index {
items.swap(curr_item_index, min_index_in_rest);
}
}
}
Merge sort
We present the merge sort algorithm in this section. Merge sort is a divide-and-conquer algorithm - we keep dividing the input array (in memory) until each division is no longer than 1 element. At this point all divisions are sorted in themselves. Then we call the merge procedure. At this point we need an auxiliary array to hold the merged halves. From the auxiliary array we copy the contents back to the original array, picking the next smaller element each time. We don't expect the element type T to be clonable - but we need it to have a Default::default value, because Rust does not like holes in memory.
Following is the implementation:
///Merge sort implementation
pub fn sort<'a, T: PartialOrd + Default>(items: &'a mut [T]) -> &'a [T] {
if items.len() == 0 {
return items;
}
let mut aux = Vec::with_capacity(items.len());
//Put default value - since don't expect `T` to be clone - we
//can not use Vec `fill`
for _ in 0..items.len() {
aux.push(T::default());
}
mergesort(items, 0, items.len() - 1, &mut aux);
items
}
pub fn mergesort<T: PartialOrd + Default>(
items: &mut [T],
left: usize,
right: usize,
aux: &mut Vec<T>,
) {
if right - left < 1 {
return;
}
let mid = left + (right - left) / 2;
mergesort(items, left, mid, aux);
mergesort(items, mid + 1, right, aux);
if items[mid] <= items[mid + 1] {
return;
}
merge(items, left, mid, right, aux);
}
pub fn merge<T: PartialOrd + Default>(
items: &mut [T],
left: usize,
mid: usize,
right: usize,
aux: &mut Vec<T>,
) {
//Copy elements to the auxiliary array
for i in left..=right {
aux.insert(i, std::mem::take(&mut items[i]));
}
//Start of left half
let mut left_index = left;
//Start of right half
let mut right_index = mid + 1;
//From left to right all the way
for item_index in left..=right {
//Left half got exhausted
//Copy what we have in the auxilary array to the items array
if left_index > mid {
items[item_index] = std::mem::take(&mut aux[right_index]);
right_index += 1;
//Right half exhausted
} else if right_index > right {
items[item_index] = std::mem::take(&mut aux[left_index]);
left_index += 1;
} else if aux[left_index] < aux[right_index] {
items[item_index] = std::mem::take(&mut aux[left_index]);
left_index += 1;
} else {
items[item_index] = std::mem::take(&mut aux[right_index]);
right_index += 1;
}
}
}
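A minimal usage sketch for the merge sort above, assuming its `sort` function is in scope (the sample values are illustrative):
#[cfg(test)]
mod merge_sort_tests {
    use super::sort;
    #[test]
    fn sorts_a_vec() {
        let mut items = vec![9, 1, 8, 2, 7, 3];
        sort(&mut items);
        assert_eq!(items, vec![1, 2, 3, 7, 8, 9]);
    }
}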
Quick sort
In this section we present the implementation of the quick sort algorithm. Quick sort is an efficient sorting technique that uses no extra storage and performs in n log(n) time on average when the array elements are randomly placed.
Here we randomly select the left, right or middle element as the pivot while the algorithm is processing sub-arrays.
Following is the implementation:
///Implementation of Quick Sort algorithm
use rand::Rng;
pub fn quicksort<T: PartialOrd>(array: &mut [T]) {
//Nothing to sort for empty or single-element arrays - also avoids `len() - 1` underflow
if array.len() < 2 {
return;
}
let mut rng = rand::thread_rng();
sort(array, 0, array.len() - 1, &mut rng);
}
//Uses the right most element as pivot
#[inline]
fn right_as_pivot<T: PartialOrd>(array: &mut [T], left: usize, right: usize) -> usize {
let mut i = left;
for j in left..right {
if array[j] <= array[right] {
array.swap(i, j);
i += 1;
}
}
array.swap(i, right);
i
}
//Uses the left most element as the pivot
#[inline]
fn left_as_pivot<T: PartialOrd>(array: &mut [T], left: usize, right: usize) -> usize {
let mut i = right;
for j in (left + 1..=right).rev() {
if array[j] > array[left] {
array.swap(i, j);
i -= 1;
}
}
array.swap(i, left);
i
}
//Uses the middle element as the pivot - it is first swapped to the left position
#[inline]
fn mid_as_pivot<T: PartialOrd>(array: &mut [T], left: usize, right: usize) -> usize {
let mid = left + (right - left) / 2;
array.swap(left, mid);
let mut i = right;
for j in (left + 1..=right).rev() {
if array[j] > array[left] {
array.swap(i, j);
i -= 1;
}
}
array.swap(i, left);
i
}
//Recursive partition based sort algorithm
fn sort<T: PartialOrd>(array: &mut [T], left: usize, right: usize, random: &mut impl Rng) {
if left < right {
let partition = partition(array, left, right, random);
if array[..partition].len() > 1 {
sort(array, left, partition - 1, random);
}
if array[partition + 1..].len() > 1 {
sort(array, partition + 1, right, random);
}
}
}
///Partition the array segment using the left most, right most or middle element
///as the pivot, chosen at random.
#[inline]
fn partition<T: PartialOrd>(
array: &mut [T],
left: usize,
right: usize,
random: &mut impl Rng,
) -> usize {
if random.gen::<bool>() {
left_as_pivot(array, left, right)
} else if random.gen::<bool>() {
right_as_pivot(array, left, right)
} else {
mid_as_pivot(array, left, right)
}
}
#[cfg(test)]
mod tests {
use super::quicksort;
use rand::Rng;
#[test]
fn quicksort_test() {
let mut runs = 50000;
loop {
let mut array: [u16; 20] = [0; 20];
rand::thread_rng().fill(&mut array);
quicksort(&mut array);
if !is_sorted(&array) {
panic!("Array is not sorted...");
}
runs -= 1;
if runs == 0 {
break;
}
}
}
fn is_sorted(array: &[u16]) -> bool {
for idx in 1..array.len() {
if array[idx - 1] > array[idx] {
return false;
}
}
true
}
}
LRU cache
Here we implement a cache that can hold key -> value mappings in memory. The maximum capacity of the cache is specified during instantiation, and when the storage reaches maximum capacity the least recently used entry is evicted.
The cache exposes the following APIs:
- put - inserts a new entry or updates an existing one
- get - gets the value corresponding to a key
- delete - deletes an entry from the cache
Note: The cache implementation depends on the doubly linked list implementation discussed in the previous section. Another noticeable thing is that, because our LRU cache implementation uses the doubly linked list APIs, the implementation is quite concise - about sixty-odd lines when comments are ignored.
Following is the complete implementation:
/// LRU cache implementation
use doubly_linked_list::{List, Node};
use std::cell::RefCell;
use std::collections::HashMap;
use std::hash::Hash;
use std::rc::{Rc, Weak};
//LRU Cache struct
pub struct LRUCache<K: Eq + Hash, V: std::fmt::Debug + Default + Clone + PartialEq> {
keys: HashMap<K, Weak<RefCell<Node<V>>>>,
entries: List<V>,
capacity: usize,
}
impl<K: Eq + Hash, V: std::fmt::Debug + Default + Clone + PartialEq> LRUCache<K, V> {
pub fn new(capacity: usize) -> Self {
Self {
keys: HashMap::new(),
entries: List::new(),
capacity,
}
}
//Insert (key, value) to the cache
//If key is already present - its value will be updated
//If backing storage (doubly linked list) size goes beyond cache capacity,
//least recently used entries will be evicted
pub fn put(&mut self, key: K, v: V) {
match self.keys.get(&key).and_then(|key| key.upgrade()) {
//Insert if not present
None => {
self.entries.push_front(v);
if let Some(front) = self.front() {
self.keys.insert(key, front);
}
self.purge_least_recently_used();
}
//Update if already exists
Some(ref mut entry) => {
entry.borrow_mut().replace(v);
self.entries.to_front(entry);
}
}
}
//Called internally to get rid of least recently used entries
fn purge_least_recently_used(&mut self) {
if self.entries.size() > self.capacity {
for _ in 0..(self.entries.size() - self.capacity) {
let _ = self.entries.pop_back();
}
}
}
//Get a value corresponding to a key
//Accessed entry, if found, moves to the front of backing storage
pub fn get(&mut self, key: &K) -> Option<V> {
match self.keys.get(key).and_then(|key| key.upgrade()) {
None => None,
Some(ref entry) => {
self.entries.to_front(entry);
Some(entry.borrow().key().clone())
}
}
}
//Delete a cache entry
pub fn delete(&mut self, key: &K) -> Option<V> {
match self.keys.get(key).and_then(|key| key.upgrade()) {
None => None,
Some(ref entry) => {
self.keys.remove(key);
self.entries.delete_target(entry)
}
}
}
//Get a weak reference to newly inserted value in the backing store
//This value is stored in the lookup HashMap against its key
fn front(&self) -> Option<Weak<RefCell<Node<V>>>> {
self.entries.head().as_ref().map(Rc::downgrade)
}
}
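A short usage sketch for the cache. It assumes the `LRUCache` above together with the doubly linked list `List` from the previous section, and that the list drops evicted nodes so their weak references stop resolving; the keys and values are illustrative:
#[cfg(test)]
mod lru_tests {
    use super::LRUCache;
    #[test]
    fn evicts_least_recently_used() {
        let mut cache: LRUCache<&str, i32> = LRUCache::new(2);
        cache.put("a", 1);
        cache.put("b", 2);
        //Touching "a" makes it the most recently used entry
        assert_eq!(cache.get(&"a"), Some(1));
        //Capacity exceeded - "b" is the least recently used and gets evicted
        cache.put("c", 3);
        assert_eq!(cache.get(&"b"), None);
        assert_eq!(cache.delete(&"c"), Some(3));
    }
}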
Heap
Heap data structure.
First we implement a max heap, then a min heap and finally combine them together to provide a unified heap struct.
Max heap
In a max heap the parent element is always bigger than its children. When we insert an element into the heap, we place it at the right most position of the underlying Vec data structure, compare it with its parent, and move it up as long as it is bigger than its parent.
While removing, we take out the top element and replace it with the right most element. Next we look at its children to check whether it is smaller than them. We swap the parent with the bigger of its children and repeat the process till the parent is no longer smaller than its children.
Following is the entire code for max heap:
/***
* Implement a max heap data structure
***/
#[derive(Debug)]
pub struct MaxHeap<T: Ord> {
elements: Vec<T>,
}
impl<T: Ord> MaxHeap<T> {
pub fn new() -> Self {
MaxHeap {
elements: Vec::new(),
}
}
//Allocate memory upfront
pub fn with_capacity(capacity: usize) -> Self {
MaxHeap {
elements: Vec::with_capacity(capacity),
}
}
pub fn size(&self) -> usize {
self.elements.len()
}
fn get_parent_index(index: usize) -> Option<usize> {
match index {
0 => None,
_ => Some((index - 1) / 2),
}
}
fn left_child_index(index: usize) -> usize {
2 * index + 1
}
fn right_child_index(index: usize) -> usize {
2 * index + 2
}
fn has_parent(index: usize) -> bool {
Self::get_parent_index(index).is_some()
}
fn has_left_child(&self, index: usize) -> bool {
Self::left_child_index(index) < self.elements.len()
}
fn has_right_child(&self, index: usize) -> bool {
Self::right_child_index(index) < self.elements.len()
}
fn parent(&self, index: usize) -> Option<&T> {
match Self::has_parent(index) {
true => Some(&self.elements[Self::get_parent_index(index).unwrap()]),
false => None,
}
}
fn left_child(&self, index: usize) -> Option<&T> {
match self.has_left_child(index) {
true => Some(&self.elements[Self::left_child_index(index)]),
false => None,
}
}
fn right_child(&self, index: usize) -> Option<&T> {
match self.has_right_child(index) {
true => Some(&self.elements[Self::right_child_index(index)]),
false => None,
}
}
pub fn insert(&mut self, elem: T) {
self.elements.push(elem);
self.heapify_up();
}
//If the newly inserted element (at the last index) - is bigger than its parent,
//swap the parent and the element. Go to the parent index position and continue the process
//until the element > parent condition no longer holds
fn heapify_up(&mut self) {
let mut index = self.elements.len() - 1;
while Self::has_parent(index) && self.parent(index) < self.elements.get(index) {
let parent_index = Self::get_parent_index(index).unwrap();
self.elements.swap(parent_index, index);
index = parent_index;
}
}
//Take out the element at the top and replace it with the last
//element and heapify down
//Return the element taken out
pub fn remove(&mut self) -> Option<T> {
match self.is_empty() {
true => None,
false => {
let t = self.elements.swap_remove(0);
self.heapify_down();
Some(t)
}
}
}
pub fn is_empty(&self) -> bool {
self.elements.len() == 0
}
pub fn top(&self) -> Option<&T> {
match self.is_empty() {
true => None,
false => self.elements.get(0),
}
}
//Start at the top element. If it is bigger than the bigger of its children
//we are done. Otherwise, swap the top element down and continue.
fn heapify_down(&mut self) {
let mut index = 0;
while self.has_left_child(index) {
let mut bigger_child_index = Self::left_child_index(index);
if self.has_right_child(index) && self.right_child(index) > self.left_child(index) {
bigger_child_index = Self::right_child_index(index);
}
if self.elements[index] > self.elements[bigger_child_index] {
break;
} else {
self.elements.swap(index, bigger_child_index);
}
index = bigger_child_index;
}
}
}
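A small test sketch exercising the max heap above - the module name and values are illustrative:
#[cfg(test)]
mod max_heap_tests {
    use super::MaxHeap;
    #[test]
    fn removes_in_descending_order() {
        let mut heap = MaxHeap::new();
        for n in [3, 1, 4, 1, 5] {
            heap.insert(n);
        }
        assert_eq!(heap.top(), Some(&5));
        assert_eq!(heap.remove(), Some(5));
        assert_eq!(heap.remove(), Some(4));
        assert_eq!(heap.size(), 3);
    }
}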
Min heap
In a min heap the parent is always smaller than its children.
Following is the code for min heap:
/***
* Implement a min heap data structure
***/
#[derive(Debug)]
pub struct MinHeap<T: Ord> {
elements: Vec<T>,
}
impl<T: Ord> MinHeap<T> {
pub fn new(capacity: usize) -> Self {
MinHeap {
elements: Vec::with_capacity(capacity),
}
}
fn get_parent_index(index: usize) -> Option<usize> {
match index {
0 => None,
_ => Some((index - 1) / 2),
}
}
fn left_child_index(index: usize) -> usize {
2 * index + 1
}
fn right_child_index(index: usize) -> usize {
2 * index + 2
}
fn has_parent(index: usize) -> bool {
Self::get_parent_index(index).is_some()
}
fn has_left_child(&self, index: usize) -> bool {
Self::left_child_index(index) < self.elements.len()
}
fn has_right_child(&self, index: usize) -> bool {
Self::right_child_index(index) < self.elements.len()
}
fn parent(&self, index: usize) -> Option<&T> {
match Self::has_parent(index) {
true => Some(&self.elements[Self::get_parent_index(index).unwrap()]),
false => None,
}
}
fn left_child(&self, index: usize) -> Option<&T> {
match self.has_left_child(index) {
true => Some(&self.elements[Self::left_child_index(index)]),
false => None,
}
}
fn right_child(&self, index: usize) -> Option<&T> {
match self.has_right_child(index) {
true => Some(&self.elements[Self::right_child_index(index)]),
false => None,
}
}
pub fn insert(&mut self, elem: T) {
self.elements.push(elem);
self.heapify_up();
}
//Take the newly inserted element up as long as its parent is bigger
fn heapify_up(&mut self) {
let mut index = self.elements.len() - 1;
while Self::has_parent(index) && self.parent(index) > self.elements.get(index) {
let parent_index = Self::get_parent_index(index).unwrap();
self.elements.swap(parent_index, index);
index = parent_index;
}
}
pub fn remove(&mut self) -> Option<T> {
match self.elements.len() {
0 => None,
_ => {
let t = self.elements.swap_remove(0);
self.heapify_down();
Some(t)
}
}
}
//Bring the top element down, comparing it with its children, as long as it is bigger than
//the smaller of its children
fn heapify_down(&mut self) {
let mut index = 0;
while self.has_left_child(index) {
let mut smaller_child_index = Self::left_child_index(index);
if self.has_right_child(index) && self.right_child(index) < self.left_child(index) {
smaller_child_index = Self::right_child_index(index);
}
if self.elements[index] < self.elements[smaller_child_index] {
break;
} else {
self.elements.swap(index, smaller_child_index);
}
index = smaller_child_index;
}
}
}
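A similar sketch for the min heap - elements should come out in ascending order; the values are illustrative:
#[cfg(test)]
mod min_heap_tests {
    use super::MinHeap;
    #[test]
    fn removes_in_ascending_order() {
        let mut heap = MinHeap::new(8);
        for n in [3, 1, 4, 1, 5] {
            heap.insert(n);
        }
        assert_eq!(heap.remove(), Some(1));
        assert_eq!(heap.remove(), Some(1));
        assert_eq!(heap.remove(), Some(3));
    }
}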
Max min heap
Here we present a Heap struct that can be used either as a min heap or a max heap based on how we instantiate it. Heap::min gives a min heap whereas Heap::max gives a max heap. Additionally, we can use Heap::with_capacity and pass in a boolean flag to get a max or min heap.
Below is the code for the heap struct:
/***
* Implement a heap data structure that can be used as min or max heap
***/
pub struct Heap<T: Ord> {
elements: Vec<T>,
min: bool,
}
impl<T: Ord> Heap<T> {
//Creates min heap
pub fn min() -> Self {
Heap {
elements: Vec::new(),
min: true,
}
}
//Creates a max heap
pub fn max() -> Self {
Heap {
elements: Vec::new(),
min: false,
}
}
//Allocate memory upfront
//Creates a max or min heap based on the flag passed in
pub fn with_capacity(capacity: usize, min: bool) -> Self {
Heap {
elements: Vec::with_capacity(capacity),
min,
}
}
pub fn size(&self) -> usize {
self.elements.len()
}
fn get_parent_index(index: usize) -> Option<usize> {
match index {
0 => None,
_ => Some((index - 1) / 2),
}
}
fn left_child_index(index: usize) -> usize {
2 * index + 1
}
fn right_child_index(index: usize) -> usize {
2 * index + 2
}
fn has_parent(index: usize) -> bool {
Self::get_parent_index(index).is_some()
}
fn has_left_child(&self, index: usize) -> bool {
Self::left_child_index(index) < self.elements.len()
}
fn has_right_child(&self, index: usize) -> bool {
Self::right_child_index(index) < self.elements.len()
}
fn parent(&self, index: usize) -> Option<&T> {
match Self::has_parent(index) {
true => Some(&self.elements[Self::get_parent_index(index).unwrap()]),
false => None,
}
}
fn left_child(&self, index: usize) -> Option<&T> {
match self.has_left_child(index) {
true => Some(&self.elements[Self::left_child_index(index)]),
false => None,
}
}
fn right_child(&self, index: usize) -> Option<&T> {
match self.has_right_child(index) {
true => Some(&self.elements[Self::right_child_index(index)]),
false => None,
}
}
pub fn insert(&mut self, elem: T) {
self.elements.push(elem);
self.heapify_up();
}
//If max heap - take the inserted element up as long as it is bigger than its parent
//If min heap - take the inserted element up as long as it is smaller than its parent
fn heapify_up(&mut self) {
let mut index = self.elements.len() - 1;
while Self::has_parent(index)
&& (!self.min && self.parent(index) < self.elements.get(index)
|| (self.min && self.parent(index) > self.elements.get(index)))
{
let parent_index = Self::get_parent_index(index).unwrap();
self.elements.swap(parent_index, index);
index = parent_index;
}
}
pub fn remove(&mut self) -> Option<T> {
match self.is_empty() {
true => None,
false => {
let t = self.elements.swap_remove(0);
self.heapify_down();
Some(t)
}
}
}
pub fn is_empty(&self) -> bool {
self.elements.len() == 0
}
pub fn top(&self) -> Option<&T> {
match self.is_empty() {
true => None,
false => self.elements.get(0),
}
}
//If min heap, move the element now at the top down as long as it is bigger than its smaller child
//If max heap, move it down as long as it is smaller than its bigger child
fn heapify_down(&mut self) {
let mut index = 0;
while self.has_left_child(index) {
let mut child_index = Self::left_child_index(index);
if self.has_right_child(index)
&& (!self.min && self.right_child(index) > self.left_child(index)
|| (self.min && self.right_child(index) < self.left_child(index)))
{
child_index = Self::right_child_index(index);
}
if !self.min && self.elements[index] > self.elements[child_index] {
break;
} else if self.min && self.elements[index] < self.elements[child_index] {
break;
} else {
self.elements.swap(index, child_index);
}
index = child_index;
}
}
}
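A brief sketch showing both flavours of the combined `Heap` above (the sample values are illustrative):
#[cfg(test)]
mod heap_tests {
    use super::Heap;
    #[test]
    fn min_and_max_flavours() {
        let mut min_heap = Heap::min();
        let mut max_heap = Heap::with_capacity(4, false); //false => max heap
        for n in [2, 9, 4, 7] {
            min_heap.insert(n);
            max_heap.insert(n);
        }
        assert_eq!(min_heap.remove(), Some(2));
        assert_eq!(max_heap.remove(), Some(9));
    }
}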
Trie
Trie is a handy data structure. We can use it for storing words and searching for entries and prefixes.
Below is the Trie implementation:
/*** Implement Trie data structure that holds string ***/
use std::collections::HashMap;
struct Node {
children: HashMap<char, Node>,
is_word: bool,
}
impl Node {
fn new() -> Self {
Node {
children: HashMap::new(),
is_word: false,
}
}
}
pub struct Trie {
root: Node,
}
impl Trie {
pub fn new() -> Self {
Trie { root: Node::new() }
}
//Insert a word into the Trie
pub fn insert(&mut self, word: &str) {
let mut current = &mut self.root;
let mut chars = word.chars();
while let Some(c) = chars.next() {
current = current.children.entry(c).or_insert(Node::new());
}
current.is_word = true;
}
//Search for a word in the Trie
pub fn search(&self, word: &str) -> bool {
let mut current = &self.root;
let mut chars = word.chars();
while let Some(c) = chars.next() {
match current.children.get(&c) {
Some(ref node) => current = node,
None => return false,
}
}
current.is_word
}
//Is there a word that starts with the given prefix
pub fn starts_with(&self, prefix: &str) -> bool {
let mut current = &self.root;
let mut chars = prefix.chars();
while let Some(c) = chars.next() {
match current.children.get(&c) {
Some(ref node) => current = node,
None => return false,
}
}
true
}
}
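A short usage sketch for the `Trie` above - the sample words are illustrative:
#[cfg(test)]
mod trie_tests {
    use super::Trie;
    #[test]
    fn insert_search_and_prefix() {
        let mut trie = Trie::new();
        trie.insert("apple");
        assert!(trie.search("apple"));
        assert!(!trie.search("app")); //"app" is only a prefix, not a stored word
        assert!(trie.starts_with("app"));
        assert!(!trie.starts_with("apricot"));
    }
}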
Longest prefix/suffixes
Here we find the longest common prefixes and suffixes for a given array of strings. Inline comments are given for the longest common prefix - we omit them for the suffix version for brevity.
Longest common prefix
Find the longest common prefix for given array of strings:
/***
* Find the longest prefix from an array of strings
***/
pub fn longest_common_prefix(arr: &[&str]) -> String {
if arr.len() == 0 {
return String::new();
}
//Prefix would be as long as the min length of all the strings
let mut result = String::with_capacity(
arr.iter()
.min_by(|p, n| p.len().cmp(&n.len()))
.unwrap()
.len(),
);
//Get the characters of the first string in an iterator
let mut first = arr[0].chars();
//Map the rest of the strings to char iterators and hold them in a vector
let mut rests: Vec<_> = arr[1..].iter().map(|s| s.chars()).collect();
//Iterate through the characters of the first string
while let Some(ch) = first.next() {
for i in 0..rests.len() {
let current = &mut rests[i];
match current.next() {
//Next string's next char matches with the first string char
//jump to the next string
Some(c) if c == ch => continue,
//Does not match - return whatever we have got so far
_ => return result,
}
}
//All strings next char matched, lengthen result
result.push(ch);
}
result
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn longest_common_prefix_test_1() {
let arr = ["flower", "flow", "flight"];
assert_eq!(longest_common_prefix(&arr), String::from("fl"));
let arr = ["flower1", "flower2", "flower3"];
assert_eq!(longest_common_prefix(&arr), String::from("flower"));
}
}
Longest common suffix
Find the longest common suffix given an array of strings:
/***
* Given an array of strings, find the longest common suffix for them
***/
pub fn longest_common_suffix(arr: &[&str]) -> String {
if arr.len() == 0 {
return String::new();
}
//Suffix would be as long as the min length of all the strings
let mut result = String::with_capacity(
arr.iter()
.min_by(|p, n| p.len().cmp(&n.len()))
.unwrap()
.len(),
);
let mut first = arr[0].chars();
let mut rests: Vec<_> = arr[1..].iter().map(|s| s.chars()).collect();
while let Some(ch) = first.next_back() {
for i in 0..rests.len() {
let current = &mut rests[i];
match current.next_back() {
Some(c) if c == ch => continue,
_ => return result,
}
}
//We are coming from the back - hence insert the next char before the already existing ones
result.insert(0, ch);
}
result
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn longest_common_suffix_test() {
let arr = ["privacy"];
let result = longest_common_suffix(&arr);
assert_eq!(result, String::from("privacy"));
let arr = ["privacy", "fallacy", "delicacy"];
let result = longest_common_suffix(&arr);
assert_eq!(result, String::from("acy"));
let arr = ["freedom", "kingdom", "boredom"];
let result = longest_common_suffix(&arr);
assert_eq!(result, String::from("dom"));
}
}
String permutation
Given a string s, find all the permutations of its characters:
/***
* Given a string, find all the unique permutations of its characters
***/
pub fn permutate(s: &str) -> Vec<String> {
let mut result = Vec::new();
if s.len() == 0 {
return result;
}
if s.len() == 1 {
result.push(s.to_string());
return result;
}
let chars: Vec<_> = s.chars().collect();
for i in 0..chars.len() {
let ch = chars[i];
let mut segment = String::from(&s[0..i]);
segment += &s[i + 1..];
let intermediates = permutate(&segment);
for mut s in intermediates {
s.push(ch);
result.push(s);
}
}
result
}
#[cfg(test)]
mod tests {
use super::permutate;
#[test]
fn permutate_test() {
let result = permutate("abc");
assert_eq!(
result,
vec![
String::from("cba"),
String::from("bca"),
String::from("cab"),
String::from("acb"),
String::from("bac"),
String::from("abc")
]
);
}
}
Merge k sorted arrays
Here we present a simple routine to merge k sorted arrays, which might not all be of the same length. We make use of the Heap data structure that we developed earlier. We also make use of a tuple struct to maintain the state of each array while we iterate over them.
Following is the implementation:
//Merge k sorted arrays. Arrays are not of equal lengths
use heap::Heap;
pub fn merge(arrays: &[&[i32]]) -> Vec<i32> {
let mut heap = Heap::min();
let mut size = 0;
for i in 0..arrays.len() {
size += arrays[i].len();
if arrays[i].len() > 0 {
let elem = Elem::new(arrays[i][0], i, 0);
heap.insert(elem);
}
}
let mut result: Vec<i32> = Vec::with_capacity(size);
while !heap.is_empty() {
let elem = heap.remove().unwrap();
let value = elem.0;
let array_idx = elem.1;
let value_idx = elem.2;
if value_idx + 1 < arrays[array_idx].len() {
heap.insert(Elem::new(
arrays[array_idx][value_idx + 1],
array_idx,
value_idx + 1,
));
}
result.push(value);
}
result
}
#[derive(Eq, Ord, PartialEq, PartialOrd)]
struct Elem<T: Ord>(T, usize, usize);
impl<T: Ord> Elem<T> {
fn new(value: T, array_idx: usize, value_idx: usize) -> Self {
Elem(value, array_idx, value_idx)
}
}
#[cfg(test)]
mod tests {
use super::merge;
#[test]
fn merge_test() {
assert_eq!(
merge(&[&[1, 3, 5, 6, 7], &[0, 2, 4], &[2, 4, 6, 8, 9, 10]]),
vec![0, 1, 2, 2, 3, 4, 4, 5, 6, 6, 7, 8, 9, 10]
);
}
}
Substring with Concatenation of All Words
Given a string s and an array of strings words, all of the same length, return all starting indices of substring(s) in s that are a concatenation of each word in words exactly once, in any order, and without any intervening characters.
The answer can be in any order.
Example 1:
Input: s = "barfoothefoobarman", words = ["foo","bar"] Output: [0,9] Explanation: Substrings starting at index 0 and 9 are "barfoo" and "foobar" respectively The output order does not matter, returning [9,0] is fine too.
Example 2:
Input: s = "wordgoodgoodgoodbestword", words = ["word","good","best","word"]
Output: [12]
Ignore the duplicate search words above.
Example 3:
Input: s = "barfoofoobarthefoobarman", words = ["bar","foo","the"]
Output: [6,9,12]
Solution explanation: We use a double ended queue for this solution. We keep iterating through the given input string in chunks equal in length to a search word and keep pushing them onto the queue. When the queue size becomes equal to the number of unique search words, we add the index of the first chunk to the result vector and delete all entries but the last - because that one could be the beginning of another substring. If we encounter a chunk that duplicates the one immediately before it, we clear the queue and add only the current chunk. We also check for earlier duplicates in the queue and, if found, remove them along with everything queued before them.
Following is the implementation:
/// Substring with Concatenation of All Words
use std::collections::HashSet;
use std::collections::VecDeque;
pub fn find_substring(s: String, search_words: Vec<String>) -> Vec<usize> {
if s.is_empty() || search_words.is_empty() {
return vec![];
}
let result = substring_indices(s, search_words);
result
}
fn substring_indices(s: String, words: Vec<String>) -> Vec<usize> {
let mut result = Vec::new();
if s.is_empty() || words.is_empty() {
return result;
}
let mut stack: VecDeque<(&str, usize)> = VecDeque::with_capacity(words.len());
let split_size = words[0].len();
//Get rid of duplicate search words
let words = HashSet::<String>::from_iter(words);
//Guard against input strings shorter than a single search word
if s.len() < split_size {
return result;
}
let mut index = 0;
while index <= s.len() - split_size {
let chunck = &s[index..index + split_size];
if words.contains(&chunck.to_string()) {
if let Some(back) = stack.back() {
if &back.0 == &chunck {
stack.clear();
stack.push_back((chunck, index));
} else {
let repeat = stack
.iter()
.enumerate()
.find(|repeat| repeat.1 .0 == chunck);
if let Some(repeat) = repeat {
for _ in 0..=repeat.0 {
stack.pop_front();
}
}
stack.push_back((chunck, index));
}
} else {
stack.push_back((chunck, index));
}
} else {
stack.clear();
}
if stack.len() == words.len() {
let first = stack.pop_front();
if words.len() >= 2 {
for _ in 0..words.len() - 2 {
stack.pop_front();
}
}
if let Some(first) = first {
result.push(first.1);
}
}
index += split_size;
}
result
}
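A quick check of the routine against Example 1 above - a test sketch assuming `find_substring` is in scope (the module name is illustrative):
#[cfg(test)]
mod find_substring_tests {
    use super::find_substring;
    #[test]
    fn example_1() {
        let s = String::from("barfoothefoobarman");
        let words = vec![String::from("foo"), String::from("bar")];
        assert_eq!(find_substring(s, words), vec![0, 9]);
    }
}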
Pascal's triangle
Generate the rows of Pascal's triangle:
pub fn generate(rows: usize) -> Vec<Vec<usize>> {
if rows == 0 {
return vec![];
}
let mut triangle = Vec::with_capacity(rows);
triangle.push(vec![1]);
for i in 1..rows {
let prev_row = &triangle[i - 1];
let mut curr_row: Vec<_> = vec![1];
for j in 1..i {
curr_row.push(prev_row[j - 1] + prev_row[j]);
}
curr_row.push(1);
triangle.push(curr_row);
}
triangle
}
Find the nth row of Pascal's triangle recursively:
pub fn nth(row: usize) -> Vec<usize> {
match row {
n if n < 1 => vec![],
n if n == 1 => vec![1],
n => {
let prev_row = nth(n - 1);
let mut curr = Vec::with_capacity(n);
curr.push(1);
for i in 1..prev_row.len() {
curr.push(prev_row[i - 1] + prev_row[i]);
}
curr.push(1);
curr
}
}
}
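A small sketch exercising both functions, assuming `generate` and `nth` are in scope in the same module (the row count is illustrative):
#[cfg(test)]
mod pascal_tests {
    use super::*;
    #[test]
    fn fifth_row() {
        let triangle = generate(5);
        assert_eq!(triangle[4], vec![1, 4, 6, 4, 1]);
        assert_eq!(nth(5), vec![1, 4, 6, 4, 1]);
    }
}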
Two sum
Given an array of integers nums and an integer target, return indices of the two numbers such that they add up to target.
You may assume that each input would have exactly one solution, and you may not use the same element twice.
You can return the answer in any order
Two sum implementation
//Two sum implementation
use std::collections::HashMap;
pub fn two_sum(nums: Vec<i32>, target: i32) -> Vec<i32> {
let mut map = HashMap::<i32, i32>::new();
for i in 0..nums.len() {
if let Some(index) = map.get(&(target - nums[i])) {
return vec![*index, i as i32];
} else {
map.insert(nums[i], i as i32);
}
}
vec![]
}
Sell stock
You are given an array prices where prices[i] is the price of a given stock on the ith day.
You want to maximize your profit by choosing a single day to buy one stock and choosing a different day in the future to sell that stock.
Return the maximum profit you can achieve from this transaction. If you cannot achieve any profit, return 0.
Implementation
//Buy and sell stock once - max profit
use std::cmp::max;
pub fn max_profit(prices: Vec<i32>) -> i32 {
let mut max_profit = 0;
let mut left = 0; //Buy stock on day 'left'
let mut right = 1; //Sell stock on day 'right'
while right < prices.len() {
if prices[left] < prices[right] {
max_profit = max(max_profit, prices[right] - prices[left]);
} else {
left = right;
}
right += 1;
}
max_profit
}
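A brief test sketch for `max_profit` - the sample prices are illustrative:
#[cfg(test)]
mod max_profit_tests {
    use super::max_profit;
    #[test]
    fn single_buy_and_sell() {
        assert_eq!(max_profit(vec![7, 1, 5, 3, 6, 4]), 5); //buy at 1, sell at 6
        assert_eq!(max_profit(vec![7, 6, 4, 3, 1]), 0); //prices only fall - no profit
    }
}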
Product except self
Given an integer array nums, return an array answer such that answer[i] is equal to the product of all the elements of nums except nums[i].
The product of any prefix or suffix of nums is guaranteed to fit in a 32-bit integer.
You must write an algorithm that runs in O(n) time and without using the division operation.
Solution:
pub fn product_except_self(nums: Vec<i32>) -> Vec<i32> {
let mut result = vec![1; nums.len()];
let mut prefix = 1;
for i in 0..nums.len() {
result[i] = prefix;
prefix *= nums[i];
}
let mut postfix = 1;
for i in (0..nums.len()).rev() {
result[i] *= postfix;
postfix *= nums[i];
}
result
}
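A brief test sketch for `product_except_self` - the sample input is illustrative:
#[cfg(test)]
mod product_except_self_tests {
    use super::product_except_self;
    #[test]
    fn basic_case() {
        assert_eq!(product_except_self(vec![1, 2, 3, 4]), vec![24, 12, 8, 6]);
    }
}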
Max sub array sum
Given an integer array, find the contiguous subarray (containing at least one number) which has the largest sum and return its sum.
O(n^3) implementation
pub fn max_sub_array_2(arr: Vec<i32>) -> i32 {
match arr.len() {
0 => 0,
_ => {
let mut max_sum = arr[0];
for i in 0..arr.len() {
for j in i..arr.len() {
let mut sum = arr[i];
for k in i + 1..=j {
sum += arr[k];
}
max_sum = max(max_sum, sum);
}
}
max_sum
}
}
}
O(n^2) implementation
pub fn max_sub_array_1(arr: Vec<i32>) -> i32 {
match arr.len() {
0 => 0,
_ => {
let mut max_sum = arr[0];
for i in 0..arr.len() {
let mut curr = 0;
for j in i..arr.len() {
curr += arr[j];
max_sum = max(max_sum, curr);
}
}
max_sum
}
}
}
O(n) implementation
use std::cmp::max;
pub fn max_sub_array(arr: Vec<i32>) -> i32 {
match arr.len() {
0 => 0,
_ => {
let mut max_sum = arr[0];
let mut curr = 0;
for num in arr {
if curr < 0 {
curr = 0;
}
curr += num;
max_sum = max(max_sum, curr);
}
max_sum
}
}
}
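A brief sketch checking that all three versions agree, assuming they are in scope in the same module (the sample input is illustrative):
#[cfg(test)]
mod max_sub_array_tests {
    use super::*;
    #[test]
    fn all_versions_agree() {
        let arr = vec![-2, 1, -3, 4, -1, 2, 1, -5, 4];
        assert_eq!(max_sub_array(arr.clone()), 6); //subarray [4, -1, 2, 1]
        assert_eq!(max_sub_array_1(arr.clone()), 6);
        assert_eq!(max_sub_array_2(arr), 6);
    }
}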
Max vowels in substring
This code finds the maximum number of vowels in any substring of a given length within a string. It takes a string and an integer k as parameters, and returns the maximum number of vowels found in any substring of length k.
Find maximum number of vowels
//Implementation
use std::collections::HashSet;
pub fn max_vowels(s: String, k: i32) -> i32 {
let k: usize = k.try_into().unwrap();
if s.len() < k {
panic!(
"IllegalArgumentException - string length {} is shorter than substring length {}",
s.len(),
k
);
}
let s = s.chars().collect::<Vec<_>>();
let vowels = "aeiou".chars().collect::<HashSet<_>>();
let mut count = 0;
let mut max_count = 0;
for i in 0..s.len() {
if vowels.contains(&s[i]) {
count = count + 1;
}
//Slide the window - drop the character that just fell out of the window
if i >= k && vowels.contains(&s[i - k]) {
count = count - 1;
}
//Record the count only once the first full window of length k has formed
if i + 1 >= k {
max_count = std::cmp::max(count, max_count);
}
}
max_count
}
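A brief test sketch for `max_vowels` - the sample strings are illustrative:
#[cfg(test)]
mod max_vowels_tests {
    use super::max_vowels;
    #[test]
    fn sliding_window_counts() {
        assert_eq!(max_vowels(String::from("abciiidef"), 3), 3); //window "iii"
        assert_eq!(max_vowels(String::from("leetcode"), 3), 2); //window "eet" or "ode"
    }
}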