The code itself is mostly self-explanatory:

```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

[RequireComponent(typeof(CharacterController))]
public class FPSController : MonoBehaviour
{
    public Camera playerCamera;
    public float walkSpeed = 6f;
    public float runSpeed = 12f;
    public float jumpPower = 7f;
    public float gravity = 10f;
    public float lookSpeed = 2f;
    public float lookXLimit = 45f;

    Vector3 moveDirection = Vector3.zero;
    float rotationX = 0;
    public bool canMove = true;

    CharacterController characterController;

    void Start()
    {
        characterController = GetComponent<CharacterController>();
        Cursor.lockState = CursorLockMode.Locked;
        Cursor.visible = false;
    }

    void Update()
    {
        #region Handles Movement
        Vector3 forward = transform.TransformDirection(Vector3.forward);
        Vector3 right = transform.TransformDirection(Vector3.right);

        // Press Left Shift to run
        bool isRunning = Input.GetKey(KeyCode.LeftShift);
        float curSpeedX = canMove ? (isRunning ? runSpeed : walkSpeed) * Input.GetAxis("Vertical") : 0;
        float curSpeedY = canMove ? (isRunning ? runSpeed : walkSpeed) * Input.GetAxis("Horizontal") : 0;
        float movementDirectionY = moveDirection.y;
        moveDirection = (forward * curSpeedX) + (right * curSpeedY);
        #endregion

        #region Handles Jumping
        if (Input.GetButton("Jump") && canMove && characterController.isGrounded)
        {
            moveDirection.y = jumpPower;
        }
        else
        {
            moveDirection.y = movementDirectionY;
        }

        if (!characterController.isGrounded)
        {
            moveDirection.y -= gravity * Time.deltaTime;
        }
        #endregion

        #region Handles Rotation
        characterController.Move(moveDirection * Time.deltaTime);

        if (canMove)
        {
            rotationX += -Input.GetAxis("Mouse Y") * lookSpeed;
            rotationX = Mathf.Clamp(rotationX, -lookXLimit, lookXLimit);
            playerCamera.transform.localRotation = Quaternion.Euler(rotationX, 0, 0);
            transform.rotation *= Quaternion.Euler(0, Input.GetAxis("Mouse X") * lookSpeed, 0);
        }
        #endregion
    }
}
```

`Vector3` is used in a 3D world or scene, while `Vector2` is for 2D: `Vector3(x, y, z)` versus `Vector2(x, y)`.

`deltaTime`

Rendering and script execution take time, and that time differs every frame. Even if you target ~60 fps, the frame rate will not be perfectly stable; a different amount of time passes each frame. The engine can wait if a frame finishes too quickly, but it cannot skip rendering when a frame is slower than expected.

To handle frames of different lengths, you get `Time.deltaTime`: the time, in seconds, since the last frame.
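Multiplying per-frame movement by the elapsed frame time is what keeps speed constant in units per second. The idea is language-agnostic; here is a small Python illustration (the frame times are hypothetical, and this is a sketch of the concept, not Unity's API):

```python
# Sketch: why per-frame movement is scaled by delta time.
# "Units per second" stays constant only if each frame's step
# is speed * delta_time, no matter how long frames take.

def total_distance(speed, frame_times):
    """Move at `speed` units/second across frames of varying length."""
    return sum(speed * dt for dt in frame_times)

# Two runs covering the same 1 second of wall-clock time:
# one smooth (60 fps), one stuttering. Same distance either way.
smooth = [1 / 60] * 60
stutter = [1 / 30] * 15 + [1 / 120] * 60
print(total_distance(6.0, smooth))   # ~6.0 units in 1 second
print(total_distance(6.0, stutter))  # ~6.0 units in 1 second
```

Without the `delta_time` factor, the stuttering run would move a different distance than the smooth one even though the same wall-clock time passed.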

Roman numerals are represented by seven different symbols: `I`, `V`, `X`, `L`, `C`, `D` and `M`.

| Symbol | Value |
| --- | --- |
| I | 1 |
| V | 5 |
| X | 10 |
| L | 50 |
| C | 100 |
| D | 500 |
| M | 1000 |

For example, `2` is written as `II` in Roman numerals, just two ones added together. `12` is written as `XII`, which is simply `X + II`. The number `27` is written as `XXVII`, which is `XX + V + II`.

Roman numerals are usually written largest to smallest from left to right. However, the numeral for four is not `IIII`. Instead, the number four is written as `IV`: because the one is before the five, we subtract it, making four. The same principle applies to the number nine, which is written as `IX`. There are six instances where subtraction is used:

- `I` can be placed before `V` (5) and `X` (10) to make 4 and 9.
- `X` can be placed before `L` (50) and `C` (100) to make 40 and 90.
- `C` can be placed before `D` (500) and `M` (1000) to make 400 and 900.
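The subtraction rule is mechanical enough to check in code: scanning left to right, a symbol that is smaller than its right neighbour is subtracted instead of added. A short sketch of the reverse (Roman-to-integer) direction, just to illustrate the rule:

```python
# Sketch of the subtraction rule: a symbol smaller than its right
# neighbour is subtracted; everything else is added.
VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(s: str) -> int:
    total = 0
    for i, ch in enumerate(s):
        if i + 1 < len(s) and VALUES[ch] < VALUES[s[i + 1]]:
            total -= VALUES[ch]   # e.g. the I in IV or IX
        else:
            total += VALUES[ch]
    return total

assert roman_to_int("IV") == 4
assert roman_to_int("IX") == 9
assert roman_to_int("MCMXCIV") == 1994
```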

Given an integer, convert it to a roman numeral.

**Example 1:**

```
Input: num = 3
Output: "III"
Explanation: 3 is represented as 3 ones.
```

**Example 2:**

```
Input: num = 58
Output: "LVIII"
Explanation: L = 50, V = 5, III = 3.
```

**Example 3:**

```
Input: num = 1994
Output: "MCMXCIV"
Explanation: M = 1000, CM = 900, XC = 90 and IV = 4.
```

**Constraints:**

`1 <= num <= 3999`

```python
class Solution:
    def intToRoman(self, num: int) -> str:
        # Value-to-symbol map, ordered largest to smallest.
        # (Relies on dict insertion order, guaranteed in Python 3.7+.)
        hs = {1000: "M", 900: "CM", 500: "D", 400: "CD",
              100: "C", 90: "XC", 50: "L", 40: "XL",
              10: "X", 9: "IX", 5: "V", 4: "IV", 1: "I"}
        res = ''
        for key, value in hs.items():
            while key <= num:
                res += value
                num -= key
        return res
```

Given the `root` of a binary tree, find the maximum value `v` for which there exist `a` and `b` where `v = |a.val - b.val|` and `a` is an ancestor of `b`.

A node `a` is an ancestor of `b` if either: any child of `a` is equal to `b`, or any child of `a` is an ancestor of `b`.

**Example 1:**

```
Input: root = [8,3,10,1,6,null,14,null,null,4,7,13]
Output: 7
Explanation: We have various ancestor-node differences, some of which are given below:
|8 - 3| = 5
|3 - 7| = 4
|8 - 1| = 7
|10 - 13| = 3
Among all possible differences, the maximum value of 7 is obtained by |8 - 1| = 7.
```

**Example 2:**

```
Input: root = [1,null,2,null,0,3]
Output: 3
```

**Constraints:**

- The number of nodes in the tree is in the range `[2, 5000]`.
- `0 <= Node.val <= 10<sup>5</sup>`

```python
class Solution(object):
    def maxAncestorDiff(self, root) -> int:
        return self.helper(root, root.val, root.val)

    def helper(self, r, mn, mx):
        # mn/mx track the min and max values on the path so far.
        if not r:
            return mx - mn
        mn = min(mn, r.val)
        mx = max(mx, r.val)
        left_diff = self.helper(r.left, mn, mx)
        right_diff = self.helper(r.right, mn, mx)
        return max(left_diff, right_diff)
```

```python
# Definition for a binary tree node.
# class TreeNode:
#     def __init__(self, val=0, left=None, right=None):
#         self.val = val
#         self.left = left
#         self.right = right

class TreeNode:
    def __init__(self, x):
        self.val = x
        self.left = self.right = None

class Solution:
    def maxAncestorDiff(self, root):
        m = [0]
        self.dfs(root, m)
        return m[0]

    def dfs(self, root, m):
        if not root:
            return float('inf'), float('-inf')
        left = self.dfs(root.left, m)
        right = self.dfs(root.right, m)
        min_val = min(root.val, min(left[0], right[0]))
        max_val = max(root.val, max(left[1], right[1]))
        m[0] = max(m[0], max(abs(min_val - root.val), abs(max_val - root.val)))
        return min_val, max_val
```

**Medium**


You are given the `root` of a binary tree with **unique** values, and an integer `start`. At minute `0`, an **infection** starts from the node with value `start`.

Each minute, a node becomes infected if:

- The node is currently uninfected.
- The node is adjacent to an infected node.

Return *the number of minutes needed for the entire tree to be infected.*

**Example 1:**

```
Input: root = [1,5,3,null,4,10,6,9,2], start = 3
Output: 4
Explanation: The following nodes are infected during:
- Minute 0: Node 3
- Minute 1: Nodes 1, 10 and 6
- Minute 2: Node 5
- Minute 3: Node 4
- Minute 4: Nodes 9 and 2
It takes 4 minutes for the whole tree to be infected so we return 4.
```

**Example 2:**

```
Input: root = [1], start = 1
Output: 0
Explanation: At minute 0, the only node in the tree is infected so we return 0.
```

**Constraints:**

- The number of nodes in the tree is in the range `[1, 10<sup>5</sup>]`.
- `1 <= Node.val <= 10<sup>5</sup>`
- Each node has a **unique** value.
- A node with a value of `start` exists in the tree.

```python
# Definition for a binary tree node.
# class TreeNode:
#     def __init__(self, val=0, left=None, right=None):
#         self.val = val
#         self.left = left
#         self.right = right

class Solution:
    def amountOfTime(self, root: Optional[TreeNode], start: int) -> int:
        # Instance attribute, so state doesn't leak between calls
        # (the original used a class attribute, Solution.result).
        self.result = 0

        def DFS(node, start):
            if node is None:
                return 0
            leftDepth = DFS(node.left, start)
            rightDepth = DFS(node.right, start)
            if node.val == start:
                # Infection spreads downward from here.
                self.result = max(self.result, leftDepth, rightDepth)
                # A negative return marks the subtree containing `start`.
                return -1
            elif leftDepth >= 0 and rightDepth >= 0:
                return max(leftDepth, rightDepth) + 1
            # One subtree contains `start`: the distance through this node
            # to the other subtree's deepest node is abs(left - right).
            self.result = max(self.result, abs(leftDepth - rightDepth))
            return min(leftDepth, rightDepth) - 1

        DFS(root, start)
        return self.result
```
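An alternative way to think about this problem (a sketch, not the solution above): treat the tree as an undirected graph and BFS outward from the `start` node; the answer is how many BFS levels it takes to reach everything.

```python
from collections import deque

class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def amount_of_time(root, start):
    # Build an undirected adjacency map over node values (values are unique).
    adj = {}
    stack = [root]
    while stack:
        node = stack.pop()
        for child in (node.left, node.right):
            if child:
                adj.setdefault(node.val, []).append(child.val)
                adj.setdefault(child.val, []).append(node.val)
                stack.append(child)
    # BFS level by level from the start value.
    minutes = -1
    seen = {start}
    queue = deque([start])
    while queue:
        for _ in range(len(queue)):
            val = queue.popleft()
            for nxt in adj.get(val, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        minutes += 1
    return minutes

# Example 1: root = [1,5,3,null,4,10,6,9,2], start = 3 -> 4
root = TreeNode(1,
                TreeNode(5, None, TreeNode(4, TreeNode(9), TreeNode(2))),
                TreeNode(3, TreeNode(10), TreeNode(6)))
print(amount_of_time(root, 3))  # 4
```

This trades the single-pass DFS for an explicit graph plus BFS, which some find easier to reason about at the cost of O(n) extra memory for the adjacency map.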

Given a string containing digits from `2-9` inclusive, return all possible letter combinations that the number could represent. Return the answer in **any order**.

A mapping of digits to letters (just like on the telephone buttons) is given below. Note that 1 does not map to any letters.

**Example 1:**

```
Input: digits = "23"
Output: ["ad","ae","af","bd","be","bf","cd","ce","cf"]
```

**Example 2:**

```
Input: digits = ""
Output: []
```

**Example 3:**

```
Input: digits = "2"
Output: ["a","b","c"]
```

**Constraints:**

- `0 <= digits.length <= 4`
- `digits[i]` is a digit in the range `['2', '9']`.

```python
class Solution:
    def letterCombinations(self, digits: str) -> List[str]:
        two = ["a", "b", "c"]
        three = ["d", "e", "f"]
        four = ["g", "h", "i"]
        five = ["j", "k", "l"]
        six = ["m", "n", "o"]
        sev = ["p", "q", "r", "s"]
        eit = ["t", "u", "v"]
        nin = ["w", "x", "y", "z"]
        hs = {2: two, 3: three, 4: four, 5: five,
              6: six, 7: sev, 8: eit, 9: nin}
        res = []
        if len(digits) < 1:
            return res
        elif len(digits) == 1:
            for i in hs[int(digits)]:
                res.append(i)
        elif len(digits) == 2:
            for i in hs[int(digits[0])]:
                for j in hs[int(digits[1])]:
                    res.append(i + j)
        elif len(digits) == 3:
            for i in hs[int(digits[0])]:
                for j in hs[int(digits[1])]:
                    for m in hs[int(digits[2])]:
                        res.append(i + j + m)
        elif len(digits) == 4:
            for i in hs[int(digits[0])]:
                for j in hs[int(digits[1])]:
                    for m in hs[int(digits[2])]:
                        for n in hs[int(digits[3])]:
                            res.append(i + j + m + n)
        return res
```
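The nested `elif` branches only work because the constraint caps `digits` at length 4. As a sketch of how the same idea generalizes to any length, `itertools.product` can build the cartesian product of the letter groups (an alternative, not the solution above):

```python
from itertools import product

# Telephone-keypad mapping, same as above but as strings.
DIGIT_LETTERS = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
                 "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}

def letter_combinations(digits):
    if not digits:
        return []
    # product() yields one tuple per combination, one letter per digit,
    # in the same order as the nested-loop version.
    groups = (DIGIT_LETTERS[d] for d in digits)
    return ["".join(combo) for combo in product(*groups)]

print(letter_combinations("23"))
# ['ad', 'ae', 'af', 'bd', 'be', 'bf', 'cd', 'ce', 'cf']
```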

The **n-queens puzzle** is the problem of placing `n` queens on an `n x n` chessboard such that no two queens attack each other.

Given an integer `n`, return *all distinct solutions to the **n-queens puzzle***. You may return the answer in **any order**.

Each solution contains a distinct board configuration of the n-queens' placement, where `'Q'` and `'.'` indicate a queen and an empty space, respectively.

**Example 1:**

```
Input: n = 4
Output: [[".Q..","...Q","Q...","..Q."],["..Q.","Q...","...Q",".Q.."]]
Explanation: There exist two distinct solutions to the 4-queens puzzle as shown above
```

**Example 2:**

```
Input: n = 1
Output: [["Q"]]
```

```python
class Solution:
    def solveNQueens(self, n: int) -> List[List[str]]:
        col = set()
        posDiag = set()  # (r + c)
        negDiag = set()  # (r - c)
        res = []
        board = [["."] * n for i in range(n)]

        def backtrack(r):
            if r == n:
                copy = ["".join(row) for row in board]
                res.append(copy)
                return
            for c in range(n):
                if c in col or (r + c) in posDiag or (r - c) in negDiag:
                    continue
                col.add(c)
                posDiag.add(r + c)
                negDiag.add(r - c)
                board[r][c] = "Q"
                backtrack(r + 1)
                col.remove(c)
                posDiag.remove(r + c)
                negDiag.remove(r - c)
                board[r][c] = "."

        backtrack(0)
        return res
```

```python
def bfs(graph, visited):
    queue = []
    queue.append("a")
    visited["a"] = True
    while queue:
        temp = queue.pop(0)
        print(temp)
        for item in graph[temp]:
            if not visited.get(item):
                queue.append(item)
                visited[item] = True

node = ["a", "b", "c", "d", "e", "g", "f"]
edges = [["a", "b"], ["b", "c"], ["b", "e"], ["c", "d"],
         ["c", "e"], ["e", "g"], ["f", "f"]]
graph = {}
visited = {}
for i in node:
    graph[i] = []
for (u, v) in edges:
    graph[u].append(v)
    graph[v].append(u)
bfs(graph, visited)
```
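One caveat: `queue.pop(0)` on a Python list is O(n) per dequeue because every remaining element shifts left. `collections.deque` makes each dequeue O(1). A sketch of the same traversal with that swap (and returning the visit order instead of printing):

```python
from collections import deque

def bfs(graph, start="a"):
    visited = {start}
    queue = deque([start])
    order = []
    while queue:
        node = queue.popleft()          # O(1), unlike list.pop(0)
        order.append(node)
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)
    return order

# Same adjacency the edge-building loop above produces.
graph = {"a": ["b"], "b": ["a", "c", "e"], "c": ["b", "d", "e"],
         "d": ["c"], "e": ["b", "c", "g"], "g": ["e"], "f": ["f", "f"]}
print(bfs(graph))  # ['a', 'b', 'c', 'e', 'd', 'g']
```

Note that `"f"` is unreachable from `"a"`, so it never appears in the visit order, exactly as in the original version.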

```python
from sys import maxsize
from itertools import permutations

V = 4

def tsp(graph, s):
    # All vertices other than the start
    vertex = []
    for i in range(V):
        if i != s:
            vertex.append(i)
    min_path = maxsize
    next_permutation = permutations(vertex)
    for i in next_permutation:
        # Cost of visiting the cities in this order, then returning to s
        current_pathweight = 0
        k = s
        for j in i:
            current_pathweight += graph[k][j]
            k = j
        current_pathweight += graph[k][s]
        min_path = min(min_path, current_pathweight)
    return ("the cost is ", min_path)

graph = [[0, 10, 15, 20], [10, 0, 35, 25],
         [15, 35, 0, 30], [20, 25, 30, 0]]
s = 0
print(tsp(graph, s))
```
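Checking every permutation is O(n!), which is fine for four cities but explodes quickly. As a sketch of a faster alternative, the Held-Karp dynamic program solves the same instance in O(n² · 2ⁿ) time by memoizing the cheapest path into each (visited-set, last-city) state:

```python
from itertools import combinations

def held_karp(graph, s=0):
    """Held-Karp DP: dp[(mask, j)] = cheapest path from s that visits
    exactly the cities in bitmask `mask` and ends at city j."""
    n = len(graph)
    others = [i for i in range(n) if i != s]
    # Base case: go straight from s to each other city.
    dp = {(1 << i, i): graph[s][i] for i in others}
    for size in range(2, n):
        for subset in combinations(others, size):
            mask = sum(1 << i for i in subset)
            for j in subset:
                prev = mask ^ (1 << j)  # same subset without j
                dp[(mask, j)] = min(dp[(prev, k)] + graph[k][j]
                                    for k in subset if k != j)
    full = sum(1 << i for i in others)
    # Close the tour by returning to the start city.
    return min(dp[(full, j)] + graph[j][s] for j in others)

graph = [[0, 10, 15, 20], [10, 0, 35, 25],
         [15, 35, 0, 30], [20, 25, 30, 0]]
print(held_karp(graph))  # 80, matching the permutation version
```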


Time complexity: O(N + h); space complexity: O(N + h).

```python
import heapq

def astar(start, goal, graph, heuristic):
    """
    A* algorithm implementation.

    Args:
        start: Start node.
        goal: Goal node.
        graph: Graph represented as a dictionary of dictionaries.
        heuristic: Heuristic function.

    Returns:
        Path from start to goal.
    """
    frontier = [(0, start)]
    came_from = {}
    cost_so_far = {}
    came_from[start] = None
    cost_so_far[start] = 0

    # Define the heuristic values for each node in an array
    heuristic_values = [14, 12, 11, 6, 4, 11, 0]

    while frontier:
        current = heapq.heappop(frontier)[1]
        if current == goal:
            break
        for next_node in graph[current]:
            new_cost = cost_so_far[current] + graph[current][next_node]
            # Use the heuristic value from the array
            heuristic_value = heuristic_values[ord(next_node) - ord('A')]
            if next_node not in cost_so_far or new_cost < cost_so_far[next_node]:
                cost_so_far[next_node] = new_cost
                priority = new_cost + heuristic_value
                heapq.heappush(frontier, (priority, next_node))
                came_from[next_node] = current

    # Walk the came_from chain backwards to recover the path
    path = []
    current = goal
    while current != start:
        path.append(current)
        current = came_from[current]
    path.append(start)
    path.reverse()
    return path

graph = {
    'S': {'B': 4, 'C': 3},
    'B': {'E': 12, 'F': 5},
    'C': {'E': 10, 'D': 7},
    'D': {'E': 4},
    'E': {'G': 5},
    'F': {'G': 16},
    'G': {}
}

def heuristic(a, b):
    return abs(ord(a) - ord(b))

path = astar('S', 'G', graph, heuristic)
print(path)
```

```python
class Graph:
    def __init__(self, adjacency_list):
        self.adjacency_list = adjacency_list

    def get_neighbors(self, v):
        return self.adjacency_list[v]

    def h(self, n):
        H = {'S': 14, 'C': 11, 'B': 12, 'F': 11, 'D': 6, 'E': 4, 'G': 0}
        return H[n]

    def a_star_algorithm(self, start_node, stop_node):
        open_list = set([start_node])
        closed_list = set([])
        g = {}
        g[start_node] = 0
        parents = {}
        parents[start_node] = start_node

        while len(open_list) > 0:
            n = None
            # Pick the open node with the lowest f = g + h
            for v in open_list:
                if n == None or g[v] + self.h(v) < g[n] + self.h(n):
                    n = v

            if n == None:
                print('Path does not exist!')
                return None

            # If the current node is the stop_node,
            # then we begin reconstructing the path from it to the start_node
            if n == stop_node:
                reconst_path = []
                while parents[n] != n:
                    reconst_path.append(n)
                    n = parents[n]
                reconst_path.append(start_node)
                reconst_path.reverse()
                print('Path found: {}'.format(reconst_path))
                return reconst_path

            # For all neighbors of the current node do
            for (m, weight) in self.get_neighbors(n):
                # If the current node isn't in both open_list and closed_list,
                # add it to open_list and note n as its parent
                if m not in open_list and m not in closed_list:
                    open_list.add(m)
                    parents[m] = n
                    g[m] = g[n] + weight
                # Otherwise, check if it's quicker to first visit n, then m,
                # and if it is, update parent data and g data;
                # and if the node was in the closed_list, move it to open_list
                else:
                    if g[m] > g[n] + weight:
                        g[m] = g[n] + weight
                        parents[m] = n
                        if m in closed_list:
                            closed_list.remove(m)
                            open_list.add(m)

            # Remove n from the open_list, and add it to closed_list
            # because all of its neighbors were inspected
            open_list.remove(n)
            closed_list.add(n)

        print('Path does not exist!')
        return None

adjacency_list = {
    'S': [('B', 4), ('C', 3)],
    'B': [('F', 5), ('E', 12)],
    'C': [('D', 7), ('E', 10)],
    'D': [('E', 2)],
    'E': [('G', 5)],
    'F': [('G', 16)],
}
graph1 = Graph(adjacency_list)
graph1.a_star_algorithm('S', 'G')
```

Steps to deploy a Node.js app to any cloud using PM2, NGINX as a reverse proxy and an SSL from LetsEncrypt

In this tutorial I'm using an Azure VM to accomplish the task and a domain from Namecheap, which is free for any GitHub-verified student account. If you have an .edu account, you can also access Azure's $100 sponsorship program without any credit card.

I will be using the root user, but I would suggest creating a new user.

```sh
curl -sL https://deb.nodesource.com/setup_12.x | sudo -E bash -
sudo apt install nodejs
node --version
```

There are a few ways to get your files on to the server, I would suggest using Git

`git clone yourproject.git`

```sh
cd yourproject
npm install
npm start   # (or whatever your start command is)

# stop app
ctrl+C
```

```sh
sudo npm i pm2 -g
pm2 start app   # (or whatever your file name is)

# Other pm2 commands
pm2 show app
pm2 status
pm2 restart app
pm2 stop app
pm2 logs    # (Show log stream)
pm2 flush   # (Clear logs)

# To make sure app starts when rebooted
pm2 startup ubuntu
```

```sh
sudo ufw enable
sudo ufw status
sudo ufw allow ssh     # (Port 22)
sudo ufw allow http    # (Port 80)
sudo ufw allow https   # (Port 443)
```

In case you can't access the URL, make sure to allow an inbound rule for port 80 in Azure's network security group.

```sh
sudo apt install nginx
sudo nano /etc/nginx/sites-available/default
```

Add the following to the location part of the server block

```nginx
server_name yourdomain.com www.yourdomain.com;

location / {
    proxy_pass http://localhost:5000; # whatever port your app runs on
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}
```

```sh
# Check NGINX config
sudo nginx -t

# Restart NGINX
sudo service nginx restart
```

In Namecheap go to your DNS section and add the following

Add two A records, one for @ and one for www, pointing to the public IP address of your VM.

Any trusted domain provider will work.

It may take a bit for the DNS changes to propagate.

- Add SSL with LetsEncrypt

```sh
sudo apt update
sudo apt-get install python3-certbot-nginx
sudo certbot --nginx -d yourdomain.com -d www.yourdomain.com

# Only valid for 90 days, test the renewal process with
certbot renew --dry-run
```

Now visit https://yourdomain.com and you should see your Node app

The **pair sum** of a pair `(a,b)` is equal to `a + b`. The **maximum pair sum** is the largest **pair sum** in a list of pairs.

- For example, if we have pairs `(1,5)`, `(2,3)`, and `(4,4)`, the **maximum pair sum** would be `max(1+5, 2+3, 4+4) = max(6, 5, 8) = 8`.

Given an array `nums` of **even** length `n`, pair up the elements of `nums` into `n / 2` pairs such that:

- Each element of `nums` is in **exactly one** pair, and
- The **maximum pair sum** is **minimized**.

Return *the minimized maximum pair sum after optimally pairing up the elements*.

**Example 1:**

```
Input: nums = [3,5,2,3]
Output: 7
Explanation: The elements can be paired up into pairs (3,3) and (5,2).
The maximum pair sum is max(3+3, 5+2) = max(6, 7) = 7.
```

**Example 2:**

```
Input: nums = [3,5,4,2,4,6]
Output: 8
Explanation: The elements can be paired up into pairs (3,5), (4,4), and (6,2).
The maximum pair sum is max(3+5, 4+4, 6+2) = max(8, 8, 8) = 8.
```

**Constraints:**

- `n == nums.length`
- `2 <= n <= 10<sup>5</sup>`
- `n` is **even**.
- `1 <= nums[i] <= 10<sup>5</sup>`

First sort the array, then use the two-pointer technique.

Solution

```python
class Solution:
    def minPairSum(self, nums: List[int]) -> int:
        nums.sort()
        left = 0
        right = len(nums) - 1
        maxvalue = 0
        # Pair the smallest remaining with the largest remaining
        while left < right:
            curval = nums[left] + nums[right]
            maxvalue = max(curval, maxvalue)
            left += 1
            right -= 1
        return maxvalue
```

Time Complexity O(N log N)

Space Complexity O(1)

You are given an array of positive integers `arr`. Perform some operations (possibly none) on `arr` so that it satisfies these conditions:

- The value of the **first** element in `arr` must be `1`.
- The absolute difference between any 2 adjacent elements must be **less than or equal to** `1`. In other words, `abs(arr[i] - arr[i - 1]) <= 1` for each `i` where `1 <= i < arr.length` (**0-indexed**). `abs(x)` is the absolute value of `x`.

There are 2 types of operations that you can perform any number of times:

- **Decrease** the value of any element of `arr` to a **smaller positive integer**.
- **Rearrange** the elements of `arr` to be in any order.

Return *the **maximum** possible value of an element in `arr` after performing the operations to satisfy the conditions*.

**Example 1:**

```
Input: arr = [2,2,1,2,1]
Output: 2
Explanation: We can satisfy the conditions by rearranging arr so it becomes [1,2,2,2,1].
The largest element in arr is 2.
```

**Example 2:**

```
Input: arr = [100,1,1000]
Output: 3
Explanation: One possible way to satisfy the conditions is by doing the following:
1. Rearrange arr so it becomes [1,100,1000].
2. Decrease the value of the second element to 2.
3. Decrease the value of the third element to 3.
Now arr = [1,2,3], which satisfies the conditions.
The largest element in arr is 3.
```

**Example 3:**

```
Input: arr = [1,2,3,4,5]
Output: 5
Explanation: The array already satisfies the conditions, and the largest element is 5.
```

**Constraints:**

- `1 <= arr.length <= 10<sup>5</sup>`
- `1 <= arr[i] <= 10<sup>9</sup>`

- First, sort the array

`arr.sort()`

- The value of the **first** element in `arr` must be `1`.

```python
max_val = 1
for i in range(1, len(arr)):
    if arr[i] > max_val:
        max_val += 1
```

- And then just return `max_val`

```python
class Solution:
    def maximumElementAfterDecrementingAndRearranging(self, arr: List[int]) -> int:
        arr.sort()
        max_val = 1
        for i in range(1, len(arr)):
            if arr[i] > max_val:
                max_val += 1
        return max_val
```

You are given an array of `k` linked-lists `lists`, each linked-list sorted in ascending order.

*Merge all the linked-lists into one sorted linked-list and return it.*

**Example 1:**

```
Input: lists = [[1,4,5],[1,3,4],[2,6]]
Output: [1,1,2,3,4,4,5,6]
Explanation: The linked-lists are:
[
  1->4->5,
  1->3->4,
  2->6
]
merging them into one sorted list:
1->1->2->3->4->4->5->6
```

**Example 2:**

```
Input: lists = []
Output: []
```

**Example 3:**

```
Input: lists = [[]]
Output: []
```

**Constraints:**

- `k == lists.length`
- `0 <= k <= 10<sup>4</sup>`
- `0 <= lists[i].length <= 500`
- `-10<sup>4</sup> <= lists[i][j] <= 10<sup>4</sup>`
- `lists[i]` is sorted in **ascending order**.
- The sum of `lists[i].length` will not exceed `10<sup>4</sup>`.

Looking at the question, we have to merge and sort the linked lists (it helps to think of the input as a 2D array, where each linked list is one row, even though it isn't literally an array).

So first we make an empty linked list, then collect every value from the input lists, since each one is a separate linked list just like a row in a 2D array.

We keep two references into the new list: one to iterate and add values, and the other to return the linked list from its start. So `head.next` acts as the beginning of the resulting linked list.

```python
class Solution:
    def mergeKLists(self, lists: List[Optional[ListNode]]) -> Optional[ListNode]:
        head = tmp = ListNode()
        res = []
        # Collect every value from every list
        for l in lists:
            while l != None:
                res.append(l.val)
                l = l.next
        # Rebuild one sorted linked list
        for val in sorted(res):
            tmp.next = ListNode()
            tmp = tmp.next
            tmp.val = val
        return head.next
```
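Collect-and-sort runs in O(N log N). As an alternative sketch, keeping a heap of at most k list heads brings the merge down to O(N log k); the `ListNode` here is a minimal stand-in for LeetCode's class, and `from_list` is a hypothetical helper just for the demo:

```python
import heapq

class ListNode:
    def __init__(self, val=0, next=None):
        self.val, self.next = val, next

def merge_k_lists(lists):
    # Push (value, tie-breaker, node) so nodes are never compared directly.
    heap = [(node.val, i, node) for i, node in enumerate(lists) if node]
    heapq.heapify(heap)
    head = tail = ListNode()
    while heap:
        val, i, node = heapq.heappop(heap)
        tail.next = node
        tail = node
        if node.next:
            heapq.heappush(heap, (node.next.val, i, node.next))
    return head.next

def from_list(vals):
    """Hypothetical helper: build a linked list from a Python list."""
    head = tail = ListNode()
    for v in vals:
        tail.next = ListNode(v)
        tail = tail.next
    return head.next

merged = merge_k_lists([from_list([1, 4, 5]), from_list([1, 3, 4]), from_list([2, 6])])
out = []
while merged:
    out.append(merged.val)
    merged = merged.next
print(out)  # [1, 1, 2, 3, 4, 4, 5, 6]
```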

Time Complexity O(NlogN) as there is sorting :)

Space Complexity O(N)

Given an integer array `arr` of **distinct** integers and an integer `k`.

A game will be played between the first two elements of the array (i.e. `arr[0]` and `arr[1]`). In each round of the game, we compare `arr[0]` with `arr[1]`; the larger integer wins and remains at position `0`, and the smaller integer moves to the end of the array. The game ends when an integer wins `k` consecutive rounds.

Return *the integer which will win the game*.

It is **guaranteed** that there will be a winner of the game.

**Example 1:**

```
Input: arr = [2,1,3,5,4,6,7], k = 2
Output: 5
Explanation: Let's see the rounds of the game:
Round |       arr       | winner | win_count
  1   | [2,1,3,5,4,6,7] |   2    |    1
  2   | [2,3,5,4,6,7,1] |   3    |    1
  3   | [3,5,4,6,7,1,2] |   5    |    1
  4   | [5,4,6,7,1,2,3] |   5    |    2
So we can see that 4 rounds will be played and 5 is the winner because it wins 2 consecutive games.
```

**Example 2:**

```
Input: arr = [3,2,1], k = 10
Output: 3
Explanation: 3 will win the first 10 rounds consecutively.
```

**Constraints:**

- `2 <= arr.length <= 10<sup>5</sup>`
- `1 <= arr[i] <= 10<sup>6</sup>`
- `arr` contains **distinct** integers.
- `1 <= k <= 10<sup>9</sup>`

As per the question, we have to find the winner, where `k` is the number of consecutive victories required. A victory is decided by a comparison: when the value at index 0 is greater than the value at index 1, the smaller one moves to the end of the array and all the remaining elements shift toward the front.

We use a hashmap where the key is the array value and the value is the count of how many times it has won.

**But the bad part about this approach is that the time and space complexity are terrible**, so we get a runtime error on LeetCode.

```python
    def getWinner(self, arr: List[int], k: int) -> int:
        win = {}
        cnt = 0
        if len(arr) < 2:
            return
        while cnt <= k:
            if win.get(arr[0]) == k:
                return arr[0]
            if arr[0] > arr[1]:
                temp = arr[1]
                win[arr[0]] = win.get(arr[0], 0) + 1
                cnt = max(cnt, win.get(arr[0]))
                arr.remove(temp)
                arr.append(temp)
            elif arr[0] < arr[1]:
                temp = arr[0]
                win[arr[1]] = win.get(arr[1], 0) + 1
                cnt = max(cnt, win.get(arr[1]))
                arr.remove(temp)
                arr.append(temp)
```

If you've understood the question, then I don't need to explain the code below:

```python
def winner(arr, k):
    if k == 1:
        return max(arr[0], arr[1])
    if k >= len(arr):
        return max(arr)
    current_winner = arr[0]
    consecutive_wins = 0
    for i in range(1, len(arr)):
        if current_winner > arr[i]:
            consecutive_wins += 1
        else:
            current_winner = arr[i]
            consecutive_wins = 1
        if consecutive_wins == k:
            return current_winner
    return current_winner
```

Meeting times are given as ranges of 30-minute slots past 9:00 am. For example:

```
(2, 3)  # Meeting from 10:00 to 10:30 am
(6, 9)  # Meeting from 12:00 to 1:30 pm
```

Write a function merge_ranges() that takes a list of multiple meeting time ranges and returns a list of condensed ranges.

For example, given:

`[(0, 1), (3, 5), (4, 8), (10, 12), (9, 10)]`

your function would return:

`[(0, 1), (3, 8), (9, 12)]`

**Do not assume the meetings are in order.** The meeting times are coming from multiple teams.

**Write a solution that's efficient even when we can't put a nice upper bound on the numbers representing our time ranges.** Here we've simplified our times down to the number of 30-minute slots past 9:00 am. But we want the function to work even for very large numbers, like Unix timestamps. In any case, the spirit of the challenge is to merge meetings where start_time and end_time don't have an upper bound.

Look at this case:

`[(1, 2), (2, 3)]`

These meetings should probably be merged, although they don't exactly "overlap"; they just "touch." Does your function do this?

Look at this case:

`[(1, 5), (2, 3)]`

Notice that although the second meeting starts later, it ends before the first meeting ends. Does your function correctly handle the case where a later meeting is "subsumed by" an earlier meeting?

Look at this case:

`[(1, 10), (2, 6), (3, 5), (7, 9)]`

Here *all* of our meetings should be merged together into just (1, 10). We need to keep in mind that after we've merged the first two, we're not done with the result: the result of that merge *may itself need to be merged into other meetings as well*.

Make sure that your function won't "leave out" the *last* meeting.

We can do this in *O*(*n*lg*n*) time.

What if we only had two ranges? Let's take:

`[(1, 3), (2, 4)]`

These meetings clearly overlap, so we should merge them to give:

`[(1, 4)]`

But how did we know that these meetings overlap?

We could tell the meetings overlapped because the *end time* of the first one was after the *start time* of the second one! But our ideas of "first" and "second" are important here: this only works after we ensure that we treat the meeting that *starts earlier* as the "first" one.

How would we formalize this as an algorithm? **Be sure to consider these edge cases:**

The end time of the first meeting and the start time of the second meeting are equal. For example: [(1, 2), (2, 3)]

The second meeting ends before the first meeting ends. For example: [(1, 5), (2, 3)]

Here's a formal algorithm:

1. We treat the meeting with the earlier start time as "first," and the other as "second."
2. If the end time of the first meeting is *equal to or greater than* the start time of the second meeting, we merge the two meetings into one time range. The resulting time range's start time is the first meeting's start, and its end time is *the later of* the two meetings' end times.
3. Else, we leave them separate.

So, we could compare *every* meeting to *every other* meeting in this way, merging them or leaving them separate.

Comparing *all pairs* of meetings would take *O*(*n*<sup>2</sup>) time. We can do better!

If we're going to beat *O*(*n*<sup>2</sup>) time, maybe we're going to get *O*(*n*) time? Is there a way to do this in one pass?

It'd be great if, for each meeting, we could just try to merge it with the *next* meeting. But that's definitely not sufficient, because the ordering of our meetings is random. There might be a non-next meeting that the current meeting could be merged with.

What if we sorted our list of meetings by start time?

Then any meetings that could be merged would always be adjacent!

So we could sort our meetings, then walk through the sorted list and see if each meeting can be merged with the one after it.

Sorting takes *O*(*n*lg*n*) time in the worst case. If we can then do the merging in one pass, that's another *O*(*n*) time, for *O*(*n*lg*n*) overall. That's not as good as *O*(*n*), but it's better than *O*(*n*<sup>2</sup>).

First, we sort our input list of meetings by start time so any meetings that might need to be merged are now next to each other.

Then we walk through our sorted meetings from left to right. At each step, either:

- We *can* merge the current meeting with the previous one, so we do.
- We *can't* merge the current meeting with the previous one, so we know the previous meeting can't be merged with any future meetings, and we throw the current meeting into merged_meetings.

```python
def merge_ranges(meetings):
    # Sort by start time
    sorted_meetings = sorted(meetings)

    # Initialize merged_meetings with the earliest meeting
    merged_meetings = [sorted_meetings[0]]

    for current_meeting_start, current_meeting_end in sorted_meetings[1:]:
        last_merged_meeting_start, last_merged_meeting_end = merged_meetings[-1]

        # If the current meeting overlaps with the last merged meeting, use the
        # later end time of the two
        if (current_meeting_start <= last_merged_meeting_end):
            merged_meetings[-1] = (last_merged_meeting_start,
                                   max(last_merged_meeting_end,
                                       current_meeting_end))
        else:
            # Add the current meeting since it doesn't overlap
            merged_meetings.append((current_meeting_start, current_meeting_end))

    return merged_meetings
```

*O*(*n*lg*n*) time and *O*(*n*) space.

Even though we only walk through our list of meetings once to merge them, we *sort* all the meetings first, giving us a runtime of *O*(*n*lg*n*). It's worth noting that *if* our input were sorted, we could skip the sort and do this in *O*(*n*) time!

We create a new list of merged meeting times. In the worst case, none of the meetings overlap, giving us a list identical to the input list. Thus we have a worst-case space cost of *O*(*n*).

- What if we *did* have an upper bound on the input values? Could we improve our runtime? Would it cost us memory?
- Could we do this "in place" on the input list and save some space? What are the pros and cons of doing this in place?

This one arguably uses a greedy approach as well, except this time we had to *sort* the list first.

How did we figure that out?

We started off trying to solve the problem in one pass, and we noticed that it wouldn't work. We then noticed the *reason* it wouldn't work: to see if a given meeting can be merged, we have to look at *all* the other meetings! That's because the order of the meetings is random.

*That's* what got us thinking: what if the list *were* sorted? We saw that *then* a greedy approach would work. We had to spend *O*(*n*lg*n*) time on sorting the list, but it was better than our initial brute force approach, which cost us *O*(*n*<sup>2</sup>) time!

Users on longer flights like to start a second movie right when their first one ends, but they complain that the plane usually lands before they can see the ending. **So you're building a feature for choosing two movies whose total runtimes will equal the exact flight length.**

Write a function that takes an integer flight_length (in minutes) and a list of integers movie_lengths (in minutes) and returns a boolean indicating whether there are two numbers in movie_lengths whose sum equals flight_length.

When building your function:

- Assume your users will watch *exactly* two movies
- Don't make your users watch the same movie twice
- Optimize for runtime over memory

We can do this in *O*(*n*) time, where *n* is the length of movie_lengths.

Remember: your users shouldn't watch the same movie twice. **Are you sure your function won't give a false positive if the list has one element that is half flight_length**?

**How would we solve this by hand?** We know our two movie lengths need to sum to flight_length. So for a given first_movie_length, we need a second_movie_length that equals flight_length - first_movie_length.

To do this by hand we might go through movie_lengths from beginning to end, treating each item as first_movie_length, and for each of those check if there's a second_movie_length equal to flight_length - first_movie_length.

**How would we implement this in code?** We could nest two loops (the outer choosing first_movie_length, the inner choosing second_movie_length). That'd give us a runtime of *O*(*n*<sup>2</sup>). We can do better.

To bring our runtime down we'll probably need to replace that inner loop (the one that looks for a matching second_movie_length) with something faster.

We could sort the movie_lengths first; then we could use binary search to find second_movie_length in *O*(lg*n*) time instead of *O*(*n*) time. But sorting would cost *O*(*n*lg*n*), and we can do even better than that.

**Could we check for the existence of our second_movie_length in constant time**?

What data structure gives us convenient constant-time lookups?

A set!

So we could throw all of our movie_lengths into a set first, in *O*(*n*) time. *Then* we could loop through our possible first_movie_lengths and replace our inner loop with a simple check in our set. This'll give us *O*(*n*) runtime overall!

Of course, we need to add some logic to make sure we're not showing users the same movie twice...

But first, we can tighten this up a bit. Instead of two sequential loops, can we do it all in one loop? (Done carefully, this will give us protection from showing the same movie twice as well.)

We make one pass through movie_lengths, treating each item as the first_movie_length. At each iteration, we:

1. See if there's a matching_second_movie_length we've seen already (stored in our movie_lengths_seen set) that is equal to flight_length - first_movie_length. If there is, we short-circuit and return True.
2. Keep our movie_lengths_seen set up to date by throwing in the current first_movie_length.

```python
def can_two_movies_fill_flight(movie_lengths, flight_length):
    # Movie lengths we've seen so far
    movie_lengths_seen = set()

    for first_movie_length in movie_lengths:
        matching_second_movie_length = flight_length - first_movie_length
        if matching_second_movie_length in movie_lengths_seen:
            return True
        movie_lengths_seen.add(first_movie_length)

    # We never found a match, so return False
    return False
```

We know users won't watch the same movie twice because we check movie_lengths_seen for matching_second_movie_length *before* we've put first_movie_length in it!

*O*(*n*) time, and *O*(*n*) space. Note that while optimizing runtime we added a bit of space cost.
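To sanity-check the duplicate-movie edge case, here's the function again (repeated so this snippet runs standalone) with a few assertions, including a list holding a single element that is half of flight_length:

```python
def can_two_movies_fill_flight(movie_lengths, flight_length):
    movie_lengths_seen = set()
    for first_movie_length in movie_lengths:
        matching_second_movie_length = flight_length - first_movie_length
        if matching_second_movie_length in movie_lengths_seen:
            return True
        movie_lengths_seen.add(first_movie_length)
    return False

# A single 60-minute movie must NOT fill a 120-minute flight by itself
assert can_two_movies_fill_flight([60], 120) == False
# ...but two separate 60-minute movies can
assert can_two_movies_fill_flight([60, 60], 120) == True
assert can_two_movies_fill_flight([60, 90, 30], 120) == True
```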

- What if we wanted the movie lengths to sum to something *close* to the flight length (say, within 20 minutes)?
- What if we wanted to fill the flight length as nicely as possible with *any* number of movies (not just 2)?
- What if we knew that movie_lengths was *sorted*? Could we save some space and/or time?

The trick was to use a set to access our movies *by length*, in *O*(1) time.

**Using hash-based data structures, like dictionaries or sets, is so common in coding challenge solutions, it should always be your first thought.** Always ask yourself, right from the start: "Can I save time by using a dictionary?"
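As a quick illustration of that habit (my own example, not from the original text): replacing a nested *O*(*n*²) scan with a set check.

```python
def has_duplicate(items):
    # O(n) duplicate check: a set gives O(1) average-time membership tests,
    # replacing the O(n^2) "compare every pair" approach
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False
```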

**Big O notation is the language we use for talking about how long an algorithm takes to run**. It's how we compare the efficiency of different approaches to a problem.

It's like math except it's an **awesome, not-boring kind of math** where you get to wave your hands through the details and just focus on what's *basically* happening.

With big O notation we express the runtime in terms of (brace yourself) *how quickly it grows relative to the input, as the input gets arbitrarily large*.

Let's break that down:

- **how quickly the runtime grows**: It's hard to pin down the *exact runtime* of an algorithm. It depends on the speed of the processor, what else the computer is running, etc. So instead of talking about the runtime directly, we use big O notation to talk about *how quickly the runtime grows*.
- **relative to the input**: If we were measuring our runtime directly, we could express our speed in seconds. Since we're measuring *how quickly our runtime grows*, we need to express our speed in terms of... something else. With big O notation, we use the size of the input, which we call "*n*." So we can say things like the runtime grows "on the order of the size of the input" (*O*(*n*)) or "on the order of the square of the size of the input" (*O*(*n*²)).
- **as the input gets arbitrarily large**: Our algorithm may have steps that seem expensive when *n* is small but are eclipsed eventually by other steps as *n* gets huge. For big O analysis, we care most about the stuff that grows fastest as the input grows, because everything else is quickly eclipsed as *n* gets very large. (If you know what an asymptote is, you might see why "big O analysis" is sometimes called "asymptotic analysis.")

If this seems abstract so far, that's because it is. Let's look at some examples.

```python
def print_first_item(items):
    print(items[0])
```

**This function runs in O(1) time (or "constant time") relative to its input**. The input list could be 1 item or 1,000 items, but this function would still just require one "step."

```python
def print_all_items(items):
    for item in items:
        print(item)
```

**This function runs in O(n) time (or "linear time"), where n is the number of items in the list**. If the list has 10 items, we have to print 10 times. If it has 1,000 items, we have to print 1,000 times.

```python
def print_all_possible_ordered_pairs(items):
    for first_item in items:
        for second_item in items:
            print(first_item, second_item)
```

Here we're nesting two loops. If our list has *n* items, our outer loop runs *n* times and our inner loop runs *n* times *for each iteration of the outer loop*, giving us *n*² total prints. Thus **this function runs in O(n²) time (or "quadratic time")**. If the list has 10 items, we have to print 100 times. If it has 1,000 items, we have to print 1,000,000 times.

Both of these functions have *O*(*n*) runtime, even though one takes an integer as its input and the other takes a list:

```python
def say_hi_n_times(n):
    for time in range(n):
        print("hi")

def print_all_items(items):
    for item in items:
        print(item)
```

So sometimes *n* is an *actual number* that's an input to our function, and other times *n* is the *number of items* in an input list (or an input map, or an input object, etc.).

This is why big O notation *rules*. When you're calculating the big O complexity of something, you just throw out the constants. So like:

```python
def print_all_items_twice(items):
    for item in items:
        print(item)

    # Once more, with feeling
    for item in items:
        print(item)
```

This is *O*(2*n*), which we just call *O*(*n*).

```python
def print_first_item_then_first_half_then_say_hi_100_times(items):
    print(items[0])

    middle_index = len(items) // 2
    index = 0
    while index < middle_index:
        print(items[index])
        index += 1

    for time in range(100):
        print("hi")
```

This is *O*(1 + *n*/2 + 100), which we just call *O*(*n*).

Why can we get away with this? Remember, for big O notation we're looking at what happens **as n gets arbitrarily large**. As *n* gets really big, adding 100 or dividing by 2 has a decreasingly significant effect.

For example:

```python
def print_all_numbers_then_all_pair_sums(numbers):
    print("these are the numbers:")
    for number in numbers:
        print(number)

    print("and these are their sums:")
    for first_number in numbers:
        for second_number in numbers:
            print(first_number + second_number)
```

Here our runtime is *O*(*n* + *n*²), which we just call *O*(*n*²). Even if it was *O*(*n*²/2 + 100*n*), it would still be *O*(*n*²).

Similarly:

- *O*(*n*³ + 50*n*² + 10000) is *O*(*n*³)
- *O*((*n* + 30)(*n* + 5)) is *O*(*n*²)

Again, we can get away with this because the less significant terms quickly become, well, less significant as *n* gets big.

Often this "worst case" stipulation is implied. But sometimes you can impress your interviewer by saying it explicitly.

Sometimes the worst case runtime is significantly worse than the best case runtime:

```python
def contains(haystack, needle):
    # Does the haystack contain the needle?
    for item in haystack:
        if item == needle:
            return True

    return False
```

Here we might have 100 items in our haystack, but the first item might be the needle, in which case we would return in just 1 iteration of our loop.

In general we'd say this is *O*(*n*) runtime and the "worst case" part would be implied. But to be more specific we could say this is worst case *O*(*n*) and best case *O*(1) runtime. For some algorithms we can also make rigorous statements about the "average case" runtime.
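One way to make the best-case/worst-case gap concrete (a sketch of my own, not from the original): instrument the linear search to count loop iterations.

```python
def contains_with_count(haystack, needle):
    # Same linear search, but also report how many items we examined
    steps = 0
    for item in haystack:
        steps += 1
        if item == needle:
            return True, steps
    return False, steps

# Best case: the needle is the first item, so we stop after 1 step.
# Worst case: the needle is absent, so we examine all n items.
```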

Sometimes we want to optimize for using less memory instead of (or in addition to) using less time. Talking about memory cost (or "space complexity") is very similar to talking about time cost. We simply look at the total size (relative to the size of the input) of any new variables we're allocating.

This function takes *O*(1) space (we use a fixed number of variables):

```python
def say_hi_n_times(n):
    for time in range(n):
        print("hi")
```

This function takes *O*(*n*) space (the size of hi_list scales with the size of the input):

```python
def list_of_hi_n_times(n):
    hi_list = []
    for time in range(n):
        hi_list.append("hi")
    return hi_list
```

**Usually when we talk about space complexity, we're talking about additional space**, so we don't include space taken up by the inputs. For example, this function takes constant space even though the input has *n* items:

```python
def get_largest_item(items):
    largest = float('-inf')
    for item in items:
        if item > largest:
            largest = item
    return largest
```

**Sometimes there's a tradeoff between saving time and saving space**, so you have to decide which one you're optimizing for.
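A classic instance of this tradeoff (my illustration, not from the text): memoizing Fibonacci spends *O*(*n*) extra space on a cache to cut an exponential runtime down to roughly linear.

```python
def fib_memo(n, cache=None):
    # The cache trades O(n) space for a big time win:
    # without it, the naive recursion takes exponential time
    if cache is None:
        cache = {}
    if n < 2:
        return n
    if n not in cache:
        cache[n] = fib_memo(n - 1, cache) + fib_memo(n - 2, cache)
    return cache[n]
```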

You should make a habit of thinking about the time and space complexity of algorithms *as you design them*. Before long this'll become second nature, allowing you to see optimizations and potential performance issues right away.

Asymptotic analysis is a powerful tool, but wield it wisely.

Big O ignores constants, but **sometimes the constants matter**. If we have a script that takes 5 hours to run, an optimization that divides the runtime by 5 might not affect big O, but it still saves you 4 hours of waiting.

**Beware of premature optimization**. Sometimes optimizing time or space negatively impacts readability or coding time. For a young startup it might be more important to write code that's easy to ship quickly or easy to understand later, even if this means it's less time and space efficient than it could be.

But that doesn't mean startups don't care about big O analysis. A great engineer (startup or otherwise) knows how to strike the right *balance* between runtime, space, implementation time, maintainability, and readability.

**You should develop the skill to see time and space optimizations, as well as the wisdom to judge if those optimizations are worthwhile.**

**We'll walk through the example below to demonstrate a DFS traversal.**

I assume you have prior knowledge of how DFS works.

`node = ["a", "b", "c", "d", "e", "f", "g"]edges = [["a", "b"], ["b", "c"], ["b", "e"], ["c", "d"], ["c", "e"], ["e", "g"], ["f", "f"]]visited = set()graph = {}for i in node: graph[i] = []for (u, v) in edges: graph[u].append(v) graph[v].append(u)`

The code above generates the graph shown in the figure.

First, we are declaring the nodes i.e. all the nodes like `"a", "b", "c"...`

Then we declare the edges, i.e., all the connections between the nodes (remember, this is an undirected graph). Once the connections are defined, we push them all into the graph, as the two for loops show.

We use `visited = set()` to store the nodes we've already visited: once we visit a node, we need to keep a note of it, and a set is a good fit because it doesn't store duplicate values.

We use `graph = {}` (a dictionary in Python; a hashmap, in simple terms). A hashmap stores key-value pairs, and in our case we need to look up which nodes each node is connected to, so a dictionary fits naturally.

```python
for i in node:
    graph[i] = []
```

Here we create a key in the graph for each node (if this is confusing, consider reading up on hashmaps/dictionaries/objects). For each node, we create an empty list that will hold that node's connections:

`{'a': [], 'b': [], 'c': [], 'd': [], 'e': [], 'f': [], 'g': []}`

Once the keys are in place, we fill in the connections as the values. Who is connected to "a"? Only "b", right? Yes. Similarly, for every node we check which edges touch it:

```python
for (u, v) in edges:
    graph[u].append(v)
    graph[v].append(u)
```

```python
{'a': ['b'], 'b': ['a', 'c', 'e'], 'c': ['b', 'd', 'e'], 'd': ['c'],
 'e': ['b', 'c', 'g'], 'f': ['f', 'f'], 'g': ['e']}
```

First, I'll share the code and then explain each step

```python
def dfs(graph, node, visited):
    print(node)
    visited.add(node)
    for item in graph[node]:
        if item not in visited:
            dfs(graph, item, visited)

# Start the traversal from "a" (this call produces the output below)
dfs(graph, "a", visited)
```

We use recursion to iterate over all the nodes. First, we print the node we're currently at, i.e. `print(node)`.

After printing, we add the node to our `visited` set. Then we go over each item in `graph[node]` (in simple words, all the nodes connected to the current node; `graph["a"]` gives all the nodes connected to "a"). For each one we check: have we already visited it? If yes, we skip it; otherwise we pass it to the function recursively. This continues until the base case is satisfied: when no unvisited neighbors are left, our DFS traversal is done.

```
a
b
c
d
e
g
```

The "f" node was never printed/visited: its only edge is the self-loop ["f", "f"], so it isn't connected to any other node and can't be reached from "a" :)
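As a follow-up sketch (not part of the original walkthrough), the same traversal can be written iteratively with an explicit stack, which avoids Python's recursion limit on very deep graphs. It assumes the same `graph` dictionary built above:

```python
def dfs_iterative(graph, start):
    # Explicit-stack DFS; returns the visit order instead of printing
    visited = set()
    order = []
    stack = [start]
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        # Push neighbors in reverse so they pop in their original order,
        # matching the recursive version's output
        for neighbor in reversed(graph[node]):
            if neighbor not in visited:
                stack.append(neighbor)
    return order
```

Starting from "a", this visits the nodes in the same order as the recursive version, and "f" is again never reached.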