## Triangle

http://oj.leetcode.com/problems/triangle/

The problem statement is deceptive in mentioning a path from “top” to “bottom”. I was fooled at first and tried to compare all the paths from the top to the bottom. However, that is impractical, since there are O(2^(n-1)) such paths.
I think this is a typical demonstration of the difference between bottom-up and top-down dynamic programming. It becomes remarkably easier if we start from the bottom and work upward. The solution is very short once the idea is set.

```cpp
class Solution{
public:
    int minimumTotal(vector<vector<int>>& triangle){
        int N = triangle.size();
        if(!N)
            return 0;
        // r[j]: minimum path sum from row i, column j down to the bottom row.
        vector<int> r = triangle[N-1];
        for(int i = N-2; i >= 0; i--){
            for(int j = 0; j <= i; j++){
                r[j] = triangle[i][j] + min(r[j], r[j+1]);
            }
        }
        return r[0];
    }
};
```
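As a quick sanity check on the DP, here is the same row-folding written as a free function and traced on the sample triangle from the problem statement (the free-function form is my own scaffolding, not the original class):

```cpp
#include <vector>
#include <algorithm>
using namespace std;

// Fold the triangle from the bottom row upward.
// On [[2],[3,4],[6,5,7],[4,1,8,3]] the meaningful prefix of r evolves:
//   [4,1,8,3] -> [7,6,10] -> [9,10] -> [11]
int minimumTotal(const vector<vector<int>>& triangle){
    int N = triangle.size();
    if(N == 0) return 0;
    vector<int> r = triangle[N-1];
    for(int i = N-2; i >= 0; i--)
        for(int j = 0; j <= i; j++)
            r[j] = triangle[i][j] + min(r[j], r[j+1]);
    return r[0];    // cheapest top-to-bottom path
}
```

On the sample input the answer is 11 (the path 2 + 3 + 5 + 1).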

## Sum Root to Leaf Numbers

http://oj.leetcode.com/problems/sum-root-to-leaf-numbers/
Another problem from Leetcode for which I came up with two different solutions.
The first idea is to collect the paths from root to leaves first and compute the sum later. Those paths can be easily discovered by depth-first search, and each path is easily transformed into an integer.
The second solution is much more elegant. There is no need to keep track of any paths; we calculate the sum as we go deeper from the root to the leaves. It runs a lot faster than the previous solution (12 ms vs 36 ms).

Second solution

```cpp
class Solution {
public:
    int sumNumbers(TreeNode *root) {
        return sumNumbers(root, 0);
    }
    // curr: the number built along the path from the root to this node.
    int sumNumbers(TreeNode* root, int curr){
        if(!root)
            return 0;
        curr = curr*10 + root->val;
        if(!root->left && !root->right)
            return curr;
        return sumNumbers(root->left, curr) + sumNumbers(root->right, curr);
    }
};
```
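As a usage sketch of the accumulate-as-you-go recursion on a tiny hand-built tree (the minimal TreeNode and the demo helper below are my own scaffolding, not LeetCode's exact definition):

```cpp
struct TreeNode {
    int val;
    TreeNode *left, *right;
    TreeNode(int v) : val(v), left(nullptr), right(nullptr) {}
};

// Carry the number built so far; a leaf contributes the completed number.
int sumNumbers(TreeNode* root, int curr = 0){
    if(!root) return 0;
    curr = curr*10 + root->val;
    if(!root->left && !root->right) return curr;
    return sumNumbers(root->left, curr) + sumNumbers(root->right, curr);
}

// Demo: the tree 1 -> (2, 3) encodes the paths 12 and 13.
int demo(){
    TreeNode a(1), b(2), c(3);
    a.left = &b; a.right = &c;
    return sumNumbers(&a);   // 12 + 13 = 25
}
```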

First solution

```cpp
class Solution {
public:
    vector<int> pow10;
    int sumNumbers(TreeNode *root) {
        if(!root)
            return 0;
        // Precompute powers of ten (path length is assumed to be at most 10).
        pow10.resize(10);
        pow10[0] = 1;
        for(int i = 1; i < (int)pow10.size(); i++){
            pow10[i] = 10*pow10[i-1];
        }
        list<int> path{root->val};
        return sumNumbers(root, path);
    }

    int sumNumbers(TreeNode *root, list<int> &path){
        if(!root->left && !root->right){
            return calculateSum(path);
        }
        int sum = 0;
        if(root->left){
            path.push_back(root->left->val);
            sum += sumNumbers(root->left, path);
            path.pop_back();
        }
        if(root->right){
            path.push_back(root->right->val);
            sum += sumNumbers(root->right, path);
            path.pop_back();
        }
        return sum;
    }

    int calculateSum(list<int>& path){
        int sum = 0;
        int n = path.size();
        int i = 0;
        for(int p : path){
            sum += p*pow10[n-1-i];
            i++;
        }
        return sum;
    }
};
```

## Flatten Binary Tree

I solved this problem weeks ago, but today I had completely forgotten how I solved it. As I tried to simulate an interview, I started to think and finally came up with a solution, which is interestingly different from the previous one. That is what I want to write about for this problem: not because it is difficult or intriguing in itself, but because it encouraged me to think twice from a different perspective.

My first idea was that the problem can be solved by tweaking DFS with a preorder traversal: we just need to keep track of the current node and the one that should follow it.

```cpp
void dfs(TreeNode* root, TreeNode* next){
    if(!root){
        return;
    }
    if(!root->left && !root->right){
        root->right = next;   // a leaf links directly to its successor
        return;
    }
    else if(root->left){
        if(root->right){
            dfs(root->left, root->right);
            dfs(root->right, next);
        }
        else{
            dfs(root->left, next);
        }
        root->right = root->left;
        root->left = NULL;
    }
    else{
        dfs(root->right, next);
    }
}
```

The other solution I came up with today involves returning the result as a pair of pointers (here Pair is pair&lt;TreeNode*, TreeNode*&gt;: the head and tail of the flattened sublist).

```cpp
Pair flatten(TreeNode* root){
    if(!root){
        return Pair(NULL, NULL);
    }
    if(!root->left && !root->right){
        return Pair(root, root);
    }
    else if(root->left){
        Pair left = flatten(root->left);
        Pair right = flatten(root->right);
        root->left = NULL;
        root->right = left.first;
        left.second->right = right.first;
        if(right.second)
            return Pair(root, right.second);
        else
            return Pair(root, left.second);
    }
    else{
        Pair right = flatten(root->right);
        return Pair(root, right.second);
    }
}
```

The two solutions are essentially the same, but the point of practicing this problem is the thinking path, not memorization. There are tons of interesting problems on binary trees; the more I solve, the more intuitive my thinking becomes.
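To make the pair-returning idea self-contained, here is a runnable sketch with an explicit Pair alias and a small demo tree (the TreeNode definition, the alias, and the demo helper are my own scaffolding):

```cpp
#include <utility>

struct TreeNode {
    int val;
    TreeNode *left, *right;
    TreeNode(int v) : val(v), left(nullptr), right(nullptr) {}
};
using Pair = std::pair<TreeNode*, TreeNode*>;  // (head, tail) of flattened list

Pair flatten(TreeNode* root){
    if(!root) return Pair(nullptr, nullptr);
    if(!root->left && !root->right) return Pair(root, root);
    if(root->left){
        Pair left = flatten(root->left);
        Pair right = flatten(root->right);
        root->left = nullptr;
        root->right = left.first;             // splice left list after root
        left.second->right = right.first;     // then the right list after it
        return Pair(root, right.second ? right.second : left.second);
    }
    Pair right = flatten(root->right);        // only a right child: already next
    return Pair(root, right.second);
}

// Demo on the tree 1 -> (2 -> (3, 4), 5); preorder is 1 2 3 4 5,
// so reading the flattened right-chain as digits gives 12345.
int demo(){
    TreeNode n1(1), n2(2), n3(3), n4(4), n5(5);
    n1.left = &n2; n1.right = &n5;
    n2.left = &n3; n2.right = &n4;
    flatten(&n1);
    int out = 0;
    for(TreeNode* p = &n1; p; p = p->right) out = out*10 + p->val;
    return out;
}
```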

## Bellman-Ford’s Algorithm

Dijkstra’s algorithm is fast, but limited to non-negatively weighted graphs. It is also not applicable to distributed systems. Bellman-Ford’s algorithm solves both problems. It is asymptotically slower than Dijkstra’s algorithm, as its complexity is O(EV). However, it can find shortest paths in graphs with negative edge weights, and it distributes well. In fact, it is the foundation of modern Internet routing.

The code is extremely simple:

```cpp
vector<int> bellmanford(Graph &g, int start=0){
    vector<double> distance(g.V(), INF);
    vector<int> parent(g.V(), -1);
    distance[start] = 0;
    parent[start] = start;
    // Relax every edge V-1 times; after pass k, all shortest paths
    // using at most k edges are correct.
    for(int cnt = 1; cnt < g.V(); cnt++){
        for(int from = 0; from < g.V(); from++){
            for(Edge &e : g.adj(from)){   // g.adj() returns the edges leaving `from`
                double dis = e.weight() + distance[from];
                if(dis < distance[e.to()]){
                    distance[e.to()] = dis;
                    parent[e.to()] = from;
                }
            }
        }
    }
    return parent;
}
```
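Since the Graph API above is not shown, here is a hedged edge-list sketch of the same relaxation scheme, plus the standard extra pass that detects a reachable negative cycle (the Edge struct and the function name are my own scaffolding, not the original code):

```cpp
#include <vector>
#include <limits>
using namespace std;

struct Edge { int from, to; double weight; };
const double INF_D = numeric_limits<double>::max();

// Returns false if a negative cycle is reachable from `start`.
bool bellmanFord(int V, const vector<Edge>& edges, int start,
                 vector<double>& distance){
    distance.assign(V, INF_D);
    distance[start] = 0;
    for(int pass = 1; pass < V; pass++)                  // V-1 relaxation rounds
        for(const Edge& e : edges)
            if(distance[e.from] != INF_D &&
               distance[e.from] + e.weight < distance[e.to])
                distance[e.to] = distance[e.from] + e.weight;
    for(const Edge& e : edges)                           // one more pass: any
        if(distance[e.from] != INF_D &&                  // improvement now means
           distance[e.from] + e.weight < distance[e.to]) // a negative cycle
            return false;
    return true;
}
```

For example, with edges 0→1 (weight 4), 0→2 (weight 5) and 1→2 (weight -3), the shortest distance to vertex 2 is 1, via the negative edge.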

## Dijkstra’s Algorithm

2014/3/5: How stupid I was for not recognizing Dijkstra’s algorithm as a special case of A* without the heuristic cost. That view makes the algorithm much easier to understand and implement: we only need to treat vertices as states and put them into a priority_queue.

Furthermore, I did not know that Dijkstra’s needs to be implemented differently for dense graphs. The code below runs in O(E log V), which is O(V^2 log V) on a dense graph. However, there is another implementation with O(V^2) complexity.
Let s be the source and (S, V-S) a cut of the graph G. The pseudocode:

```
for v in V: distance[v] = INFINITY
S = {}
distance[s] = 0
while S != V:
    u := argmin of distance[u] over u in (V-S)
    S = S + {u}
    for each edge (u,v) with v in (V-S):
        if distance[u] + weight(u,v) < distance[v]:
            distance[v] = distance[u] + weight(u,v)
```

If we can determine whether a vertex u is in (V-S) in O(1) time, then the above code runs in O(V^2) time, as there are V iterations inside each of the V outer loops. This can be done with a bool array. This method is easier to understand than the code below.
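The O(V^2) variant described above, sketched on an adjacency matrix (my own scaffolding; the original post uses an edge-based Graph class instead):

```cpp
#include <vector>
#include <limits>
using namespace std;

const double INF_W = numeric_limits<double>::max();  // "no edge" / unreachable

// adj[u][v] = weight of edge u->v, or INF_W if absent. O(V^2) total:
// V rounds, each scanning all vertices for the closest unfinished one.
vector<double> dijkstraDense(const vector<vector<double>>& adj, int start){
    int V = adj.size();
    vector<double> distance(V, INF_W);
    vector<bool> inS(V, false);       // the bool array standing in for S
    distance[start] = 0;
    for(int round = 0; round < V; round++){
        int u = -1;
        for(int v = 0; v < V; v++)    // argmin of distance over V-S
            if(!inS[v] && (u == -1 || distance[v] < distance[u])) u = v;
        if(distance[u] == INF_W) break;   // remaining vertices unreachable
        inS[u] = true;
        for(int v = 0; v < V; v++)    // relax every edge leaving u
            if(!inS[v] && adj[u][v] != INF_W &&
               distance[u] + adj[u][v] < distance[v])
                distance[v] = distance[u] + adj[u][v];
    }
    return distance;
}
```

For instance, with edges 0→1 (weight 1), 1→2 (weight 2) and 0→2 (weight 4), the distance to vertex 2 is 3 via vertex 1, not 4 via the direct edge.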

2014/03/03: I have heard and read about the single-source shortest path problem in graph theory dozens of times, but this is the first time I have actually written a program to solve it. The algorithm follows the greedy paradigm, updating the shortest distance from the source to every other vertex.
The dijkstra function is quite simple:

```cpp
#define INF numeric_limits<int>::max()
vector<int> shortestPath(Graph &g, int start=1){
    vector<int> parent(g.V(), -1);
    vector<double> distance(g.V(), INF);
    vector<bool> done(g.V(), false);
    parent[start] = start;
    distance[start] = 0;

    MinPQ pq;
    pq.init(g.V());

    pq.push(Pair(0.0, start));
    while(!pq.empty()){
        Pair curr = pq.pop();
        if(done[curr.second])
            continue;
        done[curr.second] = true;
        for(Edge &e : g.adj(curr.second)){  // relax edges leaving the settled vertex
            if(done[e.to()])
                continue;
            double dis = e.weight() + distance[e.from()];
            if(dis < distance[e.to()]){
                int index = pq.find(e.to());
                if(index == -1)
                    pq.push(Pair(dis, e.to()));
                else
                    pq.decrease(Pair(dis, e.to()), index);
                parent[e.to()] = e.from();
                distance[e.to()] = dis;
            }
        }
    }
    return parent;
}
```

The most important and difficult part is building MinPQ. The default priority_queue in C++ does not support the find and decrease operations, which are vital to Dijkstra’s algorithm. At first I implemented find in O(V) time; it worked but was not efficient. I then used an additional array mapindex to hold the index of each vertex inside MinPQ, which makes find an O(1) operation (decrease is still O(log V) because of the sift-up).

```cpp
class MinPQ{
private:
    Pair *data;       // 1-indexed binary heap of (key, vertex id)
    int* mapindex;    // mapindex[id] = position of vertex id in data, or -1
    int MAX;
    int size;
public:
    MinPQ(): data(NULL), mapindex(NULL), MAX(0), size(0){
    }
    ~MinPQ(){
        delete[] data;
        delete[] mapindex;
    }
    void init(int max){
        delete[] data;
        delete[] mapindex;
        MAX = max;
        size = 0;
        data = new Pair[max+1];     // slot 0 is unused (1-indexed heap)
        mapindex = new int[max];
        for(int i = 0; i < max; i++)
            mapindex[i] = -1;
    }
    void heapify_up(int child){
        int parent = child/2;
        while(parent && data[child].first < data[parent].first){
            swap(data[child], data[parent]);
            updateindex(child);
            updateindex(parent);
            child = parent;
            parent /= 2;
        }
    }
    void heapify_down(int parent){
        int child = 2*parent;
        while(child <= size){
            if(child < size && data[child+1].first < data[child].first)
                child++;    // pick the smaller of the two children
            if(data[child].first < data[parent].first){
                swap(data[child], data[parent]);
                updateindex(child);
                updateindex(parent);
                parent = child;
                child *= 2;
            }
            else
                break;
        }
    }
    void updateindex(int index){
        mapindex[data[index].second] = index;
    }
    bool push(const Pair& x){
        if(size >= MAX) return false;
        data[++size] = x;
        updateindex(size);
        heapify_up(size);
        return true;
    }
    Pair min() const{
        assert(!empty());
        return data[1];
    }
    Pair pop(){
        assert(!empty());
        Pair value = data[1];
        mapindex[value.second] = -1;
        data[1] = data[size--];
        if(size > 0){
            updateindex(1);
            heapify_down(1);
        }
        return value;
    }
    bool empty() const{
        return size == 0;
    }
    int find(int id){
        return mapindex[id];
    }
    bool decrease(const Pair& t, int index){
        data[index] = t;
        heapify_up(index);
        return true;
    }
};
```

Time complexity is O(E log V) and space complexity is O(V).
Tested on data from http://algs4.cs.princeton.edu/44sp/.

References: Algorithms, Robert Sedgewick

## Max points on a line

I have been preparing for upcoming technical interviews for two weeks, and Leetcode is my main source of practice. I have already solved over 80% of the problems, but I find it difficult to review my solutions by merely looking at the source code. Therefore, I want to write down my thinking process for several interesting problems on my blog, which may be useful in the future.

The first problem I would like to share is as follows.

Problem: Given n points on a 2D plane, find the maximum number of points that lie on the same straight line.

Having some background in Computer Vision, I instantly recognized the resemblance of this problem to the Hough Transform, which detects shapes such as lines and circles in pictures by a voting method. The basic idea of my solution:
1. Count the occurrences of each point and remove duplicates.
2. Go over all n(n-1)/2 possible pairs of points and put their lines into an unordered_map<Line, unordered_set<int>>.
3. Go over all lines in the hash table and count the total number of points lying on each line, based on the counting done in step 1.

Hashing is the central idea that makes the algorithm run fast. My code is a bit lengthy but easy to understand. It would be no problem to encounter this one in an interview.
To be continued.

```cpp
/**
 * Definition for a point.
 * struct Point {
 *     int x;
 *     int y;
 *     Point() : x(0), y(0) {}
 *     Point(int a, int b) : x(a), y(b) {}
 * };
 */
#define INF numeric_limits<double>::max()
#define SMALL 0.0000001

hash<double> hash_double;

// y = ax + b (a vertical line is stored as a = INF, b = x)
struct Line{
    double a, b;
    Line(double a_, double b_): a(a_), b(b_){}
    bool operator==(const Line &other) const{
        return fabs(a-other.a) < SMALL && fabs(b-other.b) < SMALL;
    }
};

struct hashLine{
    size_t operator()(const Line &l) const{
        return hash_double(l.a) ^ hash_double(l.b);
    }
};

bool pointLess(const Point& p1, const Point& p2){
    return p1.x == p2.x ? p1.y < p2.y : p1.x < p2.x;
}

bool pointEqual(const Point& p1, const Point& p2){
    return p1.x == p2.x && p1.y == p2.y;
}

class Solution {
public:
    int maxPoints(vector<Point> &points) {
        unordered_map<Line, unordered_set<int>, hashLine> count;
        int n = points.size();
        if(n <= 2) return n;

        // Step 1: sort, then collapse duplicates into ps / countEach.
        sort(points.begin(), points.end(), pointLess);
        vector<int> countEach(n, 0);

        vector<Point> ps;
        ps.push_back(points[0]);
        for(int i = 0; i < n; i++){
            if(pointEqual(points[i], ps.back())){
                countEach[ps.size()-1]++;
            }
            else{
                ps.push_back(points[i]);
                countEach[ps.size()-1] = 1;
            }
        }

        n = ps.size();
        if(n == 1) return countEach[0];

        // Step 2: vote each pair of distinct points into its line.
        for(int i = 0; i < n; i++){
            for(int j = i+1; j < n; j++){
                double dx = ps[i].x - ps[j].x, dy = ps[i].y - ps[j].y;
                double a, b;
                if(dx){
                    a = dy/dx;
                    b = ps[i].y - a*ps[i].x;
                }
                else{
                    a = INF;
                    b = ps[i].x;
                }
                Line l(a, b);
                count[l].insert(i);
                count[l].insert(j);
            }
        }

        // Step 3: the answer is the line with the largest weighted vote.
        int maxP = 0;
        for(auto i : count){
            int size = 0;
            for(auto j : i.second){
                size += countEach[j];
            }
            maxP = max(maxP, size);
        }
        return maxP;
    }
};
```

## How did I get MEXT!

I have recently received many questions about my experience of studying abroad from students at my old high school. Instead of enumerating those questions and answering them one by one, I want to share my experience of getting the Japanese Government Scholarship (MEXT). I hope that after reading this entry, you can figure out a realistic plan to realize your own dreams.

I learned about MEXT through my sister, who was studying at Hanoi University of Science and Technology. A friend of hers, whose grades were among the highest at HUST, got MEXT and came to Japan to study. I was impressed by her story and instantly attracted to MEXT. I heard the story when I was a high school senior, and I started planning to get MEXT from then on. The plan was quite simple: I just forced myself to study English, Maths, Physics and Chemistry harder. Nonetheless, the plan did not go well. I loved mathematics too much to stay focused on other subjects. I gradually stopped studying physics, chemistry and English, thinking I would be able to review them before the tests at the Japanese Embassy. It turned out I had overthought it, because in fact there was nothing I could do in high school to prepare for MEXT.

After entering HUST, I knew I had to overcome the most challenging part of my plan, which was to be among the top ten students by grades in the first semester. Because I had studied in the centre of gifted education, my classmates in university were far superior to my classmates in high school. Most of them came from gifted high schools, and more than half had won prizes in the National Olympiad in maths, physics or informatics. Some of them had even attended the International Olympiad in physics or maths. To be honest, at the beginning of my freshman year, I thought it was almost impossible to be in the top ten of my class, let alone my university. However, my plan looked much more plausible after the first half of the semester. In the end, I got the highest GPA in my class and was in the top 5 of the university. I was proud and quite sure that I would be selected as a MEXT candidate.

I started studying English and reviewing chemistry and physics at the beginning of the second semester. It was very boring to study subjects I had no interest in, but I did it anyway, because in order to accomplish my plan I had to learn whatever was necessary. After a semester, I thought I was quite ready for the tests. I did not do as well on them as I had expected; I made many rueful mistakes. Fortunately, I passed the exams and got an interview. The interview lasted only 10 minutes and was fun. After 6 months, the result came out, and I was happy as hell.

That was my experience, written in a very unorganized way. After taking the TOEFL, I feel that my writing skill has gotten worse day by day. 🙂