I'm just not resigned to it. I don't want to give up, and I don't want to stop. Defeat, hardship, humiliation: these are the first three gates, and I feel I have already passed them. What comes next is to calm down and bide my time, sharpening my skills quietly. Fortunately, I have learned how to act when I feel lost, and how to sort out my thoughts and make choices when everything is in disarray. It is a blessing to have learned all this. As a challenger, all of this is genuinely interesting.
September
Time flies; suddenly it is September. I have finally found a way to deal with an old habit of mine.
Lately I have been reading The Qin Empire (《大秦帝国》), watching famous strategists display their learning across the seven Warring States, while I keep interviewing and keep failing. I want to achieve something the way they did, to find my proper place in the warring states of AI. So I keep sending out resumes, but the job hunt itself makes the mind restless: you want to land a job immediately, to put everything down immediately.
Still, I should calm down and keep sharpening my sword; it is not yet keen enough. Pang Juan, Sun Bin, Su Qin, Zhang Yi all trained in the mountains for over a decade, while I have only studied at CMU for two years. Take it slowly.
Tension and anxiety come from having too much to do, so I might as well concentrate on doing the most important thing well.
Focus my energy on one direction: perception for self-driving cars. Finish the project at hand first, then pick up other things along the way.
The most important task at hand is segmentation. LiDAR matters a great deal in self-driving right now, and I want to add LiDAR to my system. As for the system itself, the plan is to build the pipeline first, then add LiDAR to improve accuracy.
Recently I learned another thing: when something feels stressful, subtract from it. Start with the simplest, most doable piece. Finishing one thing lowers the pressure and helps you focus. Even making a focused choice is hard, because there is always too much you want to do; focus means giving things up, and deciding what to give up is the hardest part of the trade-off. But if you find yourself tangled up and miserable, just start with one simple thing.
leetcode891-sums-of-subsequence-widths
Description
Given an array of integers A, consider all non-empty subsequences of A.
For any sequence S, let the width of S be the difference between the maximum and minimum element of S.
Return the sum of the widths of all subsequences of A.
As the answer may be very large, return the answer modulo 10^9 + 7.
Example 1:
Input: [2,1,3]
Output: 6
Explanation:
Subsequences are [1], [2], [3], [2,1], [2,3], [1,3], [2,1,3].
The corresponding widths are 0, 0, 0, 1, 1, 2, 2.
The sum of these widths is 6.
Idea
Previously, I did a DFS, but it exceeded the time limit. However, once you see the nature of the problem, it is just a math problem:
The order of the initial array doesn't matter,
so my first intuition is to sort the array.
For A[i]:
There are i smaller numbers,
so there are 2 ^ i sequences in which A[i] is maximum.
we should do res += A[i] * (2 ^ i)
There are n - i - 1 bigger numbers,
so there are 2 ^ (n - i - 1) sequences in which A[i] is minimum.
we should do res -= A[i] * 2 ^ (n - i - 1)
Done.
Time Complexity:
O(NlogN)
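As a quick sanity check, the counting argument above can be verified directly against the worked example. This is a small sketch of my own (the function name `sum_widths_formula` is made up, and the modulo step is omitted for clarity):

```python
def sum_widths_formula(nums):
    # Sort so that the element at index i has exactly i smaller elements.
    nums = sorted(nums)
    n = len(nums)
    res = 0
    for i, x in enumerate(nums):
        # x is the maximum of 2^i subsequences (choose any subset of the
        # i smaller elements) and the minimum of 2^(n-i-1) subsequences.
        res += x * (2 ** i) - x * (2 ** (n - i - 1))
    return res

print(sum_widths_formula([2, 1, 3]))  # 6, matching Example 1 above
```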
Code
dfs

```python
class Solution(object):
    def sumSubseqWidths(self, A):
        """
        :type A: List[int]
        :rtype: int
        """
        self.res = []
        A = sorted(A)
        self.dfs([], 0, A)
        return sum(self.res)

    def dfs(self, pre, start, A):
        if start > len(A):
            return
        if len(pre) > 0:
            self.res.append(pre[-1] - pre[0])
        for i in range(start, len(A)):
            pre.append(A[i])
            self.dfs(pre[:], i + 1, A)
            pre.pop()
```
math

```cpp
class Solution {
public:
    int sumSubseqWidths(vector<int>& A) {
        sort(A.begin(), A.end());
        long c = 1, res = 0, mod = 1e9 + 7;
        for (int i = 0; i < A.size(); i++, c = (c << 1) % mod) {
            res = (res + A[i] * c - A[A.size() - i - 1] * c) % mod;
        }
        return (res + mod) % mod;
    }
};
```
C++
Vector in C++ STL
Vectors are the same as dynamic arrays, with the ability to resize themselves automatically when an element is inserted or deleted; their storage is handled automatically by the container. Vector elements are placed in contiguous storage so that they can be accessed and traversed using iterators. In vectors, data is inserted at the end. Inserting at the end takes amortized constant time, as occasionally the underlying array needs to be extended. Removing the last element takes only constant time because no resizing happens. Inserting and erasing at the beginning or in the middle is linear in time.
Certain functions associated with the vector are:
Iterators
begin() – Returns an iterator pointing to the first element in the vector
end() – Returns an iterator pointing to the theoretical element that follows the last element in the vector
…
leetcode65-valid-number
Description
Validate if a given string is numeric.
Some examples:
“0” => true
“ 0.1 “ => true
“abc” => false
“1 a” => false
“2e10” => true
Note: It is intended for the problem statement to be ambiguous. You should gather all requirements up front before implementing one.
Update (2015-02-10):
The signature of the C++ function had been updated. If you still see your function signature accepts a const char * argument, please click the reload button to reset your code definition.
Idea
Deterministic finite automaton (DFA), a classic tool from natural language processing.
Code
```python
class Solution(object):
```
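The full solution is elided above. As one possible sketch of the idea, the sign/dot/exponent rules can be enforced with a flag-based scan, which is equivalent in effect to walking a small DFA. The function name `is_number` and the exact rule set here are my assumptions, not the original code:

```python
def is_number(s):
    # Flag-based scan (a sketch, equivalent to a small DFA over the input).
    s = s.strip()
    seen_digit = seen_dot = seen_exp = False
    for i, c in enumerate(s):
        if c.isdigit():
            seen_digit = True
        elif c in '+-':
            # A sign is allowed only at the start or right after the exponent.
            if i > 0 and s[i - 1] not in 'eE':
                return False
        elif c == '.':
            # At most one dot, and none after the exponent.
            if seen_dot or seen_exp:
                return False
            seen_dot = True
        elif c in 'eE':
            # One exponent, and it needs at least one digit before it.
            if seen_exp or not seen_digit:
                return False
            seen_exp = True
            seen_digit = False  # require digits after the exponent too
        else:
            return False
    return seen_digit
```

This reproduces the examples in the problem statement: `"0"` and `" 0.1 "` and `"2e10"` are accepted, `"abc"` and `"1 a"` are rejected.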
ml-metrics
Revisit the classical metrics in machine learning. (mainly cited from Koo Ping Shung)
Precision
Precision tells you, out of everything your model predicted positive, how many are actually positive.
In email spam detection, a false positive means that a non-spam email (actual negative) has been identified as spam (predicted positive). The email user might lose important emails if the precision of the spam detection model is not high.
Recall
Recall measures how many of the actual positives our model captures by labeling them positive (true positives).
Similarly, consider sick-patient detection: if a sick patient (actual positive) goes through the test and is predicted as not sick (predicted negative), the cost of that false negative can be extremely high, especially if the sickness is contagious.
F1
$F_1 = \frac{2}{\frac{1}{precision} + \frac{1}{recall}}$
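In code, the three metrics fall directly out of the confusion-matrix counts. A minimal sketch, where the counts (`tp`, `fp`, `fn`) are made-up numbers for illustration:

```python
def precision_recall_f1(tp, fp, fn):
    # Precision: of everything predicted positive, the fraction actually positive.
    precision = tp / (tp + fp)
    # Recall: of all actual positives, the fraction we captured.
    recall = tp / (tp + fn)
    # F1: harmonic mean of precision and recall.
    f1 = 2 / (1 / precision + 1 / recall)
    return precision, recall, f1

# Hypothetical spam-detection counts: 8 true positives, 2 false positives,
# 4 false negatives.
p, r, f = precision_recall_f1(tp=8, fp=2, fn=4)
print(p, r, f)  # precision = 0.8, recall ≈ 0.667, f1 ≈ 0.727
```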
Multi-View 3D Object Detection Network for Autonomous Driving
Abstract
This paper aims at high-accuracy 3D object detection in autonomous driving scenarios. We propose Multi-View 3D networks (MV3D), a sensory-fusion framework that takes both LIDAR point cloud and RGB images as input and predicts oriented 3D bounding boxes. We encode the sparse 3D point cloud with a compact multi-view representation. The network is composed of two subnetworks: one for 3D object proposal generation and another for multi-view feature fusion. The proposal network generates 3D candidate boxes efficiently from the bird's eye view representation of 3D point cloud. We design a deep fusion scheme to combine region-wise features from multiple views and enable interactions between intermediate layers of different paths. Experiments on the challenging KITTI benchmark show that our approach outperforms the state-of-the-art by around 25% and 30% AP on the tasks of 3D localization and 3D detection. In addition, for 2D detection, our approach obtains 10.3% higher AP than the state-of-the-art on the hard data among the LIDAR-based methods.
Structure
Multi-view
Bird’s Eye View Representation
Front View Representation
Gains and Losses
Looking back on some gains and losses, I find that when I obsess over the outcome of something while doing it, I can hardly move at all. Only when I calm down, when I set aside gains and losses, can I immerse myself in the work itself. Going through the process is itself a reward. Caring about gains and losses is, at bottom, fear of failure, and beneath that fear lies a simpler cause: I am tired, worn out, and want to stop, no longer willing to struggle, regroup, and repeat the process. But if this process means you keep growing, why be afraid? Each so-called failure is just another round of error correction. Of course, the bitterness of failure still stings, but the happiness of putting all that down and simply finishing the items on my list one by one is really quite nice.
As always, the same old line: just calm down, and it will be fine.
CyCADA
CYCADA: CYCLE-CONSISTENT ADVERSARIAL DOMAIN ADAPTATION
Abstract
Domain adaptation is critical for success in new, unseen environments. Adversarial adaptation models applied in feature spaces discover domain invariant representations, but are difficult to visualize and sometimes fail to capture pixel-level and low-level domain shifts. Recent work has shown that generative adversarial networks combined with cycle-consistency constraints are surprisingly effective at mapping images between domains, even without the use of aligned image pairs. We propose a novel discriminatively-trained Cycle-Consistent Adversarial Domain Adaptation model. CyCADA adapts representations at both the pixel-level and feature-level, enforces cycle-consistency while leveraging a task loss, and does not require aligned pairs. Our model can be applied in a variety of visual recognition and prediction settings. We show new state-of-the-art results across multiple adaptation tasks, including digit classification and semantic segmentation of road scenes, demonstrating transfer from synthetic to real world domains.