Face-Landmark-Extraction-Pytorch

Trained with NVIDIA P100 GPU

NLP_Sentiment_Classification

  • Each sample is a review and a label indicating whether the review is Positive or Negative

  • In `review.txt`, the ratio of Positive to Negative occurrences for each word is as follows: Positive Negative Ratio
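The per-word ratio above can be sketched roughly like this (the function name, the `min_count` filter, and the +1 smoothing are my own illustrative choices, not necessarily what the project uses):

```python
from collections import Counter
import numpy as np

def positive_negative_ratios(reviews, labels, min_count=50):
    """Log ratio of how often each word appears in positive vs. negative reviews.

    Positive values lean Positive, negative values lean Negative,
    and values near zero are ambiguous words.
    """
    pos, neg, total = Counter(), Counter(), Counter()
    for review, label in zip(reviews, labels):
        for word in review.split():
            total[word] += 1
            (pos if label == "POSITIVE" else neg)[word] += 1
    ratios = {}
    for word, count in total.items():
        if count >= min_count:
            # +1 smoothing avoids division by zero for one-sided words
            ratios[word] = np.log((pos[word] + 1) / (neg[word] + 1))
    return ratios
```

The log makes the scale symmetric: a word twice as common in positive reviews gets the same magnitude as one twice as common in negative reviews, just with opposite sign.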

  • The training performance using this raw data is: Before Reduce Noise

  • It looks like I am doing a redundant calculation when I update the weights in the hidden layer. After I optimized this: First Noise Reduction
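A minimal sketch of that optimization, assuming (as is common for this kind of bag-of-words network) that the input is a binary word-presence vector: the full `input @ w_in` product is redundant because most inputs are zero, so both the forward pass and the weight update only need to touch the rows for words that actually appear in the review. All names and dimensions here are hypothetical:

```python
import numpy as np

vocab_size, hidden_size = 10, 4  # illustrative sizes only
rng = np.random.default_rng(0)
w_in = rng.normal(0.0, 0.1, (vocab_size, hidden_size))

def forward_sparse(word_indices):
    # With binary inputs, input @ w_in reduces to summing the
    # rows of w_in for the words present in the review.
    return w_in[word_indices].sum(axis=0)

def backprop_sparse(word_indices, hidden_delta, lr=0.1):
    # Only rows for words present in the review get a nonzero
    # gradient, so every other row of w_in is skipped entirely.
    for i in word_indices:
        w_in[i] -= lr * hidden_delta
```

Skipping the zero rows is what turns a full matrix multiply per review into a handful of row additions, which is where the speedup comes from.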

  • Still, the performance is not quite good enough :confused:. Let’s look at the distribution of words:

  • It looks like there are too many vague-meaning words (those close to zero, i.e. the middle values), so let’s trim them out. After the trimming, the performance is: Last Noise Reduction
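The trimming step can be sketched as a simple cutoff on the absolute log-ratio (the function name and cutoff value are illustrative assumptions):

```python
def trim_vocabulary(ratios, polarity_cutoff=0.3):
    """Keep only words whose pos/neg log-ratio is far from zero.

    Words near zero appear about equally in positive and negative
    reviews, so they carry little sentiment signal and mostly add noise.
    """
    return {word for word, r in ratios.items() if abs(r) >= polarity_cutoff}
```

Because the ratio is a log, the cutoff is symmetric: strongly positive and strongly negative words both survive, while neutral filler words are dropped.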

  • :stuck_out_tongue: The performance is boosted from 200 to 6000.

  • After training, let’s look at the weights in the hidden layer.
  • Using these weights, we can find words with similar meanings!
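One common way to do this, sketched here as an assumption about the project's approach: treat each word's row of the input-to-hidden weight matrix as its embedding and rank words by cosine similarity. The helper names and toy data are mine:

```python
import numpy as np

def most_similar(word, word2index, w_in, top_n=5):
    """Rank words by cosine similarity of their hidden-layer weight rows."""
    v = w_in[word2index[word]]
    norms = np.linalg.norm(w_in, axis=1) * np.linalg.norm(v)
    sims = (w_in @ v) / np.maximum(norms, 1e-12)  # guard zero rows
    index2word = {i: w for w, i in word2index.items()}
    order = np.argsort(-sims)  # descending similarity
    return [(index2word[i], float(sims[i])) for i in order[:top_n]]
```

Words the network learned to treat alike (e.g. two strongly positive words) end up with nearly parallel weight rows, so they score close to 1.0 against each other.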

  • Let’s take a look at its t-SNE visualization.
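A minimal sketch of producing that visualization, assuming scikit-learn's t-SNE is applied to the hidden-layer weight rows (the function name, perplexity, and seed are my own choices):

```python
import numpy as np
from sklearn.manifold import TSNE

def embed_words_2d(w_in, perplexity=5, seed=0):
    """Project each word's hidden-layer weight row down to 2-D with t-SNE."""
    # perplexity must be smaller than the number of words being embedded
    tsne = TSNE(n_components=2, perplexity=perplexity, random_state=seed)
    return tsne.fit_transform(w_in)
```

The resulting 2-D points can then be scattered with each word as a label; words the network treats similarly cluster together.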

  • Okay, then what about the positive and negative words?

Github