word2vec

Closed
Opened Mar 21, 2016 by DESHPANDE SRIJAY PARAG@srijayd

Does the vocab_size match the actual size of vocab in word2vec.c?

Created by: GoogleCodeExporter



What steps will reproduce the problem?
1. Download attached text_simple train file
2. Compile word2vec.c as: gcc word2vec.c -o word2vec -lm -pthread
3. Run: ./word2vec -train text_simple -save-vocab vocab.txt

What is the expected output? What do you see instead?
Expected contents of the saved vocab.txt file:
===============
</s> 0
and 12
the 11
four 10
in 8
used 5
war 5
one 5
nine 5
===============
What is actually seen in the file:
===============
</s> 0
and 12
the 11
four 10
in 8
used 5
war 5
one 5
===============

The last element "nine 5" was missing.

What version of the product are you using? On what operating system?
MacOS, gcc version 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00)

Please provide any additional information below.

This is NOT really a bug report, because I am confused about the format of train_file and how the vocab is constructed from it.

Based on the source code of word2vec.c, when reading from train_file, it will

1. insert </s> as the first element in vocab

2. scan each word (or </s> for each newline) in train_file, add it to vocab, and hash it in vocab_hash

So far, vocab_size = the number of words in vocab, INCLUDING the leading </s>

3. sort the words in vocab based on their counts, but keep </s> as the first element of vocab

Now vocab_size becomes the number of words in vocab, EXCLUDING the leading </s>. And if there is no newline character in train_file, </s> won't even be hashed in vocab_hash.

So there is an inconsistency between vocab_size and the actual size of vocab (including </s>). It could be a bug, because later the vocab is always iterated from index 0 to vocab_size-1, as in SaveVocab(). As a result, the leading </s> is saved, but the last element in vocab is ignored. At least, that is what happens with the simple train file "text_simple" attached here.

Original issue reported on code.google.com by ma.li...@gmail.com on 25 Aug 2013 at 2:38

Attachments:

  • text_simple
Reference: srijayd/word2vec#6