Extract keywords for a target news domain from the database (`gnews.gnews_detail`).

First, a Transformer model encodes each news article into a semantic vector, and HDBSCAN clusters these vectors. Next, the content of `predict_doc.txt` is used to predict which cluster the target domain most likely belongs to. Finally, unsupervised algorithms such as RAKE, TF-IDF, TextRank, and MultipartiteRank extract keywords from the news in that cluster and store them in `tag_list.csv`.
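The "predict which cluster" step amounts to comparing the embedding of `predict_doc.txt` against each cluster's centroid. A minimal sketch of that idea, using toy 2-D vectors in place of real Transformer embeddings (the function names here are illustrative, not taken from `gnews_keyword_extraction.py`):

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def predict_cluster(doc_vec, centroids):
    """Return the id of the cluster centroid closest to doc_vec."""
    return max(centroids, key=lambda cid: cosine(doc_vec, centroids[cid]))

# Toy stand-ins for HDBSCAN cluster centroids in embedding space.
centroids = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [0.7, 0.7]}
print(predict_cluster([0.9, 0.1], centroids))  # → 0
```

In the real pipeline the vectors come from a sentence Transformer and the clusters from HDBSCAN; only the nearest-centroid comparison is shown here.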
```
pip install -r requirements.txt
```

Note: see `requirements.txt` for more details.

Copy the news content of the target domain and paste it into `predict_doc.txt`, then run:

```
python gnews_keyword_extraction.py --topK 80 --target_domain_range 1
```
- `--topK` (optional): get the top K keywords. (Default: `80`)
- `--target_domain_range` (optional): candidate range of the target domain. `0` means only the single target cluster is used; `1` means a total of three clusters are used (the target cluster plus one neighbor on each side). (Default: `1`)

Get the keywords from `tag_list.csv`.
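Of the extraction algorithms listed above, TF-IDF is simple enough to sketch with the standard library. A minimal, self-contained version (tokenized documents are assumed; the function name and toy corpus are illustrative, not from the repo):

```python
import math
from collections import Counter

def tfidf_keywords(docs, doc_index, top_k=5):
    """Score the tokens of docs[doc_index] by TF-IDF and return the top_k.

    docs is a list of token lists; TF is term frequency within the target
    document, IDF is log(N / document frequency) over the whole corpus.
    """
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))          # document frequency: count each token once per doc
    tf = Counter(docs[doc_index])
    scores = {
        t: (tf[t] / len(docs[doc_index])) * math.log(n / df[t])
        for t in tf
    }
    return [t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]]

# Toy corpus of pre-tokenized "articles".
docs = [
    ["renew", "house", "policy", "news"],
    ["house", "market", "news"],
    ["sports", "news"],
]
print(tfidf_keywords(docs, 0, top_k=2))
```

Terms that appear in every document (like `news` here) get an IDF of zero and drop out, which is why stopword files such as `cn_stopwords.txt` and `customized_stopwords.txt` matter less for TF-IDF than for frequency-only scoring.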