mirror of https://github.com/fxsjy/jieba.git
synced 2025-07-24 00:00:05 +08:00

merge change from master to jieba3k

This commit is contained in:
commit aae91b6fb6

.gitignore (vendored, 2 changes)
@@ -161,3 +161,5 @@ pip-log.txt
 # Mac crap
 .DS_Store
+
+*.log

Changelog (new file, 89 lines)
@@ -0,0 +1,89 @@
2013-04-27: version 0.28.1
==========================
1) Hotfix: fixed a bug in English handling under full mode.

2013-04-27: version 0.28
========================
1) Added dictionary lazy loading; the dictionary path can now be changed after 'import jieba'. Thanks hermanschaaf
2) Report the offending entry when dictionary loading fails. Thanks neuront
3) Fixed a bug where a dictionary edited with vim would fail to load. Thanks neuront

2013-04-22: version 0.27
========================
1) Added parallel segmentation, which markedly speeds up segmentation on multi-core machines
2) Fixed a bug caused by the overly high frequency of the word "的"; fixed the handling of decimal points and underscores
3) Fixed a Python 2.6 compatibility issue

2013-04-07: version 0.26
========================
1) Improved punctuation handling; previous versions filtered out all punctuation
2) Part-of-speech tags are now allowed in user-defined dictionaries
3) Improved the keyword-extraction function jieba.analyse.extract_tags
4) Fixed a bug when running under the PyPy interpreter

2013-02-18: version 0.25
========================
1) Added support for segmenting Traditional Chinese
2) Fixed a bug where cache-file generation could fail when multiple Python processes ran at once

2012-12-28: version 0.24
========================
1) Fixed poor segmentation of long sentences without punctuation; the problem was that multiplying a long chain of small probabilities can underflow to 0 in floating point.
2) Fixed the full-mode English segmentation bug introduced in 0.23

2012-12-12: version 0.23
========================
1) Fixed the inability of earlier versions to recognize mixed Chinese-English words

2012-11-28: version 0.22
========================
1) Added jieba.cut_for_search, which further splits "long words" on top of accurate-mode segmentation; intended for search-engine use, it has higher recall than accurate mode.
2) Started supporting Python 3.x; previously only the Python 2.x series was supported. From this version on there is a separate jieba3k

2012-11-23: version 0.21
========================
1) Fixed the excess of scattered single characters in full-mode segmentation
2) The user-dictionary function load_userdict now accepts a file-like object as input

2012-11-06: version 0.20
========================
1) Added part-of-speech tagging

2012-10-25: version 0.19
========================
1) Sped up module loading
2) Added an interface for user-defined dictionaries

2012-10-16: version 0.18
========================
1) Added keyword extraction

2012-10-12: version 0.17
========================
1) The dictionary file dict.txt is now stored sorted, speeding up Trie construction and cutting component initialization time by 10%
2) Strengthened training on person names, improving recognition of out-of-vocabulary person names

2012-10-09: version 0.16
========================
1) Replaced the memoized recursive search for the best segmentation path with a loop, making segmentation 15% faster
2) Fixed a bug in the Viterbi algorithm implementation

2012-10-07: version 0.14
========================
1) jieba was published to PyPI; it can be installed quickly with easy_install or pip
2) Merged the 2006 edition of the Sogou open word list and removed some low-frequency words
3) Optimized the code, shortening program initialization time
4) Added an online demo
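The 0.24 entry above is worth a concrete illustration: multiplying many per-word probabilities, each well below 1, quickly underflows to 0.0, which is why scoring is better done by summing log-probabilities. A minimal standalone sketch (not jieba code):

```python
import math

# 300 words, each with probability 1e-5: the raw product is 1e-1500,
# far below the smallest representable double, so it underflows to 0.0
probs = [1e-5] * 300
product = 1.0
for p in probs:
    product *= p
assert product == 0.0  # floating-point underflow

# Summing logs keeps the score finite and preserves the ordering of paths
log_score = sum(math.log(p) for p in probs)
assert log_score < 0 and math.isfinite(log_score)
```

This is the same reason the dictionary frequencies in jieba's source are normalized with `log(float(v)/total)`.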
README.md (58 changes)
@@ -124,7 +124,18 @@ Output:
 北京 ns
 天安门 ns
+
+Feature 5): Parallel segmentation
+==================
+* Principle: split the target text by line, distribute the lines to several Python processes to segment in parallel, then merge the results, for a considerable speed-up.
+* Built on Python's standard multiprocessing module; Windows is not supported for now.
+* Usage:
+  * `jieba.enable_parallel(4)` # enable parallel mode; the argument is the number of processes
+  * `jieba.disable_parallel()` # disable parallel mode
+
+* Example: https://github.com/fxsjy/jieba/blob/master/test/parallel/test_file.py
+
+* Benchmark: on a 4-core 3.4GHz Linux machine, accurate-mode segmentation of the complete works of Jin Yong ran at 1MB/s, 3.3 times the single-process version.
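The scheme described above (split by line, fan the lines out to worker processes, merge the results in order) can be sketched with the standard multiprocessing module. `toy_cut` below is a hypothetical stand-in tokenizer, not jieba's segmenter:

```python
from multiprocessing import Pool

def toy_cut(line):
    # stand-in for jieba's per-line segmentation: just split on spaces
    return line.split()

def parallel_cut(text, processes=4):
    parts = text.split('\n')                 # split the target text by line
    with Pool(processes) as pool:
        results = pool.map(toy_cut, parts)   # one line per task; map preserves order
    return [w for words in results for w in words]  # merge per-line results

if __name__ == '__main__':
    assert parallel_cut('a b\nc d') == ['a', 'b', 'c', 'd']
```

jieba's own `enable_parallel` follows this shape, swapping `toy_cut` for its real list-returning cut helpers.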
 Other dictionaries
 ========
@@ -134,7 +145,26 @@ https://github.com/fxsjy/jieba/raw/master/extra_dict/dict.txt.small
 2. A dictionary file with better support for Traditional Chinese segmentation:
 https://github.com/fxsjy/jieba/raw/master/extra_dict/dict.txt.big

-Download the dictionary you need, then overwrite jieba/dict.txt.
+Download the dictionary you need, then overwrite jieba/dict.txt, or use `jieba.set_dictionary('data/dict.txt.big')`
+
+
+Change of module initialization mechanism: lazy load (since version 0.28)
+================================================
+
+jieba uses lazy loading: "import jieba" does not immediately trigger loading of the dictionary; the dictionary is loaded and the trie built only once it is needed. You can also initialize jieba manually:
+
+    import jieba
+    jieba.initialize() # manual initialization (optional)
+
+Versions before 0.28 could not specify the path of the main dictionary; with lazy loading in place, you can now change it:
+
+    jieba.set_dictionary('data/dict.txt.big')
+
+Example: https://github.com/fxsjy/jieba/blob/master/test/test_change_dictpath.py
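The lazy-load behaviour described above boils down to a module-level flag plus a decorator that triggers the expensive setup on first use. A minimal sketch of that pattern (names simplified from jieba's `require_initialized`; the setup body is a placeholder):

```python
initialized = False
calls = []  # records how many times setup actually ran

def initialize():
    # expensive one-time setup (in jieba: load dict.txt, build the trie)
    global initialized
    calls.append('init')
    initialized = True

def require_initialized(fn):
    # run initialize() before the first call to any decorated function
    def wrapped(*args, **kwargs):
        if not initialized:
            initialize()
        return fn(*args, **kwargs)
    return wrapped

@require_initialized
def cut(sentence):
    return sentence.split()

cut('a b')  # triggers initialize() on first use
cut('c d')  # already initialized: setup does not run again
assert calls == ['init']
```

Because the flag is checked at call time rather than import time, `set_dictionary` only has to reset `initialized` to force a rebuild with the new path.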
 Segmentation speed
@@ -242,6 +272,30 @@ Code sample (keyword extraction)

 https://github.com/fxsjy/jieba/blob/master/test/extract_tags.py

+Using Other Dictionaries
+========
+It is possible to supply Jieba with your own custom dictionary, and there are also two dictionaries readily available for download:
+
+1. You can employ a smaller dictionary for a smaller memory footprint:
+https://github.com/fxsjy/jieba/raw/master/extra_dict/dict.txt.small
+
+2. There is also a bigger file that has better support for traditional characters (繁體):
+https://github.com/fxsjy/jieba/raw/master/extra_dict/dict.txt.big
+
+By default, an in-between dictionary is used, called `dict.txt`, which is included in the distribution.
+
+In either case, download the file you want first, and then call `jieba.set_dictionary('data/dict.txt.big')` or just replace the existing `dict.txt`.
+
+Initialization
+========
+By default, Jieba employs lazy loading to build the trie only once it is necessary. This takes 1-3 seconds the first time, after which it is not initialized again. If you want to initialize Jieba manually, you can call:
+
+    import jieba
+    jieba.initialize() # (optional)
+
+You can also specify the dictionary (not supported before version 0.28):
+
+    jieba.set_dictionary('data/dict.txt.big')
+
 Segmentation speed
 =========
jieba/__init__.py (changed)
@@ -1,3 +1,4 @@
+from __future__ import with_statement
 import re
 import math
 import os,sys
@@ -9,65 +10,98 @@ import tempfile
 import marshal
 from math import log
 import random
+import threading

+DICTIONARY = "dict.txt"
+DICT_LOCK = threading.RLock()
 trie = None # to be initialized
 FREQ = {}
 min_freq = 0.0
 total =0.0
 user_word_tag_tab={}
+initialized = False

 def gen_trie(f_name):
     lfreq = {}
     trie = {}
     ltotal = 0.0
-    content = open(f_name,'rb').read().decode('utf-8')
-    for line in content.split("\n"):
-        word,freq,_ = line.split(" ")
-        freq = float(freq)
-        lfreq[word] = freq
-        ltotal+=freq
-        p = trie
-        for c in word:
-            if not c in p:
-                p[c] ={}
-            p = p[c]
-        p['']='' #ending flag
+    with open(f_name, 'rb') as f:
+        lineno = 0
+        for line in f.read().rstrip().decode('utf-8').split('\n'):
+            lineno += 1
+            try:
+                word,freq,_ = line.split(' ')
+                freq = float(freq)
+                lfreq[word] = freq
+                ltotal+=freq
+                p = trie
+                for c in word:
+                    if not c in p:
+                        p[c] ={}
+                    p = p[c]
+                p['']='' #ending flag
+            except ValueError as e:
+                print(f_name,' at line',lineno,line, file=sys.stderr)
+                raise e
     return trie, lfreq,ltotal

 def initialize(dictionary=DICTIONARY):
+    global trie, FREQ, total, min_freq, initialized
+    with DICT_LOCK:
+        if initialized:
+            return
+        if trie:
+            del trie
+            trie = None
-    _curpath=os.path.normpath( os.path.join( os.getcwd(), os.path.dirname(__file__) ) )
+        _curpath=os.path.normpath( os.path.join( os.getcwd(), os.path.dirname(__file__) ) )
+        abs_path = os.path.join(_curpath,dictionary)
+        print("Building Trie..., from " + abs_path, file=sys.stderr)
+        t1 = time.time()
+        if abs_path == os.path.join(_curpath,"dict.txt"): #default dictionary
+            cache_file = os.path.join(tempfile.gettempdir(),"jieba.cache")
+        else: #custom dictionary
+            cache_file = os.path.join(tempfile.gettempdir(),"jieba.user."+str(hash(abs_path))+".cache")
-    print("Building Trie...",file=sys.stderr)
-    t1 = time.time()
-    cache_file = os.path.join(tempfile.gettempdir(),"jieba.cache")
     load_from_cache_fail = True
-    if os.path.exists(cache_file) and os.path.getmtime(cache_file)>os.path.getmtime(os.path.join(_curpath,"dict.txt")):
-        print("loading model from cache", file=sys.stderr)
-        try:
-            trie,FREQ,total,min_freq = marshal.load(open(cache_file,'rb'))
-            load_from_cache_fail = False
-        except:
-            load_from_cache_fail = True
+        if os.path.exists(cache_file) and os.path.getmtime(cache_file)>os.path.getmtime(abs_path):
+            print("loading model from cache " + cache_file, file=sys.stderr)
+            try:
+                trie,FREQ,total,min_freq = marshal.load(open(cache_file,'rb'))
+                load_from_cache_fail = False
+            except:
+                load_from_cache_fail = True

-    if load_from_cache_fail:
-        trie,FREQ,total = gen_trie(os.path.join(_curpath,"dict.txt"))
-        FREQ = dict([(k,log(float(v)/total)) for k,v in FREQ.items()]) #normalize
-        min_freq = min(FREQ.values())
-        print("dumping model to file cache", file=sys.stderr)
-        tmp_suffix = "."+str(random.random())
-        tmp_f = open(cache_file+tmp_suffix,'wb')
-        marshal.dump((trie,FREQ,total,min_freq),tmp_f)
-        tmp_f.close()
-        if os.name=='nt':
-            import shutil
-            replace_file = shutil.move
-        else:
-            replace_file = os.rename
-        replace_file(cache_file+tmp_suffix,cache_file)
+        if load_from_cache_fail:
+            trie,FREQ,total = gen_trie(abs_path)
+            FREQ = dict([(k,log(float(v)/total)) for k,v in FREQ.items()]) #normalize
+            min_freq = min(FREQ.values())
+            print("dumping model to file cache " + cache_file, file=sys.stderr)
+            tmp_suffix = "."+str(random.random())
+            marshal.dump((trie,FREQ,total,min_freq),open(cache_file+tmp_suffix,'wb'))
+            if os.name=='nt':
+                import shutil
+                replace_file = shutil.move
+            else:
+                replace_file = os.rename
+            replace_file(cache_file+tmp_suffix,cache_file)

-    print("loading model cost ", time.time() - t1, "seconds.",file=sys.stderr)
-    print("Trie has been built successfully.", file=sys.stderr)
+        print("loading model cost ", time.time() - t1, "seconds.", file= sys.stderr)
+        print("Trie has been built successfully.", file= sys.stderr)
+        initialized = True

+def require_initialized(fn):
+    global initialized,DICTIONARY
+
+    def wrapped(*args, **kwargs):
+        if initialized:
+            return fn(*args, **kwargs)
+        else:
+            initialize(DICTIONARY)
+            return fn(*args, **kwargs)
+    return wrapped

 def __cut_all(sentence):
     dag = get_DAG(sentence)
@@ -82,13 +116,15 @@ def __cut_all(sentence):
             yield sentence[k:j+1]
             old_j = j


 def calc(sentence,DAG,idx,route):
     N = len(sentence)
-    route[N] = (1.0,'')
+    route[N] = (0.0,'')
     for idx in range(N-1,-1,-1):
         candidates = [ ( FREQ.get(sentence[idx:x+1],min_freq) + route[x+1][0],x ) for x in DAG[idx] ]
         route[idx] = max(candidates)

+@require_initialized
 def get_DAG(sentence):
     N = len(sentence)
     i,j=0,0
@@ -116,6 +152,7 @@ def get_DAG(sentence):
         DAG[i] =[i]
     return DAG


 def __cut_DAG(sentence):
     DAG = get_DAG(sentence)
     route ={}
@@ -148,8 +185,6 @@ def __cut_DAG(sentence):
             regognized = finalseg.cut(buf)
             for t in regognized:
                 yield t

 def cut(sentence,cut_all=False):
     if( type(sentence) is bytes):
         try:
@@ -178,8 +213,11 @@ def cut(sentence,cut_all=False):
-            if x.strip(' ')!='':
-                yield x
-            else:
-                for xx in x:
-                    yield xx
+            if not cut_all:
+                for xx in x:
+                    yield xx
+            else:
+                yield x

 def cut_for_search(sentence):
     words = cut(sentence)
@@ -196,6 +234,7 @@ def cut_for_search(sentence):
             yield gram3
         yield w

+@require_initialized
 def load_userdict(f):
     global trie,total,FREQ
     if isinstance(f, (str, )):
@@ -219,3 +258,59 @@ def load_userdict(f):
             p[c] ={}
             p = p[c]
         p['']='' #ending flag

+__ref_cut = cut
+__ref_cut_for_search = cut_for_search
+
+def __lcut(sentence):
+    return list(__ref_cut(sentence,False))
+def __lcut_all(sentence):
+    return list(__ref_cut(sentence,True))
+def __lcut_for_search(sentence):
+    return list(__ref_cut_for_search(sentence))
+
+@require_initialized
+def enable_parallel(processnum):
+    global pool,cut,cut_for_search
+    if os.name=='nt':
+        raise Exception("parallel mode only supports posix system")
+
+    from multiprocessing import Pool
+    pool = Pool(processnum)
+
+    def pcut(sentence,cut_all=False):
+        parts = re.compile(b'([\r\n]+)').split(sentence)
+        if cut_all:
+            result = pool.map(__lcut_all,parts)
+        else:
+            result = pool.map(__lcut,parts)
+        for r in result:
+            for w in r:
+                yield w
+
+    def pcut_for_search(sentence):
+        parts = re.compile(b'([\r\n]+)').split(sentence)
+        result = pool.map(__lcut_for_search,parts)
+        for r in result:
+            for w in r:
+                yield w
+
+    cut = pcut
+    cut_for_search = pcut_for_search
+
+def disable_parallel():
+    global pool,cut,cut_for_search
+    if pool != None:
+        pool.close()
+    pool = None
+    cut = __ref_cut
+    cut_for_search = __ref_cut_for_search
+
+def set_dictionary(dictionary_path):
+    global initialized, DICTIONARY
+    with DICT_LOCK:
+        abs_path = os.path.normpath( os.path.join( os.getcwd(), dictionary_path ) )
+        if not os.path.exists(abs_path):
+            raise Exception("path does not exist: " + abs_path)
+        DICTIONARY = abs_path
+        initialized = False
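The `calc` routine in the diff above scores each segmentation path by summing log word frequencies right-to-left over the sentence's DAG, with `route[N] = (0.0, '')` as the base case since log(1) = 0. A self-contained sketch of that dynamic program, using a tiny hypothetical frequency table rather than jieba's dict.txt:

```python
from math import log

# hypothetical log-frequency table (already normalized, as jieba stores FREQ)
FREQ = {'AB': log(0.4), 'A': log(0.3), 'B': log(0.2), 'C': log(0.1)}
min_freq = min(FREQ.values())  # fallback score for unknown words

def calc(sentence, DAG):
    # route[i] = (best log score of sentence[i:], end index of the first word)
    N = len(sentence)
    route = {N: (0.0, '')}
    for idx in range(N - 1, -1, -1):
        route[idx] = max(
            (FREQ.get(sentence[idx:x + 1], min_freq) + route[x + 1][0], x)
            for x in DAG[idx]
        )
    return route

# DAG: from each start index, the possible word-end indices
dag = {0: [0, 1], 1: [1], 2: [2]}   # 'ABC' -> A|B|C or AB|C
route = calc('ABC', dag)
assert route[0][1] == 1             # best path starts with the word 'AB'
```

Because the scores are additive logs, `max` picks the path whose probability product would be largest, without ever forming the underflow-prone product itself.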
jieba/dict.txt (changed)
@@ -367427,4 +367427,4 @@ C++ 3 nz
 c++ 3 nz
 C# 3 nz
 c# 3 nz
-AT&T 3 nz
+AT&T 3 nz
jieba/posseg/__init__.py (changed)
@@ -122,8 +122,8 @@ def __cut_DAG(sentence):
             yield t


-def cut(sentence):
-    if ( type(sentence) is bytes):
+def __cut_internal(sentence):
+    if not ( type(sentence) is str):
         try:
             sentence = sentence.decode('utf-8')
         except:
@@ -151,3 +151,18 @@ def cut(sentence):
             yield pair(xx,'eng')
         else:
             yield pair(xx,'x')
+
+def __lcut_internal(sentence):
+    return list(__cut_internal(sentence))
+
+def cut(sentence):
+    if (not hasattr(jieba,'pool')) or (jieba.pool==None):
+        for w in __cut_internal(sentence):
+            yield w
+    else:
+        parts = re.compile('([\r\n]+)').split(sentence)
+        result = jieba.pool.map(__lcut_internal,parts)
+        for r in result:
+            for w in r:
+                yield w
setup.py (2 changes)
@@ -1,6 +1,6 @@
 from distutils.core import setup
 setup(name='jieba',
-      version='0.26.1',
+      version='0.28.1',
       description='Chinese Words Segementation Utilities',
       author='Sun, Junyi',
       author_email='ccnusjy@gmail.com',
test/foobar.txt (new file, 1 line)
@@ -0,0 +1 @@
好人 12 n
test/parallel/extract_tags.py (new file, 34 lines)
@@ -0,0 +1,34 @@
import sys
sys.path.append('../../')

import jieba
jieba.enable_parallel(4)
import jieba.analyse
from optparse import OptionParser

USAGE ="usage: python extract_tags.py [file name] -k [top k]"

parser = OptionParser(USAGE)
parser.add_option("-k",dest="topK")
opt, args = parser.parse_args()


if len(args) <1:
    print(USAGE)
    sys.exit(1)

file_name = args[0]

if opt.topK==None:
    topK=10
else:
    topK = int(opt.topK)


content = open(file_name,'rb').read()

tags = jieba.analyse.extract_tags(content,topK=topK)

print(",".join(tags))
test/parallel/test.py (new file, 96 lines)
@@ -0,0 +1,96 @@
#encoding=utf-8
import sys
sys.path.append("../../")
import jieba
jieba.enable_parallel(4)

def cuttest(test_sent):
    result = jieba.cut(test_sent)
    print( "/ ".join(result) )


if __name__ == "__main__":
    cuttest("这是一个伸手不见五指的黑夜。我叫孙悟空,我爱北京,我爱Python和C++。")
    cuttest("我不喜欢日本和服。")
    cuttest("雷猴回归人间。")
    cuttest("工信处女干事每月经过下属科室都要亲口交代24口交换机等技术性器件的安装工作")
    cuttest("我需要廉租房")
    cuttest("永和服装饰品有限公司")
    cuttest("我爱北京天安门")
    cuttest("abc")
    cuttest("隐马尔可夫")
    cuttest("雷猴是个好网站")
    cuttest("“Microsoft”一词由“MICROcomputer(微型计算机)”和“SOFTware(软件)”两部分组成")
    cuttest("草泥马和欺实马是今年的流行词汇")
    cuttest("伊藤洋华堂总府店")
    cuttest("中国科学院计算技术研究所")
    cuttest("罗密欧与朱丽叶")
    cuttest("我购买了道具和服装")
    cuttest("PS: 我觉得开源有一个好处,就是能够敦促自己不断改进,避免敞帚自珍")
    cuttest("湖北省石首市")
    cuttest("湖北省十堰市")
    cuttest("总经理完成了这件事情")
    cuttest("电脑修好了")
    cuttest("做好了这件事情就一了百了了")
    cuttest("人们审美的观点是不同的")
    cuttest("我们买了一个美的空调")
    cuttest("线程初始化时我们要注意")
    cuttest("一个分子是由好多原子组织成的")
    cuttest("祝你马到功成")
    cuttest("他掉进了无底洞里")
    cuttest("中国的首都是北京")
    cuttest("孙君意")
    cuttest("外交部发言人马朝旭")
    cuttest("领导人会议和第四届东亚峰会")
    cuttest("在过去的这五年")
    cuttest("还需要很长的路要走")
    cuttest("60周年首都阅兵")
    cuttest("你好人们审美的观点是不同的")
    cuttest("买水果然后来世博园")
    cuttest("买水果然后去世博园")
    cuttest("但是后来我才知道你是对的")
    cuttest("存在即合理")
    cuttest("的的的的的在的的的的就以和和和")
    cuttest("I love你,不以为耻,反以为rong")
    cuttest("因")
    cuttest("")
    cuttest("hello你好人们审美的观点是不同的")
    cuttest("很好但主要是基于网页形式")
    cuttest("hello你好人们审美的观点是不同的")
    cuttest("为什么我不能拥有想要的生活")
    cuttest("后来我才")
    cuttest("此次来中国是为了")
    cuttest("使用了它就可以解决一些问题")
    cuttest(",使用了它就可以解决一些问题")
    cuttest("其实使用了它就可以解决一些问题")
    cuttest("好人使用了它就可以解决一些问题")
    cuttest("是因为和国家")
    cuttest("老年搜索还支持")
    cuttest("干脆就把那部蒙人的闲法给废了拉倒!RT @laoshipukong : 27日,全国人大常委会第三次审议侵权责任法草案,删除了有关医疗损害责任“举证倒置”的规定。在医患纠纷中本已处于弱势地位的消费者由此将陷入万劫不复的境地。 ")
    cuttest("大")
    cuttest("")
    cuttest("他说的确实在理")
    cuttest("长春市长春节讲话")
    cuttest("结婚的和尚未结婚的")
    cuttest("结合成分子时")
    cuttest("旅游和服务是最好的")
    cuttest("这件事情的确是我的错")
    cuttest("供大家参考指正")
    cuttest("哈尔滨政府公布塌桥原因")
    cuttest("我在机场入口处")
    cuttest("邢永臣摄影报道")
    cuttest("BP神经网络如何训练才能在分类时增加区分度?")
    cuttest("南京市长江大桥")
    cuttest("应一些使用者的建议,也为了便于利用NiuTrans用于SMT研究")
    cuttest('长春市长春药店')
    cuttest('邓颖超生前最喜欢的衣服')
    cuttest('胡锦涛是热爱世界和平的政治局常委')
    cuttest('程序员祝海林和朱会震是在孙健的左面和右面, 范凯在最右面.再往左是李松洪')
    cuttest('一次性交多少钱')
    cuttest('两块五一套,三块八一斤,四块七一本,五块六一条')
    cuttest('小和尚留了一个像大和尚一样的和尚头')
    cuttest('我是中华人民共和国公民;我爸爸是共和党党员; 地铁和平门站')
    cuttest('张晓梅去人民医院做了个B超然后去买了件T恤')
    cuttest('AT&T是一件不错的公司,给你发offer了吗?')
    cuttest('C++和c#是什么关系?11+122=133,是吗?PI=3.14159')
    cuttest('你认识那个和主席握手的的哥吗?他开一辆黑色的士。')
test/parallel/test_cutall.py (changed)
@@ -1,7 +1,8 @@
 #encoding=utf-8
 import sys
-sys.path.append("../")
+sys.path.append("../../")
 import jieba
+jieba.enable_parallel(4)

 def cuttest(test_sent):
     result = jieba.cut(test_sent,cut_all=True)
@@ -88,4 +89,4 @@ if __name__ == "__main__":
     cuttest('一次性交多少钱')
     cuttest('两块五一套,三块八一斤,四块七一本,五块六一条')
     cuttest('小和尚留了一个像大和尚一样的和尚头')
     cuttest('我是中华人民共和国公民;我爸爸是共和党党员; 地铁和平门站')
test/parallel/test_cut_for_search.py (new file, 92 lines)
@@ -0,0 +1,92 @@
#encoding=utf-8
import sys
sys.path.append("../../")
import jieba
jieba.enable_parallel(4)

def cuttest(test_sent):
    result = jieba.cut_for_search(test_sent)
    print("/ ".join(result))


if __name__ == "__main__":
    cuttest("这是一个伸手不见五指的黑夜。我叫孙悟空,我爱北京,我爱Python和C++。")
    cuttest("我不喜欢日本和服。")
    cuttest("雷猴回归人间。")
    cuttest("工信处女干事每月经过下属科室都要亲口交代24口交换机等技术性器件的安装工作")
    cuttest("我需要廉租房")
    cuttest("永和服装饰品有限公司")
    cuttest("我爱北京天安门")
    cuttest("abc")
    cuttest("隐马尔可夫")
    cuttest("雷猴是个好网站")
    cuttest("“Microsoft”一词由“MICROcomputer(微型计算机)”和“SOFTware(软件)”两部分组成")
    cuttest("草泥马和欺实马是今年的流行词汇")
    cuttest("伊藤洋华堂总府店")
    cuttest("中国科学院计算技术研究所")
    cuttest("罗密欧与朱丽叶")
    cuttest("我购买了道具和服装")
    cuttest("PS: 我觉得开源有一个好处,就是能够敦促自己不断改进,避免敞帚自珍")
    cuttest("湖北省石首市")
    cuttest("湖北省十堰市")
    cuttest("总经理完成了这件事情")
    cuttest("电脑修好了")
    cuttest("做好了这件事情就一了百了了")
    cuttest("人们审美的观点是不同的")
    cuttest("我们买了一个美的空调")
    cuttest("线程初始化时我们要注意")
    cuttest("一个分子是由好多原子组织成的")
    cuttest("祝你马到功成")
    cuttest("他掉进了无底洞里")
    cuttest("中国的首都是北京")
    cuttest("孙君意")
    cuttest("外交部发言人马朝旭")
    cuttest("领导人会议和第四届东亚峰会")
    cuttest("在过去的这五年")
    cuttest("还需要很长的路要走")
    cuttest("60周年首都阅兵")
    cuttest("你好人们审美的观点是不同的")
    cuttest("买水果然后来世博园")
    cuttest("买水果然后去世博园")
    cuttest("但是后来我才知道你是对的")
    cuttest("存在即合理")
    cuttest("的的的的的在的的的的就以和和和")
    cuttest("I love你,不以为耻,反以为rong")
    cuttest("因")
    cuttest("")
    cuttest("hello你好人们审美的观点是不同的")
    cuttest("很好但主要是基于网页形式")
    cuttest("hello你好人们审美的观点是不同的")
    cuttest("为什么我不能拥有想要的生活")
    cuttest("后来我才")
    cuttest("此次来中国是为了")
    cuttest("使用了它就可以解决一些问题")
    cuttest(",使用了它就可以解决一些问题")
    cuttest("其实使用了它就可以解决一些问题")
    cuttest("好人使用了它就可以解决一些问题")
    cuttest("是因为和国家")
    cuttest("老年搜索还支持")
    cuttest("干脆就把那部蒙人的闲法给废了拉倒!RT @laoshipukong : 27日,全国人大常委会第三次审议侵权责任法草案,删除了有关医疗损害责任“举证倒置”的规定。在医患纠纷中本已处于弱势地位的消费者由此将陷入万劫不复的境地。 ")
    cuttest("大")
    cuttest("")
    cuttest("他说的确实在理")
    cuttest("长春市长春节讲话")
    cuttest("结婚的和尚未结婚的")
    cuttest("结合成分子时")
    cuttest("旅游和服务是最好的")
    cuttest("这件事情的确是我的错")
    cuttest("供大家参考指正")
    cuttest("哈尔滨政府公布塌桥原因")
    cuttest("我在机场入口处")
    cuttest("邢永臣摄影报道")
    cuttest("BP神经网络如何训练才能在分类时增加区分度?")
    cuttest("南京市长江大桥")
    cuttest("应一些使用者的建议,也为了便于利用NiuTrans用于SMT研究")
    cuttest('长春市长春药店')
    cuttest('邓颖超生前最喜欢的衣服')
    cuttest('胡锦涛是热爱世界和平的政治局常委')
    cuttest('程序员祝海林和朱会震是在孙健的左面和右面, 范凯在最右面.再往左是李松洪')
    cuttest('一次性交多少钱')
    cuttest('两块五一套,三块八一斤,四块七一本,五块六一条')
    cuttest('小和尚留了一个像大和尚一样的和尚头')
    cuttest('我是中华人民共和国公民;我爸爸是共和党党员; 地铁和平门站')
test/parallel/test_file.py (new file, 19 lines)
@@ -0,0 +1,19 @@
import sys,time
import sys
sys.path.append("../../")
import jieba
jieba.enable_parallel(4)

url = sys.argv[1]
content = open(url,"rb").read()
t1 = time.time()
words = list(jieba.cut(content))

t2 = time.time()
tm_cost = t2-t1

log_f = open("1.log","wb")
for w in words:
    log_f.write(w.encode("utf-8"))
print('speed' , len(content)/tm_cost, " bytes/second")
test/parallel/test_pos.py (new file, 99 lines)
@@ -0,0 +1,99 @@
#encoding=utf-8
import sys
sys.path.append("../../")
import jieba
jieba.enable_parallel(4)
import jieba.posseg as pseg

def cuttest(test_sent):
    result = pseg.cut(test_sent)
    for w in result:
        sys.stdout.write(w.word+ "/"+ w.flag + ", ")
    print("")


if __name__ == "__main__":
    cuttest("这是一个伸手不见五指的黑夜。我叫孙悟空,我爱北京,我爱Python和C++。")
    cuttest("我不喜欢日本和服。")
    cuttest("雷猴回归人间。")
    cuttest("工信处女干事每月经过下属科室都要亲口交代24口交换机等技术性器件的安装工作")
    cuttest("我需要廉租房")
    cuttest("永和服装饰品有限公司")
    cuttest("我爱北京天安门")
    cuttest("abc")
    cuttest("隐马尔可夫")
    cuttest("雷猴是个好网站")
    cuttest("“Microsoft”一词由“MICROcomputer(微型计算机)”和“SOFTware(软件)”两部分组成")
    cuttest("草泥马和欺实马是今年的流行词汇")
    cuttest("伊藤洋华堂总府店")
    cuttest("中国科学院计算技术研究所")
    cuttest("罗密欧与朱丽叶")
    cuttest("我购买了道具和服装")
    cuttest("PS: 我觉得开源有一个好处,就是能够敦促自己不断改进,避免敞帚自珍")
    cuttest("湖北省石首市")
    cuttest("湖北省十堰市")
    cuttest("总经理完成了这件事情")
    cuttest("电脑修好了")
    cuttest("做好了这件事情就一了百了了")
    cuttest("人们审美的观点是不同的")
    cuttest("我们买了一个美的空调")
    cuttest("线程初始化时我们要注意")
    cuttest("一个分子是由好多原子组织成的")
    cuttest("祝你马到功成")
    cuttest("他掉进了无底洞里")
    cuttest("中国的首都是北京")
    cuttest("孙君意")
    cuttest("外交部发言人马朝旭")
    cuttest("领导人会议和第四届东亚峰会")
    cuttest("在过去的这五年")
    cuttest("还需要很长的路要走")
    cuttest("60周年首都阅兵")
    cuttest("你好人们审美的观点是不同的")
    cuttest("买水果然后来世博园")
    cuttest("买水果然后去世博园")
    cuttest("但是后来我才知道你是对的")
    cuttest("存在即合理")
    cuttest("的的的的的在的的的的就以和和和")
    cuttest("I love你,不以为耻,反以为rong")
    cuttest("因")
    cuttest("")
    cuttest("hello你好人们审美的观点是不同的")
    cuttest("很好但主要是基于网页形式")
    cuttest("hello你好人们审美的观点是不同的")
    cuttest("为什么我不能拥有想要的生活")
    cuttest("后来我才")
    cuttest("此次来中国是为了")
    cuttest("使用了它就可以解决一些问题")
    cuttest(",使用了它就可以解决一些问题")
    cuttest("其实使用了它就可以解决一些问题")
    cuttest("好人使用了它就可以解决一些问题")
    cuttest("是因为和国家")
    cuttest("老年搜索还支持")
    cuttest("干脆就把那部蒙人的闲法给废了拉倒!RT @laoshipukong : 27日,全国人大常委会第三次审议侵权责任法草案,删除了有关医疗损害责任“举证倒置”的规定。在医患纠纷中本已处于弱势地位的消费者由此将陷入万劫不复的境地。 ")
    cuttest("大")
    cuttest("")
    cuttest("他说的确实在理")
    cuttest("长春市长春节讲话")
    cuttest("结婚的和尚未结婚的")
    cuttest("结合成分子时")
    cuttest("旅游和服务是最好的")
    cuttest("这件事情的确是我的错")
    cuttest("供大家参考指正")
    cuttest("哈尔滨政府公布塌桥原因")
    cuttest("我在机场入口处")
    cuttest("邢永臣摄影报道")
    cuttest("BP神经网络如何训练才能在分类时增加区分度?")
    cuttest("南京市长江大桥")
    cuttest("应一些使用者的建议,也为了便于利用NiuTrans用于SMT研究")
    cuttest('长春市长春药店')
    cuttest('邓颖超生前最喜欢的衣服')
    cuttest('胡锦涛是热爱世界和平的政治局常委')
    cuttest('程序员祝海林和朱会震是在孙健的左面和右面, 范凯在最右面.再往左是李松洪')
    cuttest('一次性交多少钱')
    cuttest('两块五一套,三块八一斤,四块七一本,五块六一条')
    cuttest('小和尚留了一个像大和尚一样的和尚头')
    cuttest('我是中华人民共和国公民;我爸爸是共和党党员; 地铁和平门站')
    cuttest('张晓梅去人民医院做了个B超然后去买了件T恤')
    cuttest('AT&T是一件不错的公司,给你发offer了吗?')
    cuttest('C++和c#是什么关系?11+122=133,是吗?PI=3.14159')
    cuttest('你认识那个和主席握手的的哥吗?他开一辆黑色的士。')
test/parallel/test_pos_file.py (new file, 22 lines)
@@ -0,0 +1,22 @@
import urllib2
import sys,time
import sys
sys.path.append("../../")
import jieba
import jieba.posseg as pseg
jieba.enable_parallel(4)

url = sys.argv[1]
content = open(url,"rb").read()
t1 = time.time()
words = list(pseg.cut(content))

t2 = time.time()
tm_cost = t2-t1

log_f = open("1.log","wb")
for w in words:
    print >> log_f, w.encode("utf-8"), "/" ,

print 'speed' , len(content)/tm_cost, " bytes/second"
test/test_change_dictpath.py (new file, 27 lines)
@@ -0,0 +1,27 @@
#encoding=utf-8
import sys
sys.path.append("../")
import jieba

def cuttest(test_sent):
    result = jieba.cut(test_sent)
    print " ".join(result)

def testcase():
    cuttest("这是一个伸手不见五指的黑夜。我叫孙悟空,我爱北京,我爱Python和C++。")
    cuttest("我不喜欢日本和服。")
    cuttest("雷猴回归人间。")
    cuttest("工信处女干事每月经过下属科室都要亲口交代24口交换机等技术性器件的安装工作")
    cuttest("我需要廉租房")
    cuttest("永和服装饰品有限公司")
    cuttest("我爱北京天安门")
    cuttest("abc")
    cuttest("隐马尔可夫")
    cuttest("雷猴是个好网站")

if __name__ == "__main__":
    testcase()
    jieba.set_dictionary("foobar.txt")
    print "================================"
    testcase()
@@ -89,4 +89,8 @@ if __name__ == "__main__":
     cuttest('一次性交多少钱')
     cuttest('两块五一套,三块八一斤,四块七一本,五块六一条')
     cuttest('小和尚留了一个像大和尚一样的和尚头')
     cuttest('我是中华人民共和国公民;我爸爸是共和党党员; 地铁和平门站')
+    cuttest('张晓梅去人民医院做了个B超然后去买了件T恤')
+    cuttest('AT&T是一件不错的公司,给你发offer了吗?')
+    cuttest('C++和c#是什么关系?11+122=133,是吗?PI=3.14159')
+    cuttest('你认识那个和主席握手的的哥吗?他开一辆黑色的士。')
test/test_cutall.py (new file, 95 lines)
@@ -0,0 +1,95 @@
#encoding=utf-8
import sys
sys.path.append("../")
import jieba

def cuttest(test_sent):
    result = jieba.cut(test_sent,cut_all=True)
    print("/ ".join(result))


if __name__ == "__main__":
    cuttest("这是一个伸手不见五指的黑夜。我叫孙悟空,我爱北京,我爱Python和C++。")
    cuttest("我不喜欢日本和服。")
    cuttest("雷猴回归人间。")
    cuttest("工信处女干事每月经过下属科室都要亲口交代24口交换机等技术性器件的安装工作")
    cuttest("我需要廉租房")
    cuttest("永和服装饰品有限公司")
    cuttest("我爱北京天安门")
    cuttest("abc")
    cuttest("隐马尔可夫")
    cuttest("雷猴是个好网站")
    cuttest("“Microsoft”一词由“MICROcomputer(微型计算机)”和“SOFTware(软件)”两部分组成")
    cuttest("草泥马和欺实马是今年的流行词汇")
    cuttest("伊藤洋华堂总府店")
    cuttest("中国科学院计算技术研究所")
    cuttest("罗密欧与朱丽叶")
    cuttest("我购买了道具和服装")
    cuttest("PS: 我觉得开源有一个好处,就是能够敦促自己不断改进,避免敞帚自珍")
    cuttest("湖北省石首市")
    cuttest("湖北省十堰市")
    cuttest("总经理完成了这件事情")
    cuttest("电脑修好了")
    cuttest("做好了这件事情就一了百了了")
    cuttest("人们审美的观点是不同的")
    cuttest("我们买了一个美的空调")
    cuttest("线程初始化时我们要注意")
    cuttest("一个分子是由好多原子组织成的")
    cuttest("祝你马到功成")
    cuttest("他掉进了无底洞里")
    cuttest("中国的首都是北京")
    cuttest("孙君意")
    cuttest("外交部发言人马朝旭")
    cuttest("领导人会议和第四届东亚峰会")
    cuttest("在过去的这五年")
    cuttest("还需要很长的路要走")
    cuttest("60周年首都阅兵")
    cuttest("你好人们审美的观点是不同的")
    cuttest("买水果然后来世博园")
    cuttest("买水果然后去世博园")
    cuttest("但是后来我才知道你是对的")
    cuttest("存在即合理")
    cuttest("的的的的的在的的的的就以和和和")
    cuttest("I love你,不以为耻,反以为rong")
    cuttest("因")
    cuttest("")
    cuttest("hello你好人们审美的观点是不同的")
    cuttest("很好但主要是基于网页形式")
    cuttest("hello你好人们审美的观点是不同的")
    cuttest("为什么我不能拥有想要的生活")
    cuttest("后来我才")
    cuttest("此次来中国是为了")
    cuttest("使用了它就可以解决一些问题")
    cuttest(",使用了它就可以解决一些问题")
    cuttest("其实使用了它就可以解决一些问题")
    cuttest("好人使用了它就可以解决一些问题")
    cuttest("是因为和国家")
    cuttest("老年搜索还支持")
    cuttest("干脆就把那部蒙人的闲法给废了拉倒!RT @laoshipukong : 27日,全国人大常委会第三次审议侵权责任法草案,删除了有关医疗损害责任“举证倒置”的规定。在医患纠纷中本已处于弱势地位的消费者由此将陷入万劫不复的境地。 ")
    cuttest("大")
    cuttest("")
    cuttest("他说的确实在理")
    cuttest("长春市长春节讲话")
    cuttest("结婚的和尚未结婚的")
    cuttest("结合成分子时")
    cuttest("旅游和服务是最好的")
    cuttest("这件事情的确是我的错")
    cuttest("供大家参考指正")
    cuttest("哈尔滨政府公布塌桥原因")
    cuttest("我在机场入口处")
    cuttest("邢永臣摄影报道")
    cuttest("BP神经网络如何训练才能在分类时增加区分度?")
    cuttest("南京市长江大桥")
    cuttest("应一些使用者的建议,也为了便于利用NiuTrans用于SMT研究")
    cuttest('长春市长春药店')
    cuttest('邓颖超生前最喜欢的衣服')
    cuttest('胡锦涛是热爱世界和平的政治局常委')
    cuttest('程序员祝海林和朱会震是在孙健的左面和右面, 范凯在最右面.再往左是李松洪')
    cuttest('一次性交多少钱')
    cuttest('两块五一套,三块八一斤,四块七一本,五块六一条')
    cuttest('小和尚留了一个像大和尚一样的和尚头')
    cuttest('我是中华人民共和国公民;我爸爸是共和党党员; 地铁和平门站')
    cuttest('张晓梅去人民医院做了个B超然后去买了件T恤')
    cuttest('AT&T是一件不错的公司,给你发offer了吗?')
    cuttest('C++和c#是什么关系?11+122=133,是吗?PI=3.14159')
    cuttest('你认识那个和主席握手的的哥吗?他开一辆黑色的士。')
@@ -2,6 +2,7 @@ import sys,time
 import sys
 sys.path.append("../")
 import jieba
+jieba.initialize()

 url = sys.argv[1]
 content = open(url,"rb").read()
@@ -15,5 +16,6 @@ log_f = open("1.log","wb")

 log_f.write(bytes("/ ".join(words),'utf-8'))

+print('cost',tm_cost)
 print('speed' , len(content)/tm_cost, " bytes/second")
test/test_multithread.py (new file, 29 lines)
@@ -0,0 +1,29 @@
#encoding=utf-8
import sys
import threading
sys.path.append("../")

import jieba

class Worker(threading.Thread):
    def run(self):
        seg_list = jieba.cut("我来到北京清华大学",cut_all=True)
        print "Full Mode:" + "/ ".join(seg_list) # full mode

        seg_list = jieba.cut("我来到北京清华大学",cut_all=False)
        print "Default Mode:" + "/ ".join(seg_list) # default mode

        seg_list = jieba.cut("他来到了网易杭研大厦")
        print ", ".join(seg_list)

        seg_list = jieba.cut_for_search("小明硕士毕业于中国科学院计算所,后在日本京都大学深造") # search engine mode
        print ", ".join(seg_list)
workers = []
for i in xrange(10):
    worker = Worker()
    workers.append(worker)
    worker.start()

for worker in workers:
    worker.join()
test/test_pos_file.py (new file, 22 lines)
@@ -0,0 +1,22 @@
import urllib2
import sys,time
import sys
sys.path.append("../")
import jieba
jieba.initialize()
import jieba.posseg as pseg

url = sys.argv[1]
content = open(url,"rb").read()
t1 = time.time()
words = list(pseg.cut(content))

t2 = time.time()
tm_cost = t2-t1

log_f = open("1.log","wb")
for w in words:
    print(w.encode("utf-8"), "/" ,file=log_f)

print 'speed' , len(content)/tm_cost, " bytes/second"