I know there is already an accepted answer... but normally you would simply grab the SentenceAnnotations from the annotated document.
import edu.stanford.nlp.ling.CoreAnnotations.NamedEntityTagAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.PartOfSpeechAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.SentencesAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.TextAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations.TokensAnnotation;
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.util.CoreMap;
import java.util.List;
import java.util.Properties;

// creates a StanfordCoreNLP object, with POS tagging, lemmatization, NER, parsing, and coreference resolution
Properties props = new Properties();
props.put("annotators", "tokenize, ssplit, pos, lemma, ner, parse, dcoref");
StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

// read some text in the text variable
String text = ... // Add your text here!

// create an empty Annotation just with the given text
Annotation document = new Annotation(text);

// run all Annotators on this text
pipeline.annotate(document);

// these are all the sentences in this document
// a CoreMap is essentially a Map that uses class objects as keys and has values with custom types
List<CoreMap> sentences = document.get(SentencesAnnotation.class);

for (CoreMap sentence : sentences) {
    // traversing the words in the current sentence
    // a CoreLabel is a CoreMap with additional token-specific methods
    for (CoreLabel token : sentence.get(TokensAnnotation.class)) {
        // this is the text of the token
        String word = token.get(TextAnnotation.class);
        // this is the POS tag of the token
        String pos = token.get(PartOfSpeechAnnotation.class);
        // this is the NER label of the token
        String ne = token.get(NamedEntityTagAnnotation.class);
    }
}
Source: http://nlp.stanford.edu/software/corenlp.shtml (lower half of the page)
Also, if you are only looking for sentences, you can drop later steps such as "parse" and "dcoref" from the pipeline initialization, which will save some loading and processing time. Rock and roll. ~K
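As a side note, if you only need a rough sentence split and don't want to load any CoreNLP models at all, the JDK's built-in java.text.BreakIterator can do basic, locale-aware sentence segmentation. This is not part of the original answer, just a minimal dependency-free sketch:

```java
import java.text.BreakIterator;
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

public class SentenceSplit {

    // Naive sentence splitting using only the JDK's BreakIterator;
    // no CoreNLP models are loaded. Class and method names here are
    // illustrative, not from any library.
    static List<String> sentences(String text) {
        BreakIterator it = BreakIterator.getSentenceInstance(Locale.US);
        it.setText(text);
        List<String> out = new ArrayList<>();
        int start = it.first();
        // walk the boundary positions and slice out each sentence
        for (int end = it.next(); end != BreakIterator.DONE; start = end, end = it.next()) {
            out.add(text.substring(start, end).trim());
        }
        return out;
    }

    public static void main(String[] args) {
        for (String s : sentences("This is one sentence. And here is another!")) {
            System.out.println(s);
        }
    }
}
```

BreakIterator is much less accurate than CoreNLP's `ssplit` on tricky input (abbreviations, quotes, etc.), but for clean prose it is often good enough and starts instantly.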
How can I split a text or paragraph into sentences using the Stanford parser? Is there any method to extract sentences, such as the
getSentencesFromString()
provided for Ruby?