
All Samples(3)  |  Call(3)  |  Derive(0)  |  Import(0)

src/s/t/streamcorpus_pipeline-0.5.23.dev1/streamcorpus_pipeline/_upgrade_streamcorpus_v0_3_0.py   streamcorpus_pipeline
 
                new_token = streamcorpus.Token()
                new_sent.tokens.append(new_token)
 
                for attr in ['token_num', 'token', 'offsets', 'sentence_pos', 'lemma', 'pos', 'entity_type', 'mention_id', 'equiv_id', 'parent_id', 'dependency_path']:

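The body of the attribute loop above is cut off in this excerpt. Below is a minimal sketch of how such a field-by-field copy could look, using getattr/setattr; the function name copy_token_fields and the old_token argument are placeholders, not names taken from _upgrade_streamcorpus_v0_3_0.py:

import streamcorpus

def copy_token_fields(old_token, attrs):
    # Illustrative only: build a fresh Token and copy the named Thrift
    # fields across. The real upgrade code may treat some fields
    # (e.g. offsets) specially.
    new_token = streamcorpus.Token()
    for attr in attrs:
        setattr(new_token, attr, getattr(old_token, attr, None))
    return new_token
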
src/s/t/streamcorpus_pipeline-0.5.23.dev1/streamcorpus_pipeline/_tokenizer.py   streamcorpus_pipeline
                token_num += 1
                sentence_pos += 1
                sent.tokens.append(tok)
            sentences.append(sent)
        return sentences
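
The _tokenizer.py excerpt shows the pattern shared by all three call sites: build a Sentence, append Token objects while tracking token_num and sentence_pos, and collect the finished sentences. A minimal sketch of that pattern follows, assuming the streamcorpus Thrift classes Sentence and Token accept keyword initialization for the fields listed in the first excerpt; make_sentence and its arguments are illustrative names:

import streamcorpus

def make_sentence(words, token_num=0, sentence_pos=0):
    # Illustrative only: wrap a list of word strings in a Sentence,
    # incrementing token_num and sentence_pos per token as in the
    # _tokenizer.py excerpt above.
    sent = streamcorpus.Sentence(tokens=[])
    for word in words:
        tok = streamcorpus.Token(
            token_num=token_num,
            token=word,
            sentence_pos=sentence_pos,
        )
        sent.tokens.append(tok)
        token_num += 1
        sentence_pos += 1
    return sent, token_num, sentence_pos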

src/s/t/streamcorpus_pipeline-0.5.23.dev1/streamcorpus_pipeline/_lingpipe.py   streamcorpus_pipeline
                        try:
                            tok = tokens.next()
                            sent.tokens.append(tok)
                            #logger.debug('got token: %r  %d %d' % (tok.token, tok.mention_id, tok.sentence_pos))
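
The _lingpipe.py excerpt pulls tokens from an iterator with the Python 2 call tokens.next() inside a try block (the matching except clause is cut off above). A minimal sketch of the same pattern in Python 3, using the built-in next() and catching StopIteration; the function name drain_tokens and its arguments are illustrative:

def drain_tokens(tokens, sent):
    # Append Token objects from an iterator onto sent.tokens, stopping
    # cleanly when the iterator is exhausted. Python 2 code such as
    # _lingpipe.py writes tokens.next() instead of next(tokens).
    while True:
        try:
            tok = next(tokens)
        except StopIteration:
            break
        sent.tokens.append(tok)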