
Parse a URL into 6 components:
<scheme>://<netloc>/<path>;<params>?<query>#<fragment>
Return a 6-tuple: (scheme, netloc, path, params, query, fragment).
Note that we don't break the components up in smaller bits
(e.g. netloc is a single string) and we don't expand % escapes.

def urlparse(url, scheme='', allow_fragments=True):
    """Parse a URL into 6 components:
    <scheme>://<netloc>/<path>;<params>?<query>#<fragment>
    Return a 6-tuple: (scheme, netloc, path, params, query, fragment).
    Note that we don't break the components up in smaller bits
    (e.g. netloc is a single string) and we don't expand % escapes."""
    # urlsplit, uses_params, _splitparams and ParseResult are defined
    # elsewhere in the same module.
    splitresult = urlsplit(url, scheme, allow_fragments)  # renamed from `tuple` to avoid shadowing the builtin
    scheme, netloc, url, query, fragment = splitresult
    if scheme in uses_params and ';' in url:
        url, params = _splitparams(url)
    else:
        params = ''
    return ParseResult(scheme, netloc, url, params, query, fragment)
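A minimal usage sketch of the function above, using the Python 3 standard-library location `urllib.parse` (on Python 2 the same function lives in the `urlparse` module); the URL is an illustrative example, not taken from the samples:

```python
from urllib.parse import urlparse

# Parse a URL containing all six components.
result = urlparse('http://www.example.com/path;type=a?q=1#frag')
print(result.scheme)    # 'http'
print(result.netloc)    # 'www.example.com'
print(result.path)      # '/path'
print(result.params)    # 'type=a'  (split off because 'http' uses params)
print(result.query)     # 'q=1'
print(result.fragment)  # 'frag'
```

The result is a `ParseResult` named tuple, so the components are available both by index and by attribute name.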


src/b/i/Bio_Eutils-1.63/Bio_Eutils/Entrez/Parser.py   Bio_Eutils
# Importing these functions with a leading underscore as they are not intended for reuse
from Bio_Eutils._py3k import urlopen as _urlopen
from Bio_Eutils._py3k import urlparse as _urlparse
from Bio_Eutils._py3k import unicode
 
        we try to download it. If new DTDs become available from NCBI,
        putting them in Bio/Entrez/DTDs will allow the parser to see them."""
        urlinfo = _urlparse(systemId)
        #Following attribute requires Python 2.5+
        #if urlinfo.scheme=='http':