Server: Apache
System: Linux srv1.prosuiteplus.com 5.4.0-216-generic #236-Ubuntu SMP Fri Apr 11 19:53:21 UTC 2025 x86_64
User: prosuiteplus (1001)
PHP: 8.3.20
Disabled: NONE
File: //usr/lib/python3/dist-packages/pygments/__pycache__/lexer.cpython-38.pyc

`a�[Ny�@sdZddlmZddlZddlZddlZddlmZmZddl	m
Z
ddlmZm
Z
mZmZddlmZmZmZmZmZmZmZmZmZddlmZd	d
ddd
dddddddgZdddddgZedd��ZGdd�de �Z!ee!�Gdd	�d	e"��Z#Gdd�de#�Z$Gd d�de%�Z&Gd!d"�d"e"�Z'e'�Z(Gd#d$�d$e)�Z*Gd%d&�d&e"�Z+d'd�Z,Gd(d)�d)e"�Z-e-�Z.d*d�Z/Gd+d�d�Z0Gd,d�de�Z1Gd-d.�d.e!�Z2ee2�Gd/d
�d
e#��Z3Gd0d
�d
e"�Z4Gd1d�de3�Z5d2d3�Z6Gd4d5�d5e2�Z7ee7�Gd6d7�d7e3��Z8dS)8z�
    pygments.lexer
    ~~~~~~~~~~~~~~

    Base lexer classes.

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
�)�print_functionN)�
apply_filters�Filter)�get_filter_by_name)�Error�Text�Other�
_TokenType)	�get_bool_opt�get_int_opt�get_list_opt�make_analysator�	text_type�
add_metaclass�	iteritems�Future�guess_decode)�	regex_opt�Lexer�
RegexLexer�ExtendedRegexLexer�DelegatingLexer�LexerContext�include�inherit�bygroups�using�this�default�words)s�utf-8)s��zutf-32)s��zutf-32be)s��zutf-16)s��zutf-16becCsdS)N����xr"r"�0/usr/lib/python3/dist-packages/pygments/lexer.py�<lambda>$�r&c@seZdZdZdd�ZdS)�	LexerMetaz�
    This metaclass automagically converts ``analyse_text`` methods into
    static methods which always return float values.
    cCs(d|krt|d�|d<t�||||�S)N�analyse_text)r
�type�__new__)Zmcs�name�bases�dr"r"r%r+-szLexerMeta.__new__N)�__name__�
__module__�__qualname__�__doc__r+r"r"r"r%r('sr(c@sZeZdZdZdZgZgZgZgZdZ	dd�Z
dd�Zdd	�Zd
d�Z
dd
d�Zdd�ZdS)ra�
    Lexer for a specific language.

    Basic options recognized:
    ``stripnl``
        Strip leading and trailing newlines from the input (default: True).
    ``stripall``
        Strip all leading and trailing whitespace from the input
        (default: False).
    ``ensurenl``
        Make sure that the input ends with a newline (default: True).  This
        is required for some lexers that consume input linewise.

        .. versionadded:: 1.3

    ``tabsize``
        If given and greater than 0, expand tabs in the input (default: 0).
    ``encoding``
        If given, must be an encoding name. This encoding will be used to
        convert the input string to Unicode, if it is not already a Unicode
        string (default: ``'guess'``, which uses a simple UTF-8 / Locale /
        Latin1 detection).  Can also be ``'chardet'`` to use the chardet
        library, if it is installed.
    ``inencoding``
        Overrides the ``encoding`` if given.
    NrcKs�||_t|dd�|_t|dd�|_t|dd�|_t|dd�|_|�dd	�|_|�d
�pZ|j|_g|_	t
|dd�D]}|�|�qpdS)
N�stripnlT�stripallF�ensurenl�tabsizer�encoding�guessZ
inencoding�filtersr")�optionsr
r3r4r5rr6�getr7r9r�
add_filter)�selfr:�filter_r"r"r%�__init__bszLexer.__init__cCs(|jrd|jj|jfSd|jjSdS)Nz<pygments.lexers.%s with %r>z<pygments.lexers.%s>)r:�	__class__r/�r=r"r"r%�__repr__ns
�zLexer.__repr__cKs&t|t�st|f|�}|j�|�dS)z8
        Add a new stream filter to this lexer.
        N)�
isinstancerrr9�append)r=r>r:r"r"r%r<us
zLexer.add_filtercCsdS)a~
        Has to return a float between ``0`` and ``1`` that indicates
        if a lexer wants to highlight this text. Used by ``guess_lexer``.
        If this method returns ``0`` it won't highlight it in any case, if
        it returns ``1`` highlighting with this lexer is guaranteed.

        The `LexerMeta` metaclass automatically wraps this function so
        that it works like a static method (no ``self`` or ``cls``
        parameter) and the return value is automatically converted to
        `float`. If the return value is an object that is boolean `False`
        it's the same as if the return values was ``0.0``.
        Nr")�textr"r"r%r)}szLexer.analyse_textFcs�t�t�s�jdkr"t��\�}nȈjdkr�zddl}Wntk
rTtd��YnXd}tD].\}}��|�r^�t|�d��	|d�}q�q^|dkr�|�
�dd��}��	|�d�p�d	d�}|�n&��	�j����d
�r�td
�d��n��d
��r�td
�d����dd����d
d���j
�r2����n�j�rD��d���jdk�r\���j���j�rx��d��sx�d7���fdd�}	|	�}
|�s�t|
�j��}
|
S)a=
        Return an iterable of (tokentype, value) pairs generated from
        `text`. If `unfiltered` is set to `True`, the filtering mechanism
        is bypassed even if filters are defined.

        Also preprocess the text, i.e. expand tabs and strip it if
        wanted and applies registered filters.
        r8�chardetrNzkTo enable chardet encoding guessing, please install the chardet library from http://chardet.feedparser.org/�replaceir7r uz
�
�
c3s$����D]\}}}||fVq
dS�N)�get_tokens_unprocessed)�_�t�v�r=rEr"r%�streamer�sz"Lexer.get_tokens.<locals>.streamer)rCrr7rrF�ImportError�
_encoding_map�
startswith�len�decodeZdetectr;rGr4�stripr3r6�
expandtabsr5�endswithrr9)r=rEZ
unfilteredrLrFZdecodedZbomr7�encrP�streamr"rOr%�
get_tokens�sN	



�


zLexer.get_tokenscCst�dS)z�
        Return an iterable of (index, tokentype, value) pairs where "index"
        is the starting position of the token within the input text.

        In subclasses, implement this method as a generator to
        maximize effectiveness.
        N)�NotImplementedErrorrOr"r"r%rK�szLexer.get_tokens_unprocessed)F)r/r0r1r2r,�aliases�	filenamesZalias_filenamesZ	mimetypesZpriorityr?rBr<r)r[rKr"r"r"r%r3s
;c@s$eZdZdZefdd�Zdd�ZdS)ra 
    This lexer takes two lexers as arguments: a root lexer and
    a language lexer. First everything is scanned using the language
    lexer, afterwards all ``Other`` tokens are lexed using the root
    lexer.

    The lexers from the ``template`` lexer package use this base lexer.
    cKs0|f|�|_|f|�|_||_tj|f|�dSrJ)�
root_lexer�language_lexer�needlerr?)r=Z_root_lexerZ_language_lexerZ_needler:r"r"r%r?�szDelegatingLexer.__init__cCs�d}g}g}|j�|�D]H\}}}||jkrP|rF|�t|�|f�g}||7}q|�|||f�q|rx|�t|�|f�t||j�|��S)N�)r`rKrarDrT�
do_insertionsr_)r=rEZbuffered�
insertionsZ
lng_buffer�irMrNr"r"r%rK�s


�z&DelegatingLexer.get_tokens_unprocessedN)r/r0r1r2rr?rKr"r"r"r%r�s	c@seZdZdZdS)rzI
    Indicates that a state should include rules from another state.
    N�r/r0r1r2r"r"r"r%r�sc@seZdZdZdd�ZdS)�_inheritzC
    Indicates that a state should inherit
    cCsdS)Nrr"rAr"r"r%rBsz_inherit.__repr__N)r/r0r1r2rBr"r"r"r%rg�srgc@s eZdZdZdd�Zdd�ZdS)�combinedz:
    Indicates a state combined from multiple states.
    cGst�||�SrJ)�tupler+)�cls�argsr"r"r%r+szcombined.__new__cGsdSrJr")r=rkr"r"r%r?szcombined.__init__N)r/r0r1r2r+r?r"r"r"r%rh	srhc@sFeZdZdZdd�Zddd�Zddd�Zdd	d
�Zdd�Zd
d�Z	dS)�_PseudoMatchz:
    A pseudo match object constructed from a string.
    cCs||_||_dSrJ)�_text�_start)r=�startrEr"r"r%r?sz_PseudoMatch.__init__NcCs|jSrJ)rn�r=�argr"r"r%rosz_PseudoMatch.startcCs|jt|j�SrJ)rnrTrmrpr"r"r%�end"sz_PseudoMatch.endcCs|rtd��|jS)Nz
No such group)�
IndexErrorrmrpr"r"r%�group%sz_PseudoMatch.groupcCs|jfSrJ)rmrAr"r"r%�groups*sz_PseudoMatch.groupscCsiSrJr"rAr"r"r%�	groupdict-sz_PseudoMatch.groupdict)N)N)N)
r/r0r1r2r?rorrrtrurvr"r"r"r%rls


rlcsd�fdd�	}|S)zL
    Callback that yields multiple actions for each group in the match.
    Nc3s�t��D]�\}}|dkrqqt|�tkrR|�|d�}|r�|�|d�||fVq|�|d�}|dk	r|r||�|d�|_||t|�|d�|�|�D]}|r�|Vq�q|r�|��|_dS)N�)�	enumerater*r	rtro�posrlrr)�lexer�match�ctxre�action�data�item�rkr"r%�callback5s&�
zbygroups.<locals>.callback)Nr")rkr�r"r�r%r1sc@seZdZdZdS)�_ThiszX
    Special singleton used for indicating the caller class.
    Used by ``using``.
    Nrfr"r"r"r%r�Ksr�csji�d�kr:��d�}t|ttf�r.|�d<nd|f�d<�tkrTd��fdd�	}nd	���fdd�	}|S)
a�
    Callback that processes the match with a different lexer.

    The keyword arguments are forwarded to the lexer, except `state` which
    is handled separately.

    `state` specifies the state that the new lexer will start in, and can
    be an enumerable such as ('root', 'inline', 'string') or a simple
    string which is assumed to be on top of the root state.

    Note: For that to work, `_other` must not be an `ExtendedRegexLexer`.
    �state�stack�rootNc3sj�r��|j�|jf��}n|}|��}|j|��f��D]\}}}||||fVq<|rf|��|_dSrJ)�updater:r@rorKrtrrry�rzr{r|Zlx�srerMrN)�	gt_kwargs�kwargsr"r%r�iszusing.<locals>.callbackc3s^��|j��f��}|��}|j|��f��D]\}}}||||fVq0|rZ|��|_dSrJ)r�r:rorKrtrrryr���_otherr�r�r"r%r�xs
)N)N)�poprC�listrir)r�r�r�r�r"r�r%rSs



c@seZdZdZdd�ZdS)rz�
    Indicates a state or state action (e.g. #pop) to apply.
    For example default('#pop') is equivalent to ('', Token, '#pop')
    Note that state tuples may be used as well.

    .. versionadded:: 2.0
    cCs
||_dSrJ)r�)r=r�r"r"r%r?�szdefault.__init__N)r/r0r1r2r?r"r"r"r%r�sc@s"eZdZdZddd�Zdd�ZdS)	rz�
    Indicates a list of literal words that is transformed into an optimized
    regex that matches any of the words.

    .. versionadded:: 2.0
    rbcCs||_||_||_dSrJ)r�prefix�suffix)r=rr�r�r"r"r%r?�szwords.__init__cCst|j|j|jd�S)N�r�r�)rrr�r�rAr"r"r%r;�sz	words.getN)rbrb)r/r0r1r2r?r;r"r"r"r%r�s
c@sJeZdZdZdd�Zdd�Zdd�Zdd	�Zddd�Zd
d�Z	dd�Z
d
S)�RegexLexerMetazw
    Metaclass for RegexLexer, creates the self._tokens attribute from
    self.tokens on the first instantiation.
    cCs t|t�r|��}t�||�jS)zBPreprocess the regular expression component of a token definition.)rCrr;�re�compiler{)rj�regex�rflagsr�r"r"r%�_process_regex�s
zRegexLexerMeta._process_regexcCs&t|�tks"t|�s"td|f��|S)z5Preprocess the token component of a token definition.z2token type must be simple type or callable, not %r)r*r	�callable�AssertionError)rj�tokenr"r"r%�_process_token�s�zRegexLexerMeta._process_tokencCst|t�rd|dkrdS||kr$|fS|dkr0|S|dd�dkrRt|dd��Sdsbtd|��n�t|t�r�d	|j}|jd
7_g}|D],}||ks�td|��|�|�|||��q�|||<|fSt|t��r|D] }||ks�|dks�td
|��q�|Sd�std|��dS)z=Preprocess the state transition action of a token definition.�#pop����#pushN�z#pop:Fzunknown new state %rz_tmp_%drwzcircular state ref %r)r�r�zunknown new state zunknown new state def %r)	rC�str�intr�rh�_tmpname�extend�_process_stateri)rj�	new_state�unprocessed�	processedZ	tmp_state�itokensZistater"r"r%�_process_new_state�s>



���z!RegexLexerMeta._process_new_statecCs�t|�tkstd|��|ddks0td|��||kr@||Sg}||<|j}||D�],}t|t�r�||ks~td|��|�|�||t|���qZt|t�r�qZt|t	�r�|�
|j||�}|�t
�d�jd|f�qZt|�tks�td|��z|�|d||�}Wn<tk
�rB}	ztd	|d|||	f��W5d}	~	XYnX|�|d
�}
t|�dk�rfd}n|�
|d||�}|�||
|f�qZ|S)z%Preprocess a single state definition.zwrong state name %rr�#zinvalid state name %rzcircular state reference %rrbNzwrong rule def %rz+uncompilable regex %r in state %r of %r: %srw�)r*r�r��flagsrCrr�r�rgrr�r�rDr�r�r{rir��	Exception�
ValueErrorr�rT)rjr�r�r��tokensr�Ztdefr��rex�errr�r"r"r%r��sF
�

�
�zRegexLexerMeta._process_stateNcCs<i}|j|<|p|j|}t|�D]}|�|||�q$|S)z-Preprocess a dictionary of token definitions.)�_all_tokensr�r�r�)rjr,�	tokendefsr�r�r"r"r%�process_tokendefs
zRegexLexerMeta.process_tokendefc

Cs�i}i}|jD]�}|j�di�}t|�D]�\}}|�|�}|dkr||||<z|�t�}Wntk
rpYq(YnX|||<q(|�|d�}|dkr�q(||||d�<z|�t�}	Wntk
r�Yq(X||	||<q(q|S)a
        Merge tokens from superclasses in MRO order, returning a single tokendef
        dictionary.

        Any state that is not defined by a subclass will be inherited
        automatically.  States that *are* defined by subclasses will, by
        default, override that state in the superclass.  If a subclass wishes to
        inherit definitions from a superclass, it can use the special value
        "inherit", which will cause the superclass' state definition to be
        included at that point in the state.
        r�Nrw)�__mro__�__dict__r;r�indexrr�r�)
rjr�Zinheritable�cZtoksr��itemsZcuritemsZinherit_ndxZnew_inh_ndxr"r"r%�
get_tokendefs
s0


zRegexLexerMeta.get_tokendefscOsLd|jkr:i|_d|_t|d�r(|jr(n|�d|���|_tj	|f|�|�S)z:Instantiate cls after preprocessing its token definitions.�_tokensr�token_variantsrb)
r�r�r��hasattrr�r�r�r�r*�__call__)rjrk�kwdsr"r"r%r�;s
zRegexLexerMeta.__call__)N)r/r0r1r2r�r�r�r�r�r�r�r"r"r"r%r��s#,
1r�c@s$eZdZdZejZiZddd�ZdS)rz�
    Base for simple stateful regular expression-based lexers.
    Simplifies the lexing process so that you need only
    provide a list of states and regular expressions.
    �r�c
cs�d}|j}t|�}||d}|D�]\}}}	|||�}
|
r"|dk	rxt|�tkrb|||
��fVn|||
�D]
}|Vql|
��}|	dk	�r"t|	t�r�|	D]8}|dkr�|��q�|dkr�|�	|d�q�|�	|�q�nBt|	t
�r�||	d�=n,|	dk�r|�	|d�nd�std|	��||d}qq"zP||dk�r^d	g}|d	}|tdfV|d
7}Wq|t
||fV|d
7}Wqtk
�r�Y�q�YqXqdS)z}
        Split ``text`` into (tokentype, text) pairs.

        ``stack`` is the initial stack (default: ``['root']``)
        rr�Nr�r�F�wrong state def: %rrHr�rw)r�r�r*r	rtrrrCrir�rDr�r�rrrs)
r=rEr�ryr�Z
statestack�statetokens�rexmatchr}r��mrr�r"r"r%rKhsN





z!RegexLexer.get_tokens_unprocessedN)r�)	r/r0r1r2r��	MULTILINEr�r�rKr"r"r"r%rIsc@s"eZdZdZddd�Zdd�ZdS)rz9
    A helper object that holds lexer position data.
    NcCs*||_||_|pt|�|_|p"dg|_dS)Nr�)rEryrTrrr�)r=rEryr�rrr"r"r%r?�szLexerContext.__init__cCsd|j|j|jfS)NzLexerContext(%r, %r, %r))rEryr�rAr"r"r%rB�s
�zLexerContext.__repr__)NN)r/r0r1r2r?rBr"r"r"r%r�s
c@seZdZdZddd�ZdS)rzE
    A RegexLexer that uses a context object to store its state.
    Nccs|j}|st|d�}|d}n|}||jd}|j}|D�]6\}}}|||j|j�}	|	r:|dk	r�t|�tkr�|j||	��fV|	��|_n*|||	|�D]
}
|
Vq�|s�||jd}|dk	�rnt	|t
��r|D]B}|dkr�|j��q�|dk�r|j�|jd�q�|j�|�q�nJt	|t
��r0|j|d�=n0|dk�rN|j�|jd�nd�s`td|��||jd}q6q:zz|j|jk�r�W�q||jd	k�r�dg|_|d}|jtd	fV|jd
7_Wq6|jt||jfV|jd
7_Wq6tk
�r
Y�qYq6Xq6dS)z
        Split ``text`` into (tokentype, text) pairs.
        If ``context`` is given, use this lexer context instead.
        rr�r�Nr�r�Fr�rHrw)r�rr�rEryrrr*r	rtrCrir�rDr�r�rrrs)r=rE�contextr�r|r�r�r}r�r�rr�r"r"r%rK�s\




z)ExtendedRegexLexer.get_tokens_unprocessed)NN)r/r0r1r2rKr"r"r"r%r�sc	cs�t|�}zt|�\}}Wn&tk
r>|D]
}|Vq,YdSXd}d}|D]�\}}}	|dkrb|}d}
|�r
|t|	�|k�r
|	|
||�}|||fV|t|�7}|D]"\}}
}||
|fV|t|�7}q�||}
zt|�\}}Wqftk
�rd}Y�q
YqfXqf|||	|
d�fV|t|	�|
7}qL|�r�|�p>d}|D]$\}}}	|||	fV|t|	�7}�qDzt|�\}}Wn tk
�r�d}Y�q�YnX�q0dS)ag
    Helper for lexers which must combine the results of several
    sublexers.

    ``insertions`` is a list of ``(index, itokens)`` pairs.
    Each ``itokens`` iterable should be inserted at position
    ``index`` into the token stream given by the ``tokens``
    argument.

    The result is a combined token stream.

    TODO: clean up the code here.
    NTrF)�iter�next�
StopIterationrT)rdr�r�r�rZrealposZinsleftrerMrNZoldiZtmpvalZit_indexZit_tokenZit_value�pr"r"r%rc�sL
rcc@seZdZdZdd�ZdS)�ProfilingRegexLexerMetaz>Metaclass for ProfilingRegexLexer, collects regex timing info.csLt|t�r t|j|j|jd��n|�t��|��tjf����fdd�	}|S)Nr�cs`�jd���fddg�}t��}��|||�}t��}|dd7<|d||7<|S)Nr�rr!rw)�
_prof_data�
setdefault�timer{)rEry�endpos�infoZt0�res�t1�rjZcompiledr�r�r"r%�
match_func@sz:ProfilingRegexLexerMeta._process_regex.<locals>.match_func)	rCrrr�r�r�r��sys�maxsize)rjr�r�r�r�r"r�r%r�8s

�z&ProfilingRegexLexerMeta._process_regexN)r/r0r1r2r�r"r"r"r%r�5sr�c@s"eZdZdZgZdZddd�ZdS)�ProfilingRegexLexerzFDrop-in replacement for RegexLexer that does profiling of its regexes.�r�c#s��jj�i�t��||�D]
}|Vq�jj��}tdd�|��D��fdd�dd�}tdd�|D��}t	�t	d�jj
t|�|f�t	d	�t	d
d�t	d�|D]}t	d
|�q�t	d	�dS)NcssN|]F\\}}\}}|t|��d��dd�dd�|d|d||fVqdS)zu'z\\�\N�Ai�)�reprrVrG)�.0r��r�nrMr"r"r%�	<genexpr>Xs�
�z=ProfilingRegexLexer.get_tokens_unprocessed.<locals>.<genexpr>cs
|�jSrJ)�_prof_sort_indexr#rAr"r%r&[r'z<ProfilingRegexLexer.get_tokens_unprocessed.<locals>.<lambda>T)�key�reversecss|]}|dVqdS)�Nr")r�r$r"r"r%r�]sz2Profiling result for %s lexing %d chars in %.3f mszn==============================================================================================================z$%-20s %-64s ncalls  tottime  percall)r�r�zn--------------------------------------------------------------------------------------------------------------z%-20s %-65s %5d %8.4f %8.4f)r@r�rDrrKr��sortedr��sum�printr/rT)r=rEr��tokZrawdatar~Z	sum_totalr.r"rAr%rKRs*�
��z*ProfilingRegexLexer.get_tokens_unprocessedN)r�)r/r0r1r2r�r�rKr"r"r"r%r�Ksr�)9r2Z
__future__rr�r�r�Zpygments.filterrrZpygments.filtersrZpygments.tokenrrrr	Z
pygments.utilr
rrr
rrrrrZpygments.regexoptr�__all__rR�staticmethodZ_default_analyser*r(�objectrrr�rrgrrirhrlrr�rrrrr�rrrrcr�r�r"r"r"r%�<module>sh
,��'
2)WE?