Server: Apache
System: Linux srv1.prosuiteplus.com 5.4.0-216-generic #236-Ubuntu SMP Fri Apr 11 19:53:21 UTC 2025 x86_64
User: prosuiteplus (1001)
PHP: 8.3.20
Disabled: NONE
File: /usr/lib/python3/dist-packages/pygments/formatters/__pycache__/other.cpython-38.pyc
# -*- coding: utf-8 -*-
"""
    pygments.formatters.other
    ~~~~~~~~~~~~~~~~~~~~~~~~~

    Other formatters: NullFormatter, RawTokenFormatter.

    :copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

from pygments.formatter import Formatter
from pygments.util import OptionError, get_choice_opt
from pygments.token import Token
from pygments.console import colorize

__all__ = ['NullFormatter', 'RawTokenFormatter', 'TestcaseFormatter']


class NullFormatter(Formatter):
    """
    Output the text unchanged without any formatting.
    """
    name = 'Text only'
    aliases = ['text', 'null']
    filenames = ['*.txt']

    def format(self, tokensource, outfile):
        enc = self.encoding
        for ttype, value in tokensource:
            if enc:
                outfile.write(value.encode(enc))
            else:
                outfile.write(value)


class RawTokenFormatter(Formatter):
    r"""
    Format tokens as a raw representation for storing token streams.

    The format is ``tokentype<TAB>repr(tokenstring)\n``. The output can later
    be converted to a token stream with the `RawTokenLexer`, described in the
    :doc:`lexer list <lexers>`.

    Only two options are accepted:

    `compress`
        If set to ``'gz'`` or ``'bz2'``, compress the output with the given
        compression algorithm after encoding (default: ``''``).
    `error_color`
        If set to a color name, highlight error tokens using that color.  If
        set but with no value, defaults to ``'red'``.

        .. versionadded:: 0.11

    """
    name = 'Raw tokens'
    aliases = ['raw', 'tokens']
    filenames = ['*.raw']

    unicodeoutput = False

    def __init__(self, **options):
        Formatter.__init__(self, **options)
        # The raw token stream is always emitted as ASCII, regardless of any
        # encoding option passed in.
        self.encoding = 'ascii'
        self.compress = get_choice_opt(options, 'compress',
                                       ['', 'none', 'gz', 'bz2'], '')
        self.error_color = options.get('error_color', None)
        if self.error_color is True:
            self.error_color = 'red'
        if self.error_color is not None:
            try:
                colorize(self.error_color, '')
            except KeyError:
                raise ValueError("Invalid color %r specified" %
                                 self.error_color)

    def format(self, tokensource, outfile):
        # This formatter writes bytes, so it refuses text-mode output files.
        try:
            outfile.write(b'')
        except TypeError:
            raise TypeError('The raw tokens formatter needs a binary '
                            'output file')
        if self.compress == 'gz':
            import gzip
            outfile = gzip.GzipFile('', 'wb', 9, outfile)

            def write(text):
                outfile.write(text.encode())
            flush = outfile.flush
        elif self.compress == 'bz2':
            import bz2
            compressor = bz2.BZ2Compressor(9)

            def write(text):
                outfile.write(compressor.compress(text.encode()))

            def flush():
                outfile.write(compressor.flush())
                outfile.flush()
        else:
            def write(text):
                outfile.write(text.encode())
            flush = outfile.flush

        if self.error_color:
            for ttype, value in tokensource:
                line = "%s\t%r\n" % (ttype, value)
                if ttype is Token.Error:
                    write(colorize(self.error_color, line))
                else:
                    write(line)
        else:
            for ttype, value in tokensource:
                write("%s\t%r\n" % (ttype, value))
        flush()


TESTCASE_BEFORE = u'''\
    def testNeedsName(self):
        fragment = %r
        tokens = [
'''
TESTCASE_AFTER = u'''\
        ]
        self.assertEqual(tokens, list(self.lexer.get_tokens(fragment)))
'''


class TestcaseFormatter(Formatter):
    """
    Format tokens as appropriate for a new testcase.

    .. versionadded:: 2.0
    """
    name = 'Testcase'
    aliases = ['testcase']

    def __init__(self, **options):
        Formatter.__init__(self, **options)
        if self.encoding is not None and self.encoding != 'utf-8':
            raise ValueError("Only None and utf-8 are allowed encodings.")

    def format(self, tokensource, outfile):
        indentation = ' ' * 12
        rawbuf = []
        outbuf = []
        for ttype, value in tokensource:
            rawbuf.append(value)
            outbuf.append('%s(%s, %r),\n' % (indentation, ttype, value))

        before = TESTCASE_BEFORE % (u''.join(rawbuf),)
        during = u''.join(outbuf)
        after = TESTCASE_AFTER
        if self.encoding is None:
            outfile.write(before + during + after)
        else:
            outfile.write(before.encode('utf-8'))
            outfile.write(during.encode('utf-8'))
            outfile.write(after.encode('utf-8'))
        outfile.flush()
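A minimal round-trip sketch for RawTokenFormatter and NullFormatter, assuming only a stock Pygments install; the sample snippet and variable names below are illustrative and not part of the module above.

from pygments import highlight
from pygments.lexers import PythonLexer
from pygments.lexers.special import RawTokenLexer
from pygments.formatters import RawTokenFormatter, NullFormatter

# Serialize a token stream: one "tokentype<TAB>repr(value)" line per token.
# RawTokenFormatter forces ASCII output, so highlight() returns bytes here.
raw = highlight('print("hi")\n', PythonLexer(), RawTokenFormatter())
print(raw.decode('ascii'))

# Feed the dump back through RawTokenLexer; NullFormatter drops all markup,
# so the original source text comes back out.
print(highlight(raw.decode('ascii'), RawTokenLexer(), NullFormatter()))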
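Likewise, a short sketch of TestcaseFormatter output under the same assumptions; the fragment 'x = 1\n' is only an example input.

from pygments import highlight
from pygments.lexers import PythonLexer
from pygments.formatters import TestcaseFormatter

# Emits a ready-to-paste unittest body: the fragment plus the expected
# (token type, value) pairs produced by the lexer.
print(highlight('x = 1\n', PythonLexer(), TestcaseFormatter()))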