Server: LiteSpeed
System: Linux atali.colombiahosting.com.co 5.14.0-570.12.1.el9_6.x86_64 #1 SMP PREEMPT_DYNAMIC Tue May 13 06:11:55 EDT 2025 x86_64
User: coopserp (1713)
PHP: 8.2.29
Disabled: dl,exec,passthru,proc_open,proc_close,shell_exec,memory_limit,system,popen,curl_multi_exec,show_source,symlink,link,leak,listen,diskfreespace,tmpfile,ignore_user_abord,highlight_file,source,show_source,fpaththru,virtual,posix_ctermid,posix_getcwd,posix_getegid,posix_geteuid,posix_getgid,posix_getgrgid,posix_getgrnam,posix_getgroups,posix_getlogin,posix_getpgid,posix_getpgrp,posix_getpid,posix,posix_getppid,posix_getpwnam,posix_getpwuid,posix_getrlimit,posix_getsid,posix_getuid,posix_isatty,posix_kill,posix_mkfifo,posix_setegid,posix_seteuid,posix_setgid,posix_setpgid,posix_setsid,posix_setid,posix_times,posix_ttyname,posix_uname,proc_get_status,proc_nice,proc_terminate
Upload Files
File: //proc/thread-self/root/proc/self/root/proc/self/root/proc/thread-self/cwd/site-packages.zip
[Binary content omitted: ZIP archive data. The archive's first entry is
netifaces.cpython-39-x86_64-linux-gnu.so, an ELF 64-bit x86_64 shared object
(netifaces 0.11.0, built for CPython 3.9 with GCC 4.8.2). The only readable
content is the module's embedded docstrings, recovered below.]

ifaddresses -- Obtain information about the specified network interface.

    Returns a dict whose keys are equal to the address family constants,
    e.g. netifaces.AF_INET, and whose values are a list of addresses in
    that family that are attached to the network interface.

interfaces -- Obtain a list of the interfaces available on this machine.

gateways -- Obtain a list of the gateways on this machine.

    Returns a dict whose keys are equal to the address family constants,
    e.g. netifaces.AF_INET, and whose values are a list of tuples of the
    format (<address>, <interface>, <is_default>).

    There is also a special entry with the key 'default', which you can use
    to quickly obtain the default gateway for a particular address family.

    There may in general be multiple gateways; different address
    families may have different gateway settings (e.g. AF_INET vs AF_INET6)
    and on some systems it's also possible to have interface-specific
    default gateways.

File: psutil/_compat.py

# Copyright (c) 2009, Giampaolo Rodola'. All rights reserved.
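The recovered docstrings describe the mapping returned by netifaces.gateways(). The sketch below hard-codes a sample of that shape rather than calling netifaces (which may not be installed here); the addresses are made up, and AF_INET == 2 is assumed (its value on Linux).

```python
# Illustrative sample of the gateways() return shape documented above.
# Assumption: AF_INET == 2 (Linux value); all addresses are invented.
AF_INET = 2

sample_gateways = {
    'default': {AF_INET: ('192.0.2.1', 'eth0')},
    AF_INET: [
        ('192.0.2.1', 'eth0', True),    # (<address>, <interface>, <is_default>)
        ('192.0.2.254', 'eth1', False),
    ],
}

# The special 'default' entry gives the default gateway per address family.
default_addr, default_iface = sample_gateways['default'][AF_INET]

# The per-family list may hold several gateways; the flag marks the default.
defaults = [g for g in sample_gateways[AF_INET] if g[2]]
```

Both lookups use the same family constant as key, which is why the docstring stresses that different families can carry different gateway settings.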
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

"""Module which provides compatibility with older Python versions.
This aims to be future-compatible rather than the opposite (prefer the
latest Python 3 way of doing things).
"""

import collections
import contextlib
import errno
import functools
import os
import sys
import types


# fmt: off
__all__ = [
    # constants
    "PY3",
    # builtins
    "long", "range", "super", "unicode", "basestring",
    # literals
    "b",
    # collections module
    "lru_cache",
    # shutil module
    "which", "get_terminal_size",
    # contextlib module
    "redirect_stderr",
    # python 3 exceptions
    "FileNotFoundError", "PermissionError", "ProcessLookupError",
    "InterruptedError", "ChildProcessError", "FileExistsError",
]
# fmt: on


PY3 = sys.version_info[0] >= 3
_SENTINEL = object()

if PY3:
    long = int
    xrange = range
    unicode = str
    basestring = str
    range = range

    def b(s):
        return s.encode("latin-1")

else:
    long = long
    range = xrange
    unicode = unicode
    basestring = basestring

    def b(s):
        return s

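A quick standalone sketch of the Python 3 branch of b() above (assumption: running under Python 3). latin-1 maps each code point below 256 to the single byte with the same value, so one-character strings round-trip exactly.

```python
# Standalone copy of the Python 3 branch of b(); latin-1 encodes each
# code point below 256 to the byte with the same numeric value.
def b(s):
    return s.encode("latin-1")

sample = b("\xff")  # code point 255 -> the single byte 0xff
```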

# --- builtins


# Python 3 super().
# Taken from "future" package.
# Credit: Ryan Kelly
if PY3:
    super = super
else:
    _builtin_super = super

    def super(type_=_SENTINEL, type_or_obj=_SENTINEL, framedepth=1):
        """Like Python 3 builtin super(). If called without any arguments
        it attempts to infer them at runtime.
        """
        if type_ is _SENTINEL:
            f = sys._getframe(framedepth)
            try:
                # Get the function's first positional argument.
                type_or_obj = f.f_locals[f.f_code.co_varnames[0]]
            except (IndexError, KeyError):
                msg = 'super() used in a function with no args'
                raise RuntimeError(msg)
            try:
                # Get the MRO so we can crawl it.
                mro = type_or_obj.__mro__
            except (AttributeError, RuntimeError):
                try:
                    mro = type_or_obj.__class__.__mro__
                except AttributeError:
                    msg = 'super() used in a non-newstyle class'
                    raise RuntimeError(msg)
            for type_ in mro:
                #  Find the class that owns the currently-executing method.
                for meth in type_.__dict__.values():
                    # Drill down through any wrappers to the underlying func.
                    # This handles e.g. classmethod() and staticmethod().
                    try:
                        while not isinstance(meth, types.FunctionType):
                            if isinstance(meth, property):
                                # Calling __get__ on the property will invoke
                                # user code which might throw exceptions or
                                # have side effects
                                meth = meth.fget
                            else:
                                try:
                                    meth = meth.__func__
                                except AttributeError:
                                    meth = meth.__get__(type_or_obj, type_)
                    except (AttributeError, TypeError):
                        continue
                    if meth.func_code is f.f_code:
                        break  # found
                else:
                    # Not found. Move onto the next class in MRO.
                    continue
                break  # found
            else:
                msg = 'super() called outside a method'
                raise RuntimeError(msg)

        # Dispatch to builtin super().
        if type_or_obj is not _SENTINEL:
            return _builtin_super(type_, type_or_obj)
        return _builtin_super(type_)
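The zero-argument inference above hinges on reading the caller's frame. A minimal sketch of that frame trick in isolation (CPython-specific, via sys._getframe; the helper and demo names are hypothetical, not part of the module):

```python
import sys

def _callers_first_arg():
    # Look one frame up and fetch the caller's first positional argument,
    # the same way the super() backport above recovers `self`.
    f = sys._getframe(1)
    return f.f_locals[f.f_code.co_varnames[0]]

def demo(x, extra=None):
    return _callers_first_arg()
```

demo(obj) returns obj itself, which is exactly the value the backport would hand to the builtin super() as type_or_obj.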


# --- exceptions


if PY3:
    FileNotFoundError = FileNotFoundError  # NOQA
    PermissionError = PermissionError  # NOQA
    ProcessLookupError = ProcessLookupError  # NOQA
    InterruptedError = InterruptedError  # NOQA
    ChildProcessError = ChildProcessError  # NOQA
    FileExistsError = FileExistsError  # NOQA
else:
    # https://github.com/PythonCharmers/python-future/blob/exceptions/
    #     src/future/types/exceptions/pep3151.py
    import platform

    def _instance_checking_exception(base_exception=Exception):
        def wrapped(instance_checker):
            class TemporaryClass(base_exception):
                def __init__(self, *args, **kwargs):
                    if len(args) == 1 and isinstance(args[0], TemporaryClass):
                        unwrap_me = args[0]
                        for attr in dir(unwrap_me):
                            if not attr.startswith('__'):
                                setattr(self, attr, getattr(unwrap_me, attr))
                    else:
                        super(TemporaryClass, self).__init__(  # noqa
                            *args, **kwargs
                        )

                class __metaclass__(type):
                    def __instancecheck__(cls, inst):
                        return instance_checker(inst)

                    def __subclasscheck__(cls, classinfo):
                        value = sys.exc_info()[1]
                        return isinstance(value, cls)

            TemporaryClass.__name__ = instance_checker.__name__
            TemporaryClass.__doc__ = instance_checker.__doc__
            return TemporaryClass

        return wrapped

    @_instance_checking_exception(EnvironmentError)
    def FileNotFoundError(inst):
        return getattr(inst, 'errno', _SENTINEL) == errno.ENOENT

    @_instance_checking_exception(EnvironmentError)
    def ProcessLookupError(inst):
        return getattr(inst, 'errno', _SENTINEL) == errno.ESRCH

    @_instance_checking_exception(EnvironmentError)
    def PermissionError(inst):
        return getattr(inst, 'errno', _SENTINEL) in (errno.EACCES, errno.EPERM)

    @_instance_checking_exception(EnvironmentError)
    def InterruptedError(inst):
        return getattr(inst, 'errno', _SENTINEL) == errno.EINTR

    @_instance_checking_exception(EnvironmentError)
    def ChildProcessError(inst):
        return getattr(inst, 'errno', _SENTINEL) == errno.ECHILD

    @_instance_checking_exception(EnvironmentError)
    def FileExistsError(inst):
        return getattr(inst, 'errno', _SENTINEL) == errno.EEXIST

    if platform.python_implementation() != "CPython":
        try:
            raise OSError(errno.EEXIST, "perm")
        except FileExistsError:
            pass
        except OSError:
            msg = (
                "broken or incompatible Python implementation, see: "
                "https://github.com/giampaolo/psutil/issues/1659"
            )
            raise RuntimeError(msg)
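

# Hedged illustration (not part of psutil): with the aliases/backports above,
# callers can catch FileNotFoundError the same way on Python 2 and 3. The
# helper name below is hypothetical and exists only for this sketch.
def _demo_read_or_none(path):
    # Return the file's contents, or None if the path does not exist.
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        return None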


# --- stdlib additions


# py 3.2 functools.lru_cache
# Taken from: http://code.activestate.com/recipes/578078
# Credit: Raymond Hettinger
try:
    from functools import lru_cache
except ImportError:
    try:
        from threading import RLock
    except ImportError:
        from dummy_threading import RLock

    _CacheInfo = collections.namedtuple(
        "CacheInfo", ["hits", "misses", "maxsize", "currsize"]
    )

    class _HashedSeq(list):
        __slots__ = ('hashvalue',)

        def __init__(self, tup, hash=hash):
            self[:] = tup
            self.hashvalue = hash(tup)

        def __hash__(self):
            return self.hashvalue

    def _make_key(
        args,
        kwds,
        typed,
        kwd_mark=(_SENTINEL,),
        fasttypes=set((int, str, frozenset, type(None))),  # noqa
        sorted=sorted,
        tuple=tuple,
        type=type,
        len=len,
    ):
        key = args
        if kwds:
            sorted_items = sorted(kwds.items())
            key += kwd_mark
            for item in sorted_items:
                key += item
        if typed:
            key += tuple(type(v) for v in args)
            if kwds:
                key += tuple(type(v) for k, v in sorted_items)
        elif len(key) == 1 and type(key[0]) in fasttypes:
            return key[0]
        return _HashedSeq(key)

    def lru_cache(maxsize=100, typed=False):
        """Least-recently-used cache decorator, see:
        http://docs.python.org/3/library/functools.html#functools.lru_cache.
        """

        def decorating_function(user_function):
            cache = {}
            stats = [0, 0]
            HITS, MISSES = 0, 1
            make_key = _make_key
            cache_get = cache.get
            _len = len
            lock = RLock()
            root = []
            root[:] = [root, root, None, None]
            nonlocal_root = [root]
            PREV, NEXT, KEY, RESULT = 0, 1, 2, 3
            if maxsize == 0:

                def wrapper(*args, **kwds):
                    result = user_function(*args, **kwds)
                    stats[MISSES] += 1
                    return result

            elif maxsize is None:

                def wrapper(*args, **kwds):
                    key = make_key(args, kwds, typed)
                    result = cache_get(key, root)
                    if result is not root:
                        stats[HITS] += 1
                        return result
                    result = user_function(*args, **kwds)
                    cache[key] = result
                    stats[MISSES] += 1
                    return result

            else:

                def wrapper(*args, **kwds):
                    if kwds or typed:
                        key = make_key(args, kwds, typed)
                    else:
                        key = args
                    lock.acquire()
                    try:
                        link = cache_get(key)
                        if link is not None:
                            (root,) = nonlocal_root
                            link_prev, link_next, key, result = link
                            link_prev[NEXT] = link_next
                            link_next[PREV] = link_prev
                            last = root[PREV]
                            last[NEXT] = root[PREV] = link
                            link[PREV] = last
                            link[NEXT] = root
                            stats[HITS] += 1
                            return result
                    finally:
                        lock.release()
                    result = user_function(*args, **kwds)
                    lock.acquire()
                    try:
                        (root,) = nonlocal_root
                        if key in cache:
                            pass
                        elif _len(cache) >= maxsize:
                            oldroot = root
                            oldroot[KEY] = key
                            oldroot[RESULT] = result
                            root = nonlocal_root[0] = oldroot[NEXT]
                            oldkey = root[KEY]
                            root[KEY] = root[RESULT] = None
                            del cache[oldkey]
                            cache[key] = oldroot
                        else:
                            last = root[PREV]
                            link = [last, root, key, result]
                            last[NEXT] = root[PREV] = cache[key] = link
                        stats[MISSES] += 1
                    finally:
                        lock.release()
                    return result

            def cache_info():
                """Report cache statistics."""
                lock.acquire()
                try:
                    return _CacheInfo(
                        stats[HITS], stats[MISSES], maxsize, len(cache)
                    )
                finally:
                    lock.release()

            def cache_clear():
                """Clear the cache and cache statistics."""
                lock.acquire()
                try:
                    cache.clear()
                    root = nonlocal_root[0]
                    root[:] = [root, root, None, None]
                    stats[:] = [0, 0]
                finally:
                    lock.release()

            wrapper.__wrapped__ = user_function
            wrapper.cache_info = cache_info
            wrapper.cache_clear = cache_clear
            return functools.update_wrapper(wrapper, user_function)

        return decorating_function
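

# Hedged usage sketch: either implementation (stdlib functools.lru_cache or
# the backport above) memoizes per argument tuple and exposes cache_info()
# and cache_clear(). The function below is illustrative only.
@lru_cache(maxsize=2)
def _demo_square(x):
    return x * x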


# python 3.3
try:
    from shutil import which
except ImportError:

    def which(cmd, mode=os.F_OK | os.X_OK, path=None):
        """Given a command, mode, and a PATH string, return the path which
        conforms to the given mode on the PATH, or None if there is no such
        file.

        `mode` defaults to os.F_OK | os.X_OK. `path` defaults to the result
        of os.environ.get("PATH"), or can be overridden with a custom search
        path.
        """

        def _access_check(fn, mode):
            return (
                os.path.exists(fn)
                and os.access(fn, mode)
                and not os.path.isdir(fn)
            )

        if os.path.dirname(cmd):
            if _access_check(cmd, mode):
                return cmd
            return None

        if path is None:
            path = os.environ.get("PATH", os.defpath)
        if not path:
            return None
        path = path.split(os.pathsep)

        if sys.platform == "win32":
            if os.curdir not in path:
                path.insert(0, os.curdir)

            pathext = os.environ.get("PATHEXT", "").split(os.pathsep)
            if any(cmd.lower().endswith(ext.lower()) for ext in pathext):
                files = [cmd]
            else:
                files = [cmd + ext for ext in pathext]
        else:
            files = [cmd]

        seen = set()
        for dir in path:
            normdir = os.path.normcase(dir)
            if normdir not in seen:
                seen.add(normdir)
                for thefile in files:
                    name = os.path.join(dir, thefile)
                    if _access_check(name, mode):
                        return name
        return None
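

# Hedged usage sketch: `which` (stdlib shutil.which or the fallback above)
# resolves a bare command name against PATH and returns None when no
# candidate passes the mode check. The command name below is deliberately
# bogus.
def _demo_which_missing():
    return which("definitely-not-a-real-command-xyz")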


# python 3.3
try:
    from shutil import get_terminal_size
except ImportError:

    def get_terminal_size(fallback=(80, 24)):
        try:
            import fcntl
            import struct
            import termios
        except ImportError:
            return fallback
        else:
            try:
                # This should work on Linux.
                res = struct.unpack(
                    'hh', fcntl.ioctl(1, termios.TIOCGWINSZ, '1234')
                )
                return (res[1], res[0])
            except Exception:  # noqa: BLE001
                return fallback
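

# Hedged usage sketch: both implementations yield a (columns, lines) pair of
# ints; the fallback above only attempts a TIOCGWINSZ ioctl on stdout and
# otherwise returns the supplied default. The helper name is illustrative.
def _demo_terminal_size():
    cols, lines = get_terminal_size(fallback=(80, 24))
    return cols, lines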


# python 3.3
try:
    from subprocess import TimeoutExpired as SubprocessTimeoutExpired
except ImportError:

    class SubprocessTimeoutExpired(Exception):
        pass


# python 3.5
try:
    from contextlib import redirect_stderr
except ImportError:

    @contextlib.contextmanager
    def redirect_stderr(new_target):
        original = sys.stderr
        try:
            sys.stderr = new_target
            yield new_target
        finally:
            sys.stderr = original
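

# Hedged usage sketch: either implementation swaps sys.stderr for the given
# target inside the `with` block and restores it afterwards, even on error.
def _demo_capture_stderr():
    import io
    import sys
    buf = io.StringIO()
    with redirect_stderr(buf):
        sys.stderr.write("captured")
    return buf.getvalue()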
# --- File: psutil/tests/test_misc.py
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

# Copyright (c) 2009, Giampaolo Rodola'. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

"""Miscellaneous tests."""

import ast
import collections
import errno
import json
import os
import pickle
import socket
import stat
import sys

import psutil
import psutil.tests
from psutil import POSIX
from psutil import WINDOWS
from psutil._common import bcat
from psutil._common import cat
from psutil._common import debug
from psutil._common import isfile_strict
from psutil._common import memoize
from psutil._common import memoize_when_activated
from psutil._common import parse_environ_block
from psutil._common import supports_ipv6
from psutil._common import wrap_numbers
from psutil._compat import PY3
from psutil._compat import FileNotFoundError
from psutil._compat import redirect_stderr
from psutil.tests import CI_TESTING
from psutil.tests import HAS_BATTERY
from psutil.tests import HAS_MEMORY_MAPS
from psutil.tests import HAS_NET_IO_COUNTERS
from psutil.tests import HAS_SENSORS_BATTERY
from psutil.tests import HAS_SENSORS_FANS
from psutil.tests import HAS_SENSORS_TEMPERATURES
from psutil.tests import PYTHON_EXE
from psutil.tests import PYTHON_EXE_ENV
from psutil.tests import QEMU_USER
from psutil.tests import SCRIPTS_DIR
from psutil.tests import PsutilTestCase
from psutil.tests import mock
from psutil.tests import process_namespace
from psutil.tests import pytest
from psutil.tests import reload_module
from psutil.tests import sh
from psutil.tests import system_namespace


# ===================================================================
# --- Test classes' repr(), str(), ...
# ===================================================================


class TestSpecialMethods(PsutilTestCase):
    def test_check_pid_range(self):
        with pytest.raises(OverflowError):
            psutil._psplatform.cext.check_pid_range(2**128)
        with pytest.raises(psutil.NoSuchProcess):
            psutil.Process(2**128)

    def test_process__repr__(self, func=repr):
        p = psutil.Process(self.spawn_testproc().pid)
        r = func(p)
        assert "psutil.Process" in r
        assert "pid=%s" % p.pid in r
        assert "name='%s'" % str(p.name()) in r.replace("name=u'", "name='")
        assert "status=" in r
        assert "exitcode=" not in r
        p.terminate()
        p.wait()
        r = func(p)
        assert "status='terminated'" in r
        assert "exitcode=" in r

        with mock.patch.object(
            psutil.Process,
            "name",
            side_effect=psutil.ZombieProcess(os.getpid()),
        ):
            p = psutil.Process()
            r = func(p)
            assert "pid=%s" % p.pid in r
            assert "status='zombie'" in r
            assert "name=" not in r
        with mock.patch.object(
            psutil.Process,
            "name",
            side_effect=psutil.NoSuchProcess(os.getpid()),
        ):
            p = psutil.Process()
            r = func(p)
            assert "pid=%s" % p.pid in r
            assert "terminated" in r
            assert "name=" not in r
        with mock.patch.object(
            psutil.Process,
            "name",
            side_effect=psutil.AccessDenied(os.getpid()),
        ):
            p = psutil.Process()
            r = func(p)
            assert "pid=%s" % p.pid in r
            assert "name=" not in r

    def test_process__str__(self):
        self.test_process__repr__(func=str)

    def test_error__repr__(self):
        assert repr(psutil.Error()) == "psutil.Error()"

    def test_error__str__(self):
        assert str(psutil.Error()) == ""  # noqa

    def test_no_such_process__repr__(self):
        assert (
            repr(psutil.NoSuchProcess(321))
            == "psutil.NoSuchProcess(pid=321, msg='process no longer exists')"
        )
        assert (
            repr(psutil.NoSuchProcess(321, name="name", msg="msg"))
            == "psutil.NoSuchProcess(pid=321, name='name', msg='msg')"
        )

    def test_no_such_process__str__(self):
        assert (
            str(psutil.NoSuchProcess(321))
            == "process no longer exists (pid=321)"
        )
        assert (
            str(psutil.NoSuchProcess(321, name="name", msg="msg"))
            == "msg (pid=321, name='name')"
        )

    def test_zombie_process__repr__(self):
        assert (
            repr(psutil.ZombieProcess(321))
            == 'psutil.ZombieProcess(pid=321, msg="PID still '
            'exists but it\'s a zombie")'
        )
        assert (
            repr(psutil.ZombieProcess(321, name="name", ppid=320, msg="foo"))
            == "psutil.ZombieProcess(pid=321, ppid=320, name='name',"
            " msg='foo')"
        )

    def test_zombie_process__str__(self):
        assert (
            str(psutil.ZombieProcess(321))
            == "PID still exists but it's a zombie (pid=321)"
        )
        assert (
            str(psutil.ZombieProcess(321, name="name", ppid=320, msg="foo"))
            == "foo (pid=321, ppid=320, name='name')"
        )

    def test_access_denied__repr__(self):
        assert repr(psutil.AccessDenied(321)) == "psutil.AccessDenied(pid=321)"
        assert (
            repr(psutil.AccessDenied(321, name="name", msg="msg"))
            == "psutil.AccessDenied(pid=321, name='name', msg='msg')"
        )

    def test_access_denied__str__(self):
        assert str(psutil.AccessDenied(321)) == "(pid=321)"
        assert (
            str(psutil.AccessDenied(321, name="name", msg="msg"))
            == "msg (pid=321, name='name')"
        )

    def test_timeout_expired__repr__(self):
        assert (
            repr(psutil.TimeoutExpired(5))
            == "psutil.TimeoutExpired(seconds=5, msg='timeout after 5"
            " seconds')"
        )
        assert (
            repr(psutil.TimeoutExpired(5, pid=321, name="name"))
            == "psutil.TimeoutExpired(pid=321, name='name', seconds=5, "
            "msg='timeout after 5 seconds')"
        )

    def test_timeout_expired__str__(self):
        assert str(psutil.TimeoutExpired(5)) == "timeout after 5 seconds"
        assert (
            str(psutil.TimeoutExpired(5, pid=321, name="name"))
            == "timeout after 5 seconds (pid=321, name='name')"
        )

    def test_process__eq__(self):
        p1 = psutil.Process()
        p2 = psutil.Process()
        assert p1 == p2
        p2._ident = (0, 0)
        assert p1 != p2
        assert p1 != 'foo'

    def test_process__hash__(self):
        s = set([psutil.Process(), psutil.Process()])
        assert len(s) == 1


# ===================================================================
# --- Misc, generic, corner cases
# ===================================================================


class TestMisc(PsutilTestCase):
    def test__all__(self):
        dir_psutil = dir(psutil)
        for name in dir_psutil:
            if name in (
                'debug',
                'long',
                'tests',
                'test',
                'PermissionError',
                'ProcessLookupError',
            ):
                continue
            if not name.startswith('_'):
                try:
                    __import__(name)
                except ImportError:
                    if name not in psutil.__all__:
                        fun = getattr(psutil, name)
                        if fun is None:
                            continue
                        if (
                            fun.__doc__ is not None
                            and 'deprecated' not in fun.__doc__.lower()
                        ):
                            # self.fail() raises by itself; the bare call is
                            # the idiomatic form.
                            self.fail('%r not in psutil.__all__' % name)

        # Import 'star' will break if __all__ is inconsistent, see:
        # https://github.com/giampaolo/psutil/issues/656
        # Can't do `from psutil import *` as it won't work on python 3
        # so we simply iterate over __all__.
        for name in psutil.__all__:
            assert name in dir_psutil

    def test_version(self):
        assert (
            '.'.join([str(x) for x in psutil.version_info])
            == psutil.__version__
        )

    def test_process_as_dict_no_new_names(self):
        # See https://github.com/giampaolo/psutil/issues/813
        p = psutil.Process()
        p.foo = '1'
        assert 'foo' not in p.as_dict()

    def test_serialization(self):
        def check(ret):
            json.loads(json.dumps(ret))

            a = pickle.dumps(ret)
            b = pickle.loads(a)
            assert ret == b

        # --- process APIs

        proc = psutil.Process()
        check(psutil.Process().as_dict())

        ns = process_namespace(proc)
        for fun, name in ns.iter(ns.getters, clear_cache=True):
            with self.subTest(proc=proc, name=name):
                try:
                    ret = fun()
                except psutil.Error:
                    pass
                else:
                    check(ret)

        # --- system APIs

        ns = system_namespace()
        for fun, name in ns.iter(ns.getters):
            if name in {"win_service_iter", "win_service_get"}:
                continue
            if QEMU_USER and name == "net_if_stats":
                # OSError: [Errno 38] ioctl(SIOCETHTOOL) not implemented
                continue
            with self.subTest(name=name):
                try:
                    ret = fun()
                except psutil.AccessDenied:
                    pass
                else:
                    check(ret)

        # --- exception classes

        b = pickle.loads(
            pickle.dumps(
                psutil.NoSuchProcess(pid=4567, name='name', msg='msg')
            )
        )
        assert isinstance(b, psutil.NoSuchProcess)
        assert b.pid == 4567
        assert b.name == 'name'
        assert b.msg == 'msg'

        b = pickle.loads(
            pickle.dumps(
                psutil.ZombieProcess(pid=4567, name='name', ppid=42, msg='msg')
            )
        )
        assert isinstance(b, psutil.ZombieProcess)
        assert b.pid == 4567
        assert b.ppid == 42
        assert b.name == 'name'
        assert b.msg == 'msg'

        b = pickle.loads(
            pickle.dumps(psutil.AccessDenied(pid=123, name='name', msg='msg'))
        )
        assert isinstance(b, psutil.AccessDenied)
        assert b.pid == 123
        assert b.name == 'name'
        assert b.msg == 'msg'

        b = pickle.loads(
            pickle.dumps(
                psutil.TimeoutExpired(seconds=33, pid=4567, name='name')
            )
        )
        assert isinstance(b, psutil.TimeoutExpired)
        assert b.seconds == 33
        assert b.pid == 4567
        assert b.name == 'name'

    # # XXX: https://github.com/pypa/setuptools/pull/2896
    # @pytest.mark.skipif(APPVEYOR,
    #     reason="temporarily disabled due to setuptools bug"
    # )
    # def test_setup_script(self):
    #     setup_py = os.path.join(ROOT_DIR, 'setup.py')
    #     if CI_TESTING and not os.path.exists(setup_py):
    #         raise pytest.skip("can't find setup.py")
    #     module = import_module_by_path(setup_py)
    #     self.assertRaises(SystemExit, module.setup)
    #     self.assertEqual(module.get_version(), psutil.__version__)

    def test_ad_on_process_creation(self):
        # We are supposed to be able to instantiate Process also in case
        # of zombie processes or access denied.
        with mock.patch.object(
            psutil.Process, '_get_ident', side_effect=psutil.AccessDenied
        ) as meth:
            psutil.Process()
            assert meth.called

        with mock.patch.object(
            psutil.Process, '_get_ident', side_effect=psutil.ZombieProcess(1)
        ) as meth:
            psutil.Process()
            assert meth.called

        with mock.patch.object(
            psutil.Process, '_get_ident', side_effect=ValueError
        ) as meth:
            with pytest.raises(ValueError):
                psutil.Process()
            assert meth.called

        with mock.patch.object(
            psutil.Process, '_get_ident', side_effect=psutil.NoSuchProcess(1)
        ) as meth:
            with pytest.raises(psutil.NoSuchProcess):
                psutil.Process()
            assert meth.called

    def test_sanity_version_check(self):
        # see: https://github.com/giampaolo/psutil/issues/564
        with mock.patch(
            "psutil._psplatform.cext.version", return_value="0.0.0"
        ):
            with pytest.raises(ImportError) as cm:
                reload_module(psutil)
            assert "version conflict" in str(cm.value).lower()


# ===================================================================
# --- psutil/_common.py utils
# ===================================================================


class TestMemoizeDecorator(PsutilTestCase):
    def setUp(self):
        self.calls = []

    tearDown = setUp

    def run_against(self, obj, expected_retval=None):
        # no args
        for _ in range(2):
            ret = obj()
            assert self.calls == [((), {})]
            if expected_retval is not None:
                assert ret == expected_retval
        # with args
        for _ in range(2):
            ret = obj(1)
            assert self.calls == [((), {}), ((1,), {})]
            if expected_retval is not None:
                assert ret == expected_retval
        # with args + kwargs
        for _ in range(2):
            ret = obj(1, bar=2)
            assert self.calls == [((), {}), ((1,), {}), ((1,), {'bar': 2})]
            if expected_retval is not None:
                assert ret == expected_retval
        # clear cache
        assert len(self.calls) == 3
        obj.cache_clear()
        ret = obj()
        if expected_retval is not None:
            assert ret == expected_retval
        assert len(self.calls) == 4
        # docstring
        assert obj.__doc__ == "My docstring."

    def test_function(self):
        @memoize
        def foo(*args, **kwargs):
            """My docstring."""
            baseclass.calls.append((args, kwargs))
            return 22

        baseclass = self
        self.run_against(foo, expected_retval=22)

    def test_class(self):
        @memoize
        class Foo:
            """My docstring."""

            def __init__(self, *args, **kwargs):
                baseclass.calls.append((args, kwargs))

            def bar(self):
                return 22

        baseclass = self
        self.run_against(Foo, expected_retval=None)
        assert Foo().bar() == 22

    def test_class_singleton(self):
        # @memoize can be used against classes to create singletons
        @memoize
        class Bar:
            def __init__(self, *args, **kwargs):
                pass

        assert Bar() is Bar()
        assert id(Bar()) == id(Bar())
        assert id(Bar(1)) == id(Bar(1))
        assert id(Bar(1, foo=3)) == id(Bar(1, foo=3))
        assert id(Bar(1)) != id(Bar(2))

    def test_staticmethod(self):
        class Foo:
            @staticmethod
            @memoize
            def bar(*args, **kwargs):
                """My docstring."""
                baseclass.calls.append((args, kwargs))
                return 22

        baseclass = self
        self.run_against(Foo().bar, expected_retval=22)

    def test_classmethod(self):
        class Foo:
            @classmethod
            @memoize
            def bar(cls, *args, **kwargs):
                """My docstring."""
                baseclass.calls.append((args, kwargs))
                return 22

        baseclass = self
        self.run_against(Foo().bar, expected_retval=22)

    def test_original(self):
        # This was the original test before I made it dynamic to test it
        # against different types. Keeping it anyway.
        @memoize
        def foo(*args, **kwargs):
            """Foo docstring."""
            calls.append(None)
            return (args, kwargs)

        calls = []
        # no args
        for _ in range(2):
            ret = foo()
            expected = ((), {})
            assert ret == expected
            assert len(calls) == 1
        # with args
        for _ in range(2):
            ret = foo(1)
            expected = ((1,), {})
            assert ret == expected
            assert len(calls) == 2
        # with args + kwargs
        for _ in range(2):
            ret = foo(1, bar=2)
            expected = ((1,), {'bar': 2})
            assert ret == expected
            assert len(calls) == 3
        # clear cache
        foo.cache_clear()
        ret = foo()
        expected = ((), {})
        assert ret == expected
        assert len(calls) == 4
        # docstring
        assert foo.__doc__ == "Foo docstring."


class TestCommonModule(PsutilTestCase):
    def test_memoize_when_activated(self):
        class Foo:
            @memoize_when_activated
            def foo(self):
                calls.append(None)

        f = Foo()
        calls = []
        f.foo()
        f.foo()
        assert len(calls) == 2

        # activate
        calls = []
        f.foo.cache_activate(f)
        f.foo()
        f.foo()
        assert len(calls) == 1

        # deactivate
        calls = []
        f.foo.cache_deactivate(f)
        f.foo()
        f.foo()
        assert len(calls) == 2

    def test_parse_environ_block(self):
        def k(s):
            return s.upper() if WINDOWS else s

        assert parse_environ_block("a=1\0") == {k("a"): "1"}
        assert parse_environ_block("a=1\0b=2\0\0") == {
            k("a"): "1",
            k("b"): "2",
        }
        assert parse_environ_block("a=1\0b=\0\0") == {k("a"): "1", k("b"): ""}
        # ignore everything after \0\0
        assert parse_environ_block("a=1\0b=2\0\0c=3\0") == {
            k("a"): "1",
            k("b"): "2",
        }
        # ignore everything that is not an assignment
        assert parse_environ_block("xxx\0a=1\0") == {k("a"): "1"}
        assert parse_environ_block("a=1\0=b=2\0") == {k("a"): "1"}
        # do not fail if the block is incomplete
        assert parse_environ_block("a=1\0b=2") == {k("a"): "1"}

    def test_supports_ipv6(self):
        self.addCleanup(supports_ipv6.cache_clear)
        if supports_ipv6():
            with mock.patch('psutil._common.socket') as s:
                s.has_ipv6 = False
                supports_ipv6.cache_clear()
                assert not supports_ipv6()

            supports_ipv6.cache_clear()
            with mock.patch(
                'psutil._common.socket.socket', side_effect=socket.error
            ) as s:
                assert not supports_ipv6()
                assert s.called

            supports_ipv6.cache_clear()
            with mock.patch(
                'psutil._common.socket.socket', side_effect=socket.gaierror
            ) as s:
                assert not supports_ipv6()
                supports_ipv6.cache_clear()
                assert s.called

            supports_ipv6.cache_clear()
            with mock.patch(
                'psutil._common.socket.socket.bind',
                side_effect=socket.gaierror,
            ) as s:
                assert not supports_ipv6()
                supports_ipv6.cache_clear()
                assert s.called
        else:
            with pytest.raises(socket.error):
                sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
                try:
                    sock.bind(("::1", 0))
                finally:
                    sock.close()

    def test_isfile_strict(self):
        this_file = os.path.abspath(__file__)
        assert isfile_strict(this_file)
        assert not isfile_strict(os.path.dirname(this_file))
        with mock.patch(
            'psutil._common.os.stat', side_effect=OSError(errno.EPERM, "foo")
        ):
            with pytest.raises(OSError):
                isfile_strict(this_file)
        with mock.patch(
            'psutil._common.os.stat', side_effect=OSError(errno.EACCES, "foo")
        ):
            with pytest.raises(OSError):
                isfile_strict(this_file)
        with mock.patch(
            'psutil._common.os.stat', side_effect=OSError(errno.ENOENT, "foo")
        ):
            assert not isfile_strict(this_file)
        with mock.patch('psutil._common.stat.S_ISREG', return_value=False):
            assert not isfile_strict(this_file)

    def test_debug(self):
        if PY3:
            from io import StringIO
        else:
            from StringIO import StringIO

        with mock.patch.object(psutil._common, "PSUTIL_DEBUG", True):
            with redirect_stderr(StringIO()) as f:
                debug("hello")
                sys.stderr.flush()
        msg = f.getvalue()
        assert msg.startswith("psutil-debug"), msg
        assert "hello" in msg
        assert __file__.replace('.pyc', '.py') in msg

        # supposed to use repr(exc)
        with mock.patch.object(psutil._common, "PSUTIL_DEBUG", True):
            with redirect_stderr(StringIO()) as f:
                debug(ValueError("this is an error"))
        msg = f.getvalue()
        assert "ignoring ValueError" in msg
        assert "'this is an error'" in msg

        # supposed to use str(exc), because of extra info about file name
        with mock.patch.object(psutil._common, "PSUTIL_DEBUG", True):
            with redirect_stderr(StringIO()) as f:
                exc = OSError(2, "no such file")
                exc.filename = "/foo"
                debug(exc)
        msg = f.getvalue()
        assert "no such file" in msg
        assert "/foo" in msg

    def test_cat_bcat(self):
        testfn = self.get_testfn()
        with open(testfn, "w") as f:
            f.write("foo")
        assert cat(testfn) == "foo"
        assert bcat(testfn) == b"foo"
        with pytest.raises(FileNotFoundError):
            cat(testfn + '-invalid')
        with pytest.raises(FileNotFoundError):
            bcat(testfn + '-invalid')
        assert cat(testfn + '-invalid', fallback="bar") == "bar"
        assert bcat(testfn + '-invalid', fallback="bar") == "bar"


# ===================================================================
# --- Tests for wrap_numbers() function.
# ===================================================================


nt = collections.namedtuple('foo', 'a b c')


class TestWrapNumbers(PsutilTestCase):
    def setUp(self):
        wrap_numbers.cache_clear()

    tearDown = setUp

    def test_first_call(self):
        input = {'disk1': nt(5, 5, 5)}
        assert wrap_numbers(input, 'disk_io') == input

    def test_input_hasnt_changed(self):
        input = {'disk1': nt(5, 5, 5)}
        assert wrap_numbers(input, 'disk_io') == input
        assert wrap_numbers(input, 'disk_io') == input

    def test_increase_but_no_wrap(self):
        input = {'disk1': nt(5, 5, 5)}
        assert wrap_numbers(input, 'disk_io') == input
        input = {'disk1': nt(10, 15, 20)}
        assert wrap_numbers(input, 'disk_io') == input
        input = {'disk1': nt(20, 25, 30)}
        assert wrap_numbers(input, 'disk_io') == input
        input = {'disk1': nt(20, 25, 30)}
        assert wrap_numbers(input, 'disk_io') == input

    def test_wrap(self):
        # let's say 100 is the threshold
        input = {'disk1': nt(100, 100, 100)}
        assert wrap_numbers(input, 'disk_io') == input
        # first wrap restarts from 10
        input = {'disk1': nt(100, 100, 10)}
        assert wrap_numbers(input, 'disk_io') == {'disk1': nt(100, 100, 110)}
        # then it remains the same
        input = {'disk1': nt(100, 100, 10)}
        assert wrap_numbers(input, 'disk_io') == {'disk1': nt(100, 100, 110)}
        # then it goes up
        input = {'disk1': nt(100, 100, 90)}
        assert wrap_numbers(input, 'disk_io') == {'disk1': nt(100, 100, 190)}
        # then it wraps again
        input = {'disk1': nt(100, 100, 20)}
        assert wrap_numbers(input, 'disk_io') == {'disk1': nt(100, 100, 210)}
        # and remains the same
        input = {'disk1': nt(100, 100, 20)}
        assert wrap_numbers(input, 'disk_io') == {'disk1': nt(100, 100, 210)}
        # now wrap another num
        input = {'disk1': nt(50, 100, 20)}
        assert wrap_numbers(input, 'disk_io') == {'disk1': nt(150, 100, 210)}
        # and again
        input = {'disk1': nt(40, 100, 20)}
        assert wrap_numbers(input, 'disk_io') == {'disk1': nt(190, 100, 210)}
        # keep it the same
        input = {'disk1': nt(40, 100, 20)}
        assert wrap_numbers(input, 'disk_io') == {'disk1': nt(190, 100, 210)}

    def test_changing_keys(self):
        # Emulate a case where the second call to disk_io()
        # (or whatever) provides a new disk, then the new disk
        # disappears on the third call.
        input = {'disk1': nt(5, 5, 5)}
        assert wrap_numbers(input, 'disk_io') == input
        input = {'disk1': nt(5, 5, 5), 'disk2': nt(7, 7, 7)}
        assert wrap_numbers(input, 'disk_io') == input
        input = {'disk1': nt(8, 8, 8)}
        assert wrap_numbers(input, 'disk_io') == input

    def test_changing_keys_w_wrap(self):
        input = {'disk1': nt(50, 50, 50), 'disk2': nt(100, 100, 100)}
        assert wrap_numbers(input, 'disk_io') == input
        # disk 2 wraps
        input = {'disk1': nt(50, 50, 50), 'disk2': nt(100, 100, 10)}
        assert wrap_numbers(input, 'disk_io') == {
            'disk1': nt(50, 50, 50),
            'disk2': nt(100, 100, 110),
        }
        # disk 2 disappears
        input = {'disk1': nt(50, 50, 50)}
        assert wrap_numbers(input, 'disk_io') == input

        # then it appears again; the old wrap is supposed to be
        # gone.
        input = {'disk1': nt(50, 50, 50), 'disk2': nt(100, 100, 100)}
        assert wrap_numbers(input, 'disk_io') == input
        # remains the same
        input = {'disk1': nt(50, 50, 50), 'disk2': nt(100, 100, 100)}
        assert wrap_numbers(input, 'disk_io') == input
        # and then wraps again
        input = {'disk1': nt(50, 50, 50), 'disk2': nt(100, 100, 10)}
        assert wrap_numbers(input, 'disk_io') == {
            'disk1': nt(50, 50, 50),
            'disk2': nt(100, 100, 110),
        }

    def test_real_data(self):
        d = {
            'nvme0n1': (300, 508, 640, 1571, 5970, 1987, 2049, 451751, 47048),
            'nvme0n1p1': (1171, 2, 5600256, 1024, 516, 0, 0, 0, 8),
            'nvme0n1p2': (54, 54, 2396160, 5165056, 4, 24, 30, 1207, 28),
            'nvme0n1p3': (2389, 4539, 5154, 150, 4828, 1844, 2019, 398, 348),
        }
        assert wrap_numbers(d, 'disk_io') == d
        assert wrap_numbers(d, 'disk_io') == d
        # decrease this   ↓
        d = {
            'nvme0n1': (100, 508, 640, 1571, 5970, 1987, 2049, 451751, 47048),
            'nvme0n1p1': (1171, 2, 5600256, 1024, 516, 0, 0, 0, 8),
            'nvme0n1p2': (54, 54, 2396160, 5165056, 4, 24, 30, 1207, 28),
            'nvme0n1p3': (2389, 4539, 5154, 150, 4828, 1844, 2019, 398, 348),
        }
        out = wrap_numbers(d, 'disk_io')
        assert out['nvme0n1'][0] == 400

    # --- cache tests

    def test_cache_first_call(self):
        input = {'disk1': nt(5, 5, 5)}
        wrap_numbers(input, 'disk_io')
        cache = wrap_numbers.cache_info()
        assert cache[0] == {'disk_io': input}
        assert cache[1] == {'disk_io': {}}
        assert cache[2] == {'disk_io': {}}

    def test_cache_call_twice(self):
        input = {'disk1': nt(5, 5, 5)}
        wrap_numbers(input, 'disk_io')
        input = {'disk1': nt(10, 10, 10)}
        wrap_numbers(input, 'disk_io')
        cache = wrap_numbers.cache_info()
        assert cache[0] == {'disk_io': input}
        assert cache[1] == {
            'disk_io': {('disk1', 0): 0, ('disk1', 1): 0, ('disk1', 2): 0}
        }
        assert cache[2] == {'disk_io': {}}

    def test_cache_wrap(self):
        # let's say 100 is the threshold
        input = {'disk1': nt(100, 100, 100)}
        wrap_numbers(input, 'disk_io')

        # first wrap restarts from 10
        input = {'disk1': nt(100, 100, 10)}
        wrap_numbers(input, 'disk_io')
        cache = wrap_numbers.cache_info()
        assert cache[0] == {'disk_io': input}
        assert cache[1] == {
            'disk_io': {('disk1', 0): 0, ('disk1', 1): 0, ('disk1', 2): 100}
        }
        assert cache[2] == {'disk_io': {'disk1': set([('disk1', 2)])}}

        def check_cache_info():
            cache = wrap_numbers.cache_info()
            assert cache[1] == {
                'disk_io': {
                    ('disk1', 0): 0,
                    ('disk1', 1): 0,
                    ('disk1', 2): 100,
                }
            }
            assert cache[2] == {'disk_io': {'disk1': set([('disk1', 2)])}}

        # then it remains the same
        input = {'disk1': nt(100, 100, 10)}
        wrap_numbers(input, 'disk_io')
        cache = wrap_numbers.cache_info()
        assert cache[0] == {'disk_io': input}
        check_cache_info()

        # then it goes up
        input = {'disk1': nt(100, 100, 90)}
        wrap_numbers(input, 'disk_io')
        cache = wrap_numbers.cache_info()
        assert cache[0] == {'disk_io': input}
        check_cache_info()

        # then it wraps again
        input = {'disk1': nt(100, 100, 20)}
        wrap_numbers(input, 'disk_io')
        cache = wrap_numbers.cache_info()
        assert cache[0] == {'disk_io': input}
        assert cache[1] == {
            'disk_io': {('disk1', 0): 0, ('disk1', 1): 0, ('disk1', 2): 190}
        }
        assert cache[2] == {'disk_io': {'disk1': set([('disk1', 2)])}}

    def test_cache_changing_keys(self):
        input = {'disk1': nt(5, 5, 5)}
        wrap_numbers(input, 'disk_io')
        input = {'disk1': nt(5, 5, 5), 'disk2': nt(7, 7, 7)}
        wrap_numbers(input, 'disk_io')
        cache = wrap_numbers.cache_info()
        assert cache[0] == {'disk_io': input}
        assert cache[1] == {
            'disk_io': {('disk1', 0): 0, ('disk1', 1): 0, ('disk1', 2): 0}
        }
        assert cache[2] == {'disk_io': {}}

    def test_cache_clear(self):
        input = {'disk1': nt(5, 5, 5)}
        wrap_numbers(input, 'disk_io')
        wrap_numbers(input, 'disk_io')
        wrap_numbers.cache_clear('disk_io')
        assert wrap_numbers.cache_info() == ({}, {}, {})
        wrap_numbers.cache_clear('disk_io')
        wrap_numbers.cache_clear('?!?')

    @pytest.mark.skipif(not HAS_NET_IO_COUNTERS, reason="not supported")
    def test_cache_clear_public_apis(self):
        if not psutil.disk_io_counters() or not psutil.net_io_counters():
            raise pytest.skip("no disks or NICs available")
        psutil.disk_io_counters()
        psutil.net_io_counters()
        caches = wrap_numbers.cache_info()
        for cache in caches:
            assert 'psutil.disk_io_counters' in cache
            assert 'psutil.net_io_counters' in cache

        psutil.disk_io_counters.cache_clear()
        caches = wrap_numbers.cache_info()
        for cache in caches:
            assert 'psutil.net_io_counters' in cache
            assert 'psutil.disk_io_counters' not in cache

        psutil.net_io_counters.cache_clear()
        caches = wrap_numbers.cache_info()
        assert caches == ({}, {}, {})


# ===================================================================
# --- Example script tests
# ===================================================================


@pytest.mark.skipif(
    not os.path.exists(SCRIPTS_DIR), reason="can't locate scripts directory"
)
class TestScripts(PsutilTestCase):
    """Tests for scripts in the "scripts" directory."""

    @staticmethod
    def assert_stdout(exe, *args, **kwargs):
        kwargs.setdefault("env", PYTHON_EXE_ENV)
        exe = os.path.join(SCRIPTS_DIR, exe)
        cmd = [PYTHON_EXE, exe]
        cmd.extend(args)
        try:
            out = sh(cmd, **kwargs).strip()
        except RuntimeError as err:
            if 'AccessDenied' in str(err):
                return str(err)
            else:
                raise
        assert out, out
        return out

    @staticmethod
    def assert_syntax(exe):
        exe = os.path.join(SCRIPTS_DIR, exe)
        with open(exe, encoding="utf8") if PY3 else open(exe) as f:
            src = f.read()
        ast.parse(src)

    def test_coverage(self):
        # make sure all example scripts have a test method defined
        meths = dir(self)
        for name in os.listdir(SCRIPTS_DIR):
            if name.endswith('.py'):
                if 'test_' + os.path.splitext(name)[0] not in meths:
                    # self.assert_stdout(name)
                    raise self.fail(
                        'no test defined for %r script'
                        % os.path.join(SCRIPTS_DIR, name)
                    )

    @pytest.mark.skipif(not POSIX, reason="POSIX only")
    def test_executable(self):
        for root, dirs, files in os.walk(SCRIPTS_DIR):
            for file in files:
                if file.endswith('.py'):
                    path = os.path.join(root, file)
                    if not stat.S_IXUSR & os.stat(path)[stat.ST_MODE]:
                        raise self.fail('%r is not executable' % path)

    def test_disk_usage(self):
        self.assert_stdout('disk_usage.py')

    def test_free(self):
        self.assert_stdout('free.py')

    def test_meminfo(self):
        self.assert_stdout('meminfo.py')

    def test_procinfo(self):
        self.assert_stdout('procinfo.py', str(os.getpid()))

    @pytest.mark.skipif(CI_TESTING and not psutil.users(), reason="no users")
    def test_who(self):
        self.assert_stdout('who.py')

    def test_ps(self):
        self.assert_stdout('ps.py')

    def test_pstree(self):
        self.assert_stdout('pstree.py')

    def test_netstat(self):
        self.assert_stdout('netstat.py')

    @pytest.mark.skipif(QEMU_USER, reason="QEMU user not supported")
    def test_ifconfig(self):
        self.assert_stdout('ifconfig.py')

    @pytest.mark.skipif(not HAS_MEMORY_MAPS, reason="not supported")
    def test_pmap(self):
        self.assert_stdout('pmap.py', str(os.getpid()))

    def test_procsmem(self):
        if 'uss' not in psutil.Process().memory_full_info()._fields:
            raise pytest.skip("not supported")
        self.assert_stdout('procsmem.py')

    def test_killall(self):
        self.assert_syntax('killall.py')

    def test_nettop(self):
        self.assert_syntax('nettop.py')

    def test_top(self):
        self.assert_syntax('top.py')

    def test_iotop(self):
        self.assert_syntax('iotop.py')

    def test_pidof(self):
        output = self.assert_stdout('pidof.py', psutil.Process().name())
        assert str(os.getpid()) in output

    @pytest.mark.skipif(not WINDOWS, reason="WINDOWS only")
    def test_winservices(self):
        self.assert_stdout('winservices.py')

    def test_cpu_distribution(self):
        self.assert_syntax('cpu_distribution.py')

    @pytest.mark.skipif(not HAS_SENSORS_TEMPERATURES, reason="not supported")
    def test_temperatures(self):
        if not psutil.sensors_temperatures():
            raise pytest.skip("no temperatures")
        self.assert_stdout('temperatures.py')

    @pytest.mark.skipif(not HAS_SENSORS_FANS, reason="not supported")
    def test_fans(self):
        if not psutil.sensors_fans():
            raise pytest.skip("no fans")
        self.assert_stdout('fans.py')

    @pytest.mark.skipif(not HAS_SENSORS_BATTERY, reason="not supported")
    @pytest.mark.skipif(not HAS_BATTERY, reason="no battery")
    def test_battery(self):
        self.assert_stdout('battery.py')

    @pytest.mark.skipif(not HAS_SENSORS_BATTERY, reason="not supported")
    @pytest.mark.skipif(not HAS_BATTERY, reason="no battery")
    def test_sensors(self):
        self.assert_stdout('sensors.py')
#!/usr/bin/env python3

# Copyright (c) 2009, Giampaolo Rodola'. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

# TODO: (FreeBSD) add test for comparing connections with 'sockstat' cmd.


"""Tests specific to all BSD platforms."""


import datetime
import os
import re
import time

import psutil
from psutil import BSD
from psutil import FREEBSD
from psutil import NETBSD
from psutil import OPENBSD
from psutil.tests import HAS_BATTERY
from psutil.tests import TOLERANCE_SYS_MEM
from psutil.tests import PsutilTestCase
from psutil.tests import pytest
from psutil.tests import retry_on_failure
from psutil.tests import sh
from psutil.tests import spawn_testproc
from psutil.tests import terminate
from psutil.tests import which


if BSD:
    from psutil._psutil_posix import getpagesize

    PAGESIZE = getpagesize()
    # muse requires root privileges
    MUSE_AVAILABLE = os.getuid() == 0 and which('muse')
else:
    PAGESIZE = None
    MUSE_AVAILABLE = False


def sysctl(cmdline):
    """Expects a sysctl command with an argument and parse the result
    returning only the value of interest.
    """
    result = sh("sysctl " + cmdline)
    if FREEBSD:
        result = result[result.find(": ") + 2 :]
    elif OPENBSD or NETBSD:
        result = result[result.find("=") + 1 :]
    try:
        return int(result)
    except ValueError:
        return result


def muse(field):
    """Thin wrapper around 'muse' cmdline utility."""
    out = sh('muse')
    for line in out.split('\n'):
        if line.startswith(field):
            break
    else:
        raise ValueError("line not found")
    return int(line.split()[1])


# =====================================================================
# --- All BSD*
# =====================================================================


@pytest.mark.skipif(not BSD, reason="BSD only")
class BSDTestCase(PsutilTestCase):
    """Generic tests common to all BSD variants."""

    @classmethod
    def setUpClass(cls):
        cls.pid = spawn_testproc().pid

    @classmethod
    def tearDownClass(cls):
        terminate(cls.pid)

    @pytest.mark.skipif(NETBSD, reason="-o lstart doesn't work on NETBSD")
    def test_process_create_time(self):
        output = sh("ps -o lstart -p %s" % self.pid)
        start_ps = output.replace('STARTED', '').strip()
        start_psutil = psutil.Process(self.pid).create_time()
        start_psutil = time.strftime(
            "%a %b %e %H:%M:%S %Y", time.localtime(start_psutil)
        )
        assert start_ps == start_psutil

    def test_disks(self):
        # test psutil.disk_usage() and psutil.disk_partitions()
        # against "df -k"
        def df(path):
            out = sh('df -k "%s"' % path).strip()
            lines = out.split('\n')
            lines.pop(0)
            line = lines.pop(0)
            dev, total, used, free = line.split()[:4]
            if dev == 'none':
                dev = ''
            total = int(total) * 1024
            used = int(used) * 1024
            free = int(free) * 1024
            return dev, total, used, free

        for part in psutil.disk_partitions(all=False):
            usage = psutil.disk_usage(part.mountpoint)
            dev, total, used, free = df(part.mountpoint)
            assert part.device == dev
            assert usage.total == total
            # 10 MB tolerance
            if abs(usage.free - free) > 10 * 1024 * 1024:
                raise self.fail("psutil=%s, df=%s" % (usage.free, free))
            if abs(usage.used - used) > 10 * 1024 * 1024:
                raise self.fail("psutil=%s, df=%s" % (usage.used, used))

    @pytest.mark.skipif(not which('sysctl'), reason="sysctl cmd not available")
    def test_cpu_count_logical(self):
        syst = sysctl("hw.ncpu")
        assert psutil.cpu_count(logical=True) == syst

    @pytest.mark.skipif(not which('sysctl'), reason="sysctl cmd not available")
    @pytest.mark.skipif(
        NETBSD, reason="skipped on NETBSD"  # we check /proc/meminfo
    )
    def test_virtual_memory_total(self):
        num = sysctl('hw.physmem')
        assert num == psutil.virtual_memory().total

    @pytest.mark.skipif(
        not which('ifconfig'), reason="ifconfig cmd not available"
    )
    def test_net_if_stats(self):
        for name, stats in psutil.net_if_stats().items():
            try:
                out = sh("ifconfig %s" % name)
            except RuntimeError:
                pass
            else:
                assert stats.isup == ('RUNNING' in out)
                if "mtu" in out:
                    assert stats.mtu == int(re.findall(r'mtu (\d+)', out)[0])


# =====================================================================
# --- FreeBSD
# =====================================================================


@pytest.mark.skipif(not FREEBSD, reason="FREEBSD only")
class FreeBSDPsutilTestCase(PsutilTestCase):
    @classmethod
    def setUpClass(cls):
        cls.pid = spawn_testproc().pid

    @classmethod
    def tearDownClass(cls):
        terminate(cls.pid)

    @retry_on_failure()
    def test_memory_maps(self):
        out = sh('procstat -v %s' % self.pid)
        maps = psutil.Process(self.pid).memory_maps(grouped=False)
        lines = out.split('\n')[1:]
        while lines:
            line = lines.pop()
            fields = line.split()
            _, start, stop, _perms, res = fields[:5]
            map = maps.pop()
            assert "%s-%s" % (start, stop) == map.addr
            assert int(res) == map.rss
            if not map.path.startswith('['):
                assert fields[10] == map.path

    def test_exe(self):
        out = sh('procstat -b %s' % self.pid)
        assert psutil.Process(self.pid).exe() == out.split('\n')[1].split()[-1]

    def test_cmdline(self):
        out = sh('procstat -c %s' % self.pid)
        assert ' '.join(psutil.Process(self.pid).cmdline()) == ' '.join(
            out.split('\n')[1].split()[2:]
        )

    def test_uids_gids(self):
        out = sh('procstat -s %s' % self.pid)
        euid, ruid, suid, egid, rgid, sgid = out.split('\n')[1].split()[2:8]
        p = psutil.Process(self.pid)
        uids = p.uids()
        gids = p.gids()
        assert uids.real == int(ruid)
        assert uids.effective == int(euid)
        assert uids.saved == int(suid)
        assert gids.real == int(rgid)
        assert gids.effective == int(egid)
        assert gids.saved == int(sgid)

    @retry_on_failure()
    def test_ctx_switches(self):
        tested = []
        out = sh('procstat -r %s' % self.pid)
        p = psutil.Process(self.pid)
        for line in out.split('\n'):
            line = line.lower().strip()
            if ' voluntary context' in line:
                pstat_value = int(line.split()[-1])
                psutil_value = p.num_ctx_switches().voluntary
                assert pstat_value == psutil_value
                tested.append(None)
            elif ' involuntary context' in line:
                pstat_value = int(line.split()[-1])
                psutil_value = p.num_ctx_switches().involuntary
                assert pstat_value == psutil_value
                tested.append(None)
        if len(tested) != 2:
            raise RuntimeError("couldn't find lines match in procstat out")

    @retry_on_failure()
    def test_cpu_times(self):
        tested = []
        out = sh('procstat -r %s' % self.pid)
        p = psutil.Process(self.pid)
        for line in out.split('\n'):
            line = line.lower().strip()
            if 'user time' in line:
                pstat_value = float('0.' + line.split()[-1].split('.')[-1])
                psutil_value = p.cpu_times().user
                assert pstat_value == psutil_value
                tested.append(None)
            elif 'system time' in line:
                pstat_value = float('0.' + line.split()[-1].split('.')[-1])
                psutil_value = p.cpu_times().system
                assert pstat_value == psutil_value
                tested.append(None)
        if len(tested) != 2:
            raise RuntimeError("couldn't find lines match in procstat out")


@pytest.mark.skipif(not FREEBSD, reason="FREEBSD only")
class FreeBSDSystemTestCase(PsutilTestCase):
    @staticmethod
    def parse_swapinfo():
        # the last line is always the total
        output = sh("swapinfo -k").splitlines()[-1]
        parts = re.split(r'\s+', output)

        if not parts:
            raise ValueError("Can't parse swapinfo: %s" % output)

        # the size is in 1k units, so multiply by 1024
        total, used, free = (int(p) * 1024 for p in parts[1:4])
        return total, used, free

    def test_cpu_frequency_against_sysctl(self):
        # Currently only CPU 0's frequency is supported on FreeBSD.
        # All other cores use the same frequency.
        sensor = "dev.cpu.0.freq"
        try:
            sysctl_result = int(sysctl(sensor))
        except RuntimeError:
            raise pytest.skip("frequencies not supported by kernel")
        assert psutil.cpu_freq().current == sysctl_result

        sensor = "dev.cpu.0.freq_levels"
        sysctl_result = sysctl(sensor)
        # sysctl returns a string of the format:
        # <freq_level_1>/<voltage_level_1> <freq_level_2>/<voltage_level_2>...
        # Ordered highest available to lowest available.
        max_freq = int(sysctl_result.split()[0].split("/")[0])
        min_freq = int(sysctl_result.split()[-1].split("/")[0])
        assert psutil.cpu_freq().max == max_freq
        assert psutil.cpu_freq().min == min_freq

    # --- virtual_memory(); tests against sysctl

    @retry_on_failure()
    def test_vmem_active(self):
        syst = sysctl("vm.stats.vm.v_active_count") * PAGESIZE
        assert abs(psutil.virtual_memory().active - syst) < TOLERANCE_SYS_MEM

    @retry_on_failure()
    def test_vmem_inactive(self):
        syst = sysctl("vm.stats.vm.v_inactive_count") * PAGESIZE
        assert abs(psutil.virtual_memory().inactive - syst) < TOLERANCE_SYS_MEM

    @retry_on_failure()
    def test_vmem_wired(self):
        syst = sysctl("vm.stats.vm.v_wire_count") * PAGESIZE
        assert abs(psutil.virtual_memory().wired - syst) < TOLERANCE_SYS_MEM

    @retry_on_failure()
    def test_vmem_cached(self):
        syst = sysctl("vm.stats.vm.v_cache_count") * PAGESIZE
        assert abs(psutil.virtual_memory().cached - syst) < TOLERANCE_SYS_MEM

    @retry_on_failure()
    def test_vmem_free(self):
        syst = sysctl("vm.stats.vm.v_free_count") * PAGESIZE
        assert abs(psutil.virtual_memory().free - syst) < TOLERANCE_SYS_MEM

    @retry_on_failure()
    def test_vmem_buffers(self):
        syst = sysctl("vfs.bufspace")
        assert abs(psutil.virtual_memory().buffers - syst) < TOLERANCE_SYS_MEM

    # --- virtual_memory(); tests against muse

    @pytest.mark.skipif(not MUSE_AVAILABLE, reason="muse not installed")
    def test_muse_vmem_total(self):
        num = muse('Total')
        assert psutil.virtual_memory().total == num

    @pytest.mark.skipif(not MUSE_AVAILABLE, reason="muse not installed")
    @retry_on_failure()
    def test_muse_vmem_active(self):
        num = muse('Active')
        assert abs(psutil.virtual_memory().active - num) < TOLERANCE_SYS_MEM

    @pytest.mark.skipif(not MUSE_AVAILABLE, reason="muse not installed")
    @retry_on_failure()
    def test_muse_vmem_inactive(self):
        num = muse('Inactive')
        assert abs(psutil.virtual_memory().inactive - num) < TOLERANCE_SYS_MEM

    @pytest.mark.skipif(not MUSE_AVAILABLE, reason="muse not installed")
    @retry_on_failure()
    def test_muse_vmem_wired(self):
        num = muse('Wired')
        assert abs(psutil.virtual_memory().wired - num) < TOLERANCE_SYS_MEM

    @pytest.mark.skipif(not MUSE_AVAILABLE, reason="muse not installed")
    @retry_on_failure()
    def test_muse_vmem_cached(self):
        num = muse('Cache')
        assert abs(psutil.virtual_memory().cached - num) < TOLERANCE_SYS_MEM

    @pytest.mark.skipif(not MUSE_AVAILABLE, reason="muse not installed")
    @retry_on_failure()
    def test_muse_vmem_free(self):
        num = muse('Free')
        assert abs(psutil.virtual_memory().free - num) < TOLERANCE_SYS_MEM

    @pytest.mark.skipif(not MUSE_AVAILABLE, reason="muse not installed")
    @retry_on_failure()
    def test_muse_vmem_buffers(self):
        num = muse('Buffer')
        assert abs(psutil.virtual_memory().buffers - num) < TOLERANCE_SYS_MEM

    def test_cpu_stats_ctx_switches(self):
        assert (
            abs(
                psutil.cpu_stats().ctx_switches
                - sysctl('vm.stats.sys.v_swtch')
            )
            < 1000
        )

    def test_cpu_stats_interrupts(self):
        assert (
            abs(psutil.cpu_stats().interrupts - sysctl('vm.stats.sys.v_intr'))
            < 1000
        )

    def test_cpu_stats_soft_interrupts(self):
        assert (
            abs(
                psutil.cpu_stats().soft_interrupts
                - sysctl('vm.stats.sys.v_soft')
            )
            < 1000
        )

    @retry_on_failure()
    def test_cpu_stats_syscalls(self):
        # pretty high tolerance but it looks like it's OK.
        assert (
            abs(psutil.cpu_stats().syscalls - sysctl('vm.stats.sys.v_syscall'))
            < 200000
        )

    # def test_cpu_stats_traps(self):
    #    self.assertAlmostEqual(psutil.cpu_stats().traps,
    #                           sysctl('vm.stats.sys.v_trap'), delta=1000)

    # --- swap memory

    def test_swapmem_free(self):
        _total, _used, free = self.parse_swapinfo()
        assert abs(psutil.swap_memory().free - free) < TOLERANCE_SYS_MEM

    def test_swapmem_used(self):
        _total, used, _free = self.parse_swapinfo()
        assert abs(psutil.swap_memory().used - used) < TOLERANCE_SYS_MEM

    def test_swapmem_total(self):
        total, _used, _free = self.parse_swapinfo()
        assert abs(psutil.swap_memory().total - total) < TOLERANCE_SYS_MEM

    # --- others

    def test_boot_time(self):
        s = sysctl('kern.boottime')
        s = s[s.find(" sec = ") + 7 :]
        s = s[: s.find(',')]
        btime = int(s)
        assert btime == psutil.boot_time()

    # --- sensors_battery

    @pytest.mark.skipif(not HAS_BATTERY, reason="no battery")
    def test_sensors_battery(self):
        def secs2hours(secs):
            m, _s = divmod(secs, 60)
            h, m = divmod(m, 60)
            return "%d:%02d" % (h, m)

        out = sh("acpiconf -i 0")
        fields = dict(
            [(x.split('\t')[0], x.split('\t')[-1]) for x in out.split("\n")]
        )
        metrics = psutil.sensors_battery()
        percent = int(fields['Remaining capacity:'].replace('%', ''))
        remaining_time = fields['Remaining time:']
        assert metrics.percent == percent
        if remaining_time == 'unknown':
            assert metrics.secsleft == psutil.POWER_TIME_UNLIMITED
        else:
            assert secs2hours(metrics.secsleft) == remaining_time

    @pytest.mark.skipif(not HAS_BATTERY, reason="no battery")
    def test_sensors_battery_against_sysctl(self):
        assert psutil.sensors_battery().percent == sysctl(
            "hw.acpi.battery.life"
        )
        assert psutil.sensors_battery().power_plugged == (
            sysctl("hw.acpi.acline") == 1
        )
        secsleft = psutil.sensors_battery().secsleft
        if secsleft < 0:
            assert sysctl("hw.acpi.battery.time") == -1
        else:
            assert secsleft == sysctl("hw.acpi.battery.time") * 60

    @pytest.mark.skipif(HAS_BATTERY, reason="has battery")
    def test_sensors_battery_no_battery(self):
        # If no battery is present one of these calls is supposed
        # to fail, see:
        # https://github.com/giampaolo/psutil/issues/1074
        with pytest.raises(RuntimeError):
            sysctl("hw.acpi.battery.life")
            sysctl("hw.acpi.battery.time")
            sysctl("hw.acpi.acline")
        assert psutil.sensors_battery() is None

    # --- sensors_temperatures

    def test_sensors_temperatures_against_sysctl(self):
        num_cpus = psutil.cpu_count(True)
        for cpu in range(num_cpus):
            sensor = "dev.cpu.%s.temperature" % cpu
            # sysctl returns a string in the format 46.0C
            try:
                sysctl_result = int(float(sysctl(sensor)[:-1]))
            except RuntimeError:
                raise pytest.skip("temperatures not supported by kernel")
            assert (
                abs(
                    psutil.sensors_temperatures()["coretemp"][cpu].current
                    - sysctl_result
                )
                < 10
            )

            sensor = "dev.cpu.%s.coretemp.tjmax" % cpu
            sysctl_result = int(float(sysctl(sensor)[:-1]))
            assert (
                psutil.sensors_temperatures()["coretemp"][cpu].high
                == sysctl_result
            )


# =====================================================================
# --- OpenBSD
# =====================================================================


@pytest.mark.skipif(not OPENBSD, reason="OPENBSD only")
class OpenBSDTestCase(PsutilTestCase):
    def test_boot_time(self):
        s = sysctl('kern.boottime')
        sys_bt = datetime.datetime.strptime(s, "%a %b %d %H:%M:%S %Y")
        psutil_bt = datetime.datetime.fromtimestamp(psutil.boot_time())
        assert sys_bt == psutil_bt


# =====================================================================
# --- NetBSD
# =====================================================================


@pytest.mark.skipif(not NETBSD, reason="NETBSD only")
class NetBSDTestCase(PsutilTestCase):
    @staticmethod
    def parse_meminfo(look_for):
        with open('/proc/meminfo') as f:
            for line in f:
                if line.startswith(look_for):
                    return int(line.split()[1]) * 1024
        raise ValueError("can't find %s" % look_for)

    # --- virtual mem

    def test_vmem_total(self):
        assert psutil.virtual_memory().total == self.parse_meminfo("MemTotal:")

    def test_vmem_free(self):
        assert (
            abs(psutil.virtual_memory().free - self.parse_meminfo("MemFree:"))
            < TOLERANCE_SYS_MEM
        )

    def test_vmem_buffers(self):
        assert (
            abs(
                psutil.virtual_memory().buffers
                - self.parse_meminfo("Buffers:")
            )
            < TOLERANCE_SYS_MEM
        )

    def test_vmem_shared(self):
        assert (
            abs(
                psutil.virtual_memory().shared
                - self.parse_meminfo("MemShared:")
            )
            < TOLERANCE_SYS_MEM
        )

    def test_vmem_cached(self):
        assert (
            abs(psutil.virtual_memory().cached - self.parse_meminfo("Cached:"))
            < TOLERANCE_SYS_MEM
        )

    # --- swap mem

    def test_swapmem_total(self):
        assert (
            abs(psutil.swap_memory().total - self.parse_meminfo("SwapTotal:"))
            < TOLERANCE_SYS_MEM
        )

    def test_swapmem_free(self):
        assert (
            abs(psutil.swap_memory().free - self.parse_meminfo("SwapFree:"))
            < TOLERANCE_SYS_MEM
        )

    def test_swapmem_used(self):
        smem = psutil.swap_memory()
        assert smem.used == smem.total - smem.free

    # --- others

    def test_cpu_stats_interrupts(self):
        with open('/proc/stat', 'rb') as f:
            for line in f:
                if line.startswith(b'intr'):
                    interrupts = int(line.split()[1])
                    break
            else:
                raise ValueError("couldn't find line")
        assert abs(psutil.cpu_stats().interrupts - interrupts) < 1000

    def test_cpu_stats_ctx_switches(self):
        with open('/proc/stat', 'rb') as f:
            for line in f:
                if line.startswith(b'ctxt'):
                    ctx_switches = int(line.split()[1])
                    break
            else:
                raise ValueError("couldn't find line")
        assert abs(psutil.cpu_stats().ctx_switches - ctx_switches) < 1000
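The two /proc/stat checks above share the same scan-for-prefix shape; a minimal standalone sketch (hypothetical helper name, Linux/NetBSD procfs layout assumed):

```python
def read_proc_stat_field(prefix):
    # Return the first numeric field of the first /proc/stat line that
    # starts with `prefix` (a bytes literal such as b'intr' or b'ctxt').
    with open('/proc/stat', 'rb') as f:
        for line in f:
            if line.startswith(prefix):
                return int(line.split()[1])
    raise ValueError("couldn't find line starting with %r" % prefix)
```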
# =====================================================================
# --- File: psutil/tests/test_memleaks.py
# =====================================================================

#!/usr/bin/env python3

# Copyright (c) 2009, Giampaolo Rodola'. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

"""Tests for detecting function memory leaks (typically the ones
implemented in C). It does so by calling a function many times and
checking whether process memory usage keeps increasing between
calls or over time.
Note that this may produce false positives (especially on Windows
for some reason).
PyPy appears to be completely unstable for this framework, probably
because of how its JIT handles memory, so tests are skipped.
"""
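A minimal, self-contained sketch of the strategy described above (read RSS, call the function repeatedly, compare). This is a Linux-only illustration with hypothetical names, not psutil's actual TestMemoryLeak implementation:

```python
import gc
import os


def rss():
    # Resident set size in bytes, read from /proc/self/statm (Linux).
    with open("/proc/self/statm") as f:
        return int(f.read().split()[1]) * os.sysconf("SC_PAGE_SIZE")


def appears_to_leak(fun, times=1000, tolerance=1 << 20):
    fun()  # warm-up call: the first invocation may legitimately allocate caches
    gc.collect()
    before = rss()
    for _ in range(times):
        fun()
    gc.collect()
    # Growth above `tolerance` bytes is treated as a possible leak; small
    # fluctuations are expected, hence the (generous) tolerance.
    return rss() - before > tolerance
```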

from __future__ import print_function

import functools
import os
import platform

import psutil
import psutil._common
from psutil import LINUX
from psutil import MACOS
from psutil import OPENBSD
from psutil import POSIX
from psutil import SUNOS
from psutil import WINDOWS
from psutil._compat import ProcessLookupError
from psutil._compat import super
from psutil.tests import HAS_CPU_AFFINITY
from psutil.tests import HAS_CPU_FREQ
from psutil.tests import HAS_ENVIRON
from psutil.tests import HAS_IONICE
from psutil.tests import HAS_MEMORY_MAPS
from psutil.tests import HAS_NET_IO_COUNTERS
from psutil.tests import HAS_PROC_CPU_NUM
from psutil.tests import HAS_PROC_IO_COUNTERS
from psutil.tests import HAS_RLIMIT
from psutil.tests import HAS_SENSORS_BATTERY
from psutil.tests import HAS_SENSORS_FANS
from psutil.tests import HAS_SENSORS_TEMPERATURES
from psutil.tests import QEMU_USER
from psutil.tests import TestMemoryLeak
from psutil.tests import create_sockets
from psutil.tests import get_testfn
from psutil.tests import process_namespace
from psutil.tests import pytest
from psutil.tests import skip_on_access_denied
from psutil.tests import spawn_testproc
from psutil.tests import system_namespace
from psutil.tests import terminate


cext = psutil._psplatform.cext
thisproc = psutil.Process()
FEW_TIMES = 5


def fewtimes_if_linux():
    """Decorator for those Linux functions which are implemented in pure
    Python, and which we want to run faster.
    """

    def decorator(fun):
        @functools.wraps(fun)
        def wrapper(self, *args, **kwargs):
            if LINUX:
                before = self.__class__.times
                try:
                    self.__class__.times = FEW_TIMES
                    return fun(self, *args, **kwargs)
                finally:
                    self.__class__.times = before
            else:
                return fun(self, *args, **kwargs)

        return wrapper

    return decorator
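The save/override/restore pattern used by fewtimes_if_linux() above, reduced to a standalone sketch (hypothetical names):

```python
import functools


def override_times(n):
    # Temporarily replace the class-level `times` attribute for the
    # duration of a single method call, restoring it even on failure.
    def decorator(fun):
        @functools.wraps(fun)
        def wrapper(self, *args, **kwargs):
            before = self.__class__.times
            try:
                self.__class__.times = n
                return fun(self, *args, **kwargs)
            finally:
                self.__class__.times = before
        return wrapper
    return decorator


class Demo:
    times = 1000

    @override_times(5)
    def run(self):
        return self.__class__.times
```

Demo().run() observes times == 5, while Demo.times is restored to 1000 afterwards.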


# ===================================================================
# Process class
# ===================================================================


class TestProcessObjectLeaks(TestMemoryLeak):
    """Test leaks of Process class methods."""

    proc = thisproc

    def test_coverage(self):
        ns = process_namespace(None)
        ns.test_class_coverage(self, ns.getters + ns.setters)

    @fewtimes_if_linux()
    def test_name(self):
        self.execute(self.proc.name)

    @fewtimes_if_linux()
    def test_cmdline(self):
        self.execute(self.proc.cmdline)

    @fewtimes_if_linux()
    def test_exe(self):
        self.execute(self.proc.exe)

    @fewtimes_if_linux()
    def test_ppid(self):
        self.execute(self.proc.ppid)

    @pytest.mark.skipif(not POSIX, reason="POSIX only")
    @fewtimes_if_linux()
    def test_uids(self):
        self.execute(self.proc.uids)

    @pytest.mark.skipif(not POSIX, reason="POSIX only")
    @fewtimes_if_linux()
    def test_gids(self):
        self.execute(self.proc.gids)

    @fewtimes_if_linux()
    def test_status(self):
        self.execute(self.proc.status)

    def test_nice(self):
        self.execute(self.proc.nice)

    def test_nice_set(self):
        niceness = thisproc.nice()
        self.execute(lambda: self.proc.nice(niceness))

    @pytest.mark.skipif(not HAS_IONICE, reason="not supported")
    def test_ionice(self):
        self.execute(self.proc.ionice)

    @pytest.mark.skipif(not HAS_IONICE, reason="not supported")
    def test_ionice_set(self):
        if WINDOWS:
            value = thisproc.ionice()
            self.execute(lambda: self.proc.ionice(value))
        else:
            self.execute(lambda: self.proc.ionice(psutil.IOPRIO_CLASS_NONE))
            fun = functools.partial(cext.proc_ioprio_set, os.getpid(), -1, 0)
            self.execute_w_exc(OSError, fun)

    @pytest.mark.skipif(not HAS_PROC_IO_COUNTERS, reason="not supported")
    @fewtimes_if_linux()
    def test_io_counters(self):
        self.execute(self.proc.io_counters)

    @pytest.mark.skipif(POSIX, reason="worthless on POSIX")
    def test_username(self):
        # always opens 1 handle on Windows (only once)
        psutil.Process().username()
        self.execute(self.proc.username)

    @fewtimes_if_linux()
    def test_create_time(self):
        self.execute(self.proc.create_time)

    @fewtimes_if_linux()
    @skip_on_access_denied(only_if=OPENBSD)
    def test_num_threads(self):
        self.execute(self.proc.num_threads)

    @pytest.mark.skipif(not WINDOWS, reason="WINDOWS only")
    def test_num_handles(self):
        self.execute(self.proc.num_handles)

    @pytest.mark.skipif(not POSIX, reason="POSIX only")
    @fewtimes_if_linux()
    def test_num_fds(self):
        self.execute(self.proc.num_fds)

    @fewtimes_if_linux()
    def test_num_ctx_switches(self):
        self.execute(self.proc.num_ctx_switches)

    @fewtimes_if_linux()
    @skip_on_access_denied(only_if=OPENBSD)
    def test_threads(self):
        self.execute(self.proc.threads)

    @fewtimes_if_linux()
    def test_cpu_times(self):
        self.execute(self.proc.cpu_times)

    @fewtimes_if_linux()
    @pytest.mark.skipif(not HAS_PROC_CPU_NUM, reason="not supported")
    def test_cpu_num(self):
        self.execute(self.proc.cpu_num)

    @fewtimes_if_linux()
    def test_memory_info(self):
        self.execute(self.proc.memory_info)

    @fewtimes_if_linux()
    def test_memory_full_info(self):
        self.execute(self.proc.memory_full_info)

    @pytest.mark.skipif(not POSIX, reason="POSIX only")
    @fewtimes_if_linux()
    def test_terminal(self):
        self.execute(self.proc.terminal)

    def test_resume(self):
        times = FEW_TIMES if POSIX else self.times
        self.execute(self.proc.resume, times=times)

    @fewtimes_if_linux()
    def test_cwd(self):
        self.execute(self.proc.cwd)

    @pytest.mark.skipif(not HAS_CPU_AFFINITY, reason="not supported")
    def test_cpu_affinity(self):
        self.execute(self.proc.cpu_affinity)

    @pytest.mark.skipif(not HAS_CPU_AFFINITY, reason="not supported")
    def test_cpu_affinity_set(self):
        affinity = thisproc.cpu_affinity()
        self.execute(lambda: self.proc.cpu_affinity(affinity))
        self.execute_w_exc(ValueError, lambda: self.proc.cpu_affinity([-1]))

    @fewtimes_if_linux()
    def test_open_files(self):
        with open(get_testfn(), 'w'):
            self.execute(self.proc.open_files)

    @pytest.mark.skipif(not HAS_MEMORY_MAPS, reason="not supported")
    @fewtimes_if_linux()
    def test_memory_maps(self):
        self.execute(self.proc.memory_maps)

    @pytest.mark.skipif(not LINUX, reason="LINUX only")
    @pytest.mark.skipif(not HAS_RLIMIT, reason="not supported")
    def test_rlimit(self):
        self.execute(lambda: self.proc.rlimit(psutil.RLIMIT_NOFILE))

    @pytest.mark.skipif(not LINUX, reason="LINUX only")
    @pytest.mark.skipif(not HAS_RLIMIT, reason="not supported")
    def test_rlimit_set(self):
        limit = thisproc.rlimit(psutil.RLIMIT_NOFILE)
        self.execute(lambda: self.proc.rlimit(psutil.RLIMIT_NOFILE, limit))
        self.execute_w_exc((OSError, ValueError), lambda: self.proc.rlimit(-1))

    @fewtimes_if_linux()
    # Windows implementation is based on a single system-wide
    # function (tested later).
    @pytest.mark.skipif(WINDOWS, reason="worthless on WINDOWS")
    def test_net_connections(self):
        # TODO: UNIX sockets are temporarily implemented by parsing
        # 'pfiles' cmd output; we don't want that part of the code to
        # be executed.
        with create_sockets():
            kind = 'inet' if SUNOS else 'all'
            self.execute(lambda: self.proc.net_connections(kind))

    @pytest.mark.skipif(not HAS_ENVIRON, reason="not supported")
    def test_environ(self):
        self.execute(self.proc.environ)

    @pytest.mark.skipif(not WINDOWS, reason="WINDOWS only")
    def test_proc_info(self):
        self.execute(lambda: cext.proc_info(os.getpid()))


class TestTerminatedProcessLeaks(TestProcessObjectLeaks):
    """Repeat the tests above looking for leaks occurring when dealing
    with terminated processes raising NoSuchProcess exception.
    The C functions are still invoked but will follow different code
    paths. We'll check those code paths.
    """

    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        cls.subp = spawn_testproc()
        cls.proc = psutil.Process(cls.subp.pid)
        cls.proc.kill()
        cls.proc.wait()

    @classmethod
    def tearDownClass(cls):
        super().tearDownClass()
        terminate(cls.subp)

    def call(self, fun):
        try:
            fun()
        except psutil.NoSuchProcess:
            pass

    if WINDOWS:

        def test_kill(self):
            self.execute(self.proc.kill)

        def test_terminate(self):
            self.execute(self.proc.terminate)

        def test_suspend(self):
            self.execute(self.proc.suspend)

        def test_resume(self):
            self.execute(self.proc.resume)

        def test_wait(self):
            self.execute(self.proc.wait)

        def test_proc_info(self):
            # test dual implementation
            def call():
                try:
                    return cext.proc_info(self.proc.pid)
                except ProcessLookupError:
                    pass

            self.execute(call)


@pytest.mark.skipif(not WINDOWS, reason="WINDOWS only")
class TestProcessDualImplementation(TestMemoryLeak):
    def test_cmdline_peb_true(self):
        self.execute(lambda: cext.proc_cmdline(os.getpid(), use_peb=True))

    def test_cmdline_peb_false(self):
        self.execute(lambda: cext.proc_cmdline(os.getpid(), use_peb=False))


# ===================================================================
# system APIs
# ===================================================================


class TestModuleFunctionsLeaks(TestMemoryLeak):
    """Test leaks of psutil module functions."""

    def test_coverage(self):
        ns = system_namespace()
        ns.test_class_coverage(self, ns.all)

    # --- cpu

    @fewtimes_if_linux()
    def test_cpu_count(self):  # logical
        self.execute(lambda: psutil.cpu_count(logical=True))

    @fewtimes_if_linux()
    def test_cpu_count_cores(self):
        self.execute(lambda: psutil.cpu_count(logical=False))

    @fewtimes_if_linux()
    def test_cpu_times(self):
        self.execute(psutil.cpu_times)

    @fewtimes_if_linux()
    def test_per_cpu_times(self):
        self.execute(lambda: psutil.cpu_times(percpu=True))

    @fewtimes_if_linux()
    def test_cpu_stats(self):
        self.execute(psutil.cpu_stats)

    @fewtimes_if_linux()
    # TODO: remove this once 1892 is fixed
    @pytest.mark.skipif(
        MACOS and platform.machine() == 'arm64', reason="skipped due to #1892"
    )
    @pytest.mark.skipif(not HAS_CPU_FREQ, reason="not supported")
    def test_cpu_freq(self):
        self.execute(psutil.cpu_freq)

    @pytest.mark.skipif(not WINDOWS, reason="WINDOWS only")
    def test_getloadavg(self):
        psutil.getloadavg()
        self.execute(psutil.getloadavg)

    # --- mem

    def test_virtual_memory(self):
        self.execute(psutil.virtual_memory)

    # TODO: remove this skip when this gets fixed
    @pytest.mark.skipif(SUNOS, reason="worthless on SUNOS (uses a subprocess)")
    def test_swap_memory(self):
        self.execute(psutil.swap_memory)

    def test_pid_exists(self):
        times = FEW_TIMES if POSIX else self.times
        self.execute(lambda: psutil.pid_exists(os.getpid()), times=times)

    # --- disk

    def test_disk_usage(self):
        times = FEW_TIMES if POSIX else self.times
        self.execute(lambda: psutil.disk_usage('.'), times=times)

    @pytest.mark.skipif(QEMU_USER, reason="QEMU user not supported")
    def test_disk_partitions(self):
        self.execute(psutil.disk_partitions)

    @pytest.mark.skipif(
        LINUX and not os.path.exists('/proc/diskstats'),
        reason="/proc/diskstats not available on this Linux version",
    )
    @fewtimes_if_linux()
    def test_disk_io_counters(self):
        self.execute(lambda: psutil.disk_io_counters(nowrap=False))

    # --- proc

    @fewtimes_if_linux()
    def test_pids(self):
        self.execute(psutil.pids)

    # --- net

    @fewtimes_if_linux()
    @pytest.mark.skipif(not HAS_NET_IO_COUNTERS, reason="not supported")
    def test_net_io_counters(self):
        self.execute(lambda: psutil.net_io_counters(nowrap=False))

    @fewtimes_if_linux()
    @pytest.mark.skipif(MACOS and os.getuid() != 0, reason="need root access")
    def test_net_connections(self):
        # always opens 1 handle on Windows (only once)
        psutil.net_connections(kind='all')
        with create_sockets():
            self.execute(lambda: psutil.net_connections(kind='all'))

    def test_net_if_addrs(self):
        # Note: verified that on Windows this was a false positive.
        tolerance = 80 * 1024 if WINDOWS else self.tolerance
        self.execute(psutil.net_if_addrs, tolerance=tolerance)

    @pytest.mark.skipif(QEMU_USER, reason="QEMU user not supported")
    def test_net_if_stats(self):
        self.execute(psutil.net_if_stats)

    # --- sensors

    @fewtimes_if_linux()
    @pytest.mark.skipif(not HAS_SENSORS_BATTERY, reason="not supported")
    def test_sensors_battery(self):
        self.execute(psutil.sensors_battery)

    @fewtimes_if_linux()
    @pytest.mark.skipif(not HAS_SENSORS_TEMPERATURES, reason="not supported")
    def test_sensors_temperatures(self):
        self.execute(psutil.sensors_temperatures)

    @fewtimes_if_linux()
    @pytest.mark.skipif(not HAS_SENSORS_FANS, reason="not supported")
    def test_sensors_fans(self):
        self.execute(psutil.sensors_fans)

    # --- others

    @fewtimes_if_linux()
    def test_boot_time(self):
        self.execute(psutil.boot_time)

    def test_users(self):
        self.execute(psutil.users)

    def test_set_debug(self):
        self.execute(lambda: psutil._set_debug(False))

    if WINDOWS:

        # --- win services

        def test_win_service_iter(self):
            self.execute(cext.winservice_enumerate)

        def test_win_service_get(self):
            pass

        def test_win_service_get_config(self):
            name = next(psutil.win_service_iter()).name()
            self.execute(lambda: cext.winservice_query_config(name))

        def test_win_service_get_status(self):
            name = next(psutil.win_service_iter()).name()
            self.execute(lambda: cext.winservice_query_status(name))

        def test_win_service_get_description(self):
            name = next(psutil.win_service_iter()).name()
            self.execute(lambda: cext.winservice_query_descr(name))
# =====================================================================
# --- File: psutil/tests/test_contracts.py
# =====================================================================

#!/usr/bin/env python3

# Copyright (c) 2009, Giampaolo Rodola'. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

"""Contracts tests. These tests mainly check API sanity in terms of
returned types and APIs availability.
Some of these duplicate tests in test_system.py and test_process.py.
"""

import platform
import signal

import psutil
from psutil import AIX
from psutil import FREEBSD
from psutil import LINUX
from psutil import MACOS
from psutil import NETBSD
from psutil import OPENBSD
from psutil import POSIX
from psutil import SUNOS
from psutil import WINDOWS
from psutil._compat import long
from psutil.tests import GITHUB_ACTIONS
from psutil.tests import HAS_CPU_FREQ
from psutil.tests import HAS_NET_IO_COUNTERS
from psutil.tests import HAS_SENSORS_FANS
from psutil.tests import HAS_SENSORS_TEMPERATURES
from psutil.tests import PYPY
from psutil.tests import QEMU_USER
from psutil.tests import SKIP_SYSCONS
from psutil.tests import PsutilTestCase
from psutil.tests import create_sockets
from psutil.tests import enum
from psutil.tests import is_namedtuple
from psutil.tests import kernel_version
from psutil.tests import pytest


# ===================================================================
# --- APIs availability
# ===================================================================

# Make sure code reflects what doc promises in terms of APIs
# availability.


class TestAvailConstantsAPIs(PsutilTestCase):
    def test_PROCFS_PATH(self):
        assert hasattr(psutil, "PROCFS_PATH") == (LINUX or SUNOS or AIX)

    def test_win_priority(self):
        ae = self.assertEqual
        ae(hasattr(psutil, "ABOVE_NORMAL_PRIORITY_CLASS"), WINDOWS)
        ae(hasattr(psutil, "BELOW_NORMAL_PRIORITY_CLASS"), WINDOWS)
        ae(hasattr(psutil, "HIGH_PRIORITY_CLASS"), WINDOWS)
        ae(hasattr(psutil, "IDLE_PRIORITY_CLASS"), WINDOWS)
        ae(hasattr(psutil, "NORMAL_PRIORITY_CLASS"), WINDOWS)
        ae(hasattr(psutil, "REALTIME_PRIORITY_CLASS"), WINDOWS)

    def test_linux_ioprio_linux(self):
        ae = self.assertEqual
        ae(hasattr(psutil, "IOPRIO_CLASS_NONE"), LINUX)
        ae(hasattr(psutil, "IOPRIO_CLASS_RT"), LINUX)
        ae(hasattr(psutil, "IOPRIO_CLASS_BE"), LINUX)
        ae(hasattr(psutil, "IOPRIO_CLASS_IDLE"), LINUX)

    def test_linux_ioprio_windows(self):
        ae = self.assertEqual
        ae(hasattr(psutil, "IOPRIO_HIGH"), WINDOWS)
        ae(hasattr(psutil, "IOPRIO_NORMAL"), WINDOWS)
        ae(hasattr(psutil, "IOPRIO_LOW"), WINDOWS)
        ae(hasattr(psutil, "IOPRIO_VERYLOW"), WINDOWS)

    @pytest.mark.skipif(
        GITHUB_ACTIONS and LINUX,
        reason="unsupported on GITHUB_ACTIONS + LINUX",
    )
    def test_rlimit(self):
        ae = self.assertEqual
        ae(hasattr(psutil, "RLIM_INFINITY"), LINUX or FREEBSD)
        ae(hasattr(psutil, "RLIMIT_AS"), LINUX or FREEBSD)
        ae(hasattr(psutil, "RLIMIT_CORE"), LINUX or FREEBSD)
        ae(hasattr(psutil, "RLIMIT_CPU"), LINUX or FREEBSD)
        ae(hasattr(psutil, "RLIMIT_DATA"), LINUX or FREEBSD)
        ae(hasattr(psutil, "RLIMIT_FSIZE"), LINUX or FREEBSD)
        ae(hasattr(psutil, "RLIMIT_MEMLOCK"), LINUX or FREEBSD)
        ae(hasattr(psutil, "RLIMIT_NOFILE"), LINUX or FREEBSD)
        ae(hasattr(psutil, "RLIMIT_NPROC"), LINUX or FREEBSD)
        ae(hasattr(psutil, "RLIMIT_RSS"), LINUX or FREEBSD)
        ae(hasattr(psutil, "RLIMIT_STACK"), LINUX or FREEBSD)

        ae(hasattr(psutil, "RLIMIT_LOCKS"), LINUX)
        if POSIX:
            if kernel_version() >= (2, 6, 8):
                ae(hasattr(psutil, "RLIMIT_MSGQUEUE"), LINUX)
            if kernel_version() >= (2, 6, 12):
                ae(hasattr(psutil, "RLIMIT_NICE"), LINUX)
            if kernel_version() >= (2, 6, 12):
                ae(hasattr(psutil, "RLIMIT_RTPRIO"), LINUX)
            if kernel_version() >= (2, 6, 25):
                ae(hasattr(psutil, "RLIMIT_RTTIME"), LINUX)
            if kernel_version() >= (2, 6, 8):
                ae(hasattr(psutil, "RLIMIT_SIGPENDING"), LINUX)

        ae(hasattr(psutil, "RLIMIT_SWAP"), FREEBSD)
        ae(hasattr(psutil, "RLIMIT_SBSIZE"), FREEBSD)
        ae(hasattr(psutil, "RLIMIT_NPTS"), FREEBSD)


class TestAvailSystemAPIs(PsutilTestCase):
    def test_win_service_iter(self):
        assert hasattr(psutil, "win_service_iter") == WINDOWS

    def test_win_service_get(self):
        assert hasattr(psutil, "win_service_get") == WINDOWS

    def test_cpu_freq(self):
        assert hasattr(psutil, "cpu_freq") == (
            LINUX or MACOS or WINDOWS or FREEBSD or OPENBSD
        )

    def test_sensors_temperatures(self):
        assert hasattr(psutil, "sensors_temperatures") == (LINUX or FREEBSD)

    def test_sensors_fans(self):
        assert hasattr(psutil, "sensors_fans") == LINUX

    def test_battery(self):
        assert hasattr(psutil, "sensors_battery") == (
            LINUX or WINDOWS or FREEBSD or MACOS
        )


class TestAvailProcessAPIs(PsutilTestCase):
    def test_environ(self):
        assert hasattr(psutil.Process, "environ") == (
            LINUX
            or MACOS
            or WINDOWS
            or AIX
            or SUNOS
            or FREEBSD
            or OPENBSD
            or NETBSD
        )

    def test_uids(self):
        assert hasattr(psutil.Process, "uids") == POSIX

    def test_gids(self):
        assert hasattr(psutil.Process, "gids") == POSIX

    def test_terminal(self):
        assert hasattr(psutil.Process, "terminal") == POSIX

    def test_ionice(self):
        assert hasattr(psutil.Process, "ionice") == (LINUX or WINDOWS)

    @pytest.mark.skipif(
        GITHUB_ACTIONS and LINUX,
        reason="unsupported on GITHUB_ACTIONS + LINUX",
    )
    def test_rlimit(self):
        assert hasattr(psutil.Process, "rlimit") == (LINUX or FREEBSD)

    def test_io_counters(self):
        hasit = hasattr(psutil.Process, "io_counters")
        assert hasit == (not (MACOS or SUNOS))

    def test_num_fds(self):
        assert hasattr(psutil.Process, "num_fds") == POSIX

    def test_num_handles(self):
        assert hasattr(psutil.Process, "num_handles") == WINDOWS

    def test_cpu_affinity(self):
        assert hasattr(psutil.Process, "cpu_affinity") == (
            LINUX or WINDOWS or FREEBSD
        )

    def test_cpu_num(self):
        assert hasattr(psutil.Process, "cpu_num") == (
            LINUX or FREEBSD or SUNOS
        )

    def test_memory_maps(self):
        hasit = hasattr(psutil.Process, "memory_maps")
        assert hasit == (not (OPENBSD or NETBSD or AIX or MACOS))


# ===================================================================
# --- API types
# ===================================================================


class TestSystemAPITypes(PsutilTestCase):
    """Check the return types of system related APIs.
    Mainly we want to test we never return unicode on Python 2, see:
    https://github.com/giampaolo/psutil/issues/1039.
    """

    @classmethod
    def setUpClass(cls):
        cls.proc = psutil.Process()

    def assert_ntuple_of_nums(self, nt, type_=float, gezero=True):
        assert is_namedtuple(nt)
        for n in nt:
            assert isinstance(n, type_)
            if gezero:
                assert n >= 0

    def test_cpu_times(self):
        self.assert_ntuple_of_nums(psutil.cpu_times())
        for nt in psutil.cpu_times(percpu=True):
            self.assert_ntuple_of_nums(nt)

    def test_cpu_percent(self):
        assert isinstance(psutil.cpu_percent(interval=None), float)
        assert isinstance(psutil.cpu_percent(interval=0.00001), float)

    def test_cpu_times_percent(self):
        self.assert_ntuple_of_nums(psutil.cpu_times_percent(interval=None))
        self.assert_ntuple_of_nums(psutil.cpu_times_percent(interval=0.0001))

    def test_cpu_count(self):
        assert isinstance(psutil.cpu_count(), int)

    # TODO: remove this once 1892 is fixed
    @pytest.mark.skipif(
        MACOS and platform.machine() == 'arm64', reason="skipped due to #1892"
    )
    @pytest.mark.skipif(not HAS_CPU_FREQ, reason="not supported")
    def test_cpu_freq(self):
        if psutil.cpu_freq() is None:
            pytest.skip("cpu_freq() returns None")
        self.assert_ntuple_of_nums(psutil.cpu_freq(), type_=(float, int, long))

    def test_disk_io_counters(self):
        # Duplicate of test_system.py. Keep it anyway.
        for k, v in psutil.disk_io_counters(perdisk=True).items():
            assert isinstance(k, str)
            self.assert_ntuple_of_nums(v, type_=(int, long))

    def test_disk_partitions(self):
        # Duplicate of test_system.py. Keep it anyway.
        for disk in psutil.disk_partitions():
            assert isinstance(disk.device, str)
            assert isinstance(disk.mountpoint, str)
            assert isinstance(disk.fstype, str)
            assert isinstance(disk.opts, str)

    @pytest.mark.skipif(SKIP_SYSCONS, reason="requires root")
    def test_net_connections(self):
        with create_sockets():
            ret = psutil.net_connections('all')
            assert len(ret) == len(set(ret))
            for conn in ret:
                assert is_namedtuple(conn)

    def test_net_if_addrs(self):
        # Duplicate of test_system.py. Keep it anyway.
        for ifname, addrs in psutil.net_if_addrs().items():
            assert isinstance(ifname, str)
            for addr in addrs:
                if enum is not None and not PYPY:
                    assert isinstance(addr.family, enum.IntEnum)
                else:
                    assert isinstance(addr.family, int)
                assert isinstance(addr.address, str)
                assert isinstance(addr.netmask, (str, type(None)))
                assert isinstance(addr.broadcast, (str, type(None)))

    @pytest.mark.skipif(QEMU_USER, reason="QEMU user not supported")
    def test_net_if_stats(self):
        # Duplicate of test_system.py. Keep it anyway.
        for ifname, info in psutil.net_if_stats().items():
            assert isinstance(ifname, str)
            assert isinstance(info.isup, bool)
            if enum is not None:
                assert isinstance(info.duplex, enum.IntEnum)
            else:
                assert isinstance(info.duplex, int)
            assert isinstance(info.speed, int)
            assert isinstance(info.mtu, int)

    @pytest.mark.skipif(not HAS_NET_IO_COUNTERS, reason="not supported")
    def test_net_io_counters(self):
        # Duplicate of test_system.py. Keep it anyway.
        for ifname in psutil.net_io_counters(pernic=True):
            assert isinstance(ifname, str)

    @pytest.mark.skipif(not HAS_SENSORS_FANS, reason="not supported")
    def test_sensors_fans(self):
        # Duplicate of test_system.py. Keep it anyway.
        for name, units in psutil.sensors_fans().items():
            assert isinstance(name, str)
            for unit in units:
                assert isinstance(unit.label, str)
                assert isinstance(unit.current, (float, int, type(None)))

    @pytest.mark.skipif(not HAS_SENSORS_TEMPERATURES, reason="not supported")
    def test_sensors_temperatures(self):
        # Duplicate of test_system.py. Keep it anyway.
        for name, units in psutil.sensors_temperatures().items():
            assert isinstance(name, str)
            for unit in units:
                assert isinstance(unit.label, str)
                assert isinstance(unit.current, (float, int, type(None)))
                assert isinstance(unit.high, (float, int, type(None)))
                assert isinstance(unit.critical, (float, int, type(None)))

    def test_boot_time(self):
        # Duplicate of test_system.py. Keep it anyway.
        assert isinstance(psutil.boot_time(), float)

    def test_users(self):
        # Duplicate of test_system.py. Keep it anyway.
        for user in psutil.users():
            assert isinstance(user.name, str)
            assert isinstance(user.terminal, (str, type(None)))
            assert isinstance(user.host, (str, type(None)))
            assert isinstance(user.pid, (int, type(None)))


class TestProcessWaitType(PsutilTestCase):
    @pytest.mark.skipif(not POSIX, reason="not POSIX")
    def test_negative_signal(self):
        p = psutil.Process(self.spawn_testproc().pid)
        p.terminate()
        code = p.wait()
        assert code == -signal.SIGTERM
        if enum is not None:
            assert isinstance(code, enum.IntEnum)
        else:
            assert isinstance(code, int)
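The negative-exit-code convention checked above mirrors the standard-library subprocess module: a child killed by a signal reports returncode == -signum, which is what psutil.Process.wait() returns. A standalone illustration (POSIX only):

```python
import signal
import subprocess
import sys

# Spawn a child that just sleeps, terminate it with SIGTERM, and
# observe the negated signal number as its return code.
p = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(30)"])
p.terminate()
p.wait()
assert p.returncode == -signal.SIGTERM
```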
# =====================================================================
# --- File: psutil/tests/test_process_all.py
# =====================================================================

#!/usr/bin/env python3

# Copyright (c) 2009, Giampaolo Rodola'. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

"""Iterate over all process PIDs and for each one of them invoke and
test all psutil.Process() methods.
"""

import enum
import errno
import multiprocessing
import os
import stat
import time
import traceback

import psutil
from psutil import AIX
from psutil import BSD
from psutil import FREEBSD
from psutil import LINUX
from psutil import MACOS
from psutil import NETBSD
from psutil import OPENBSD
from psutil import OSX
from psutil import POSIX
from psutil import WINDOWS
from psutil._compat import PY3
from psutil._compat import FileNotFoundError
from psutil._compat import long
from psutil._compat import unicode
from psutil.tests import CI_TESTING
from psutil.tests import PYTEST_PARALLEL
from psutil.tests import QEMU_USER
from psutil.tests import VALID_PROC_STATUSES
from psutil.tests import PsutilTestCase
from psutil.tests import check_connection_ntuple
from psutil.tests import create_sockets
from psutil.tests import is_namedtuple
from psutil.tests import is_win_secure_system_proc
from psutil.tests import process_namespace
from psutil.tests import pytest


# Cuts the time in half, but (e.g.) on macOS the process pool stays
# alive after join() (multiprocessing bug?), messing up other tests.
USE_PROC_POOL = LINUX and not CI_TESTING and not PYTEST_PARALLEL


def proc_info(pid):
    tcase = PsutilTestCase()

    def check_exception(exc, proc, name, ppid):
        tcase.assertEqual(exc.pid, pid)
        if exc.name is not None:
            tcase.assertEqual(exc.name, name)
        if isinstance(exc, psutil.ZombieProcess):
            tcase.assertProcessZombie(proc)
            if exc.ppid is not None:
                tcase.assertGreaterEqual(exc.ppid, 0)
                tcase.assertEqual(exc.ppid, ppid)
        elif isinstance(exc, psutil.NoSuchProcess):
            tcase.assertProcessGone(proc)
        str(exc)
        repr(exc)

    def do_wait():
        if pid != 0:
            try:
                proc.wait(0)
            except psutil.Error as exc:
                check_exception(exc, proc, name, ppid)

    try:
        proc = psutil.Process(pid)
    except psutil.NoSuchProcess:
        tcase.assertPidGone(pid)
        return {}
    try:
        d = proc.as_dict(['ppid', 'name'])
    except psutil.NoSuchProcess:
        tcase.assertProcessGone(proc)
    else:
        name, ppid = d['name'], d['ppid']
        info = {'pid': proc.pid}
        ns = process_namespace(proc)
        # We don't use oneshot() in order not to fool
        # check_exception() in case of NSP (NoSuchProcess).
        for fun, fun_name in ns.iter(ns.getters, clear_cache=False):
            try:
                info[fun_name] = fun()
            except psutil.Error as exc:
                check_exception(exc, proc, name, ppid)
                continue
        do_wait()
        return info


class TestFetchAllProcesses(PsutilTestCase):
    """Test which iterates over all running processes and performs
    some sanity checks against Process API's returned values.
    Uses a process pool to get info about all processes.
    """

    def setUp(self):
        psutil._set_debug(False)
        # Using a pool in a CI env may result in deadlock, see:
        # https://github.com/giampaolo/psutil/issues/2104
        if USE_PROC_POOL:
            self.pool = multiprocessing.Pool()

    def tearDown(self):
        psutil._set_debug(True)
        if USE_PROC_POOL:
            self.pool.terminate()
            self.pool.join()

    def iter_proc_info(self):
        # Fixes "can't pickle <function proc_info>: it's not the
        # same object as test_process_all.proc_info".
        from psutil.tests.test_process_all import proc_info

        if USE_PROC_POOL:
            return self.pool.imap_unordered(proc_info, psutil.pids())
        else:
            ls = []
            for pid in psutil.pids():
                ls.append(proc_info(pid))
            return ls

    def test_all(self):
        failures = []
        for info in self.iter_proc_info():
            for name, value in info.items():
                meth = getattr(self, name)
                try:
                    meth(value, info)
                except Exception:  # noqa: BLE001
                    s = '\n' + '=' * 70 + '\n'
                    s += "FAIL: name=test_%s, pid=%s, ret=%s\ninfo=%s\n" % (
                        name,
                        info['pid'],
                        repr(value),
                        info,
                    )
                    s += '-' * 70
                    s += "\n%s" % traceback.format_exc()
                    s = "\n".join((" " * 4) + i for i in s.splitlines()) + "\n"
                    failures.append(s)
                else:
                    if value not in (0, 0.0, [], None, '', {}):
                        assert value, value
        if failures:
            self.fail(''.join(failures))

    def cmdline(self, ret, info):
        assert isinstance(ret, list)
        for part in ret:
            assert isinstance(part, str)

    def exe(self, ret, info):
        assert isinstance(ret, (str, unicode))
        assert ret.strip() == ret
        if ret:
            if WINDOWS and not ret.endswith('.exe'):
                return  # May be "Registry", "MemCompression", ...
            assert os.path.isabs(ret), ret
            # Note: os.path.exists() may return False even if the file
            # is there, hence we skip the test, see:
            # http://stackoverflow.com/questions/3112546/os-path-exists-lies
            if POSIX and os.path.isfile(ret):
                if hasattr(os, 'access') and hasattr(os, "X_OK"):
                    # XXX: may fail on MACOS
                    try:
                        assert os.access(ret, os.X_OK)
                    except AssertionError:
                        if os.path.exists(ret) and not CI_TESTING:
                            raise

    def pid(self, ret, info):
        assert isinstance(ret, int)
        assert ret >= 0

    def ppid(self, ret, info):
        assert isinstance(ret, (int, long))
        assert ret >= 0
        proc_info(ret)

    def name(self, ret, info):
        assert isinstance(ret, (str, unicode))
        if WINDOWS and not ret and is_win_secure_system_proc(info['pid']):
            # https://github.com/giampaolo/psutil/issues/2338
            return
        # on AIX, "<exiting>" processes don't have names
        if not AIX:
            assert ret, repr(ret)

    def create_time(self, ret, info):
        assert isinstance(ret, float)
        try:
            assert ret >= 0
        except AssertionError:
            # XXX
            if OPENBSD and info['status'] == psutil.STATUS_ZOMBIE:
                pass
            else:
                raise
        # this can't be taken for granted on all platforms
        # self.assertGreaterEqual(ret, psutil.boot_time())
        # make sure returned value can be pretty printed
        # with strftime
        time.strftime("%Y %m %d %H:%M:%S", time.localtime(ret))
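The strftime round trip above can be exercised deterministically by using `time.gmtime()` instead of `time.localtime()`; a standalone sketch, not part of the test suite:

```python
# Sketch: any non-negative float timestamp should pretty-print cleanly.
# gmtime() is used here so the output does not depend on the local
# timezone, unlike the localtime() call in create_time() above.
import time

stamp = time.strftime("%Y %m %d %H:%M:%S", time.gmtime(0.0))
print(stamp)  # 1970 01 01 00:00:00
```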

    def uids(self, ret, info):
        assert is_namedtuple(ret)
        for uid in ret:
            assert isinstance(uid, int)
            assert uid >= 0

    def gids(self, ret, info):
        assert is_namedtuple(ret)
        # note: testing all gids as above seems not to be reliable for
        # gid == 30 (nobody); not sure why.
        for gid in ret:
            assert isinstance(gid, int)
            if not MACOS and not NETBSD:
                assert gid >= 0

    def username(self, ret, info):
        assert isinstance(ret, str)
        assert ret.strip() == ret
        assert ret.strip()

    def status(self, ret, info):
        assert isinstance(ret, str)
        assert ret, ret
        if QEMU_USER:
            # status does not work under qemu user
            return
        assert ret != '?'  # XXX
        assert ret in VALID_PROC_STATUSES

    def io_counters(self, ret, info):
        assert is_namedtuple(ret)
        for field in ret:
            assert isinstance(field, (int, long))
            if field != -1:
                assert field >= 0

    def ionice(self, ret, info):
        if LINUX:
            assert isinstance(ret.ioclass, int)
            assert isinstance(ret.value, int)
            assert ret.ioclass >= 0
            assert ret.value >= 0
        else:  # Windows, Cygwin
            choices = [
                psutil.IOPRIO_VERYLOW,
                psutil.IOPRIO_LOW,
                psutil.IOPRIO_NORMAL,
                psutil.IOPRIO_HIGH,
            ]
            assert isinstance(ret, int)
            assert ret >= 0
            assert ret in choices

    def num_threads(self, ret, info):
        assert isinstance(ret, int)
        if WINDOWS and ret == 0 and is_win_secure_system_proc(info['pid']):
            # https://github.com/giampaolo/psutil/issues/2338
            return
        assert ret >= 1

    def threads(self, ret, info):
        assert isinstance(ret, list)
        for t in ret:
            assert is_namedtuple(t)
            assert t.id >= 0
            assert t.user_time >= 0
            assert t.system_time >= 0
            for field in t:
                assert isinstance(field, (int, float))

    def cpu_times(self, ret, info):
        assert is_namedtuple(ret)
        for n in ret:
            assert isinstance(n, float)
            assert n >= 0
        # TODO: check ntuple fields

    def cpu_percent(self, ret, info):
        assert isinstance(ret, float)
        assert 0.0 <= ret <= 100.0, ret

    def cpu_num(self, ret, info):
        assert isinstance(ret, int)
        if FREEBSD and ret == -1:
            return
        assert ret >= 0
        if psutil.cpu_count() == 1:
            assert ret == 0
        assert ret in list(range(psutil.cpu_count()))

    def memory_info(self, ret, info):
        assert is_namedtuple(ret)
        for value in ret:
            assert isinstance(value, (int, long))
            assert value >= 0
        if WINDOWS:
            assert ret.peak_wset >= ret.wset
            assert ret.peak_paged_pool >= ret.paged_pool
            assert ret.peak_nonpaged_pool >= ret.nonpaged_pool
            assert ret.peak_pagefile >= ret.pagefile

    def memory_full_info(self, ret, info):
        assert is_namedtuple(ret)
        total = psutil.virtual_memory().total
        for name in ret._fields:
            value = getattr(ret, name)
            assert isinstance(value, (int, long))
            assert value >= 0
            if LINUX or (OSX and name in ('vms', 'data')):
                # On Linux there are processes (e.g. 'goa-daemon') whose
                # VMS is incredibly high for some reason.
                continue
            assert value <= total, name

        if LINUX:
            assert ret.pss >= ret.uss

    def open_files(self, ret, info):
        assert isinstance(ret, list)
        for f in ret:
            assert isinstance(f.fd, int)
            assert isinstance(f.path, str)
            assert f.path.strip() == f.path
            if WINDOWS:
                assert f.fd == -1
            elif LINUX:
                assert isinstance(f.position, int)
                assert isinstance(f.mode, str)
                assert isinstance(f.flags, int)
                assert f.position >= 0
                assert f.mode in ('r', 'w', 'a', 'r+', 'a+')
                assert f.flags > 0
            elif BSD and not f.path:
                # XXX see: https://github.com/giampaolo/psutil/issues/595
                continue
            assert os.path.isabs(f.path), f
            try:
                st = os.stat(f.path)
            except FileNotFoundError:
                pass
            else:
                assert stat.S_ISREG(st.st_mode), f

    def num_fds(self, ret, info):
        assert isinstance(ret, int)
        assert ret >= 0

    def net_connections(self, ret, info):
        with create_sockets():
            assert len(ret) == len(set(ret))
            for conn in ret:
                assert is_namedtuple(conn)
                check_connection_ntuple(conn)

    def cwd(self, ret, info):
        assert isinstance(ret, (str, unicode))
        assert ret.strip() == ret
        if ret:
            assert os.path.isabs(ret), ret
            try:
                st = os.stat(ret)
            except OSError as err:
                if WINDOWS and psutil._psplatform.is_permission_err(err):
                    pass
                # directory has been removed in mean time
                elif err.errno != errno.ENOENT:
                    raise
            else:
                assert stat.S_ISDIR(st.st_mode)

    def memory_percent(self, ret, info):
        assert isinstance(ret, float)
        assert 0 <= ret <= 100, ret

    def is_running(self, ret, info):
        assert isinstance(ret, bool)

    def cpu_affinity(self, ret, info):
        assert isinstance(ret, list)
        assert ret != []
        cpus = list(range(psutil.cpu_count()))
        for n in ret:
            assert isinstance(n, int)
            assert n in cpus

    def terminal(self, ret, info):
        assert isinstance(ret, (str, type(None)))
        if ret is not None:
            assert os.path.isabs(ret), ret
            assert os.path.exists(ret), ret

    def memory_maps(self, ret, info):
        for nt in ret:
            assert isinstance(nt.addr, str)
            assert isinstance(nt.perms, str)
            assert isinstance(nt.path, str)
            for fname in nt._fields:
                value = getattr(nt, fname)
                if fname == 'path':
                    if not value.startswith(("[", "anon_inode:")):
                        assert os.path.isabs(nt.path), nt.path
                        # commented as on Linux we might get
                        # '/foo/bar (deleted)'
                        # assert os.path.exists(nt.path), nt.path
                elif fname == 'addr':
                    assert value, repr(value)
                elif fname == 'perms':
                    if not WINDOWS:
                        assert value, repr(value)
                else:
                    assert isinstance(value, (int, long))
                    assert value >= 0

    def num_handles(self, ret, info):
        assert isinstance(ret, int)
        assert ret >= 0

    def nice(self, ret, info):
        assert isinstance(ret, int)
        if POSIX:
            assert -20 <= ret <= 20, ret
        else:
            priorities = [
                getattr(psutil, x)
                for x in dir(psutil)
                if x.endswith('_PRIORITY_CLASS')
            ]
            assert ret in priorities
            if PY3:
                assert isinstance(ret, enum.IntEnum)
            else:
                assert isinstance(ret, int)

    def num_ctx_switches(self, ret, info):
        assert is_namedtuple(ret)
        for value in ret:
            assert isinstance(value, (int, long))
            assert value >= 0

    def rlimit(self, ret, info):
        assert isinstance(ret, tuple)
        assert len(ret) == 2
        assert ret[0] >= -1
        assert ret[1] >= -1

    def environ(self, ret, info):
        assert isinstance(ret, dict)
        for k, v in ret.items():
            assert isinstance(k, str)
            assert isinstance(v, str)


class TestPidsRange(PsutilTestCase):
    """Given pid_exists() return value for a range of PIDs which may or
    may not exist, make sure that psutil.Process() and psutil.pids()
    agree with pid_exists(). This guarantees that the 3 APIs are all
    consistent with each other. See:
    https://github.com/giampaolo/psutil/issues/2359

    XXX - Note about Windows: it turns out there are some "hidden" PIDs
    which are not returned by psutil.pids() and are also not revealed
    by taskmgr.exe and ProcessHacker, yet they can be instantiated by
    psutil.Process() and queried. One such PID belongs to "conhost.exe".
    Running as_dict() for it reveals that some Process() APIs
    erroneously raise NoSuchProcess, so we know we have a problem there.
    Let's ignore this for now, since it's quite a corner case (who even
    imagined hidden PIDs existed on Windows?).
    """

    def setUp(self):
        psutil._set_debug(False)

    def tearDown(self):
        psutil._set_debug(True)

    def test_it(self):
        def is_linux_tid(pid):
            try:
                f = open("/proc/%s/status" % pid, "rb")
            except FileNotFoundError:
                return False
            else:
                with f:
                    for line in f:
                        if line.startswith(b"Tgid:"):
                            tgid = int(line.split()[1])
                            # If tgid and pid are different then we're
                            # dealing with a process TID.
                            return tgid != pid
                    raise ValueError("'Tgid' line not found")

        def check(pid):
            # In case of failure retry up to 3 times in order to avoid
            # race conditions, especially when running in a CI
            # environment where PIDs may appear and disappear at any
            # time.
            x = 3
            while True:
                exists = psutil.pid_exists(pid)
                try:
                    if exists:
                        psutil.Process(pid)
                        if not WINDOWS:  # see docstring
                            assert pid in psutil.pids()
                    else:
                        # On OpenBSD thread IDs can be instantiated,
                        # and oneshot() succeeds, but other APIs fail
                        # with EINVAL.
                        if not OPENBSD:
                            with pytest.raises(psutil.NoSuchProcess):
                                psutil.Process(pid)
                        if not WINDOWS:  # see docstring
                            assert pid not in psutil.pids()
                except (psutil.Error, AssertionError):
                    x -= 1
                    if x == 0:
                        raise
                else:
                    return

        for pid in range(1, 3000):
            if LINUX and is_linux_tid(pid):
                # On Linux a TID (thread ID) can be passed to the
                # Process class and is queryable like a PID (process
                # ID). Skip it.
                continue
            with self.subTest(pid=pid):
                check(pid)
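The bounded-retry loop inside `check()` can be isolated as a small stdlib sketch; `flaky` is a hypothetical stand-in for the psutil calls that may fail transiently:

```python
# Retry a flaky check up to 3 times, re-raising only on the final failure.
attempts = {"n": 0}


def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise AssertionError("transient failure")


x = 3
while True:
    try:
        flaky()
    except AssertionError:
        x -= 1
        if x == 0:
            raise
    else:
        break
print(attempts["n"])  # 3
```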
# ===== file: psutil/tests/__init__.py =====
# -*- coding: utf-8 -*-

# Copyright (c) 2009, Giampaolo Rodola'. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

"""Test utilities."""

from __future__ import print_function

import atexit
import contextlib
import ctypes
import errno
import functools
import gc
import os
import platform
import random
import re
import select
import shlex
import shutil
import signal
import socket
import stat
import subprocess
import sys
import tempfile
import textwrap
import threading
import time
import unittest
import warnings
from socket import AF_INET
from socket import AF_INET6
from socket import SOCK_STREAM


try:
    import pytest
except ImportError:
    pytest = None

import psutil
from psutil import AIX
from psutil import LINUX
from psutil import MACOS
from psutil import NETBSD
from psutil import OPENBSD
from psutil import POSIX
from psutil import SUNOS
from psutil import WINDOWS
from psutil._common import bytes2human
from psutil._common import debug
from psutil._common import memoize
from psutil._common import print_color
from psutil._common import supports_ipv6
from psutil._compat import PY3
from psutil._compat import FileExistsError
from psutil._compat import FileNotFoundError
from psutil._compat import range
from psutil._compat import super
from psutil._compat import unicode
from psutil._compat import which


try:
    from unittest import mock  # py3
except ImportError:
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")
        import mock  # NOQA - requires "pip install mock"

if PY3:
    import enum
else:
    import unittest2 as unittest

    enum = None

if POSIX:
    from psutil._psposix import wait_pid


# fmt: off
__all__ = [
    # constants
    'APPVEYOR', 'DEVNULL', 'GLOBAL_TIMEOUT', 'TOLERANCE_SYS_MEM', 'NO_RETRIES',
    'PYPY', 'PYTHON_EXE', 'PYTHON_EXE_ENV', 'ROOT_DIR', 'SCRIPTS_DIR',
    'TESTFN_PREFIX', 'UNICODE_SUFFIX', 'INVALID_UNICODE_SUFFIX',
    'CI_TESTING', 'VALID_PROC_STATUSES', 'TOLERANCE_DISK_USAGE', 'IS_64BIT',
    "HAS_CPU_AFFINITY", "HAS_CPU_FREQ", "HAS_ENVIRON", "HAS_PROC_IO_COUNTERS",
    "HAS_IONICE", "HAS_MEMORY_MAPS", "HAS_PROC_CPU_NUM", "HAS_RLIMIT",
    "HAS_SENSORS_BATTERY", "HAS_BATTERY", "HAS_SENSORS_FANS",
    "HAS_SENSORS_TEMPERATURES", "HAS_NET_CONNECTIONS_UNIX", "MACOS_11PLUS",
    "MACOS_12PLUS", "COVERAGE", 'AARCH64', "QEMU_USER", "PYTEST_PARALLEL",
    # subprocesses
    'pyrun', 'terminate', 'reap_children', 'spawn_testproc', 'spawn_zombie',
    'spawn_children_pair',
    # threads
    'ThreadTask',
    # test utils
    'unittest', 'skip_on_access_denied', 'skip_on_not_implemented',
    'retry_on_failure', 'TestMemoryLeak', 'PsutilTestCase',
    'process_namespace', 'system_namespace', 'print_sysinfo',
    'is_win_secure_system_proc', 'fake_pytest',
    # fs utils
    'chdir', 'safe_rmpath', 'create_py_exe', 'create_c_exe', 'get_testfn',
    # os
    'get_winver', 'kernel_version',
    # sync primitives
    'call_until', 'wait_for_pid', 'wait_for_file',
    # network
    'check_net_address', 'filter_proc_net_connections',
    'get_free_port', 'bind_socket', 'bind_unix_socket', 'tcp_socketpair',
    'unix_socketpair', 'create_sockets',
    # compat
    'reload_module', 'import_module_by_path',
    # others
    'warn', 'copyload_shared_lib', 'is_namedtuple',
]
# fmt: on


# ===================================================================
# --- constants
# ===================================================================

# --- platforms

PYPY = '__pypy__' in sys.builtin_module_names
# whether we're running this test suite on a Continuous Integration service
APPVEYOR = 'APPVEYOR' in os.environ
GITHUB_ACTIONS = 'GITHUB_ACTIONS' in os.environ or 'CIBUILDWHEEL' in os.environ
CI_TESTING = APPVEYOR or GITHUB_ACTIONS
COVERAGE = 'COVERAGE_RUN' in os.environ
PYTEST_PARALLEL = "PYTEST_XDIST_WORKER" in os.environ  # `make test-parallel`
if LINUX and GITHUB_ACTIONS:
    with open('/proc/1/cmdline') as f:
        QEMU_USER = "/bin/qemu-" in f.read()
else:
    QEMU_USER = False
# are we a 64 bit process?
IS_64BIT = sys.maxsize > 2**32
AARCH64 = platform.machine() == "aarch64"


@memoize
def macos_version():
    version_str = platform.mac_ver()[0]
    version = tuple(map(int, version_str.split(".")[:2]))
    if version == (10, 16):
        # When built against an older macOS SDK, Python will report
        # macOS 10.16 instead of the real version.
        version_str = subprocess.check_output(
            [
                sys.executable,
                "-sS",
                "-c",
                "import platform; print(platform.mac_ver()[0])",
            ],
            env={"SYSTEM_VERSION_COMPAT": "0"},
            universal_newlines=True,
        )
        version = tuple(map(int, version_str.split(".")[:2]))
    return version


if MACOS:
    MACOS_11PLUS = macos_version() > (10, 15)
    MACOS_12PLUS = macos_version() >= (12, 0)
else:
    MACOS_11PLUS = False
    MACOS_12PLUS = False


# --- configurable defaults

# how many times retry_on_failure() decorator will retry
NO_RETRIES = 10
# bytes tolerance for system-wide related tests
TOLERANCE_SYS_MEM = 5 * 1024 * 1024  # 5MB
TOLERANCE_DISK_USAGE = 10 * 1024 * 1024  # 10MB
# the timeout used in functions which have to wait
GLOBAL_TIMEOUT = 5
# be more tolerant if we're on CI in order to avoid false positives
if CI_TESTING:
    NO_RETRIES *= 3
    GLOBAL_TIMEOUT *= 3
    TOLERANCE_SYS_MEM *= 4
    TOLERANCE_DISK_USAGE *= 3

# --- file names

# Disambiguate TESTFN for parallel testing.
if os.name == 'java':
    # Jython disallows @ in module names
    TESTFN_PREFIX = '$psutil-%s-' % os.getpid()
else:
    TESTFN_PREFIX = '@psutil-%s-' % os.getpid()
UNICODE_SUFFIX = u"-ƒőő"
# An invalid unicode string.
if PY3:
    INVALID_UNICODE_SUFFIX = b"f\xc0\x80".decode('utf8', 'surrogateescape')
else:
    INVALID_UNICODE_SUFFIX = "f\xc0\x80"
ASCII_FS = sys.getfilesystemencoding().lower() in ('ascii', 'us-ascii')
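The `surrogateescape` handler used for INVALID_UNICODE_SUFFIX keeps undecodable bytes round-trippable; a small sketch of what it buys over the lossy alternatives:

```python
# Invalid UTF-8 bytes become lone surrogates instead of raising, so the
# original byte string can be recovered exactly on re-encode.
raw = b"f\xc0\x80"
s = raw.decode("utf8", "surrogateescape")
assert s == "f\udcc0\udc80"                        # bytes mapped to surrogates
assert s.encode("utf8", "surrogateescape") == raw  # lossless round trip
print(len(s))  # 3
```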

# --- paths

ROOT_DIR = os.path.realpath(
    os.path.join(os.path.dirname(__file__), '..', '..')
)
SCRIPTS_DIR = os.environ.get(
    "PSUTIL_SCRIPTS_DIR", os.path.join(ROOT_DIR, 'scripts')
)
HERE = os.path.realpath(os.path.dirname(__file__))

# --- support

HAS_CPU_AFFINITY = hasattr(psutil.Process, "cpu_affinity")
HAS_CPU_FREQ = hasattr(psutil, "cpu_freq")
HAS_ENVIRON = hasattr(psutil.Process, "environ")
HAS_GETLOADAVG = hasattr(psutil, "getloadavg")
HAS_IONICE = hasattr(psutil.Process, "ionice")
HAS_MEMORY_MAPS = hasattr(psutil.Process, "memory_maps")
HAS_NET_CONNECTIONS_UNIX = POSIX and not SUNOS
HAS_NET_IO_COUNTERS = hasattr(psutil, "net_io_counters")
HAS_PROC_CPU_NUM = hasattr(psutil.Process, "cpu_num")
HAS_PROC_IO_COUNTERS = hasattr(psutil.Process, "io_counters")
HAS_RLIMIT = hasattr(psutil.Process, "rlimit")
HAS_SENSORS_BATTERY = hasattr(psutil, "sensors_battery")
try:
    HAS_BATTERY = HAS_SENSORS_BATTERY and bool(psutil.sensors_battery())
except Exception:  # noqa: BLE001
    HAS_BATTERY = False
HAS_SENSORS_FANS = hasattr(psutil, "sensors_fans")
HAS_SENSORS_TEMPERATURES = hasattr(psutil, "sensors_temperatures")
HAS_THREADS = hasattr(psutil.Process, "threads")
SKIP_SYSCONS = (MACOS or AIX) and os.getuid() != 0

# --- misc


def _get_py_exe():
    def attempt(exe):
        try:
            subprocess.check_call(
                [exe, "-V"], stdout=subprocess.PIPE, stderr=subprocess.PIPE
            )
        except subprocess.CalledProcessError:
            return None
        else:
            return exe

    env = os.environ.copy()

    # On Windows, starting with python 3.7, virtual environments use a
    # venv launcher startup process. This does not play well when
    # counting spawned processes, or when relying on the PID of the
    # spawned process to do some checks, e.g. connections check per PID.
    # Let's use the base python in this case.
    base = getattr(sys, "_base_executable", None)
    if WINDOWS and sys.version_info >= (3, 7) and base is not None:
        # We need to set __PYVENV_LAUNCHER__ to sys.executable for the
        # base python executable to know about the environment.
        env["__PYVENV_LAUNCHER__"] = sys.executable
        return base, env
    elif GITHUB_ACTIONS:
        return sys.executable, env
    elif MACOS:
        exe = (
            attempt(sys.executable)
            or attempt(os.path.realpath(sys.executable))
            or attempt(which("python%s.%s" % sys.version_info[:2]))
            or attempt(psutil.Process().exe())
        )
        if not exe:
            raise ValueError("can't find python exe real abspath")
        return exe, env
    else:
        exe = os.path.realpath(sys.executable)
        assert os.path.exists(exe), exe
        return exe, env


PYTHON_EXE, PYTHON_EXE_ENV = _get_py_exe()
DEVNULL = open(os.devnull, 'r+')
atexit.register(DEVNULL.close)

VALID_PROC_STATUSES = [
    getattr(psutil, x) for x in dir(psutil) if x.startswith('STATUS_')
]
AF_UNIX = getattr(socket, "AF_UNIX", object())

_subprocesses_started = set()
_pids_started = set()


# ===================================================================
# --- threads
# ===================================================================


class ThreadTask(threading.Thread):
    """A thread task which does nothing expect staying alive."""

    def __init__(self):
        super().__init__()
        self._running = False
        self._interval = 0.001
        self._flag = threading.Event()

    def __repr__(self):
        name = self.__class__.__name__
        return '<%s running=%s at %#x>' % (name, self._running, id(self))

    def __enter__(self):
        self.start()
        return self

    def __exit__(self, *args, **kwargs):
        self.stop()

    def start(self):
        """Start thread and keep it running until an explicit
        stop() request. Polls for shutdown every 'timeout' seconds.
        """
        if self._running:
            raise ValueError("already started")
        threading.Thread.start(self)
        self._flag.wait()

    def run(self):
        self._running = True
        self._flag.set()
        while self._running:
            time.sleep(self._interval)

    def stop(self):
        """Stop thread execution and and waits until it is stopped."""
        if not self._running:
            raise ValueError("already stopped")
        self._running = False
        self.join()
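The Event-based startup handshake that `ThreadTask.start()` performs can be sketched with bare stdlib threading; the `worker` and `flag` names are illustrative:

```python
# start() would race with the caller if it returned before run() began;
# waiting on an Event set by the thread itself closes that window.
import threading
import time

flag = threading.Event()
stop = threading.Event()


def worker():
    flag.set()            # signal "I am actually running"
    while not stop.is_set():
        time.sleep(0.001)


t = threading.Thread(target=worker)
t.start()
flag.wait()               # returns only once worker() has started
stop.set()
t.join()
print(t.is_alive())  # False
```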


# ===================================================================
# --- subprocesses
# ===================================================================


def _reap_children_on_err(fun):
    @functools.wraps(fun)
    def wrapper(*args, **kwargs):
        try:
            return fun(*args, **kwargs)
        except Exception:
            reap_children()
            raise

    return wrapper


@_reap_children_on_err
def spawn_testproc(cmd=None, **kwds):
    """Create a python subprocess which does nothing for some secs and
    return it as a subprocess.Popen instance.
    If "cmd" is specified that is used instead of python.
    By default stdin and stdout are redirected to /dev/null.
    It also attempts to make sure the process is in a reasonably
    initialized state.
    The process is registered for cleanup on reap_children().
    """
    kwds.setdefault("stdin", DEVNULL)
    kwds.setdefault("stdout", DEVNULL)
    kwds.setdefault("cwd", os.getcwd())
    kwds.setdefault("env", PYTHON_EXE_ENV)
    if WINDOWS:
        # Prevent the subprocess from opening error dialogs. This also
        # suppresses stderr, which is suboptimal when debugging broken
        # tests.
        CREATE_NO_WINDOW = 0x8000000
        kwds.setdefault("creationflags", CREATE_NO_WINDOW)
    if cmd is None:
        testfn = get_testfn(dir=os.getcwd())
        try:
            safe_rmpath(testfn)
            pyline = (
                "import time;"
                + "open(r'%s', 'w').close();" % testfn
                + "[time.sleep(0.1) for x in range(100)];"  # 10 secs
            )
            cmd = [PYTHON_EXE, "-c", pyline]
            sproc = subprocess.Popen(cmd, **kwds)
            _subprocesses_started.add(sproc)
            wait_for_file(testfn, delete=True, empty=True)
        finally:
            safe_rmpath(testfn)
    else:
        sproc = subprocess.Popen(cmd, **kwds)
        _subprocesses_started.add(sproc)
        wait_for_pid(sproc.pid)
    return sproc
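The file-touch readiness handshake used above (the child creates a file once it is up, the parent polls for it) can be sketched standalone; the `ready` filename is illustrative:

```python
import os
import subprocess
import sys
import tempfile
import time

path = os.path.join(tempfile.mkdtemp(), "ready")
# The child signals readiness by creating the file, then exits.
child = subprocess.Popen(
    [sys.executable, "-c", "open(r'%s', 'w').close()" % path]
)
deadline = time.time() + 10
while not os.path.exists(path) and time.time() < deadline:
    time.sleep(0.01)
child.wait()
print(os.path.exists(path))  # True
```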


@_reap_children_on_err
def spawn_children_pair():
    """Create a subprocess which creates another one as in:
    A (us) -> B (child) -> C (grandchild).
    Return a (child, grandchild) tuple.
    The 2 processes are fully initialized, will live for 60 secs, and
    are registered for cleanup on reap_children().
    """
    tfile = None
    testfn = get_testfn(dir=os.getcwd())
    try:
        s = textwrap.dedent("""\
            import subprocess, os, sys, time
            s = "import os, time;"
            s += "f = open('%s', 'w');"
            s += "f.write(str(os.getpid()));"
            s += "f.close();"
            s += "[time.sleep(0.1) for x in range(100 * 6)];"
            p = subprocess.Popen([r'%s', '-c', s])
            p.wait()
            """ % (os.path.basename(testfn), PYTHON_EXE))
        # On Windows if we create a subprocess with CREATE_NO_WINDOW flag
        # set (which is the default) a "conhost.exe" extra process will be
        # spawned as a child. We don't want that.
        if WINDOWS:
            subp, tfile = pyrun(s, creationflags=0)
        else:
            subp, tfile = pyrun(s)
        child = psutil.Process(subp.pid)
        grandchild_pid = int(wait_for_file(testfn, delete=True, empty=False))
        _pids_started.add(grandchild_pid)
        grandchild = psutil.Process(grandchild_pid)
        return (child, grandchild)
    finally:
        safe_rmpath(testfn)
        if tfile is not None:
            safe_rmpath(tfile)


def spawn_zombie():
    """Create a zombie process and return a (parent, zombie) process tuple.
    In order to kill the zombie, the parent must be terminate()d
    first, then the zombie must be wait()ed on.
    """
    assert psutil.POSIX
    unix_file = get_testfn()
    src = textwrap.dedent("""\
        import os, sys, time, socket, contextlib
        child_pid = os.fork()
        if child_pid > 0:
            time.sleep(3000)
        else:
            # this is the zombie process
            s = socket.socket(socket.AF_UNIX)
            with contextlib.closing(s):
                s.connect('%s')
                if sys.version_info < (3, ):
                    pid = str(os.getpid())
                else:
                    pid = bytes(str(os.getpid()), 'ascii')
                s.sendall(pid)
        """ % unix_file)
    tfile = None
    sock = bind_unix_socket(unix_file)
    try:
        sock.settimeout(GLOBAL_TIMEOUT)
        parent, tfile = pyrun(src)
        conn, _ = sock.accept()
        try:
            select.select([conn.fileno()], [], [], GLOBAL_TIMEOUT)
            zpid = int(conn.recv(1024))
            _pids_started.add(zpid)
            zombie = psutil.Process(zpid)
            call_until(lambda: zombie.status() == psutil.STATUS_ZOMBIE)
            return (parent, zombie)
        finally:
            conn.close()
    finally:
        sock.close()
        safe_rmpath(unix_file)
        if tfile is not None:
            safe_rmpath(tfile)


@_reap_children_on_err
def pyrun(src, **kwds):
    """Run python 'src' code string in a separate interpreter.
    Returns a subprocess.Popen instance and the test file where the source
    code was written.
    """
    kwds.setdefault("stdout", None)
    kwds.setdefault("stderr", None)
    srcfile = get_testfn()
    try:
        with open(srcfile, "w") as f:
            f.write(src)
        subp = spawn_testproc([PYTHON_EXE, f.name], **kwds)
        wait_for_pid(subp.pid)
        return (subp, srcfile)
    except Exception:
        safe_rmpath(srcfile)
        raise


@_reap_children_on_err
def sh(cmd, **kwds):
    """Run cmd in a subprocess and return its output.
    Raises RuntimeError on error.
    """
    # Prevent the subprocess from opening error dialogs on error.
    flags = 0x8000000 if WINDOWS else 0
    kwds.setdefault("stdout", subprocess.PIPE)
    kwds.setdefault("stderr", subprocess.PIPE)
    kwds.setdefault("universal_newlines", True)
    kwds.setdefault("creationflags", flags)
    if isinstance(cmd, str):
        cmd = shlex.split(cmd)
    p = subprocess.Popen(cmd, **kwds)
    _subprocesses_started.add(p)
    if PY3:
        stdout, stderr = p.communicate(timeout=GLOBAL_TIMEOUT)
    else:
        stdout, stderr = p.communicate()
    if p.returncode != 0:
        raise RuntimeError(stdout + stderr)
    if stderr:
        warn(stderr)
    if stdout.endswith('\n'):
        stdout = stdout[:-1]
    return stdout
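The `sh()` helper above boils down to: split the command, capture text output, raise on a non-zero exit status, and strip one trailing newline. A minimal standalone sketch of that flow (Python 3 only; the name `run_cmd` is illustrative and not part of the test suite):

```python
import shlex
import subprocess


def run_cmd(cmd):
    """Run a command, return its stdout as text, raise RuntimeError on
    non-zero exit status, and strip a single trailing newline (a sketch
    of the sh() helper above).
    """
    if isinstance(cmd, str):
        cmd = shlex.split(cmd)
    p = subprocess.Popen(
        cmd,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        universal_newlines=True,
    )
    stdout, stderr = p.communicate(timeout=60)
    if p.returncode != 0:
        raise RuntimeError(stdout + stderr)
    return stdout[:-1] if stdout.endswith("\n") else stdout
```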


def terminate(proc_or_pid, sig=signal.SIGTERM, wait_timeout=GLOBAL_TIMEOUT):
    """Terminate a process and wait() for it.
    Process can be a PID or an instance of psutil.Process(),
    subprocess.Popen() or psutil.Popen().
    If it's a subprocess.Popen() or psutil.Popen() instance also closes
    its stdin / stdout / stderr fds.
    PID is wait()ed even if the process is already gone (kills zombies).
    Does nothing if the process does not exist.
    Return process exit status.
    """

    def wait(proc, timeout):
        if isinstance(proc, subprocess.Popen) and not PY3:
            proc.wait()
        else:
            proc.wait(timeout)
        if WINDOWS and isinstance(proc, subprocess.Popen):
            # Otherwise PID may still hang around.
            try:
                return psutil.Process(proc.pid).wait(timeout)
            except psutil.NoSuchProcess:
                pass

    def sendsig(proc, sig):
        # XXX: otherwise the build hangs for some reason.
        if MACOS and GITHUB_ACTIONS:
            sig = signal.SIGKILL
        # If the process received SIGSTOP, SIGCONT is necessary first,
        # otherwise SIGTERM won't work.
        if POSIX and sig != signal.SIGKILL:
            proc.send_signal(signal.SIGCONT)
        proc.send_signal(sig)

    def term_subprocess_proc(proc, timeout):
        try:
            sendsig(proc, sig)
        except OSError as err:
            if WINDOWS and err.winerror == 6:  # "invalid handle"
                pass
            elif err.errno != errno.ESRCH:
                raise
        return wait(proc, timeout)

    def term_psutil_proc(proc, timeout):
        try:
            sendsig(proc, sig)
        except psutil.NoSuchProcess:
            pass
        return wait(proc, timeout)

    def term_pid(pid, timeout):
        try:
            proc = psutil.Process(pid)
        except psutil.NoSuchProcess:
            # Needed to kill zombies.
            if POSIX:
                return wait_pid(pid, timeout)
        else:
            return term_psutil_proc(proc, timeout)

    def flush_popen(proc):
        if proc.stdout:
            proc.stdout.close()
        if proc.stderr:
            proc.stderr.close()
        # Flushing a BufferedWriter may raise an error.
        if proc.stdin:
            proc.stdin.close()

    p = proc_or_pid
    try:
        if isinstance(p, int):
            return term_pid(p, wait_timeout)
        elif isinstance(p, (psutil.Process, psutil.Popen)):
            return term_psutil_proc(p, wait_timeout)
        elif isinstance(p, subprocess.Popen):
            return term_subprocess_proc(p, wait_timeout)
        else:
            raise TypeError("wrong type %r" % p)
    finally:
        if isinstance(p, (subprocess.Popen, psutil.Popen)):
            flush_popen(p)
        pid = p if isinstance(p, int) else p.pid
        assert not psutil.pid_exists(pid), pid


def reap_children(recursive=False):
    """Terminate and wait() any subprocess started by this test suite
    and any children currently running, ensuring that no processes stick
    around to hog resources.
    If recursive is True it also tries to terminate and wait()
    all grandchildren started by this process.
    """
    # Get the children here before terminating them, as in case of
    # recursive=True we don't want to lose the intermediate reference
    # pointing to the grandchildren.
    children = psutil.Process().children(recursive=recursive)

    # Terminate subprocess.Popen.
    while _subprocesses_started:
        subp = _subprocesses_started.pop()
        terminate(subp)

    # Collect started pids.
    while _pids_started:
        pid = _pids_started.pop()
        terminate(pid)

    # Terminate children.
    if children:
        for p in children:
            terminate(p, wait_timeout=None)
        _, alive = psutil.wait_procs(children, timeout=GLOBAL_TIMEOUT)
        for p in alive:
            warn("couldn't terminate process %r; attempting kill()" % p)
            terminate(p, sig=signal.SIGKILL)


# ===================================================================
# --- OS
# ===================================================================


def kernel_version():
    """Return a tuple such as (2, 6, 36)."""
    if not POSIX:
        raise NotImplementedError("not POSIX")
    s = ""
    uname = os.uname()[2]
    for c in uname:
        if c.isdigit() or c == '.':
            s += c
        else:
            break
    if not s:
        raise ValueError("can't parse %r" % uname)
    minor = 0
    micro = 0
    nums = s.split('.')
    major = int(nums[0])
    if len(nums) >= 2:
        minor = int(nums[1])
    if len(nums) >= 3:
        micro = int(nums[2])
    return (major, minor, micro)
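The parsing above keeps the leading run of digits and dots from `os.uname()[2]` and pads missing fields with zeros. A self-contained sketch of the same idea, taking the release string as a parameter (`parse_kernel_release` is an illustrative name, not part of the test suite):

```python
def parse_kernel_release(release):
    """Turn a kernel release string such as '5.14.0-570.el9.x86_64'
    into a (major, minor, micro) tuple, mirroring kernel_version()
    above but parameterized for testing.
    """
    s = ""
    for c in release:
        # Keep the leading run of digits and dots; stop at anything else.
        if c.isdigit() or c == '.':
            s += c
        else:
            break
    if not s:
        raise ValueError("can't parse %r" % release)
    # Split into ints and pad missing minor/micro fields with 0.
    nums = [int(x) for x in s.split('.') if x]
    return tuple((nums + [0, 0])[:3])
```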


def get_winver():
    if not WINDOWS:
        raise NotImplementedError("not WINDOWS")
    wv = sys.getwindowsversion()
    if hasattr(wv, 'service_pack_major'):  # python >= 2.7
        sp = wv.service_pack_major or 0
    else:
        r = re.search(r"\s\d$", wv[4])
        sp = int(r.group(0)) if r else 0
    return (wv[0], wv[1], sp)


# ===================================================================
# --- sync primitives
# ===================================================================


class retry:
    """A retry decorator."""

    def __init__(
        self,
        exception=Exception,
        timeout=None,
        retries=None,
        interval=0.001,
        logfun=None,
    ):
        if timeout and retries:
            raise ValueError("timeout and retries args are mutually exclusive")
        self.exception = exception
        self.timeout = timeout
        self.retries = retries
        self.interval = interval
        self.logfun = logfun

    def __iter__(self):
        if self.timeout:
            stop_at = time.time() + self.timeout
            while time.time() < stop_at:
                yield
        elif self.retries:
            for _ in range(self.retries):
                yield
        else:
            while True:
                yield

    def sleep(self):
        if self.interval is not None:
            time.sleep(self.interval)

    def __call__(self, fun):
        @functools.wraps(fun)
        def wrapper(*args, **kwargs):
            exc = None
            for _ in self:
                try:
                    return fun(*args, **kwargs)
                except self.exception as _:  # NOQA
                    exc = _
                    if self.logfun is not None:
                        self.logfun(exc)
                    self.sleep()
                    continue
            if PY3:
                raise exc  # noqa: PLE0704
            else:
                raise  # noqa: PLE0704

        # This way the user of the decorated function can change config
        # parameters.
        wrapper.decorator = self
        return wrapper


@retry(
    exception=psutil.NoSuchProcess,
    logfun=None,
    timeout=GLOBAL_TIMEOUT,
    interval=0.001,
)
def wait_for_pid(pid):
    """Wait for pid to show up in the process list then return.
    Used in the test suite to give the subprocess time to initialize.
    """
    if pid not in psutil.pids():
        raise psutil.NoSuchProcess(pid)
    psutil.Process(pid)


@retry(
    exception=(FileNotFoundError, AssertionError),
    logfun=None,
    timeout=GLOBAL_TIMEOUT,
    interval=0.001,
)
def wait_for_file(fname, delete=True, empty=False):
    """Wait for a file to be written on disk with some content."""
    with open(fname, "rb") as f:
        data = f.read()
    if not empty:
        assert data
    if delete:
        safe_rmpath(fname)
    return data
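Combined with the `retry` decorator, `wait_for_file()` above amounts to polling until the file exists with some content. The same pattern as an explicit loop, self-contained (the name `poll_for_file` is illustrative, not part of the test suite):

```python
import time


def poll_for_file(path, timeout=5.0, interval=0.001):
    """Poll until `path` exists and is non-empty, then return its
    content; raise AssertionError on timeout (an explicit-loop sketch
    of the wait_for_file() pattern above).
    """
    stop_at = time.time() + timeout
    while time.time() < stop_at:
        try:
            with open(path, "rb") as f:
                data = f.read()
            if data:
                return data
        except FileNotFoundError:
            pass  # not written yet; keep polling
        time.sleep(interval)
    raise AssertionError("timed out waiting for %r" % path)
```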


@retry(
    exception=AssertionError,
    logfun=None,
    timeout=GLOBAL_TIMEOUT,
    interval=0.001,
)
def call_until(fun):
    """Keep calling function until it evaluates to True."""
    ret = fun()
    assert ret
    return ret


# ===================================================================
# --- fs
# ===================================================================


def safe_rmpath(path):
    """Convenience function for removing temporary test files or dirs."""

    def retry_fun(fun):
        # On Windows the file or directory may have open handles or
        # references preventing the delete operation from succeeding
        # immediately, so we retry for a while. See:
        # https://bugs.python.org/issue33240
        err = None
        stop_at = time.time() + GLOBAL_TIMEOUT
        while time.time() < stop_at:
            try:
                return fun()
            except FileNotFoundError:
                return  # already gone
            except WindowsError as _:
                err = _
                warn("ignoring %s" % (str(err)))
            time.sleep(0.01)
        raise err

    try:
        st = os.stat(path)
        if stat.S_ISDIR(st.st_mode):
            fun = functools.partial(shutil.rmtree, path)
        else:
            fun = functools.partial(os.remove, path)
        if POSIX:
            fun()
        else:
            retry_fun(fun)
    except FileNotFoundError:
        pass


def safe_mkdir(dir):
    """Convenience function for creating a directory."""
    try:
        os.mkdir(dir)
    except FileExistsError:
        pass


@contextlib.contextmanager
def chdir(dirname):
    """Context manager which temporarily changes the current directory."""
    curdir = os.getcwd()
    try:
        os.chdir(dirname)
        yield
    finally:
        os.chdir(curdir)


def create_py_exe(path):
    """Create a Python executable file in the given location."""
    assert not os.path.exists(path), path
    atexit.register(safe_rmpath, path)
    shutil.copyfile(PYTHON_EXE, path)
    if POSIX:
        st = os.stat(path)
        os.chmod(path, st.st_mode | stat.S_IEXEC)
    return path


def create_c_exe(path, c_code=None):
    """Create a compiled C executable in the given location."""
    assert not os.path.exists(path), path
    if not which("gcc"):
        raise pytest.skip("gcc is not installed")
    if c_code is None:
        c_code = textwrap.dedent("""
            #include <unistd.h>
            int main() {
                pause();
                return 1;
            }
            """)
    else:
        assert isinstance(c_code, str), c_code

    atexit.register(safe_rmpath, path)
    with open(get_testfn(suffix='.c'), "w") as f:
        f.write(c_code)
    try:
        subprocess.check_call(["gcc", f.name, "-o", path])
    finally:
        safe_rmpath(f.name)
    return path


def get_testfn(suffix="", dir=None):
    """Return an absolute pathname of a file or dir that did not
    exist at the time this call is made. Also schedule it for safe
    deletion at interpreter exit. It's technically racy but probably
    not really due to the time variant.
    """
    while True:
        name = tempfile.mktemp(prefix=TESTFN_PREFIX, suffix=suffix, dir=dir)
        if not os.path.exists(name):  # also include dirs
            path = os.path.realpath(name)  # needed for OSX
            atexit.register(safe_rmpath, path)
            return path


# ===================================================================
# --- testing
# ===================================================================


class fake_pytest:
    """A class that mimics some basic pytest APIs. This is meant for
    when unit tests are run in production, where pytest may not be
    installed. Still, the user can test psutil installation via:

        $ python3 -m psutil.tests
    """

    @staticmethod
    def main(*args, **kw):  # noqa: ARG004
        """Mimics pytest.main(). It has the same effect as running
        `python3 -m unittest -v` from the project root directory.
        """
        suite = unittest.TestLoader().discover(HERE)
        unittest.TextTestRunner(verbosity=2).run(suite)
        warnings.warn(
            "Fake pytest module was used. Test results may be inaccurate.",
            UserWarning,
            stacklevel=1,
        )
        return suite

    @staticmethod
    def raises(exc, match=None):
        """Mimics `pytest.raises`."""

        class ExceptionInfo:
            _exc = None

            @property
            def value(self):
                return self._exc

        @contextlib.contextmanager
        def context(exc, match=None):
            einfo = ExceptionInfo()
            try:
                yield einfo
            except exc as err:
                if match and not re.search(match, str(err)):
                    msg = '"{}" does not match "{}"'.format(match, str(err))
                    raise AssertionError(msg)
                einfo._exc = err
            else:
                raise AssertionError("%r not raised" % exc)

        return context(exc, match=match)

    @staticmethod
    def warns(warning, match=None):
        """Mimics `pytest.warns`."""
        if match:
            return unittest.TestCase().assertWarnsRegex(warning, match)
        return unittest.TestCase().assertWarns(warning)

    @staticmethod
    def skip(reason=""):
        """Mimics `unittest.SkipTest`."""
        raise unittest.SkipTest(reason)

    class mark:

        @staticmethod
        def skipif(condition, reason=""):
            """Mimics `@pytest.mark.skipif` decorator."""
            return unittest.skipIf(condition, reason)

        class xdist_group:
            """Mimics `@pytest.mark.xdist_group` decorator (no-op)."""

            def __init__(self, name=None):
                pass

            def __call__(self, cls_or_meth):
                return cls_or_meth


if pytest is None:
    pytest = fake_pytest


class TestCase(unittest.TestCase):
    # ...otherwise multiprocessing.Pool complains
    if not PY3:

        def runTest(self):
            pass

        @contextlib.contextmanager
        def subTest(self, *args, **kw):
            # fake it for python 2.7
            yield


# monkey patch default unittest.TestCase
unittest.TestCase = TestCase


class PsutilTestCase(TestCase):
    """Test class providing auto-cleanup wrappers on top of process
    test utilities. All test classes should derive from this one, even
    if we use pytest.
    """

    def get_testfn(self, suffix="", dir=None):
        fname = get_testfn(suffix=suffix, dir=dir)
        self.addCleanup(safe_rmpath, fname)
        return fname

    def spawn_testproc(self, *args, **kwds):
        sproc = spawn_testproc(*args, **kwds)
        self.addCleanup(terminate, sproc)
        return sproc

    def spawn_children_pair(self):
        child1, child2 = spawn_children_pair()
        self.addCleanup(terminate, child2)
        self.addCleanup(terminate, child1)  # executed first
        return (child1, child2)

    def spawn_zombie(self):
        parent, zombie = spawn_zombie()
        self.addCleanup(terminate, zombie)
        self.addCleanup(terminate, parent)  # executed first
        return (parent, zombie)

    def pyrun(self, *args, **kwds):
        sproc, srcfile = pyrun(*args, **kwds)
        self.addCleanup(safe_rmpath, srcfile)
        self.addCleanup(terminate, sproc)  # executed first
        return sproc

    def _check_proc_exc(self, proc, exc):
        assert isinstance(exc, psutil.Error)
        assert exc.pid == proc.pid
        assert exc.name == proc._name
        if exc.name:
            assert exc.name
        if isinstance(exc, psutil.ZombieProcess):
            assert exc.ppid == proc._ppid
            if exc.ppid is not None:
                assert exc.ppid >= 0
        str(exc)
        repr(exc)

    def assertPidGone(self, pid):
        with pytest.raises(psutil.NoSuchProcess) as cm:
            try:
                psutil.Process(pid)
            except psutil.ZombieProcess:
                raise AssertionError("wasn't supposed to raise ZombieProcess")
        assert cm.value.pid == pid
        assert cm.value.name is None
        assert not psutil.pid_exists(pid), pid
        assert pid not in psutil.pids()
        assert pid not in [x.pid for x in psutil.process_iter()]

    def assertProcessGone(self, proc):
        self.assertPidGone(proc.pid)
        ns = process_namespace(proc)
        for fun, name in ns.iter(ns.all, clear_cache=True):
            with self.subTest(proc=proc, name=name):
                try:
                    ret = fun()
                except psutil.ZombieProcess:
                    raise
                except psutil.NoSuchProcess as exc:
                    self._check_proc_exc(proc, exc)
                else:
                    msg = "Process.%s() didn't raise NSP and returned %r" % (
                        name,
                        ret,
                    )
                    raise AssertionError(msg)
        proc.wait(timeout=0)  # assert not raise TimeoutExpired

    def assertProcessZombie(self, proc):
        # A zombie process should always be instantiable.
        clone = psutil.Process(proc.pid)
        assert proc == clone
        # Cloned zombie on Open/NetBSD has null creation time, see:
        # https://github.com/giampaolo/psutil/issues/2287
        if not (OPENBSD or NETBSD):
            assert hash(proc) == hash(clone)
        # Its status should always be queryable.
        assert proc.status() == psutil.STATUS_ZOMBIE
        # It should be considered 'running'.
        assert proc.is_running()
        assert psutil.pid_exists(proc.pid)
        # as_dict() shouldn't crash.
        proc.as_dict()
        # It should show up in pids() and process_iter().
        assert proc.pid in psutil.pids()
        assert proc.pid in [x.pid for x in psutil.process_iter()]
        psutil._pmap = {}
        assert proc.pid in [x.pid for x in psutil.process_iter()]
        # Call all methods.
        ns = process_namespace(proc)
        for fun, name in ns.iter(ns.all, clear_cache=True):
            with self.subTest(proc=proc, name=name):
                try:
                    fun()
                except (psutil.ZombieProcess, psutil.AccessDenied) as exc:
                    self._check_proc_exc(proc, exc)
        if LINUX:
            # https://github.com/giampaolo/psutil/pull/2288
            with pytest.raises(psutil.ZombieProcess) as cm:
                proc.cmdline()
            self._check_proc_exc(proc, cm.value)
            with pytest.raises(psutil.ZombieProcess) as cm:
                proc.exe()
            self._check_proc_exc(proc, cm.value)
            with pytest.raises(psutil.ZombieProcess) as cm:
                proc.memory_maps()
            self._check_proc_exc(proc, cm.value)
        # Zombie cannot be signaled or terminated.
        proc.suspend()
        proc.resume()
        proc.terminate()
        proc.kill()
        assert proc.is_running()
        assert psutil.pid_exists(proc.pid)
        assert proc.pid in psutil.pids()
        assert proc.pid in [x.pid for x in psutil.process_iter()]
        psutil._pmap = {}
        assert proc.pid in [x.pid for x in psutil.process_iter()]

        # Its parent should 'see' it (edit: not true on BSD and MACOS).
        # descendants = [x.pid for x in psutil.Process().children(
        #                recursive=True)]
        # self.assertIn(proc.pid, descendants)

        # __eq__ can't be relied upon because creation time may not be
        # queryable.
        # self.assertEqual(proc, psutil.Process(proc.pid))

        # XXX should we also assume ppid() to be usable? Note: this
        # would be an important use case as the only way to get
        # rid of a zombie is to kill its parent.
        # self.assertEqual(proc.ppid(), os.getpid())


@pytest.mark.skipif(PYPY, reason="unreliable on PYPY")
class TestMemoryLeak(PsutilTestCase):
    """Test framework class for detecting function memory leaks,
    typically functions implemented in C which forgot to free() memory
    from the heap. It does so by checking whether the process memory
    usage increased before and after calling the function many times.

    Note that this is hard (probably impossible) to do reliably, due
    to how the OS handles memory, the GC and so on (memory can even
    decrease!). In order to avoid false positives, in case of failure
    (mem > 0) we retry the test for up to 5 times, increasing call
    repetitions each time. If the memory keeps increasing then it's a
    failure.

    If available (Linux, OSX, Windows), USS memory is used for comparison,
    since it's supposed to be more precise, see:
    https://gmpy.dev/blog/2016/real-process-memory-and-environ-in-python
    If not, RSS memory is used. mallinfo() on Linux and _heapwalk() on
    Windows may give even more precision, but at the moment are not
    implemented.

    PyPy appears to be completely unstable for this framework, probably
    because of its JIT, so tests on PYPY are skipped.

    Usage:

        class TestLeaks(psutil.tests.TestMemoryLeak):

            def test_fun(self):
                self.execute(some_function)
    """

    # Configurable class attrs.
    times = 200
    warmup_times = 10
    tolerance = 0  # memory
    retries = 10 if CI_TESTING else 5
    verbose = True
    _thisproc = psutil.Process()
    _psutil_debug_orig = bool(os.getenv('PSUTIL_DEBUG'))

    @classmethod
    def setUpClass(cls):
        psutil._set_debug(False)  # avoid spamming to stderr

    @classmethod
    def tearDownClass(cls):
        psutil._set_debug(cls._psutil_debug_orig)

    def _get_mem(self):
        # USS is the closest thing we have to "real" memory usage and it
        # should be less likely to produce false positives.
        mem = self._thisproc.memory_full_info()
        return getattr(mem, "uss", mem.rss)

    def _get_num_fds(self):
        if POSIX:
            return self._thisproc.num_fds()
        else:
            return self._thisproc.num_handles()

    def _log(self, msg):
        if self.verbose:
            print_color(msg, color="yellow", file=sys.stderr)

    def _check_fds(self, fun):
        """Makes sure num_fds() (POSIX) or num_handles() (Windows) does
        not increase after calling a function.  Used to discover forgotten
        close(2) and CloseHandle syscalls.
        """
        before = self._get_num_fds()
        self.call(fun)
        after = self._get_num_fds()
        diff = after - before
        if diff < 0:
            raise self.fail(
                "negative diff %r (gc probably collected a "
                "resource from a previous test)" % diff
            )
        if diff > 0:
            type_ = "fd" if POSIX else "handle"
            if diff > 1:
                type_ += "s"
            msg = "%s unclosed %s after calling %r" % (diff, type_, fun)
            raise self.fail(msg)

    def _call_ntimes(self, fun, times):
        """Get 2 distinct memory samples, before and after having
        called fun repeatedly, and return the memory difference.
        """
        gc.collect(generation=1)
        mem1 = self._get_mem()
        for x in range(times):
            ret = self.call(fun)
            del x, ret
        gc.collect(generation=1)
        mem2 = self._get_mem()
        assert gc.garbage == []
        diff = mem2 - mem1  # can also be negative
        return diff

    def _check_mem(self, fun, times, retries, tolerance):
        messages = []
        prev_mem = 0
        increase = times
        for idx in range(1, retries + 1):
            mem = self._call_ntimes(fun, times)
            msg = "Run #%s: extra-mem=%s, per-call=%s, calls=%s" % (
                idx,
                bytes2human(mem),
                bytes2human(mem / times),
                times,
            )
            messages.append(msg)
            success = mem <= tolerance or mem <= prev_mem
            if success:
                if idx > 1:
                    self._log(msg)
                return
            else:
                if idx == 1:
                    print()  # NOQA
                self._log(msg)
                times += increase
                prev_mem = mem
        raise self.fail(". ".join(messages))

    # ---

    def call(self, fun):
        return fun()

    def execute(
        self, fun, times=None, warmup_times=None, retries=None, tolerance=None
    ):
        """Test a callable."""
        times = times if times is not None else self.times
        warmup_times = (
            warmup_times if warmup_times is not None else self.warmup_times
        )
        retries = retries if retries is not None else self.retries
        tolerance = tolerance if tolerance is not None else self.tolerance
        try:
            assert times >= 1, "times must be >= 1"
            assert warmup_times >= 0, "warmup_times must be >= 0"
            assert retries >= 0, "retries must be >= 0"
            assert tolerance >= 0, "tolerance must be >= 0"
        except AssertionError as err:
            raise ValueError(str(err))

        self._call_ntimes(fun, warmup_times)  # warm up
        self._check_fds(fun)
        self._check_mem(fun, times=times, retries=retries, tolerance=tolerance)

    def execute_w_exc(self, exc, fun, **kwargs):
        """Convenience method to test a callable while making sure it
        raises an exception on every call.
        """

        def call():
            self.assertRaises(exc, fun)

        self.execute(call, **kwargs)


def print_sysinfo():
    import collections
    import datetime
    import getpass
    import locale
    import pprint

    try:
        import pip
    except ImportError:
        pip = None
    try:
        import wheel
    except ImportError:
        wheel = None

    info = collections.OrderedDict()

    # OS
    if psutil.LINUX and which('lsb_release'):
        info['OS'] = sh('lsb_release -d -s')
    elif psutil.OSX:
        info['OS'] = 'Darwin %s' % platform.mac_ver()[0]
    elif psutil.WINDOWS:
        info['OS'] = "Windows " + ' '.join(map(str, platform.win32_ver()))
        if hasattr(platform, 'win32_edition'):
            info['OS'] += ", " + platform.win32_edition()
    else:
        info['OS'] = "%s %s" % (platform.system(), platform.version())
    info['arch'] = ', '.join(
        list(platform.architecture()) + [platform.machine()]
    )
    if psutil.POSIX:
        info['kernel'] = platform.uname()[2]

    # python
    info['python'] = ', '.join([
        platform.python_implementation(),
        platform.python_version(),
        platform.python_compiler(),
    ])
    info['pip'] = getattr(pip, '__version__', 'not installed')
    if wheel is not None:
        info['pip'] += " (wheel=%s)" % wheel.__version__

    # UNIX
    if psutil.POSIX:
        if which('gcc'):
            out = sh(['gcc', '--version'])
            info['gcc'] = str(out).split('\n')[0]
        else:
            info['gcc'] = 'not installed'
        s = platform.libc_ver()[1]
        if s:
            info['glibc'] = s

    # system
    info['fs-encoding'] = sys.getfilesystemencoding()
    lang = locale.getlocale()
    info['lang'] = '%s, %s' % (lang[0], lang[1])
    info['boot-time'] = datetime.datetime.fromtimestamp(
        psutil.boot_time()
    ).strftime("%Y-%m-%d %H:%M:%S")
    info['time'] = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    info['user'] = getpass.getuser()
    info['home'] = os.path.expanduser("~")
    info['cwd'] = os.getcwd()
    info['pyexe'] = PYTHON_EXE
    info['hostname'] = platform.node()
    info['PID'] = os.getpid()

    # metrics
    info['cpus'] = psutil.cpu_count()
    info['loadavg'] = "%.1f%%, %.1f%%, %.1f%%" % (
        tuple([x / psutil.cpu_count() * 100 for x in psutil.getloadavg()])
    )
    mem = psutil.virtual_memory()
    info['memory'] = "%s%%, used=%s, total=%s" % (
        int(mem.percent),
        bytes2human(mem.used),
        bytes2human(mem.total),
    )
    swap = psutil.swap_memory()
    info['swap'] = "%s%%, used=%s, total=%s" % (
        int(swap.percent),
        bytes2human(swap.used),
        bytes2human(swap.total),
    )
    info['pids'] = len(psutil.pids())
    pinfo = psutil.Process().as_dict()
    pinfo.pop('memory_maps', None)
    info['proc'] = pprint.pformat(pinfo)

    print("=" * 70, file=sys.stderr)  # NOQA
    for k, v in info.items():
        print("%-17s %s" % (k + ':', v), file=sys.stderr)  # NOQA
    print("=" * 70, file=sys.stderr)  # NOQA
    sys.stdout.flush()

    # if WINDOWS:
    #     os.system("tasklist")
    # elif which("ps"):
    #     os.system("ps aux")
    # print("=" * 70, file=sys.stderr)  # NOQA

    sys.stdout.flush()


def is_win_secure_system_proc(pid):
    # see: https://github.com/giampaolo/psutil/issues/2338
    @memoize
    def get_procs():
        ret = {}
        out = sh("tasklist.exe /NH /FO csv")
        for line in out.splitlines()[1:]:
            bits = [x.replace('"', "") for x in line.split(",")]
            name, pid = bits[0], int(bits[1])
            ret[pid] = name
        return ret

    try:
        return get_procs()[pid] == "Secure System"
    except KeyError:
        return False


def _get_eligible_cpu():
    p = psutil.Process()
    if hasattr(p, "cpu_num"):
        return p.cpu_num()
    elif hasattr(p, "cpu_affinity"):
        return random.choice(p.cpu_affinity())
    return 0


class process_namespace:
    """A container that lists all Process class method names + some
    reasonable parameters to be called with. Utility methods (parent(),
    children(), ...) are excluded.

    >>> ns = process_namespace(psutil.Process())
    >>> for fun, name in ns.iter(ns.getters):
    ...    fun()
    """

    utils = [('cpu_percent', (), {}), ('memory_percent', (), {})]

    ignored = [
        ('as_dict', (), {}),
        ('children', (), {'recursive': True}),
        ('connections', (), {}),  # deprecated
        ('is_running', (), {}),
        ('memory_info_ex', (), {}),  # deprecated
        ('oneshot', (), {}),
        ('parent', (), {}),
        ('parents', (), {}),
        ('pid', (), {}),
        ('wait', (0,), {}),
    ]

    getters = [
        ('cmdline', (), {}),
        ('cpu_times', (), {}),
        ('create_time', (), {}),
        ('cwd', (), {}),
        ('exe', (), {}),
        ('memory_full_info', (), {}),
        ('memory_info', (), {}),
        ('name', (), {}),
        ('net_connections', (), {'kind': 'all'}),
        ('nice', (), {}),
        ('num_ctx_switches', (), {}),
        ('num_threads', (), {}),
        ('open_files', (), {}),
        ('ppid', (), {}),
        ('status', (), {}),
        ('threads', (), {}),
        ('username', (), {}),
    ]
    if POSIX:
        getters += [('uids', (), {})]
        getters += [('gids', (), {})]
        getters += [('terminal', (), {})]
        getters += [('num_fds', (), {})]
    if HAS_PROC_IO_COUNTERS:
        getters += [('io_counters', (), {})]
    if HAS_IONICE:
        getters += [('ionice', (), {})]
    if HAS_RLIMIT:
        getters += [('rlimit', (psutil.RLIMIT_NOFILE,), {})]
    if HAS_CPU_AFFINITY:
        getters += [('cpu_affinity', (), {})]
    if HAS_PROC_CPU_NUM:
        getters += [('cpu_num', (), {})]
    if HAS_ENVIRON:
        getters += [('environ', (), {})]
    if WINDOWS:
        getters += [('num_handles', (), {})]
    if HAS_MEMORY_MAPS:
        getters += [('memory_maps', (), {'grouped': False})]

    setters = []
    if POSIX:
        setters += [('nice', (0,), {})]
    else:
        setters += [('nice', (psutil.NORMAL_PRIORITY_CLASS,), {})]
    if HAS_RLIMIT:
        setters += [('rlimit', (psutil.RLIMIT_NOFILE, (1024, 4096)), {})]
    if HAS_IONICE:
        if LINUX:
            setters += [('ionice', (psutil.IOPRIO_CLASS_NONE, 0), {})]
        else:
            setters += [('ionice', (psutil.IOPRIO_NORMAL,), {})]
    if HAS_CPU_AFFINITY:
        setters += [('cpu_affinity', ([_get_eligible_cpu()],), {})]

    killers = [
        ('send_signal', (signal.SIGTERM,), {}),
        ('suspend', (), {}),
        ('resume', (), {}),
        ('terminate', (), {}),
        ('kill', (), {}),
    ]
    if WINDOWS:
        killers += [('send_signal', (signal.CTRL_C_EVENT,), {})]
        killers += [('send_signal', (signal.CTRL_BREAK_EVENT,), {})]

    all = utils + getters + setters + killers

    def __init__(self, proc):
        self._proc = proc

    def iter(self, ls, clear_cache=True):
        """Given a list of tuples yields a set of (fun, fun_name) tuples
        in random order.
        """
        ls = list(ls)
        random.shuffle(ls)
        for fun_name, args, kwds in ls:
            if clear_cache:
                self.clear_cache()
            fun = getattr(self._proc, fun_name)
            fun = functools.partial(fun, *args, **kwds)
            yield (fun, fun_name)

    def clear_cache(self):
        """Clear the cache of a Process instance."""
        self._proc._init(self._proc.pid, _ignore_nsp=True)

    @classmethod
    def test_class_coverage(cls, test_class, ls):
        """Given a TestCase instance and a list of tuples checks that
        the class defines the required test method names.
        """
        for fun_name, _, _ in ls:
            meth_name = 'test_' + fun_name
            if not hasattr(test_class, meth_name):
                msg = "%r class should define a '%s' method" % (
                    test_class.__class__.__name__,
                    meth_name,
                )
                raise AttributeError(msg)

    @classmethod
    def test(cls):
        this = set([x[0] for x in cls.all])
        ignored = set([x[0] for x in cls.ignored])
        klass = set([x for x in dir(psutil.Process) if x[0] != '_'])
        leftout = (this | ignored) ^ klass
        if leftout:
            raise ValueError("uncovered Process class names: %r" % leftout)
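The (method_name, args, kwargs) plus functools.partial pattern that process_namespace.iter() implements can be shown with a toy object; the names below are illustrative, not psutil API:

```python
import functools
import random

class Calc:
    def add(self, a, b):
        return a + b

    def neg(self, a):
        return -a

def iter_methods(obj, specs):
    """Yield (callable, name) pairs in random order, mirroring the
    shape of process_namespace.iter(): each spec is a
    (method_name, args, kwargs) tuple bound with functools.partial."""
    specs = list(specs)
    random.shuffle(specs)
    for name, args, kwds in specs:
        fun = functools.partial(getattr(obj, name), *args, **kwds)
        yield (fun, name)

results = {
    name: fun()
    for fun, name in iter_methods(
        Calc(), [('add', (2, 3), {}), ('neg', (5,), {})]
    )
}
```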


class system_namespace:
    """A container that lists all the module-level, system-related APIs.
    Utilities such as cpu_percent() are excluded. Usage:

    >>> ns = system_namespace
    >>> for fun, name in ns.iter(ns.getters):
    ...    fun()
    """

    getters = [
        ('boot_time', (), {}),
        ('cpu_count', (), {'logical': False}),
        ('cpu_count', (), {'logical': True}),
        ('cpu_stats', (), {}),
        ('cpu_times', (), {'percpu': False}),
        ('cpu_times', (), {'percpu': True}),
        ('disk_io_counters', (), {'perdisk': True}),
        ('disk_partitions', (), {'all': True}),
        ('disk_usage', (os.getcwd(),), {}),
        ('net_connections', (), {'kind': 'all'}),
        ('net_if_addrs', (), {}),
        ('net_if_stats', (), {}),
        ('net_io_counters', (), {'pernic': True}),
        ('pid_exists', (os.getpid(),), {}),
        ('pids', (), {}),
        ('swap_memory', (), {}),
        ('users', (), {}),
        ('virtual_memory', (), {}),
    ]
    if HAS_CPU_FREQ:
        if MACOS and platform.machine() == 'arm64':  # skipped due to #1892
            pass
        else:
            getters += [('cpu_freq', (), {'percpu': True})]
    if HAS_GETLOADAVG:
        getters += [('getloadavg', (), {})]
    if HAS_SENSORS_TEMPERATURES:
        getters += [('sensors_temperatures', (), {})]
    if HAS_SENSORS_FANS:
        getters += [('sensors_fans', (), {})]
    if HAS_SENSORS_BATTERY:
        getters += [('sensors_battery', (), {})]
    if WINDOWS:
        getters += [('win_service_iter', (), {})]
        getters += [('win_service_get', ('alg',), {})]

    ignored = [
        ('process_iter', (), {}),
        ('wait_procs', ([psutil.Process()],), {}),
        ('cpu_percent', (), {}),
        ('cpu_times_percent', (), {}),
    ]

    all = getters

    @staticmethod
    def iter(ls):
        """Given a list of tuples yields a set of (fun, fun_name) tuples
        in random order.
        """
        ls = list(ls)
        random.shuffle(ls)
        for fun_name, args, kwds in ls:
            fun = getattr(psutil, fun_name)
            fun = functools.partial(fun, *args, **kwds)
            yield (fun, fun_name)

    test_class_coverage = process_namespace.test_class_coverage


def retry_on_failure(retries=NO_RETRIES):
    """Decorator which runs a test function and retries N times before
    actually failing.
    """

    def logfun(exc):
        print("%r, retrying" % exc, file=sys.stderr)  # NOQA

    return retry(
        exception=AssertionError, timeout=None, retries=retries, logfun=logfun
    )
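retry_on_failure() delegates to a retry() helper defined elsewhere in the test suite; a minimal self-contained sketch of that retry-on-exception idea (simplified signature, no timeout support) could look like:

```python
import functools

def retry(exception=Exception, retries=3, logfun=None):
    """Re-run the decorated function up to `retries` extra times while
    `exception` keeps being raised; re-raise on the final attempt."""
    def decorator(fun):
        @functools.wraps(fun)
        def wrapper(*args, **kwargs):
            for attempt in range(retries + 1):
                try:
                    return fun(*args, **kwargs)
                except exception:
                    if attempt == retries:
                        raise
                    if logfun is not None:
                        logfun(attempt)
        return wrapper
    return decorator

calls = []

@retry(exception=AssertionError, retries=2)
def flaky():
    calls.append(1)
    assert len(calls) >= 2, "fails on the first call only"
    return "ok"

result = flaky()
```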


def skip_on_access_denied(only_if=None):
    """Decorator to Ignore AccessDenied exceptions."""

    def decorator(fun):
        @functools.wraps(fun)
        def wrapper(*args, **kwargs):
            try:
                return fun(*args, **kwargs)
            except psutil.AccessDenied:
                if only_if is not None:
                    if not only_if:
                        raise
                raise pytest.skip("raises AccessDenied")

        return wrapper

    return decorator


def skip_on_not_implemented(only_if=None):
    """Decorator to Ignore NotImplementedError exceptions."""

    def decorator(fun):
        @functools.wraps(fun)
        def wrapper(*args, **kwargs):
            try:
                return fun(*args, **kwargs)
            except NotImplementedError:
                if only_if is not None:
                    if not only_if:
                        raise
                msg = (
                    "%r was skipped because it raised NotImplementedError"
                    % fun.__name__
                )
                raise pytest.skip(msg)

        return wrapper

    return decorator


# ===================================================================
# --- network
# ===================================================================


# XXX: no longer used
def get_free_port(host='127.0.0.1'):
    """Return an unused TCP port. Subject to race conditions."""
    with contextlib.closing(socket.socket()) as sock:
        sock.bind((host, 0))
        return sock.getsockname()[1]


def bind_socket(family=AF_INET, type=SOCK_STREAM, addr=None):
    """Binds a generic socket."""
    if addr is None and family in (AF_INET, AF_INET6):
        addr = ("", 0)
    sock = socket.socket(family, type)
    try:
        if os.name not in ('nt', 'cygwin'):
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(addr)
        if type == socket.SOCK_STREAM:
            sock.listen(5)
        return sock
    except Exception:
        sock.close()
        raise


def bind_unix_socket(name, type=socket.SOCK_STREAM):
    """Bind a UNIX socket."""
    assert psutil.POSIX
    assert not os.path.exists(name), name
    sock = socket.socket(socket.AF_UNIX, type)
    try:
        sock.bind(name)
        if type == socket.SOCK_STREAM:
            sock.listen(5)
    except Exception:
        sock.close()
        raise
    return sock


def tcp_socketpair(family, addr=("", 0)):
    """Build a pair of TCP sockets connected to each other.
    Return a (server, client) tuple.
    """
    with contextlib.closing(socket.socket(family, SOCK_STREAM)) as ll:
        ll.bind(addr)
        ll.listen(5)
        addr = ll.getsockname()
        c = socket.socket(family, SOCK_STREAM)
        try:
            c.connect(addr)
            caddr = c.getsockname()
            while True:
                a, addr = ll.accept()
                # check that we've got the correct client
                if addr == caddr:
                    return (a, c)
                a.close()
        except OSError:
            c.close()
            raise
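A hedged usage sketch of the connected-pair idea: build a loopback TCP pair and do a send/recv round trip. Unlike tcp_socketpair() above, this simplified version accepts the first incoming connection without verifying the client address:

```python
import contextlib
import socket

def loopback_pair():
    # Listen on an ephemeral loopback port, connect a client to it and
    # accept once; return the (server_side, client) connected pair.
    with contextlib.closing(
        socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    ) as ll:
        ll.bind(("127.0.0.1", 0))
        ll.listen(1)
        client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        client.connect(ll.getsockname())
        server, _ = ll.accept()
    return server, client

server, client = loopback_pair()
try:
    client.sendall(b"ping")
    data = server.recv(4)
finally:
    server.close()
    client.close()
```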


def unix_socketpair(name):
    """Build a pair of UNIX sockets connected to each other through
    the same UNIX file name.
    Return a (server, client) tuple.
    """
    assert psutil.POSIX
    server = client = None
    try:
        server = bind_unix_socket(name, type=socket.SOCK_STREAM)
        server.setblocking(0)
        client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        client.setblocking(0)
        client.connect(name)
        # new = server.accept()
    except Exception:
        if server is not None:
            server.close()
        if client is not None:
            client.close()
        raise
    return (server, client)


@contextlib.contextmanager
def create_sockets():
    """Open as many socket families / types as possible."""
    socks = []
    fname1 = fname2 = None
    try:
        socks.append(bind_socket(socket.AF_INET, socket.SOCK_STREAM))
        socks.append(bind_socket(socket.AF_INET, socket.SOCK_DGRAM))
        if supports_ipv6():
            socks.append(bind_socket(socket.AF_INET6, socket.SOCK_STREAM))
            socks.append(bind_socket(socket.AF_INET6, socket.SOCK_DGRAM))
        if POSIX and HAS_NET_CONNECTIONS_UNIX:
            fname1 = get_testfn()
            fname2 = get_testfn()
            s1, s2 = unix_socketpair(fname1)
            s3 = bind_unix_socket(fname2, type=socket.SOCK_DGRAM)
            for s in (s1, s2, s3):
                socks.append(s)
        yield socks
    finally:
        for s in socks:
            s.close()
        for fname in (fname1, fname2):
            if fname is not None:
                safe_rmpath(fname)


def check_net_address(addr, family):
    """Check a net address validity. Supported families are IPv4,
    IPv6 and MAC addresses.
    """
    import ipaddress  # stdlib on Python >= 3.3; "pip install ipaddress" on Python 2

    if enum and PY3 and not PYPY:
        assert isinstance(family, enum.IntEnum), family
    if family == socket.AF_INET:
        octs = [int(x) for x in addr.split('.')]
        assert len(octs) == 4, addr
        for num in octs:
            assert 0 <= num <= 255, addr
        if not PY3:
            addr = unicode(addr)
        ipaddress.IPv4Address(addr)
    elif family == socket.AF_INET6:
        assert isinstance(addr, str), addr
        if not PY3:
            addr = unicode(addr)
        ipaddress.IPv6Address(addr)
    elif family == psutil.AF_LINK:
        assert re.match(r'([a-fA-F0-9]{2}[:|\-]?){6}', addr) is not None, addr
    else:
        raise ValueError("unknown family %r" % family)
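The three address families accepted above can be exercised with the stdlib ipaddress module and the same MAC regex; classify_addr() below is an illustrative helper, not part of the test suite:

```python
import ipaddress
import re

# Same MAC pattern used by check_net_address() above.
MAC_RE = re.compile(r'([a-fA-F0-9]{2}[:|\-]?){6}')

def classify_addr(addr):
    """Return 'ipv4', 'ipv6' or 'mac' for the given address string,
    else raise ValueError."""
    try:
        return {4: 'ipv4', 6: 'ipv6'}[ipaddress.ip_address(addr).version]
    except ValueError:
        pass
    if MAC_RE.match(addr):
        return 'mac'
    raise ValueError("unrecognized address %r" % addr)

kinds = [
    classify_addr(x)
    for x in ('192.168.0.1', '::1', '00:11:22:33:44:55')
]
```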


def check_connection_ntuple(conn):
    """Check validity of a connection namedtuple."""

    def check_ntuple(conn):
        has_pid = len(conn) == 7
        assert len(conn) in (6, 7), len(conn)
        assert conn[0] == conn.fd, conn.fd
        assert conn[1] == conn.family, conn.family
        assert conn[2] == conn.type, conn.type
        assert conn[3] == conn.laddr, conn.laddr
        assert conn[4] == conn.raddr, conn.raddr
        assert conn[5] == conn.status, conn.status
        if has_pid:
            assert conn[6] == conn.pid, conn.pid

    def check_family(conn):
        assert conn.family in (AF_INET, AF_INET6, AF_UNIX), conn.family
        if enum is not None:
            assert isinstance(conn.family, enum.IntEnum), conn
        else:
            assert isinstance(conn.family, int), conn
        if conn.family == AF_INET:
            # actually try to bind the local socket; ignore IPv6
            # sockets as their address might be represented as
            # an IPv4-mapped-address (e.g. "::127.0.0.1")
            # and that's rejected by bind()
            s = socket.socket(conn.family, conn.type)
            with contextlib.closing(s):
                try:
                    s.bind((conn.laddr[0], 0))
                except socket.error as err:
                    if err.errno != errno.EADDRNOTAVAIL:
                        raise
        elif conn.family == AF_UNIX:
            assert conn.status == psutil.CONN_NONE, conn.status

    def check_type(conn):
        # SOCK_SEQPACKET may happen in case of AF_UNIX socks
        SOCK_SEQPACKET = getattr(socket, "SOCK_SEQPACKET", object())
        assert conn.type in (
            socket.SOCK_STREAM,
            socket.SOCK_DGRAM,
            SOCK_SEQPACKET,
        ), conn.type
        if enum is not None:
            assert isinstance(conn.type, enum.IntEnum), conn
        else:
            assert isinstance(conn.type, int), conn
        if conn.type == socket.SOCK_DGRAM:
            assert conn.status == psutil.CONN_NONE, conn.status

    def check_addrs(conn):
        # check IP address and port sanity
        for addr in (conn.laddr, conn.raddr):
            if conn.family in (AF_INET, AF_INET6):
                assert isinstance(addr, tuple), type(addr)
                if not addr:
                    continue
                assert isinstance(addr.port, int), type(addr.port)
                assert 0 <= addr.port <= 65535, addr.port
                check_net_address(addr.ip, conn.family)
            elif conn.family == AF_UNIX:
                assert isinstance(addr, str), type(addr)

    def check_status(conn):
        assert isinstance(conn.status, str), conn.status
        valids = [
            getattr(psutil, x) for x in dir(psutil) if x.startswith('CONN_')
        ]
        assert conn.status in valids, conn.status
        if conn.family in (AF_INET, AF_INET6) and conn.type == SOCK_STREAM:
            assert conn.status != psutil.CONN_NONE, conn.status
        else:
            assert conn.status == psutil.CONN_NONE, conn.status

    check_ntuple(conn)
    check_family(conn)
    check_type(conn)
    check_addrs(conn)
    check_status(conn)
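The index/attribute equivalence that check_ntuple() asserts holds for any namedtuple; a miniature, hypothetical connection tuple makes the point:

```python
import collections

# Hypothetical miniature of psutil's connection namedtuple.
Conn = collections.namedtuple(
    'Conn', ['fd', 'family', 'type', 'laddr', 'raddr', 'status']
)
conn = Conn(
    fd=3, family=2, type=1,
    laddr=('127.0.0.1', 80), raddr=(), status='LISTEN',
)

# Positional and attribute access are interchangeable, which is exactly
# what check_ntuple() relies on.
matches = [conn[i] == getattr(conn, f) for i, f in enumerate(conn._fields)]
```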


def filter_proc_net_connections(cons):
    """Our process may start with some open UNIX sockets which are not
    initialized by us, invalidating unit tests.
    """
    new = []
    for conn in cons:
        if POSIX and conn.family == socket.AF_UNIX:
            if MACOS and "/syslog" in conn.raddr:
                debug("skipping %s" % str(conn))
                continue
        new.append(conn)
    return new


# ===================================================================
# --- compatibility
# ===================================================================


def reload_module(module):
    """Backport of importlib.reload of Python 3.3+."""
    try:
        import importlib

        if not hasattr(importlib, 'reload'):  # python <=3.3
            raise ImportError
    except ImportError:
        import imp

        return imp.reload(module)
    else:
        return importlib.reload(module)


def import_module_by_path(path):
    name = os.path.splitext(os.path.basename(path))[0]
    if sys.version_info[0] < 3:
        import imp

        return imp.load_source(name, path)
    else:
        import importlib.util

        spec = importlib.util.spec_from_file_location(name, path)
        mod = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(mod)
        return mod
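The Python 3 branch above can be exercised end to end by writing a throwaway module to a temp directory; the module name and contents below are made up:

```python
import importlib.util
import os
import tempfile

def load_module_from_path(path):
    # Same importlib dance as import_module_by_path() (Python 3 branch).
    name = os.path.splitext(os.path.basename(path))[0]
    spec = importlib.util.spec_from_file_location(name, path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)
    return mod

with tempfile.TemporaryDirectory() as tdir:
    path = os.path.join(tdir, "scratchmod.py")
    with open(path, "w") as f:
        f.write("ANSWER = 42\n")
    mod = load_module_from_path(path)
```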


# ===================================================================
# --- others
# ===================================================================


def warn(msg):
    """Raise a warning msg."""
    warnings.warn(msg, UserWarning, stacklevel=2)


def is_namedtuple(x):
    """Check if object is an instance of namedtuple."""
    t = type(x)
    b = t.__bases__
    if len(b) != 1 or b[0] is not tuple:
        return False
    f = getattr(t, '_fields', None)
    if not isinstance(f, tuple):
        return False
    return all(isinstance(n, str) for n in f)
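A quick illustration of what passes the duck-typing check above, reimplemented here so the snippet is self-contained:

```python
import collections

def looks_like_namedtuple(x):
    # Same heuristic as is_namedtuple(): a direct tuple subclass
    # carrying a _fields tuple of strings.
    t = type(x)
    b = t.__bases__
    if len(b) != 1 or b[0] is not tuple:
        return False
    f = getattr(t, '_fields', None)
    if not isinstance(f, tuple):
        return False
    return all(isinstance(n, str) for n in f)

Point = collections.namedtuple('Point', 'x y')
checks = (
    looks_like_namedtuple(Point(1, 2)),  # a real namedtuple
    looks_like_namedtuple((1, 2)),       # plain tuple: no _fields
    looks_like_namedtuple([1, 2]),       # not a tuple subclass at all
)
```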


if POSIX:

    @contextlib.contextmanager
    def copyload_shared_lib(suffix=""):
        """Ctx manager which picks up a random shared CO lib used
        by this process, copies it in another location and loads it
        in memory via ctypes. Return the new absolutized path.
        """
        exe = 'pypy' if PYPY else 'python'
        ext = ".so"
        dst = get_testfn(suffix=suffix + ext)
        libs = [
            x.path
            for x in psutil.Process().memory_maps()
            if os.path.splitext(x.path)[1] == ext and exe in x.path.lower()
        ]
        src = random.choice(libs)
        shutil.copyfile(src, dst)
        try:
            ctypes.CDLL(dst)
            yield dst
        finally:
            safe_rmpath(dst)

else:

    @contextlib.contextmanager
    def copyload_shared_lib(suffix=""):
        """Ctx manager which picks up a random shared DLL lib used
        by this process, copies it in another location and loads it
        in memory via ctypes.
        Return the new absolutized, normcased path.
        """
        from ctypes import WinError
        from ctypes import wintypes

        ext = ".dll"
        dst = get_testfn(suffix=suffix + ext)
        libs = [
            x.path
            for x in psutil.Process().memory_maps()
            if x.path.lower().endswith(ext)
            and 'python' in os.path.basename(x.path).lower()
            and 'wow64' not in x.path.lower()
        ]
        if PYPY and not libs:
            libs = [
                x.path
                for x in psutil.Process().memory_maps()
                if 'pypy' in os.path.basename(x.path).lower()
            ]
        src = random.choice(libs)
        shutil.copyfile(src, dst)
        cfile = None
        try:
            cfile = ctypes.WinDLL(dst)
            yield dst
        finally:
            # Work around OverflowError:
            # - https://ci.appveyor.com/project/giampaolo/psutil/build/1207/
            #       job/o53330pbnri9bcw7
            # - http://bugs.python.org/issue30286
            # - http://stackoverflow.com/questions/23522055
            if cfile is not None:
                FreeLibrary = ctypes.windll.kernel32.FreeLibrary
                FreeLibrary.argtypes = [wintypes.HMODULE]
                ret = FreeLibrary(cfile._handle)
                if ret == 0:
                    WinError()
            safe_rmpath(dst)


# ===================================================================
# --- Exit funs (first is executed last)
# ===================================================================


# this is executed first
@atexit.register
def cleanup_test_procs():
    reap_children(recursive=True)


# The atexit module does not execute exit functions when a process is
# killed by SIGTERM, which is what gets sent to test subprocesses; that
# is a problem if they import this module. With this handler installed
# the exit functions do run. See:
# https://gmpy.dev/blog/2016/how-to-always-execute-exit-functions-in-python
if POSIX:
    signal.signal(signal.SIGTERM, lambda sig, _: sys.exit(sig))
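The SIGTERM-to-sys.exit() trick can be demonstrated in isolation: a child process that installs the handler still runs its atexit hooks when terminated. A hedged sketch using only the stdlib (POSIX signal semantics; on Windows send_signal(SIGTERM) just kills the process):

```python
import signal
import subprocess
import sys
import textwrap

# Child registers an atexit hook and maps SIGTERM to sys.exit(), so the
# hook fires even though the process is killed rather than returning
# from main.
child_src = textwrap.dedent("""
    import atexit, signal, sys, time
    atexit.register(lambda: print("cleaned up", flush=True))
    signal.signal(signal.SIGTERM, lambda sig, _: sys.exit(sig))
    print("ready", flush=True)
    time.sleep(30)
""")
proc = subprocess.Popen(
    [sys.executable, "-c", child_src],
    stdout=subprocess.PIPE, text=True,
)
assert proc.stdout.readline().strip() == "ready"
proc.send_signal(signal.SIGTERM)
out = proc.stdout.read()  # reads until the child exits and closes stdout
proc.wait()
```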
# --- psutil/tests/test_connections.py ---
#!/usr/bin/env python3

# Copyright (c) 2009, Giampaolo Rodola'. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

"""Tests for psutil.net_connections() and Process.net_connections() APIs."""

import os
import socket
import textwrap
from contextlib import closing
from socket import AF_INET
from socket import AF_INET6
from socket import SOCK_DGRAM
from socket import SOCK_STREAM

import psutil
from psutil import FREEBSD
from psutil import LINUX
from psutil import MACOS
from psutil import NETBSD
from psutil import OPENBSD
from psutil import POSIX
from psutil import SUNOS
from psutil import WINDOWS
from psutil._common import supports_ipv6
from psutil._compat import PY3
from psutil.tests import AF_UNIX
from psutil.tests import HAS_NET_CONNECTIONS_UNIX
from psutil.tests import SKIP_SYSCONS
from psutil.tests import PsutilTestCase
from psutil.tests import bind_socket
from psutil.tests import bind_unix_socket
from psutil.tests import check_connection_ntuple
from psutil.tests import create_sockets
from psutil.tests import filter_proc_net_connections
from psutil.tests import pytest
from psutil.tests import reap_children
from psutil.tests import retry_on_failure
from psutil.tests import skip_on_access_denied
from psutil.tests import tcp_socketpair
from psutil.tests import unix_socketpair
from psutil.tests import wait_for_file


SOCK_SEQPACKET = getattr(socket, "SOCK_SEQPACKET", object())


def this_proc_net_connections(kind):
    cons = psutil.Process().net_connections(kind=kind)
    if kind in ("all", "unix"):
        return filter_proc_net_connections(cons)
    return cons


@pytest.mark.xdist_group(name="serial")
class ConnectionTestCase(PsutilTestCase):
    def setUp(self):
        assert this_proc_net_connections(kind='all') == []

    def tearDown(self):
        # Make sure we closed all resources.
        assert this_proc_net_connections(kind='all') == []

    def compare_procsys_connections(self, pid, proc_cons, kind='all'):
        """Given a process PID and its list of connections compare
        those against system-wide connections retrieved via
        psutil.net_connections.
        """
        try:
            sys_cons = psutil.net_connections(kind=kind)
        except psutil.AccessDenied:
            # On MACOS, system-wide connections are retrieved by iterating
            # over all processes
            if MACOS:
                return
            else:
                raise
        # Filter for this proc PID and exclude the PID field from the tuples.
        sys_cons = [c[:-1] for c in sys_cons if c.pid == pid]
        sys_cons.sort()
        proc_cons.sort()
        assert proc_cons == sys_cons


class TestBasicOperations(ConnectionTestCase):
    @pytest.mark.skipif(SKIP_SYSCONS, reason="requires root")
    def test_system(self):
        with create_sockets():
            for conn in psutil.net_connections(kind='all'):
                check_connection_ntuple(conn)

    def test_process(self):
        with create_sockets():
            for conn in this_proc_net_connections(kind='all'):
                check_connection_ntuple(conn)

    def test_invalid_kind(self):
        with pytest.raises(ValueError):
            this_proc_net_connections(kind='???')
        with pytest.raises(ValueError):
            psutil.net_connections(kind='???')


@pytest.mark.xdist_group(name="serial")
class TestUnconnectedSockets(ConnectionTestCase):
    """Tests sockets which are open but not connected to anything."""

    def get_conn_from_sock(self, sock):
        cons = this_proc_net_connections(kind='all')
        smap = dict([(c.fd, c) for c in cons])
        if NETBSD or FREEBSD:
            # NetBSD and FreeBSD may open a UNIX socket to /var/run/log,
            # so there may be more connections.
            return smap[sock.fileno()]
        else:
            assert len(cons) == 1
            if cons[0].fd != -1:
                assert smap[sock.fileno()].fd == sock.fileno()
            return cons[0]

    def check_socket(self, sock):
        """Given a socket, makes sure it matches the one obtained
        via psutil. It assumes this process created one connection
        only (the one supposed to be checked).
        """
        conn = self.get_conn_from_sock(sock)
        check_connection_ntuple(conn)

        # fd, family, type
        if conn.fd != -1:
            assert conn.fd == sock.fileno()
        assert conn.family == sock.family
        # see: http://bugs.python.org/issue30204
        assert conn.type == sock.getsockopt(socket.SOL_SOCKET, socket.SO_TYPE)

        # local address
        laddr = sock.getsockname()
        if not laddr and PY3 and isinstance(laddr, bytes):
            # See: http://bugs.python.org/issue30205
            laddr = laddr.decode()
        if sock.family == AF_INET6:
            laddr = laddr[:2]
        assert conn.laddr == laddr

        # XXX Solaris can't retrieve system-wide UNIX sockets
        if sock.family == AF_UNIX and HAS_NET_CONNECTIONS_UNIX:
            cons = this_proc_net_connections(kind='all')
            self.compare_procsys_connections(os.getpid(), cons, kind='all')
        return conn

    def test_tcp_v4(self):
        addr = ("127.0.0.1", 0)
        with closing(bind_socket(AF_INET, SOCK_STREAM, addr=addr)) as sock:
            conn = self.check_socket(sock)
            assert conn.raddr == ()
            assert conn.status == psutil.CONN_LISTEN

    @pytest.mark.skipif(not supports_ipv6(), reason="IPv6 not supported")
    def test_tcp_v6(self):
        addr = ("::1", 0)
        with closing(bind_socket(AF_INET6, SOCK_STREAM, addr=addr)) as sock:
            conn = self.check_socket(sock)
            assert conn.raddr == ()
            assert conn.status == psutil.CONN_LISTEN

    def test_udp_v4(self):
        addr = ("127.0.0.1", 0)
        with closing(bind_socket(AF_INET, SOCK_DGRAM, addr=addr)) as sock:
            conn = self.check_socket(sock)
            assert conn.raddr == ()
            assert conn.status == psutil.CONN_NONE

    @pytest.mark.skipif(not supports_ipv6(), reason="IPv6 not supported")
    def test_udp_v6(self):
        addr = ("::1", 0)
        with closing(bind_socket(AF_INET6, SOCK_DGRAM, addr=addr)) as sock:
            conn = self.check_socket(sock)
            assert conn.raddr == ()
            assert conn.status == psutil.CONN_NONE

    @pytest.mark.skipif(not POSIX, reason="POSIX only")
    def test_unix_tcp(self):
        testfn = self.get_testfn()
        with closing(bind_unix_socket(testfn, type=SOCK_STREAM)) as sock:
            conn = self.check_socket(sock)
            assert conn.raddr == ""  # noqa
            assert conn.status == psutil.CONN_NONE

    @pytest.mark.skipif(not POSIX, reason="POSIX only")
    def test_unix_udp(self):
        testfn = self.get_testfn()
        with closing(bind_unix_socket(testfn, type=SOCK_DGRAM)) as sock:
            conn = self.check_socket(sock)
            assert conn.raddr == ""  # noqa
            assert conn.status == psutil.CONN_NONE


@pytest.mark.xdist_group(name="serial")
class TestConnectedSocket(ConnectionTestCase):
    """Test socket pairs which are actually connected to
    each other.
    """

    # On SunOS, even after we close() it, the server socket stays around
    # in TIME_WAIT state.
    @pytest.mark.skipif(SUNOS, reason="unreliable on SUNOS")
    def test_tcp(self):
        addr = ("127.0.0.1", 0)
        assert this_proc_net_connections(kind='tcp4') == []
        server, client = tcp_socketpair(AF_INET, addr=addr)
        try:
            cons = this_proc_net_connections(kind='tcp4')
            assert len(cons) == 2
            assert cons[0].status == psutil.CONN_ESTABLISHED
            assert cons[1].status == psutil.CONN_ESTABLISHED
            # Closing the client may not change the server state fast
            # enough, so this check stays commented out.
            # client.close()
            # cons = this_proc_net_connections(kind='all')
            # self.assertEqual(len(cons), 1)
            # self.assertEqual(cons[0].status, psutil.CONN_CLOSE_WAIT)
        finally:
            server.close()
            client.close()

    @pytest.mark.skipif(not POSIX, reason="POSIX only")
    def test_unix(self):
        testfn = self.get_testfn()
        server, client = unix_socketpair(testfn)
        try:
            cons = this_proc_net_connections(kind='unix')
            assert not (cons[0].laddr and cons[0].raddr), cons
            assert not (cons[1].laddr and cons[1].raddr), cons
            if NETBSD or FREEBSD:
                # On NetBSD and FreeBSD creating a UNIX socket may cause
                # a UNIX connection to /var/run/log.
                cons = [c for c in cons if c.raddr != '/var/run/log']
            assert len(cons) == 2
            if LINUX or FREEBSD or SUNOS or OPENBSD:
                # remote path is never set
                assert cons[0].raddr == ""  # noqa
                assert cons[1].raddr == ""  # noqa
                # ...but one of the two local addresses should be set
                assert testfn == (cons[0].laddr or cons[1].laddr)
            else:
                # On other systems either the laddr or raddr
                # of both peers are set.
                assert (cons[0].laddr or cons[1].laddr) == testfn
        finally:
            server.close()
            client.close()


class TestFilters(ConnectionTestCase):
    def test_filters(self):
        def check(kind, families, types):
            for conn in this_proc_net_connections(kind=kind):
                assert conn.family in families
                assert conn.type in types
            if not SKIP_SYSCONS:
                for conn in psutil.net_connections(kind=kind):
                    assert conn.family in families
                    assert conn.type in types

        with create_sockets():
            check(
                'all',
                [AF_INET, AF_INET6, AF_UNIX],
                [SOCK_STREAM, SOCK_DGRAM, SOCK_SEQPACKET],
            )
            check('inet', [AF_INET, AF_INET6], [SOCK_STREAM, SOCK_DGRAM])
            check('inet4', [AF_INET], [SOCK_STREAM, SOCK_DGRAM])
            check('tcp', [AF_INET, AF_INET6], [SOCK_STREAM])
            check('tcp4', [AF_INET], [SOCK_STREAM])
            check('tcp6', [AF_INET6], [SOCK_STREAM])
            check('udp', [AF_INET, AF_INET6], [SOCK_DGRAM])
            check('udp4', [AF_INET], [SOCK_DGRAM])
            check('udp6', [AF_INET6], [SOCK_DGRAM])
            if HAS_NET_CONNECTIONS_UNIX:
                check(
                    'unix',
                    [AF_UNIX],
                    [SOCK_STREAM, SOCK_DGRAM, SOCK_SEQPACKET],
                )

    @skip_on_access_denied(only_if=MACOS)
    def test_combos(self):
        reap_children()

        def check_conn(proc, conn, family, type, laddr, raddr, status, kinds):
            all_kinds = (
                "all",
                "inet",
                "inet4",
                "inet6",
                "tcp",
                "tcp4",
                "tcp6",
                "udp",
                "udp4",
                "udp6",
            )
            check_connection_ntuple(conn)
            assert conn.family == family
            assert conn.type == type
            assert conn.laddr == laddr
            assert conn.raddr == raddr
            assert conn.status == status
            for kind in all_kinds:
                cons = proc.net_connections(kind=kind)
                if kind in kinds:
                    assert cons != []
                else:
                    assert cons == []
            # compare against system-wide connections
            # XXX Solaris can't retrieve system-wide UNIX
            # sockets.
            if HAS_NET_CONNECTIONS_UNIX:
                self.compare_procsys_connections(proc.pid, [conn])

        tcp_template = textwrap.dedent("""
            import socket, time
            s = socket.socket({family}, socket.SOCK_STREAM)
            s.bind(('{addr}', 0))
            s.listen(5)
            with open('{testfn}', 'w') as f:
                f.write(str(s.getsockname()[:2]))
            [time.sleep(0.1) for x in range(100)]
            """)

        udp_template = textwrap.dedent("""
            import socket, time
            s = socket.socket({family}, socket.SOCK_DGRAM)
            s.bind(('{addr}', 0))
            with open('{testfn}', 'w') as f:
                f.write(str(s.getsockname()[:2]))
            [time.sleep(0.1) for x in range(100)]
            """)

        # must be relative on Windows
        testfile = os.path.basename(self.get_testfn(dir=os.getcwd()))
        tcp4_template = tcp_template.format(
            family=int(AF_INET), addr="127.0.0.1", testfn=testfile
        )
        udp4_template = udp_template.format(
            family=int(AF_INET), addr="127.0.0.1", testfn=testfile
        )
        tcp6_template = tcp_template.format(
            family=int(AF_INET6), addr="::1", testfn=testfile
        )
        udp6_template = udp_template.format(
            family=int(AF_INET6), addr="::1", testfn=testfile
        )

        # Launch various subprocesses, each instantiating a socket of a
        # different family and type, to enrich psutil results.
        tcp4_proc = self.pyrun(tcp4_template)
        tcp4_addr = eval(wait_for_file(testfile, delete=True))  # noqa
        udp4_proc = self.pyrun(udp4_template)
        udp4_addr = eval(wait_for_file(testfile, delete=True))  # noqa
        if supports_ipv6():
            tcp6_proc = self.pyrun(tcp6_template)
            tcp6_addr = eval(wait_for_file(testfile, delete=True))  # noqa
            udp6_proc = self.pyrun(udp6_template)
            udp6_addr = eval(wait_for_file(testfile, delete=True))  # noqa
        else:
            tcp6_proc = None
            udp6_proc = None
            tcp6_addr = None
            udp6_addr = None

        for p in psutil.Process().children():
            cons = p.net_connections()
            assert len(cons) == 1
            for conn in cons:
                # TCP v4
                if p.pid == tcp4_proc.pid:
                    check_conn(
                        p,
                        conn,
                        AF_INET,
                        SOCK_STREAM,
                        tcp4_addr,
                        (),
                        psutil.CONN_LISTEN,
                        ("all", "inet", "inet4", "tcp", "tcp4"),
                    )
                # UDP v4
                elif p.pid == udp4_proc.pid:
                    check_conn(
                        p,
                        conn,
                        AF_INET,
                        SOCK_DGRAM,
                        udp4_addr,
                        (),
                        psutil.CONN_NONE,
                        ("all", "inet", "inet4", "udp", "udp4"),
                    )
                # TCP v6
                elif p.pid == getattr(tcp6_proc, "pid", None):
                    check_conn(
                        p,
                        conn,
                        AF_INET6,
                        SOCK_STREAM,
                        tcp6_addr,
                        (),
                        psutil.CONN_LISTEN,
                        ("all", "inet", "inet6", "tcp", "tcp6"),
                    )
                # UDP v6
                elif p.pid == getattr(udp6_proc, "pid", None):
                    check_conn(
                        p,
                        conn,
                        AF_INET6,
                        SOCK_DGRAM,
                        udp6_addr,
                        (),
                        psutil.CONN_NONE,
                        ("all", "inet", "inet6", "udp", "udp6"),
                    )

    def test_count(self):
        with create_sockets():
            # tcp
            cons = this_proc_net_connections(kind='tcp')
            assert len(cons) == (2 if supports_ipv6() else 1)
            for conn in cons:
                assert conn.family in (AF_INET, AF_INET6)
                assert conn.type == SOCK_STREAM
            # tcp4
            cons = this_proc_net_connections(kind='tcp4')
            assert len(cons) == 1
            assert cons[0].family == AF_INET
            assert cons[0].type == SOCK_STREAM
            # tcp6
            if supports_ipv6():
                cons = this_proc_net_connections(kind='tcp6')
                assert len(cons) == 1
                assert cons[0].family == AF_INET6
                assert cons[0].type == SOCK_STREAM
            # udp
            cons = this_proc_net_connections(kind='udp')
            assert len(cons) == (2 if supports_ipv6() else 1)
            for conn in cons:
                assert conn.family in (AF_INET, AF_INET6)
                assert conn.type == SOCK_DGRAM
            # udp4
            cons = this_proc_net_connections(kind='udp4')
            assert len(cons) == 1
            assert cons[0].family == AF_INET
            assert cons[0].type == SOCK_DGRAM
            # udp6
            if supports_ipv6():
                cons = this_proc_net_connections(kind='udp6')
                assert len(cons) == 1
                assert cons[0].family == AF_INET6
                assert cons[0].type == SOCK_DGRAM
            # inet
            cons = this_proc_net_connections(kind='inet')
            assert len(cons) == (4 if supports_ipv6() else 2)
            for conn in cons:
                assert conn.family in (AF_INET, AF_INET6)
                assert conn.type in (SOCK_STREAM, SOCK_DGRAM)
            # inet6
            if supports_ipv6():
                cons = this_proc_net_connections(kind='inet6')
                assert len(cons) == 2
                for conn in cons:
                    assert conn.family == AF_INET6
                    assert conn.type in (SOCK_STREAM, SOCK_DGRAM)
            # Skipped on BSD because by default the Python process
            # creates a UNIX socket to '/var/run/log'.
            if HAS_NET_CONNECTIONS_UNIX and not (FREEBSD or NETBSD):
                cons = this_proc_net_connections(kind='unix')
                assert len(cons) == 3
                for conn in cons:
                    assert conn.family == AF_UNIX
                    assert conn.type in (SOCK_STREAM, SOCK_DGRAM)
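A minimal sketch (not part of the test suite) of how a `kind` filter string maps to a `(families, types)` pair, mirroring the shape of the private `psutil._common.conn_tmap` that the filter tests above exercise. The reduced `CONN_TMAP` dict and the `matches()` helper are hypothetical illustrations, not psutil API:

```python
import socket

# Hypothetical reduced mapping, mirroring the layout of the private
# psutil._common.conn_tmap used by the tests above.
CONN_TMAP = {
    "tcp4": ([socket.AF_INET], [socket.SOCK_STREAM]),
    "udp4": ([socket.AF_INET], [socket.SOCK_DGRAM]),
    "inet4": ([socket.AF_INET], [socket.SOCK_STREAM, socket.SOCK_DGRAM]),
}


def matches(kind, family, type_):
    """Return True if a (family, type) pair passes the given kind filter."""
    families, types = CONN_TMAP[kind]
    return family in families and type_ in types
```

This is the same membership check the `check()` helpers above perform against every returned connection.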


@pytest.mark.skipif(SKIP_SYSCONS, reason="requires root")
class TestSystemWideConnections(ConnectionTestCase):
    """Tests for net_connections()."""

    def test_it(self):
        def check(cons, families, types_):
            for conn in cons:
                assert conn.family in families
                if conn.family != AF_UNIX:
                    assert conn.type in types_
                check_connection_ntuple(conn)

        with create_sockets():
            from psutil._common import conn_tmap

            for kind, groups in conn_tmap.items():
                # XXX: SunOS does not retrieve UNIX sockets.
                if kind == 'unix' and not HAS_NET_CONNECTIONS_UNIX:
                    continue
                families, types_ = groups
                cons = psutil.net_connections(kind)
                assert len(cons) == len(set(cons))
                check(cons, families, types_)

    @retry_on_failure()
    def test_multi_sockets_procs(self):
        # Create multiple subprocesses, each creating different
        # sockets. For each process check that proc.net_connections()
        # and psutil.net_connections() return the same results.
        # This is done mainly to check whether net_connections()'s
        # pid is properly set, see:
        # https://github.com/giampaolo/psutil/issues/1013
        with create_sockets() as socks:
            expected = len(socks)
        pids = []
        times = 10
        fnames = []
        for _ in range(times):
            fname = self.get_testfn()
            fnames.append(fname)
            src = textwrap.dedent("""\
                import time, os
                from psutil.tests import create_sockets
                with create_sockets():
                    with open(r'%s', 'w') as f:
                        f.write("hello")
                    [time.sleep(0.1) for x in range(100)]
                """ % fname)
            sproc = self.pyrun(src)
            pids.append(sproc.pid)

        # sync
        for fname in fnames:
            wait_for_file(fname)

        syscons = [
            x for x in psutil.net_connections(kind='all') if x.pid in pids
        ]
        for pid in pids:
            assert len([x for x in syscons if x.pid == pid]) == expected
            p = psutil.Process(pid)
            assert len(p.net_connections('all')) == expected


class TestMisc(PsutilTestCase):
    def test_net_connection_constants(self):
        ints = []
        strs = []
        for name in dir(psutil):
            if name.startswith('CONN_'):
                num = getattr(psutil, name)
                str_ = str(num)
                assert str_.isupper(), str_
                assert str_ not in strs
                assert num not in ints
                ints.append(num)
                strs.append(str_)
        if SUNOS:
            psutil.CONN_IDLE  # noqa
            psutil.CONN_BOUND  # noqa
        if WINDOWS:
            psutil.CONN_DELETE_TCB  # noqa
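A short sketch of the bookkeeping `test_multi_sockets_procs` performs: grouping system-wide connections by owning pid so that per-process counts can be compared. The `conns` records here are hypothetical stand-ins for the namedtuples `psutil.net_connections(kind='all')` returns:

```python
import collections

# Hypothetical (pid, laddr) records standing in for the namedtuples
# returned by psutil.net_connections(kind='all').
conns = [(101, "127.0.0.1:80"), (101, "::1:80"), (202, "127.0.0.1:53")]

# Group system-wide connections by owning pid; each pid's bucket can
# then be compared against Process(pid).net_connections().
by_pid = collections.defaultdict(list)
for pid, laddr in conns:
    by_pid[pid].append(laddr)
```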
# psutil/tests/test_process.py
#!/usr/bin/env python3

# Copyright (c) 2009, Giampaolo Rodola'. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

"""Tests for psutil.Process class."""

import collections
import errno
import getpass
import itertools
import os
import signal
import socket
import stat
import string
import subprocess
import sys
import textwrap
import time
import types

import psutil
from psutil import AIX
from psutil import BSD
from psutil import LINUX
from psutil import MACOS
from psutil import NETBSD
from psutil import OPENBSD
from psutil import OSX
from psutil import POSIX
from psutil import SUNOS
from psutil import WINDOWS
from psutil._common import open_text
from psutil._compat import PY3
from psutil._compat import FileNotFoundError
from psutil._compat import long
from psutil._compat import redirect_stderr
from psutil._compat import super
from psutil.tests import APPVEYOR
from psutil.tests import CI_TESTING
from psutil.tests import GITHUB_ACTIONS
from psutil.tests import GLOBAL_TIMEOUT
from psutil.tests import HAS_CPU_AFFINITY
from psutil.tests import HAS_ENVIRON
from psutil.tests import HAS_IONICE
from psutil.tests import HAS_MEMORY_MAPS
from psutil.tests import HAS_PROC_CPU_NUM
from psutil.tests import HAS_PROC_IO_COUNTERS
from psutil.tests import HAS_RLIMIT
from psutil.tests import HAS_THREADS
from psutil.tests import MACOS_11PLUS
from psutil.tests import PYPY
from psutil.tests import PYTHON_EXE
from psutil.tests import PYTHON_EXE_ENV
from psutil.tests import QEMU_USER
from psutil.tests import PsutilTestCase
from psutil.tests import ThreadTask
from psutil.tests import call_until
from psutil.tests import copyload_shared_lib
from psutil.tests import create_c_exe
from psutil.tests import create_py_exe
from psutil.tests import mock
from psutil.tests import process_namespace
from psutil.tests import pytest
from psutil.tests import reap_children
from psutil.tests import retry_on_failure
from psutil.tests import sh
from psutil.tests import skip_on_access_denied
from psutil.tests import skip_on_not_implemented
from psutil.tests import wait_for_pid


# ===================================================================
# --- psutil.Process class tests
# ===================================================================


class TestProcess(PsutilTestCase):
    """Tests for psutil.Process class."""

    def spawn_psproc(self, *args, **kwargs):
        sproc = self.spawn_testproc(*args, **kwargs)
        try:
            return psutil.Process(sproc.pid)
        except psutil.NoSuchProcess:
            self.assertPidGone(sproc.pid)
            raise

    # ---

    def test_pid(self):
        p = psutil.Process()
        assert p.pid == os.getpid()
        with pytest.raises(AttributeError):
            p.pid = 33

    def test_kill(self):
        p = self.spawn_psproc()
        p.kill()
        code = p.wait()
        if WINDOWS:
            assert code == signal.SIGTERM
        else:
            assert code == -signal.SIGKILL
        self.assertProcessGone(p)

    def test_terminate(self):
        p = self.spawn_psproc()
        p.terminate()
        code = p.wait()
        if WINDOWS:
            assert code == signal.SIGTERM
        else:
            assert code == -signal.SIGTERM
        self.assertProcessGone(p)

    def test_send_signal(self):
        sig = signal.SIGKILL if POSIX else signal.SIGTERM
        p = self.spawn_psproc()
        p.send_signal(sig)
        code = p.wait()
        if WINDOWS:
            assert code == sig
        else:
            assert code == -sig
        self.assertProcessGone(p)

    @pytest.mark.skipif(not POSIX, reason="not POSIX")
    def test_send_signal_mocked(self):
        sig = signal.SIGTERM
        p = self.spawn_psproc()
        with mock.patch(
            'psutil.os.kill', side_effect=OSError(errno.ESRCH, "")
        ):
            with pytest.raises(psutil.NoSuchProcess):
                p.send_signal(sig)

        p = self.spawn_psproc()
        with mock.patch(
            'psutil.os.kill', side_effect=OSError(errno.EPERM, "")
        ):
            with pytest.raises(psutil.AccessDenied):
                p.send_signal(sig)

    def test_wait_exited(self):
        # Test waitpid() + WIFEXITED -> WEXITSTATUS.
        # normal return, same as exit(0)
        cmd = [PYTHON_EXE, "-c", "pass"]
        p = self.spawn_psproc(cmd)
        code = p.wait()
        assert code == 0
        self.assertProcessGone(p)
        # exit(1), implicit in case of error
        cmd = [PYTHON_EXE, "-c", "1 / 0"]
        p = self.spawn_psproc(cmd, stderr=subprocess.PIPE)
        code = p.wait()
        assert code == 1
        self.assertProcessGone(p)
        # via sys.exit()
        cmd = [PYTHON_EXE, "-c", "import sys; sys.exit(5);"]
        p = self.spawn_psproc(cmd)
        code = p.wait()
        assert code == 5
        self.assertProcessGone(p)
        # via os._exit()
        cmd = [PYTHON_EXE, "-c", "import os; os._exit(5);"]
        p = self.spawn_psproc(cmd)
        code = p.wait()
        assert code == 5
        self.assertProcessGone(p)

    @pytest.mark.skipif(NETBSD, reason="fails on NETBSD")
    def test_wait_stopped(self):
        p = self.spawn_psproc()
        if POSIX:
            # Test waitpid() + WIFSTOPPED and WIFCONTINUED.
            # Note: if a process is stopped it ignores SIGTERM.
            p.send_signal(signal.SIGSTOP)
            with pytest.raises(psutil.TimeoutExpired):
                p.wait(timeout=0.001)
            p.send_signal(signal.SIGCONT)
            with pytest.raises(psutil.TimeoutExpired):
                p.wait(timeout=0.001)
            p.send_signal(signal.SIGTERM)
            assert p.wait() == -signal.SIGTERM
            assert p.wait() == -signal.SIGTERM
        else:
            p.suspend()
            with pytest.raises(psutil.TimeoutExpired):
                p.wait(timeout=0.001)
            p.resume()
            with pytest.raises(psutil.TimeoutExpired):
                p.wait(timeout=0.001)
            p.terminate()
            assert p.wait() == signal.SIGTERM
            assert p.wait() == signal.SIGTERM

    def test_wait_non_children(self):
        # Test wait() against a process which is not our direct
        # child.
        child, grandchild = self.spawn_children_pair()
        with pytest.raises(psutil.TimeoutExpired):
            child.wait(0.01)
        with pytest.raises(psutil.TimeoutExpired):
            grandchild.wait(0.01)
        # We also terminate the direct child otherwise the
        # grandchild will hang until the parent is gone.
        child.terminate()
        grandchild.terminate()
        child_ret = child.wait()
        grandchild_ret = grandchild.wait()
        if POSIX:
            assert child_ret == -signal.SIGTERM
            # For processes which are not our children we're supposed
            # to get None.
            assert grandchild_ret is None
        else:
            assert child_ret == signal.SIGTERM
            assert grandchild_ret == signal.SIGTERM

    def test_wait_timeout(self):
        p = self.spawn_psproc()
        p.name()
        with pytest.raises(psutil.TimeoutExpired):
            p.wait(0.01)
        with pytest.raises(psutil.TimeoutExpired):
            p.wait(0)
        with pytest.raises(ValueError):
            p.wait(-1)

    def test_wait_timeout_nonblocking(self):
        p = self.spawn_psproc()
        with pytest.raises(psutil.TimeoutExpired):
            p.wait(0)
        p.kill()
        stop_at = time.time() + GLOBAL_TIMEOUT
        while time.time() < stop_at:
            try:
                code = p.wait(0)
                break
            except psutil.TimeoutExpired:
                pass
        else:
            raise self.fail('timeout')
        if POSIX:
            assert code == -signal.SIGKILL
        else:
            assert code == signal.SIGTERM
        self.assertProcessGone(p)
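The non-blocking loop above (repeated `wait(0)` calls until a deadline) is a general polling pattern. A generic, self-contained version of it, with hypothetical names, might look like:

```python
import time


def poll_until(fn, timeout=1.0, interval=0.01):
    """Call fn() repeatedly until it returns a non-None value or the
    deadline passes; a generic version of the wait(0) loop above."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = fn()
        if result is not None:
            return result
        time.sleep(interval)
    raise TimeoutError("timed out")
```

Using `time.monotonic()` instead of `time.time()` makes the deadline immune to wall-clock adjustments.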

    def test_cpu_percent(self):
        p = psutil.Process()
        p.cpu_percent(interval=0.001)
        p.cpu_percent(interval=0.001)
        for _ in range(100):
            percent = p.cpu_percent(interval=None)
            assert isinstance(percent, float)
            assert percent >= 0.0
        with pytest.raises(ValueError):
            p.cpu_percent(interval=-1)

    def test_cpu_percent_numcpus_none(self):
        # See: https://github.com/giampaolo/psutil/issues/1087
        with mock.patch('psutil.cpu_count', return_value=None) as m:
            psutil.Process().cpu_percent()
            assert m.called

    @pytest.mark.skipif(QEMU_USER, reason="QEMU user not supported")
    def test_cpu_times(self):
        times = psutil.Process().cpu_times()
        assert times.user >= 0.0, times
        assert times.system >= 0.0, times
        assert times.children_user >= 0.0, times
        assert times.children_system >= 0.0, times
        if LINUX:
            assert times.iowait >= 0.0, times
        # make sure returned values can be pretty printed with strftime
        for name in times._fields:
            time.strftime("%H:%M:%S", time.localtime(getattr(times, name)))

    @pytest.mark.skipif(QEMU_USER, reason="QEMU user not supported")
    def test_cpu_times_2(self):
        user_time, kernel_time = psutil.Process().cpu_times()[:2]
        utime, ktime = os.times()[:2]

        # Use os.times()[:2] as base values to compare our results
        # using a tolerance of +/- 0.1 seconds.
        # It will fail if the difference between the values is > 0.1s.
        if (max([user_time, utime]) - min([user_time, utime])) > 0.1:
            raise self.fail("expected: %s, found: %s" % (utime, user_time))

        if (max([kernel_time, ktime]) - min([kernel_time, ktime])) > 0.1:
            raise self.fail("expected: %s, found: %s" % (ktime, kernel_time))

    @pytest.mark.skipif(not HAS_PROC_CPU_NUM, reason="not supported")
    def test_cpu_num(self):
        p = psutil.Process()
        num = p.cpu_num()
        assert num >= 0
        if psutil.cpu_count() == 1:
            assert num == 0
        assert p.cpu_num() in range(psutil.cpu_count())

    def test_create_time(self):
        p = self.spawn_psproc()
        now = time.time()
        create_time = p.create_time()

        # Use time.time() as base value to compare our result using a
        # tolerance of +/- 2 seconds.
        # It will fail if the difference between the values is > 2s.
        difference = abs(create_time - now)
        if difference > 2:
            raise self.fail(
                "expected: %s, found: %s, difference: %s"
                % (now, create_time, difference)
            )

        # make sure returned value can be pretty printed with strftime
        time.strftime("%Y %m %d %H:%M:%S", time.localtime(p.create_time()))

    @pytest.mark.skipif(not POSIX, reason="POSIX only")
    def test_terminal(self):
        terminal = psutil.Process().terminal()
        if terminal is not None:
            try:
                tty = os.path.realpath(sh('tty'))
            except RuntimeError:
                # Note: happens if pytest is run without the `-s` opt.
                raise pytest.skip("can't rely on `tty` CLI")
            else:
                assert terminal == tty

    @pytest.mark.skipif(not HAS_PROC_IO_COUNTERS, reason="not supported")
    @skip_on_not_implemented(only_if=LINUX)
    def test_io_counters(self):
        p = psutil.Process()
        # test reads
        io1 = p.io_counters()
        with open(PYTHON_EXE, 'rb') as f:
            f.read()
        io2 = p.io_counters()
        if not BSD and not AIX:
            assert io2.read_count > io1.read_count
            assert io2.write_count == io1.write_count
            if LINUX:
                assert io2.read_chars > io1.read_chars
                assert io2.write_chars == io1.write_chars
        else:
            assert io2.read_bytes >= io1.read_bytes
            assert io2.write_bytes >= io1.write_bytes

        # test writes
        io1 = p.io_counters()
        with open(self.get_testfn(), 'wb') as f:
            if PY3:
                f.write(bytes("x" * 1000000, 'ascii'))
            else:
                f.write("x" * 1000000)
        io2 = p.io_counters()
        assert io2.write_count >= io1.write_count
        assert io2.write_bytes >= io1.write_bytes
        assert io2.read_count >= io1.read_count
        assert io2.read_bytes >= io1.read_bytes
        if LINUX:
            assert io2.write_chars > io1.write_chars
            assert io2.read_chars >= io1.read_chars

        # sanity check
        for i in range(len(io2)):
            if BSD and i >= 2:
                # On BSD read_bytes and write_bytes are always set to -1.
                continue
            assert io2[i] >= 0
            assert io1[i] >= 0

    @pytest.mark.skipif(not HAS_IONICE, reason="not supported")
    @pytest.mark.skipif(not LINUX, reason="linux only")
    def test_ionice_linux(self):
        def cleanup(init):
            ioclass, value = init
            if ioclass == psutil.IOPRIO_CLASS_NONE:
                value = 0
            p.ionice(ioclass, value)

        p = psutil.Process()
        if not CI_TESTING:
            assert p.ionice()[0] == psutil.IOPRIO_CLASS_NONE
        assert psutil.IOPRIO_CLASS_NONE == 0
        assert psutil.IOPRIO_CLASS_RT == 1  # high
        assert psutil.IOPRIO_CLASS_BE == 2  # normal
        assert psutil.IOPRIO_CLASS_IDLE == 3  # low
        init = p.ionice()
        self.addCleanup(cleanup, init)

        # low
        p.ionice(psutil.IOPRIO_CLASS_IDLE)
        assert tuple(p.ionice()) == (psutil.IOPRIO_CLASS_IDLE, 0)
        with pytest.raises(ValueError):  # accepts no value
            p.ionice(psutil.IOPRIO_CLASS_IDLE, value=7)
        # normal
        p.ionice(psutil.IOPRIO_CLASS_BE)
        assert tuple(p.ionice()) == (psutil.IOPRIO_CLASS_BE, 0)
        p.ionice(psutil.IOPRIO_CLASS_BE, value=7)
        assert tuple(p.ionice()) == (psutil.IOPRIO_CLASS_BE, 7)
        with pytest.raises(ValueError):
            p.ionice(psutil.IOPRIO_CLASS_BE, value=8)
        try:
            p.ionice(psutil.IOPRIO_CLASS_RT, value=7)
        except psutil.AccessDenied:
            pass
        # errs
        with pytest.raises(ValueError, match="ioclass accepts no value"):
            p.ionice(psutil.IOPRIO_CLASS_NONE, 1)
        with pytest.raises(ValueError, match="ioclass accepts no value"):
            p.ionice(psutil.IOPRIO_CLASS_IDLE, 1)
        with pytest.raises(
            ValueError, match="'ioclass' argument must be specified"
        ):
            p.ionice(value=1)

    @pytest.mark.skipif(not HAS_IONICE, reason="not supported")
    @pytest.mark.skipif(
        not WINDOWS, reason="not supported on this win version"
    )
    def test_ionice_win(self):
        p = psutil.Process()
        if not CI_TESTING:
            assert p.ionice() == psutil.IOPRIO_NORMAL
        init = p.ionice()
        self.addCleanup(p.ionice, init)

        # base
        p.ionice(psutil.IOPRIO_VERYLOW)
        assert p.ionice() == psutil.IOPRIO_VERYLOW
        p.ionice(psutil.IOPRIO_LOW)
        assert p.ionice() == psutil.IOPRIO_LOW
        try:
            p.ionice(psutil.IOPRIO_HIGH)
        except psutil.AccessDenied:
            pass
        else:
            assert p.ionice() == psutil.IOPRIO_HIGH
        # errs
        with pytest.raises(
            TypeError, match="value argument not accepted on Windows"
        ):
            p.ionice(psutil.IOPRIO_NORMAL, value=1)
        with pytest.raises(ValueError, match="is not a valid priority"):
            p.ionice(psutil.IOPRIO_HIGH + 1)

    @pytest.mark.skipif(not HAS_RLIMIT, reason="not supported")
    def test_rlimit_get(self):
        import resource

        p = psutil.Process(os.getpid())
        names = [x for x in dir(psutil) if x.startswith('RLIMIT')]
        assert names, names
        for name in names:
            value = getattr(psutil, name)
            assert value >= 0
            if name in dir(resource):
                assert value == getattr(resource, name)
                # XXX - On PyPy RLIMIT_INFINITY returned by
                # resource.getrlimit() is reported as a very big long
                # number instead of -1. It looks like a bug with PyPy.
                if PYPY:
                    continue
                assert p.rlimit(value) == resource.getrlimit(value)
            else:
                ret = p.rlimit(value)
                assert len(ret) == 2
                assert ret[0] >= -1
                assert ret[1] >= -1

    @pytest.mark.skipif(not HAS_RLIMIT, reason="not supported")
    def test_rlimit_set(self):
        p = self.spawn_psproc()
        p.rlimit(psutil.RLIMIT_NOFILE, (5, 5))
        assert p.rlimit(psutil.RLIMIT_NOFILE) == (5, 5)
        # If pid is 0 prlimit() applies to the calling process and
        # we don't want that.
        if LINUX:
            with pytest.raises(ValueError, match="can't use prlimit"):
                psutil._psplatform.Process(0).rlimit(0)
        with pytest.raises(ValueError):
            p.rlimit(psutil.RLIMIT_NOFILE, (5, 5, 5))

    @pytest.mark.skipif(not HAS_RLIMIT, reason="not supported")
    def test_rlimit(self):
        p = psutil.Process()
        testfn = self.get_testfn()
        soft, hard = p.rlimit(psutil.RLIMIT_FSIZE)
        try:
            p.rlimit(psutil.RLIMIT_FSIZE, (1024, hard))
            with open(testfn, "wb") as f:
                f.write(b"X" * 1024)
            # write() or flush() doesn't always cause the exception
            # but close() will.
            with pytest.raises(IOError) as exc:
                with open(testfn, "wb") as f:
                    f.write(b"X" * 1025)
            assert (exc.value.errno if PY3 else exc.value[0]) == errno.EFBIG
        finally:
            p.rlimit(psutil.RLIMIT_FSIZE, (soft, hard))
            assert p.rlimit(psutil.RLIMIT_FSIZE) == (soft, hard)

    @pytest.mark.skipif(not HAS_RLIMIT, reason="not supported")
    def test_rlimit_infinity(self):
        # First set a limit, then re-set it by specifying INFINITY
        # and assume we have overridden the previous limit.
        p = psutil.Process()
        soft, hard = p.rlimit(psutil.RLIMIT_FSIZE)
        try:
            p.rlimit(psutil.RLIMIT_FSIZE, (1024, hard))
            p.rlimit(psutil.RLIMIT_FSIZE, (psutil.RLIM_INFINITY, hard))
            with open(self.get_testfn(), "wb") as f:
                f.write(b"X" * 2048)
        finally:
            p.rlimit(psutil.RLIMIT_FSIZE, (soft, hard))
            assert p.rlimit(psutil.RLIMIT_FSIZE) == (soft, hard)

    @pytest.mark.skipif(not HAS_RLIMIT, reason="not supported")
    def test_rlimit_infinity_value(self):
        # RLIMIT_FSIZE should be RLIM_INFINITY, which will be a really
        # big number on a platform with large file support.  On these
        # platforms we need to test that the get/setrlimit functions
        # properly convert the number to a C long long and that the
        # conversion doesn't raise an error.
        p = psutil.Process()
        soft, hard = p.rlimit(psutil.RLIMIT_FSIZE)
        assert hard == psutil.RLIM_INFINITY
        p.rlimit(psutil.RLIMIT_FSIZE, (soft, hard))
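A POSIX-only sketch of the soft/hard limit round-trip the rlimit tests rely on, using the stdlib `resource` module directly rather than psutil. Re-setting the pair unchanged is a no-op but exercises the same get/set path:

```python
import resource

# Read the current soft/hard pair for open file descriptors.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# Re-setting the same pair is a no-op but goes through setrlimit(),
# the same syscall psutil's Process.rlimit() wraps on Linux.
resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))
```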

    def test_num_threads(self):
        # On certain platforms such as Linux we might test for the exact
        # thread number, since we always start with 1 thread per process,
        # but this does not apply across all platforms (macOS, Windows).
        p = psutil.Process()
        if OPENBSD:
            try:
                step1 = p.num_threads()
            except psutil.AccessDenied:
                raise pytest.skip("on OpenBSD this requires root access")
        else:
            step1 = p.num_threads()

        with ThreadTask():
            step2 = p.num_threads()
            assert step2 == step1 + 1

    @pytest.mark.skipif(not WINDOWS, reason="WINDOWS only")
    def test_num_handles(self):
        # a better test is done later into test/_windows.py
        p = psutil.Process()
        assert p.num_handles() > 0

    @pytest.mark.skipif(not HAS_THREADS, reason="not supported")
    def test_threads(self):
        p = psutil.Process()
        if OPENBSD:
            try:
                step1 = p.threads()
            except psutil.AccessDenied:
                raise pytest.skip("on OpenBSD this requires root access")
        else:
            step1 = p.threads()

        with ThreadTask():
            step2 = p.threads()
            assert len(step2) == len(step1) + 1
            athread = step2[0]
            # test named tuple
            assert athread.id == athread[0]
            assert athread.user_time == athread[1]
            assert athread.system_time == athread[2]

    @retry_on_failure()
    @skip_on_access_denied(only_if=MACOS)
    @pytest.mark.skipif(not HAS_THREADS, reason="not supported")
    def test_threads_2(self):
        p = self.spawn_psproc()
        if OPENBSD:
            try:
                p.threads()
            except psutil.AccessDenied:
                raise pytest.skip("on OpenBSD this requires root access")
        assert (
            abs(p.cpu_times().user - sum([x.user_time for x in p.threads()]))
            < 0.1
        )
        assert (
            abs(
                p.cpu_times().system
                - sum([x.system_time for x in p.threads()])
            )
            < 0.1
        )

    @retry_on_failure()
    def test_memory_info(self):
        p = psutil.Process()

        # step 1 - get a base value to compare our results
        rss1, vms1 = p.memory_info()[:2]
        percent1 = p.memory_percent()
        assert rss1 > 0
        assert vms1 > 0

        # step 2 - allocate some memory
        memarr = [None] * 1500000

        rss2, vms2 = p.memory_info()[:2]
        percent2 = p.memory_percent()

        # step 3 - make sure that the memory usage bumped up
        assert rss2 > rss1
        assert vms2 >= vms1  # vms might be equal
        assert percent2 > percent1
        del memarr

        if WINDOWS:
            mem = p.memory_info()
            assert mem.rss == mem.wset
            assert mem.vms == mem.pagefile

        mem = p.memory_info()
        for name in mem._fields:
            assert getattr(mem, name) >= 0

    def test_memory_full_info(self):
        p = psutil.Process()
        total = psutil.virtual_memory().total
        mem = p.memory_full_info()
        for name in mem._fields:
            value = getattr(mem, name)
            assert value >= 0
            # vms may legitimately exceed total physical memory
            if name == 'vms' and (MACOS or LINUX):
                continue
            assert value <= total
        if LINUX or WINDOWS or MACOS:
            assert mem.uss >= 0
        if LINUX:
            assert mem.pss >= 0
            assert mem.swap >= 0

    @pytest.mark.skipif(not HAS_MEMORY_MAPS, reason="not supported")
    def test_memory_maps(self):
        p = psutil.Process()
        maps = p.memory_maps()
        assert len(maps) == len(set(maps))
        ext_maps = p.memory_maps(grouped=False)

        for nt in maps:
            if not nt.path.startswith('['):
                if QEMU_USER and "/bin/qemu-" in nt.path:
                    continue
                assert os.path.isabs(nt.path), nt.path
                if POSIX:
                    try:
                        assert os.path.exists(nt.path) or os.path.islink(
                            nt.path
                        ), nt.path
                    except AssertionError:
                        if not LINUX:
                            raise
                        else:
                            # https://github.com/giampaolo/psutil/issues/759
                            with open_text('/proc/self/smaps') as f:
                                data = f.read()
                            if "%s (deleted)" % nt.path not in data:
                                raise
                else:
                    # XXX - On Windows we have this strange behavior with
                    # 64 bit dlls: they are visible via explorer but cannot
                    # be accessed via os.stat() (wtf?).
                    if '64' not in os.path.basename(nt.path):
                        try:
                            st = os.stat(nt.path)
                        except FileNotFoundError:
                            pass
                        else:
                            assert stat.S_ISREG(st.st_mode), nt.path
        for nt in ext_maps:
            for fname in nt._fields:
                value = getattr(nt, fname)
                if fname == 'path':
                    continue
                if fname in ('addr', 'perms'):
                    assert value, value
                else:
                    assert isinstance(value, int)
                    assert value >= 0, value

    @pytest.mark.skipif(not HAS_MEMORY_MAPS, reason="not supported")
    def test_memory_maps_lists_lib(self):
        # Make sure a newly loaded shared lib is listed.
        p = psutil.Process()
        with copyload_shared_lib() as path:

            def normpath(p):
                return os.path.realpath(os.path.normcase(p))

            libpaths = [normpath(x.path) for x in p.memory_maps()]
            assert normpath(path) in libpaths

    def test_memory_percent(self):
        p = psutil.Process()
        p.memory_percent()
        with pytest.raises(ValueError):
            p.memory_percent(memtype="?!?")
        if LINUX or MACOS or WINDOWS:
            p.memory_percent(memtype='uss')

    def test_is_running(self):
        p = self.spawn_psproc()
        assert p.is_running()
        assert p.is_running()
        p.kill()
        p.wait()
        assert not p.is_running()
        assert not p.is_running()

    @pytest.mark.skipif(QEMU_USER, reason="QEMU user not supported")
    def test_exe(self):
        p = self.spawn_psproc()
        exe = p.exe()
        try:
            assert exe == PYTHON_EXE
        except AssertionError:
            if WINDOWS and len(exe) == len(PYTHON_EXE):
                # on Windows we don't care about case sensitivity
                normcase = os.path.normcase
                assert normcase(exe) == normcase(PYTHON_EXE)
            else:
                # certain platforms such as BSD are more accurate returning:
                # "/usr/local/bin/python2.7"
                # ...instead of:
                # "/usr/local/bin/python"
                # We do not want to consider this difference in accuracy
                # an error.
                ver = "%s.%s" % (sys.version_info[0], sys.version_info[1])
                try:
                    assert exe.replace(ver, '') == PYTHON_EXE.replace(ver, '')
                except AssertionError:
                    # Typically MACOS. Really not sure what to do here.
                    pass

        out = sh([exe, "-c", "import os; print('hey')"])
        assert out == 'hey'

    def test_cmdline(self):
        cmdline = [
            PYTHON_EXE,
            "-c",
            "import time; [time.sleep(0.1) for x in range(100)]",
        ]
        p = self.spawn_psproc(cmdline)

        if NETBSD and p.cmdline() == []:
            # https://github.com/giampaolo/psutil/issues/2250
            raise pytest.skip("NETBSD: returned EBUSY")

        # XXX - most of the times the underlying sysctl() call on Net
        # and Open BSD returns a truncated string.
        # Also /proc/pid/cmdline behaves the same so it looks
        # like this is a kernel bug.
        # XXX - AIX truncates long arguments in /proc/pid/cmdline
        if NETBSD or OPENBSD or AIX:
            assert p.cmdline()[0] == PYTHON_EXE
        else:
            if MACOS and CI_TESTING:
                pyexe = p.cmdline()[0]
                if pyexe != PYTHON_EXE:
                    assert ' '.join(p.cmdline()[1:]) == ' '.join(cmdline[1:])
                    return
            if QEMU_USER:
                assert ' '.join(p.cmdline()[2:]) == ' '.join(cmdline)
                return
            assert ' '.join(p.cmdline()) == ' '.join(cmdline)

    @pytest.mark.skipif(PYPY, reason="broken on PYPY")
    def test_long_cmdline(self):
        cmdline = [PYTHON_EXE]
        cmdline.extend(["-v"] * 50)
        cmdline.extend(
            ["-c", "import time; [time.sleep(0.1) for x in range(100)]"]
        )
        p = self.spawn_psproc(cmdline)
        if OPENBSD:
            # XXX: for some reason the test process may turn into a
            # zombie (don't know why).
            try:
                assert p.cmdline() == cmdline
            except psutil.ZombieProcess:
                raise pytest.skip("OPENBSD: process turned into zombie")
        elif QEMU_USER:
            assert p.cmdline()[2:] == cmdline
        else:
            ret = p.cmdline()
            if NETBSD and ret == []:
                # https://github.com/giampaolo/psutil/issues/2250
                raise pytest.skip("NETBSD: returned EBUSY")
            assert ret == cmdline

    def test_name(self):
        p = self.spawn_psproc()
        name = p.name().lower()
        pyexe = os.path.basename(os.path.realpath(sys.executable)).lower()
        assert pyexe.startswith(name), (pyexe, name)

    @pytest.mark.skipif(PYPY, reason="unreliable on PYPY")
    @pytest.mark.skipif(QEMU_USER, reason="unreliable on QEMU user")
    def test_long_name(self):
        pyexe = create_py_exe(self.get_testfn(suffix=string.digits * 2))
        cmdline = [
            pyexe,
            "-c",
            "import time; [time.sleep(0.1) for x in range(100)]",
        ]
        p = self.spawn_psproc(cmdline)
        if OPENBSD:
            # XXX: for some reason the test process may turn into a
            # zombie (don't know why). Because the name() is long, all
            # UNIX kernels truncate it to 15 chars, so internally psutil
            # tries to guess the full name() from the cmdline(). But the
            # cmdline() of a zombie on OpenBSD fails (internally), so we
            # just compare the first 15 chars. Full explanation:
            # https://github.com/giampaolo/psutil/issues/2239
            try:
                assert p.name() == os.path.basename(pyexe)
            except AssertionError:
                if p.status() == psutil.STATUS_ZOMBIE:
                    assert os.path.basename(pyexe).startswith(p.name())
                else:
                    raise
        else:
            assert p.name() == os.path.basename(pyexe)

    # XXX
    @pytest.mark.skipif(SUNOS, reason="broken on SUNOS")
    @pytest.mark.skipif(AIX, reason="broken on AIX")
    @pytest.mark.skipif(PYPY, reason="broken on PYPY")
    @pytest.mark.skipif(QEMU_USER, reason="broken on QEMU user")
    def test_prog_w_funky_name(self):
        # Test that name(), exe() and cmdline() correctly handle programs
        # with funky chars such as spaces and ")", see:
        # https://github.com/giampaolo/psutil/issues/628
        pyexe = create_py_exe(self.get_testfn(suffix='foo bar )'))
        cmdline = [
            pyexe,
            "-c",
            "import time; [time.sleep(0.1) for x in range(100)]",
        ]
        p = self.spawn_psproc(cmdline)
        assert p.cmdline() == cmdline
        assert p.name() == os.path.basename(pyexe)
        assert os.path.normcase(p.exe()) == os.path.normcase(pyexe)

    @pytest.mark.skipif(not POSIX, reason="POSIX only")
    def test_uids(self):
        p = psutil.Process()
        real, effective, _saved = p.uids()
        # os.getuid() refers to "real" uid
        assert real == os.getuid()
        # os.geteuid() refers to "effective" uid
        assert effective == os.geteuid()
        # No such thing as os.getsuid() ("saved" uid), but starting
        # from python 2.7 we have os.getresuid() which returns all
        # of them.
        if hasattr(os, "getresuid"):
            assert os.getresuid() == p.uids()

    @pytest.mark.skipif(not POSIX, reason="POSIX only")
    def test_gids(self):
        p = psutil.Process()
        real, effective, _saved = p.gids()
        # os.getgid() refers to "real" gid
        assert real == os.getgid()
        # os.getegid() refers to "effective" gid
        assert effective == os.getegid()
        # No such thing as os.getsgid() ("saved" gid), but starting
        # from python 2.7 we have os.getresgid() which returns all
        # of them.
        if hasattr(os, "getresgid"):
            assert os.getresgid() == p.gids()
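
    # Hedged helper sketch (an addition, not part of the original suite):
    # where available, os.getresgid() returns the same (real, effective,
    # saved) ordering that psutil's gids() named tuple uses. Assumes
    # Python >= 3.10, where a staticmethod object is directly callable.
    @staticmethod
    def _resgid_triple():
        # Fall back to duplicating the effective gid for the "saved"
        # slot on platforms lacking os.getresgid().
        if hasattr(os, "getresgid"):
            return os.getresgid()
        return (os.getgid(), os.getegid(), os.getegid())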

    def test_nice(self):
        def cleanup(init):
            try:
                p.nice(init)
            except psutil.AccessDenied:
                pass

        p = psutil.Process()
        with pytest.raises(TypeError):
            p.nice("str")
        init = p.nice()
        self.addCleanup(cleanup, init)

        if WINDOWS:
            highest_prio = None
            for prio in [
                psutil.IDLE_PRIORITY_CLASS,
                psutil.BELOW_NORMAL_PRIORITY_CLASS,
                psutil.NORMAL_PRIORITY_CLASS,
                psutil.ABOVE_NORMAL_PRIORITY_CLASS,
                psutil.HIGH_PRIORITY_CLASS,
                psutil.REALTIME_PRIORITY_CLASS,
            ]:
                with self.subTest(prio=prio):
                    try:
                        p.nice(prio)
                    except psutil.AccessDenied:
                        pass
                    else:
                        new_prio = p.nice()
                        # The OS may limit our maximum priority,
                        # even if the function succeeds. For higher
                        # priorities, we match either the expected
                        # value or the highest so far.
                        if prio in (
                            psutil.ABOVE_NORMAL_PRIORITY_CLASS,
                            psutil.HIGH_PRIORITY_CLASS,
                            psutil.REALTIME_PRIORITY_CLASS,
                        ):
                            if new_prio == prio or highest_prio is None:
                                highest_prio = prio
                                assert new_prio == highest_prio
                        else:
                            assert new_prio == prio
        else:
            try:
                if hasattr(os, "getpriority"):
                    assert (
                        os.getpriority(os.PRIO_PROCESS, os.getpid())
                        == p.nice()
                    )
                p.nice(1)
                assert p.nice() == 1
                if hasattr(os, "getpriority"):
                    assert (
                        os.getpriority(os.PRIO_PROCESS, os.getpid())
                        == p.nice()
                    )
                # XXX - going back to previous nice value raises
                # AccessDenied on MACOS
                if not MACOS:
                    p.nice(0)
                    assert p.nice() == 0
            except psutil.AccessDenied:
                pass

    @pytest.mark.skipif(QEMU_USER, reason="QEMU user not supported")
    def test_status(self):
        p = psutil.Process()
        assert p.status() == psutil.STATUS_RUNNING

    def test_username(self):
        p = self.spawn_psproc()
        username = p.username()
        if WINDOWS:
            domain, username = username.split('\\')
            getpass_user = getpass.getuser()
            if getpass_user.endswith('$'):
                # When running as a service account (most likely to be
                # NetworkService), these user name calculations don't produce
                # the same result, causing the test to fail.
                raise pytest.skip('running as service account')
            assert username == getpass_user
            if 'USERDOMAIN' in os.environ:
                assert domain == os.environ['USERDOMAIN']
        else:
            assert username == getpass.getuser()

    def test_cwd(self):
        p = self.spawn_psproc()
        assert p.cwd() == os.getcwd()

    def test_cwd_2(self):
        cmd = [
            PYTHON_EXE,
            "-c",
            (
                "import os, time; os.chdir('..'); [time.sleep(0.1) for x in"
                " range(100)]"
            ),
        ]
        p = self.spawn_psproc(cmd)
        call_until(lambda: p.cwd() == os.path.dirname(os.getcwd()))

    @pytest.mark.skipif(not HAS_CPU_AFFINITY, reason="not supported")
    def test_cpu_affinity(self):
        p = psutil.Process()
        initial = p.cpu_affinity()
        assert initial, initial
        self.addCleanup(p.cpu_affinity, initial)

        if hasattr(os, "sched_getaffinity"):
            assert initial == list(os.sched_getaffinity(p.pid))
        assert len(initial) == len(set(initial))

        all_cpus = list(range(len(psutil.cpu_percent(percpu=True))))
        for n in all_cpus:
            p.cpu_affinity([n])
            assert p.cpu_affinity() == [n]
            if hasattr(os, "sched_getaffinity"):
                assert p.cpu_affinity() == list(os.sched_getaffinity(p.pid))
            # also test num_cpu()
            if hasattr(p, "num_cpu"):
                assert p.cpu_affinity()[0] == p.num_cpu()

        # [] is an alias for "all eligible CPUs"; on Linux this may
        # not be equal to all available CPUs, see:
        # https://github.com/giampaolo/psutil/issues/956
        p.cpu_affinity([])
        if LINUX:
            assert p.cpu_affinity() == p._proc._get_eligible_cpus()
        else:
            assert p.cpu_affinity() == all_cpus
        if hasattr(os, "sched_getaffinity"):
            assert p.cpu_affinity() == list(os.sched_getaffinity(p.pid))

        with pytest.raises(TypeError):
            p.cpu_affinity(1)
        p.cpu_affinity(initial)
        # it should work with all iterables, not only lists
        p.cpu_affinity(set(all_cpus))
        p.cpu_affinity(tuple(all_cpus))

    @pytest.mark.skipif(not HAS_CPU_AFFINITY, reason="not supported")
    def test_cpu_affinity_errs(self):
        p = self.spawn_psproc()
        invalid_cpu = [len(psutil.cpu_times(percpu=True)) + 10]
        with pytest.raises(ValueError):
            p.cpu_affinity(invalid_cpu)
        with pytest.raises(ValueError):
            p.cpu_affinity(range(10000, 11000))
        with pytest.raises(TypeError):
            p.cpu_affinity([0, "1"])
        with pytest.raises(ValueError):
            p.cpu_affinity([0, -1])

    @pytest.mark.skipif(not HAS_CPU_AFFINITY, reason="not supported")
    def test_cpu_affinity_all_combinations(self):
        p = psutil.Process()
        initial = p.cpu_affinity()
        assert initial, initial
        self.addCleanup(p.cpu_affinity, initial)

        # All possible CPU set combinations.
        if len(initial) > 12:
            initial = initial[:12]  # ...otherwise it will take forever
        combos = []
        for i in range(len(initial) + 1):
            for subset in itertools.combinations(initial, i):
                if subset:
                    combos.append(list(subset))

        for combo in combos:
            p.cpu_affinity(combo)
            assert sorted(p.cpu_affinity()) == sorted(combo)
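
    # Hedged helper sketch (an addition, not part of the original
    # suite): the non-empty subset enumeration used above, factored out
    # for clarity. Assumes Python >= 3.10, where a staticmethod object
    # is directly callable.
    @staticmethod
    def _all_nonempty_subsets(cpus):
        # Enumerate every non-empty subset of `cpus`, smallest first.
        combos = []
        for i in range(1, len(cpus) + 1):
            for subset in itertools.combinations(cpus, i):
                combos.append(list(subset))
        return combos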

    # TODO: #595
    @pytest.mark.skipif(BSD, reason="broken on BSD")
    # can't find any process file on Appveyor
    @pytest.mark.skipif(APPVEYOR, reason="unreliable on APPVEYOR")
    def test_open_files(self):
        p = psutil.Process()
        testfn = self.get_testfn()
        files = p.open_files()
        assert testfn not in files
        with open(testfn, 'wb') as f:
            f.write(b'x' * 1024)
            f.flush()
            # give the kernel some time to see the new file
            call_until(lambda: len(p.open_files()) != len(files))
            files = p.open_files()
            filenames = [os.path.normcase(x.path) for x in files]
            assert os.path.normcase(testfn) in filenames
            if LINUX:
                for file in files:
                    if file.path == testfn:
                        assert file.position == 1024
        for file in files:
            assert os.path.isfile(file.path), file

        # another process
        cmdline = (
            "import time; f = open(r'%s', 'r'); [time.sleep(0.1) for x in"
            " range(100)];" % testfn
        )
        p = self.spawn_psproc([PYTHON_EXE, "-c", cmdline])

        for x in range(100):
            filenames = [os.path.normcase(x.path) for x in p.open_files()]
            if os.path.normcase(testfn) in filenames:
                break
            time.sleep(0.01)
        else:
            assert os.path.normcase(testfn) in filenames
        for file in filenames:
            assert os.path.isfile(file), file

    # TODO: #595
    @pytest.mark.skipif(BSD, reason="broken on BSD")
    # can't find any process file on Appveyor
    @pytest.mark.skipif(APPVEYOR, reason="unreliable on APPVEYOR")
    def test_open_files_2(self):
        # test fd and path fields
        p = psutil.Process()
        normcase = os.path.normcase
        testfn = self.get_testfn()
        with open(testfn, 'w') as fileobj:
            for file in p.open_files():
                if (
                    normcase(file.path) == normcase(fileobj.name)
                    or file.fd == fileobj.fileno()
                ):
                    break
            else:
                raise self.fail(
                    "no file found; files=%s" % (repr(p.open_files()))
                )
            assert normcase(file.path) == normcase(fileobj.name)
            if WINDOWS:
                assert file.fd == -1
            else:
                assert file.fd == fileobj.fileno()
            # test positions
            ntuple = p.open_files()[0]
            assert ntuple[0] == ntuple.path
            assert ntuple[1] == ntuple.fd
        # once closed, the file must no longer be listed; compare
        # against the path field of each entry
        assert fileobj.name not in [x.path for x in p.open_files()]

    @pytest.mark.skipif(not POSIX, reason="POSIX only")
    def test_num_fds(self):
        p = psutil.Process()
        testfn = self.get_testfn()
        start = p.num_fds()
        file = open(testfn, 'w')
        self.addCleanup(file.close)
        assert p.num_fds() == start + 1
        sock = socket.socket()
        self.addCleanup(sock.close)
        assert p.num_fds() == start + 2
        file.close()
        sock.close()
        assert p.num_fds() == start
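
    # Hedged helper sketch (an addition, not part of the original
    # suite): a stdlib-only cross-check for num_fds() on Linux, based on
    # /proc/self/fd. Assumes /proc is mounted and Python >= 3.10
    # (directly callable staticmethod).
    @staticmethod
    def _count_fds_via_proc():
        # Each entry in /proc/self/fd is one open descriptor; the
        # listdir() call itself briefly opens one extra fd for the
        # directory, so subtract it.
        return len(os.listdir('/proc/self/fd')) - 1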

    @skip_on_not_implemented(only_if=LINUX)
    @pytest.mark.skipif(
        OPENBSD or NETBSD, reason="not reliable on OPENBSD & NETBSD"
    )
    def test_num_ctx_switches(self):
        p = psutil.Process()
        before = sum(p.num_ctx_switches())
        for _ in range(2):
            time.sleep(0.05)  # this shall ensure a context switch happens
            after = sum(p.num_ctx_switches())
            if after > before:
                return
        raise self.fail("num ctx switches still the same after 2 iterations")

    def test_ppid(self):
        p = psutil.Process()
        if hasattr(os, 'getppid'):
            assert p.ppid() == os.getppid()
        p = self.spawn_psproc()
        assert p.ppid() == os.getpid()

    def test_parent(self):
        p = self.spawn_psproc()
        assert p.parent().pid == os.getpid()

        lowest_pid = psutil.pids()[0]
        assert psutil.Process(lowest_pid).parent() is None

    def test_parent_multi(self):
        parent = psutil.Process()
        child, grandchild = self.spawn_children_pair()
        assert grandchild.parent() == child
        assert child.parent() == parent

    @pytest.mark.skipif(QEMU_USER, reason="QEMU user not supported")
    @retry_on_failure()
    def test_parents(self):
        parent = psutil.Process()
        assert parent.parents()
        child, grandchild = self.spawn_children_pair()
        assert child.parents()[0] == parent
        assert grandchild.parents()[0] == child
        assert grandchild.parents()[1] == parent

    def test_children(self):
        parent = psutil.Process()
        assert parent.children() == []
        assert parent.children(recursive=True) == []
        # On Windows we set the flag to 0 in order to cancel out the
        # CREATE_NO_WINDOW flag (enabled by default) which creates
        # an extra "conhost.exe" child.
        child = self.spawn_psproc(creationflags=0)
        children1 = parent.children()
        children2 = parent.children(recursive=True)
        for children in (children1, children2):
            assert len(children) == 1
            assert children[0].pid == child.pid
            assert children[0].ppid() == parent.pid

    def test_children_recursive(self):
        # Test children() against two sub processes, p1 and p2, where
        # p1 (our child) spawned p2 (our grandchild).
        parent = psutil.Process()
        child, grandchild = self.spawn_children_pair()
        assert parent.children() == [child]
        assert parent.children(recursive=True) == [child, grandchild]
        # If the intermediate process is gone there's no way for
        # children() to recursively find it.
        child.terminate()
        child.wait()
        assert parent.children(recursive=True) == []

    def test_children_duplicates(self):
        # find the process which has the highest number of children
        table = collections.defaultdict(int)
        for p in psutil.process_iter():
            try:
                table[p.ppid()] += 1
            except psutil.Error:
                pass
        # this is the one, now let's make sure there are no duplicates
        pid = sorted(table.items(), key=lambda x: x[1])[-1][0]
        if LINUX and pid == 0:
            raise pytest.skip("PID 0")
        p = psutil.Process(pid)
        try:
            c = p.children(recursive=True)
        except psutil.AccessDenied:  # windows
            pass
        else:
            assert len(c) == len(set(c))

    def test_parents_and_children(self):
        parent = psutil.Process()
        child, grandchild = self.spawn_children_pair()
        # forward
        children = parent.children(recursive=True)
        assert len(children) == 2
        assert children[0] == child
        assert children[1] == grandchild
        # backward
        parents = grandchild.parents()
        assert parents[0] == child
        assert parents[1] == parent

    def test_suspend_resume(self):
        p = self.spawn_psproc()
        p.suspend()
        for _ in range(100):
            if p.status() == psutil.STATUS_STOPPED:
                break
            time.sleep(0.01)
        p.resume()
        assert p.status() != psutil.STATUS_STOPPED

    def test_invalid_pid(self):
        with pytest.raises(TypeError):
            psutil.Process("1")
        with pytest.raises(ValueError):
            psutil.Process(-1)

    def test_as_dict(self):
        p = psutil.Process()
        d = p.as_dict(attrs=['exe', 'name'])
        assert sorted(d.keys()) == ['exe', 'name']

        p = psutil.Process(min(psutil.pids()))
        d = p.as_dict(attrs=['net_connections'], ad_value='foo')
        if not isinstance(d['net_connections'], list):
            assert d['net_connections'] == 'foo'

        # Test ad_value is set on AccessDenied.
        with mock.patch(
            'psutil.Process.nice', create=True, side_effect=psutil.AccessDenied
        ):
            assert p.as_dict(attrs=["nice"], ad_value=1) == {"nice": 1}

        # Test that NoSuchProcess bubbles up.
        with mock.patch(
            'psutil.Process.nice',
            create=True,
            side_effect=psutil.NoSuchProcess(p.pid, "name"),
        ):
            with pytest.raises(psutil.NoSuchProcess):
                p.as_dict(attrs=["nice"])

        # Test that ZombieProcess is swallowed.
        with mock.patch(
            'psutil.Process.nice',
            create=True,
            side_effect=psutil.ZombieProcess(p.pid, "name"),
        ):
            assert p.as_dict(attrs=["nice"], ad_value="foo") == {"nice": "foo"}

        # By default APIs raising NotImplementedError are
        # supposed to be skipped.
        with mock.patch(
            'psutil.Process.nice', create=True, side_effect=NotImplementedError
        ):
            d = p.as_dict()
            assert 'nice' not in list(d.keys())
            # ...unless the user explicitly asked for some attr.
            with pytest.raises(NotImplementedError):
                p.as_dict(attrs=["nice"])

        # errors
        with pytest.raises(TypeError):
            p.as_dict('name')
        with pytest.raises(ValueError):
            p.as_dict(['foo'])
        with pytest.raises(ValueError):
            p.as_dict(['foo', 'bar'])

    def test_oneshot(self):
        p = psutil.Process()
        with mock.patch("psutil._psplatform.Process.cpu_times") as m:
            with p.oneshot():
                p.cpu_times()
                p.cpu_times()
            assert m.call_count == 1

        with mock.patch("psutil._psplatform.Process.cpu_times") as m:
            p.cpu_times()
            p.cpu_times()
        assert m.call_count == 2

    def test_oneshot_twice(self):
        # Test the case where the ctx manager is __enter__ed twice.
        # The second __enter__ is supposed to result in a NOOP.
        p = psutil.Process()
        with mock.patch("psutil._psplatform.Process.cpu_times") as m1:
            with mock.patch("psutil._psplatform.Process.oneshot_enter") as m2:
                with p.oneshot():
                    p.cpu_times()
                    p.cpu_times()
                    with p.oneshot():
                        p.cpu_times()
                        p.cpu_times()
                assert m1.call_count == 1
                assert m2.call_count == 1

        with mock.patch("psutil._psplatform.Process.cpu_times") as m:
            p.cpu_times()
            p.cpu_times()
        assert m.call_count == 2

    def test_oneshot_cache(self):
        # Make sure oneshot() cache is nonglobal. Instead it's
        # supposed to be bound to the Process instance, see:
        # https://github.com/giampaolo/psutil/issues/1373
        p1, p2 = self.spawn_children_pair()
        p1_ppid = p1.ppid()
        p2_ppid = p2.ppid()
        assert p1_ppid != p2_ppid
        with p1.oneshot():
            assert p1.ppid() == p1_ppid
            assert p2.ppid() == p2_ppid
        with p2.oneshot():
            assert p1.ppid() == p1_ppid
            assert p2.ppid() == p2_ppid

    def test_halfway_terminated_process(self):
        # Test that NoSuchProcess exception gets raised in case the
        # process dies after we create the Process object.
        # Example:
        # >>> proc = Process(1234)
        # >>> time.sleep(2)  # time-consuming task, process dies in meantime
        # >>> proc.name()
        # Refers to Issue #15
        def assert_raises_nsp(fun, fun_name):
            try:
                ret = fun()
            except psutil.ZombieProcess:  # differentiate from NSP
                raise
            except psutil.NoSuchProcess:
                pass
            except psutil.AccessDenied:
                if OPENBSD and fun_name in ('threads', 'num_threads'):
                    return
                raise
            else:
                # NtQuerySystemInformation succeeds even if process is gone.
                if WINDOWS and fun_name in ('exe', 'name'):
                    return
                raise self.fail(
                    "%r didn't raise NSP and returned %r instead" % (fun, ret)
                )

        p = self.spawn_psproc()
        p.terminate()
        p.wait()
        if WINDOWS:  # XXX
            call_until(lambda: p.pid not in psutil.pids())
        self.assertProcessGone(p)

        ns = process_namespace(p)
        for fun, name in ns.iter(ns.all):
            assert_raises_nsp(fun, name)

    @pytest.mark.skipif(not POSIX, reason="POSIX only")
    def test_zombie_process(self):
        _parent, zombie = self.spawn_zombie()
        self.assertProcessZombie(zombie)

    @pytest.mark.skipif(not POSIX, reason="POSIX only")
    def test_zombie_process_is_running_w_exc(self):
        # Emulate a case where internally is_running() raises
        # ZombieProcess.
        p = psutil.Process()
        with mock.patch(
            "psutil.Process", side_effect=psutil.ZombieProcess(0)
        ) as m:
            assert p.is_running()
            assert m.called

    @pytest.mark.skipif(not POSIX, reason="POSIX only")
    def test_zombie_process_status_w_exc(self):
        # Emulate a case where internally status() raises
        # ZombieProcess.
        p = psutil.Process()
        with mock.patch(
            "psutil._psplatform.Process.status",
            side_effect=psutil.ZombieProcess(0),
        ) as m:
            assert p.status() == psutil.STATUS_ZOMBIE
            assert m.called

    def test_reused_pid(self):
        # Emulate a case where PID has been reused by another process.
        from io import StringIO

        subp = self.spawn_testproc()
        p = psutil.Process(subp.pid)
        p._ident = (p.pid, p.create_time() + 100)

        list(psutil.process_iter())
        assert p.pid in psutil._pmap
        assert not p.is_running()

        # make sure is_running() removed PID from process_iter()
        # internal cache
        with mock.patch.object(psutil._common, "PSUTIL_DEBUG", True):
            with redirect_stderr(StringIO()) as f:
                list(psutil.process_iter())
        assert (
            "refreshing Process instance for reused PID %s" % p.pid
            in f.getvalue()
        )
        assert p.pid not in psutil._pmap

        assert p != psutil.Process(subp.pid)
        msg = "process no longer exists and its PID has been reused"
        ns = process_namespace(p)
        for fun, name in ns.iter(ns.setters + ns.killers, clear_cache=False):
            with self.subTest(name=name):
                with pytest.raises(psutil.NoSuchProcess, match=msg):
                    fun()

        assert "terminated + PID reused" in str(p)
        assert "terminated + PID reused" in repr(p)

        with pytest.raises(psutil.NoSuchProcess, match=msg):
            p.ppid()
        with pytest.raises(psutil.NoSuchProcess, match=msg):
            p.parent()
        with pytest.raises(psutil.NoSuchProcess, match=msg):
            p.parents()
        with pytest.raises(psutil.NoSuchProcess, match=msg):
            p.children()

    def test_pid_0(self):
        # Process(0) is supposed to work on all platforms except Linux
        if 0 not in psutil.pids():
            with pytest.raises(psutil.NoSuchProcess):
                psutil.Process(0)
            # These 2 are a contradiction, but "ps" says PID 1's parent
            # is PID 0.
            assert not psutil.pid_exists(0)
            assert psutil.Process(1).ppid() == 0
            return

        p = psutil.Process(0)
        exc = psutil.AccessDenied if WINDOWS else ValueError
        with pytest.raises(exc):
            p.wait()
        with pytest.raises(exc):
            p.terminate()
        with pytest.raises(exc):
            p.suspend()
        with pytest.raises(exc):
            p.resume()
        with pytest.raises(exc):
            p.kill()
        with pytest.raises(exc):
            p.send_signal(signal.SIGTERM)

        # test all methods
        ns = process_namespace(p)
        for fun, name in ns.iter(ns.getters + ns.setters):
            try:
                ret = fun()
            except psutil.AccessDenied:
                pass
            else:
                if name in ("uids", "gids"):
                    assert ret.real == 0
                elif name == "username":
                    user = 'NT AUTHORITY\\SYSTEM' if WINDOWS else 'root'
                    assert p.username() == user
                elif name == "name":
                    assert ret, name

        if not OPENBSD:
            assert 0 in psutil.pids()
            assert psutil.pid_exists(0)

    @pytest.mark.skipif(not HAS_ENVIRON, reason="not supported")
    def test_environ(self):
        def clean_dict(d):
            exclude = ["PLAT", "HOME", "PYTEST_CURRENT_TEST", "PYTEST_VERSION"]
            if MACOS:
                exclude.extend([
                    "__CF_USER_TEXT_ENCODING",
                    "VERSIONER_PYTHON_PREFER_32_BIT",
                    "VERSIONER_PYTHON_VERSION",
                    "VERSIONER_PYTHON_VERSION",
                ])
            for name in exclude:
                d.pop(name, None)
            return {
                k.replace("\r", "").replace("\n", ""): (
                    v.replace("\r", "").replace("\n", "")
                )
                for k, v in d.items()
            }

        self.maxDiff = None
        p = psutil.Process()
        d1 = clean_dict(p.environ())
        d2 = clean_dict(os.environ.copy())
        if not MACOS and GITHUB_ACTIONS:
            assert d1 == d2

    @pytest.mark.skipif(not HAS_ENVIRON, reason="not supported")
    @pytest.mark.skipif(not POSIX, reason="POSIX only")
    @pytest.mark.skipif(
        MACOS_11PLUS,
        reason="macOS 11+ can't get another process environment, issue #2084",
    )
    @pytest.mark.skipif(
        NETBSD, reason="sometimes fails on `assert is_running()`"
    )
    def test_weird_environ(self):
        # environment variables can contain values without an equals sign
        code = textwrap.dedent("""
            #include <unistd.h>
            #include <fcntl.h>

            char * const argv[] = {"cat", 0};
            char * const envp[] = {"A=1", "X", "C=3", 0};

            int main(void) {
                // Close stderr on exec so parent can wait for the
                // execve to finish.
                if (fcntl(2, F_SETFD, FD_CLOEXEC) != 0)
                    return 0;
                return execve("/bin/cat", argv, envp);
            }
            """)
        cexe = create_c_exe(self.get_testfn(), c_code=code)
        sproc = self.spawn_testproc(
            [cexe], stdin=subprocess.PIPE, stderr=subprocess.PIPE
        )
        p = psutil.Process(sproc.pid)
        wait_for_pid(p.pid)
        assert p.is_running()
        # Wait for process to exec or exit.
        assert sproc.stderr.read() == b""
        if MACOS and CI_TESTING:
            try:
                env = p.environ()
            except psutil.AccessDenied:
                # XXX: fails sometimes with:
                # PermissionError from 'sysctl(KERN_PROCARGS2) -> EIO'
                return
        else:
            env = p.environ()
        assert env == {"A": "1", "C": "3"}
        sproc.communicate()
        assert sproc.returncode == 0


# ===================================================================
# --- Limited user tests
# ===================================================================


if POSIX and os.getuid() == 0:

    class LimitedUserTestCase(TestProcess):
        """Repeat the previous tests by using a limited user.
        Executed only on UNIX and only if the user running the test
        script is root.
        """

        # the uid/gid the test suite runs under
        if hasattr(os, 'getuid'):
            PROCESS_UID = os.getuid()
            PROCESS_GID = os.getgid()

        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            # re-define all existent test methods in order to
            # ignore AccessDenied exceptions
            for attr in [x for x in dir(self) if x.startswith('test')]:
                meth = getattr(self, attr)

                def test_(self, meth=meth):  # early-bind the loop variable
                    try:
                        meth()
                    except psutil.AccessDenied:
                        pass

                setattr(self, attr, types.MethodType(test_, self))

        def setUp(self):
            super().setUp()
            os.setegid(1000)
            os.seteuid(1000)

        def tearDown(self):
            os.setegid(self.PROCESS_GID)
            os.seteuid(self.PROCESS_UID)
            super().tearDown()

        def test_nice(self):
            try:
                psutil.Process().nice(-1)
            except psutil.AccessDenied:
                pass
            else:
                raise self.fail("exception not raised")

        @pytest.mark.skipif(True, reason="causes problem as root")
        def test_zombie_process(self):
            pass
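

# A minimal, self-contained illustration (hypothetical names) of the
# late-binding pitfall relevant to the wrapper loop in
# LimitedUserTestCase.__init__: Python closures capture variables by
# reference, so a default argument is needed to freeze the loop variable's
# current value for each generated function.
def _late_binding_demo():
    funcs = []
    for i in range(3):
        def wrapper(i=i):  # default arg freezes the current value of ``i``
            return i
        funcs.append(wrapper)
    # With the default argument this yields [0, 1, 2]; without it, every
    # wrapper would see the final value of ``i`` and yield [2, 2, 2].
    return [f() for f in funcs]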


# ===================================================================
# --- psutil.Popen tests
# ===================================================================


class TestPopen(PsutilTestCase):
    """Tests for psutil.Popen class."""

    @classmethod
    def tearDownClass(cls):
        reap_children()

    def test_misc(self):
        # XXX this test causes a ResourceWarning on Python 3 because
        # psutil.__subproc instance doesn't get properly freed.
        # Not sure what to do though.
        cmd = [
            PYTHON_EXE,
            "-c",
            "import time; [time.sleep(0.1) for x in range(100)];",
        ]
        with psutil.Popen(
            cmd,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            env=PYTHON_EXE_ENV,
        ) as proc:
            proc.name()
            proc.cpu_times()
            proc.stdin  # noqa
            assert dir(proc)
            with pytest.raises(AttributeError):
                proc.foo  # noqa
            proc.terminate()
        if POSIX:
            assert proc.wait(5) == -signal.SIGTERM
        else:
            assert proc.wait(5) == signal.SIGTERM

    def test_ctx_manager(self):
        with psutil.Popen(
            [PYTHON_EXE, "-V"],
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            stdin=subprocess.PIPE,
            env=PYTHON_EXE_ENV,
        ) as proc:
            proc.communicate()
        assert proc.stdout.closed
        assert proc.stderr.closed
        assert proc.stdin.closed
        assert proc.returncode == 0

    def test_kill_terminate(self):
        # subprocess.Popen()'s terminate(), kill() and send_signal() do
        # not raise exception after the process is gone. psutil.Popen
        # diverges from that.
        cmd = [
            PYTHON_EXE,
            "-c",
            "import time; [time.sleep(0.1) for x in range(100)];",
        ]
        with psutil.Popen(
            cmd,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            env=PYTHON_EXE_ENV,
        ) as proc:
            proc.terminate()
            proc.wait()
            with pytest.raises(psutil.NoSuchProcess):
                proc.terminate()
            with pytest.raises(psutil.NoSuchProcess):
                proc.kill()
            with pytest.raises(psutil.NoSuchProcess):
                proc.send_signal(signal.SIGTERM)
            if WINDOWS:
                with pytest.raises(psutil.NoSuchProcess):
                    proc.send_signal(signal.CTRL_C_EVENT)
                with pytest.raises(psutil.NoSuchProcess):
                    proc.send_signal(signal.CTRL_BREAK_EVENT)
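

# A toy sketch (hypothetical names) of the delegation idea behind
# psutil.Popen: keep two underlying objects and fall back from one to the
# other on attribute lookup, so a single handle exposes both APIs. This is
# an illustration of the pattern, not psutil's exact code.
class _DelegatingWrapper:
    def __init__(self, primary, fallback):
        self._primary = primary
        self._fallback = fallback

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails on the wrapper.
        try:
            return getattr(self._primary, name)
        except AttributeError:
            return getattr(self._fallback, name)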
#!/usr/bin/env python3

# Copyright (c) 2009, Giampaolo Rodola'. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

"""Tests for system APIS."""

import contextlib
import datetime
import errno
import os
import platform
import pprint
import shutil
import signal
import socket
import sys
import time

import psutil
from psutil import AIX
from psutil import BSD
from psutil import FREEBSD
from psutil import LINUX
from psutil import MACOS
from psutil import NETBSD
from psutil import OPENBSD
from psutil import POSIX
from psutil import SUNOS
from psutil import WINDOWS
from psutil._compat import PY3
from psutil._compat import FileNotFoundError
from psutil._compat import long
from psutil.tests import ASCII_FS
from psutil.tests import CI_TESTING
from psutil.tests import DEVNULL
from psutil.tests import GITHUB_ACTIONS
from psutil.tests import GLOBAL_TIMEOUT
from psutil.tests import HAS_BATTERY
from psutil.tests import HAS_CPU_FREQ
from psutil.tests import HAS_GETLOADAVG
from psutil.tests import HAS_NET_IO_COUNTERS
from psutil.tests import HAS_SENSORS_BATTERY
from psutil.tests import HAS_SENSORS_FANS
from psutil.tests import HAS_SENSORS_TEMPERATURES
from psutil.tests import IS_64BIT
from psutil.tests import MACOS_12PLUS
from psutil.tests import PYPY
from psutil.tests import QEMU_USER
from psutil.tests import UNICODE_SUFFIX
from psutil.tests import PsutilTestCase
from psutil.tests import check_net_address
from psutil.tests import enum
from psutil.tests import mock
from psutil.tests import pytest
from psutil.tests import retry_on_failure


# ===================================================================
# --- System-related API tests
# ===================================================================


class TestProcessIter(PsutilTestCase):
    def test_pid_presence(self):
        assert os.getpid() in [x.pid for x in psutil.process_iter()]
        sproc = self.spawn_testproc()
        assert sproc.pid in [x.pid for x in psutil.process_iter()]
        p = psutil.Process(sproc.pid)
        p.kill()
        p.wait()
        assert sproc.pid not in [x.pid for x in psutil.process_iter()]

    def test_no_duplicates(self):
        ls = list(psutil.process_iter())
        assert sorted(ls, key=lambda x: x.pid) == sorted(
            set(ls), key=lambda x: x.pid
        )

    def test_emulate_nsp(self):
        list(psutil.process_iter())  # populate cache
        for x in range(2):
            with mock.patch(
                'psutil.Process.as_dict',
                side_effect=psutil.NoSuchProcess(os.getpid()),
            ):
                assert list(psutil.process_iter(attrs=["cpu_times"])) == []
            psutil.process_iter.cache_clear()  # repeat test without cache

    def test_emulate_access_denied(self):
        list(psutil.process_iter())  # populate cache
        for x in range(2):
            with mock.patch(
                'psutil.Process.as_dict',
                side_effect=psutil.AccessDenied(os.getpid()),
            ):
                with pytest.raises(psutil.AccessDenied):
                    list(psutil.process_iter(attrs=["cpu_times"]))
            psutil.process_iter.cache_clear()  # repeat test without cache

    def test_attrs(self):
        for p in psutil.process_iter(attrs=['pid']):
            assert list(p.info.keys()) == ['pid']
        # yield again
        for p in psutil.process_iter(attrs=['pid']):
            assert list(p.info.keys()) == ['pid']
        with pytest.raises(ValueError):
            list(psutil.process_iter(attrs=['foo']))
        with mock.patch(
            "psutil._psplatform.Process.cpu_times",
            side_effect=psutil.AccessDenied(0, ""),
        ) as m:
            for p in psutil.process_iter(attrs=["pid", "cpu_times"]):
                assert p.info['cpu_times'] is None
                assert p.info['pid'] >= 0
            assert m.called
        with mock.patch(
            "psutil._psplatform.Process.cpu_times",
            side_effect=psutil.AccessDenied(0, ""),
        ) as m:
            flag = object()
            for p in psutil.process_iter(
                attrs=["pid", "cpu_times"], ad_value=flag
            ):
                assert p.info['cpu_times'] is flag
                assert p.info['pid'] >= 0
            assert m.called

    def test_cache_clear(self):
        list(psutil.process_iter())  # populate cache
        assert psutil._pmap
        psutil.process_iter.cache_clear()
        assert not psutil._pmap


class TestProcessAPIs(PsutilTestCase):
    @pytest.mark.skipif(
        PYPY and WINDOWS,
        reason="spawn_testproc() unreliable on PYPY + WINDOWS",
    )
    def test_wait_procs(self):
        def callback(p):
            pids.append(p.pid)

        pids = []
        sproc1 = self.spawn_testproc()
        sproc2 = self.spawn_testproc()
        sproc3 = self.spawn_testproc()
        procs = [psutil.Process(x.pid) for x in (sproc1, sproc2, sproc3)]
        with pytest.raises(ValueError):
            psutil.wait_procs(procs, timeout=-1)
        with pytest.raises(TypeError):
            psutil.wait_procs(procs, callback=1)
        t = time.time()
        gone, alive = psutil.wait_procs(procs, timeout=0.01, callback=callback)

        assert time.time() - t < 0.5
        assert gone == []
        assert len(alive) == 3
        assert pids == []
        for p in alive:
            assert not hasattr(p, 'returncode')

        @retry_on_failure(30)
        def test_1(procs, callback):
            gone, alive = psutil.wait_procs(
                procs, timeout=0.03, callback=callback
            )
            assert len(gone) == 1
            assert len(alive) == 2
            return gone, alive

        sproc3.terminate()
        gone, alive = test_1(procs, callback)
        assert sproc3.pid in [x.pid for x in gone]
        if POSIX:
            assert gone.pop().returncode == -signal.SIGTERM
        else:
            assert gone.pop().returncode == 1
        assert pids == [sproc3.pid]
        for p in alive:
            assert not hasattr(p, 'returncode')

        @retry_on_failure(30)
        def test_2(procs, callback):
            gone, alive = psutil.wait_procs(
                procs, timeout=0.03, callback=callback
            )
            assert len(gone) == 3
            assert len(alive) == 0
            return gone, alive

        sproc1.terminate()
        sproc2.terminate()
        gone, alive = test_2(procs, callback)
        assert set(pids) == set([sproc1.pid, sproc2.pid, sproc3.pid])
        for p in gone:
            assert hasattr(p, 'returncode')

    @pytest.mark.skipif(
        PYPY and WINDOWS,
        reason="spawn_testproc() unreliable on PYPY + WINDOWS",
    )
    def test_wait_procs_no_timeout(self):
        sproc1 = self.spawn_testproc()
        sproc2 = self.spawn_testproc()
        sproc3 = self.spawn_testproc()
        procs = [psutil.Process(x.pid) for x in (sproc1, sproc2, sproc3)]
        for p in procs:
            p.terminate()
        psutil.wait_procs(procs)

    def test_pid_exists(self):
        sproc = self.spawn_testproc()
        assert psutil.pid_exists(sproc.pid)
        p = psutil.Process(sproc.pid)
        p.kill()
        p.wait()
        assert not psutil.pid_exists(sproc.pid)
        assert not psutil.pid_exists(-1)
        assert psutil.pid_exists(0) == (0 in psutil.pids())

    def test_pid_exists_2(self):
        pids = psutil.pids()
        for pid in pids:
            try:
                assert psutil.pid_exists(pid)
            except AssertionError:
                # In case the process disappeared in the meantime, fail
                # only if it is no longer in psutil.pids().
                time.sleep(0.1)
                assert pid not in psutil.pids()
        pids = range(max(pids) + 15000, max(pids) + 16000)
        for pid in pids:
            assert not psutil.pid_exists(pid)
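

# A rough POSIX-only sketch of the idea behind pid_exists(): sending
# signal 0 performs existence/permission checking without actually
# delivering a signal. This is an illustrative approximation with
# hypothetical names, not psutil's exact implementation.
import errno
import os

def _pid_exists_sketch(pid):
    if pid < 0:
        return False
    if pid == 0:
        # pid 0 has special meaning for os.kill() (current process
        # group), so never probe it with a real signal.
        return True
    try:
        os.kill(pid, 0)
    except OSError as err:
        # ESRCH: no such process; EPERM: it exists, we just can't
        # signal it.
        return err.errno != errno.ESRCH
    return True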


class TestMiscAPIs(PsutilTestCase):
    def test_boot_time(self):
        bt = psutil.boot_time()
        assert isinstance(bt, float)
        assert bt > 0
        assert bt < time.time()

    @pytest.mark.skipif(
        CI_TESTING and not psutil.users(), reason="unreliable on CI"
    )
    def test_users(self):
        users = psutil.users()
        assert users != []
        for user in users:
            with self.subTest(user=user):
                assert user.name
                assert isinstance(user.name, str)
                assert isinstance(user.terminal, (str, type(None)))
                assert isinstance(user.host, (str, type(None)))
                user.terminal  # noqa
                user.host  # noqa
                assert user.started > 0.0
                datetime.datetime.fromtimestamp(user.started)
                if WINDOWS or OPENBSD:
                    assert user.pid is None
                else:
                    psutil.Process(user.pid)

    def test_test(self):
        # test for psutil.test() function
        stdout = sys.stdout
        sys.stdout = DEVNULL
        try:
            psutil.test()
        finally:
            sys.stdout = stdout

    def test_os_constants(self):
        names = [
            "POSIX",
            "WINDOWS",
            "LINUX",
            "MACOS",
            "FREEBSD",
            "OPENBSD",
            "NETBSD",
            "BSD",
            "SUNOS",
        ]
        for name in names:
            assert isinstance(getattr(psutil, name), bool), name

        if os.name == 'posix':
            assert psutil.POSIX
            assert not psutil.WINDOWS
            names.remove("POSIX")
            if "linux" in sys.platform.lower():
                assert psutil.LINUX
                names.remove("LINUX")
            elif "bsd" in sys.platform.lower():
                assert psutil.BSD
                assert [psutil.FREEBSD, psutil.OPENBSD, psutil.NETBSD].count(
                    True
                ) == 1
                names.remove("BSD")
                names.remove("FREEBSD")
                names.remove("OPENBSD")
                names.remove("NETBSD")
            elif (
                "sunos" in sys.platform.lower()
                or "solaris" in sys.platform.lower()
            ):
                assert psutil.SUNOS
                names.remove("SUNOS")
            elif "darwin" in sys.platform.lower():
                assert psutil.MACOS
                names.remove("MACOS")
        else:
            assert psutil.WINDOWS
            assert not psutil.POSIX
            names.remove("WINDOWS")

        # assert all other constants are set to False
        for name in names:
            assert not getattr(psutil, name), name
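

# Small usage sketch: psutil.boot_time() (exercised above) returns an
# epoch timestamp, which can be rendered for humans with the stdlib
# ``datetime`` module. The helper name is illustrative.
import datetime

def _epoch_to_datetime(epoch):
    return datetime.datetime.fromtimestamp(epoch)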


class TestMemoryAPIs(PsutilTestCase):
    def test_virtual_memory(self):
        mem = psutil.virtual_memory()
        assert mem.total > 0, mem
        assert mem.available > 0, mem
        assert 0 <= mem.percent <= 100, mem
        assert mem.used > 0, mem
        assert mem.free >= 0, mem
        for name in mem._fields:
            value = getattr(mem, name)
            if name != 'percent':
                assert isinstance(value, (int, long))
            if name != 'total':
                if value < 0:
                    raise self.fail("%r < 0 (%s)" % (name, value))
                if value > mem.total:
                    raise self.fail(
                        "%r > total (total=%s, %s=%s)"
                        % (name, mem.total, name, value)
                    )

    def test_swap_memory(self):
        mem = psutil.swap_memory()
        assert mem._fields == (
            'total',
            'used',
            'free',
            'percent',
            'sin',
            'sout',
        )

        assert mem.total >= 0, mem
        assert mem.used >= 0, mem
        if mem.total > 0:
            assert mem.free > 0, mem
        else:
            # likely a system with no swap partition
            assert mem.free == 0, mem
        assert 0 <= mem.percent <= 100, mem
        assert mem.sin >= 0, mem
        assert mem.sout >= 0, mem
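

# Hedged sketch of how ``percent`` fields like the ones checked above are
# conventionally derived: used / total * 100, guarding against a zero
# total (e.g. a system with no swap partition). Names are illustrative,
# not psutil's exact helper.
def _usage_percent_sketch(used, total, round_=1):
    try:
        return round(used / total * 100, round_)
    except ZeroDivisionError:
        return 0.0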


class TestCpuAPIs(PsutilTestCase):
    def test_cpu_count_logical(self):
        logical = psutil.cpu_count()
        assert logical is not None
        assert logical == len(psutil.cpu_times(percpu=True))
        assert logical >= 1

        if os.path.exists("/proc/cpuinfo"):
            with open("/proc/cpuinfo") as fd:
                cpuinfo_data = fd.read()
            if "physical id" not in cpuinfo_data:
                raise pytest.skip("cpuinfo doesn't include physical id")

    def test_cpu_count_cores(self):
        logical = psutil.cpu_count()
        cores = psutil.cpu_count(logical=False)
        if cores is None:
            raise pytest.skip("cpu_count_cores() is None")
        if WINDOWS and sys.getwindowsversion()[:2] <= (6, 1):  # <= Win 7
            assert cores is None
        else:
            assert cores >= 1
            assert logical >= cores

    def test_cpu_count_none(self):
        # https://github.com/giampaolo/psutil/issues/1085
        for val in (-1, 0, None):
            with mock.patch(
                'psutil._psplatform.cpu_count_logical', return_value=val
            ) as m:
                assert psutil.cpu_count() is None
                assert m.called
            with mock.patch(
                'psutil._psplatform.cpu_count_cores', return_value=val
            ) as m:
                assert psutil.cpu_count(logical=False) is None
                assert m.called

    def test_cpu_times(self):
        # Check type, value >= 0, str().
        total = 0
        times = psutil.cpu_times()
        sum(times)
        for cp_time in times:
            assert isinstance(cp_time, float)
            assert cp_time >= 0.0
            total += cp_time
        assert round(abs(total - sum(times)), 6) == 0
        str(times)
        # CPU times are supposed to increase over time, or at least
        # remain the same, because time cannot go backwards.
        # Surprisingly sometimes this might not be the case (at
        # least on Windows and Linux), see:
        # https://github.com/giampaolo/psutil/issues/392
        # https://github.com/giampaolo/psutil/issues/645
        # if not WINDOWS:
        #     last = psutil.cpu_times()
        #     for x in range(100):
        #         new = psutil.cpu_times()
        #         for field in new._fields:
        #             new_t = getattr(new, field)
        #             last_t = getattr(last, field)
        #             self.assertGreaterEqual(new_t, last_t,
        #                                     msg="%s %s" % (new_t, last_t))
        #         last = new

    def test_cpu_times_time_increases(self):
        # Make sure time increases between calls.
        t1 = sum(psutil.cpu_times())
        stop_at = time.time() + GLOBAL_TIMEOUT
        while time.time() < stop_at:
            t2 = sum(psutil.cpu_times())
            if t2 > t1:
                return
        raise self.fail("time remained the same")

    def test_per_cpu_times(self):
        # Check type, value >= 0, str().
        for times in psutil.cpu_times(percpu=True):
            total = 0
            sum(times)
            for cp_time in times:
                assert isinstance(cp_time, float)
                assert cp_time >= 0.0
                total += cp_time
            assert round(abs(total - sum(times)), 6) == 0
            str(times)
        assert len(psutil.cpu_times(percpu=True)[0]) == len(
            psutil.cpu_times(percpu=False)
        )

        # Note: in theory CPU times are always supposed to increase over
        # time or remain the same but never go backwards. In practice
        # sometimes this is not the case.
        # This issue seemed to afflict Windows:
        # https://github.com/giampaolo/psutil/issues/392
        # ...but it turns out also Linux (rarely) behaves the same.
        # last = psutil.cpu_times(percpu=True)
        # for x in range(100):
        #     new = psutil.cpu_times(percpu=True)
        #     for index in range(len(new)):
        #         newcpu = new[index]
        #         lastcpu = last[index]
        #         for field in newcpu._fields:
        #             new_t = getattr(newcpu, field)
        #             last_t = getattr(lastcpu, field)
        #             self.assertGreaterEqual(
        #                 new_t, last_t, msg="%s %s" % (lastcpu, newcpu))
        #     last = new

    def test_per_cpu_times_2(self):
        # Simulate some workload, then make sure CPU times have
        # increased between calls.
        tot1 = psutil.cpu_times(percpu=True)
        giveup_at = time.time() + GLOBAL_TIMEOUT
        while True:
            if time.time() >= giveup_at:
                return self.fail("timeout")
            tot2 = psutil.cpu_times(percpu=True)
            for t1, t2 in zip(tot1, tot2):
                t1, t2 = psutil._cpu_busy_time(t1), psutil._cpu_busy_time(t2)
                difference = t2 - t1
                if difference >= 0.05:
                    return

    @pytest.mark.skipif(
        CI_TESTING and OPENBSD, reason="unreliable on OPENBSD + CI"
    )
    def test_cpu_times_comparison(self):
        # Make sure the sum of all per cpu times is almost equal to
        # base "one cpu" times. On OpenBSD the sum of per-CPUs is
        # higher for some reason.
        base = psutil.cpu_times()
        per_cpu = psutil.cpu_times(percpu=True)
        summed_values = base._make([sum(num) for num in zip(*per_cpu)])
        for field in base._fields:
            with self.subTest(field=field, base=base, per_cpu=per_cpu):
                assert (
                    abs(getattr(base, field) - getattr(summed_values, field))
                    < 1
                )

    def _test_cpu_percent(self, percent, last_ret, new_ret):
        try:
            assert isinstance(percent, float)
            assert percent >= 0.0
            assert str(percent) != "-0.0"  # reject negative zero
            assert percent <= 100.0 * psutil.cpu_count()
        except AssertionError as err:
            raise AssertionError(
                "\n%s\nlast=%s\nnew=%s"
                % (err, pprint.pformat(last_ret), pprint.pformat(new_ret))
            )

    def test_cpu_percent(self):
        last = psutil.cpu_percent(interval=0.001)
        for _ in range(100):
            new = psutil.cpu_percent(interval=None)
            self._test_cpu_percent(new, last, new)
            last = new
        with pytest.raises(ValueError):
            psutil.cpu_percent(interval=-1)

    def test_per_cpu_percent(self):
        last = psutil.cpu_percent(interval=0.001, percpu=True)
        assert len(last) == psutil.cpu_count()
        for _ in range(100):
            new = psutil.cpu_percent(interval=None, percpu=True)
            for percent in new:
                self._test_cpu_percent(percent, last, new)
            last = new
        with pytest.raises(ValueError):
            psutil.cpu_percent(interval=-1, percpu=True)

    def test_cpu_times_percent(self):
        last = psutil.cpu_times_percent(interval=0.001)
        for _ in range(100):
            new = psutil.cpu_times_percent(interval=None)
            for percent in new:
                self._test_cpu_percent(percent, last, new)
            self._test_cpu_percent(sum(new), last, new)
            last = new
        with pytest.raises(ValueError):
            psutil.cpu_times_percent(interval=-1)

    def test_per_cpu_times_percent(self):
        last = psutil.cpu_times_percent(interval=0.001, percpu=True)
        assert len(last) == psutil.cpu_count()
        for _ in range(100):
            new = psutil.cpu_times_percent(interval=None, percpu=True)
            for cpu in new:
                for percent in cpu:
                    self._test_cpu_percent(percent, last, new)
                self._test_cpu_percent(sum(cpu), last, new)
            last = new

    def test_per_cpu_times_percent_negative(self):
        # see: https://github.com/giampaolo/psutil/issues/645
        psutil.cpu_times_percent(percpu=True)
        zero_times = [
            nt._make([0] * len(nt._fields))
            for nt in psutil.cpu_times(percpu=True)
        ]
        with mock.patch('psutil.cpu_times', return_value=zero_times):
            for cpu in psutil.cpu_times_percent(percpu=True):
                for percent in cpu:
                    self._test_cpu_percent(percent, None, None)

    def test_cpu_stats(self):
        # Tested more extensively in per-platform test modules.
        infos = psutil.cpu_stats()
        assert infos._fields == (
            'ctx_switches',
            'interrupts',
            'soft_interrupts',
            'syscalls',
        )
        for name in infos._fields:
            value = getattr(infos, name)
            assert value >= 0
            # on AIX, ctx_switches is always 0
            if not AIX and name in ('ctx_switches', 'interrupts'):
                assert value > 0

    # TODO: remove this once 1892 is fixed
    @pytest.mark.skipif(
        MACOS and platform.machine() == 'arm64', reason="skipped due to #1892"
    )
    @pytest.mark.skipif(not HAS_CPU_FREQ, reason="not supported")
    def test_cpu_freq(self):
        def check_ls(ls):
            for nt in ls:
                assert nt._fields == ('current', 'min', 'max')
                if nt.max != 0.0:
                    assert nt.current <= nt.max
                for name in nt._fields:
                    value = getattr(nt, name)
                    assert isinstance(value, (int, long, float))
                    assert value >= 0

        ls = psutil.cpu_freq(percpu=True)
        if FREEBSD and not ls:
            raise pytest.skip("returns empty list on FreeBSD")

        assert ls, ls
        check_ls([psutil.cpu_freq(percpu=False)])

        if LINUX:
            assert len(ls) == psutil.cpu_count()

    @pytest.mark.skipif(not HAS_GETLOADAVG, reason="not supported")
    def test_getloadavg(self):
        loadavg = psutil.getloadavg()
        assert len(loadavg) == 3
        for load in loadavg:
            assert isinstance(load, float)
            assert load >= 0.0
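

# For context: on POSIX the stdlib exposes the same three values through
# os.getloadavg() (the 1-, 5- and 15-minute run queue averages), while
# psutil also emulates them on Windows. A tiny stdlib-only sketch with an
# illustrative name:
import os

def _loadavg_sketch():
    one, five, fifteen = os.getloadavg()
    return (one, five, fifteen)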


class TestDiskAPIs(PsutilTestCase):
    @pytest.mark.skipif(
        PYPY and not IS_64BIT, reason="unreliable on PYPY32 + 32BIT"
    )
    def test_disk_usage(self):
        usage = psutil.disk_usage(os.getcwd())
        assert usage._fields == ('total', 'used', 'free', 'percent')

        assert usage.total > 0, usage
        assert usage.used > 0, usage
        assert usage.free > 0, usage
        assert usage.total > usage.used, usage
        assert usage.total > usage.free, usage
        assert 0 <= usage.percent <= 100, usage.percent
        if hasattr(shutil, 'disk_usage'):
            # py >= 3.3, see: http://bugs.python.org/issue12442
            shutil_usage = shutil.disk_usage(os.getcwd())
            tolerance = 5 * 1024 * 1024  # 5MB
            assert usage.total == shutil_usage.total
            assert abs(usage.free - shutil_usage.free) < tolerance
            if not MACOS_12PLUS:
                # see https://github.com/giampaolo/psutil/issues/2147
                assert abs(usage.used - shutil_usage.used) < tolerance

        # if path does not exist OSError ENOENT is expected across
        # all platforms
        fname = self.get_testfn()
        with pytest.raises(FileNotFoundError):
            psutil.disk_usage(fname)

    @pytest.mark.skipif(not ASCII_FS, reason="not an ASCII fs")
    def test_disk_usage_unicode(self):
        # See: https://github.com/giampaolo/psutil/issues/416
        with pytest.raises(UnicodeEncodeError):
            psutil.disk_usage(UNICODE_SUFFIX)

    def test_disk_usage_bytes(self):
        psutil.disk_usage(b'.')

    def test_disk_partitions(self):
        def check_ntuple(nt):
            assert isinstance(nt.device, str)
            assert isinstance(nt.mountpoint, str)
            assert isinstance(nt.fstype, str)
            assert isinstance(nt.opts, str)

        # all = False
        ls = psutil.disk_partitions(all=False)
        assert ls
        for disk in ls:
            check_ntuple(disk)
            if WINDOWS and 'cdrom' in disk.opts:
                continue
            if not POSIX:
                assert os.path.exists(disk.device), disk
            else:
                # we cannot make any assumption about this, see:
                # http://goo.gl/p9c43
                disk.device  # noqa
            # on modern systems mount points can also be files
            assert os.path.exists(disk.mountpoint), disk
            assert disk.fstype, disk

        # all = True
        ls = psutil.disk_partitions(all=True)
        assert ls
        for disk in psutil.disk_partitions(all=True):
            check_ntuple(disk)
            if not WINDOWS and disk.mountpoint:
                try:
                    os.stat(disk.mountpoint)
                except OSError as err:
                    if GITHUB_ACTIONS and MACOS and err.errno == errno.EIO:
                        continue
                    # http://mail.python.org/pipermail/python-dev/
                    #     2012-June/120787.html
                    if err.errno not in (errno.EPERM, errno.EACCES):
                        raise
                else:
                    assert os.path.exists(disk.mountpoint), disk

        # ---

        def find_mount_point(path):
            path = os.path.abspath(path)
            while not os.path.ismount(path):
                path = os.path.dirname(path)
            return path.lower()

        mount = find_mount_point(__file__)
        mounts = [
            x.mountpoint.lower()
            for x in psutil.disk_partitions(all=True)
            if x.mountpoint
        ]
        assert mount in mounts

    @pytest.mark.skipif(
        LINUX and not os.path.exists('/proc/diskstats'),
        reason="/proc/diskstats not available on this linux version",
    )
    @pytest.mark.skipif(
        CI_TESTING and not psutil.disk_io_counters(), reason="unreliable on CI"
    )  # no visible disks
    def test_disk_io_counters(self):
        def check_ntuple(nt):
            assert nt[0] == nt.read_count
            assert nt[1] == nt.write_count
            assert nt[2] == nt.read_bytes
            assert nt[3] == nt.write_bytes
            if not (OPENBSD or NETBSD):
                assert nt[4] == nt.read_time
                assert nt[5] == nt.write_time
                if LINUX:
                    assert nt[6] == nt.read_merged_count
                    assert nt[7] == nt.write_merged_count
                    assert nt[8] == nt.busy_time
                elif FREEBSD:
                    assert nt[6] == nt.busy_time
            for name in nt._fields:
                assert getattr(nt, name) >= 0, nt

        ret = psutil.disk_io_counters(perdisk=False)
        assert ret is not None, "no disks on this system?"
        check_ntuple(ret)
        ret = psutil.disk_io_counters(perdisk=True)
        # make sure there are no duplicates
        assert len(ret) == len(set(ret))
        for key in ret:
            assert key, key
            check_ntuple(ret[key])

    def test_disk_io_counters_no_disks(self):
        # Emulate a case where no disks are installed, see:
        # https://github.com/giampaolo/psutil/issues/1062
        with mock.patch(
            'psutil._psplatform.disk_io_counters', return_value={}
        ) as m:
            assert psutil.disk_io_counters(perdisk=False) is None
            assert psutil.disk_io_counters(perdisk=True) == {}
            assert m.called


class TestNetAPIs(PsutilTestCase):
    @pytest.mark.skipif(not HAS_NET_IO_COUNTERS, reason="not supported")
    def test_net_io_counters(self):
        def check_ntuple(nt):
            assert nt[0] == nt.bytes_sent
            assert nt[1] == nt.bytes_recv
            assert nt[2] == nt.packets_sent
            assert nt[3] == nt.packets_recv
            assert nt[4] == nt.errin
            assert nt[5] == nt.errout
            assert nt[6] == nt.dropin
            assert nt[7] == nt.dropout
            assert nt.bytes_sent >= 0, nt
            assert nt.bytes_recv >= 0, nt
            assert nt.packets_sent >= 0, nt
            assert nt.packets_recv >= 0, nt
            assert nt.errin >= 0, nt
            assert nt.errout >= 0, nt
            assert nt.dropin >= 0, nt
            assert nt.dropout >= 0, nt

        ret = psutil.net_io_counters(pernic=False)
        check_ntuple(ret)
        ret = psutil.net_io_counters(pernic=True)
        assert ret
        for key in ret:
            assert key
            assert isinstance(key, str)
            check_ntuple(ret[key])

    @pytest.mark.skipif(not HAS_NET_IO_COUNTERS, reason="not supported")
    def test_net_io_counters_no_nics(self):
        # Emulate a case where no NICs are installed, see:
        # https://github.com/giampaolo/psutil/issues/1062
        with mock.patch(
            'psutil._psplatform.net_io_counters', return_value={}
        ) as m:
            assert psutil.net_io_counters(pernic=False) is None
            assert psutil.net_io_counters(pernic=True) == {}
            assert m.called

    @pytest.mark.skipif(QEMU_USER, reason="QEMU user not supported")
    def test_net_if_addrs(self):
        nics = psutil.net_if_addrs()
        assert nics, nics

        nic_stats = psutil.net_if_stats()

        # Not reliable on all platforms (net_if_addrs() reports more
        # interfaces).
        # self.assertEqual(sorted(nics.keys()),
        #                  sorted(psutil.net_io_counters(pernic=True).keys()))

        families = {socket.AF_INET, socket.AF_INET6, psutil.AF_LINK}
        for nic, addrs in nics.items():
            assert isinstance(nic, str)
            assert len(set(addrs)) == len(addrs)
            for addr in addrs:
                assert isinstance(addr.family, int)
                assert isinstance(addr.address, str)
                assert isinstance(addr.netmask, (str, type(None)))
                assert isinstance(addr.broadcast, (str, type(None)))
                assert addr.family in families
                if PY3 and not PYPY:
                    assert isinstance(addr.family, enum.IntEnum)
                if nic_stats[nic].isup:
                    # Do not test binding to addresses of interfaces
                    # that are down
                    if addr.family == socket.AF_INET:
                        s = socket.socket(addr.family)
                        with contextlib.closing(s):
                            s.bind((addr.address, 0))
                    elif addr.family == socket.AF_INET6:
                        info = socket.getaddrinfo(
                            addr.address,
                            0,
                            socket.AF_INET6,
                            socket.SOCK_STREAM,
                            0,
                            socket.AI_PASSIVE,
                        )[0]
                        af, socktype, proto, _canonname, sa = info
                        s = socket.socket(af, socktype, proto)
                        with contextlib.closing(s):
                            s.bind(sa)
                for ip in (
                    addr.address,
                    addr.netmask,
                    addr.broadcast,
                    addr.ptp,
                ):
                    if ip is not None:
                        # TODO: skip AF_INET6 for now because I get:
                        # AddressValueError: Only hex digits permitted in
                        # u'c6f3%lxcbr0' in u'fe80::c8e0:fff:fe54:c6f3%lxcbr0'
                        if addr.family != socket.AF_INET6:
                            check_net_address(ip, addr.family)
                # broadcast and ptp addresses are mutually exclusive
                if addr.broadcast:
                    assert addr.ptp is None
                elif addr.ptp:
                    assert addr.broadcast is None

        if BSD or MACOS or SUNOS:
            if hasattr(socket, "AF_LINK"):
                assert psutil.AF_LINK == socket.AF_LINK
        elif LINUX:
            assert psutil.AF_LINK == socket.AF_PACKET
        elif WINDOWS:
            assert psutil.AF_LINK == -1

    def test_net_if_addrs_mac_null_bytes(self):
        # Simulate that the underlying C function returns an incomplete
        # MAC address. psutil is supposed to fill it with null bytes.
        # https://github.com/giampaolo/psutil/issues/786
        if POSIX:
            ret = [('em1', psutil.AF_LINK, '06:3d:29', None, None, None)]
        else:
            ret = [('em1', -1, '06-3d-29', None, None, None)]
        with mock.patch(
            'psutil._psplatform.net_if_addrs', return_value=ret
        ) as m:
            addr = psutil.net_if_addrs()['em1'][0]
            assert m.called
            if POSIX:
                assert addr.address == '06:3d:29:00:00:00'
            else:
                assert addr.address == '06-3d-29-00-00-00'

    @pytest.mark.skipif(QEMU_USER, reason="QEMU user not supported")
    def test_net_if_stats(self):
        nics = psutil.net_if_stats()
        assert nics, nics
        all_duplexes = (
            psutil.NIC_DUPLEX_FULL,
            psutil.NIC_DUPLEX_HALF,
            psutil.NIC_DUPLEX_UNKNOWN,
        )
        for name, stats in nics.items():
            assert isinstance(name, str)
            isup, duplex, speed, mtu, flags = stats
            assert isinstance(isup, bool)
            assert duplex in all_duplexes
            assert speed >= 0
            assert mtu >= 0
            assert isinstance(flags, str)

    @pytest.mark.skipif(
        not (LINUX or BSD or MACOS), reason="LINUX or BSD or MACOS specific"
    )
    def test_net_if_stats_enodev(self):
        # See: https://github.com/giampaolo/psutil/issues/1279
        with mock.patch(
            'psutil._psutil_posix.net_if_mtu',
            side_effect=OSError(errno.ENODEV, ""),
        ) as m:
            ret = psutil.net_if_stats()
            assert ret == {}
            assert m.called


class TestSensorsAPIs(PsutilTestCase):
    @pytest.mark.skipif(not HAS_SENSORS_TEMPERATURES, reason="not supported")
    def test_sensors_temperatures(self):
        temps = psutil.sensors_temperatures()
        for name, entries in temps.items():
            assert isinstance(name, str)
            for entry in entries:
                assert isinstance(entry.label, str)
                if entry.current is not None:
                    assert entry.current >= 0
                if entry.high is not None:
                    assert entry.high >= 0
                if entry.critical is not None:
                    assert entry.critical >= 0

    @pytest.mark.skipif(not HAS_SENSORS_TEMPERATURES, reason="not supported")
    def test_sensors_temperatures_fahrenheit(self):
        d = {'coretemp': [('label', 50.0, 60.0, 70.0)]}
        with mock.patch(
            "psutil._psplatform.sensors_temperatures", return_value=d
        ) as m:
            temps = psutil.sensors_temperatures(fahrenheit=True)['coretemp'][0]
            assert m.called
            assert temps.current == 122.0
            assert temps.high == 140.0
            assert temps.critical == 158.0

    @pytest.mark.skipif(not HAS_SENSORS_BATTERY, reason="not supported")
    @pytest.mark.skipif(not HAS_BATTERY, reason="no battery")
    def test_sensors_battery(self):
        ret = psutil.sensors_battery()
        assert ret.percent >= 0
        assert ret.percent <= 100
        if ret.secsleft not in (
            psutil.POWER_TIME_UNKNOWN,
            psutil.POWER_TIME_UNLIMITED,
        ):
            assert ret.secsleft >= 0
        else:
            if ret.secsleft == psutil.POWER_TIME_UNLIMITED:
                assert ret.power_plugged
        assert isinstance(ret.power_plugged, bool)

    @pytest.mark.skipif(not HAS_SENSORS_FANS, reason="not supported")
    def test_sensors_fans(self):
        fans = psutil.sensors_fans()
        for name, entries in fans.items():
            assert isinstance(name, str)
            for entry in entries:
                assert isinstance(entry.label, str)
                assert isinstance(entry.current, (int, float))
                assert entry.current >= 0
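The `fahrenheit=True` test above exercises the standard Celsius-to-Fahrenheit conversion (F = C * 9/5 + 32). A minimal sketch of that formula, checked against the value pairs the test asserts (`celsius_to_fahrenheit` is a hypothetical illustrative helper, not psutil's internal code):

```python
# Hypothetical helper illustrating the conversion; not psutil's internal code.
def celsius_to_fahrenheit(celsius):
    return celsius * 9 / 5 + 32

# The same value pairs asserted by the fahrenheit test above.
for c, f in [(50.0, 122.0), (60.0, 140.0), (70.0, 158.0)]:
    assert celsius_to_fahrenheit(c) == f
```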
# File: psutil/tests/test_osx.py
#!/usr/bin/env python3

# Copyright (c) 2009, Giampaolo Rodola'. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

"""macOS specific tests."""

import platform
import re
import time

import psutil
from psutil import MACOS
from psutil import POSIX
from psutil.tests import HAS_BATTERY
from psutil.tests import TOLERANCE_DISK_USAGE
from psutil.tests import TOLERANCE_SYS_MEM
from psutil.tests import PsutilTestCase
from psutil.tests import pytest
from psutil.tests import retry_on_failure
from psutil.tests import sh
from psutil.tests import spawn_testproc
from psutil.tests import terminate


if POSIX:
    from psutil._psutil_posix import getpagesize


def sysctl(cmdline):
    """Expects a sysctl command with an argument and parse the result
    returning only the value of interest.
    """
    out = sh(cmdline)
    result = out.split()[1]
    try:
        return int(result)
    except ValueError:
        return result
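The helper splits a line like `hw.logicalcpu: 8` on whitespace, takes the second token, and falls back to the raw string when it is not an integer. A self-contained sketch of that parsing with the shell call factored out (the sample lines below are made up):

```python
# Sketch of sysctl()'s parsing logic; sample lines are hypothetical.
def parse_sysctl_line(out):
    result = out.split()[1]
    try:
        return int(result)
    except ValueError:
        return result

assert parse_sysctl_line("hw.logicalcpu: 8") == 8
assert parse_sysctl_line("hw.machine: arm64") == "arm64"
```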


def vm_stat(field):
    """Wrapper around 'vm_stat' cmdline utility."""
    out = sh('vm_stat')
    for line in out.split('\n'):
        if field in line:
            break
    else:
        raise ValueError("line not found")
    return int(re.search(r'\d+', line).group(0)) * getpagesize()
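Similarly, `vm_stat()` locates the matching line and multiplies the first integer found by the page size. A sketch with the shell call and page size injected as parameters, run against canned output (the sample values are hypothetical):

```python
import re

# Hypothetical vm_stat output; real values differ per machine.
SAMPLE_VMSTAT = (
    "Mach Virtual Memory Statistics: (page size of 4096 bytes)\n"
    "Pages free:                12345.\n"
    "Pages active:              67890.\n"
)

def parse_vm_stat(out, field, pagesize=4096):
    # Same logic as vm_stat() above, with output and page size as parameters.
    for line in out.split('\n'):
        if field in line:
            break
    else:
        raise ValueError("line not found")
    return int(re.search(r'\d+', line).group(0)) * pagesize

assert parse_vm_stat(SAMPLE_VMSTAT, "free") == 12345 * 4096
assert parse_vm_stat(SAMPLE_VMSTAT, "active") == 67890 * 4096
```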


@pytest.mark.skipif(not MACOS, reason="MACOS only")
class TestProcess(PsutilTestCase):
    @classmethod
    def setUpClass(cls):
        cls.pid = spawn_testproc().pid

    @classmethod
    def tearDownClass(cls):
        terminate(cls.pid)

    def test_process_create_time(self):
        output = sh("ps -o lstart -p %s" % self.pid)
        start_ps = output.replace('STARTED', '').strip()
        hhmmss = start_ps.split(' ')[-2]
        year = start_ps.split(' ')[-1]
        start_psutil = psutil.Process(self.pid).create_time()
        assert hhmmss == time.strftime(
            "%H:%M:%S", time.localtime(start_psutil)
        )
        assert year == time.strftime("%Y", time.localtime(start_psutil))


@pytest.mark.skipif(not MACOS, reason="MACOS only")
class TestSystemAPIs(PsutilTestCase):

    # --- disk

    @retry_on_failure()
    def test_disks(self):
        # test psutil.disk_usage() and psutil.disk_partitions()
        # against "df -a"
        def df(path):
            out = sh('df -k "%s"' % path).strip()
            lines = out.split('\n')
            lines.pop(0)
            line = lines.pop(0)
            dev, total, used, free = line.split()[:4]
            if dev == 'none':
                dev = ''
            total = int(total) * 1024
            used = int(used) * 1024
            free = int(free) * 1024
            return dev, total, used, free

        for part in psutil.disk_partitions(all=False):
            usage = psutil.disk_usage(part.mountpoint)
            dev, total, used, free = df(part.mountpoint)
            assert part.device == dev
            assert usage.total == total
            assert abs(usage.free - free) < TOLERANCE_DISK_USAGE
            assert abs(usage.used - used) < TOLERANCE_DISK_USAGE
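The nested `df()` helper above drops the header line of `df -k` output, splits the first data row on whitespace, and scales the 1024-byte block counts to bytes. The same parsing as a standalone sketch on canned output (device name and numbers are made up):

```python
# Hypothetical "df -k" output; device and numbers are made up.
SAMPLE_DF = (
    "Filesystem   1024-blocks      Used  Available  Capacity  Mounted on\n"
    "/dev/disk1s1   488245288  15000000  300000000        5%  /\n"
)

def parse_df(out):
    lines = out.strip().split('\n')
    lines.pop(0)  # drop the header row
    dev, total, used, free = lines.pop(0).split()[:4]
    # df -k reports 1024-byte blocks; convert to bytes.
    return dev, int(total) * 1024, int(used) * 1024, int(free) * 1024

dev, total, used, free = parse_df(SAMPLE_DF)
assert dev == "/dev/disk1s1"
assert total == 488245288 * 1024
assert used == 15000000 * 1024
```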

    # --- cpu

    def test_cpu_count_logical(self):
        num = sysctl("sysctl hw.logicalcpu")
        assert num == psutil.cpu_count(logical=True)

    def test_cpu_count_cores(self):
        num = sysctl("sysctl hw.physicalcpu")
        assert num == psutil.cpu_count(logical=False)

    # TODO: remove this once #1892 is fixed
    @pytest.mark.skipif(
        MACOS and platform.machine() == 'arm64', reason="skipped due to #1892"
    )
    def test_cpu_freq(self):
        freq = psutil.cpu_freq()
        assert freq.current * 1000 * 1000 == sysctl("sysctl hw.cpufrequency")
        assert freq.min * 1000 * 1000 == sysctl("sysctl hw.cpufrequency_min")
        assert freq.max * 1000 * 1000 == sysctl("sysctl hw.cpufrequency_max")

    # --- virtual mem

    def test_vmem_total(self):
        sysctl_hwphymem = sysctl('sysctl hw.memsize')
        assert sysctl_hwphymem == psutil.virtual_memory().total

    @retry_on_failure()
    def test_vmem_free(self):
        vmstat_val = vm_stat("free")
        psutil_val = psutil.virtual_memory().free
        assert abs(psutil_val - vmstat_val) < TOLERANCE_SYS_MEM

    @retry_on_failure()
    def test_vmem_active(self):
        vmstat_val = vm_stat("active")
        psutil_val = psutil.virtual_memory().active
        assert abs(psutil_val - vmstat_val) < TOLERANCE_SYS_MEM

    @retry_on_failure()
    def test_vmem_inactive(self):
        vmstat_val = vm_stat("inactive")
        psutil_val = psutil.virtual_memory().inactive
        assert abs(psutil_val - vmstat_val) < TOLERANCE_SYS_MEM

    @retry_on_failure()
    def test_vmem_wired(self):
        vmstat_val = vm_stat("wired")
        psutil_val = psutil.virtual_memory().wired
        assert abs(psutil_val - vmstat_val) < TOLERANCE_SYS_MEM

    # --- swap mem

    @retry_on_failure()
    def test_swapmem_sin(self):
        vmstat_val = vm_stat("Pageins")
        psutil_val = psutil.swap_memory().sin
        assert abs(psutil_val - vmstat_val) < TOLERANCE_SYS_MEM

    @retry_on_failure()
    def test_swapmem_sout(self):
        vmstat_val = vm_stat("Pageout")
        psutil_val = psutil.swap_memory().sout
        assert abs(psutil_val - vmstat_val) < TOLERANCE_SYS_MEM

    # --- network

    def test_net_if_stats(self):
        for name, stats in psutil.net_if_stats().items():
            try:
                out = sh("ifconfig %s" % name)
            except RuntimeError:
                pass
            else:
                assert stats.isup == ('RUNNING' in out), out
                assert stats.mtu == int(re.findall(r'mtu (\d+)', out)[0])

    # --- sensors_battery

    @pytest.mark.skipif(not HAS_BATTERY, reason="no battery")
    def test_sensors_battery(self):
        out = sh("pmset -g batt")
        percent = re.search(r"(\d+)%", out).group(1)
        drawing_from = re.search("Now drawing from '([^']+)'", out).group(1)
        power_plugged = drawing_from == "AC Power"
        psutil_result = psutil.sensors_battery()
        assert psutil_result.power_plugged == power_plugged
        assert psutil_result.percent == int(percent)
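`test_sensors_battery` above scrapes `pmset -g batt` output with two regexes: one for the charge percentage and one for the power source. A sketch of that scraping on canned output (the sample text and values are hypothetical):

```python
import re

# Hypothetical "pmset -g batt" output; values are made up.
SAMPLE_PMSET = (
    "Now drawing from 'AC Power'\n"
    " -InternalBattery-0 (id=1234567)\t95%; charged; 0:00 remaining present: true\n"
)

percent = int(re.search(r"(\d+)%", SAMPLE_PMSET).group(1))
drawing_from = re.search("Now drawing from '([^']+)'", SAMPLE_PMSET).group(1)
power_plugged = drawing_from == "AC Power"

assert percent == 95
assert power_plugged is True
```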
# File: psutil/tests/__main__.py
# Copyright (c) 2009, Giampaolo Rodola'. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

"""Run unit tests. This is invoked by:
$ python -m psutil.tests
"""

from psutil.tests import pytest


pytest.main(["-v", "-s", "--tb=short"])
# File: psutil/tests/__pycache__/test_aix.cpython-39.pyc (compiled bytecode omitted)
# File: psutil/tests/__pycache__/test_testutils.cpython-39.pyc (compiled bytecode omitted)
rrr
rrrrrrrrrrrrrrrrrr r!r"r#r$r%r&r'r(r)rQrqr�r�r�rr�r�r r+r2r2r2r5�<module>sxE(-LNKvPKok\���Sh�h�0psutil/tests/__pycache__/__init__.cpython-39.pycnu�[���a

��?h��@s:dZddlmZddlZddlZddlZddlZddlZddlZddl	Z	ddl
Z
ddlZddlZddl
Z
ddlZddlZddlZddlZddlZddlZddlZddlZddlZddlZddlZddlZddlZddlmZddlmZddlmZzddlZWne�ydZYn0ddl Z ddl m!Z!ddl m"Z"dd	l m#Z#dd
l m$Z$ddl m%Z%ddl m&Z&dd
l m'Z'ddl m(Z(ddl)m*Z*ddl)m+Z+ddl)m,Z,ddl)m-Z-ddl)m.Z.ddl/m0Z0ddl/m1Z1ddl/m2Z2ddl/m3Z3ddl/m4Z4ddl/m5Z5ddl/m6Z6zddlm7Z7WnPe�yre�8��"e�9d�ddl7Z7Wd�n1�sd0YYn0e0�r�ddl:Z:nddl;ZdZ:e&�r�ddl<m=Z=gd�Z>dej?vZ@d e	jAvZBd!e	jAv�p�d"e	jAvZCeB�p�eCZDd#e	jAvZEd$e	jAvZFe"�r6eC�r6eGd%��ZHd&eH�I�vZJWd�n1�s*0Ynd'ZJejKd(kZLe
�M�d)kZNe,d*d+��ZOe#�rxeO�d,kZPeO�d-kZQnd'ZPd'ZQd.ZRd/ZSd0ZTd1ZUeD�r�eRd29ZReUd29ZUeSd39ZSeTd29ZTe	jVd4k�r�d5e	�W�ZXnd6e	�W�ZXd7ZYe0�r�d8�Zd9d:�Z[nd;Z[e�\��]�d<vZ^e	j_�`e	j_�ae	j_�bec�d=d=��Zde	jA�ed>e	j_�aedd?��Zfe	j_�`e	j_�bec��Zgehe jid@�Zjehe dA�Zkehe jidB�Zlehe dC�Zmehe jidD�Znehe jidE�Zoe&�o�e'Zpehe dF�Zqehe jidG�Zrehe jidH�Zsehe jidI�Ztehe dJ�Zuzeu�o�eve �w��ZxWney�y
d'ZxYn0ehe dK�Zzehe dL�Z{ehe jidM�Z|e#�s8e!�oBe	�}�dkZ~dNdO�Ze�\Z�Z�eGe	j�dP�Z�e��e�j��dQdR�e�e �D�Z�e�edSe���Z�e��Z�e��Z�GdTdU�dUej��Z�dVdW�Z�e�d�dXdY��Z�e�dZd[��Z�d\d]�Z�e�d^d_��Z�e�d`da��Z�ej�eUfdbdc�Z�d�ddde�Z�dfdg�Z�dhdi�Z�Gdjdk�dk�Z�e�e j�deUdldm�dndo��Z�e�e2e�fdeUdldm�d�dqdr��Z�e�e�deUdldm�dsdt��Z�dudv�Z�dwdx�Z�ej�dydz��Z�d{d|�Z�d�d}d~�Z�d�d�d��Z�Gd�d��d��Z�edu�r�e�ZGd�d��d�ej��Z�e�e_�Gd�d��d�e��Z�ej�j�e@d�d��Gd�d��d�e���Z�d�d��Z�d�d��Z�d�d��Z�Gd�d��d��Z�Gd�d��d��Z�eRfd�d��Z�d�d�d��Z�d�d�d��Z�d�d�d��Z�eedfd�d��Z�ejfd�d��Z�d�d�d��Z�d�d��Z�ej�d�d���Z�d�d��Z�d�d��Z�d�d��Z�d�d��Z�d�d��Z�d�d��Z�d�d��Z�e&�rej�d�d�d���Z�nej�d�d�d���Z�ej�d�d���Z�e&�r6e�ej�d�d���dS)�zTest utilities.�)�print_functionN)�AF_INET)�AF_INET6)�SOCK_STREAM)�AIX)�LINUX)�MACOS)�NETBSD)�OPENBSD)�POSIX)�SUNOS)�WINDOWS)�bytes2human)�debug)�memoize)�print_color)�
supports_ipv6)�PY3)�FileExistsError)�FileNotFoundError)�range)�super)�unicode)�which)�mock�ignore)�wait_pid)M�APPVEYOR�DEVNULL�GLOBAL_TIMEOUT�TOLERANCE_SYS_MEM�
NO_RETRIES�PYPY�
PYTHON_EXE�PYTHON_EXE_ENV�ROOT_DIR�SCRIPTS_DIR�
TESTFN_PREFIX�UNICODE_SUFFIX�INVALID_UNICODE_SUFFIX�
CI_TESTING�VALID_PROC_STATUSES�TOLERANCE_DISK_USAGE�IS_64BIT�HAS_CPU_AFFINITY�HAS_CPU_FREQ�HAS_ENVIRON�HAS_PROC_IO_COUNTERS�
HAS_IONICE�HAS_MEMORY_MAPS�HAS_PROC_CPU_NUM�
HAS_RLIMIT�HAS_SENSORS_BATTERY�HAS_BATTERY�HAS_SENSORS_FANS�HAS_SENSORS_TEMPERATURES�HAS_NET_CONNECTIONS_UNIX�MACOS_11PLUS�MACOS_12PLUS�COVERAGE�AARCH64�	QEMU_USER�PYTEST_PARALLEL�pyrun�	terminate�
reap_children�spawn_testproc�spawn_zombie�spawn_children_pair�
ThreadTask�unittest�skip_on_access_denied�skip_on_not_implemented�retry_on_failure�TestMemoryLeak�PsutilTestCase�process_namespace�system_namespace�
print_sysinfo�is_win_secure_system_proc�fake_pytest�chdir�safe_rmpath�
create_py_exe�create_c_exe�
get_testfn�
get_winver�kernel_version�
call_until�wait_for_pid�
wait_for_file�check_net_address�filter_proc_net_connections�
get_free_port�bind_socket�bind_unix_socket�tcp_socketpair�unix_socketpair�create_sockets�
reload_module�import_module_by_path�warn�copyload_shared_lib�
is_namedtupleZ__pypy__r�GITHUB_ACTIONSZCIBUILDWHEELZCOVERAGE_RUNZPYTEST_XDIST_WORKERz/proc/1/cmdlinez
/bin/qemu-Fl�aarch64cCsnt��d}ttt|�d�dd���}|dkrjtjtj	dddgdd	id
d�}ttt|�d�dd���}|S)Nr�.�)�
�z-sS�-cz-import platform; print(platform.mac_ver()[0])ZSYSTEM_VERSION_COMPAT�0T)�env�universal_newlines)
�platform�mac_ver�tuple�map�int�split�
subprocess�check_output�sys�
executable)�version_str�version�r��A/usr/local/lib64/python3.9/site-packages/psutil/tests/__init__.py�
macos_version�s��
r�)rn�)�rrniPi�����javaz$psutil-%s-z@psutil-%s-u-ƒőősf���utf8�surrogateescapeufÀ€)�asciizus-asciiz..ZPSUTIL_SCRIPTS_DIR�scripts�cpu_affinity�cpu_freq�environ�
getloadavg�ionice�memory_maps�net_io_counters�cpu_num�io_counters�rlimit�sensors_battery�sensors_fans�sensors_temperatures�threadscCs�dd�}tj��}ttdd�}trFtjdkrF|durFtj|d<||fStrTtj|fSt	r�|tj�p�|tj
�tj��p�|tdtjdd���p�|t
�����}|s�td��||fStj
�tj�}tj
�|�s�J|��||fSdS)	NcSs<ztj|dgtjtjd�Wntjy2YdS0|SdS)Nz-V)�stdout�stderr)rz�
check_call�PIPE�CalledProcessError)�exer�r�r��attempt�s�
z_get_py_exe.<locals>.attempt�_base_executable)r���__PYVENV_LAUNCHER__zpython%s.%srmz"can't find python exe real abspath)�osr��copy�getattrr|r
�version_infor}rjr�path�realpathr�psutil�Processr��
ValueError�exists)r�rr�baser�r�r�r��_get_py_exe�s,




���r�zr+cCs g|]}|�d�rtt|��qS)ZSTATUS_��
startswithr�r���.0�xr�r�r��
<listcomp>$sr��AF_UNIXcsPeZdZdZ�fdd�Zdd�Zdd�Zdd	�Zd
d�Zdd
�Z	dd�Z
�ZS)rGz6A thread task which does nothing expect staying alive.cs$t���d|_d|_t��|_dS)NF���MbP?)r�__init__�_running�	_interval�	threading�Event�_flag��self��	__class__r�r�r�5s
zThreadTask.__init__cCs|jj}d||jt|�fS)Nz<%s running=%s at %#x>)r��__name__r��id�r��namer�r�r��__repr__;szThreadTask.__repr__cCs|��|S�N)�startr�r�r�r��	__enter__?szThreadTask.__enter__cOs|��dSr�)�stop)r��args�kwargsr�r�r��__exit__CszThreadTask.__exit__cCs(|jrtd��tj�|�|j��dS)zStart thread and keep it running until an explicit
        stop() request. Polls for shutdown every 'timeout' seconds.
        zalready startedN)r�r�r��Threadr�r��waitr�r�r�r�r�FszThreadTask.startcCs(d|_|j��|jr$t�|j�qdS)NT)r�r��set�time�sleepr�r�r�r�r��runOs
zThreadTask.runcCs |jstd��d|_|��dS)z8Stop thread execution and and waits until it is stopped.zalready stoppedFN)r�r��joinr�r�r�r�r�UszThreadTask.stop)r��
__module__�__qualname__�__doc__r�r�r�r�r�r�r��
__classcell__r�r�r�r�rG2s	rGcst����fdd��}|S)Ncs0z�|i|��WSty*t��Yn0dSr�)�	ExceptionrC�r�r���funr�r��wrappercs
z&_reap_children_on_err.<locals>.wrapper��	functools�wraps�r�r�r�r�r��_reap_children_on_errbsr�cKs�|�dt�|�dt�|�dt���|�dt�trHd}|�d|�|dur�tt��d�}zXt|�d	d
|d}td|g}t	j
|fi|��}t�|�t
|d
d
d�Wt|�q�t|�0n&t	j
|fi|��}t�|�t|j�|S)aCreate a python subprocess which does nothing for some secs and
    return it as a subprocess.Popen instance.
    If "cmd" is specified that is used instead of python.
    By default stdin and stdout are redirected to /dev/null.
    It also attempts to make sure the process is in a reasonably
    initialized state.
    The process is registered for cleanup on reap_children().
    �stdinr��cwdrr��
creationflagsN��dirzimport time;zopen(r'%s', 'w').close();z&[time.sleep(0.1) for x in range(100)];rpT��delete�empty)�
setdefaultrr��getcwdr$r
rWrTr#rz�Popen�_subprocesses_started�addr\r[�pid)�cmd�kwdsZCREATE_NO_WINDOW�testfnZpyline�sprocr�r�r�rDns4
���



rDcCs�d}tt��d�}z�t�dtj�|�tf�}trDt	|dd�\}}nt	|�\}}t
�|j�}t
t|ddd��}t�|�t
�|�}||fWt|�|dur�t|�Snt|�|dur�t|�0dS)	aCreate a subprocess which creates another one as in:
    A (us) -> B (child) -> C (grandchild).
    Return a (child, grandchild) tuple.
    The 2 processes are fully initialized and will live for 60 secs
    and are registered for cleanup on reap_children().
    Nr�aV            import subprocess, os, sys, time
            s = "import os, time;"
            s += "f = open('%s', 'w');"
            s += "f.write(str(os.getpid()));"
            s += "f.close();"
            s += "[time.sleep(0.1) for x in range(100 * 6)];"
            p = subprocess.Popen([r'%s', '-c', s])
            p.wait()
            r)r�TFr�)rWr�r��textwrap�dedentr��basenamer#r
rAr�r�r�rxr\�
_pids_startedr�rT)�tfiler��s�subp�childZgrandchild_pidZ
grandchildr�r�r�rF�s0	�


��rFcs$tjs
J�t�}t�d|�}d}t|�}z�|�t�t|�\}}|�	�\}}z|t
�
|��gggt�t|�
d��}t�|�t�|��t�fdd��|�fW|��W|��t|�|dur�t|�S|��0W|��t|�|du�r t|�n$|��t|�|du�rt|�0dS)z�Create a zombie process and return a (parent, zombie) process tuple.
    In order to kill the zombie parent must be terminate()d first, then
    zombie must be wait()ed on.
    a        import os, sys, time, socket, contextlib
        child_pid = os.fork()
        if child_pid > 0:
            time.sleep(3000)
        else:
            # this is the zombie process
            s = socket.socket(socket.AF_UNIX)
            with contextlib.closing(s):
                s.connect('%s')
                if sys.version_info < (3, ):
                    pid = str(os.getpid())
                else:
                    pid = bytes(str(os.getpid()), 'ascii')
                s.sendall(pid)
        N�cs���tjkSr�)�statusr��
STATUS_ZOMBIEr���zombier�r��<lambda>��zspawn_zombie.<locals>.<lambda>)r�rrWr�r�ra�
settimeoutrrA�accept�select�filenorx�recvr�r�r�rZ�closerT)Z	unix_file�srcr��sock�parent�conn�_Zzpidr�rr�rE�sD
�


��

�
rEcKs�|�dd�|�dd�t�}z^t|d��}|�|�Wd�n1sJ0Ytt|jgfi|��}t|j�||fWSt	y�t
|��Yn0dS)z�Run python 'src' code string in a separate interpreter.
    Returns a subprocess.Popen instance and the test file where the source
    code was written.
    r�Nr��w)r�rW�open�writerDr#r�r[r�r�rT)rr��srcfile�fr�r�r�r�rA�s(

rAcKs�trdnd}|�dtj�|�dtj�|�dd�|�d|�t|t�rTt�|�}tj|fi|��}t	�
|�tr�|jt
d�\}}n|��\}}|jdkr�t||��|r�t|�|�d	�r�|d
d�}|S)zURun cmd in a subprocess and return its output.
    raises RuntimeError on error.
    r�rr�r�rsTr���timeout�
N���)r
r�rzr��
isinstance�str�shlexryr�r�r�r�communicater�
returncode�RuntimeErrorrg�endswith)r�r��flags�pr�r�r�r�r��shs&




r#c
sdd��dd�����fdd�}���fdd���fd	d
�}dd�}|}�znt|t�r�|||�Wt|tjtjf�rz||�t|t�r�|n|j}t�|�r�J|��St|tjtjf��r�||�Wt|tjtjf�r�||�t|t�r�|n|j}t�|��rJ|��St|tj��rd|||�Wt|tjtjf��r8||�t|t��rH|n|j}t�|��rbJ|��Std
|��Wt|tjtjf��r�||�t|t��r�|n|j}t�|��rJ|��nHt|tjtjf��r�||�t|t��r�|n|j}t�|��rJ|��0dS)a�Terminate a process and wait() for it.
    Process can be a PID or an instance of psutil.Process(),
    subprocess.Popen() or psutil.Popen().
    If it's a subprocess.Popen() or psutil.Popen() instance also closes
    its stdin / stdout / stderr fds.
    PID is wait()ed even if the process is already gone (kills zombies).
    Does nothing if the process does not exist.
    Return process exit status.
    cSsbt|tj�rts|��n
|�|�tr^t|tj�r^zt�|j��|�WStj	y\Yn0dSr�)
rrzr�rr�r
r�r�r��
NoSuchProcess��procrr�r�r�r�'s

zterminate.<locals>.waitcSs6trtrtj}tr(|tjkr(|�tj�|�|�dSr�)rrj�signal�SIGKILLr�send_signal�SIGCONT)r&�sigr�r�r��sendsig3s
zterminate.<locals>.sendsigc
s\z�|��WnBtyP}z*tr.|jdkr.n|jtjkr<�WYd}~n
d}~00�||�S)N�)�OSErrorr
�winerror�errnoZESRCH)r&r�err�r,r+r�r�r��term_subprocess_proc=sz'terminate.<locals>.term_subprocess_proccs.z�|��Wntjy"Yn0�||�Sr�)r�r$r%r2r�r��term_psutil_procGs
z#terminate.<locals>.term_psutil_proccsDzt�|�}Wn&tjy4tr0t||�YSYn0�||�SdSr�)r�r�r$rr)r�rr&)r4r�r��term_pidNszterminate.<locals>.term_pidcSs4|jr|j��|jr |j��|jr0|j��dSr�)r�rr�r�)r&r�r�r��flush_popenXs

zterminate.<locals>.flush_popenz
wrong type %rN)	rrxrzr�r�r��
pid_existsr��	TypeError)Zproc_or_pidr+�wait_timeoutr3r5r6r"r�r�)r,r+r4r�r�rBsL


	

�
�
��rBcCs�t��j|d�}tr&t��}t|�qtr<t��}t|�q&|r�|D]}t|dd�qDtj|td�\}}|D]}t	d|�t|t
jd�qldS)aTerminate and wait() any subprocess started by this test suite
    and any children currently running, ensuring that no processes stick
    around to hog resources.
    If recursive is True it also tries to terminate and wait()
    all grandchildren started by this process.
    ��	recursiveN)r9rz0couldn't terminate process %r; attempting kill())r+)r�r��childrenr��poprBr��
wait_procsrrgr'r()r;r<r�r�r"r�aliver�r�r�rCrs


rCcCs�tstd��d}t��d}|D]"}|��s4|dkr>||7}q qDq |sTtd|��d}d}|�d�}t|d�}t|�dkr�t|d�}t|�dkr�t|d�}|||fS)	z"Return a tuple such as (2, 6, 36).z	not POSIX�rmrlzcan't parse %rr�r�)	r�NotImplementedErrorr��uname�isdigitr�ryrx�len)r�rC�c�minor�micro�nums�majorr�r�r�rY�s&

rYcCsbtstd��t��}t|d�r*|jp&d}n&t�d|d�}|rLt|�	d��nd}|d|d|fS)Nznot WINDOWS�service_pack_majorrz\s\d$r�rA)
r
rBr|�getwindowsversion�hasattrrK�re�searchrx�group)Zwv�sp�rr�r�r�rX�s
rXc@s<eZdZdZeddddfdd�Zdd�Zdd	�Zd
d�ZdS)�retryzA retry decorator.Nr�cCs2|r|rtd��||_||_||_||_||_dS)Nz/timeout and retries args are mutually exclusive)r��	exceptionr�retries�interval�logfun)r�rTrrUrVrWr�r�r�r��szretry.__init__ccsT|jr*t��|j}t��|krPdVqn&|jrHt|j�D]
}dVq:ndVqHdSr�)rr�rUr)r��stop_atrr�r�r��__iter__�s

zretry.__iter__cCs|jdurt�|j�dSr�)rVr�r�r�r�r�r�r��s
zretry.sleepcs"t�����fdd��}�|_|S)Ncs�d}�D]l}z�|i|��WS�jyr}z8|}�jdurJ��|����WYd}~qWYd}~qd}~00qtr�|�n�dSr�)rTrWr�r)r�r��excr�r�r�r�r�r��s

$zretry.__call__.<locals>.wrapper)r�r��	decorator)r�r�r�r�r[r��__call__�szretry.__call__)	r�r�r�r�r�r�rYr�r]r�r�r�r�rS�s�
rSr�)rTrWrrVcCs$|t��vrt�|��t�|�dS)z�Wait for pid to show up in the process list then return.
    Used in the test suite to give time the sub process to initialize.
    N)r��pidsr$r��r�r�r�r�r[�s

r[TcCsNt|d��}|��}Wd�n1s(0Y|s>|s>J�|rJt|�|S)z8Wait for a file to be written on disk with some content.�rbN)r�readrT)�fnamer�r�r�datar�r�r�r\
s&r\cCs|�}|sJ�|S)z1Keep calling function until it evaluates to True.r�)r��retr�r�r�rZsrZcCsldd�}zLt�|�}t�|j�r0t�tj|�}nt�tj|�}t	rJ|�n||�Wnt
yfYn0dS)z?Convenience function for removing temporary test files or dirs.c
Ss~t��t}t��|krvz|�WSty2Yn8tyh}z |}tdt|��WYd}~n
d}~00t�d�q|�dS)Nzignoring %sg{�G�z�?)r�rrZWindowsErrorrgrr�)r�rXrr1r�r�r��	retry_fun0s&zsafe_rmpath.<locals>.retry_funN)r��stat�S_ISDIR�st_moder��partial�shutil�rmtree�removerr)r�re�str�r�r�r�rT-s
rTcCs&zt�|�Wnty Yn0dS)z.Convenience function for creating a directory.N)r��mkdirrr�r�r�r��
safe_mkdirOsroc	cs8t��}zt�|�dVWt�|�nt�|�0dS)z@Context manager which temporarily changes the current directory.N)r�r�rS)�dirname�curdirr�r�r�rSWs

rScCsRtj�|�rJ|��t�t|�t�t|�t	rNt�
|�}t�||jt
j
B�|S)z6Create a Python executable file in the given location.)r�r�r��atexit�registerrTrj�copyfiler#rrf�chmodrh�S_IEXEC)r�rmr�r�r�rUbs
rUcCs�tj�|�rJ|��td�s&t�d��|dur:t�d�}nt|t	�sLJ|��t
�t|�t
tdd�d��}|�|�Wd�n1s�0Yz"t�d|jd|g�Wt|j�nt|j�0|S)	z5Create a compiled C executable in the given location.�gcczgcc is not installedNz�
            #include <unistd.h>
            int main() {
                pause();
                return 1;
            }
            z.c��suffixrz-o)r�r�r�r�pytest�skipr�r�rrrrrsrTrrWrrzr�r�)r�Zc_coderr�r�r�rVms
(rVr@cCs>tjt||d�}tj�|�stj�|�}t�t	|�|SqdS)z�Return an absolute pathname of a file or dir that did not
    exist at the time this call is made. Also schedule it for safe
    deletion at interpreter exit. It's technically racy but probably
    not really due to the time variant.
    )�prefixryr�N)
�tempfile�mktempr'r�r�r�r�rrrsrT)ryr�r�r�r�r�r�rW�s
rWc@sTeZdZdZedd��Zeddd��Zeddd��Zedd
d��ZGdd
�d
�Z	dS)rRz�A class that mimics some basic pytest APIs. This is meant for
    when unit tests are run in production, where pytest may not be
    installed. Still, the user can test psutil installation via:

        $ python3 -m psutil.tests
    cOs4t���t�}tjdd��|�tjdtdd�|S)z�Mimics pytest.main(). It has the same effect as running
        `python3 -m unittest -v` from the project root directory.
        rm)�	verbosityz<Fake pytest module was used. Test results may be inaccurate.rA��
stacklevel)	rHZ
TestLoaderZdiscover�HEREZTextTestRunnerr��warningsrg�UserWarning)r��kwZsuiter�r�r��main�s�zfake_pytest.mainNcs.Gdd�d��tjd�fdd�	�}|||d�S)zMimics `pytest.raises`.c@seZdZdZedd��ZdS)z)fake_pytest.raises.<locals>.ExceptionInfoNcSs|jSr�)�_excr�r�r�r��value�sz/fake_pytest.raises.<locals>.ExceptionInfo.value)r�r�r�r��propertyr�r�r�r�r��
ExceptionInfo�sr�Nc
3sx��}z
|VWnV|yf}z>|rLt�|t|��sLd�|t|��}t|��||_WYd}~nd}~00td|��dS)Nz"{}" does not match "{}"z
%r not raised)rNrOr�format�AssertionErrorr�)rZ�matchZeinfor1�msg�r�r�r��context�s
z#fake_pytest.raises.<locals>.context)r�)N)�
contextlib�contextmanager)rZr�r�r�r�r��raises�szfake_pytest.raisescCs"|rt���||�St���|�S)zMimics `pytest.warns`.)rH�TestCaseZassertWarnsRegexZassertWarns)�warningr�r�r�r��warns�szfake_pytest.warnsr@cCst�|��dS)zMimics `unittest.SkipTest`.N)rHZSkipTest��reasonr�r�r�r{�szfake_pytest.skipc@s(eZdZeddd��ZGdd�d�ZdS)zfake_pytest.markr@cCst�||�S)z'Mimics `@pytest.mark.skipif` decorator.)rHZskipIf)�	conditionr�r�r�r��skipif�szfake_pytest.mark.skipifc@s"eZdZdZddd�Zdd�ZdS)zfake_pytest.mark.xdist_groupz4Mimics `@pytest.mark.xdist_group` decorator (no-op).NcCsdSr�r�r�r�r�r�r��sz%fake_pytest.mark.xdist_group.__init__cCs|Sr�r�)r�Zcls_or_methr�r�r�r]�sz%fake_pytest.mark.xdist_group.__call__)N)r�r�r�r�r�r]r�r�r�r��xdist_group�s
r�N)r@)r�r�r��staticmethodr�r�r�r�r�r��mark�sr�)N)N)r@)
r�r�r�r�r�r�r�r�r{r�r�r�r�r�rR�s

rRc@s&eZdZes"dd�Zejdd��ZdS)r�cCsdSr�r�r�r�r�r��runTest�szTestCase.runTestcos
dVdSr�r�)r�r�r�r�r�r��subTest�szTestCase.subTestN)r�r�r�rr�r�r�r�r�r�r�r�r��sr�c@sZeZdZdZddd�Zdd�Zdd	�Zd
d�Zdd
�Zdd�Z	dd�Z
dd�Zdd�ZdS)rMz�Test class providing auto-cleanup wrappers on top of process
    test utilities. All test classes should derive from this one, even
    if we use pytest.
    r@NcCst||d�}|�t|�|S)N)ryr�)rW�
addCleanuprT)r�ryr�rbr�r�r�rWszPsutilTestCase.get_testfncOst|i|��}|�t|�|Sr�)rDr�rB)r�r�r�r�r�r�r�rDszPsutilTestCase.spawn_testproccCs*t�\}}|�t|�|�t|�||fSr�)rFr�rB)r�Zchild1Zchild2r�r�r�rFs
z"PsutilTestCase.spawn_children_paircCs*t�\}}|�t|�|�t|�||fSr�)rEr�rB)r�rrr�r�r�rEs
zPsutilTestCase.spawn_zombiecOs.t|i|��\}}|�t|�|�t|�|Sr�)rAr�rTrB)r�r�r�r�rr�r�r�rAszPsutilTestCase.pyruncCs�t|tj�sJ�|j|jks J�|j|jks0J�|jr@|js@J�t|tj�rt|j|jks\J�|jdurt|jdkstJ�t	|�t
|�dS�Nr)rr��Errorr�r��_name�
ZombieProcess�ppidZ_ppidr�repr)r�r&rZr�r�r��_check_proc_excs

zPsutilTestCase._check_proc_excc	Cs�t�tj��<}zt�|�Wntjy8td��Yn0Wd�n1sN0Y|jj|kshJ�|jj	dusxJ�t�
|�r�J|��|t��vs�J�|dd�t��D�vs�J�dS)Nz&wasn't supposed to raise ZombieProcesscSsg|]
}|j�qSr�r_r�r�r�r�r�5rz0PsutilTestCase.assertPidGone.<locals>.<listcomp>)
rzr�r�r$r�r�r�r�r�r�r7r^�process_iter)r�r��cmr�r�r��
assertPidGone+s,zPsutilTestCase.assertPidGonecCs�|�|j�t|�}|j|jdd�D]�\}}|j||d��vz
|�}WnFtjy\�YnFtjy�}z|�	||�WYd}~nd}~00d||f}t
|��Wd�q$1s�0Yq$|jdd�dS)NT��clear_cache�r&r�z-Process.%s() didn't raise NSP and returned %rrr)r�r�rN�iter�allr�r�r�r$r�r�r�)r�r&�nsr�r�rdrZr�r�r�r��assertProcessGone7s 
"�(z PsutilTestCase.assertProcessGonecCs�t�|j�}||ksJ�ts4ts4t|�t|�ks4J�|��tjksFJ�|��sRJ�t�	|j�sbJ�|�
�|jt��vs|J�|jdd�t��D�vs�J�it_
|jdd�t��D�vs�J�t|�}|j|jdd�D]~\}}|j||d��Vz
|�Wn:tjtjf�y.}z|�||�WYd}~n
d}~00Wd�q�1�sF0Yq�t�r$t�tj��}|��Wd�n1�s�0Y|�||j�t�tj��}|��Wd�n1�s�0Y|�||j�t�tj��}|��Wd�n1�s0Y|�||j�|��|��|��|��|���sRJ�t�	|j��sdJ�|jt��v�sxJ�|jdd�t��D�v�s�J�it_
|jdd�t��D�v�s�J�dS)	NcSsg|]
}|j�qSr�r_r�r�r�r�r�[rz6PsutilTestCase.assertProcessZombie.<locals>.<listcomp>cSsg|]
}|j�qSr�r_r�r�r�r�r�]rTr�r�cSsg|]
}|j�qSr�r_r�r�r�r�r�yrcSsg|]
}|j�qSr�r_r�r�r�r�r�{r) r�r�r�r
r	�hashrr�
is_runningr7�as_dictr^r�Z_pmaprNr�r�r�r��AccessDeniedr�rrzr��cmdliner�r�r��suspend�resumerB�kill)r�r&�cloner�r�r�rZr�r�r�r��assertProcessZombieJsN
D(((z"PsutilTestCase.assertProcessZombie)r@N)
r�r�r�r�rWrDrFrErAr�r�r�r�r�r�r�r�rM�s

rMzunreliable on PYPYr�c@s�eZdZdZdZdZdZer dndZdZ	e
��Ze
e�d��Zedd	��Zed
d��Zdd
�Zdd�Zdd�Zdd�Zdd�Zdd�Zdd�Zddd�Zdd�ZdS) rLa�Test framework class for detecting function memory leaks,
    typically functions implemented in C which forgot to free() memory
    from the heap. It does so by checking whether the process memory
    usage increased before and after calling the function many times.

    Note that this is hard (probably impossible) to do reliably, due
    to how the OS handles memory, the GC and so on (memory can even
    decrease!). In order to avoid false positives, in case of failure
    (mem > 0) we retry the test for up to 5 times, increasing call
    repetitions each time. If the memory keeps increasing then it's a
    failure.

    If available (Linux, OSX, Windows), USS memory is used for comparison,
    since it's supposed to be more precise, see:
    https://gmpy.dev/blog/2016/real-process-memory-and-environ-in-python
    If not, RSS memory is used. mallinfo() on Linux and _heapwalk() on
    Windows may give even more precision, but at the moment are not
    implemented.

    PyPy appears to be completely unstable for this framework, probably
    because of its JIT, so tests on PYPY are skipped.

    Usage:

        class TestLeaks(psutil.tests.TestMemoryLeak):

            def test_fun(self):
                self.execute(some_function)
    ��rnrr�TZPSUTIL_DEBUGcCst�d�dS)NF)r��
_set_debug��clsr�r�r��
setUpClass�szTestMemoryLeak.setUpClasscCst�|j�dSr�)r�r��_psutil_debug_origr�r�r�r��
tearDownClass�szTestMemoryLeak.tearDownClasscCs|j��}t|d|j�S)NZuss)�	_thisproc�memory_full_infor�Zrss)r��memr�r�r��_get_mem�s
zTestMemoryLeak._get_memcCstr|j��S|j��SdSr�)rr��num_fds�num_handlesr�r�r�r��_get_num_fds�s
zTestMemoryLeak._get_num_fdscCs|jrt|dtjd�dS)N�yellow)�color�file)�verboserr|r�)r�r�r�r�r��_log�szTestMemoryLeak._logcCsx|��}|�|�|��}||}|dkr8|�d|��|dkrttrHdnd}|dkr\|d7}d|||f}|�|��dS)	z�Makes sure num_fds() (POSIX) or num_handles() (Windows) does
        not increase after calling a function.  Used to discover forgotten
        close(2) and CloseHandle syscalls.
        rzHnegative diff %r (gc probably collected a resource from a previous test)�fd�handlerAr�z%s unclosed %s after calling %rN)r��call�failr)r�r��before�after�diff�type_r�r�r�r��
_check_fds�s 
��zTestMemoryLeak._check_fdscCs^tjdd�|��}t|�D]}|�|�}~~qtjdd�|��}tjgksRJ�||}|S)z�Get 2 distinct memory samples, before and after having
        called fun repeatedly, and return the memory difference.
        rA)Z
generation)�gcZcollectr�rr��garbage)r�r��timesZmem1r�rdZmem2r�r�r�r��_call_ntimes�s
zTestMemoryLeak._call_ntimescCs�g}d}|}td|d�D]�}|�||�}	d|t|	�t|	|�|f}
|�|
�|	|kp^|	|k}|r||dkrv|�|
�dS|dkr�t�|�|
�||7}|	}q|�d�|���dS)NrrAz,Run #%s: extra-mem=%s, per-call=%s, calls=%sz. )rr�r�appendr��printr�r�)r�r�r�rU�	tolerance�messagesZprev_memZincrease�idxr�r��successr�r�r��
_check_mem�s.
�


zTestMemoryLeak._check_memcCs|�Sr�r�)r�r�r�r�r�r�
szTestMemoryLeak.callNc
Cs�|dur|n|j}|dur|n|j}|dur0|n|j}|durB|n|j}zD|dksZJd��|dksjJd��|dkszJd��|dks�Jd��Wn0ty�}ztt|���WYd}~n
d}~00|�||�|�|�|j	||||d�dS)	zTest a callable.NrAztimes must be >= 1rzwarmup_times must be >= 0zretries must be >= 0ztolerance must be >= 0)r�rUr�)
r��warmup_timesrUr�r�r�rr�r�r�)r�r�r�r�rUr�r1r�r�r��executes�"
zTestMemoryLeak.executecs&���fdd�}�j|fi|��dS)znConvenience method to test a callable while making sure it
        raises an exception on every call.
        cs�����dSr�)�assertRaisesr��rZr�r�r�r�r�+sz*TestMemoryLeak.execute_w_exc.<locals>.callN)r�)r�rZr�r�r�r�r�r��
execute_w_exc&szTestMemoryLeak.execute_w_exc)NNNN)r�r�r�r�r�r�r�r*rUr�r�r�r��boolr��getenvr��classmethodr�r�r�r�r�r�r�r�r�r�r�r�r�r�r�rL�s,

�
rLcCs�ddl}ddl}ddl}ddl}ddl}zddl}WntyJd}Yn0zddl}Wntynd}Yn0|��}t	j
r�td�r�td�|d<nzt	j
r�dt��d|d<n^t	jr�dd�ttt����|d<ttd�r�|dd	t��7<nd
t��t��f|d<d	�tt���t��g�|d<t	j�rFt��d|d
<d	�t��t��t� �g�|d<t!|dd�|d<|du�r�|dd|j"7<t	j�r�td��r�tddg�}t|��#d�d|d<nd|d<t�$�d}	|	�r�|	|d<t%�&�|d<|�'�}
d|
d|
df|d<|j�(t	�)���*d�|d<|j�+��*d�|d<|�,�|d<t-j.�/d�|d <t-�0�|d!<t1|d"<t�2�|d#<t-�3�|d$<t	�4�|d%<d&t5d'd(�t	�6�D��|d)<t	�7�}d*t8|j9�t:|j;�t:|j<�f|d+<t	�=�}d*t8|j9�t:|j;�t:|j<�f|d,<t>t	�?��|d-<t	�@��A�}
|
�Bd.d�|�C|
�|d/<tDd0t%jEd1�|�F�D]$\}}tDd2|d3|ft%jEd1��q^tDd0t%jEd1�t%jG�H�t%jG�H�dS)4Nr�lsb_releasezlsb_release -d -sZOSz	Darwin %szWindows � �
win32_editionz, z%s %s�archrmZkernel�python�__version__z
not installed�pipz (wheel=%s)rwz	--versionrrA�glibczfs-encodingz%s, %s�langz%Y-%m-%d %H:%M:%Sz	boot-timer��user�~�homer�Zpyexe�hostnameZPIDZcpusz%.1f%%, %.1f%%, %.1f%%cSsg|]}|t��d�qS)�d)r��	cpu_countr�r�r�r�r�{rz!print_sysinfo.<locals>.<listcomp>Zloadavgz%s%%, used=%s, total=%sZmemory�swapr^r�r&zF======================================================================�r�z%-17s %s�:)I�collections�datetime�getpass�locale�pprintr��ImportError�wheel�OrderedDictr�rrr#ZOSXrtrur
r�rwr�	win32_verrMr��systemr�list�architecture�machinerrC�python_implementation�python_version�python_compilerr�r�ry�libc_verr|�getfilesystemencoding�	getlocale�
fromtimestamp�	boot_time�strftime�now�getuserr�r��
expanduserr�r#�node�getpidr�rvr��virtual_memoryrx�percentr�used�total�swap_memoryrEr^r�r�r=�pformatr�r��itemsr��flush)r�r�r�rrr�r�info�outr�r�r�r�Zpinfo�k�vr�r�r�rP1s�


��


����
�

rPcCs6tdd��}z|�|dkWSty0YdS0dS)NcSsXi}td�}|��dd�D]6}dd�|�d�D�}|dt|d�}}|||<q|S)Nztasklist.exe /NH /FO csvrAcSsg|]}|�dd��qS)�"r@)�replacer�r�r�r�r��rz@is_win_secure_system_proc.<locals>.get_procs.<locals>.<listcomp>�,r)r#�
splitlinesryrx)rdr!�line�bitsr�r�r�r�r��	get_procs�s
z,is_win_secure_system_proc.<locals>.get_procsz
Secure SystemF)r�KeyError)r�r*r�r�r�rQ�s
	rQcCs6t��}t|d�r|��St|d�r2t�|���SdS)Nr�r�r)r�r�rMr��random�choicer�)r"r�r�r��_get_eligible_cpu�s

r.c@sJeZdZdZddifddifgZddifddddifd	difd
difddifddifd
difddifddifddifg
Zddifddifddifddifddifddifddifddifddddifddifddifddifd difd!difd"difd#difd$difgZe�rNed%difg7Zed&difg7Zed'difg7Zed(difg7Ze�rded)difg7Ze	�rzed*difg7Ze
�r�ed+ejfifg7Ze
�r�ed,difg7Ze�r�ed-difg7Ze�r�ed.difg7Ze�r�ed/difg7Ze�red0dd1d2ifg7ZgZe�r"eddifg7Znedejfifg7Ze
�rRed+ejd3fifg7Ze	�r�e�rved*ejd4fifg7Zned*ejfifg7Ze
�r�ed,e�gfifg7Zd5ejfifd6difd7difd8difd9difgZe�red5ejfifg7Zed5ejfifg7ZeeeeZd:d;�ZdEd<d=�Zd>d?�Z e!d@dA��Z"e!dBdC��Z#dDS)FrNaA container that lists all Process class method names + some
    reasonable parameters to be called with. Utility methods (parent(),
    children(), ...) are excluded.

    >>> ns = process_namespace(psutil.Process())
    >>> for fun, name in ns.iter(ns.getters):
    ...    fun()
    �cpu_percentr�Zmemory_percentr�r<r;T�connectionsr�Zmemory_info_exZoneshotr�parentsr�r��rr��	cpu_timesZcreate_timer�r�r�Zmemory_infor��net_connections�kindr��niceZnum_ctx_switchesZnum_threadsZ
open_filesr�rr��usernameZuids�gidsZterminalr�r�r�r�r�r�r�r�r�ZgroupedF)r�irr)r�r�rBr�cCs
||_dSr�)�_proc)r�r&r�r�r�r�szprocess_namespace.__init__ccs`t|�}t�|�|D]D\}}}|r,|��t|j|�}tj|g|�Ri|��}||fVqdS�z_Given a list of tuples yields a set of (fun, fun_name) tuples
        in random order.
        N)rr,�shuffler�r�r9r�ri)r��lsr��fun_namer�r�r�r�r�r�r�s
zprocess_namespace.itercCs|jj|jjdd�dS)z&Clear the cache of a Process instance.T)Z_ignore_nspN)r9�_initr�r�r�r�r�r�&szprocess_namespace.clear_cachecCs>|D]4\}}}d|}t||�sd|jj|f}t|��qdS)z}Given a TestCase instance and a list of tuples checks that
        the class defines the required test method names.
        Ztest_z$%r class should define a '%s' methodN)rMr�r��AttributeError)r�Z
test_classr<r=r�	meth_namer�r�r�r��test_class_coverage*s
�z%process_namespace.test_class_coveragecCs`tdd�|jD��}tdd�|jD��}tdd�ttj�D��}||B|A}|r\td|��dS)NcSsg|]}|d�qSr2r�r�r�r�r�r�:rz*process_namespace.test.<locals>.<listcomp>cSsg|]}|d�qSr2r�r�r�r�r�r�;rcSsg|]}|ddkr|�qS)rrr�r�r�r�r�r�<rz!uncovered Process class names: %r)r�r��ignoredr�r�r�r�)r��thisrB�klassZleftoutr�r�r��test8szprocess_namespace.testN)T)$r�r�r�r��utilsrB�gettersrr1r2r5r�Z
RLIMIT_NOFILEr.r4r0r
r3ZsettersZNORMAL_PRIORITY_CLASSrZIOPRIO_CLASS_NONEZ
IOPRIO_NORMALr.r'�SIGTERMZkillersZCTRL_C_EVENTZCTRL_BREAK_EVENTr�r�r�r�r�rArEr�r�r�r�rN�s�	���



rNc@s�eZdZdZddifddddifddddifddifd	dd
difd	dd
difddddifd
dddifde��fifddddifddifddifddddifde��fifddifddifddifddifgZer�e	r�e
��dkr�neddd
difg7Ze�reddifg7Ze
�r"eddifg7Ze�r8eddifg7Ze�rNed difg7Ze�rted!difg7Zed"d#ifg7Zd$difd%e��gfifd&difd'difgZeZed(d)��ZejZd*S)+rOz�A container that lists all the module-level, system-related APIs.
    Utilities such as cpu_percent() are excluded. Usage:

    >>> ns = system_namespace
    >>> for fun, name in ns.iter(ns.getters):
    ...    fun()
    rr�r��logicalFTZ	cpu_statsr3ZpercpuZdisk_io_countersZperdiskZdisk_partitionsr��
disk_usager4r5Znet_if_addrsZnet_if_statsr�Zpernicr7r^rZusersr�arm64r�r�r�r�r�Zwin_service_iterZwin_service_get)�algr�r>r/Zcpu_times_percentccsRt|�}t�|�|D]6\}}}tt|�}tj|g|�Ri|��}||fVqdSr:)rr,r;r�r�r�ri)r<r=r�r�r�r�r�r�r�ys

zsystem_namespace.iterN)r�r�r�r�r�r�rrGr/rrtr	�HAS_GETLOADAVGr9r8r6r
r�r�rBr�r�r�rNrAr�r�r�r�rOBsX	��
rOcCsdd�}ttd||d�S)zZDecorator which runs a test function and retries N times before
    actually failing.
    cSstd|tjd�dS)Nz%r, retryingr�)r�r|r�)rZr�r�r�rW�sz retry_on_failure.<locals>.logfunN)rTrrUrW)rSr�)rUrWr�r�r�rK�s�rKcs�fdd�}|S)z,Decorator to Ignore AccessDenied exceptions.cst�����fdd��}|S)NcsBz�|i|��WStjy<�dur.�s.�t�d��Yn0dS)Nzraises AccessDenied)r�r�rzr{r��r��only_ifr�r�r��sz9skip_on_access_denied.<locals>.decorator.<locals>.wrapperr�r��rOr�r�r\�s	z(skip_on_access_denied.<locals>.decoratorr��rOr\r�rPr�rI�s
rIcs�fdd�}|S)z3Decorator to Ignore NotImplementedError exceptions.cst�����fdd��}|S)NcsJz�|i|��WStyD�dur,�s,�d�j}t�|��Yn0dS)Nz4%r was skipped because it raised NotImplementedError)rBr�rzr{)r�r�r�rNr�r�r��s��z;skip_on_not_implemented.<locals>.decorator.<locals>.wrapperr�r�rPr�r�r\�s
z*skip_on_not_implemented.<locals>.decoratorr�rQr�rPr�rJ�srJ�	127.0.0.1cCsLt�t����*}|�|df�|��dWd�S1s>0YdS)z6Return an unused TCP port. Subject to race conditions.rrAN)r��closing�socket�bind�getsockname)�hostr
r�r�r�r_�sr_cCs�|dur|ttfvrd}t�||�}z@tjdvrB|�tjtjd�|�|�|tj	kr`|�
d�|WSty�|���Yn0dS)zBinds a generic socket.N�r@r)�nt�cygwinrAr�)
rrrTr�r��
setsockopt�
SOL_SOCKET�SO_REUSEADDRrUr�listenr�r)�family�type�addrr
r�r�r�r`�s



r`cCsptjs
J�tj�|�rJ|��t�tj|�}z"|�|�|tjkrL|�	d�Wnt
yj|���Yn0|S)zBind a UNIX socket.r�)r�rr�r�r�rTr�rUrr^r�r)r�r`r
r�r�r�ra�s


rarXc	Cs�t�t�|t����}|�|�|�d�|��}t�|t�}zL|�|�|��}|��\}}||kr|||fWWd�S|�	�qPWnt
y�|�	��Yn0Wd�n1s�0YdS)z^Build a pair of TCP sockets connected to each other.
    Return a (server, client) tuple.
    r�N)r�rSrTrrUr^rV�connectrrr.)r_raZllrFZcaddr�ar�r�r�rb�s


rbcCs�tjs
J�d}}z@t|tjd�}|�d�t�tjtj�}|�d�|�|�Wn4ty�|durp|�	�|dur�|�	��Yn0||fS)z�Build a pair of UNIX sockets connected to each other through
    the same UNIX file name.
    Return a (server, client) tuple.
    N�r`r)
r�rrarTr�setblockingr�rbr�r)r��server�clientr�r�r�rcs


rcc	cs g}d}}z�|�ttjtj��|�ttjtj��t�rd|�ttjtj��|�ttjtj��tr�t	r�t
�}t
�}t|�\}}t|tjd�}|||fD]}|�|�q�|VW|D]}|�
�q�||fD]}|dur�t|�q�n6|D]}|�
�q�||fD]}|du�rt|��q0dS)z1Open as many socket families / types as possible.Nrd)r�r`rTrr�
SOCK_DGRAMrrrr:rWrcrarrT)�socksZfname1Zfname2�s1�s2Zs3r�rbr�r�r�rds4
�

rdcCsddl}tr(tr(ts(t|tj�s(J|��|tjkr�dd�|�d�D�}t	|�dksZJ|��|D]"}d|krvdks^nJ|��q^ts�t
|�}|�|�nd|tjkr�t|t
�s�J|��ts�t
|�}|�|�n0|tjkr�t�d|�dus�J|��ntd	|��dS)
z[Check a net address validity. Supported families are IPv4,
    IPv6 and MAC addresses.
    rNcSsg|]}t|��qSr�)rxr�r�r�r�r�>rz%check_net_address.<locals>.<listcomp>rlr��z([a-fA-F0-9]{2}[:|\-]?){6}zunknown family %r)�	ipaddress�enumrr"r�IntEnumrTrryrEr�IPv4Addressrr�IPv6Addressr�ZAF_LINKrNr�r�)rar_rmZocts�numr�r�r�r]5s&
 

r]cCsTdd�}dd�}dd�}dd�}d	d
�}||�||�||�||�||�dS)z*Check validity of a connection namedtuple.cSs�t|�dk}t|�dvs$Jt|���|d|jks<J|j��|d|jksTJ|j��|d|jkslJ|j��|d|jks�J|j��|d|jks�J|j��|d|jks�J|j��|r�|d	|jks�J|j��dS)
Nr�)r-r�rrArmr�r�r�r-)rEr�r_r`�laddr�raddrrr�)rZhas_pidr�r�r��check_ntupleSsz-check_connection_ntuple.<locals>.check_ntuplecSs
|jtttfvsJ|j��tdur:t|jtj�sNJ|��nt|jt�sNJ|��|jtkr�t�|j|j	�}t
�|��^z|�|j
ddf�Wn4tjy�}z|jtjkr��WYd}~n
d}~00Wd�n1s�0Yn$|jtk�r|jtjk�sJ|j��dSr�)r_rrr�rnrrorxrTr`r�rSrUrs�errorr0Z
EADDRNOTAVAILrr��	CONN_NONE)rr�r1r�r�r��check_family_s
8z-check_connection_ntuple.<locals>.check_familycSs�ttdt��}|jtjtj|fvs,J|j��tdurLt|jtj�s`J|��nt|jt	�s`J|��|jtjkr�|j
tjks�J|j
��dS)N�SOCK_SEQPACKET)
r�rT�objectr`rrhrnrrorxrr�rw)rryr�r�r��
check_typets��z+check_connection_ntuple.<locals>.check_typecSs�|j|jfD]�}|jttfvr�t|t�s4Jt|���|s:qt|jt	�sTJt|j���d|jkrjdksvnJ|j��t
|j|j�q|jtkrt|t
�sJt|���qdS)Nri��)rsrtr_rrrrvr`�portrxr]�ipr�r)rrar�r�r��check_addrs�s"
z,check_connection_ntuple.<locals>.check_addrscSs�t|jt�sJ|j��dd�tt�D�}|j|vs<J|j��|jttfvrl|jt	krl|jtj
ks�J|j��n|jtj
ks�J|j��dS)NcSs g|]}|�d�rtt|��qS)ZCONN_r�r�r�r�r�r��szAcheck_connection_ntuple.<locals>.check_status.<locals>.<listcomp>)rrrr�r�r_rrr`rrw)rZvalidsr�r�r��check_status�s�z-check_connection_ntuple.<locals>.check_statusNr�)rrurxr{r~rr�r�r��check_connection_ntuplePs
r�cCsLg}|D]>}tr<|jtjkr<tr<d|jvr<tdt|��q|�|�q|S)ztOur process may start with some open UNIX sockets which are not
    initialized by us, invalidating unit tests.
    z/syslogzskipping %s)	rr_rTr�rrtrrr�)Zcons�newrr�r�r�r^�sr^cCsNzddl}t|d�st�Wn$ty>ddl}|�|�YS0|�|�SdS)z,Backport of importlib.reload of Python 3.3+.rN�reload)�	importlibrMr�impr�)�moduler�r�r�r�r�re�s
recCsptj�tj�|��d}tjddkr:ddl}|�||�Sddl}|j	�
||�}|j	�|�}|j�
|�|SdS)Nrr�)r�r��splitextr�r|r�r�Zload_source�importlib.util�util�spec_from_file_location�module_from_spec�loader�exec_module)r�r�r�r��spec�modr�r�r�rf�srfcCstj|tdd�dS)zRaise a warning msg.rmr�N)r�rgr�)r�r�r�r�rg�srgcCsVt|�}|j}t|�dks&|dtur*dSt|dd�}t|t�sDdStdd�|D��S)z-Check if object is an instance of namedtuple.rArF�_fieldsNcss|]}t|t�VqdSr�)rr)r��nr�r�r��	<genexpr>�rz is_namedtuple.<locals>.<genexpr>)r`�	__bases__rErvr�rr�)r��t�brr�r�r�ri�s
ric#s|trdnd�d�t|�d�}��fdd�t����D�}t�|�}t�||�zt	�
|�|VWt|�n
t|�0dS)z�Ctx manager which picks up a random shared CO lib used
        by this process, copies it in another location and loads it
        in memory via ctypes. Return the new absolutized path.
        �pypyr�z.sorxcs6g|].}tj�|j�d�kr�|j��vr|j�qS)rA)r�r�r��lowerr��r��extr�r�r��s$��'copyload_shared_lib.<locals>.<listcomp>N)r"rWr�r�r�r,r-rjrt�ctypes�CDLLrT)ry�dst�libsrr�r�r�rh�s
�

rhc	#sddlm}ddlm}d�t|�d�}�fdd�t����D�}trb|sbdd�t����D�}t�	|�}t
�||�d	}zPt�|�}|VW|d	ur�tj
jj}|jg|_||j�}|dkr�|�t|�nB|d	u�rtj
jj}|jg|_||j�}|dk�r|�t|�0d	S)
z�Ctx manager which picks up a random shared DLL lib used
        by this process, copies it in another location and loads it
        in memory via ctypes.
        Return the new absolutized, normcased path.
        r)�WinError)�wintypesz.dllrxcsFg|]>}|j�����rdtj�|j���vrd|j��vr|j�qS)r�Zwow64)r�r�r r�r�r��r�r�r�r�s
�r�cSs(g|] }dtj�|j���vr|j�qS)r�)r�r�r�r�r�r�r�r�r�s�N)r�r�r�rWr�r�r�r"r,r-rjrt�WinDLL�windllZkernel32�FreeLibraryZHMODULE�argtypes�_handlerT)	ryr�r�r�r�r�cfiler�rdr�r�r�rhs>

�
�





�




cCstdd�dS)NTr:)rCr�r�r�r��cleanup_test_procs7sr�cCs
t�|�Sr�)r|�exit)r+rr�r�r�rArr)N)F)TF)N)r@N)N)N)rR)rX)r@)r@)�r��
__future__rrrr�r�r0r�r�r�rtr,rNrrrjr'rTrfrzr|r}r�r�r�rHr�rrrrzrr�rrrr	r
rrr
Zpsutil._commonrrrrrZpsutil._compatrrrrrrrr�catch_warnings�simplefilterrnZ	unittest2Zpsutil._psposixr�__all__�builtin_module_namesr"r�rrjr*r=r@rrrar?�maxsizer-r	r>r�r;r<r!r r,rr�rr'r(�decoder)rr�ZASCII_FSr�r�r�rp�__file__r%�getr&r�rMr�r.r/r0rMr2r3r:ZHAS_NET_IO_COUNTERSr4r1r5r6r�r�r7r�r8r9ZHAS_THREADS�getuidZSKIP_SYSCONSr�r#r$�devnullrrsrr�r+r�rzr�r�r�r�r�rGr�rDrFrErAr#rHrBrCrYrXrSr$r[r�r\rZrTror�rSrUrVrWrRr�rMr�r�rLrPrQr.rNrOrKrIrJr_r`rarbrcrdr]r�r^rerfrgrirhr�r�r�r�r��<module>s�


.
-





.


��






*
�0)
&-

V
%;�
	��
"



M
%l	F




R3
PKok\�����\�\5psutil/tests/__pycache__/test_memleaks.cpython-39.pycnu�[���a

��?h3<�@sdZddlmZddlZddlZddlZddlZddlZddlmZddlm	Z	ddlm
Z
ddlmZddlmZdd	lm
Z
dd
lmZddlmZddlmZdd
lmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlm Z ddlm!Z!ddlm"Z"ddlm#Z#ddlm$Z$ddlm%Z%dd lm&Z&dd!lm'Z'ej(j)Z)e�*�Z+d"Z,d#d$�Z-Gd%d&�d&e�Z.Gd'd(�d(e.�Z/e#j0j1e
d)d*�Gd+d,�d,e��Z2Gd-d.�d.e�Z3dS)/a�Tests for detecting function memory leaks (typically the ones
implemented in C). It does so by calling a function many times and
checking whether process memory usage keeps increasing between
calls or over time.
Note that this may produce false positives (especially on Windows
for some reason).
PyPy appears to be completely unstable for this framework, probably
because of how its JIT handles memory, so tests are skipped.
�)�print_functionN)�LINUX)�MACOS)�OPENBSD)�POSIX)�SUNOS)�WINDOWS)�ProcessLookupError)�super)�HAS_CPU_AFFINITY)�HAS_CPU_FREQ)�HAS_ENVIRON)�
HAS_IONICE)�HAS_MEMORY_MAPS)�HAS_NET_IO_COUNTERS)�HAS_PROC_CPU_NUM)�HAS_PROC_IO_COUNTERS)�
HAS_RLIMIT)�HAS_SENSORS_BATTERY)�HAS_SENSORS_FANS)�HAS_SENSORS_TEMPERATURES)�	QEMU_USER)�TestMemoryLeak)�create_sockets)�
get_testfn)�process_namespace)�pytest)�skip_on_access_denied)�spawn_testproc)�system_namespace)�	terminate�cCsdd�}|S)zsDecorator for those Linux functions which are implemented in pure
    Python, and which we want to run faster.
    cst����fdd��}|S)Ncs\trB|jj}z(t|j_�|g|�Ri|��W||j_S||j_0n�|g|�Ri|��SdS�N)r�	__class__�times�	FEW_TIMES)�self�args�kwargs�before��fun��F/usr/local/lib64/python3.9/site-packages/psutil/tests/test_memleaks.py�wrapperDs�z5fewtimes_if_linux.<locals>.decorator.<locals>.wrapper)�	functools�wraps)r+r.r,r*r-�	decoratorCsz$fewtimes_if_linux.<locals>.decoratorr,)r1r,r,r-�fewtimes_if_linux>sr2c@s0eZdZdZeZdd�Ze�dd��Ze�dd��Z	e�dd	��Z
e�d
d��Zej
jedd
�e�dd���Zej
jedd
�e�dd���Ze�dd��Zdd�Zdd�Zej
jedd
�dd��Zej
jedd
�dd��Zej
jedd
�e�dd���Zej
jedd
�d d!��Ze�d"d#��Ze�eed$�d%d&���Zej
jed'd
�d(d)��Z ej
jedd
�e�d*d+���Z!e�d,d-��Z"e�eed$�d.d/���Z#e�d0d1��Z$e�ej
je%dd
�d2d3���Z&e�d4d5��Z'e�d6d7��Z(ej
jedd
�e�d8d9���Z)d:d;�Z*e�d<d=��Z+ej
je,dd
�d>d?��Z-ej
je,dd
�d@dA��Z.e�dBdC��Z/ej
je0dd
�e�dDdE���Z1ej
je2dFd
�ej
je3dd
�dGdH���Z4ej
je2dFd
�ej
je3dd
�dIdJ���Z5e�ej
jedKd
�dLdM���Z6ej
je7dd
�dNdO��Z8ej
jed'd
�dPdQ��Z9dRS)S�TestProcessObjectLeaksz$Test leaks of Process class methods.cCs td�}|�||j|j�dSr")r�test_class_coverageZgettersZsetters�r&�nsr,r,r-�
test_coverage_sz$TestProcessObjectLeaks.test_coveragecCs|�|jj�dSr")�execute�proc�name�r&r,r,r-�	test_namecsz TestProcessObjectLeaks.test_namecCs|�|jj�dSr")r8r9Zcmdliner;r,r,r-�test_cmdlinegsz#TestProcessObjectLeaks.test_cmdlinecCs|�|jj�dSr")r8r9Zexer;r,r,r-�test_exekszTestProcessObjectLeaks.test_execCs|�|jj�dSr")r8r9Zppidr;r,r,r-�	test_ppidosz TestProcessObjectLeaks.test_ppidz
POSIX only��reasoncCs|�|jj�dSr")r8r9Zuidsr;r,r,r-�	test_uidsssz TestProcessObjectLeaks.test_uidscCs|�|jj�dSr")r8r9�gidsr;r,r,r-�	test_gidsxsz TestProcessObjectLeaks.test_gidscCs|�|jj�dSr")r8r9�statusr;r,r,r-�test_status}sz"TestProcessObjectLeaks.test_statuscCs|�|jj�dSr")r8r9�nicer;r,r,r-�	test_nice�sz TestProcessObjectLeaks.test_nicecs t�������fdd��dS)Ncs�j���Sr")r9rGr,�Znicenessr&r,r-�<lambda>��z6TestProcessObjectLeaks.test_nice_set.<locals>.<lambda>)�thisprocrGr8r;r,rIr-�
test_nice_set�sz$TestProcessObjectLeaks.test_nice_set�
not supportedcCs|�|jj�dSr")r8r9�ionicer;r,r,r-�test_ionice�sz"TestProcessObjectLeaks.test_ionicecsZtr"t�������fdd��n4���fdd��t�tjt�	�dd�}��
t|�dS)Ncs�j���Sr")r9rOr,�r&�valuer,r-rJ�rKz8TestProcessObjectLeaks.test_ionice_set.<locals>.<lambda>cs�j�tj�Sr")r9rO�psutilZIOPRIO_CLASS_NONEr,r;r,r-rJ�rK���r)rrLrOr8r/�partial�cextZproc_ioprio_set�os�getpid�
execute_w_exc�OSError�r&r+r,rQr-�test_ionice_set�sz&TestProcessObjectLeaks.test_ionice_setcCs|�|jj�dSr")r8r9Zio_countersr;r,r,r-�test_io_counters�sz'TestProcessObjectLeaks.test_io_counterszworthless on POSIXcCst����|�|jj�dSr")rS�Process�usernamer8r9r;r,r,r-�
test_username�sz$TestProcessObjectLeaks.test_usernamecCs|�|jj�dSr")r8r9Zcreate_timer;r,r,r-�test_create_time�sz'TestProcessObjectLeaks.test_create_time)Zonly_ifcCs|�|jj�dSr")r8r9Znum_threadsr;r,r,r-�test_num_threads�sz'TestProcessObjectLeaks.test_num_threads�WINDOWS onlycCs|�|jj�dSr")r8r9Znum_handlesr;r,r,r-�test_num_handles�sz'TestProcessObjectLeaks.test_num_handlescCs|�|jj�dSr")r8r9Znum_fdsr;r,r,r-�test_num_fds�sz#TestProcessObjectLeaks.test_num_fdscCs|�|jj�dSr")r8r9Znum_ctx_switchesr;r,r,r-�test_num_ctx_switches�sz,TestProcessObjectLeaks.test_num_ctx_switchescCs|�|jj�dSr")r8r9�threadsr;r,r,r-�test_threads�sz#TestProcessObjectLeaks.test_threadscCs|�|jj�dSr")r8r9�	cpu_timesr;r,r,r-�test_cpu_times�sz%TestProcessObjectLeaks.test_cpu_timescCs|�|jj�dSr")r8r9Zcpu_numr;r,r,r-�test_cpu_num�sz#TestProcessObjectLeaks.test_cpu_numcCs|�|jj�dSr")r8r9Zmemory_infor;r,r,r-�test_memory_info�sz'TestProcessObjectLeaks.test_memory_infocCs|�|jj�dSr")r8r9Zmemory_full_infor;r,r,r-�test_memory_full_info�sz,TestProcessObjectLeaks.test_memory_full_infocCs|�|jj�dSr")r8r9Zterminalr;r,r,r-�
test_terminal�sz$TestProcessObjectLeaks.test_terminalcCs$trtn|j}|j|jj|d�dS)N�r$)rr%r$r8r9�resume�r&r$r,r,r-�test_resume�sz"TestProcessObjectLeaks.test_resumecCs|�|jj�dSr")r8r9�cwdr;r,r,r-�test_cwd�szTestProcessObjectLeaks.test_cwdcCs|�|jj�dSr")r8r9�cpu_affinityr;r,r,r-�test_cpu_affinity�sz(TestProcessObjectLeaks.test_cpu_affinitycs4t�������fdd����t�fdd��dS)Ncs�j���Sr"�r9rur,�Zaffinityr&r,r-rJ�rKz>TestProcessObjectLeaks.test_cpu_affinity_set.<locals>.<lambda>cs�j�dg�S�NrTrwr,r;r,r-rJ�rK)rLrur8rY�
ValueErrorr;r,rxr-�test_cpu_affinity_set�sz,TestProcessObjectLeaks.test_cpu_affinity_setcCs>tt�d��|�|jj�Wd�n1s00YdS)N�w)�openrr8r9Z
open_filesr;r,r,r-�test_open_files�sz&TestProcessObjectLeaks.test_open_filescCs|�|jj�dSr")r8r9Zmemory_mapsr;r,r,r-�test_memory_maps�sz'TestProcessObjectLeaks.test_memory_mapsz
LINUX onlycs���fdd��dS)Ncs�j�tj�Sr"�r9�rlimitrS�
RLIMIT_NOFILEr,r;r,r-rJ�rKz4TestProcessObjectLeaks.test_rlimit.<locals>.<lambda>�r8r;r,r;r-�test_rlimit�sz"TestProcessObjectLeaks.test_rlimitcs<t�tj������fdd����ttf�fdd��dS)Ncs�j�tj��Sr"r�r,��limitr&r,r-rJ�rKz8TestProcessObjectLeaks.test_rlimit_set.<locals>.<lambda>cs�j�d�Sry)r9r�r,r;r,r-rJ�rK)rLr�rSr�r8rYrZrzr;r,r�r-�test_rlimit_set�sz&TestProcessObjectLeaks.test_rlimit_setzworthless on WINDOWScsJt��0trdnd�����fdd��Wd�n1s<0YdS)NZinet�allcs�j���Sr")r9�net_connectionsr,��kindr&r,r-rJrKz=TestProcessObjectLeaks.test_net_connections.<locals>.<lambda>)rrr8r;r,r�r-�test_net_connections�sz+TestProcessObjectLeaks.test_net_connectionscCs|�|jj�dSr")r8r9�environr;r,r,r-�test_environsz#TestProcessObjectLeaks.test_environcCs|�dd��dS)NcSst�t���Sr")rV�	proc_inforWrXr,r,r,r-rJrKz7TestProcessObjectLeaks.test_proc_info.<locals>.<lambda>r�r;r,r,r-�test_proc_info
sz%TestProcessObjectLeaks.test_proc_infoN):�__name__�
__module__�__qualname__�__doc__rLr9r7r2r<r=r>r?r�mark�skipifrrBrDrFrHrMrrPr\rr]r`rarrrbrrdrerfrhrjrrkrlrmrnrrrtrrvr{r~rrrrr�r�r�r
r�r�r,r,r,r-r3Zs�






	











r3cspeZdZdZe�fdd��Ze�fdd��Zdd�Zerhdd	�Z	d
d�Z
dd
�Zdd�Zdd�Z
dd�Z�ZS)�TestTerminatedProcessLeaksz�Repeat the tests above looking for leaks occurring when dealing
    with terminated processes raising NoSuchProcess exception.
    The C functions are still invoked but will follow different code
    paths. We'll check those code paths.
    cs:t���t�|_t�|jj�|_|j��|j�	�dSr")
r
�
setUpClassr�subprSr^�pidr9�kill�wait��cls�r#r,r-r�s


z%TestTerminatedProcessLeaks.setUpClasscst���t|j�dSr")r
�
tearDownClassr r�r�r�r,r-r�s
z(TestTerminatedProcessLeaks.tearDownClasscCs$z
|�WntjyYn0dSr")rSZ
NoSuchProcessr[r,r,r-�call#s
zTestTerminatedProcessLeaks.callcCs|�|jj�dSr")r8r9r�r;r,r,r-�	test_kill+sz$TestTerminatedProcessLeaks.test_killcCs|�|jj�dSr")r8r9r r;r,r,r-�test_terminate.sz)TestTerminatedProcessLeaks.test_terminatecCs|�|jj�dSr")r8r9Zsuspendr;r,r,r-�test_suspend1sz'TestTerminatedProcessLeaks.test_suspendcCs|�|jj�dSr")r8r9rpr;r,r,r-rr4sz&TestTerminatedProcessLeaks.test_resumecCs|�|jj�dSr")r8r9r�r;r,r,r-�	test_wait7sz$TestTerminatedProcessLeaks.test_waitcs�fdd�}��|�dS)Ncs(zt��jj�WSty"Yn0dSr")rVr�r9r�r	r,r;r,r-r�<sz7TestTerminatedProcessLeaks.test_proc_info.<locals>.callr�)r&r�r,r;r-r�:sz)TestTerminatedProcessLeaks.test_proc_info)r�r�r�r��classmethodr�r�r�rr�r�r�rrr�r��
__classcell__r,r,r�r-r�sr�rcr@c@seZdZdd�Zdd�ZdS)�TestProcessDualImplementationcCs|�dd��dS)NcSstjt��dd�S)NT�Zuse_peb�rVZproc_cmdlinerWrXr,r,r,r-rJHrKzETestProcessDualImplementation.test_cmdline_peb_true.<locals>.<lambda>r�r;r,r,r-�test_cmdline_peb_trueGsz3TestProcessDualImplementation.test_cmdline_peb_truecCs|�dd��dS)NcSstjt��dd�S)NFr�r�r,r,r,r-rJKrKzFTestProcessDualImplementation.test_cmdline_peb_false.<locals>.<lambda>r�r;r,r,r-�test_cmdline_peb_falseJsz4TestProcessDualImplementation.test_cmdline_peb_falseN)r�r�r�r�r�r,r,r,r-r�Esr�c@sPeZdZdZdd�Ze�dd��Ze�dd��Ze�dd	��Ze�d
d��Z	e�dd
��Z
e�ejj
eore��dkdd�ejj
edd�dd����Zejj
edd�dd��Zdd�Zejj
edd�dd��Zdd�Zdd�Zejj
ed d�d!d"��Zejj
e�oej�d#�d$d�e�d%d&���Z e�d'd(��Z!e�ejj
e"dd�d)d*���Z#e�ejj
e�one�$�d+kd,d�d-d.���Z%d/d0�Z&ejj
ed d�d1d2��Z'e�ejj
e(dd�d3d4���Z)e�ejj
e*dd�d5d6���Z+e�ejj
e,dd�d7d8���Z-e�d9d:��Z.d;d<�Z/d=d>�Z0e�rLd?d@�Z1dAdB�Z2dCdD�Z3dEdF�Z4dGdH�Z5dIS)J�TestModuleFunctionsLeaksz&Test leaks of psutil module functions.cCst�}|�||j�dSr")rr4r�r5r,r,r-r7Vsz&TestModuleFunctionsLeaks.test_coveragecCs|�dd��dS)NcSstjdd�S)NT��logical�rS�	cpu_countr,r,r,r-rJ^rKz9TestModuleFunctionsLeaks.test_cpu_count.<locals>.<lambda>r�r;r,r,r-�test_cpu_count\sz'TestModuleFunctionsLeaks.test_cpu_countcCs|�dd��dS)NcSstjdd�S)NFr�r�r,r,r,r-rJbrKz?TestModuleFunctionsLeaks.test_cpu_count_cores.<locals>.<lambda>r�r;r,r,r-�test_cpu_count_cores`sz-TestModuleFunctionsLeaks.test_cpu_count_corescCs|�tj�dSr")r8rSrir;r,r,r-rjdsz'TestModuleFunctionsLeaks.test_cpu_timescCs|�dd��dS)NcSstjdd�S)NT)Zpercpu)rSrir,r,r,r-rJjrKz=TestModuleFunctionsLeaks.test_per_cpu_times.<locals>.<lambda>r�r;r,r,r-�test_per_cpu_timeshsz+TestModuleFunctionsLeaks.test_per_cpu_timescCs|�tj�dSr")r8rSZ	cpu_statsr;r,r,r-�test_cpu_statslsz'TestModuleFunctionsLeaks.test_cpu_stats�arm64zskipped due to #1892r@rNcCs|�tj�dSr")r8rSZcpu_freqr;r,r,r-�
test_cpu_freqpsz&TestModuleFunctionsLeaks.test_cpu_freqrccCst��|�tj�dSr")rS�
getloadavgr8r;r,r,r-�test_getloadavgysz(TestModuleFunctionsLeaks.test_getloadavgcCs|�tj�dSr")r8rSZvirtual_memoryr;r,r,r-�test_virtual_memory�sz,TestModuleFunctionsLeaks.test_virtual_memoryz&worthless on SUNOS (uses a subprocess)cCs|�tj�dSr")r8rSZswap_memoryr;r,r,r-�test_swap_memory�sz)TestModuleFunctionsLeaks.test_swap_memorycCs$trtn|j}|jdd�|d�dS)NcSst�t���Sr")rSZ
pid_existsrWrXr,r,r,r-rJ�rKz:TestModuleFunctionsLeaks.test_pid_exists.<locals>.<lambda>ro�rr%r$r8rqr,r,r-�test_pid_exists�sz(TestModuleFunctionsLeaks.test_pid_existscCs$trtn|j}|jdd�|d�dS)NcSs
t�d�S)N�.)rS�
disk_usager,r,r,r-rJ�rKz:TestModuleFunctionsLeaks.test_disk_usage.<locals>.<lambda>ror�rqr,r,r-�test_disk_usage�sz(TestModuleFunctionsLeaks.test_disk_usagezQEMU user not supportedcCs|�tj�dSr")r8rSZdisk_partitionsr;r,r,r-�test_disk_partitions�sz-TestModuleFunctionsLeaks.test_disk_partitionsz/proc/diskstatsz3/proc/diskstats not available on this Linux versioncCs|�dd��dS)NcSstjdd�S�NF)Znowrap)rSZdisk_io_countersr,r,r,r-rJ�rKz@TestModuleFunctionsLeaks.test_disk_io_counters.<locals>.<lambda>r�r;r,r,r-�test_disk_io_counters�sz.TestModuleFunctionsLeaks.test_disk_io_counterscCs|�tj�dSr")r8rSZpidsr;r,r,r-�	test_pids�sz"TestModuleFunctionsLeaks.test_pidscCs|�dd��dS)NcSstjdd�Sr�)rSZnet_io_countersr,r,r,r-rJ�rKz?TestModuleFunctionsLeaks.test_net_io_counters.<locals>.<lambda>r�r;r,r,r-�test_net_io_counters�sz-TestModuleFunctionsLeaks.test_net_io_countersrzneed root accesscCsDtjdd�t��|�dd��Wd�n1s60YdS)Nr��r�cSstjdd�S)Nr�r�)rSr�r,r,r,r-rJ�rKz?TestModuleFunctionsLeaks.test_net_connections.<locals>.<lambda>)rSr�rr8r;r,r,r-r��sz-TestModuleFunctionsLeaks.test_net_connectionscCs"trdn|j}|jtj|d�dS)Ni@)�	tolerance)rr�r8rSZnet_if_addrs)r&r�r,r,r-�test_net_if_addrs�sz*TestModuleFunctionsLeaks.test_net_if_addrscCs|�tj�dSr")r8rSZnet_if_statsr;r,r,r-�test_net_if_stats�sz*TestModuleFunctionsLeaks.test_net_if_statscCs|�tj�dSr")r8rSZsensors_batteryr;r,r,r-�test_sensors_battery�sz-TestModuleFunctionsLeaks.test_sensors_batterycCs|�tj�dSr")r8rSZsensors_temperaturesr;r,r,r-�test_sensors_temperatures�sz2TestModuleFunctionsLeaks.test_sensors_temperaturescCs|�tj�dSr")r8rSZsensors_fansr;r,r,r-�test_sensors_fans�sz*TestModuleFunctionsLeaks.test_sensors_fanscCs|�tj�dSr")r8rSZ	boot_timer;r,r,r-�test_boot_time�sz'TestModuleFunctionsLeaks.test_boot_timecCs|�tj�dSr")r8rSZusersr;r,r,r-�
test_users�sz#TestModuleFunctionsLeaks.test_userscCs|�dd��dS)NcSs
t�d�S)NF)rSZ
_set_debugr,r,r,r-rJ�rKz9TestModuleFunctionsLeaks.test_set_debug.<locals>.<lambda>r�r;r,r,r-�test_set_debug�sz'TestModuleFunctionsLeaks.test_set_debugcCs|�tj�dSr")r8rVZwinservice_enumerater;r,r,r-�test_win_service_iter�sz.TestModuleFunctionsLeaks.test_win_service_itercCsdSr"r,r;r,r,r-�test_win_service_get�sz-TestModuleFunctionsLeaks.test_win_service_getcs&tt������|��fdd��dS)Ncs
t���Sr")rVZwinservice_query_configr,�r:r,r-rJ�rKzFTestModuleFunctionsLeaks.test_win_service_get_config.<locals>.<lambda>��nextrSZwin_service_iterr:r8r;r,r�r-�test_win_service_get_config�sz4TestModuleFunctionsLeaks.test_win_service_get_configcs&tt������|��fdd��dS)Ncs
t���Sr")rVZwinservice_query_statusr,r�r,r-rJ�rKzFTestModuleFunctionsLeaks.test_win_service_get_status.<locals>.<lambda>r�r;r,r�r-�test_win_service_get_status�sz4TestModuleFunctionsLeaks.test_win_service_get_statuscs&tt������|��fdd��dS)Ncs
t���Sr")rVZwinservice_query_descrr,r�r,r-rJ�rKzKTestModuleFunctionsLeaks.test_win_service_get_description.<locals>.<lambda>r�r;r,r�r-� test_win_service_get_description�sz9TestModuleFunctionsLeaks.test_win_service_get_descriptionN)6r�r�r�r�r7r2r�r�rjr�r�rr�r�r�platform�machinerr�rr�r�rr�r�r�rr�rrW�path�existsr�r�rr��getuidr�r�r�rr�rr�rr�r�r�r�r�r�r�r�r�r,r,r,r-r�Ss~




�


�


r�)4r��
__future__rr/rWr�rSZpsutil._commonrrrrrrZpsutil._compatr	r
Zpsutil.testsrrr
rrrrrrrrrrrrrrrrrrr Z_psplatformrVr^rLr%r2r3r�r�r�r�r�r,r,r,r-�<module>s\
66
PKok\�đ�&>&>8psutil/tests/__pycache__/test_connections.cpython-39.pycnu�[���a

��?hS�@s\dZddlZddlZddlZddlmZddlmZddlmZddlmZddlm	Z	ddl
Z
ddl
mZdd	l
mZdd
l
m
Z
ddl
mZddl
mZdd
l
mZddl
mZddl
mZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlm Z ddlm!Z!ddlm"Z"ddlm#Z#ddlm$Z$ddlm%Z%dd lm&Z&dd!lm'Z'e(ed"e)��Z*d#d$�Z+e!j,j-d%d&�Gd'd(�d(e��Z.Gd)d*�d*e.�Z/e!j,j-d%d&�Gd+d,�d,e.��Z0e!j,j-d%d&�Gd-d.�d.e.��Z1Gd/d0�d0e.�Z2e!j,j3ed1d2�Gd3d4�d4e.��Z4Gd5d6�d6e�Z5dS)7zFTests for psutil.net_connections() and Process.net_connections() APIs.�N)�closing)�AF_INET)�AF_INET6)�
SOCK_DGRAM)�SOCK_STREAM)�FREEBSD)�LINUX)�MACOS)�NETBSD)�OPENBSD)�POSIX)�SUNOS)�WINDOWS)�
supports_ipv6)�PY3)�AF_UNIX)�HAS_NET_CONNECTIONS_UNIX)�SKIP_SYSCONS)�PsutilTestCase)�bind_socket)�bind_unix_socket)�check_connection_ntuple)�create_sockets)�filter_proc_net_connections)�pytest)�
reap_children)�retry_on_failure)�skip_on_access_denied)�tcp_socketpair)�unix_socketpair)�
wait_for_file�SOCK_SEQPACKETcCs$t��j|d�}|dvr t|�S|S)N��kind)�all�unix)�psutil�Process�net_connectionsr)r#�cons�r*�I/usr/local/lib64/python3.9/site-packages/psutil/tests/test_connections.py�this_proc_net_connections2sr,�serial)�namec@s&eZdZdd�Zdd�Zd	dd�ZdS)
�ConnectionTestCasecCstdd�gksJ�dS�Nr$r"�r,��selfr*r*r+�setUp;szConnectionTestCase.setUpcCstdd�gksJ�dSr0r1r2r*r*r+�tearDown>szConnectionTestCase.tearDownr$csdztj|d�}Wn tjy0tr*YdS�Yn0�fdd�|D�}|��|��||ks`J�dS)z�Given a process PID and its list of connections compare
        those against system-wide connections retrieved via
        psutil.net_connections.
        r"Ncs"g|]}|j�kr|dd��qS)N�����pid��.0�cr7r*r+�
<listcomp>Q�zBConnectionTestCase.compare_procsys_connections.<locals>.<listcomp>)r&r(ZAccessDeniedr	�sort)r3r8Z	proc_consr#Zsys_consr*r7r+�compare_procsys_connectionsBsz.ConnectionTestCase.compare_procsys_connectionsN)r$)�__name__�
__module__�__qualname__r4r5r?r*r*r*r+r/9sr/c@s4eZdZejjedd�dd��Zdd�Zdd�Z	d	S)
�TestBasicOperations�
requires root��reasoncCsDt��*tjdd�D]}t|�qWd�n1s60YdSr0)rr&r(r�r3�connr*r*r+�test_systemXszTestBasicOperations.test_systemcCsBt��(tdd�D]}t|�qWd�n1s40YdSr0)rr,rrGr*r*r+�test_process^sz TestBasicOperations.test_processcCsnt�t��tdd�Wd�n1s*0Yt�t��tjdd�Wd�n1s`0YdS)Nz???r")rZraises�
ValueErrorr,r&r(r2r*r*r+�test_invalid_kindcs(z%TestBasicOperations.test_invalid_kindN)
r@rArBr�mark�skipifrrIrJrLr*r*r*r+rCWs
rCc@s�eZdZdZdd�Zdd�Zdd�Zejj	e
�dd	�d
d��Zdd
�Zejj	e
�dd	�dd��Z
ejj	edd	�dd��Zejj	edd	�dd��ZdS)�TestUnconnectedSocketsz;Tests sockets which are open but not connected to anything.cCsttdd�}tdd�|D��}ts$tr0||��St|�dks@J�|djdkrh||��j|��kshJ�|dSdS)Nr$r"cSsg|]}|j|f�qSr*)�fdr9r*r*r+r<pr=z=TestUnconnectedSockets.get_conn_from_sock.<locals>.<listcomp>�rr6)r,�dictr
r�fileno�lenrP)r3�sockr)Zsmapr*r*r+�get_conn_from_sockns
z)TestUnconnectedSockets.get_conn_from_sockcCs�|�|�}t|�|jdkr.|j|��ks.J�|j|jks>J�|j|�tjtj	�ksXJ�|�
�}|sztrzt|t
�rz|��}|jtkr�|dd�}|j|ks�J�|jtkr�tr�tdd�}|jt��|dd�|S)z�Given a socket, makes sure it matches the one obtained
        via psutil. It assumes this process created one connection
        only (the one supposed to be checked).
        r6N�r$r")rVrrPrS�family�type�
getsockopt�socket�
SOL_SOCKET�SO_TYPE�getsocknamer�
isinstance�bytes�decoder�laddrrrr,r?�os�getpid)r3rUrHrbr)r*r*r+�check_socket{s 



z#TestUnconnectedSockets.check_socketcCsbd}tttt|d���8}|�|�}|jdks0J�|jtjks@J�Wd�n1sT0YdS�N��	127.0.0.1r��addrr*)	rrrrre�raddr�statusr&�CONN_LISTEN�r3rjrUrHr*r*r+�test_tcp_v4�s

z"TestUnconnectedSockets.test_tcp_v4zIPv6 not supportedrEcCsbd}tttt|d���8}|�|�}|jdks0J�|jtjks@J�Wd�n1sT0YdS�N)�::1rrir*)	rrrrrerkrlr&rmrnr*r*r+�test_tcp_v6�s

z"TestUnconnectedSockets.test_tcp_v6cCsbd}tttt|d���8}|�|�}|jdks0J�|jtjks@J�Wd�n1sT0YdSrf)	rrrrrerkrlr&�	CONN_NONErnr*r*r+�test_udp_v4�s

z"TestUnconnectedSockets.test_udp_v4cCsbd}tttt|d���8}|�|�}|jdks0J�|jtjks@J�Wd�n1sT0YdSrp)	rrrrrerkrlr&rsrnr*r*r+�test_udp_v6�s

z"TestUnconnectedSockets.test_udp_v6�
POSIX onlycCsd|��}tt|td���8}|�|�}|jdks2J�|jtjksBJ�Wd�n1sV0YdS�N)rY��	�
get_testfnrrrrerkrlr&rs�r3�testfnrUrHr*r*r+�
test_unix_tcp�s

z$TestUnconnectedSockets.test_unix_tcpcCsd|��}tt|td���8}|�|�}|jdks2J�|jtjksBJ�Wd�n1sV0YdSrwryr{r*r*r+�
test_unix_udp�s

z$TestUnconnectedSockets.test_unix_udpN)r@rArB�__doc__rVrerorrMrNrrrrtrurr}r~r*r*r*r+rOjs



rOc@sBeZdZdZejjedd�dd��Zejje	dd�dd��Z
d	S)
�TestConnectedSocketzFTest socket pairs which are actually connected to
    each other.
    zunreliable on SUONSrEcCs�d}tdd�gksJ�tt|d�\}}zVtdd�}t|�dksBJ�|djtjksVJ�|djtjksjJ�W|��|��n|��|��0dS)Nrg�tcp4r"rirWrrQ)r,rrrTrlr&ZCONN_ESTABLISHED�close)r3rj�server�clientr)r*r*r+�test_tcp�s

�zTestConnectedSocket.test_tcprvcCs|��}t|�\}}z�tdd�}|djr<|djr<J|��|djrX|djrXJ|��ts`trndd�|D�}t|�dks~J�ts�ts�t	s�t
r�|djdks�J�|djdks�J�||djp�|djks�J�n|djp�|dj|ks�J�W|��|��n|��|��0dS)	Nr%r"rrQcSsg|]}|jdkr|�qS)z/var/run/log)rkr9r*r*r+r<�r=z1TestConnectedSocket.test_unix.<locals>.<listcomp>rWrx)rzrr,rbrkr
rrTrr
rr�)r3r|r�r�r)r*r*r+�	test_unix�s$

�zTestConnectedSocket.test_unixN)r@rArBrrrMrNr
r�rr�r*r*r*r+r��s

r�c@s.eZdZdd�Zeed�dd��Zdd�ZdS)	�TestFilterscCs�dd�}t���|dtttgtttg�|dttgttg�|dtgttg�|dttgtg�|dtgtg�|dtgtg�|d	ttgtg�|d
tgtg�|dtgtg�tr�|dtgtttg�Wd�n1s�0YdS)
NcSsbt|d�D] }|j|vsJ�|j|vs
J�q
ts^tj|d�D] }|j|vsNJ�|j|vs<J�q<dS)Nr")r,rXrYrr&r()r#�families�typesrHr*r*r+�checksz'TestFilters.test_filters.<locals>.checkr$�inet�inet4�tcpr��tcp6�udp�udp4�udp6r%)rrrrrrr!r)r3r�r*r*r+�test_filterss*	��zTestFilters.test_filters)Zonly_ifcs�t��fdd�}t�d�}t�d�}tj��jt��d��}|jt	t
�d|d�}|jt	t
�d|d�}|jt	t�d|d�}|jt	t�d|d�}��|�}	t
t|d	d
��}
��|�}t
t|d	d
��}t�r���|�}
t
t|d	d
��}��|�}t
t|d	d
��}nd}
d}d}d}t����D]�}|��}t|�dk�s8J�|D]�}|j|	jk�rh|||t
t|
dtjd
�n�|j|jk�r�|||t
t|dtjd�nZ|jt|
dd�k�r�|||tt|dtjd�n,|jt|dd�k�r<|||tt|dtjd��q<�qdS)Ncs�d}t|�|j|ksJ�|j|ks(J�|j|ks6J�|j|ksDJ�|j|ksRJ�|D]2}	|j|	d�}
|	|vr||
gks�J�qV|
gksVJ�qVtr���|j	|g�dS)N)
r$r�r��inet6r�r�r�r�r�r�r")
rrXrYrbrkrlr(rr?r8)�procrHrXrYrbrkrl�kindsZ	all_kindsr#r)r2r*r+�
check_conn$sz+TestFilters.test_combos.<locals>.check_conna4
            import socket, time
            s = socket.socket({family}, socket.SOCK_STREAM)
            s.bind(('{addr}', 0))
            s.listen(5)
            with open('{testfn}', 'w') as f:
                f.write(str(s.getsockname()[:2]))
            [time.sleep(0.1) for x in range(100)]
            a
            import socket, time
            s = socket.socket({family}, socket.SOCK_DGRAM)
            s.bind(('{addr}', 0))
            with open('{testfn}', 'w') as f:
                f.write(str(s.getsockname()[:2]))
            [time.sleep(0.1) for x in range(100)]
            )�dirrh)rXrjr|rqT)�deleterQr*)r$r�r�r�r�)r$r�r�r�r�r8)r$r�r�r�r�)r$r�r�r�r�)r�textwrap�dedentrc�path�basenamerz�getcwd�format�intrr�pyrun�evalr rr&r'�childrenr(rTr8rrmrrs�getattr)r3r�Ztcp_templateZudp_templateZtestfileZ
tcp4_templateZ
udp4_templateZ
tcp6_templateZ
udp6_templateZ	tcp4_procZ	tcp4_addrZ	udp4_procZ	udp4_addrZ	tcp6_procZ	tcp6_addrZ	udp6_procZ	udp6_addr�pr)rHr*r2r+�test_combos s�




�
�
�
�



����zTestFilters.test_comboscCs�t����tdd�}t|�t�r$dndks.J�|D]$}|jttfvsHJ�|jtks2J�q2tdd�}t|�dksrJ�|djtks�J�|djtks�J�t�r�tdd�}t|�dks�J�|djtks�J�|djtks�J�tdd�}t|�t�r�dndks�J�|D]*}|jttfv�sJ�|jt	k�sJ��qtd	d�}t|�dk�sJJ�|djtk�s^J�|djt	k�srJ�t��r�td
d�}t|�dk�s�J�|djtk�s�J�|djt	k�s�J�tdd�}t|�t��r�dndk�s�J�|D].}|jttfv�sJ�|jtt	fv�s�J��q�t��rntd
d�}t|�dk�s>J�|D]*}|jtk�sVJ�|jtt	fv�sBJ��qBt
�r�t�s�t�s�tdd�}t|�dk�s�J�|D]*}|jt
k�s�J�|jtt	fv�s�J��q�Wd�n1�s�0YdS)Nr�r"rWrQr�rr�r�r�r�r��r�r%�)rr,rTrrXrrrYrrrrr
r)r3r)rHr*r*r+�
test_count�s\









zTestFilters.test_countN)r@rArBr�rr	r�r�r*r*r*r+r�s

r�rDrEc@s&eZdZdZdd�Ze�dd��ZdS)�TestSystemWideConnectionszTests for net_connections().cCs�dd�}t��rddlm}|��D]L\}}|dkr:ts:q$|\}}t�|�}t|�tt|��ksdJ�||||�q$Wd�n1s�0YdS)NcSs<|D]2}|j|vsJ�|jtkr.|j|vs.J�t|�qdS)N)rXrrYr)r)r��types_rHr*r*r+r��s

z0TestSystemWideConnections.test_it.<locals>.checkr)�	conn_tmapr%)	r�psutil._commonr��itemsrr&r(rT�set)r3r�r�r#�groupsr�r�r)r*r*r+�test_it�s
z!TestSystemWideConnections.test_itcs�t��}t|�}Wd�n1s$0Yg�d}g}t|�D]:}|��}|�|�t�d|�}|�|�}��|j�qB|D]}t	|�q��fdd�t
jdd�D�}	�D]B�t�fdd�|	D��|ks�J�t
���}
t|
�d��|ks�J�q�dS)N�
a"                import time, os
                from psutil.tests import create_sockets
                with create_sockets():
                    with open(r'%s', 'w') as f:
                        f.write("hello")
                    [time.sleep(0.1) for x in range(100)]
                csg|]}|j�vr|�qSr*r7�r:�x)�pidsr*r+r<szFTestSystemWideConnections.test_multi_sockets_procs.<locals>.<listcomp>r$r"csg|]}|j�kr|�qSr*r7r�r7r*r+r<!r=)
rrT�rangerz�appendr�r�r�r8r r&r(r')r3�socks�expected�times�fnames�_�fname�srcZsprocZsysconsr�r*)r8r�r+�test_multi_sockets_procs�s,&
�



�
z2TestSystemWideConnections.test_multi_sockets_procsN)r@rArBrr�rr�r*r*r*r+r��sr�c@seZdZdd�ZdS)�TestMisccCs�g}g}tt�D]\}|�d�rtt|�}t|�}|��s@J|��t|vsLJ�||vsXJ�|�|�|�|�qtr~tjtj	t
r�tjdS)NZCONN_)r�r&�
startswithr��str�isupperr�r
Z	CONN_IDLEZ
CONN_BOUNDrZCONN_DELETE_TCB)r3Zints�strsr.�numZstr_r*r*r+�test_net_connection_constants's 


z&TestMisc.test_net_connection_constantsN)r@rArBr�r*r*r*r+r�&sr�)6rrcr[r��
contextlibrrrrrr&rrr	r
rrr
rr�rZpsutil._compatrZpsutil.testsrrrrrrrrrrrrrrrr r��objectr!r,rMZxdist_groupr/rCrOr�r�rNr�r�r*r*r*r+�<module>sb]7g?PKok\����7�72psutil/tests/__pycache__/test_posix.cpython-39.pycnu�[���a

��?hD�@s�dZddlZddlZddlZddlZddlZddlZddlZddlmZddlm	Z	ddlm
Z
ddlmZddlmZddlm
Z
dd	lmZdd
lmZddlmZddlmZdd
lmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZe
�rNddlZddlZddlm Z d,dd�Z!dd�Z"dd�Z#dd�Z$d d!�Z%d"d#�Z&ej'j(e
d$d%�Gd&d'�d'e��Z)ej'j(e
d$d%�Gd(d)�d)e��Z*ej'j(e
d$d%�Gd*d+�d+e��Z+dS)-zPOSIX specific tests.�N)�AIX)�BSD)�LINUX)�MACOS)�OPENBSD)�POSIX)�SUNOS)�AARCH64)�HAS_NET_IO_COUNTERS)�
PYTHON_EXE)�	QEMU_USER)�PsutilTestCase)�mock)�pytest)�retry_on_failure)�sh)�skip_on_access_denied)�spawn_testproc)�	terminate)�which)�getpagesizec	Cs�dg}tr|�d�|dur0|�dt|�g�nts8trD|�d�n
|�d�tr^|�dd�}|�d	|g�t|�}tr�|��n|��d
d�}g}|D]6}|�	�}zt
|�}Wnty�Yn0|�|�q�|dur�|S|dSdS)zwWrapper for calling the ps command with a little bit of cross-platform
    support for a narrow range of features.
    �psz--no-headersNz-pz-A�ax�startZstimez-o�r)r�append�extend�strrr�replacer�
splitlines�strip�int�
ValueError)�fmt�pid�cmd�output�
all_output�line�r)�C/usr/local/lib64/python3.9/site-packages/psutil/tests/test_posix.pyr/s0

rcCs>d}trd}t||���}tr6d|dvs.J�|dS|dS)N�commandZcommz
/bin/qemu-rr)rr�splitr)r$�fieldr+r)r)r*�ps_namegsr.cCs0d}tstrd}t||�}t�dd|�}|��S)Nr+�argsz\(python.*?\)$�)rrr�re�subr )r$r-�outr)r)r*�ps_argsrs
r4cCsd}trd}t||�S)NZrssZrssize�rr�r$r-r)r)r*�ps_rss|sr7cCsd}trd}t||�S)NZvszZvsizer5r6r)r)r*�ps_vsz�sr8c	
Cs�ztd|���}Wn@tyT}z(dt|���vr>t�d���WYd}~n
d}~00|�d�d}|��}t|d�d}t|d�d}t|d�d}t	|d	�
d
d��}||||fS)Nzdf -k %szdevice busyzdf returned EBUSY�
r�����%r0)rr �RuntimeErrorr�lowerr�skipr,r!�floatr)	�devicer3�errr(�fields�	sys_total�sys_used�sys_free�sys_percentr)r)r*�df�s
rJz
POSIX only��reasonc@s�eZdZdZedd��Zedd��Zdd�Zdd	�Zd
d�Z	dd
�Z
dd�Ze�e
�dd���Ze�e
�dd���Zdd�Zdd�Zdd�Zdd�Zejjep�edd�dd��Zd d!�Ze
�d"d#��Zejjed$d�ejjed%d�d&d'���Zd(S))�TestProcesszBCompare psutil results against 'ps' command line utility (mainly).cCsttddgtjd�j|_dS)Nz-Ez-O)�stdin)rr�
subprocess�PIPEr$��clsr)r)r*�
setUpClass�s�zTestProcess.setUpClasscCst|j�dS�N)rr$rQr)r)r*�
tearDownClass�szTestProcess.tearDownClasscCs,td|j�}t�|j���}||ks(J�dS)N�ppid)rr$�psutil�ProcessrV)�selfZppid_psZppid_psutilr)r)r*�	test_ppid�szTestProcess.test_ppidcCs.td|j�}t�|j���j}||ks*J�dS)N�uid)rr$rWrX�uids�real)rYZuid_psZ
uid_psutilr)r)r*�test_uid�szTestProcess.test_uidcCs.td|j�}t�|j���j}||ks*J�dS)NZrgid)rr$rWrX�gidsr])rYZgid_psZ
gid_psutilr)r)r*�test_gid�szTestProcess.test_gidcCs,td|j�}t�|j���}||ks(J�dS)N�user)rr$rWrX�username)rYZusername_psZusername_psutilr)r)r*�
test_username�szTestProcess.test_usernamecCs^t��}tjdtd��4}|��t|��j�ks2J�|j	s<J�Wd�n1sP0YdS)Nzpsutil.pwd.getpwuid�Zside_effect)
rWrXr�patch�KeyErrorrbrr\r]�called)rY�pZfunr)r)r*�test_username_no_resolution�sz'TestProcess.test_username_no_resolutioncCs<t�d�t|j�}t�|j���dd}||ks8J�dS)N皙�����?rr:)�time�sleepr7r$rWrX�memory_info)rYZrss_psZ
rss_psutilr)r)r*�test_rss_memory�s

zTestProcess.test_rss_memorycCs<t�d�t|j�}t�|j���dd}||ks8J�dS)Nrjrr:)rkrlr8r$rWrXrm)rYZvsz_psZ
vsz_psutilr)r)r*�test_vsz_memory�s

zTestProcess.test_vsz_memorycCsvt|j�}tj�|���}t�|j�����}t	�
dd|�}t	�
dd|�}t	�
dd|�}t	�
dd|�}||ksrJ�dS)Nz\d.\dr0z\d)r.r$�os�path�basenamer@rWrX�namer1r2)rYZname_psZname_psutilr)r)r*�	test_name�s
zTestProcess.test_namec	Cs�d}gd�}tjd|d��Vtjd|d��(t��}|��dksDJ�Wd�n1sX0YWd�n1sv0YdS)N�long-program-name)�long-program-name-extendedZfoo�bar�psutil._psplatform.Process.name�Zreturn_value�"psutil._psplatform.Process.cmdlinerv)rrerWrXrs)rYrs�cmdlinerhr)r)r*�test_name_long�s�zTestProcess.test_name_longc	Cs�d}tjd|d��^tjdt�dd�d��(t��}|��dksDJ�Wd�n1sX0YWd�n1sv0YdS�Nrurxryrzrr0rd)rrerWZAccessDeniedrXrs�rYrsrhr)r)r*�test_name_long_cmdline_ad_exc�s
�z)TestProcess.test_name_long_cmdline_ad_excc
Cs�d}tjd|d���tjdt�dd�d��Lt��}t�tj��|��Wd�n1s^0YWd�n1s|0YWd�n1s�0YdSr})rrerWZ
NoSuchProcessrXr�raisesrsr~r)r)r*�test_name_long_cmdline_nsp_excs
�z*TestProcess.test_name_long_cmdline_nsp_exczps -o start not availablerKcCs\td|j�}t�|j���}tj�|��d�}t|�}tj�|��d�}|||fvsXJ�dS)Nrz%H:%M:%S)	rr$rWrXZcreate_time�datetime�
fromtimestamp�strftime�round)rYZtime_psZtime_psutilZtime_psutil_tstampZround_time_psutilZround_time_psutil_tstampr)r)r*�test_create_times����zTestProcess.test_create_timecCs^t|j�}t�|j���}z||ks(J�Wn.tyX|dt|��}||ksTJ�Yn0dSrT)r.r$rWrXZexe�AssertionError�len)rYZps_pathnameZpsutil_pathnameZadjusted_ps_pathnamer)r)r*�test_exes
zTestProcess.test_execCsTt|j�}d�t�|j����}trDt|�t|�krD|�|�sPJ�n||ksPJ�dS)N� )	r4r$�joinrWrXr{r	r��
startswith)rYZ
ps_cmdlineZpsutil_cmdliner)r)r*�test_cmdline1s

zTestProcess.test_cmdlineznot reliable on SUNOSznot reliable on AIXcCs(td|j�}t����}||ks$J�dS)N�nice)rr$rWrXr�)rYZps_niceZpsutil_nicer)r)r*�	test_nice?szTestProcess.test_niceN)�__name__�
__module__�__qualname__�__doc__�classmethodrSrUrZr^r`rcrirrrnrortr|rr�r�mark�skipifrrr�r�r�rrr�r)r)r)r*rM�s8

	





rMc@s�eZdZdZe�dd��Zejje	dd�ejje
d�dd�ejjedd�d	d
����Ze�dd��Z
e�d
d��Zdd�Zdd�Zdd�Zdd�Zejjedd�e�dd���ZdS)�TestSystemAPIszTest some system APIs.cs~ttd���t���ts$tr0d�vr0��dd�t��t��dkrz�fdd��D��fdd��D�}|�dt	|���dS)Nr$rrcsg|]}|�vr|�qSr)r)��.0�x)�pids_psr)r*�
<listcomp>X�z,TestSystemAPIs.test_pids.<locals>.<listcomp>csg|]}|�vr|�qSr)r)r�)�pids_psutilr)r*r�Xszdifference: )
�sortedrrWZpidsrr�insertr��failr)rY�
differencer))r�r�r*�	test_pidsKs�zTestSystemAPIs.test_pidszunreliable on SUNOSrK�ifconfigzno ifconfig cmdz
not supportedcCsLtd�}tjdd�D]2}|��D]}|�|�r qq |�d||f��qdS)Nzifconfig -aT)Zpernicz/couldn't find %s nic in 'ifconfig -a' output
%s)rrWZnet_io_countersr,r�r�)rYr&Znicr(r)r)r*�test_nic_names_s
��zTestSystemAPIs.test_nic_namescCs�td�}|��st�d��|�d�}dd�|D�}dd�|D�}t|�tt���ksXJ�|jt��|d��`t	t���D]B\}}|j
||ks�J�|j||ks�J�|jdurxt�
|j�qxWd�n1s�0YdS)N�who -u�no users on this systemr9cSsg|]}|��d�qS)r�r,r�r)r)r*r�vr�z-TestSystemAPIs.test_users.<locals>.<listcomp>cSsg|]}|��d�qS)rr�r�r)r)r*r�wr��rWZwho)rr rrAr,r�rW�users�subTest�	enumeratersZterminalr$rX)rYr3�linesr�Z	terminals�idx�ur)r)r*�
test_usersps


zTestSystemAPIs.test_userscCstd�}|��st�d��d}t�d|�}|r4d}nNt�d|�}|rJd}n8t�d|�}|r`d}n"t�d	|�}|r�d}d
d�|D�}|s�t�d|��|jt��|d
��Jt	t���D],\}}t
j
�|j��
|�}|||ks�J�q�Wd�n1s�0YdS)Nr�r�z\d\d\d\d-\d\d-\d\d \d\d:\d\dz%Y-%m-%d %H:%Mz[A-Z][a-z][a-z] \d\d \d\d:\d\dz%b %d %H:%Mz[A-Z][a-z][a-z] \d\dz%b %dz[a-z][a-z][a-z] \d\dcSsg|]}|���qSr))�
capitalizer�r)r)r*r��r�z5TestSystemAPIs.test_users_started.<locals>.<listcomp>z(cannot interpret tstamp in who output
%sr�)rr rrAr1�findallr�rWr�r�r�r��startedr�)rYr3Ztstampr�r�r�Zpsutil_valuer)r)r*�test_users_started�s:
���z!TestSystemAPIs.test_users_startedc	Cs~tjdttjd�d��T}t�t�� tj�	t
���Wd�n1sH0Y|js\J�Wd�n1sp0YdS)Nzpsutil._psposix.os.killr0rd)
rre�OSError�errno�EBADFrr�rW�_psposixZ
pid_existsrp�getpidrg�rY�mr)r)r*�test_pid_exists_let_raise�s�.z(TestSystemAPIs.test_pid_exists_let_raisec	Cs~tjdttjd�d��T}t�t�� tj�	t
���Wd�n1sH0Y|js\J�Wd�n1sp0YdS)N�psutil._psposix.os.waitpidr0rd)
rrer�r�r�rr�rWr��wait_pidrpr�rgr�r)r)r*�test_os_waitpid_let_raise�s�.z(TestSystemAPIs.test_os_waitpid_let_raisec	Cs�tjdttjd�d��\}t�tjj	��$tjj
t��dd�Wd�n1sP0Y|j
sdJ�Wd�n1sx0YdS)Nr�r0rdg{�G�z�?)�timeout)rrer�r�ZEINTRrr�rWr��TimeoutExpiredr�rpr�rgr�r)r)r*�test_os_waitpid_eintr�s�2z$TestSystemAPIs.test_os_waitpid_eintrc	Csvtjddd��T}t�t�� tj�t�	��Wd�n1s@0Y|j
sTJ�Wd�n1sh0YdS)Nr�)r���ry)rrerr�r"rWr�r�rpr�rgr�r)r)r*�test_os_waitpid_bad_ret_status�s�.z-TestSystemAPIs.test_os_waitpid_bad_ret_statuszunreliable on AIXc	Cs�d}tjdd�D]�}t�|j�}zt|j�\}}}}WnVty�}z>t|���}d|vsjd|vsjd|vrvWYd}~q�WYd}~qd}~00t	|j
|�|ks�J�t	|j|�|ks�J�t	|j|�|ks�J�t	|j
|�dksJ�qdS)Ni@F)�allzno such file or directoryzraw devices not supportedzpermission deniedr)rWZdisk_partitions�
disk_usageZ
mountpointrJrCr?rr@�abs�total�used�free�percent)	rYZ	tolerance�part�usagerFrGrHrIrDr)r)r*�test_disk_usage�s&���zTestSystemAPIs.test_disk_usageN)r�r�r�r�rr�rr�r�rrr
r�r�r�r�r�r�r�rr�r)r)r)r*r�Gs$


&
	
r�c@seZdZdd�ZdS)�TestMisccCs4t�}|dksJ�|t��ks"J�|tjks0J�dS)Nr)r�resource�mmapZPAGESIZE)rYZpagesizer)r)r*�test_getpagesize�szTestMisc.test_getpagesizeN)r�r�r�r�r)r)r)r*r��sr�)N),r�r�r�rpr1rOrkrWrrrrrrrZpsutil.testsr	r
rrr
rrrrrrrrr�r�Zpsutil._psutil_posixrrr.r4r7r8rJr�r�rMr�r�r)r)r)r*�<module>sZ
8
-#PKok\v�b�?�?8psutil/tests/__pycache__/test_process_all.cpython-39.pycnu�[���a

��?h�H�@s�dZddlZddlZddlZddlZddlZddlZddlZddlZddlm	Z	ddlm
Z
ddlmZddlmZddlm
Z
ddlmZdd	lmZdd
lmZddlmZddlmZdd
lmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlm Z ddlm!Z!ddlm"Z"ddlm#Z#e�o�e�o�eZ$dd�Z%Gdd�de�Z&Gd d!�d!e�Z'dS)"zeIterate over all process PIDs and for each one of them invoke and
test all psutil.Process() methods.
�N)�AIX)�BSD)�FREEBSD)�LINUX)�MACOS)�NETBSD)�OPENBSD)�OSX)�POSIX)�WINDOWS)�PY3)�FileNotFoundError)�long)�unicode)�
CI_TESTING)�PYTEST_PARALLEL)�	QEMU_USER)�VALID_PROC_STATUSES)�PsutilTestCase)�check_connection_ntuple)�create_sockets)�
is_namedtuple)�is_win_secure_system_proc)�process_namespace)�pytestcs*t����fdd�������fdd�}zt����Wn"tjyX����iYS0z��ddg�}Wntjy�����Yn�0|d|d��d�ji}t��}|j	|j
dd	�D]Z\}}z|�||<Wq�tj�y}z&�|����WYd}~q�WYd}~q�d}~00q�|�|SdS)
Ncs���|j��|jdur&��|j|�t|tj�rd��|�|jdurz��|jd���|j|�nt|tj	�rz��
|�t|�t|�dS�Nr)
�assertEqual�pid�name�
isinstance�psutilZ
ZombieProcessZassertProcessZombie�ppidZassertGreaterEqual�
NoSuchProcess�assertProcessGone�str�repr)�exc�procrr!)r�tcase��I/usr/local/lib64/python3.9/site-packages/psutil/tests/test_process_all.py�check_exception7s



z"proc_info.<locals>.check_exceptionc
sP�dkrLz��d�Wn4tjyJ}z�|����WYd}~n
d}~00dSr)�waitr �Error)r&)r+rrr!r'r)r*�do_waitEs
zproc_info.<locals>.do_waitr!rrF)�clear_cache)rr �Processr"Z
assertPidGoneZas_dictr#rr�iterZgettersr-)rr.�d�info�nsZfunZfun_namer&r))r+rrr!r'r(r*�	proc_info4s.


$r5c@s8eZdZdZdd�Zdd�Zdd�Zdd	�Zd
d�Zdd
�Z	dd�Z
dd�Zdd�Zdd�Z
dd�Zdd�Zdd�Zdd�Zdd�Zd d!�Zd"d#�Zd$d%�Zd&d'�Zd(d)�Zd*d+�Zd,d-�Zd.d/�Zd0d1�Zd2d3�Zd4d5�Zd6d7�Zd8d9�Zd:d;�Z d<d=�Z!d>d?�Z"d@dA�Z#dBdC�Z$dDdE�Z%dFdG�Z&dHdI�Z'dJdK�Z(dLS)M�TestFetchAllProcessesz�Test which iterates over all running processes and performs
    some sanity checks against Process API's returned values.
    Uses a process pool to get info about all processes.
    cCst�d�trt��|_dS�NF)r �
_set_debug�
USE_PROC_POOL�multiprocessingZPool�pool��selfr)r)r*�setUpks
zTestFetchAllProcesses.setUpcCs&t�d�tr"|j��|j��dS�NT)r r8r9r;�	terminate�joinr<r)r)r*�tearDownrs

zTestFetchAllProcesses.tearDowncCsJddlm}tr"|j�|t���Sg}t��D]}|�||��q.|SdS)Nr)r5)Zpsutil.tests.test_process_allr5r9r;Zimap_unorderedr �pids�append)r=r5Zlsrr)r)r*�iter_proc_infoxsz$TestFetchAllProcesses.iter_proc_infocCs�g}|��D]�}|��D]�\}}t||�}z|||�Wnpty�d}|d||dt|�|f7}|d7}|dt��7}d�dd�|��D��d}|�	|�Yq0|d	d
gddifvr|sJ|��qq|r�|�
d�|���dS)NzH
======================================================================
z+FAIL: name=test_%s, pid=%s, ret=%s
info=%s
rzF----------------------------------------------------------------------z
%s�
css|]}d|VqdS)z    Nr))�.0�ir)r)r*�	<genexpr>��z1TestFetchAllProcesses.test_all.<locals>.<genexpr>r��)rE�items�getattr�	Exceptionr%�	traceback�
format_excrA�
splitlinesrDZfail)r=Zfailuresr3r�value�meth�sr)r)r*�test_all�s,
�zTestFetchAllProcesses.test_allcCs*t|t�sJ�|D]}t|t�sJ�qdS�N)r�listr$)r=�retr3�partr)r)r*�cmdline�szTestFetchAllProcesses.cmdlinecCs�t|ttf�sJ�|��|ks"J�|r�tr8|�d�s8dStj�|�sLJ|��t	r�tj�
|�r�ttd�r�ttd�r�zt�|tj
�s�J�Wn$ty�tj�|�r�ts��Yn0dS)Nz.exe�access�X_OK)rr$r�stripr�endswith�os�path�isabsr
�isfile�hasattrr\r]�AssertionError�existsr�r=rYr3r)r)r*�exe�szTestFetchAllProcesses.execCst|t�sJ�|dksJ�dSr�r�intrgr)r)r*r�szTestFetchAllProcesses.pidcCs*t|ttf�sJ�|dksJ�t|�dSr)rrjrr5rgr)r)r*r!�szTestFetchAllProcesses.ppidcCsBt|ttf�sJ�tr*|s*t|d�r*dSts>|s>Jt|���dS)Nr)rr$rrrrr%rgr)r)r*r�s
zTestFetchAllProcesses.namecCs^t|t�sJ�z|dksJ�Wn(tyFtr@|dtjkr@n�Yn0t�dt�|��dS)Nr�statusz%Y %m %d %H:%M:%S)	r�floatrerr Z
STATUS_ZOMBIE�time�strftime�	localtimergr)r)r*�create_time�sz!TestFetchAllProcesses.create_timecCs4t|�sJ�|D]}t|t�s"J�|dksJ�qdSr)rrrj)r=rYr3�uidr)r)r*�uids�szTestFetchAllProcesses.uidscCs<t|�sJ�|D]&}t|t�s"J�tsts|dksJ�qdSr)rrrjrr)r=rYr3�gidr)r)r*�gids�s
zTestFetchAllProcesses.gidscCs.t|t�sJ�|��|ksJ�|��s*J�dSrW)rr$r^rgr)r)r*�username�szTestFetchAllProcesses.usernamecCs>t|t�sJ�|sJ|��tr"dS|dks.J�|tvs:J�dS)N�?)rr$rrrgr)r)r*rk�szTestFetchAllProcesses.statuscCs@t|�sJ�|D]*}t|ttf�s&J�|dkr|dksJ�qdS)N���r�rrrjr)r=rYr3�fieldr)r)r*�io_counters�s
z!TestFetchAllProcesses.io_counterscCs�trBt|jt�sJ�t|jt�s$J�|jdks2J�|jdks|J�n:tjtjtjtj	g}t|t�sdJ�|dkspJ�||vs|J�dSr)
rrZioclassrjrSr ZIOPRIO_VERYLOWZ
IOPRIO_LOWZ
IOPRIO_NORMALZIOPRIO_HIGH)r=rYr3�choicesr)r)r*�ionice�s�zTestFetchAllProcesses.ionicecCs:t|t�sJ�tr*|dkr*t|d�r*dS|dks6J�dS)Nrr�)rrjrrrgr)r)r*�num_threads
sz!TestFetchAllProcesses.num_threadscCsnt|t�sJ�|D]V}t|�s"J�|jdks0J�|jdks>J�|jdksLJ�|D]}t|ttf�sPJ�qPqdSr)rrXr�idZ	user_timeZsystem_timerjrl)r=rYr3�tryr)r)r*�threadsszTestFetchAllProcesses.threadscCs4t|�sJ�|D]}t|t�s"J�|dksJ�qdSr)rrrl)r=rYr3�nr)r)r*�	cpu_timesszTestFetchAllProcesses.cpu_timescCs0t|t�sJ�d|kr"dks,nJ|��dS)NrKgY@�rrlrgr)r)r*�cpu_percent%sz!TestFetchAllProcesses.cpu_percentcCs^t|t�sJ�tr|dkrdS|dks*J�t��dkrB|dksBJ�|ttt����vsZJ�dS)Nrwrr})rrjrr �	cpu_countrX�rangergr)r)r*�cpu_num)szTestFetchAllProcesses.cpu_numcCs|t|�sJ�|D]"}t|ttf�s&J�|dksJ�qtrx|j|jksHJ�|j|jksXJ�|j	|j
kshJ�|j|jksxJ�dSr)
rrrjrrZ	peak_wsetZwsetZpeak_paged_poolZ
paged_poolZpeak_nonpaged_poolZ
nonpaged_poolZ
peak_pagefileZpagefile�r=rYr3rSr)r)r*�memory_info2sz!TestFetchAllProcesses.memory_infocCs�t|�sJ�t��j}|jD]N}t||�}t|ttf�s<J�|dksHJ�t	st
rZ|dvrZq||ksJ|��qt	r�|j|jks�J�dS)Nr)Zvms�data)
rr Zvirtual_memory�total�_fieldsrNrrjrrr	ZpssZuss)r=rYr3r�rrSr)r)r*�memory_full_info=s


z&TestFetchAllProcesses.memory_full_infoc	Cs"t|t�sJ�|D�]}t|jt�s(J�t|jt�s8J�|j��|jksLJ�tr`|jdks�J�nltr�t|j	t�stJ�t|j
t�s�J�t|jt�s�J�|j	dks�J�|j
dvs�J�|jdks�J�ntr�|js�qt
j�|j�s�J|��zt
�|j�}Wnt�yYq0t�|j�sJ|��qdS)Nrwr)�r�w�azr+za+)rrX�fdrjrar$r^rr�position�mode�flagsrr`rb�statr
�S_ISREG�st_mode)r=rYr3�f�str)r)r*�
open_filesMs,

z TestFetchAllProcesses.open_filescCst|t�sJ�|dksJ�dSrrirgr)r)r*�num_fdsgszTestFetchAllProcesses.num_fdscCs`t��Ft|�tt|��ks J�|D]}t|�s4J�t|�q$Wd�n1sR0YdSrW)r�len�setrr)r=rYr3�connr)r)r*�net_connectionsks
z%TestFetchAllProcesses.net_connectionsc
Cs�t|ttf�sJ�|��|ks"J�|r�tj�|�s:J|��zt�|�}WnDty�}z,t	rjt
j�|�rjn|j
t
jkrx�WYd}~nd}~00t�|j�s�J�dSrW)rr$rr^r`rarbr��OSErrorrr Z_psplatformZis_permission_err�errno�ENOENT�S_ISDIRr�)r=rYr3r��errr)r)r*�cwdrszTestFetchAllProcesses.cwdcCs0t|t�sJ�d|kr"dks,nJ|��dS)Nr�dr�rgr)r)r*�memory_percent�sz$TestFetchAllProcesses.memory_percentcCst|t�sJ�dSrW)r�boolrgr)r)r*�
is_running�sz TestFetchAllProcesses.is_runningcCsRt|t�sJ�|gksJ�ttt����}|D]}t|t�s@J�||vs.J�q.dSrW)rrXr�r r�rj)r=rYr3Zcpusr�r)r)r*�cpu_affinity�sz"TestFetchAllProcesses.cpu_affinitycCsJt|ttd�f�sJ�|durFtj�|�s2J|��tj�|�sFJ|��dSrW)rr$�typer`rarbrfrgr)r)r*�terminal�szTestFetchAllProcesses.terminalcCs�|D]�}t|jt�sJ�t|jt�s(J�t|jt�s8J�|jD]�}t||�}|dkrx|�d�s�tj�	|j�s�J|j��q>|dkr�|s�Jt
|���q>|dkr�ts�|s�Jt
|���q>t|tt
f�s�J�|dks>J�q>qdS)Nra)�[zanon_inode:�addr�permsr)rr�r$r�rar�rN�
startswithr`rbr%rrjr)r=rYr3�nt�fnamerSr)r)r*�memory_maps�s 


z!TestFetchAllProcesses.memory_mapscCst|t�sJ�|dksJ�dSrrirgr)r)r*�num_handles�sz!TestFetchAllProcesses.num_handlescCsxt|t�sJ�tr2d|kr&dkstnJ|��nBdd�tt�D�}||vsPJ�trft|tj�stJ�nt|t�stJ�dS)Ni���cSs g|]}|�d�rtt|��qS)Z_PRIORITY_CLASS)r_rNr )rG�xr)r)r*�
<listcomp>�s
�z.TestFetchAllProcesses.nice.<locals>.<listcomp>)rrjr
�dirr r�enum�IntEnum)r=rYr3Z
prioritiesr)r)r*�nice�s �zTestFetchAllProcesses.nicecCs8t|�sJ�|D]"}t|ttf�s&J�|dksJ�qdSrrxr�r)r)r*�num_ctx_switches�sz&TestFetchAllProcesses.num_ctx_switchescCsBt|t�sJ�t|�dksJ�|ddks.J�|ddks>J�dS)N�rrwr})r�tupler�rgr)r)r*�rlimit�szTestFetchAllProcesses.rlimitcCs@t|t�sJ�|��D]$\}}t|t�s,J�t|t�sJ�qdSrW)r�dictrMr$)r=rYr3�k�vr)r)r*�environ�szTestFetchAllProcesses.environN))�__name__�
__module__�__qualname__�__doc__r>rBrErVr[rhrr!rrprrrtrurkrzr|r~r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r)r)r)r*r6esL
			
	r6c@s(eZdZdZdd�Zdd�Zdd�ZdS)	�
TestPidsRangea@Given pid_exists() return value for a range of PIDs which may or
    may not exist, make sure that psutil.Process() and psutil.pids()
    agree with pid_exists(). This guarantees that the 3 APIs are all
    consistent with each other. See:
    https://github.com/giampaolo/psutil/issues/2359

    XXX - Note about Windows: it turns out there are some "hidden" PIDs
    which are not returned by psutil.pids() and are also not revealed
    by taskmgr.exe and ProcessHacker, still they can be instantiated by
    psutil.Process() and queried. One of such PIDs is "conhost.exe".
    Running as_dict() for it reveals that some Process() APIs
    erroneously raise NoSuchProcess, so we know we have problem there.
    Let's ignore this for now, since it's quite a corner case (who even
    imagined hidden PIDs existed on Windows?).
    cCst�d�dSr7�r r8r<r)r)r*r>�szTestPidsRange.setUpcCst�d�dSr?r�r<r)r)r*rB�szTestPidsRange.tearDownc	Csfdd�}dd�}tdd�D]F}tr,||�r,q|j|d��||�Wd�q1sV0YqdS)NcSs�ztd|d�}Wnty&YdS0|�V|D]8}|�d�r2t|��d�}||kWd�Sq2td��Wd�n1s�0YdS)Nz/proc/%s/status�rbFsTgid:r}z'Tgid' line not found)�openr
r�rj�split�
ValueError)rr��lineZtgidr)r)r*�is_linux_tid�s
z+TestPidsRange.test_it.<locals>.is_linux_tidc	Ss�d}t�|�}zv|r4t�|�ts�|t��vs�J�nNtsnt�tj��t�|�Wd�n1sd0Yts�|t��vs�J�Wn*tj	t
fy�|d8}|dkr��Yq0dSqdS)N�r}r)r Z
pid_existsr0rrCrrZraisesr"r-re)rr�rfr)r)r*�check�s"

(z$TestPidsRange.test_it.<locals>.checkr}i�)r)r�rZsubTest)r=r�r�rr)r)r*�test_it�szTestPidsRange.test_itN)r�r�r�r�r>rBr�r)r)r)r*r��sr�)(r�r�r�r:r`r�rmrPr rrrrrrrr	r
rZpsutil._compatrr
rrZpsutil.testsrrrrrrrrrrrr9r5r6r�r)r)r)r*�<module>sN1qPKok\+��N�N0psutil/tests/__pycache__/test_bsd.cpython-39.pycnu�[���a

��?h�N�@s�dZddlZddlZddlZddlZddlZddlmZddlmZddlmZddlm	Z	ddl
mZddl
mZdd	l
m
Z
dd
l
mZddl
mZddl
mZdd
l
mZddl
mZddl
mZer�ddlmZe�Ze��dko�ed�ZndZdZdd�Zdd�Zejjedd�Gdd�de
��Zejjedd�Gdd�de
��Zejjedd�Gdd�de
��Zejje	dd�Gd d!�d!e
��Z ejjed"d�Gd#d$�d$e
��Z!dS)%z$Tests specific to all BSD platforms.�N)�BSD)�FREEBSD)�NETBSD)�OPENBSD)�HAS_BATTERY)�TOLERANCE_SYS_MEM)�PsutilTestCase)�pytest)�retry_on_failure)�sh)�spawn_testproc)�	terminate)�which)�getpagesize�museFcCsltd|�}tr(||�d�dd�}nts0trF||�d�dd�}z
t|�WStyf|YS0dS)zmExpects a sysctl command with an argument and parse the result
    returning only the value of interest.
    zsysctl z: �N�=�)rr�findrr�int�
ValueError)�cmdline�result�r�A/usr/local/lib64/python3.9/site-packages/psutil/tests/test_bsd.py�sysctl-s
rcCs>td�}|�d�D]}|�|�rq.qtd��t|��d�S)z+Thin wrapper around 'muse' cmdline utility.r�
zline not foundr)r�split�
startswithrr)�field�out�linerrrr<s
zBSD only��reasonc@s�eZdZdZedd��Zedd��Zejj	e
dd�dd	��Zd
d�Zejj	e
d�d
d�dd��Zejj	e
d�d
d�ejj	e
dd�dd���Zejj	e
d�dd�dd��ZdS)�BSDTestCasez)Generic tests common to all BSD variants.cCst�j|_dS�N�r�pid��clsrrr�
setUpClassPszBSDTestCase.setUpClasscCst|j�dSr%�r
r'r(rrr�
tearDownClassTszBSDTestCase.tearDownClassz -o lstart doesn't work on NETBSDr"cCsPtd|j�}|�dd���}t�|j���}t�dt�	|��}||ksLJ�dS)Nzps -o lstart -p %sZSTARTED�z%a %b %e %H:%M:%S %Y)
rr'�replace�strip�psutil�ProcessZcreate_time�time�strftime�	localtime)�self�outputZstart_psZstart_psutilrrr�test_process_create_timeXs
�z$BSDTestCase.test_process_create_timecCs�dd�}tjdd�D]�}t�|j�}||j�\}}}}|j|ksDJ�|j|ksRJ�t|j|�dkrx|�d|j|f��t|j	|�dkr|�d|j	|f��qdS)NcSs�td|���}|�d�}|�d�|�d�}|��dd�\}}}}|dkrRd}t|�d}t|�d}t|�d}||||fS)Nz
df -k "%s"rr��noner-�)rr/r�popr)�pathr �linesr!�dev�total�used�freerrr�dfes


z"BSDTestCase.test_disks.<locals>.dfF)�alli�zpsutil=%s, df=%s)
r0Zdisk_partitions�
disk_usageZ
mountpointZdevicer?�absrAZfailr@)r5rB�part�usager>r?r@rArrr�
test_disksbs
zBSDTestCase.test_disksrzsysctl cmd not availablecCs td�}tjdd�|ksJ�dS)Nzhw.ncpuT)�logical)rr0�	cpu_count�r5Zsystrrr�test_cpu_count_logical}sz"BSDTestCase.test_cpu_count_logicalzskipped on NETBSDcCstd�}|t��jksJ�dS)Nz
hw.physmem)rr0�virtual_memoryr?�r5�numrrr�test_virtual_memory_total�sz%BSDTestCase.test_virtual_memory_total�ifconfigzifconfig cmd not availablec	Csvt����D]d\}}ztd|�}Wnty6Yq0|jd|vksJJ�d|vr|jtt�	d|�d�ksJ�qdS)Nzifconfig %s�RUNNING�mtuz	mtu (\d+)r)
r0Znet_if_stats�itemsr�RuntimeErrorZisuprSr�re�findall)r5�name�statsr rrr�test_net_if_stats�szBSDTestCase.test_net_if_statsN)�__name__�
__module__�__qualname__�__doc__�classmethodr*r,r	�mark�skipifrr7rHrrLrPrZrrrrr$Ls&


	
�
�r$zFREEBSD onlyc@sfeZdZedd��Zedd��Ze�dd��Zdd�Zd	d
�Z	dd�Z
e�d
d��Ze�dd��ZdS)�FreeBSDPsutilTestCasecCst�j|_dSr%r&r(rrrr*�sz FreeBSDPsutilTestCase.setUpClasscCst|j�dSr%r+r(rrrr,�sz#FreeBSDPsutilTestCase.tearDownClasscCs�td|j�}t�|j�jdd�}|�d�dd�}|r�|��}|��}|dd�\}}}}	}
|��}d||f|jks|J�t|
�|j	ks�J�|j
�d�s4|d	|j
ks4J�q4dS)
Nzprocstat -v %sF)Zgroupedrr�z%s-%s�[�
)rr'r0r1Zmemory_mapsrr;�addrrZrssr<r)r5r �mapsr=r!�fields�_�start�stopZ_perms�res�maprrr�test_memory_maps�sz&FreeBSDPsutilTestCase.test_memory_mapscCs<td|j�}t�|j���|�d�d��dks8J�dS)Nzprocstat -b %srr���)rr'r0r1Zexer�r5r rrr�test_exe�szFreeBSDPsutilTestCase.test_execCsLtd|j�}d�t�|j����d�|�d�d��dd��ksHJ�dS)Nzprocstat -c %s� rrr)rr'�joinr0r1rrrprrr�test_cmdline�s�z"FreeBSDPsutilTestCase.test_cmdlinecCs�td|j�}|�d�d��dd�\}}}}}}t�|j�}|��}	|��}
|	jt|�ksbJ�|	j	t|�kstJ�|	j
t|�ks�J�|
jt|�ks�J�|
j	t|�ks�J�|
j
t|�ks�J�dS)Nzprocstat -s %srrr�)rr'rr0r1�uids�gids�realrZ	effectiveZsaved)r5r ZeuidZruidZsuidZegidZrgidZsgid�prvrwrrr�test_uids_gids�s&z$FreeBSDPsutilTestCase.test_uids_gidscCs�g}td|j�}t�|j�}|�d�D]�}|����}d|vrrt|��d�}|��j	}||ksfJ�|�
d�q(d|vr(t|��d�}|��j}||ks�J�|�
d�q(t|�dkr�t
d��dS)N�procstat -r %srz voluntary contextroz involuntary contextr�)couldn't find lines match in procstat out)rr'r0r1r�lowerr/rZnum_ctx_switchesZ	voluntary�appendZinvoluntary�lenrU�r5Ztestedr ryr!Zpstat_valueZpsutil_valuerrr�test_ctx_switches�s"

z'FreeBSDPsutilTestCase.test_ctx_switchescCs�g}td|j�}t�|j�}|�d�D]�}|����}d|vr�td|��d�d�d�}|��j	}||kstJ�|�
d�q(d|vr(td|��d�d�d�}|��j}||ks�J�|�
d�q(t|�dkr�t
d	��dS)
Nr{rz	user timez0.ro�.zsystem timerr|)rr'r0r1rr}r/�floatZ	cpu_times�userr~�systemrrUr�rrr�test_cpu_times�s"

z$FreeBSDPsutilTestCase.test_cpu_timesN)
r[r\r]r_r*r,r
rnrqrtrzr�r�rrrrrb�s




rbc@s�eZdZedd��Zdd�Ze�dd��Ze�dd��Ze�d	d
��Z	e�dd��Z
e�d
d��Ze�dd��Ze
jjedd�dd��Ze
jjedd�e�dd���Ze
jjedd�e�dd���Ze
jjedd�e�dd���Ze
jjedd�e�dd���Ze
jjedd�e�dd���Ze
jjedd�e�dd ���Zd!d"�Zd#d$�Zd%d&�Ze�d'd(��Zd)d*�Zd+d,�Zd-d.�Zd/d0�Ze
jje d1d�d2d3��Z!e
jje d1d�d4d5��Z"e
jje d6d�d7d8��Z#d9d:�Z$d;S)<�FreeBSDSystemTestCasecCsRtd���d}t�d|�}|s,td|��dd�|dd�D�\}}}|||fS)	Nzswapinfo -kroz\s+zCan't parse swapinfo: %scss|]}t|�dVqdS)r:N)r)�.0ryrrr�	<genexpr>�z7FreeBSDSystemTestCase.parse_swapinfo.<locals>.<genexpr>rr8)r�
splitlinesrVrr)r6�partsr?r@rArrr�parse_swapinfo�sz$FreeBSDSystemTestCase.parse_swapinfocCs�d}ztt|��}Wnty0t�d��Yn0t��j|ksDJ�d}t|�}t|��d�d�d�}t|��d�d�d�}t��j	|ks�J�t��j
|ks�J�dS)Nzdev.cpu.0.freqz#frequencies not supported by kernelzdev.cpu.0.freq_levelsr�/ro)rrrUr	�skipr0Zcpu_freq�currentr�max�min)r5�sensor�
sysctl_resultZmax_freqZmin_freqrrr�!test_cpu_frequency_against_sysctlsz7FreeBSDSystemTestCase.test_cpu_frequency_against_sysctlcCs*td�t}tt��j|�tks&J�dS)Nzvm.stats.vm.v_active_count)r�PAGESIZErEr0rM�activerrKrrr�test_vmem_activesz&FreeBSDSystemTestCase.test_vmem_activecCs*td�t}tt��j|�tks&J�dS)Nzvm.stats.vm.v_inactive_count)rr�rEr0rM�inactiverrKrrr�test_vmem_inactive!sz(FreeBSDSystemTestCase.test_vmem_inactivecCs*td�t}tt��j|�tks&J�dS)Nzvm.stats.vm.v_wire_count)rr�rEr0rM�wiredrrKrrr�test_vmem_wired&sz%FreeBSDSystemTestCase.test_vmem_wiredcCs*td�t}tt��j|�tks&J�dS)Nzvm.stats.vm.v_cache_count)rr�rEr0rM�cachedrrKrrr�test_vmem_cached+sz&FreeBSDSystemTestCase.test_vmem_cachedcCs*td�t}tt��j|�tks&J�dS)Nzvm.stats.vm.v_free_count)rr�rEr0rMrArrKrrr�test_vmem_free0sz$FreeBSDSystemTestCase.test_vmem_freecCs&td�}tt��j|�tks"J�dS)Nzvfs.bufspace)rrEr0rM�buffersrrKrrr�test_vmem_buffers5sz'FreeBSDSystemTestCase.test_vmem_bufferszmuse not installedr"cCstd�}t��j|ksJ�dS)NZTotal)rr0rMr?rNrrr�test_muse_vmem_total<sz*FreeBSDSystemTestCase.test_muse_vmem_totalcCs&td�}tt��j|�tks"J�dS)NZActive)rrEr0rMr�rrNrrr�test_muse_vmem_activeAsz+FreeBSDSystemTestCase.test_muse_vmem_activecCs&td�}tt��j|�tks"J�dS)NZInactive)rrEr0rMr�rrNrrr�test_muse_vmem_inactiveGsz-FreeBSDSystemTestCase.test_muse_vmem_inactivecCs&td�}tt��j|�tks"J�dS)NZWired)rrEr0rMr�rrNrrr�test_muse_vmem_wiredMsz*FreeBSDSystemTestCase.test_muse_vmem_wiredcCs&td�}tt��j|�tks"J�dS)N�Cache)rrEr0rMr�rrNrrr�test_muse_vmem_cachedSsz+FreeBSDSystemTestCase.test_muse_vmem_cachedcCs&td�}tt��j|�tks"J�dS)NZFree)rrEr0rMrArrNrrr�test_muse_vmem_freeYsz)FreeBSDSystemTestCase.test_muse_vmem_freecCs&td�}tt��j|�tks"J�dS)NZBuffer)rrEr0rMr�rrNrrr�test_muse_vmem_buffers_sz,FreeBSDSystemTestCase.test_muse_vmem_bufferscCs"tt��jtd��dksJ�dS)Nzvm.stats.sys.v_swtch��)rEr0�	cpu_stats�ctx_switchesr�r5rrr�test_cpu_stats_ctx_switcheses����z1FreeBSDSystemTestCase.test_cpu_stats_ctx_switchescCs"tt��jtd��dksJ�dS)Nzvm.stats.sys.v_intrr�)rEr0r��
interruptsrr�rrr�test_cpu_stats_interruptsns��z/FreeBSDSystemTestCase.test_cpu_stats_interruptscCs"tt��jtd��dksJ�dS)Nzvm.stats.sys.v_softr�)rEr0r�Zsoft_interruptsrr�rrr�test_cpu_stats_soft_interruptsts����z4FreeBSDSystemTestCase.test_cpu_stats_soft_interruptscCs"tt��jtd��dksJ�dS)Nzvm.stats.sys.v_syscalli@
)rEr0r�Zsyscallsrr�rrr�test_cpu_stats_syscalls}s��z-FreeBSDSystemTestCase.test_cpu_stats_syscallscCs,|��\}}}tt��j|�tks(J�dSr%)r�rEr0�swap_memoryrAr)r5�_total�_usedrArrr�test_swapmem_free�sz'FreeBSDSystemTestCase.test_swapmem_freecCs,|��\}}}tt��j|�tks(J�dSr%)r�rEr0r�r@r)r5r�r@�_freerrr�test_swapmem_used�sz'FreeBSDSystemTestCase.test_swapmem_usedcCs,|��\}}}tt��j|�tks(J�dSr%)r�rEr0r�r?r)r5r?r�r�rrr�test_swapmem_total�sz(FreeBSDSystemTestCase.test_swapmem_totalcCsLtd�}||�d�dd�}|d|�d��}t|�}|t��ksHJ�dS)Nzsysctl kern.boottimez sec = ��,)rrrr0�	boot_time)r5�sZbtimerrr�test_boot_time�s
z$FreeBSDSystemTestCase.test_boot_timez
no batterycCs�dd�}td�}tdd�|�d�D��}t��}t|d�dd	��}|d
}|j|ksZJ�|dkrt|jtj	ks�J�n||j�|ks�J�dS)NcSs(t|d�\}}t|d�\}}d||fS)N�<z%d:%02d)�divmod)Zsecs�mZ_s�hrrr�
secs2hours�sz>FreeBSDSystemTestCase.test_sensors_battery.<locals>.secs2hoursz
acpiconf -i 0cSs(g|] }|�d�d|�d�df�qS)�	rro)r)r��xrrr�
<listcomp>�r�z>FreeBSDSystemTestCase.test_sensors_battery.<locals>.<listcomp>rzRemaining capacity:�%r-zRemaining time:�unknown)
r�dictrr0�sensors_batteryrr.�percent�secsleftZPOWER_TIME_UNLIMITED)r5r�r rhZmetricsr�Zremaining_timerrr�test_sensors_battery�s�z*FreeBSDSystemTestCase.test_sensors_batterycCslt��jtd�ksJ�t��jtd�dkks0J�t��j}|dkrTtd�dkshJ�n|td�dkshJ�dS)N�hw.acpi.battery.life�hw.acpi.aclinerr�hw.acpi.battery.timeror�)r0r�r�rZ
power_pluggedr�)r5r�rrr�#test_sensors_battery_against_sysctl�s
�

�
z9FreeBSDSystemTestCase.test_sensors_battery_against_sysctlzhas batterycCsVt�t��(td�td�td�Wd�n1s80Yt��dusRJ�dS)Nr�r�r�)r	ZraisesrUrr0r�r�rrr�test_sensors_battery_no_battery�s
&z5FreeBSDSystemTestCase.test_sensors_battery_no_batteryc	Cs�t�d�}t|�D]�}d|}zttt|�dd���}WntyVt�d��Yn0t	t�
�d|j|�dkszJ�d|}ttt|�dd���}t�
�d|j|ksJ�qdS)NTzdev.cpu.%s.temperatureroz$temperatures not supported by kernelZcoretemprezdev.cpu.%s.coretemp.tjmax)
r0rJ�rangerr�rrUr	r�rEZsensors_temperaturesr��high)r5Znum_cpus�cpur�r�rrr�(test_sensors_temperatures_against_sysctl�s*
������z>FreeBSDSystemTestCase.test_sensors_temperatures_against_sysctlN)%r[r\r]�staticmethodr�r�r
r�r�r�r�r�r�r	r`ra�MUSE_AVAILABLEr�r�r�r�r�r�r�r�r�r�r�r�r�r�r�rr�r�r�r�rrrrr��sf







		

	



r�zOPENBSD onlyc@seZdZdd�ZdS)�OpenBSDTestCasecCs6td�}tj�|d�}tj�t���}||ks2J�dS)Nz
kern.boottimez%a %b %d %H:%M:%S %Y)r�datetime�strptime�
fromtimestampr0r�)r5r�Zsys_btZ	psutil_btrrrr��szOpenBSDTestCase.test_boot_timeN)r[r\r]r�rrrrr��sr�zNETBSD onlyc@sheZdZedd��Zdd�Zdd�Zdd�Zd	d
�Zdd�Z	d
d�Z
dd�Zdd�Zdd�Z
dd�ZdS)�NetBSDTestCasecCsrtd��J}|D]4}|�|�rt|��d�dWd�SqWd�n1sX0Ytd|��dS)Nz
/proc/meminforr:z
can't find %s)�openrrrr)Zlook_for�fr!rrr�
parse_meminfo�s


FzNetBSDTestCase.parse_meminfocCst��j|�d�ksJ�dS)Nz	MemTotal:)r0rMr?r�r�rrr�test_vmem_total	szNetBSDTestCase.test_vmem_totalcCs$tt��j|�d��tks J�dS)NzMemFree:)rEr0rMrAr�rr�rrrr�s��zNetBSDTestCase.test_vmem_freecCs$tt��j|�d��tks J�dS)NzBuffers:)rEr0rMr�r�rr�rrrr�s����z NetBSDTestCase.test_vmem_bufferscCs$tt��j|�d��tks J�dS)Nz
MemShared:)rEr0rMZsharedr�rr�rrr�test_vmem_shareds����zNetBSDTestCase.test_vmem_sharedcCs$tt��j|�d��tks J�dS)NzCached:)rEr0rMr�r�rr�rrrr�$s��zNetBSDTestCase.test_vmem_cachedcCs$tt��j|�d��tks J�dS)Nz
SwapTotal:)rEr0r�r?r�rr�rrrr�,s��z!NetBSDTestCase.test_swapmem_totalcCs$tt��j|�d��tks J�dS)Nz	SwapFree:)rEr0r�rAr�rr�rrrr�2s��z NetBSDTestCase.test_swapmem_freecCs"t��}|j|j|jksJ�dSr%)r0r�r@r?rA)r5Zsmemrrrr�8sz NetBSDTestCase.test_swapmem_usedcCsxtdd��@}|D]"}|�d�rt|��d�}q<qtd��Wd�n1sP0Ytt��j|�dkstJ�dS)N�
/proc/stat�rbsintrr�couldn't find liner�)	r�rrrrrEr0r�r�)r5r�r!r�rrrr�>s
&z(NetBSDTestCase.test_cpu_stats_interruptscCsxtdd��@}|D]"}|�d�rt|��d�}q<qtd��Wd�n1sP0Ytt��j|�dkstJ�dS)Nr�r�sctxtrr�r�)	r�rrrrrEr0r�r�)r5r�r!r�rrrr�Hs
&z*NetBSDTestCase.test_cpu_stats_ctx_switchesN)r[r\r]r�r�r�r�r�r�r�r�r�r�r�r�rrrrr��s
			
r�)"r^r��osrVr2r0rrrrZpsutil.testsrrrr	r
rrr
rZpsutil._psutil_posixrr��getuidr�rrr`rar$rbr�r�r�rrrr�<module>
sJQXx
PKok\{��
y
y4psutil/tests/__pycache__/test_windows.cpython-39.pycnu�[���a

��?h؄�@s�dZddlZddlZddlZddlZddlZddlZddlZddlZddl	Z	ddl
Z
ddlZddlZddlm
Z
ddlmZddlmZddlmZddlmZddlmZdd	lmZdd
lmZddlmZddlmZdd
lmZddlmZddlmZddlmZddlmZddlmZddlmZddlm Z ddlm!Z!e
�r�e�s�e�"��:e�#d�ddl$Z$ddl%Z%ddl&Z&ddl'Z'Wd�n1�s�0Ye
�r�ddl(m)Z)ej*j+Z+ej,j-e
dd�ej,j-edd�ej,j-e�o�edd�Gdd�de����Z.dd�Z/e0fd d!�Z1Gd"d#�d#e.�Z2Gd$d%�d%e.�Z3Gd&d'�d'e.�Z4Gd(d)�d)e.�Z5Gd*d+�d+e.�Z6ej,j-e
dd�Gd,d-�d-e��Z7ej,j-e
dd�Gd.d/�d/e��Z8ej,j-e
dd�Gd0d1�d1e��Z9dS)2zWindows specific tests.�N)�WINDOWS)�FileNotFoundError)�super)�which)�APPVEYOR)�GITHUB_ACTIONS)�HAS_BATTERY)�IS_64BIT)�PY3)�PYPY)�TOLERANCE_DISK_USAGE)�TOLERANCE_SYS_MEM)�PsutilTestCase)�mock)�pytest)�retry_on_failure)�sh)�spawn_testproc)�	terminate�ignore)�convert_oserrorzWINDOWS only��reasonzpywin32 not available on PYPYzpywin32 broken on GITHUB + PY2c@seZdZdS)�WindowsTestCaseN)�__name__�
__module__�__qualname__�rr�E/usr/local/lib64/python3.9/site-packages/psutil/tests/test_windows.pyr;srcCs&td�st�d��dd|}t|�S)z�Currently not used, but available just in case. Usage:

    >>> powershell(
        "Get-CIMInstance Win32_PageFileUsage | Select AllocatedBaseSize")
    zpowershell.exezpowershell.exe not availablez?powershell.exe -ExecutionPolicy Bypass -NoLogo -NonInteractive z,-NoProfile -WindowStyle Hidden -Command "%s")rr�skipr)�cmd�cmdlinerrr�
powershellEs
��r"csjtd||f���}d�|��dd����}�durbd|vrXt�fdd�|��D��S�|�Sn|SdS)z�Currently not used, but available just in case. Usage:

    >>> wmic("Win32_OperatingSystem", "FreePhysicalMemory")
    2134124534
    zwmic path %s get %s��N�,csg|]}�|��qSrr��.0�x��	converterrr�
<listcomp>^�zwmic.<locals>.<listcomp>)r�strip�join�
splitlines�tuple�split)�path�whatr*�out�datarr)r�wmicTs
r6c@sReZdZejjdejvdd�dd��Zdd�Z	dd	�Z
d
d�Zdd
�Zdd�Z
dS)�TestCpuAPIs�NUMBER_OF_PROCESSORSz-NUMBER_OF_PROCESSORS env var is not availablercCs"ttjd�}|t��ksJ�dS)Nr8)�int�os�environ�psutil�	cpu_count)�selfZnum_cpusrrr�&test_cpu_count_vs_NUMBER_OF_PROCESSORSksz2TestCpuAPIs.test_cpu_count_vs_NUMBER_OF_PROCESSORScCs$t��d}t��}||ks J�dS)N�)�win32apiZ
GetSystemInfor<r=�r>�	sys_value�psutil_valuerrr�test_cpu_count_vs_GetSystemInfousz+TestCpuAPIs.test_cpu_count_vs_GetSystemInfocCs2t��}tdd�|��D��}t��|ks.J�dS)Ncss|]}|jVqdS�N)ZNumberOfLogicalProcessors�r'�procrrr�	<genexpr>~sz<TestCpuAPIs.test_cpu_count_logical_vs_wmi.<locals>.<genexpr>��wmi�WMI�sum�Win32_Processorr<r=)r>�wZprocsrrr�test_cpu_count_logical_vs_wmi|s
�z)TestCpuAPIs.test_cpu_count_logical_vs_wmicCs6t��}tdd�|��D��}tjdd�|ks2J�dS)Ncss|]}|jVqdSrF)Z
NumberOfCoresrGrrrrI�r,z:TestCpuAPIs.test_cpu_count_cores_vs_wmi.<locals>.<genexpr>F)�logicalrJ)r>rOZcoresrrr�test_cpu_count_cores_vs_wmi�sz'TestCpuAPIs.test_cpu_count_cores_vs_wmicCs t��ttjdd��ksJ�dS)NT)Zpercpu)r<r=�len�	cpu_times�r>rrr�test_cpu_count_vs_cpu_times�sz'TestCpuAPIs.test_cpu_count_vs_cpu_timescCs@t��}|��d}|jt��jks(J�|jt��jks<J�dS�Nr)	rKrLrNZCurrentClockSpeedr<Zcpu_freq�currentZ
MaxClockSpeed�max)r>rOrHrrr�
test_cpu_freq�szTestCpuAPIs.test_cpu_freqN)rrrr�mark�skipifr:r;r?rErPrRrVrZrrrrr7js�
r7c@s�eZdZdd�Zdd�Zdd�Zdd�Zd	d
�Zej	j
edd�e�d
d���Z
e�dd��Ze�dd��Zdd�Zdd�Zdd�Zdd�ZdS)�TestSystemAPIscCsRtd�}tjdd���}|D]0}d|�dd���vr6q||vr|�d|��qdS)Nz
ipconfig /allT)Zperniczpseudo-interface� �-z-%r nic wasn't found in 'ipconfig /all' output)rr<Znet_io_counters�keys�replace�lower�fail)r>r4ZnicsZnicrrr�test_nic_names�s�zTestSystemAPIs.test_nic_namescCs,t����d}t|j�t��jks(J�dSrW)rKrLZWin32_ComputerSystemr9ZTotalPhysicalMemoryr<�virtual_memory�total�r>rOrrr�test_total_phymem�sz TestSystemAPIs.test_total_phymemcCs4t����d}tt|j�t��j�t	ks0J�dSrW)
rKrL�Win32_PerfRawData_PerfOS_Memory�absr9ZAvailableBytesr<re�freer
rgrrr�test_free_phymem�s
��zTestSystemAPIs.test_free_phymemcCsht����d}t|j�t��jt��jks2J�t��jdkrdt��j	dksRJ�t��j
dksdJ�dSrW)rKrLrir9ZCommitLimitr<rerf�swap_memoryrk�usedrgrrr�test_total_swapmem�s��z!TestSystemAPIs.test_total_swapmemcCs|t��jdkrxt��jdd�d}t|j�dt|j�}t��j	dksLJ�t
t��j	|�dksfJ�t��j	dksxJ�dS)NrZ_Total)�Name�dr@)r<rmrfrKrLZ#Win32_PerfRawData_PerfOS_PagingFiler9ZPercentUsageZPercentUsage_Base�percentrj)r>rOZpercentSwaprrr�test_percent_swapmem�sz#TestSystemAPIs.test_percent_swapmemztest not relieable on appveyorrcCs:t����}tdd�|D��}tt���}||ks6J�dS)NcSsg|]
}|j�qSr�Z	ProcessIdr&rrrr+�r,z,TestSystemAPIs.test_pids.<locals>.<listcomp>)rKrL�
Win32_Process�setr<�pids)r>rOZwmi_pidsZpsutil_pidsrrr�	test_pids�szTestSystemAPIs.test_pidsc
Cs�tjdd�}t����}|D]�}|D]�}|j�dd�|jkr$|jsFqd|j	vrTq|j�
d�rdqzt�|j�}Wnty�YqYn0|j
t|j�ks�J�t|j�}|j|ks�J�t|j|�dkr�|�d|j|f��qq$|�d	t|���qdS)
NT��all�\r#�cdrom�A:i�zpsutil=%s, wmi=%szcan't find partition %s)r<�disk_partitionsrKrLZWin32_LogicalDiskZdeviceraZDeviceID�
mountpoint�opts�
startswith�
disk_usagerrfr9�SizeZ	FreeSpacerkrjrc�repr)r>Zps_partsZ	wmi_partsZps_partZwmi_part�usageZwmi_freerrr�
test_disks�s0

�zTestSystemAPIs.test_diskscCs�t��D]r}d|jvrqt�|j�}t�|j�}t|d|j�t	ksJJ�t|d|j
�t	ksdJ�|j|j
|jksJ�qdS)Nr|rr$)r<r~r�rAZGetDiskFreeSpaceExrr�rjrkrrfrn)r>ZdiskrCrDrrr�test_disk_usage�s
�zTestSystemAPIs.test_disk_usagecCs>dd�t���d�D�}dd�tjdd�D�}||ks:J�dS)NcSs"g|]}|r|�d�s|d�qS)r}r{)r�r&rrrr+s�z7TestSystemAPIs.test_disk_partitions.<locals>.<listcomp>z\cSsg|]}|j�d�s|j�qS)r})rr�r&rrrr+s�Try)rAZGetLogicalDriveStringsr1r<r~rBrrr�test_disk_partitionss�
�z#TestSystemAPIs.test_disk_partitionscCs\tt���}t����}t�}|D]}|�|j�|�|j�q"||@sXJd||f��dS)Nzno common entries in %s, %s)	rv�cextZnet_if_statsrKrLZWin32_NetworkAdapter�addrpZNetConnectionID)r>Zps_namesZwmi_adaptersZ	wmi_namesZwmi_adapterrrr�test_net_if_stats
s�z TestSystemAPIs.test_net_if_statscCs^t����}|dj�d�d}tj�|d�}tj�t�	��}t
||���}|dksZJ�dS)Nr�.�%Y%m%d%H%M%Sr@)rKrLZWin32_OperatingSystemZLastBootUpTimer1�datetime�strptime�
fromtimestampr<�	boot_timerj�
total_seconds)r>Zwmi_osZ
wmi_btime_strZwmi_btime_dtZ	psutil_dt�diffrrr�test_boot_times�zTestSystemAPIs.test_boot_timecCs�tjddd�� t��dks J�Wd�n1s40Ytjddd�� t��dks^J�Wd�n1sr0Ytjddd�� t��dks�J�Wd�n1s�0Ytjddd�� t��dks�J�Wd�n1s�0YdS)Nz psutil._pswindows.cext.boot_timer@�Zreturn_value��iM)r�patchr<r�rUrrr�test_boot_time_fluctuation#s...z)TestSystemAPIs.test_boot_time_fluctuationN)rrrrdrhrlrorsrr[r\rrrxr�r�r�r�r�r�rrrrr]�s 




r]c@sheZdZdd�Zejjedd�dd��Zejjedd�dd��Z	d	d
�Z
dd�Zd
d�Zdd�Z
dS)�TestSensorsBatterycCs2t��drt��dus.J�nt��dus.J�dS)NZSystemBatteriesPresent)rAZGetPwrCapabilitiesr<�sensors_batteryrUrrr�test_has_battery5sz#TestSensorsBattery.test_has_batteryz
no batteryrcCs:t��}|�d�d}t��}t|j|j�dks6J�dS)N�select * from Win32_Batteryrr$)rKrL�queryr<r�rjrrZEstimatedChargeRemaining�r>rOZbattery_wmiZbattery_psutilrrr�test_percent;s��zTestSensorsBattery.test_percentcCs6t��}|�d�d}t��}|j|jdkks2J�dS)Nr�r�)rKrLr�r<r�Z
power_pluggedZ
BatteryStatusr�rrr�test_power_pluggedEsz%TestSensorsBattery.test_power_pluggedcCsLtjddd��*}t��dus J�|js*J�Wd�n1s>0YdS)N�&psutil._pswindows.cext.sensors_battery)r�rrr�)rr�r<r��called�r>�mrrr�test_emulate_no_batteryNs�z*TestSensorsBattery.test_emulate_no_batterycCsPtjddd��.}t��jtjks$J�|js.J�Wd�n1sB0YdS)Nr�)r$rrrr��rr�r<r��secsleftZPOWER_TIME_UNLIMITEDr�r�rrr�test_emulate_power_connectedVs���z/TestSensorsBattery.test_emulate_power_connectedcCsPtjddd��.}t��jtjks$J�|js.J�Wd�n1sB0YdS)Nr�)r�rrr�r�r�rrr�test_emulate_power_charging`s���z.TestSensorsBattery.test_emulate_power_chargingcCsPtjddd��.}t��jtjks$J�|js.J�Wd�n1sB0YdS)Nr�)rrr���r�)rr�r<r�r�ZPOWER_TIME_UNKNOWNr�r�rrr�test_emulate_secs_left_unknownjs��z1TestSensorsBattery.test_emulate_secs_left_unknownN)rrrr�rr[r\rr�r�r�r�r�r�rrrrr�4s
	


r�c@s�eZdZedd��Zedd��Zdd�Zdd�Zd	d
�Zdd�Z	d
d�Z
dd�Zdd�Zdd�Z
dd�Zdd�Zdd�Zdd�Zdd�Zdd �Zd!d"�Zd#S)$�TestProcesscCst�j|_dSrF�r�pid��clsrrr�
setUpClass{szTestProcess.setUpClasscCst|j�dSrF�rr�r�rrr�
tearDownClassszTestProcess.tearDownClasscCsBt�d�}t�tj��|��Wd�n1s40YdSrW)r<�Processr�raises�AccessDenied�kill�r>�prrr�
test_issue_24�s
zTestProcess.test_issue_24cCs�t�d�}|��dksJ�t|�|��|��dks:J�z|��dd�\}}Wn&tjyxt�	�ddvrt�Yn0|dks�J�dS)Nr��Systemgr�r$)Zvistazwin-7Zwin7r)
r<r��name�str�username�create_time�memory_infor��platform�uname)r>r��rssZ_vmsrrr�test_special_pid�s
zTestProcess.test_special_pidcCsFt�|j�}t�t��|�tj�Wd�n1s80YdSrF)	r<r�r�rr��
ValueError�send_signal�signal�SIGINTr�rrr�test_send_signal�szTestProcess.test_send_signalcCsbt�t���}|��}t�tjtj	t���}|��}||dksDJ�t�
|�|��|ks^J�dS)Nr$)r<r�r:�getpid�num_handlesrA�OpenProcess�win32con�PROCESS_QUERY_INFORMATION�FALSE�CloseHandle)r>r��before�handle�afterrrr�test_num_handles_increment�s�
z&TestProcess.test_num_handles_incrementcCs�t�|��j�}|�tj�|�tj�|��|�	�t
�tj��|�tj�Wd�n1sf0Yt
�tj��|�tj�Wd�n1s�0YdSrF)
r<r�rr�r�r�ZCTRL_C_EVENTZCTRL_BREAK_EVENTr��waitrr��
NoSuchProcessr�rrr�test_ctrl_signals�s*zTestProcess.test_ctrl_signalscCs8t�tj�}|�d�r t�d��t���	�|ks4J�dS)N�$zrunning as service account)
rAZ
GetUserNameExr�ZNameSamCompatible�endswithrrr<r�r�)r>r�rrr�
test_username�s

zTestProcess.test_usernamecCsft�ddt�����}d�t�����}|ddkrD|dkrVnn|�	ddd�}||ksbJ�dS)Nz[ ]+r^r�"r#r�)
�re�subrAZGetCommandLiner-r.r<r�r!rarBrrr�test_cmdline�s
 zTestProcess.test_cmdlinecCsJt�tjtjt���}|�tj|�t	�
|�}t���
�}||ksFJ�dSrF)rAr�r�r�r�r:r��
addCleanupr��win32processZGetPriorityClassr<r��nice�r>r�rCrDrrr�	test_nice�s�
zTestProcess.test_nicecCs�t�tjtj|j�}|�tj|�t�	|�}t
�|j���}|d|j
ksNJ�|d|jks`J�|d|jksrJ�|d|jks�J�|d|jks�J�|d|jks�J�|d|jks�J�|d|jks�J�|j|jks�J�|j|jks�J�dS)	NZPeakWorkingSetSize�WorkingSetSizeZQuotaPeakPagedPoolUsageZQuotaPagedPoolUsageZQuotaPeakNonPagedPoolUsageZQuotaNonPagedPoolUsageZ
PagefileUsageZPeakPagefileUsage)rAr�r�r�r�r�r�r�r�ZGetProcessMemoryInfor<r�r�Z	peak_wsetZwsetZpeak_paged_poolZ
paged_poolZpeak_nonpaged_poolZ
nonpaged_poolZpagefileZ
peak_pagefiler��vmsr�rrr�test_memory_info�s.�
�����zTestProcess.test_memory_infocCsXt�tjtj|j�}|�tj|�t�	|j�}|�
�|��}t�
|�}||ksTJ�dSrF)rAr�r�r�r�r�r�r�r<r�rr�r�ZGetExitCodeProcess)r>r�r�rDrCrrr�	test_wait�s�
zTestProcess.test_waitcCs\dd�}t�tjtj|j�}|�tj|�|t�	|�d�}t
�|j���}||ksXJ�dS)Ncs�fdd�td�D�S)Ncsg|]}d|>�@r|�qS)r$r)r'�i�r(rrr+r,zGTestProcess.test_cpu_affinity.<locals>.from_bitmask.<locals>.<listcomp>�@)�ranger�rr�r�from_bitmasksz3TestProcess.test_cpu_affinity.<locals>.from_bitmaskr)
rAr�r�r�r�r�r�r�r�ZGetProcessAffinityMaskr<r�Zcpu_affinity)r>r�r�rCrDrrr�test_cpu_affinitys��zTestProcess.test_cpu_affinitycCs�t�tjtjt���}|�tj|�t	�
|�}t���
�}|j|dksLJ�|j|dks^J�|j|dkspJ�|j|dks�J�|j|dks�J�|j|dks�J�dS)NZReadOperationCountZWriteOperationCountZReadTransferCountZWriteTransferCountZOtherOperationCountZOtherTransferCount)rAr�r�r�r�r:r�r�r�r�ZGetProcessIoCountersr<r��io_countersZ
read_countZwrite_count�
read_bytes�write_bytes�other_countZother_bytesr�rrr�test_io_counterss�
zTestProcess.test_io_counterscCs�ddl}ddl}d}|jj�|d|j�}|�|jjj|�|j�	�}|jj�
||�|��|j}t
�|j���}||ks|J�dS)Nr�)�ctypesZctypes.wintypes�windllZkernel32r�r�r�r�ZwintypesZDWORDZGetProcessHandleCount�byref�valuer<r�r�)r>r�r�r�ZhndcntrCrDrrr�test_num_handles"s�

�zTestProcess.test_num_handlesc
Cs�t�}d|_tjd|d��vt�d��L}t��}t�tj��|�	�Wd�n1sZ0YWd�n1sx0YWd�n1s�0Y|j
dks�J�dS)Ni+z psutil._psplatform.cext.proc_cwd�Zside_effectz
time.sleepr@)�WindowsError�winerrorrr�r<r�rr�r��cwdZ
call_count)r>�excr�r�rrr�test_error_partial_copy4sbz#TestProcess.test_error_partial_copycCsTt��dd}tj�|�}t�tj��|��Wd�n1sF0YdS)Nr�i��)r<rw�_psplatformr�rr�r��exe)r>r�rHrrr�test_exe?szTestProcess.test_exeN)rrr�classmethodr�r�r�r�r�r�r�r�r�r�r�r�r�r�r�rrrrrrr�zs&

		r�c@s|eZdZdZedd��Zedd��Zdd�Zej	j
edd	�d
d��Zdd
�Z
dd�Ze�dd��Ze�dd��Zdd�ZdS)�TestProcessWMIz%Compare Process API results with WMI.cCst�j|_dSrFr�r�rrrr�KszTestProcessWMI.setUpClasscCst|j�dSrFr�r�rrrr�OszTestProcessWMI.tearDownClasscCs8t��j|jd�d}t�|j�}|��|jks4J�dS�Nrtr)rKrLrur�r<r�r�ZCaption�r>rOr�rrr�	test_nameSszTestProcessWMI.test_namez!unreliable path on GITHUB_ACTIONSrcCs@t��j|jd�d}t�|j�}|����|j��ks<J�dSr)	rKrLrur�r<r�rrbZExecutablePathr	rrrrYszTestProcessWMI.test_execCsFt��j|jd�d}t�|j�}d�|���|j�	dd�ksBJ�dS)Nrtrr^r�r#)
rKrLrur�r<r�r.r!ZCommandLinerar	rrrr�cszTestProcessWMI.test_cmdlinecCsPt��j|jd�d}t�|j�}|��\}}}d||f}|��|ksLJ�dS)Nrtrz%s\%s)rKrLrur�r<r�ZGetOwnerr�)r>rOr��domain�_r�rrrr�hs
zTestProcessWMI.test_usernamecCsBt��j|jd�d}t�|j�}|��j}|t|j	�ks>J�dSr)
rKrLrur�r<r�r�r�r9r�)r>rOr�r�rrr�test_memory_rssos
zTestProcessWMI.test_memory_rsscCs\t��j|jd�d}t�|j�}|��j}t|j	�}|||dfvrX|�
d||f��dS)Nrtrr�zwmi=%s, psutil=%s)rKrLrur�r<r�r�r�r9Z
PageFileUsagerc)r>rOr�r�Z	wmi_usagerrr�test_memory_vmsvs

zTestProcessWMI.test_memory_vmscCs\t��j|jd�d}t�|j�}t|j�d�d�}t	�
dt	�|����}||ksXJ�dS)Nrtrr�r�)
rKrLrur�r<r�r�ZCreationDater1�time�strftime�	localtimer�)r>rOr�Zwmic_createZ
psutil_createrrr�test_create_time�s�zTestProcessWMI.test_create_timeN)rrr�__doc__rr�r�r
rr[r\rrr�r�rr
rrrrrrrHs"

�


rc@sXeZdZdZedd��Zedd��Zdd�Zdd	�Zd
d�Z	dd
�Z
dd�Zdd�ZdS)�TestDualProcessImplementationawCertain APIs on Windows have 2 internal implementations, one
    based on documented Windows APIs, another one based
    NtQuerySystemInformation() which gets called as fallback in
    case the first fails because of limited permission error.
    Here we test that the two methods return the exact same value,
    see:
    https://github.com/giampaolo/psutil/issues/304.
    cCst�j|_dSrFr�r�rrrr��sz(TestDualProcessImplementation.setUpClasscCst|j�dSrFr�r�rrrr��sz+TestDualProcessImplementation.tearDownClasscCs�t�|j���}tjdttjd�d���}t�|j���}t	|�t	|�ksLJ�t
t	|��D]@}||dkslJ�||dks|J�t||||�dksXJ�qX|js�J�Wd�n1s�0YdS)Nz(psutil._psplatform.cext.proc_memory_info�msgr�ri)
r<r�r�r�rr��OSError�errno�EPERMrSr�rjr�)r>Zmem_1�funZmem_2r�rrrr��s
�z.TestDualProcessImplementation.test_memory_infocCslt�|j���}tjdttjd�d��2}t�|j���|ks@J�|j	sJJ�Wd�n1s^0YdS)N�"psutil._psplatform.cext.proc_timesrr�)
r<r�r�r�rr�rrrr�)r>�ctimerrrrr�s
�z.TestDualProcessImplementation.test_create_timecCs�t�|j���}tjdttjd�d��Z}t�|j���}|j	sBJ�t
|j|j�dksZJ�t
|j|j�dksrJ�Wd�n1s�0YdS)Nrrr�g{�G�z�?)
r<r�r�rTrr�rrrr�rj�user�system)r>Zcpu_times_1rZcpu_times_2rrr�test_cpu_times�s
�
z,TestDualProcessImplementation.test_cpu_timescCs�t�|j���}tjdttjd�d��X}t�|j���}t	t
|��D] }t||||�dksDJ�qD|jspJ�Wd�n1s�0YdS)Nz(psutil._psplatform.cext.proc_io_countersrr�r@)
r<r�r�r�rr�rrrr�rSrjr�)r>Z
io_counters_1rZ
io_counters_2r�rrrr��s
�z.TestDualProcessImplementation.test_io_counterscCslt�|j���}tjdttjd�d��2}t�|j���|ks@J�|j	sJJ�Wd�n1s^0YdS)Nz(psutil._psplatform.cext.proc_num_handlesrr�)
r<r�r�r�rr�rrrr�)r>r�rrrrr��s
�z.TestDualProcessImplementation.test_num_handlescCs�t��D]r}z tj|dd�}tj|dd�}Wn@tyl}z(t|�}t|tjtjf�sX�WYd}~qd}~00||ksJ�qdS)NT)Zuse_pebF)	r<rwr�Zproc_cmdlinerr�
isinstancer�r�)r>r��a�b�errrrrr��s�z*TestDualProcessImplementation.test_cmdlineN)
rrrrrr�r�r�rrr�r�r�rrrrr�s	

		rcspeZdZdZedd��ZddgZ�fdd�Z�fdd	�Zd
d�Z	dd
�Z
dd�Zdd�Zdd�Z
dd�Z�ZS)�RemoteProcessTestCasez�Certain functions require calling ReadProcessMemory.
    This trivially works when called on the current process.
    Check that this works on other processes, especially when they
    have a different bitness.
    cCs\d}t�d�D]H}tj|d|gtjtjd�}|��\}}|��|tt�kr|SqdS)Nz6import sys; sys.stdout.write(str(sys.maxsize > 2**32))zC:\Python*\python.exe�-c)�args�stdout�stderr)	�glob�
subprocess�Popen�PIPE�STDOUT�communicater�r�r	)�code�filenamerH�outputrrrr�find_other_interpreter�s�z,RemoteProcessTestCase.find_other_interpreterr$zimport sys; sys.stdin.read()cs�t���|��}|dur$t�d��tr8tj|_||_	n||_tj|_	t
j��}t
t
���|d<|j|j	g|j|tjd�|_|j|jg|j|tjd�|_dS)Nz0could not find interpreter with opposite bitness�THINK_OF_A_NUMBER)�env�stdin)r�setUpr1rrr	�sys�
executableZpython64Zpython32r:r;�copyr�r�r�	test_argsr)r+�proc32�proc64)r>Zother_pythonr3��	__class__rrr5s&
�
��zRemoteProcessTestCase.setUpcs"t���|j��|j��dSrF)r�tearDownr:r-r;rUr<rrr>s

zRemoteProcessTestCase.tearDowncCs@t�|jj�}t|���dks"J�|��dd�|jks<J�dS�N�r$)r<r�r:r�rSr!r9r�rrr�test_cmdline_32!sz%RemoteProcessTestCase.test_cmdline_32cCs@t�|jj�}t|���dks"J�|��dd�|jks<J�dSr?)r<r�r;r�rSr!r9r�rrr�test_cmdline_64&sz%RemoteProcessTestCase.test_cmdline_64cCs&t�|jj�}|��t��ks"J�dSrF)r<r�r:r�rr:�getcwdr�rrr�test_cwd_32+sz!RemoteProcessTestCase.test_cwd_32cCs&t�|jj�}|��t��ks"J�dSrF)r<r�r;r�rr:rCr�rrr�test_cwd_64/sz!RemoteProcessTestCase.test_cwd_64cCs>t�|jj�}|��}d|vs"J�|dtt���ks:J�dS)Nr2)r<r�r:r�r;r�r:r�)r>r��errr�test_environ_323sz%RemoteProcessTestCase.test_environ_32cCs4t�|jj�}z|��Wntjy.Yn0dSrF)r<r�r;r�r;r�r�rrr�test_environ_649s
z%RemoteProcessTestCase.test_environ_64)rrrr�staticmethodr1r9r5r>rArBrDrErGrH�
__classcell__rrr<rr#�s
r#c@seZdZdd�Zdd�ZdS)�TestServicescCs`tgd��}tgd��}tgd��}t��D�],}|��}t|dt�sLJ�|d��s\J�t|dt�snJ�t|dt�s�J�|d|vs�J�|ddur�t�|d�t|d	t�s�J�t|dt�s�J�t|d
t�s�J�|d
|vs�J�|d|v�sJ�t|dt��sJ�|��}|du�r@t�|�}|�	��s@J�t�
|���}||ks,J�q,dS)N)�running�paused�start�pause�continue�stop�stopped)Z	automaticZmanual�disabled)rLrMZ
start_pendingZ
pause_pendingZcontinue_pendingZstop_pendingrRr��display_namer��statusr�ZbinpathZ
start_type�description)rvr<�win_service_iterZas_dictrr�r-r�r��
is_running�win_service_getr�)r>Zvalid_statusesZvalid_start_typesZservr5r�r��srrr�test_win_service_iterHs0		

z"TestServices.test_win_service_iterc	Cs�tjjj}tjjj}tt�����}t�	tj
��}t�|d�Wd�n1sT0Y|jj|dksrJ�t�|�}t
r�ddd|f}n|df}t|�}tjd|d��Dt�	tj
��|��Wd�n1s�0YWd�n1s�0Ytjd|d��Ft�	tj
��|��Wd�n1�s<0YWd�n1�s\0Yt
�rzddd|f}n|df}t|�}tjd|d��Ft�	tj��|��Wd�n1�s�0YWd�n1�s�0Ytjd|d��Ft�	tj��|��Wd�n1�s,0YWd�n1�sL0Y|��t|�v�slJ�|��t|�v�s�J�|��t|�v�s�J�|��t|�v�s�J�dS)Nz???rrz/psutil._psplatform.cext.winservice_query_statusr�z/psutil._psplatform.cext.winservice_query_config)r<rr��ERROR_SERVICE_DOES_NOT_EXIST�ERROR_ACCESS_DENIED�nextrWr�rr�r�rYr�r
r�rr�rUr�r�r�rTr�)r>r\r]r��cmZservicer%rrrr�test_win_service_gettsP�
,
�D�H�H�Hz!TestServices.test_win_service_getN)rrrr[r`rrrrrKFs,rK):rr�rr(r:r�r�r�r)r6r�warningsr<rZpsutil._compatrrrZpsutil.testsrrrr	r
rrr
rrrrrrr�catch_warnings�simplefilterrAr�r�rKZpsutil._pswindowsrrr�r[r\rr"r9r6r7r]r�r�rrr#rKrrrr�<module>s|

(�(#FOHW]PKok\�n�E0psutil/tests/__pycache__/test_osx.cpython-39.pycnu�[���a

��?h��@sdZddlZddlZddlZddlZddlmZddlmZddlmZddlm	Z	ddlm
Z
ddlmZdd	lmZdd
lm
Z
ddlmZddlmZdd
lmZer�ddlmZdd�Zdd�Zejjedd�Gdd�de��Zejjedd�Gdd�de��ZdS)zmacOS specific tests.�N)�MACOS)�POSIX)�HAS_BATTERY)�TOLERANCE_DISK_USAGE)�TOLERANCE_SYS_MEM)�PsutilTestCase)�pytest)�retry_on_failure)�sh)�spawn_testproc)�	terminate)�getpagesizecCs:t|�}|��d}z
t|�WSty4|YS0dS)zmExpects a sysctl command with an argument and parse the result
    returning only the value of interest.
    �N)r
�split�int�
ValueError)Zcmdline�out�result�r�A/usr/local/lib64/python3.9/site-packages/psutil/tests/test_osx.py�sysctls
rcCsHtd�}|�d�D]}||vrq,qtd��tt�d|��d��t�S)z)Wrapper around 'vm_stat' cmdline utility.�vm_stat�
zline not foundz\d+r)r
rrr�re�search�groupr
)�fieldr�linerrrr+srz
MACOS only��reasonc@s,eZdZedd��Zedd��Zdd�ZdS)�TestProcesscCst�j|_dS�N)r�pid��clsrrr�
setUpClass8szTestProcess.setUpClasscCst|j�dSr!)rr"r#rrr�
tearDownClass<szTestProcess.tearDownClasscCs�td|j�}|�dd���}|�d�d}|�d�d}t�|j���}|t�	dt�
|��ksdJ�|t�	dt�
|��ks~J�dS)	Nzps -o lstart -p %sZSTARTED�� ������z%H:%M:%Sz%Y)r
r"�replace�stripr�psutil�ProcessZcreate_time�time�strftime�	localtime)�self�outputZstart_psZhhmmss�yearZstart_psutilrrr�test_process_create_time@s
�
z$TestProcess.test_process_create_timeN)�__name__�
__module__�__qualname__�classmethodr%r&r5rrrrr 6s


r c@s�eZdZe�dd��Zdd�Zdd�Zejj	e
o:e��dkdd	�d
d��Z
dd
�Ze�dd��Ze�dd��Ze�dd��Ze�dd��Ze�dd��Ze�dd��Zdd�Zejj	edd	�dd��ZdS) �TestSystemAPIscCs�dd�}tjdd�D]j}t�|j�}||j�\}}}}|j|ksDJ�|j|ksRJ�t|j|�tkshJ�t|j	|�tksJ�qdS)NcSs�td|���}|�d�}|�d�|�d�}|��dd�\}}}}|dkrRd}t|�d}t|�d}t|�d}||||fS)Nz
df -k "%s"rr��noner'i)r
r,r�popr)�pathr�linesr�dev�total�used�freerrr�dfUs


z%TestSystemAPIs.test_disks.<locals>.dfF)�all)
r-Zdisk_partitions�
disk_usageZ
mountpointZdevicerA�absrCrrB)r2rD�part�usager@rArBrCrrr�
test_disksQs
zTestSystemAPIs.test_diskscCs td�}|tjdd�ksJ�dS)Nzsysctl hw.logicalcpuT��logical�rr-�	cpu_count�r2�numrrr�test_cpu_count_logicallsz%TestSystemAPIs.test_cpu_count_logicalcCs td�}|tjdd�ksJ�dS)Nzsysctl hw.physicalcpuFrKrMrOrrr�test_cpu_count_corespsz#TestSystemAPIs.test_cpu_count_cores�arm64zskipped due to #1892rcCsZt��}|jddtd�ks"J�|jddtd�ks<J�|jddtd�ksVJ�dS)Ni�zsysctl hw.cpufrequencyzsysctl hw.cpufrequency_minzsysctl hw.cpufrequency_max)r-Zcpu_freq�currentr�min�max)r2�freqrrr�
test_cpu_frequszTestSystemAPIs.test_cpu_freqcCstd�}|t��jksJ�dS)Nzsysctl hw.memsize)rr-�virtual_memoryrA)r2Zsysctl_hwphymemrrr�test_vmem_total�szTestSystemAPIs.test_vmem_totalcCs*td�}t��j}t||�tks&J�dS)NrC)rr-rYrCrGr�r2Z
vmstat_valZ
psutil_valrrr�test_vmem_free�s
zTestSystemAPIs.test_vmem_freecCs*td�}t��j}t||�tks&J�dS)N�active)rr-rYr]rGrr[rrr�test_vmem_active�s
zTestSystemAPIs.test_vmem_activecCs*td�}t��j}t||�tks&J�dS)N�inactive)rr-rYr_rGrr[rrr�test_vmem_inactive�s
z!TestSystemAPIs.test_vmem_inactivecCs*td�}t��j}t||�tks&J�dS)N�wired)rr-rYrarGrr[rrr�test_vmem_wired�s
zTestSystemAPIs.test_vmem_wiredcCs*td�}t��j}t||�tks&J�dS)NZPageins)rr-�swap_memory�sinrGrr[rrr�test_swapmem_sin�s
zTestSystemAPIs.test_swapmem_sincCs*td�}t��j}t||�tks&J�dS)NZPageout)rr-rcZsoutrGrr[rrr�test_swapmem_sout�s
z TestSystemAPIs.test_swapmem_soutc	Csrt����D]`\}}ztd|�}Wnty6Yq0|jd|vksNJ|��|jtt�	d|�d�ksJ�qdS)Nzifconfig %s�RUNNINGz	mtu (\d+)r)
r-Znet_if_stats�itemsr
�RuntimeErrorZisupZmturr�findall)r2�name�statsrrrr�test_net_if_stats�sz TestSystemAPIs.test_net_if_statsz
no batterycCs`td�}t�d|��d�}t�d|��d�}|dk}t��}|j|ksJJ�|jt|�ks\J�dS)Nz
pmset -g battz(\d+)%rzNow drawing from '([^']+)'zAC Power)	r
rrrr-Zsensors_battery�
power_plugged�percentr)r2rroZdrawing_fromrnZ
psutil_resultrrr�test_sensors_battery�sz#TestSystemAPIs.test_sensors_batteryN)r6r7r8r	rJrQrRr�mark�skipifr�platform�machinerXrZr\r^r`rbrerfrmrrprrrrr:Ls0
�






r:)�__doc__rsrr/r-rrZpsutil.testsrrrrrr	r
rrZpsutil._psutil_posixr
rrrqrrr r:rrrr�<module>s.PKok\�J�>C)C)2psutil/tests/__pycache__/test_linux.cpython-39.pycnu�[���a

��?h[d�@sdZddlmZddlZddlZddlZddlZddlZddlZddl	Z	ddl
Z
ddlZddlZddl
Z
ddlZddlZddlZddlmZddlmZddlmZddlmZddlmZdd	lmZdd
lmZddlmZddlmZdd
lmZddlmZddlmZddlmZddlm Z ddlm!Z!ddlm"Z"ddlm#Z#ddlm$Z$ddlm%Z%ddlm&Z&ddlm'Z'ddlm(Z(ddlm)Z)ddlm*Z*ddlm+Z+ddlm,Z,ddlm-Z-e�r�ddl.m/Z/dd l.m0Z0dd!l.m1Z1dd"l.m2Z2ej3�4ej3�5e6��Z7d#Z8d$Z9d%Z:d&Z;d'Z<e�r,d(Z=e�d)�Z>d*d+�Z?d,d-�Z@d.d/�ZAd0d1�ZBd2d3�ZCd4d5�ZDd6d7�ZEd8d9�ZFd:d;�ZGejHd<d=��ZIejHd>d?��ZJe'jKjLed@dA�GdBdC�dCe#��ZMe'jKjLed@dA�GdDdE�dEe#��ZNe'jKjLed@dA�GdFdG�dGe#��ZOe'jKjLed@dA�GdHdI�dIe#��ZPe'jKjLed@dA�GdJdK�dKe#��ZQe'jKjLed@dA�GdLdM�dMe#��ZRe'jKjLed@dA�GdNdO�dOe#��ZSe'jKjLed@dA�GdPdQ�dQe#��ZTe'jKjLed@dA�GdRdS�dSe#��ZUe'jKjLed@dA�GdTdU�dUe#��ZVe'jKjLed@dA�GdVdW�dWe#��ZWe'jKjLed@dA�e'jKjLe dXdA�GdYdZ�dZe#���ZXe'jKjLed@dA�Gd[d\�d\e#��ZYe'jKjLed@dA�Gd]d^�d^e#��ZZe'jKjLed@dA�Gd_d`�d`e#��Z[e'jKjLed@dA�Gdadb�dbe#��Z\e'jKjLed@dA�Gdcdd�dde#��Z]e'jKjLed@dA�Gdedf�dfe#��Z^e'jKjLed@dA�e'jKjLedgdA�Gdhdi�die#���Z_e'jKjLed@dA�Gdjdk�dke#��Z`e'jKjLed@dA�Gdldm�dme#��Zae'jKjLed@dA�Gdndo�doe#��Zbe'jKjLed@dA�Gdpdq�dqe#��Zce'jKjLed@dA�Gdrds�dse#��Zde'jKjLed@dA�Gdtdu�due#��ZedS)vzLinux specific tests.�)�divisionN)�LINUX)�PY3)�FileNotFoundError)�
basestring)�AARCH64)�GITHUB_ACTIONS)�GLOBAL_TIMEOUT)�HAS_BATTERY)�HAS_CPU_FREQ)�HAS_GETLOADAVG)�
HAS_RLIMIT)�PYPY)�PYTEST_PARALLEL)�	QEMU_USER)�TOLERANCE_DISK_USAGE)�TOLERANCE_SYS_MEM)�PsutilTestCase)�
ThreadTask)�
call_until)�mock)�pytest)�
reload_module)�retry_on_failure)�safe_rmpath)�sh)�skip_on_not_implemented)�which)�CLOCK_TICKS)�RootFsDeviceFinder)�calculate_avail_vmem)�open_binaryi�i�i'�i�i�iz/sys/class/hwmon/hwmon*cCs�ddl}|dd�}tr"t|d�}t�tjtj�}t�|��8t�|�	|�
�tt�
d|��dd��Wd�S1sz0YdS�Nr��ascii�256s��)�fcntlr�bytes�socket�AF_INET�
SOCK_DGRAM�
contextlib�closing�	inet_ntoa�ioctl�fileno�SIOCGIFADDR�struct�pack��ifnamer(�s�r8�C/usr/local/lib64/python3.9/site-packages/psutil/tests/test_linux.py�get_ipv4_addressNs
��r:cCs�ddl}|dd�}tr"t|d�}t�tjtj�}t�|��8t�|�	|�
�tt�
d|��dd��Wd�S1sz0YdSr")r(rr)r*r+r,r-r.r/r0r1�SIOCGIFNETMASKr3r4r5r8r8r9�get_ipv4_netmask]s
���r<cCs�ddl}|dd�}tr"t|d�}t�tjtj�}t�|��8t�|�	|�
�tt�
d|��dd��Wd�S1sz0YdSr")r(rr)r*r+r,r-r.r/r0r1�SIOCGIFBRDADDRr3r4r5r8r8r9�get_ipv4_broadcastls
���r>cCs�td��T}g}|D]"}|��}|d|kr|�|�qt|�dkrNtd|��Wd�n1sb0Ytt|��D]j}||d}g}tdt|�d�D]}|�|||d��q�d�|�}	t�tj	|	�}
t�
tj	|
�||<qx|S)Nz/proc/net/if_inet6���rzcould not find interface %r��:)�open�split�append�len�
ValueError�range�joinr*�	inet_pton�AF_INET6�	inet_ntop)r6�fZ
all_fields�line�fields�iZunformatted�groups�j�	formatted�packedr8r8r9�get_ipv6_addresses{s"
*
rTc	s�ddl}|dd�}tr"t|d�}t�tjtj�}t�|��n|�|�	�t
t�d|��}trfdd��nddl
}|j�d��fdd	�|d
d�D��dd�Wd�S1s�0YdS)
Nrr#r$r%cSs|S�Nr8��xr8r8r9�ord�szget_mac_address.<locals>.ord�csg|]}d�|��qS)z%02x:r8)�.0�char�rXr8r9�
<listcomp>��z#get_mac_address.<locals>.<listcomp>�r'r?)r(rr)r*r+r,r-r.r0r1�
SIOCGIFHWADDRr3r4�__builtin__rXrH)r6r(r7�inforar8r\r9�get_mac_address�s
�
rccCs�tddgddid�}|�d�}|D]F}|�d�r"|��\}}}}t�dd�}|t|�t|�t|��Sq"td	d�|���d
S)zQParse 'free' cmd and return swap memory's s total, used and free
    values.
    �free�-b�LANG�C.UTF-8��env�
ZSwapztotal used freez&can't find 'Swap' in 'free' output:
%sN)rrC�
startswith�collections�
namedtuple�intrFrH)�out�linesrM�_�total�usedrd�ntr8r8r9�	free_swap�s

�rucCs�tddgddid�}|�d�}|D]P}|�d�r"dd	�|��d
d�D�\}}}}t�dd�}||||||�Sq"td
d�|���dS)zSParse 'free' cmd and return physical memory's total, used
    and free values.
    rdrerfrgrhrjZMemcss|]}t|�VqdSrU)rn�rZrWr8r8r9�	<genexpr>�r^zfree_physmem.<locals>.<genexpr>��ztotal used free shared outputz%can't find 'Mem' in 'free' output:
%sN)rrCrkrlrmrFrH)rorprMrrrsrd�sharedrtr8r8r9�free_physmem�s

"��r{cCsZtddgddid�}|�d�D]*}|��}||vrt|�d�d�Sqtd	|��dS)
N�vmstatz-srfrgrhrj� rz can't find %r in 'vmstat' output)rrC�striprnrF)�statrorMr8r8r9r|�sr|cCs@tddg���}d|vr"t�d��tttt�d|�	�d���S)Nrdz-V�UNKNOWNzcan't determine free versionz\d+r?)
rr~r�skip�tuple�maprn�re�findallrC)ror8r8r9�get_free_version_info�s
r�c#sX��fdd�}t�trdnd}tj|d|d��}|VWd�n1sJ0YdS)z�Mock open() builtin and forces it to return a certain content
    for a given path. `pairs` is a {"path": "content", ...} dict.
    csZ|�vr@�|}tr4t|t�r(t�|�St�|�SqVt�|�Sn�|g|�Ri|��SdSrU)r�
isinstancer�io�StringIO�BytesIO)�name�args�kwargs�content��	orig_open�pairsr8r9�	open_mock�s

z$mock_open_content.<locals>.open_mock�
builtins.open�__builtin__.openT��create�side_effectN�rBrr�patch)r�r��patch_point�mr8r�r9�mock_open_content�s

r�c#sZ���fdd�}t�trdnd}tj|d|d��}|VWd�n1sL0YdS)zZMock open() builtin and raises `exc` if the path being opened
    matches `for_path`.
    cs(|�kr��n�|g|�Ri|��SdSrUr8�r�r�r���exc�for_pathr�r8r9r��sz&mock_open_exception.<locals>.open_mockr�r�Tr�Nr�)r�r�r�r�r�r8r�r9�mock_open_exception�s
r�z
LINUX only��reasonc@sLeZdZdd�Ze�dd��Ze�dd��Ze�dd��Ze�d	d
��ZdS)�"TestSystemVirtualMemoryAgainstFreecCs"t�j}t��j}||ksJ�dSrU)r{rr�psutil�virtual_memory��selfZ	cli_value�psutil_valuer8r8r9�
test_totals
z-TestSystemVirtualMemoryAgainstFree.test_totalcCsRt�dkrt�d��t�dkr(t�d��t�j}t��j}t||�tksNJ�dS)N��r���free version too old�r@rr�free version too recent)	r�rr�r{rsr�r��absrr�r8r8r9�	test_useds





z,TestSystemVirtualMemoryAgainstFree.test_usedcCs*t�j}t��j}t||�tks&J�dSrU)r{rdr�r�r�rr�r8r8r9�	test_free's
z,TestSystemVirtualMemoryAgainstFree.test_freecCsPt�}|j}|dkrt�d��t��j}t||�tksLJd|||jf��dS)Nrz%free does not support 'shared' column�	%s %s 
%s)	r{rzrr�r�r�r�r�output)r�rd�
free_valuer�r8r8r9�test_shared-s

��z.TestSystemVirtualMemoryAgainstFree.test_sharedcCsrtddg�}|�d�}d|dvr.t�d��n@t|d��d�}t��j}t||�t	ksnJd	|||f��dS)
Nrdrerj�	availablerz(free does not support 'available' columnrxr?r�)
rrCrr�rnr�r�r�r�r)r�rorpr�r�r8r8r9�test_available8s

��z1TestSystemVirtualMemoryAgainstFree.test_availableN)	�__name__�
__module__�__qualname__r�rr�r�r�r�r8r8r8r9r�s



r�c@sZeZdZdd�Ze�dd��Ze�dd��Ze�dd��Ze�d	d
��Ze�dd��Z	d
S)�$TestSystemVirtualMemoryAgainstVmstatcCs.td�d}t��j}t||�tks*J�dS)Nztotal memory�)r|r�r�rrr�r�r�Zvmstat_valuer�r8r8r9r�Js
z/TestSystemVirtualMemoryAgainstVmstat.test_totalcCsVt�dkrt�d��t�dkr(t�d��td�d}t��j}t||�tksRJ�dS)Nr�r�r�r�zused memoryr�)	r�rr�r|r�r�rsr�rr�r8r8r9r�Os





z.TestSystemVirtualMemoryAgainstVmstat.test_usedcCs.td�d}t��j}t||�tks*J�dS)Nzfree memoryr�)r|r�r�rdr�rr�r8r8r9r�as
z.TestSystemVirtualMemoryAgainstVmstat.test_freecCs.td�d}t��j}t||�tks*J�dS)Nz
buffer memoryr�)r|r�r��buffersr�rr�r8r8r9�test_buffersgs
z1TestSystemVirtualMemoryAgainstVmstat.test_bufferscCs.td�d}t��j}t||�tks*J�dS)Nz
active memoryr�)r|r�r��activer�rr�r8r8r9�test_activems
z0TestSystemVirtualMemoryAgainstVmstat.test_activecCs.td�d}t��j}t||�tks*J�dS)Nzinactive memoryr�)r|r�r��inactiver�rr�r8r8r9�
test_inactivess
z2TestSystemVirtualMemoryAgainstVmstat.test_inactiveN)
r�r�r�r�rr�r�r�r�r�r8r8r8r9r�Hs



r�c@sBeZdZdd�Ze�dd��Zdd�Zdd�Zd	d
�Zdd�Z	d
S)�TestSystemVirtualMemoryMocksc	Cs�t�d���}td|i���^}tjdd���.}t�d�t��}|j	sJJ�t
|�dksZJ�|d}dt|j�vstJ�d	t|j�vs�J�d
t|j�vs�J�dt|j�vs�J�dt|j�vs�J�d
t|j�vs�J�dt|j�vs�J�|j
dks�J�|jdks�J�|jdk�sJ�|jdk�sJ�|jdk�s,J�|jdk�s<J�|jdk�sLJ�Wd�n1�sb0YWd�n1�s�0YdS)NaL            Active(anon):    6145416 kB
            Active(file):    2950064 kB
            Inactive(anon):   574764 kB
            Inactive(file):  1567648 kB
            MemAvailable:         -1 kB
            MemFree:         2057400 kB
            MemTotal:       16325648 kB
            SReclaimable:     346648 kB
            �
/proc/meminfoT��record�alwaysrxrz#memory stats couldn't be determined�cachedrzr�r�r�r�)�textwrap�dedent�encoder��warnings�catch_warnings�simplefilterr�r��calledrE�str�messager�r�r�rzr�r��slab�r�r�r��ws�ret�wr8r8r9�test_warnings_on_misses|s,


z4TestSystemVirtualMemoryMocks.test_warnings_on_missescCs�i}td��:}|D]$}|��}t|d�d||d<qWd�n1sL0Yt|�}d|vr�|d}t||�|d}|dks�J�dS)Nr�rxr�rs
MemAvailable:�dr#)r!rCrnr r�)r�ZmemsrLrMrN�a�bZdiff_percentr8r8r9�test_avail_old_percent�s
8z3TestSystemVirtualMemoryMocks.test_avail_old_percentc	Cs�t�d���}td|i��v}tjdd��}t��}Wd�n1sF0Y|jsZJ�|j	dkshJ�|d}dt
|j�vs�J�Wd�n1s�0YdS)Na            Active:          9444728 kB
            Active(anon):    6145416 kB
            Active(file):    2950064 kB
            Buffers:          287952 kB
            Cached:          4818144 kB
            Inactive(file):  1578132 kB
            Inactive(anon):   574764 kB
            Inactive(file):  1567648 kB
            MemAvailable:    6574984 kB
            MemFree:         2057400 kB
            MemTotal:       16325648 kB
            Shmem:            577588 kB
            SReclaimable:     346648 kB
            r�Tr�l �"r�,inactive memory stats couldn't be determined�r�r�r�r�r�r�r�r�r�r�r�r�r�r8r8r9� test_avail_old_comes_from_kernel�s&
�z=TestSystemVirtualMemoryMocks.test_avail_old_comes_from_kernelc	Cs�t�d���}td|i��v}tjdd��}t��}Wd�n1sF0Y|jsZJ�|j	dkshJ�|d}dt
|j�vs�J�Wd�n1s�0YdS)Nat            Active:          9444728 kB
            Active(anon):    6145416 kB
            Buffers:          287952 kB
            Cached:          4818144 kB
            Inactive(file):  1578132 kB
            Inactive(anon):   574764 kB
            MemFree:         2057400 kB
            MemTotal:       16325648 kB
            Shmem:            577588 kB
            r�Tr��`LGrr�r�r�r8r8r9�test_avail_old_missing_fields�s&
�z:TestSystemVirtualMemoryMocks.test_avail_old_missing_fieldsc
Cs�t�d���}td|i���tdttjd���ltj	dd��@}t
��}|jdksTJ�|d}d	t
|j�vsnJ�Wd�n1s�0YWd�n1s�0YWd�n1s�0YdS)
Na�            Active:          9444728 kB
            Active(anon):    6145416 kB
            Active(file):    2950064 kB
            Buffers:          287952 kB
            Cached:          4818144 kB
            Inactive(file):  1578132 kB
            Inactive(anon):   574764 kB
            Inactive(file):  1567648 kB
            MemFree:         2057400 kB
            MemTotal:       16325648 kB
            Shmem:            577588 kB
            SReclaimable:     346648 kB
            r�z/proc/zoneinfo�no such file or directoryTr�r�rr�)r�r�r�r�r��IOError�errno�ENOENTr�r�r�r�r�r�r�)r�r�r�r�r�r8r8r9�test_avail_old_missing_zoneinfo�s
���z<TestSystemVirtualMemoryMocks.test_avail_old_missing_zoneinfocCs�t�d���}td|i���}t��}|js.J�|jdks<J�|jdksJJ�|j	dksXJ�|j
dksfJ�|jdkstJ�|jdks�J�|j
d	ks�J�|jd
ks�J�|jdks�J�Wd�n1s�0YdS)Na�            MemTotal:              100 kB
            MemFree:               2 kB
            MemAvailable:          3 kB
            Buffers:               4 kB
            Cached:                5 kB
            SwapCached:            6 kB
            Active:                7 kB
            Inactive:              8 kB
            Active(anon):          9 kB
            Inactive(anon):        10 kB
            Active(file):          11 kB
            Inactive(file):        12 kB
            Unevictable:           13 kB
            Mlocked:               14 kB
            SwapTotal:             15 kB
            SwapFree:              16 kB
            Dirty:                 17 kB
            Writeback:             18 kB
            AnonPages:             19 kB
            Mapped:                20 kB
            Shmem:                 21 kB
            Slab:                  22 kB
            SReclaimable:          23 kB
            SUnreclaim:            24 kB
            KernelStack:           25 kB
            PageTables:            26 kB
            NFS_Unstable:          27 kB
            Bounce:                28 kB
            WritebackTmp:          29 kB
            CommitLimit:           30 kB
            Committed_AS:          31 kB
            VmallocTotal:          32 kB
            VmallocUsed:           33 kB
            VmallocChunk:          34 kB
            HardwareCorrupted:     35 kB
            AnonHugePages:         36 kB
            ShmemHugePages:        37 kB
            ShmemPmdMapped:        38 kB
            CmaTotal:              39 kB
            CmaFree:               40 kB
            HugePages_Total:       41 kB
            HugePages_Free:        42 kB
            HugePages_Rsvd:        43 kB
            HugePages_Surp:        44 kB
            Hugepagesize:          45 kB
            DirectMap46k:          46 kB
            DirectMap47M:          47 kB
            DirectMap48G:          48 kB
            r�i�i�ipiTii iX�)r�r�r�r�r�r�r�rrrdr�r�rzr�r�r�r�)r�r�r�Zmemr8r8r9�test_virtual_memory_mockeds2
z7TestSystemVirtualMemoryMocks.test_virtual_memory_mockedN)
r�r�r�r�rr�r�r�r�r�r8r8r8r9r�zs$
r�c@s\eZdZedd��Zdd�Ze�dd��Ze�dd��Zd	d
�Z	dd�Z
d
d�Zdd�ZdS)�TestSystemSwapMemorycCs@td��}|��}Wd�n1s&0Yd|vo>d|vS)z3Return True if /proc/meminfo provides swap metrics.r�Nz
SwapTotal:z	SwapFree:)rB�read)rL�datar8r8r9�meminfo_has_swap_infoNs
&z*TestSystemSwapMemory.meminfo_has_swap_infocCs*t�j}t��j}t||�tks&J�dSrU)rurrr��swap_memoryr�r�r�r�r�r8r8r9r�Us
zTestSystemSwapMemory.test_totalcCs*t�j}t��j}t||�tks&J�dSrU)rursr�r�r�rr�r8r8r9r�Zs
zTestSystemSwapMemory.test_usedcCs*t�j}t��j}t||�tks&J�dSrU)rurdr�r�r�rr�r8r8r9r�`s
zTestSystemSwapMemory.test_freec	Cs�tjddd���}tjdd��r}t�d�t��}|js:J�t|�dksJJ�|d}dt	|j
�vsdJ�|jdksrJ�|jdks�J�Wd�n1s�0YWd�n1s�0YdS)	N�psutil._common.openT�r�r�r�rxrz9'sin' and 'sout' swap memory stats couldn't be determined)
rr�r�r�r�r�r�r�rEr�r��sin�sout�r�r�r�r�r�r8r8r9�test_missing_sin_soutfs

��z*TestSystemSwapMemory.test_missing_sin_soutc	Cs�tdttjd����}tjdd��r}t�d�t��}|j	s>J�t
|�dksNJ�|d}dt|j�vshJ�|j
dksvJ�|jdks�J�Wd�n1s�0YWd�n1s�0YdS)	Nz/proc/vmstatr�Tr�r�rxrzK'sin' and 'sout' swap memory stats couldn't be determined and were set to 0)r�r�r�r�r�r�r�r�r�r�rEr�r�r�r�r�r8r8r9�test_no_vmstat_mockedus �

��z*TestSystemSwapMemory.test_no_vmstat_mockedcCs�|��st�d��t�d��}t��}Wd�n1s:0Y|jrNJ�ddlm	}|�
�\}}}}}}}||9}||9}|j|ks�J�t|j
|�tks�J�dS)Nz!/proc/meminfo has no swap metricsz"psutil._pslinux.cext.linux_sysinfor)r�rr�rr�r�r�r�Zpsutil._psutil_linuxZ
_psutil_linuxZ
linux_sysinforrr�rdr)r�r��swapZcextrqrrrdZunit_multiplierr8r8r9�test_meminfo_against_sysinfo�s
&
z1TestSystemSwapMemory.test_meminfo_against_sysinfocCsBtddi��"}t��|js J�Wd�n1s40YdS)Nr�r^)r�r�r�r��r�r�r8r8r9�#test_emulate_meminfo_has_no_metrics�sz8TestSystemSwapMemory.test_emulate_meminfo_has_no_metricsN)
r�r�r��staticmethodr�r�rr�r�r�r�r�r�r8r8r8r9r�Ls


r�c@seZdZdd�ZdS)�TestSystemCPUTimescCs�t��j}t�dt��d�d}ttt	|�
d���}|dkrLd|vsXJ�nd|vsXJ�|dkrnd|vszJ�nd|vszJ�|d	kr�d
|vs�J�nd
|vs�J�dS)Nz
\d+\.\d+\.\d+�r�.)r����steal)r�rr'Zguest)r�r�rZ
guest_nice)r��	cpu_times�_fieldsr�r��os�unamer�r�rnrC)r�rNZ
kernel_verZkernel_ver_infor8r8r9�test_fields�s
zTestSystemCPUTimes.test_fieldsN)r�r�r�rr8r8r8r9r��sr�c@s�eZdZejjej�d�dd�dd��Z	ejjej�d�dd�dd	��Z
ejjed
�dd�dd
��Zejjed�dd�dd��Z
dd�ZdS)�TestSystemCPUCountLogical�/sys/devices/system/cpu/onlinez-/sys/devices/system/cpu/online does not existr�cCsjtd��}|����}Wd�n1s*0Ydt|�vrft|�d�d�d}t��|ksfJ�dS)Nr
�-rx)rBr�r~r�rnrCr��	cpu_count)r�rL�valuer8r8r9�test_against_sysdev_cpu_online�s

*z8TestSystemCPUCountLogical.test_against_sysdev_cpu_online�/sys/devices/system/cpuz&/sys/devices/system/cpu does not existcCs0t�d�}tdd�|D��}t��|ks,J�dS)NrcSs g|]}t�d|�dur|�qS)zcpu\d+$N)r��searchrvr8r8r9r]�r^zITestSystemCPUCountLogical.test_against_sysdev_cpu_num.<locals>.<listcomp>)r�listdirrEr�r)r�Zls�countr8r8r9�test_against_sysdev_cpu_num�s
z5TestSystemCPUCountLogical.test_against_sysdev_cpu_numZnprocznproc utility not availablecCs$ttd��}tjdd�|ks J�dS)Nznproc --allT��logical)rnrr�r)r��numr8r8r9�test_against_nproc�sz,TestSystemCPUCountLogical.test_against_nproc�lscpu�lscpu utility not availablecCs8td�}tdd�|�d�D��}tjdd�|ks4J�dS)N�lscpu -pcSsg|]}|�d�s|�qS)�#�rkrvr8r8r9r]�r^z@TestSystemCPUCountLogical.test_against_lscpu.<locals>.<listcomp>rjTr)rrErCr�r)r�rorr8r8r9�test_against_lscpu�sz,TestSystemCPUCountLogical.test_against_lscpuc	Cs�ddl}|j��}tjdtd���`}|j��|ks6J�|js@J�tjddd��F}|j��dusbJ�|jdkspJ�|jdddks�J�Wd�n1s�0Yt	d	d
��}|�
�}Wd�n1s�0Yt�|�}tjd|dd��$}|j��|k�sJ�Wd�n1�s0Yt
d	di��0}|j��|k�sHJ�|j�sTJ�Wd�n1�sj0YWd�n1�s�0YdS)
Nrzpsutil._pslinux.os.sysconf�r�r�Tr�r��
/proc/stat�
/proc/cpuinfo�rb��return_valuer�r^)�psutil._pslinux�_pslinuxZcpu_count_logicalrr�rFr�Z
call_countZ	call_argsrBr�r�r�r�)r�r��originalr�rLZcpuinfo_data�	fake_filer8r8r9�test_emulate_fallbacks�s.
�
4&
�4z0TestSystemCPUCountLogical.test_emulate_fallbacksN)r�r�r�r�mark�skipifr�path�existsrrrrrr(r8r8r8r9r	�s&�
�

�

�
r	c@s:eZdZejjed�dd�dd��Zdd�Zdd	�Z	d
S)�TestSystemCPUCountCoresrrr�cCs\td�}t�}|�d�D]&}|�d�s|�d�}|�|d�qtjdd�t|�ksXJ�dS)Nrrjr�,rxFr)r�setrCrk�addr�rrE)r�roZcore_idsrMrNr8r8r9rs

z*TestSystemCPUCountCores.test_against_lscpucCsdtj��}tjdgd��$}tj��}|js.J�Wd�n1sB0Y|dur`||ks`J�dS)N�	glob.glob�r#)r�r%�cpu_count_coresrr�r�)r�Zmeth_1r�Zmeth_2r8r8r9�
test_method_2s

(z%TestSystemCPUCountCores.test_method_2c	Cs�tjdgd��P}tjddd��"}tj��dus2J�Wd�n1sF0YWd�n1sd0Y|jsxJ�|js�J�dS)Nr1r2r�Tr�)rr�r�r%r3r�)r��m1�m2r8r8r9�test_emulate_nones
N
z)TestSystemCPUCountCores.test_emulate_noneN)
r�r�r�rr)r*rrr4r7r8r8r8r9r-s
�
	r-c@s�eZdZejjedd�dd��Zejjedd�ejjedd�dd���Z	ejjedd�dd	��Z
ejjedd�d
d��Zejjedd�dd
��ZdS)�TestSystemCPUFrequency�
not supportedr�csT�fdd�}tjj�tjd|dd��t��s2J�Wd�n1sF0YdS)Ncs|�d�rdS�|�SdS)N�&/sys/devices/system/cpu/cpufreq/policyFr�r+�Zorig_existsr8r9�path_exists_mock(s
zMTestSystemCPUFrequency.test_emulate_use_second_file.<locals>.path_exists_mock�os.path.existsT)r�r�)rr+r,rr�r��cpu_freq)r�r=r8r<r9�test_emulate_use_second_file%s�z3TestSystemCPUFrequency.test_emulate_use_second_filez,aarch64 does not report mhz in /proc/cpuinfocs��fdd�}tjj�z�tjd|d��xttj�t��}|sDJ|��|j	dksRJ�|j
dks`J�tjdd�D] }|j	dks~J�|j
dkslJ�qlWd�n1s�0YWttj�tt�nttj�tt�0dS)Ncs|�d�rdS�|�SdS)Nz/sys/devices/system/cpu/Frr;�Zos_path_existsr8r9r=;s
zITestSystemCPUFrequency.test_emulate_use_cpuinfo.<locals>.path_exists_mockr>r�T�Zpercpu)rr+r,rr�rr�r%r?�max�min)r�r=r��freqr8rAr9�test_emulate_use_cpuinfo4s 
0

�
z/TestSystemCPUFrequency.test_emulate_use_cpuinfoc	s��fdd�}t�trdnd}tj||d���tjddd��Vt��}|jd	ksRJ�|jd
krj|jdksjJ�|jd
kr�|jdks�J�Wd�n1s�0YWd�n1s�0YdS)
Ncs�|�d�r|�d�rt�d�S|�d�r<|�d�r<t�d�S|�d�rZ|�d�rZt�d�S|dkrlt�d	�S�|g|�Ri|��SdS)
N�/scaling_cur_freqr:�500000�/scaling_min_freq�600000�/scaling_max_freqs700000r scpu MHz     : 500��endswithrkr�r�r��r�r8r9r�Rs�
�
�

z;TestSystemCPUFrequency.test_emulate_data.<locals>.open_mockr�r�rr>Tr2�@@rB���@g�@�	rBrrr�r�r?�currentrErD�r�r�r�rFr8rOr9�test_emulate_dataPs

z(TestSystemCPUFrequency.test_emulate_datac
sb�fdd�}t�trdnd}tj||d��� tjddd���tjd	d
d���tjdd�}|djd
kslJ�|djdkr�|djdks�J�|djdkr�|djdks�J�|djdks�J�|djdkr�|djdks�J�|djdkr�|djdks�J�Wd�n1�s0YWd�n1�s40YWd�n1�sT0YdS)Ncs�|}|�d�r"|�d�r"t�d�S|�d�r@|�d�r@t�d�S|�d�r^|�d�r^t�d�S|�d�r||�d�r|t�d	�S|�d�r�|�d�r�t�d
�S|�d�r�|�d�r�t�d�S|dkr�t�d
�S�|g|�Ri|��SdS)NrHz'/sys/devices/system/cpu/cpufreq/policy0s100000rJ�200000rLs300000z'/sys/devices/system/cpu/cpufreq/policy1s400000rIrKr s#cpu MHz     : 100
cpu MHz     : 400rM)r�r�r��nrOr8r9r�ss8�
�
�
�
�
�

z@TestSystemCPUFrequency.test_emulate_multi_cpu.<locals>.open_mockr�r�rr>Tr2�!psutil._pslinux.cpu_count_logicalr�rCr�Y@rBgi@g�r@rxgy@rPrQrRrTr8rOr9�test_emulate_multi_cpuqs&�z-TestSystemCPUFrequency.test_emulate_multi_cpuc
s��fdd�}t�trdnd}tj||d���tjddd��Ttjd	d
d��&t��}|jdksbJ�Wd�n1sv0YWd�n1s�0YWd�n1s�0YdS)NcsX|�d�rttjd��n<|�d�r,t�d�S|dkr>t�d�S�|g|�Ri|��SdS)NrHrYz/cpuinfo_cur_freqrVr scpu MHz     : 200)rNr�r�r�r�r�r�rOr8r9r��s



zOTestSystemCPUFrequency.test_emulate_no_scaling_cur_freq_file.<locals>.open_mockr�r�rr>Tr2rXrx��)rBrrr�r�r?rSrTr8rOr9�%test_emulate_no_scaling_cur_freq_file�s
�z<TestSystemCPUFrequency.test_emulate_no_scaling_cur_freq_fileN)
r�r�r�rr)r*rr@rrGrUrZr\r8r8r8r9r8#s
�
 
3r8c@seZdZdd�ZdS)�TestSystemCPUStatscCs*td�}t��j}t||�dks&J�dS)N�
interruptsi�)r|r�Z	cpu_statsr^r�r�r8r8r9�test_interrupts�s
z"TestSystemCPUStats.test_interruptsN)r�r�r�r_r8r8r8r9r]�s	r]c@s&eZdZejjedd�dd��ZdS)�TestLoadAvgr9r�cCs�t��}td��}|����}Wd�n1s20Ytt|d�|d�dks\J�tt|d�|d�dks|J�tt|d�|d�dks�J�dS)Nz
/proc/loadavgrrxr�)r��
getloadavgrBr�rCr��float)r�r�rLZ
proc_valuer8r8r9�test_getloadavg�s
*  zTestLoadAvg.test_getloadavgN)r�r�r�rr)r*rrcr8r8r8r9r`�sr`c@seZdZdd�ZdS)�TestSystemNetIfAddrscCs�t����D]�\}}|D]�}|jtjkr<|jt|�ks�J�q|jtjkr�|jt	|�ksZJ�|j
t|�kslJ�|jdur�|jt
|�ks�J�q�t
|�dks�J�q|jtjkr|j�d�d}|t|�vsJ�qqdS)Nz0.0.0.0�%r)r�Znet_if_addrs�items�familyZAF_LINK�addressrcr*r+r:�netmaskr<�	broadcastr>rJrCrT)r�r��addrs�addrrhr8r8r9�test_ips�s
zTestSystemNetIfAddrs.test_ipsN)r�r�r�rmr8r8r8r9rd�srd�QEMU user not supportedc@sPeZdZejjed�dd�dd��Zdd�Zejjed�dd�dd	��Z	d
S)�TestSystemNetIfStats�ifconfig�ifconfig utility not availabler�c	Csrt����D]`\}}ztd|�}Wnty6Yq0|jd|vksNJ|��|jtt�	d|�d�ksJ�qdS)N�ifconfig %s�RUNNINGz(?i)MTU[: ](\d+)r)
r��net_if_statsrfr�RuntimeErrorZisup�mturnr�r�)r�r��statsror8r8r9�test_against_ifconfig	s�z*TestSystemNetIfStats.test_against_ifconfigc	Cs`t����D]N\}}td|��*}|jt|�����ks<J�Wd�q1sP0YqdS)Nz/sys/class/net/%s/mtu)r�rtrfrBrvrnr�r~)r�r�rwrLr8r8r9�test_mtuszTestSystemNetIfStats.test_mtuc	Cs
d}t����D]�\}}ztd|�}Wnty:Yq0t�d|�}|r�t|���dkr�|d7}t	|�
d����d��}t	|j
�d��}||ks�J�qt�d|�}|rt|���dkr|d7}t	|�
d������}t	|j
�d��}||ksJ�q|�s|�d	��dS)
Nrrrzflags=(\d+)?<(.*?)>r�rxr.z(.*)  MTU:(\d+)  Metric:(\d+)r�zno matches were found)r�rtrfrrur�rrErPr/�group�lowerrC�flags�fail)r�Z
matches_foundr�rwro�matchZifconfig_flagsZpsutil_flagsr8r8r9�
test_flagss(zTestSystemNetIfStats.test_flagsN)
r�r�r�rr)r*rrxryrr8r8r8r9ros
�

�roc@s0eZdZejjed�dd�e�dd���ZdS)�TestSystemNetIOCountersrprqr�c	Cs$dd�}tjddd�}|��D�]�\}}z||�}WntyJYqYn0t|j|d�dksfJ�t|j|d�dks�J�t|j|d	�d
ks�J�t|j|d�d
ks�J�t|j	|d�d
ks�J�t|j
|d�d
ks�J�t|j|d�d
k�sJ�t|j|d�d
ksJ�qdS)NcSs�i}td|�}tt�d|�d�|d<tt�d|�d�|d<tt�d|�d�|d<tt�d|�d	�|d
<tt�d|�d�|d<tt�d|�d	�|d
<tt�d|�d�|d<tt�d|�d�|d<|S)NrrzRX packets[: ](\d+)r�packets_recvzTX packets[: ](\d+)�packets_sentzerrors[: ](\d+)�errinrx�erroutzdropped[: ](\d+)�dropin�dropoutz#RX (?:packets \d+ +)?bytes[: ](\d+)�
bytes_recvz#TX (?:packets \d+ +)?bytes[: ](\d+)�
bytes_sent)rrnr�r�)Znicr�ror8r8r9rpEs&����z?TestSystemNetIOCounters.test_against_ifconfig.<locals>.ifconfigTF)Zpernic�nowrapr�i(r�r�r�r�r��
r�r�r�)
r��net_io_countersrfrur�r�r�r�r�r�r�r�r�)r�rpZnior�rwZifconfig_retr8r8r9rx@s&
����z-TestSystemNetIOCounters.test_against_ifconfigN)	r�r�r�rr)r*rrrxr8r8r8r9r�>s

�r�c@s8eZdZejded�ejddd�dd���Zdd	�Zd
S)�TestSystemNetConnectionsz psutil._pslinux.socket.inet_ntoprzpsutil._pslinux.supports_ipv6Fr2cCsPz*t�tjtj�}|�|j�|�d�Wntjy>Yn0tjdd�dS)N)z::1rZinet6��kind)	r*rJ�SOCK_STREAMZ
addCleanup�close�bind�errorr��net_connections)r�Z
supports_ipv6rKr7r8r8r9�test_emulate_ipv6_unsupportedtsz6TestSystemNetConnections.test_emulate_ipv6_unsupportedcCsPt�d�}td|i��&}tjdd�|js.J�Wd�n1sB0YdS)Na            0: 00000003 000 000 0001 03 462170 @/tmp/dbus-Qw2hMPIU3n
            0: 00000003 000 000 0001 03 35010 @/tmp/dbus-tB2X8h69BQ
            0: 00000003 000 000 0001 03 34424 @/tmp/dbus-cHy80Y8O
            000000000000000000000000000000000000000000000000000000
            z/proc/net/unix�unixr�)r�r�r�r�r�r�)r�r�r�r8r8r9�test_emulate_unix�s
z*TestSystemNetConnections.test_emulate_unixN)r�r�r�rr�rFr�r�r8r8r8r9r�rs
r�c@sBeZdZejjeed�dd�e�dd���Z	dd�Z
dd	�Zd
S)�TestSystemDiskPartitions�statvfszos.statvfs() not availabler�cCsvdd�}tjdd�D]\}t�|j�}||j�\}}}}|j|ksDJ�t|j|�tksZJ�t|j|�tksJ�qdS)NcSsztd|���}|�d�}|�d�|�d�}|��dd�\}}}}|dkrRd}t|�t|�t|�}}}||||fS)Nzdf -P -B 1 "%s"rjrr@�nonerY)rr~rC�poprn)r+rorprM�devrrrsrdr8r8r9�df�s


z4TestSystemDiskPartitions.test_against_df.<locals>.dfF)�all)	r��disk_partitions�
disk_usageZ
mountpointrrr�rdrrs)r�r��part�usagerqrrrsrdr8r8r9�test_against_df�sz(TestSystemDiskPartitions.test_against_dfc	Cstd��}|��}Wd�n1s&0Yd|vr`t��D]}|jdkr@q^q@|�d��n�t�d�}tj	d|dd��v}tj	dd	gd
��F}t��}|j
s�J�|j
s�J�|s�J�|djdks�J�Wd�n1s�0YWd�n1s�0YdS)Nz/proc/filesystems�zfszcouldn't find any ZFS partitionz
nodev	zfs
r�Tr"�$psutil._pslinux.cext.disk_partitions)z	/dev/sdb3�/r��rwr2r)rBr�r�r�Zfstyper}r�r�rr�r�)r�rLr�r�r'r5r6r�r8r8r9�test_zfs_fs�s,
&

��

z$TestSystemDiskPartitions.test_zfs_fsc	Cs�zttjddd��L}t�t��t��Wd�n1s:0Y|jsNJ�Wd�n1sb0YWdt_ndt_0dS)Nzos.path.realpathz
/non/existentr2�/proc)	rr�r�raisesrr�r�r��PROCFS_PATHr�r8r8r9�test_emulate_realpath_fail�s�&*z3TestSystemDiskPartitions.test_emulate_realpath_failN)r�r�r�rr)r*�hasattrrrr�r�r�r8r8r8r9r��s�r�c@sDeZdZdd�Zdd�Zdd�Zdd�Zd	d
�Zdd�Zd
d�Z	dS)�TestSystemDiskIoCountersc	Cs�d}td|i���tjddd���tjdd�}|jdks<J�|jd	ksJJ�|jd
tks\J�|j	dksjJ�|j
dksxJ�|jd
ks�J�|jdtks�J�|j
dks�J�|jdks�J�Wd�n1s�0YWd�n1s�0YdS)Nz+   3     0   1 hda 2 3 4 5 6 7 8 9 10 11 12�/proc/diskstats�!psutil._pslinux.is_storage_deviceTr2F�r�rxr�r�r@ryr��r��r�rr�r��disk_io_counters�
read_count�read_merged_count�
read_bytes�SECTOR_SIZE�	read_time�write_count�write_merged_count�write_bytes�
write_time�	busy_time�r�r�r�r8r8r9�test_emulate_kernel_2_4�s�z0TestSystemDiskIoCounters.test_emulate_kernel_2_4c	Cs�d}td|i���tjddd���tjdd�}|jdks<J�|jd	ksJJ�|jd
tks\J�|j	dksjJ�|j
dksxJ�|jd
ks�J�|jdtks�J�|j
dks�J�|jdks�J�Wd�n1s�0YWd�n1s�0YdS)Nz'   3    0   hda 1 2 3 4 5 6 7 8 9 10 11r�r�Tr2Fr�rxr�r�r@ryrr�r�r�r�r�r8r8r9�test_emulate_kernel_2_6_full�s�z5TestSystemDiskIoCounters.test_emulate_kernel_2_6_fullc	Cs�tddi���tjddd���tjdd�}|jdks8J�|jd	tksJJ�|jd
ksXJ�|j	dtksjJ�|j
dksxJ�|jdks�J�|jdks�J�|j
dks�J�|jdks�J�Wd�n1s�0YWd�n1s�0YdS)
Nr�z   3    1   hda 1 2 3 4r�Tr2Fr�rxr�r�r@r)r�rr�r�r�r�r�r�r�r�r�r�r�r�r�)r�r�r8r8r9�test_emulate_kernel_2_6_limited�s�z8TestSystemDiskIoCounters.test_emulate_kernel_2_6_limitedc	Cs�t�d�}td|i���tjddd��vtjddd�}t|�dksFJ�|d	jd
ksXJ�|djd
ksjJ�|d	j	dks|J�|dj	dks�J�Wd�n1s�0YWd�n1s�0YdS)
N�x            3    0   nvme0n1 1 2 3 4 5 6 7 8 9 10 11
            3    0   nvme0n1p1 1 2 3 4 5 6 7 8 9 10 11
            r�r�Fr2T��perdiskr�r��nvme0n1rxZ	nvme0n1p1ry)
r�r�r�rr�r�r�rEr�r�r�r8r8r9�test_emulate_include_partitionss
�z8TestSystemDiskIoCounters.test_emulate_include_partitionsc	Cst�d�}td|i��Xtjddd��*tjddd�}|dusBJ�Wd�n1sV0YWd�n1st0Ydd�}t�d�}td|i��jtjdd	|d
��:tjddd�}|jdks�J�|jdks�J�Wd�n1s�0YWd�n1�s0YdS)
Nr�r�r�Fr2r�cSs|dkS)Nr�r8)r�r8r8r9�is_storage_device4szSTestSystemDiskIoCounters.test_emulate_exclude_partitions.<locals>.is_storage_deviceTr�rxry)	r�r�r�rr�r�r�r�r�)r�r�r�r�r8r8r9�test_emulate_exclude_partitions%s$
�H
�z8TestSystemDiskIoCounters.test_emulate_exclude_partitionscCshdd�}tjdd�}tjdd|d��tjdd�}Wd�n1sF0Yt|�t|�ksdJ�dS)NcSs|dkS)Nr�r8r;r8r8r9r,Fsz?TestSystemDiskIoCounters.test_emulate_use_sysfs.<locals>.existsT)r��psutil._pslinux.os.path.existsr�)r�r�rr�rE)r�r,ZwprocfsZwsysfsr8r8r9�test_emulate_use_sysfsEs�*z/TestSystemDiskIoCounters.test_emulate_use_sysfsc	Csndd�}tjdd|d��Bt�t��t��Wd�n1sB0YWd�n1s`0YdS)NcSsdS)NFr8r;r8r8r9r,Qsz>TestSystemDiskIoCounters.test_emulate_not_impl.<locals>.existsr�Tr�)rr�rr��NotImplementedErrorr�r�)r�r,r8r8r9�test_emulate_not_implPs�z.TestSystemDiskIoCounters.test_emulate_not_implN)
r�r�r�r�r�r�r�r�r�r�r8r8r8r9r��s r�c@sjeZdZdd�Zdd�Zejjedd�dd��Z	ejje
d	�d
d�ejjedd�dd���Zd
d�ZdS)�TestRootFsDeviceFindercCs(t�d�j}t�|�|_t�|�|_dS)Nr�)rr�st_dev�major�minor)r�r�r8r8r9�setUp]szTestRootFsDeviceFinder.setUpcCs�t�}tj�d�r|��n2t�t��|��Wd�n1sD0Ytj�d|j|j	f�rp|�
�n2t�t��|�
�Wd�n1s�0Y|��dS�Nz/proc/partitionsz/sys/dev/block/%s:%s/uevent)rrr+r,�ask_proc_partitionsrr�rr�r��ask_sys_dev_block�ask_sys_class_block)r��finderr8r8r9�test_call_methodsbs
&�
&z(TestRootFsDeviceFinder.test_call_methodszunsupported on GITHUB_ACTIONSr�cCs�t�}|��dusJ�d}}}tj�d�r6|��}tj�d|j|jf�rV|��}|�	�}|ph|ph|}|r~|r~||ks~J�|r�|r�||ks�J�|r�|r�||ks�J�dSr�)
r�findrr+r,r�r�r�r�r�)r�r�r�r��c�baser8r8r9�test_comparisonsrs"�z'TestRootFsDeviceFinder.test_comparisonsZfindmntzfindmnt utility not availablecCs"t���}td�}||ksJ�dS)Nzfindmnt -o SOURCE -rn /)rr�r)r�r�Z
findmnt_valuer8r8r9�test_against_findmnt�s
z+TestRootFsDeviceFinder.test_against_findmntcCs�tjddgd��\}t��d}|js(J�tsP|jdks:J�|jt���ks^J�n|jdks^J�Wd�n1sr0YdS)Nr�)�	/dev/rootr�Zext4r�r2rr�)	rr�r�r�r�rZdevicerr�)r�r�r�r8r8r9�test_disk_partitions_mocked�s�
z2TestRootFsDeviceFinder.test_disk_partitions_mockedN)
r�r�r�r�r�rr)r*rr�rr�r�r8r8r8r9r�[s

�r�c@sbeZdZdd�Zdd�Zdd�Zdd�Zd	d
�Zdd�Ze	�e
jje
d
d�dd���Zdd�ZdS)�TestMisccCs(td�}t��}t|�t|�ks$J�dS)Nz	boot time)r|r��	boot_timernr�r8r8r9�test_boot_time�szTestMisc.test_boot_timec	sb|��}t�|�ttj�|d�d��.}|�d�|�d�|�d�Wd�n1sZ0Y�z�t��fdd�}tr�dnd	}tj	||d
���~t
t�t�
t��t��Wd�n1s�0Yt�
t��tjdd�Wd�n1s�0Yt�
t��t��Wd�n1�s00Yt�
t��tjdd�Wd�n1�sh0Yt�
t��t��Wd�n1�s�0Yt�
t��tjdd�Wd�n1�s�0Y|t_t��d
k�s�J�tt���d
k�sJ�tjdd�}t|�d
k�s*J�tjdd�}ttt|��d
k�sNJ�ttj�|d�d��.}|�d�|�d�|�d�Wd�n1�s�0Yt��d
k�s�J�ttjdd��d
k�s�J�tt���d
k�s�J�ttttjdd���d
k�sJ�Wd�n1�s0YWt�|�t
t�nt�|�t
t�0tjdk�s^J�dS)Nrr�zcpu   0 0 0 0 0 0 0 0 0 0
zcpu0  0 0 0 0 0 0 0 0 0 0
zcpu1  0 0 0 0 0 0 0 0 0 0
cs,|�d�rttjd���|g|�Ri|��S)Nr�zrejecting access for test)rkr�r�r�r�rOr8r9r��s
z4TestMisc.test_no_procfs_on_import.<locals>.open_mockr�r�rTrCrzcpu   1 0 0 0 0 0 0 0 0 0
zcpu0  1 0 0 0 0 0 0 0 0 0
zcpu1  1 0 0 0 0 0 0 0 0 0
r�)�
get_testfnr�mkdirrBr+rH�writerrr�rr�rr�r�r�cpu_percent�cpu_times_percentr��sumr��shutil�rmtree)r�Z	my_procfsrLr�r�Zper_cpu_percentZper_cpu_times_percentr8rOr9�test_no_procfs_on_import�sZ


(&*(,(,

*�*

�

z!TestMisc.test_no_procfs_on_importcCsjt�d���}td|i��B}t��|js.J�tjdd�t��tjdd�Wd�n1sb0Yt�d���}td|i���t��}|js�J�tjdd�}t��}tjdd�}|dks�J�t|�dks�J�t|�dks�J�t|�dks�J�tt	t|��dk�sJ�tt	t|��dk�s&J�|j
dk�s6J�|jdk�sFJ�Wd�n1�s\0YdS)Nz~            cpu   0 0 0 0 0 0 0 1 0 0
            cpu0  0 0 0 0 0 0 0 1 0 0
            cpu1  0 0 0 0 0 0 0 1 0 0
            rTrCz~            cpu   1 0 0 0 0 0 0 0 0 0
            cpu0  1 0 0 0 0 0 0 0 0 0
            cpu1  1 0 0 0 0 0 0 0 0 0
            rrY)r�r�r�r�r�r�r�r�r�r�r�user)r�r�r�r�Zcpu_percent_percpur�Zcpu_times_percent_percpur8r8r9�test_cpu_steal_decrease�s,
*
z TestMisc.test_cpu_steal_decreasec	Csptjddd��N}t�t��tj��Wd�n1s:0Y|jsNJ�Wd�n1sb0YdS)Nr�Tr�)	rr�rr�rur�r%r�r�r�r8r8r9�test_boot_time_mockeds(zTestMisc.test_boot_time_mockedcCs t��D]}|jdvsJ�qdS)N)z:0z:0.0)r�Zusers�host)r�r�r8r8r9�
test_usersszTestMisc.test_userscCs|��}t�|��z�|t_t�t��t��Wd�n1sD0Yt�t��t�	�Wd�n1sv0Yt�t��tj	dd�Wd�n1s�0Yt�t��t�
�Wd�n1s�0Yt�t��t��Wd�n1�s0Yt�t��t��Wd�n1�sF0Yt�t��t�
�Wd�n1�sz0Yt�t��t��Wd�n1�s�0Yt�tj��t��Wd�n1�s�0YWdt_ndt_0dS)NTrCr�)r�rr�r�r�rr�r�r�rr�r�r�rtr�Z
NoSuchProcess�Process)r�Ztdirr8r8r9�test_procfs_path s.
&&*&((((*zTestMisc.test_procfs_pathzskip if pytest-parallelr�cCs�t��~t��}|��}t|�tr&dndks0J�t|dd�d�dj}|j|ksTJ�t�|�}|�	�|t�
�vsvJ�Wd�n1s�0YdS)Nr�r�cSs|jSrU)�idrVr8r8r9�<lambda>Hr^z)TestMisc.test_issue_687.<locals>.<lambda>)�keyrx)rr�r��threadsrEr�sortedr��pidZas_dictZpids)r��pr��tid�ptr8r8r9�test_issue_687<s
zTestMisc.test_issue_687cCsLtddi��,}t�t���s J�|js*J�Wd�n1s>0YdS)N�/proc/%s/statusrY)r�r�Z
pid_existsr�getpidr�r�r8r8r9�test_pid_exists_no_proc_statusNsz'TestMisc.test_pid_exists_no_proc_statusN)r�r�r�r�r�r�r�r�r�rrr)r*rr�r�r8r8r8r9r��sB'r�z
no batteryc@sjeZdZejjed�dd�dd��Zdd�Zdd	�Z	d
d�Z
dd
�Zdd�Zdd�Z
dd�Zdd�ZdS)�TestSensorsBatteryZacpizacpi utility not availabler�cCsHtd�}t|�d�d���dd��}t��j}t||�dksDJ�dS)Nzacpi -br.rxrerY)	rrnrCr~�replacer��sensors_battery�percentr�)r�roZ
acpi_valuer�r8r8r9�test_percent_s
zTestSensorsBattery.test_percentcs~�fdd�}t�trdnd}tj||d��@}t��jdus>J�t��jtjksRJ�|j	s\J�Wd�n1sp0YdS)Ncs.|�d�rt�d�S�|g|�Ri|��SdS)N�z
AC0/onlinez	AC/online�1�rNr�r�r�rOr8r9r�hs

z@TestSensorsBattery.test_emulate_power_plugged.<locals>.open_mockr�r�rT)
rBrrr�r�r�
power_pluggedZsecsleftZPOWER_TIME_UNLIMITEDr��r�r�r�r�r8rOr9�test_emulate_power_pluggedfs��z-TestSensorsBattery.test_emulate_power_pluggedcsj�fdd�}t�trdnd}tj||d��,}t��jdus>J�|jsHJ�Wd�n1s\0YdS)NcsF|�d�rttjd��n*|�d�r,t�d�S�|g|�Ri|��SdS)NrrY�/statusZcharging�rNr�r�r�r�r�r�rOr8r9r�{s



zBTestSensorsBattery.test_emulate_power_plugged_2.<locals>.open_mockr�r�rT�rBrrr�r�rrr�r	r8rOr9�test_emulate_power_plugged_2xsz/TestSensorsBattery.test_emulate_power_plugged_2csj�fdd�}t�trdnd}tj||d��,}t��jdus>J�|jsHJ�Wd�n1s\0YdS)Ncs.|�d�rt�d�S�|g|�Ri|��SdS)Nr�0rr�rOr8r9r��s

zDTestSensorsBattery.test_emulate_power_not_plugged.<locals>.open_mockr�r�rFr
r	r8rOr9�test_emulate_power_not_plugged�sz1TestSensorsBattery.test_emulate_power_not_pluggedcsj�fdd�}t�trdnd}tj||d��,}t��jdus>J�|jsHJ�Wd�n1s\0YdS)NcsF|�d�rttjd��n*|�d�r,t�d�S�|g|�Ri|��SdS)NrrYrZdischargingrr�rOr8r9r��s



zFTestSensorsBattery.test_emulate_power_not_plugged_2.<locals>.open_mockr�r�rFr
r	r8rOr9� test_emulate_power_not_plugged_2�sz3TestSensorsBattery.test_emulate_power_not_plugged_2csj�fdd�}t�trdnd}tj||d��,}t��jdus>J�|jsHJ�Wd�n1s\0YdS)NcsF|�d�rttjd��n*|�d�r,t�d�S�|g|�Ri|��SdS)N)�"/sys/class/power_supply/AC0/online�!/sys/class/power_supply/AC/onlinerY�#/sys/class/power_supply/BAT0/statuss???)rkr�r�r�r�r�r�rOr8r9r��s



zETestSensorsBattery.test_emulate_power_undetermined.<locals>.open_mockr�r�rr
r	r8rOr9�test_emulate_power_undetermined�sz2TestSensorsBattery.test_emulate_power_undeterminedcCsLtddi��,}t��jdks J�|js*J�Wd�n1s>0YdS)N�(/sys/class/power_supply/BAT0/energy_fullrr)r�r�rrr�r�r8r8r9�test_emulate_energy_full_0�s�z-TestSensorsBattery.test_emulate_energy_full_0c
Cs�tdttjd����tdttjd���Ntddi��"t��jdksHJ�Wd�n1s\0YWd�n1sz0YWd�n1s�0YdS)NrrYz(/sys/class/power_supply/BAT0/charge_fullz%/sys/class/power_supply/BAT0/capacitys88�X)r�r�r�r�r�r�rr�r�r8r8r9�"test_emulate_energy_full_not_avail�s
�
��z5TestSensorsBattery.test_emulate_energy_full_not_availc
Cs�tdttjd����tdttjd���Ttdttjd���"t��jdusNJ�Wd�n1sb0YWd�n1s�0YWd�n1s�0YdS)NrrYrr)r�r�r�r�r�rrrr8r8r9�test_emulate_no_power�s��
�z(TestSensorsBattery.test_emulate_no_powerN)r�r�r�rr)r*rrr
rrrrrrrr8r8r8r9r\s
rc@seZdZdd�ZdS)�TestSensorsBatteryEmulatedc	s��fdd�}t�trdnd}tjddgd��N}tj||d�� }t��dusNJ�Wd�n1sb0YWd�n1s�0Y|js�J�|js�J�dS)	NcsV|�d�rt�d�S|�d�r(t�d�S|�d�r<t�d�S�|g|�Ri|��SdS)Nz/energy_nowZ60000000z
/power_now�0z/energy_fullZ60000001�rNr�r�r�rOr8r9r��s





z5TestSensorsBatteryEmulated.test_it.<locals>.open_mockr�r�z
os.listdirZBAT0r2r)rBrrr�r�rr�)r�r�r�ZmlistdirZmopenr8rOr9�test_it�s
L
z"TestSensorsBatteryEmulated.test_itN)r�r�r�rr8r8r8r9r�src@seZdZdd�Zdd�ZdS)�TestSensorsTemperaturesc	s��fdd�}t�trdnd}tj||d���tjddgd��Xt��d	d
}|jdks\J�|jdksjJ�|jd
ksxJ�|j	dks�J�Wd�n1s�0YWd�n1s�0YdS)Ncs~|�d�rt�d�S|�d�r(t�d�S|�d�r<t�d�S|�d�rPt�d�S|�d	�rdt�d
�S�|g|�Ri|��SdS)N�/namer�z/temp1_label�labelz/temp1_input�30000z
/temp1_maxs40000z/temp1_crit�50000)rNr�r�r�r�rOr8r9r��s









zCTestSensorsTemperatures.test_emulate_class_hwmon.<locals>.open_mockr�r�rr1z/sys/class/hwmon/hwmon0/temp1r2r�rr"�>@gD@�I@�
rBrrr�r�Zsensors_temperaturesr"rS�high�critical)r�r�r��tempr8rOr9�test_emulate_class_hwmon�s�z0TestSensorsTemperatures.test_emulate_class_hwmonc	s��fdd�}dd�}t�tr dnd}tj||d���tjdd	|d
��Xt��dd}|jd
ksdJ�|jdksrJ�|jdks�J�|j	dks�J�Wd�n1s�0YWd�n1s�0YdS)Ncsj|�d�rt�d�S|�d�r(t�d�S|�d�r<t�d�S|�d�rPt�d�S�|g|�Ri|��SdS)	NZ0_tempr$r*r#Z0_typer)�typer�)rNr�r�r�r�rOr8r9r�s







zETestSensorsTemperatures.test_emulate_class_thermal.<locals>.open_mockcSs:|dkrgS|dkrgS|dkr&dgS|dkr6ddgSgS)Nz/sys/class/hwmon/hwmon*/temp*_*z&/sys/class/hwmon/hwmon*/device/temp*_*z /sys/class/thermal/thermal_zone*z /sys/class/thermal/thermal_zone0z,/sys/class/thermal/thermal_zone0/trip_point*z2/sys/class/thermal/thermal_zone1/trip_point_0_typez2/sys/class/thermal/thermal_zone1/trip_point_0_tempr8r;r8r8r9�	glob_mock$s�zETestSensorsTemperatures.test_emulate_class_thermal.<locals>.glob_mockr�r�rr1Tr�r�rrYr%r&r')r�r�r-r�r*r8rOr9�test_emulate_class_thermalsz2TestSensorsTemperatures.test_emulate_class_thermalN)r�r�r�r+r.r8r8r8r9r �sr c@seZdZdd�ZdS)�TestSensorsFansc	s��fdd�}t�trdnd}tj||d��ltjddgd��<t��d	d
}|jdks\J�|jdksjJ�Wd�n1s~0YWd�n1s�0YdS)
NcsV|�d�rt�d�S|�d�r(t�d�S|�d�r<t�d�S�|g|�Ri|��SdS)Nr!r�z/fan1_labelr"z/fan1_input�2000rr�rOr8r9r�@s





z4TestSensorsFans.test_emulate_data.<locals>.open_mockr�r�rr1z/sys/class/hwmon/hwmon2/fan1r2r�rr"i�)rBrrr�r�Zsensors_fansr"rS)r�r�r�Zfanr8rOr9rU?s
�z!TestSensorsFans.test_emulate_dataN)r�r�r�rUr8r8r8r9r/=sr/c@s�eZdZe�dd��Zdd�Zejje	dd�dd��Z
d	d
�Zdd�Zd
d�Z
dd�Zdd�Zdd�Zdd�Zdd�Zdd�Zdd�Zdd�Zejjedd�d d!��Zd"d#�Zd$d%�Zd&d'�Zd(S))�TestProcesscCs�|��}tj�|j���\}}}t�|j�jdd�}t|tdd�|D���dksVJ�t|tdd�|D���dksxJ�t|tdd�|D���dks�J�dS)NF)ZgroupedcSsg|]}|j|j�qSr8)Z
private_dirtyZ
private_cleanrvr8r8r9r]br^z?TestProcess.test_parse_smaps_vs_memory_maps.<locals>.<listcomp>r�cSsg|]
}|j�qSr8)�pssrvr8r8r9r]er^cSsg|]
}|j�qSr8)r�rvr8r8r9r]fr^)	Zspawn_testprocr�r%r�r��_parse_smaps�memory_mapsr�r�)r�Zsproc�ussr2r��mapsr8r8r9�test_parse_smaps_vs_memory_maps\s��"z+TestProcess.test_parse_smaps_vs_memory_mapscCs�t�d���}tdt��|i��\}tj�t���}|�	�\}}}|j
sLJ�|dksXJ�|dksdJ�|dkspJ�Wd�n1s�0YdS)Nan            fffff0 r-xp 00000000 00:00 0                  [vsyscall]
            Size:                  1 kB
            Rss:                   2 kB
            Pss:                   3 kB
            Shared_Clean:          4 kB
            Shared_Dirty:          5 kB
            Private_Clean:         6 kB
            Private_Dirty:         7 kB
            Referenced:            8 kB
            Anonymous:             9 kB
            LazyFree:              10 kB
            AnonHugePages:         11 kB
            ShmemPmdMapped:        12 kB
            Shared_Hugetlb:        13 kB
            Private_Hugetlb:       14 kB
            Swap:                  15 kB
            SwapPss:               16 kB
            KernelPageSize:        17 kB
            MMUPageSize:           18 kB
            Locked:                19 kB
            VmFlags: rd ex
            �/proc/%s/smapsilr�i<)r�r�r�r�rr�r�r%r�r3r�)r�r�r�r�r5r2r�r8r8r9�test_parse_smaps_mockedhs
z#TestProcess.test_parse_smaps_mockedzunreliable on PYPYr�cCsdd�}|��}t|d��"||�jdks.J�Wd�n1sB0Yt|��"||�jdkshJ�Wd�n1s|0Yt|d��"||�jdks�J�Wd�n1s�0Yt|d��"||�jdks�J�Wd�n1s�0Yt|d��$||�jdk�sJ�Wd�n1�s40Yt|d��$||�jdk�s^J�Wd�n1�st0Yt�rt|�t|d	��$||�jdk�s�J�Wd�n1�s�0Yt|�t|d
��$||�jdk�s�J�Wd�n1�s
0YdS)NcSsZt��}t��t}|��D].}|jtj�|�kr:|St��|krqqqtd��dS)Nztimeout looking for test file)	r�r��timer	�
open_filesr+r�abspathru)�fnamer�Z	giveup_at�filer8r8r9�
get_test_file�sz7TestProcess.test_open_files_mode.<locals>.get_test_filer��rr�zr+zw+za+rWzx+)r�rB�moderr)r�r?Ztestfnr8r8r9�test_open_files_mode�s*0
000444z TestProcess.test_open_files_modec	s�t�������t|��d���t��fdd��tjdtt	j
d�d��*}���gksZJ�|jsdJ�Wd�n1sx0Ytjdtt	jd�d��*}���gks�J�|js�J�Wd�n1s�0YWd�n1s�0YdS)Nr�cst����t��kSrU�rEr;r8��filesr�r8r9r��r^z7TestProcess.test_open_files_file_gone.<locals>.<lambda>�psutil._pslinux.os.readlinkrYr)
r�r�r;rBr�rrr��OSErrorr�r�r��EINVALr�r8rDr9�test_open_files_file_gone�s$
�(
�z%TestProcess.test_open_files_file_gonec	s�t�������t|��d��~t��fdd��tr:dnd}tj|t	t
jd�d��*}���gksfJ�|jspJ�Wd�n1s�0YWd�n1s�0YdS)Nr�cst����t��kSrUrCr8rDr8r9r��r^z5TestProcess.test_open_files_fd_gone.<locals>.<lambda>r�r�rYr)
r�r�r;rBr�rrrr�r�r�r�r��r�r�r�r8rDr9�test_open_files_fd_gone�s�z#TestProcess.test_open_files_fd_gonec
s�t�������t|��d���t��fdd��d}tj|tt	j
d�d��T}t�d��*���gksjJ�|jstJ�Wd�n1s�0YWd�n1s�0YWd�n1s�0YdS)Nr�cst����t��kSrUrCr8rDr8r9r��r^z:TestProcess.test_open_files_enametoolong.<locals>.<lambda>rFrYr�psutil._pslinux.debug)r�r�r;rBr�rrr�rGr��ENAMETOOLONGr�rJr8rDr9�test_open_files_enametoolong�s�z(TestProcess.test_open_files_enametoolongcCsXtjdid��6}tj�t�����dus,J�|js6J�Wd�n1sJ0YdS)Nz)psutil._pslinux._psposix.get_terminal_mapr2)	rr�r�r%r�rr�Zterminalr�r�r8r8r9�test_terminal_mocked�s�z TestProcess.test_terminal_mockedcCs�t��}t�d�}tjd|dd��.}|��ddgks8J�|jsBJ�Wd�n1sV0Yt�d�}tjd|dd��.}|��gd�ks�J�|js�J�Wd�n1s�0YdS)	Nzfoobarr�Tr"�foo�barz	foobar�rPrQrY�r�r�r�r�rr�Zcmdliner��r�r�r'r�r8r8r9�test_cmdline_mocked�s
�(
�zTestProcess.test_cmdline_mockedcCs�t��}t�d�}tjd|dd��.}|��ddgks8J�|jsBJ�Wd�n1sV0Yt�d�}tjd|dd��.}|��gd�ks�J�|js�J�Wd�n1s�0YdS)	Nzfoo bar r�Tr"rPrQz	foo bar  rRrSrTr8r8r9�test_cmdline_spaces_mockeds
�(
�z&TestProcess.test_cmdline_spaces_mockedcCsdt��}t�d�}tjd|dd��.}|��ddgks8J�|jsBJ�Wd�n1sV0YdS)Nzfoo barr�Tr"rPrQrSrTr8r8r9�test_cmdline_mixed_separatorss
�z)TestProcess.test_cmdline_mixed_separatorscCsZtjddd��8t����dks$J�t����dks8J�Wd�n1sL0YdS)NrFz/home/foo (deleted)r2z	/home/foo)rr�r�r��exe�cwdrr8r8r9�!test_readlink_path_deleted_mocked!s
�z-TestProcess.test_readlink_path_deleted_mockedc	s��fdd�}t�trdnd}tj||d��2}t����}|jsBJ�|gksNJ�Wd�n1sb0Y�fdd�}tj||d��Ht�	tj
��t����Wd�n1s�0YWd�n1s�0YdS)Ncs:|�dt���r ttjd��n�|g|�Ri|��SdS�Nz
/proc/%s/taskrY)rkrr�r�r�r�r�rOr8r9�open_mock_1-sz4TestProcess.test_threads_mocked.<locals>.open_mock_1r�r�rcs:|�dt���r ttjd��n�|g|�Ri|��SdSr[)rkrr�r�r��EPERMr�rOr8r9�open_mock_2<sz4TestProcess.test_threads_mocked.<locals>.open_mock_2)rBrrr�r�r�r�r�rr�ZAccessDenied)r�r\r�r�r�r^r8rOr9�test_threads_mocked(s
*zTestProcess.test_threads_mockedc	Cs�tjdttjd�d��`}tjdgd��2t����}|js>J�|dksJJ�Wd�n1s^0YWd�n1s|0YdS)Nzpsutil._pslinux.readlinkrYrzpsutil._pslinux.Process.cmdliner2)	rr�rGr�r�r�r�rXr�)r�r�r�r8r8r9�test_exe_mockedFs��
zTestProcess.test_exe_mockedc	Cs�tdt��ttjd���T}t��}t�	t
��|��Wd�n1sL0Y|js`J�Wd�n1st0YdS)Nr8rY)
r�rr�r�r�r�r�r�rr�rr4r��r�r�r�r8r8r9�test_issue_1014Rs�&zTestProcess.test_issue_1014r9c
Cs�tjdttjd�d���}tjddd��X}t��}|��t�	tj
��}|�tj�Wd�n1sf0YWd�n1s�0YWd�n1s�0Y|j
s�J�|j
s�J�|jj|jks�J�|jj|��ks�J�dS)Nzpsutil._pslinux.prlimitrYrz"psutil._pslinux.Process._is_zombieTr2)rr�rGr�ZENOSYSr�r�r�rr�Z
ZombieProcessZrlimitZ
RLIMIT_NOFILEr�r
r�)r�r5r6r��cmr8r8r9�test_rlimit_zombie]s ��f

zTestProcess.test_rlimit_zombiecCsgd�}d�|���}tdt��|i���t��}|��dksDJ�|��tj	ksVJ�|�
�dksfJ�|��dtt�
�ks�J�|��}|jdtks�J�|jdtks�J�|jd	tks�J�|jd
tks�J�|jdtks�J�|��dks�J�Wd�n1�s
0YdS)N)*rz(cat)�Z�1rrrrrrrrr�2�3�4�5rrrr�6rrrrrrrrrrrrrrrrrkrr�7r}z
/proc/%s/stat�catrxrr�r�r@ryr�)rHr�r�rr�r�r�r��statusZ
STATUS_ZOMBIE�ppidZcreate_timerr�rr��system�
children_user�children_systemZiowaitZcpu_num)r�r�r�r��cpur8r8r9�test_stat_file_parsingqs,z"TestProcess.test_stat_file_parsingcCst�d���}tdt��|i���t��}|��j	dks>J�|��j
dksPJ�|��dks`J�|��}|j
dksvJ�|jdks�J�|jdks�J�|��}|j
d	ks�J�|jd
ks�J�|jdks�J�|j��ttd��ks�J�Wd�n1s�0YdS)
Nz�            Uid:	1000	1001	1002	1003
            Gid:	1004	1005	1006	1007
            Threads:	66
            Cpus_allowed:	f
            Cpus_allowed_list:	0-7
            voluntary_ctxt_switches:	12
            nonvoluntary_ctxt_switches:	13r�r��
�Bi�i�i�i�i�i�r�)r�r�r�r�rr�r�r��num_ctx_switches�	voluntary�involuntary�num_threads�uids�realZ	effectiveZsaved�gids�_proc�_get_eligible_cpus�listrG)r�r�r�r{r}r8r8r9�test_status_file_parsing�sz$TestProcess.test_status_file_parsingc	Cs�tjdttjd�d��\}t��}t�d��*|��gks<J�|jsFJ�Wd�n1sZ0YWd�n1sx0YdS)NrFrYrrL)	rr�rGr�rMr�r�r�r�rar8r8r9�!test_net_connections_enametoolong�s
�z-TestProcess.test_net_connections_enametoolongN)r�r�r�rr7r9rr)r*rrBrIrKrNrOrUrVrWrZr_r`rbr
rdrtr�r�r8r8r8r9r1Zs*
"
"
<r1c@s�eZdZdZedd��Zdd�Zdd�Zej	j
edd	�d
d��Zdd
�Z
dd�Zdd�Zdd�Ze�dd��Zdd�Zdd�ZdS)�TestProcessAgainstStatusa/proc/pid/stat and /proc/pid/status have many values in common.
    Whenever possible, psutil uses /proc/pid/stat (it's faster).
    For all those cases we check that the value found in
    /proc/pid/stat (by psutil) matches the one found in
    /proc/pid/status.
    cCst��|_dSrU)r�r��proc)�clsr8r8r9�
setUpClass�sz#TestProcessAgainstStatus.setUpClassc
Cs�tj�d|jj���}|D]j}|��}|�|�r|�d�d}zt|�WWd�St	y�|YWd�S0qt	d|��Wd�n1s�0YdS)Nr��	r�z
can't find %r)
r��_psplatform�	open_textr�r�r~rk�	partitionrnrF)r�Z	linestartrLrMr
r8r8r9�read_status_file�s
�
z)TestProcessAgainstStatus.read_status_filecCs |�d�}|j��|ksJ�dS)NzName:)r�r�r��r�r
r8r8r9�	test_name�s
z"TestProcessAgainstStatus.test_namernr�cCsH|�d�}||�d�d|�d��}|�dd�}|j��|ksDJ�dS)NzState:�(rx�)r}r)r�r��rfindrr�rnr�r8r8r9�test_status�s
z$TestProcessAgainstStatus.test_statuscCs |�d�}|j��|ksJ�dS)NzPPid:)r�r�ror�r8r8r9�	test_ppid�s
z"TestProcessAgainstStatus.test_ppidcCs |�d�}|j��|ksJ�dS)NzThreads:)r�r�rzr�r8r8r9�test_num_threads�s
z)TestProcessAgainstStatus.test_num_threadscCs:|�d�}ttt|��dd���}|j��|ks6J�dS)NzUid:rxr@)r�r�r�rnrCr�r{r�r8r8r9�	test_uids	s
z"TestProcessAgainstStatus.test_uidscCs:|�d�}ttt|��dd���}|j��|ks6J�dS)NzGid:rxr@)r�r�r�rnrCr�r}r�r8r8r9�	test_gids	s
z"TestProcessAgainstStatus.test_gidscCs@|�d�}|j��j|ksJ�|�d�}|j��j|ks<J�dS)Nzvoluntary_ctxt_switches:znonvoluntary_ctxt_switches:)r�r�rwrxryr�r8r8r9�test_num_ctx_switches	s

z.TestProcessAgainstStatus.test_num_ctx_switchescCsN|�d�}dt|�vrJtt|�d��\}}|j��tt||d��ksJJ�dS)N�Cpus_allowed_list:rrx)	r�r�r�rnrCr�Zcpu_affinityr�rG)r�r
Zmin_Zmax_r8r8r9�test_cpu_affinity	s
z*TestProcessAgainstStatus.test_cpu_affinitycCsf|�d�}t�d��}|jj��Wd�n1s60Ydt|�vrX|jrbJ�n
|jsbJ�dS)Nr�zpsutil._pslinux.per_cpu_timesr)r�rr�r�r~rr�r�)r�r
r�r8r8r9�test_cpu_affinity_eligible_cpus	s
*z8TestProcessAgainstStatus.test_cpu_affinity_eligible_cpusN)r�r�r��__doc__�classmethodr�r�r�rr)r*rr�r�r�r�r�rr�r�r�r8r8r8r9r��s


r�c@seZdZdd�ZdS)�	TestUtilscCsPtjddd��.}tj�d�dks$J�|js.J�Wd�n1sB0YdS)Nzos.readlinkz
foo (deleted)r2rQrP)rr�r�r��readlinkr�r�r8r8r9�
test_readlink)	szTestUtils.test_readlinkN)r�r�r�r�r8r8r8r9r�'	sr�)fr��
__future__rrlr-r��globr�rr�r�r*r3r�r:r�r�rZpsutil._compatrrrZpsutil.testsrrr	r
rrr
rrrrrrrrrrrrrrrrr$rrr r!r+r<�dirname�__file__ZHEREr2ZSIOCGIFCONFr`r;r=r�ZEMPTY_TEMPERATURESr:r<r>rTrcrur{r|r��contextmanagerr�r�r)r*r�r�r�r�r�r	r-r8r]r`rdror�r�r�r�r�r�rrr r/r1r�r�r8r8r8r9�<module>s	

91RZI'63AH8CzSPKok\ r.�}�}�4psutil/tests/__pycache__/test_process.cpython-39.pycnu�[���a

��?hn��@sdZddlZddlZddlZddlZddlZddlZddlZddlZddl	Z	ddl
Z
ddlZddlZddl
Z
ddlZddlZddlmZddlmZddlmZddlmZddlmZddlmZdd	lmZdd
lmZddlmZddlmZdd
lmZddlmZddlmZddlmZddlm Z ddlm!Z!ddl"m#Z#ddl"m$Z$ddl"m%Z%ddl"m&Z&ddl"m'Z'ddl"m(Z(ddl"m)Z)ddl"m*Z*ddl"m+Z+ddl"m,Z,ddl"m-Z-ddl"m.Z.ddl"m/Z/dd l"m0Z0dd!l"m1Z1dd"l"m2Z2dd#l"m3Z3dd$l"m4Z4dd%l"m5Z5dd&l"m6Z6dd'l"m7Z7dd(l"m8Z8dd)l"m9Z9dd*l"m:Z:dd+l"m;Z;dd,l"m<Z<dd-l"m=Z=dd.l"m>Z>dd/l"m?Z?dd0l"m@Z@dd1l"mAZAdd2l"mBZBGd3d4�d4e4�ZCe�r�e�D�dk�r�Gd5d6�d6eC�ZEGd7d8�d8e4�ZFdS)9�Tests for psutil.Process class.�N)�AIX)�BSD)�LINUX)�MACOS)�NETBSD)�OPENBSD)�OSX)�POSIX)�SUNOS)�WINDOWS)�	open_text)�PY3)�FileNotFoundError)�long)�redirect_stderr)�super)�APPVEYOR)�
CI_TESTING)�GITHUB_ACTIONS)�GLOBAL_TIMEOUT)�HAS_CPU_AFFINITY)�HAS_ENVIRON)�
HAS_IONICE)�HAS_MEMORY_MAPS)�HAS_PROC_CPU_NUM)�HAS_PROC_IO_COUNTERS)�
HAS_RLIMIT)�HAS_THREADS)�MACOS_11PLUS)�PYPY)�
PYTHON_EXE)�PYTHON_EXE_ENV)�	QEMU_USER)�PsutilTestCase)�
ThreadTask)�
call_until)�copyload_shared_lib)�create_c_exe)�
create_py_exe)�mock)�process_namespace)�pytest��
reap_children)�retry_on_failure)�sh)�skip_on_access_denied)�skip_on_not_implemented)�wait_for_pidc@seZdZdZdd�Zdd�Zdd�Zdd	�Zd
d�Ze	j
jedd
�dd��Z
dd�Ze	j
jedd
�dd��Zdd�Zdd�Zdd�Zdd�Zdd�Ze	j
jedd
�d d!��Ze	j
jedd
�d"d#��Ze	j
jed$d
�d%d&��Zd'd(�Ze	j
jed)d
�d*d+��Ze	j
jed$d
�eed,�d-d.���Z e	j
je!d$d
�e	j
jed/d
�d0d1���Z"e	j
je!d$d
�e	j
je#d2d
�d3d4���Z$e	j
je%d$d
�d5d6��Z&e	j
je%d$d
�d7d8��Z'e	j
je%d$d
�d9d:��Z(e	j
je%d$d
�d;d<��Z)e	j
je%d$d
�d=d>��Z*d?d@�Z+e	j
je#dAd
�dBdC��Z,e	j
je-d$d
�dDdE��Z.e/�e0e1d,�e	j
je-d$d
�dFdG����Z2e/�dHdI��Z3dJdK�Z4e	j
je5d$d
�dLdM��Z6e	j
je5d$d
�dNdO��Z7dPdQ�Z8dRdS�Z9e	j
jedd
�dTdU��Z:dVdW�Z;e	j
je<dXd
�dYdZ��Z=d[d\�Z>e	j
je<�ped]d
�e	j
jed^d
�d_d`���Z?e	j
je@dad
�e	j
jeAdbd
�e	j
je<dXd
�e	j
jedcd
�ddde�����ZBe	j
jed)d
�dfdg��ZCe	j
jed)d
�dhdi��ZDdjdk�ZEe	j
jedd
�dldm��ZFdndo�ZGdpdq�ZHdrds�ZIe	j
jeJd$d
�dtdu��ZKe	j
jeJd$d
�dvdw��ZLe	j
jeJd$d
�dxdy��ZMe	j
jeNdzd
�e	j
jeOd{d
�d|d}���ZPe	j
jeNdzd
�e	j
jeOd{d
�d~d���ZQe	j
jed)d
�d�d���ZReed,�e	j
jeS�p�ed�d
�d�d����ZTd�d��ZUd�d��ZVd�d��ZWe	j
jedd
�e/�d�d����ZXd�d��ZYd�d��ZZd�d��Z[d�d��Z\d�d��Z]d�d��Z^d�d��Z_d�d��Z`d�d��Zad�d��Zbd�d��Zce	j
jed)d
�d�d���Zde	j
jed)d
�d�d���Zee	j
jed)d
�d�d���Zfd�d��Zgd�d��Zhe	j
jeid$d
�d�d���Zje	j
jeid$d
�e	j
jed)d
�e	j
jekd�d
�e	j
jed�d
�d�d������Zld�S)��TestProcessrcOsF|j|i|��}zt�|j�WStjy@|�|j��Yn0dS�N)�spawn_testproc�psutil�Process�pid�
NoSuchProcessZ
assertPidGone)�self�args�kwargs�sproc�r?�E/usr/local/lib64/python3.9/site-packages/psutil/tests/test_process.py�spawn_psprocSszTestProcess.spawn_psproccCsNt��}|jt��ksJ�t�t��d|_Wd�n1s@0YdS)N�!)r7r8r9�os�getpidr,�raises�AttributeError�r;�pr?r?r@�test_pid]szTestProcess.test_pidcCsJ|��}|��|��}tr,|tjks<J�n|tjks<J�|�|�dSr5)rA�kill�waitr�signal�SIGTERM�SIGKILL�assertProcessGone�r;rH�coder?r?r@�	test_killcszTestProcess.test_killcCsJ|��}|��|��}tr,|tjks<J�n|tjks<J�|�|�dSr5)rA�	terminaterKrrLrMrOrPr?r?r@�test_terminatemszTestProcess.test_terminatecCsXtr
tjntj}|��}|�|�|��}tr<||ksJJ�n||ksJJ�|�|�dSr5)	r
rLrNrMrA�send_signalrKrrO)r;�sigrHrQr?r?r@�test_send_signalws
zTestProcess.test_send_signalz	not POSIX��reasonc	Cs�tj}|��}tjdttjd�d��Ft�	t
j��|�|�Wd�n1sR0YWd�n1sp0Y|��}tjdttj
d�d��Ft�	t
j��|�|�Wd�n1s�0YWd�n1s�0YdS)Nzpsutil.os.kill���side_effect)rLrMrAr*�patch�OSError�errnoZESRCHr,rEr7r:rU�EPERM�AccessDenied)r;rVrHr?r?r@�test_send_signal_mocked�s�F�z#TestProcess.test_send_signal_mockedcCs�tddg}|�|�}|��}|dks(J�|�|�tddg}|j|tjd�}|��}|dks`J�|�|�tddg}|�|�}|��}|dks�J�|�|�tdd	g}|�|�}|��}|dks�J�|�|�dS)
N�-c�passrz1 / 0)�stderr�zimport sys; sys.exit(5);�zimport os; os._exit(5);)r!rArKrO�
subprocess�PIPE)r;�cmdrHrQr?r?r@�test_wait_exited�s(









zTestProcess.test_wait_exitedzfails on NETBSDcCs�|��}tr�|�tj�t�tj��|j	dd�Wd�n1sF0Y|�tj
�t�tj��|j	dd�Wd�n1s�0Y|�tj�|�	�tjks�J�|�	�tjks�J�n�|��t�tj��|j	dd�Wd�n1�s0Y|�
�t�tj��|j	dd�Wd�n1�sD0Y|��|�	�tjk�sjJ�|�	�tjk�s~J�dS)N���MbP?)�timeout)rAr
rUrL�SIGSTOPr,rEr7�TimeoutExpiredrK�SIGCONTrM�suspend�resumerSrGr?r?r@�test_wait_stopped�s(**,,zTestProcess.test_wait_stoppedcCs�|��\}}t�tj��|�d�Wd�n1s80Yt�tj��|�d�Wd�n1sn0Y|��|��|��}|��}tr�|tj	ks�J�|dus�J�n|tj	ks�J�|tj	ks�J�dS)N�{�G�z�?)
�spawn_children_pairr,rEr7rorKrSr
rLrM)r;�child�
grandchildZ	child_retZgrandchild_retr?r?r@�test_wait_non_children�s((z"TestProcess.test_wait_non_childrencCs�|��}|��t�tj��|�d�Wd�n1s<0Yt�tj��|�d�Wd�n1sr0Yt�t��|�d�Wd�n1s�0YdS)Nrtr���)rA�namer,rEr7rorK�
ValueErrorrGr?r?r@�test_wait_timeout�s((zTestProcess.test_wait_timeoutcCs�|��}t�tj��|�d�Wd�n1s40Y|��t��t}t��|kr�z|�d�}Wq�WqRtjy�YqR0qR|�	d��t
r�|tjks�J�n|tj
ks�J�|�|�dS)Nrrm)rAr,rEr7rorKrJ�timer�failr
rLrNrMrO)r;rHZstop_atrQr?r?r@�test_wait_timeout_nonblocking�s (

z)TestProcess.test_wait_timeout_nonblockingcCs�t��}|jdd�|jdd�td�D]*}|jdd�}t|t�sFJ�|dks(J�q(t�t��|jdd�Wd�n1s�0YdS)Nrl)�interval�d�ry)	r7r8�cpu_percent�range�
isinstance�floatr,rEr{)r;rH�_�percentr?r?r@�test_cpu_percent�szTestProcess.test_cpu_percentcCsHtjddd��&}t����|js&J�Wd�n1s:0YdS)Nzpsutil.cpu_count)Zreturn_value)r*r]r7r8r��called)r;�mr?r?r@�test_cpu_percent_numcpus_nonesz)TestProcess.test_cpu_percent_numcpus_nonezQEMU user not supportedc	Cs�t����}|jdksJ|��|jdks0J|��|jdksBJ|��|jdksTJ|��trj|jdksjJ|��|j	D]}t
�dt
�t
||���qpdS)Nr�z%H:%M:%S)r7r8�	cpu_times�user�system�
children_user�children_systemrZiowait�_fieldsr}�strftime�	localtime�getattr)r;�timesrzr?r?r@�test_cpu_timess
zTestProcess.test_cpu_timescCs�t����dd�\}}t��dd�\}}t||g�t||g�dkrZ|�d||f��t||g�t||g�dkr�|�d||f��dS)N�皙�����?zexpected: %s, found: %s)r7r8r�rCr��max�minr~)r;�	user_timeZkernel_time�utimeZktimer?r?r@�test_cpu_times_2szTestProcess.test_cpu_times_2z
not supportedcCsPt��}|��}|dksJ�t��dkr4|dks4J�|��tt���vsLJ�dS�Nrrf)r7r8Zcpu_num�	cpu_countr�)r;rH�numr?r?r@�test_cpu_num)szTestProcess.test_cpu_numcCsZ|��}t��}|��}t||�}|dkr@|�d|||f��t�dt�|����dS)Nr�z'expected: %s, found: %s, difference: %sz%Y %m %d %H:%M:%S)rAr}�create_time�absr~r�r�)r;rH�nowr��
differencer?r?r@�test_create_time2s��zTestProcess.test_create_timez
POSIX onlycCsVt����}|durRztj�td��}WntyDt�	d��Yn0||ksRJ�dS)N�ttyzcan't rely on `tty` CLI)
r7r8�terminalrC�path�realpathr0�RuntimeErrorr,�skip)r;r�r�r?r?r@�
test_terminalDszTestProcess.test_terminal)Zonly_ifcCs�t��}|��}ttd��}|��Wd�n1s80Y|��}ts�ts�|j|jksbJ�|j	|j	ksrJ�t
r�|j|jks�J�|j|jks�J�n |j
|j
ks�J�|j|jks�J�|��}t|��d��8}tr�|�tddd��n|�dd�Wd�n1�s0Y|��}|j	|j	k�s2J�|j|jk�sDJ�|j|jk�sVJ�|j
|j
k�shJ�t
�r�|j|jk�s�J�|j|jk�s�J�tt|��D]>}t�r�|dk�r��q�||dk�s�J�||dk�s�J��q�dS)N�rb�wb�xi@B�asciir�r)r7r8Zio_counters�openr!�readrrZ
read_countZwrite_countrZ
read_charsZwrite_chars�
read_bytes�write_bytes�
get_testfnr�write�bytesr��len)r;rHZio1�fZio2�ir?r?r@�test_io_countersPs>&.zTestProcess.test_io_countersz
linux onlycsR�fdd�}t���ts.���dtjks.J�tjdks<J�tjdksJJ�tjdksXJ�tjdksfJ����}|�||���tj�t	����tjdfks�J�t
�t�� �jtjdd�Wd�n1s�0Y��tj�t	����tjdfk�sJ��jtjdd�t	����tjdfk�s.J�t
�t�� �jtjd	d�Wd�n1�s`0Yz�jtjdd�Wntj
�y�Yn0t
jtd
d����tjd�Wd�n1�s�0Yt
jtd
d����tjd�Wd�n1�s0Yt
jtdd���jdd�Wd�n1�sD0YdS)
Ncs&|\}}|tjkrd}��||�dS�Nr)r7�IOPRIO_CLASS_NONE�ionice)�initZioclass�value�rHr?r@�cleanup~s
z.TestProcess.test_ionice_linux.<locals>.cleanuprrfr����r��zioclass accepts no value��matchz$'ioclass' argument must be specified)r7r8rr�r�ZIOPRIO_CLASS_RTZIOPRIO_CLASS_BEZIOPRIO_CLASS_IDLE�
addCleanup�tupler,rEr{ra)r;r�r�r?r�r@�test_ionice_linux{s@.0..�zTestProcess.test_ionice_linuxz!not supported on this win versioncCs*t��}ts|��tjksJ�|��}|�|j|�|�tj�|��tjksRJ�|�tj�|��tjkspJ�z|�tj�Wntj	y�Yn0|��tjks�J�t
jtdd�� |jtjdd�Wd�n1s�0Yt
jt
dd�� |�tjd�Wd�n1�s0YdS)Nz&value argument not accepted on Windowsr�rfr�zis not a valid priority)r7r8rr�Z
IOPRIO_NORMALr�ZIOPRIO_VERYLOWZ
IOPRIO_LOWZIOPRIO_HIGHrar,rE�	TypeErrorr{)r;rHr�r?r?r@�test_ionice_win�s(�.zTestProcess.test_ionice_wincCs�ddl}t�t���}dd�tt�D�}|s4J|��|D]�}tt|�}|dksRJ�|t|�vr�|t||�kspJ�trvq8|�|�|�	|�ks�J�q8|�|�}t
|�dks�J�|ddks�J�|ddks8J�q8dS)NrcSsg|]}|�d�r|�qS)ZRLIMIT��
startswith��.0r�r?r?r@�
<listcomp>��z/TestProcess.test_rlimit_get.<locals>.<listcomp>r�ryrf)�resourcer7r8rCrD�dirr�r �rlimitZ	getrlimitr�)r;r�rH�namesrzr��retr?r?r@�test_rlimit_get�s 

zTestProcess.test_rlimit_getcCs�|��}|�tjd�|�tj�dks*J�trntjtdd��"tj�	d��d�Wd�n1sd0Yt�t��|�tjd�Wd�n1s�0YdS)N)rgrgzcan't use prlimitr�r)rgrgrg)
rAr�r7Z
RLIMIT_NOFILErr,rEr{Z_psplatformr8rGr?r?r@�test_rlimit_set�s0zTestProcess.test_rlimit_setcCsHt��}|��}|�tj�\}}z�|�tjd|f�t|d��}|�d�Wd�n1s^0Yt�t	��D}t|d��}|�d�Wd�n1s�0YWd�n1s�0Yt
r�|jjn|jdtj
ks�J�W|�tj||f�|�tj�||fk�sDJ�n.|�tj||f�|�tj�||fk�sBJ�0dS)N�r�sXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXsXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXr)r7r8r�r��RLIMIT_FSIZEr�r�r,rE�IOErrorrr�r_ZEFBIG)r;rH�testfn�soft�hardr��excr?r?r@�test_rlimit�s(F"�zTestProcess.test_rlimitcCs�t��}|�tj�\}}z�|�tjd|f�|�tjtj|f�t|��d��}|�d�Wd�n1sn0YW|�tj||f�|�tj�||fks�J�n,|�tj||f�|�tj�||fks�J�0dS)Nr�r�sXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX)r7r8r�r��
RLIM_INFINITYr�r�r�)r;rHr�r�r�r?r?r@�test_rlimit_infinity�s*�z TestProcess.test_rlimit_infinitycCs<t��}|�tj�\}}|tjks&J�|�tj||f�dSr5)r7r8r�r�r�)r;rHr�r�r?r?r@�test_rlimit_infinity_value
sz&TestProcess.test_rlimit_infinity_valuecCs�t��}tr:z|��}WqBtjy6t�d��YqB0n|��}t��(|��}||dksbJ�Wd�n1sv0YdS)N�$on OpenBSD this requires root accessrf)r7r8r�num_threadsrar,r�r%)r;rH�step1�step2r?r?r@�test_num_threadsszTestProcess.test_num_threadszWINDOWS onlycCst��}|��dksJ�dSr�)r7r8Znum_handlesrGr?r?r@�test_num_handles*szTestProcess.test_num_handlescCs�t��}tr:z|��}WqBtjy6t�d��YqB0n|��}t��n|��}t|�t|�dksjJ�|d}|j	|dks�J�|j
|dks�J�|j|dks�J�Wd�n1s�0YdS)Nr�rfrr�)r7r8r�threadsrar,r�r%r��idr��system_time)r;rHr�r�Zathreadr?r?r@�test_threads0szTestProcess.test_threadscCs�|��}tr8z|��Wntjy6t�d��Yn0t|��j	t
dd�|��D���dksdJ�t|��jt
dd�|��D���dks�J�dS)Nr�cSsg|]
}|j�qSr?)r�r�r?r?r@r�Or�z.TestProcess.test_threads_2.<locals>.<listcomp>r�cSsg|]
}|j�qSr?)r�r�r?r?r@r�Ur�)rArr�r7rar,r�r�r�r��sumr�rGr?r?r@�test_threads_2Ds$"������zTestProcess.test_threads_2cCs�t��}|��dd�\}}|��}|dks0J�|dks<J�dgd}|��dd�\}}|��}||ksnJ�||kszJ�||ks�J�~tr�|��}	|	j|	jks�J�|	j|	jks�J�|��}	|	j	D]}
t
|	|
�dks�J�q�dS)Nr�ri`�)r7r8Zmemory_info�memory_percentrZrssZwset�vmsZpagefiler�r�)r;rHZrss1Zvms1Zpercent1ZmemarrZrss2Zvms2Zpercent2�memrzr?r?r@�test_memory_infoZs&

zTestProcess.test_memory_infocCs�t��}t��j}|��}|jD]8}t||�}|dks:J�|dkrFts trLq ||ks J�q tsft	sft
rt|jdkstJ�tr�|jdks�J�|j
dks�J�dS)Nrr�)r7r8Zvirtual_memory�totalZmemory_full_infor�r�r	rrr�ussZpssZswap)r;rHr�r�rzr�r?r?r@�test_memory_full_infoys


z!TestProcess.test_memory_full_infoc
Cs�t��}|��}t|�tt|��ks(J�|jdd�}|D�]}|j�d�s8trZd|jvrZq8tj�	|j�srJ|j��t
�rz*tj�|j�s�tj�|j�s�J|j��WnZt
y�ts��n@td��}|��}Wd�n1s�0Yd|j|vr��Yn0q8dtj�|j�vr8zt�|j�}Wnt�y6Yq80t�|j�s8J|j��q8|D]l}|jD]^}t||�}	|dk�rz�q^|d	v�r�|	�s�J|	��n&t|	ttf��s�J�|	d
k�s^J|	���q^�qTdS)NF)Zgrouped�[z
/bin/qemu-z/proc/self/smapsz%s (deleted)Z64r�)�addrZpermsr)r7r8�memory_mapsr��setr�r�r#rC�isabsr
�exists�islink�AssertionErrorrr
r��basename�statr�S_ISREG�st_moder�r�r��intr)
r;rH�mapsZext_maps�ntr��data�st�fnamer�r?r?r@�test_memory_maps�sL
��
&




zTestProcess.test_memory_mapscs`t��}t��>}dd���fdd�|��D�}�|�|vs>J�Wd�n1sR0YdS)NcSstj�tj�|��Sr5)rCr�r��normcaser�r?r?r@�normpath�sz8TestProcess.test_memory_maps_lists_lib.<locals>.normpathcsg|]}�|j��qSr?)r�r��rr?r@r��r�z:TestProcess.test_memory_maps_lists_lib.<locals>.<listcomp>)r7r8r'r�)r;rHr�Zlibpathsr?rr@�test_memory_maps_lists_lib�s
z&TestProcess.test_memory_maps_lists_libcCsbt��}|��t�t��|jdd�Wd�n1s<0YtsRtsRtr^|jdd�dS)Nz?!?)Zmemtyper�)	r7r8r�r,rEr{rrrrGr?r?r@�test_memory_percent�s*zTestProcess.test_memory_percentcCsL|��}|��sJ�|��s J�|��|��|��r<J�|��rHJ�dSr5)rA�
is_runningrJrKrGr?r?r@�test_is_running�szTestProcess.test_is_runningcCs�|��}|��}z|tksJ�Wn�ty�tr`t|�tt�kr`tjj}||�|t�ks�J�nLdt	j
dt	j
df}z |�|d�t�|d�ks�J�Wnty�Yn0Yn0t|ddg�}|dks�J�dS)Nz%s.%srrfrZrczimport os; print('hey')Zhey)
rA�exer!rrr�rCr�r�sys�version_info�replacer0)r;rHrr�ver�outr?r?r@�test_exe�s zTestProcess.test_execCs�tddg}|�|�}tr.|��gkr.t�d��ts:ts:trP|��dtks�J�n�tr�t	r�|��d}|tkr�d�
|��dd��d�
|dd��ks�J�dStr�d�
|��dd��d�
|�ks�J�dSd�
|���d�
|�ks�J�dS)Nrc�2import time; [time.sleep(0.1) for x in range(100)]�OPENBSD: returned EBUSYr� rfr�)r!rAr�cmdliner,r�rrrr�joinr#)r;rrH�pyexer?r?r@�test_cmdline�s$�

,$zTestProcess.test_cmdlinezbroken on PYPYcCs�tg}|�dgd�|�ddg�|�|�}trhz|��|ksDJ�Wq�tjydt�d��Yq�0nHt	r�|��dd�|ks�J�n*|��}t
r�|gkr�t�d��||ks�J�dS)Nz-v�2rcrz#OPENBSD: process turned into zombier�r)r!�extendrArrr7�
ZombieProcessr,r�r#r)r;rrHr�r?r?r@�test_long_cmdlines"�

zTestProcess.test_long_cmdlinecCsH|��}|����}tj�tj�tj����}|�	|�sDJ||f��dSr5)
rArz�lowerrCr�rr�r�
executabler�)r;rHrzr!r?r?r@�	test_name(szTestProcess.test_namezunreliable on PYPYzunreliable on QEMU usercCs�t|jtjdd��}|ddg}|�|�}tr�z|��tj�	|�ksHJ�Wq�t
y�|��tj
kr�tj�	|��|���s�J�n�Yq�0n|��tj�	|�ks�J�dS)Nr���suffixrcr)r)r��string�digitsrArrzrCr�rr�statusr7�
STATUS_ZOMBIEr��r;r!rrHr?r?r@�test_long_name.s�

zTestProcess.test_long_namezbroken on SUNOSz
broken on AIXzbroken on QEMU usercCspt|jdd��}|ddg}|�|�}|��|ks4J�|��tj�|�ksLJ�tj�|�	��tj�|�kslJ�dS)Nz	foo bar )r*rcr)
r)r�rArrzrCr�rrrr0r?r?r@�test_prog_w_funky_nameKs�
z"TestProcess.test_prog_w_funky_namecCsXt��}|��\}}}|t��ks&J�|t��ks6J�ttd�rTt��|��ksTJ�dS�N�	getresuid)r7r8�uidsrC�getuid�geteuid�hasattrr4�r;rH�realZ	effectiveZ_savedr?r?r@�	test_uids^s
zTestProcess.test_uidscCsXt��}|��\}}}|t��ks&J�|t��ks6J�ttd�rTt��|��ksTJ�dSr3)r7r8�gidsrC�getgid�getegidr8�	getresgidr9r?r?r@�	test_gidsls
zTestProcess.test_gidsc
s��fdd�}t���t�t����d�Wd�n1s>0Y���}|�||�t�r(d}tjtj	tj
tjtjtj
fD]�}|j|d���z��|�Wntjy�YnN0���}|tjtjtj
fvr�||ks�|dur�|}||ks�J�n||k�sJ�Wd�q�1�s0Yq�n�z�ttd��rVt�tjt������k�sVJ���d����dk�srJ�ttd��r�t�tjt������k�s�J�t�s���d����dk�s�J�Wntj�y�Yn0dS)Ncs(z��|�Wntjy"Yn0dSr5)�nicer7ra)r�r�r?r@r�{sz&TestProcess.test_nice.<locals>.cleanup�str)�prio�getpriorityrfr)r7r8r,rEr�rAr�rZIDLE_PRIORITY_CLASSZBELOW_NORMAL_PRIORITY_CLASSZNORMAL_PRIORITY_CLASSZABOVE_NORMAL_PRIORITY_CLASSZHIGH_PRIORITY_CLASSZREALTIME_PRIORITY_CLASS�subTestrar8rCrD�PRIO_PROCESSrDr)r;r�r�Zhighest_priorCZnew_prior?r�r@�	test_nicezs`(��2��
��
zTestProcess.test_nicecCst��}|��tjksJ�dSr5)r7r8r.ZSTATUS_RUNNINGrGr?r?r@�test_status�szTestProcess.test_statuscCs||��}|��}trh|�d�\}}t��}|�d�r>t�d��||ksJJ�dt	j
vrx|t	j
dksxJ�n|t��ksxJ�dS)N�\�$zrunning as service accountZ
USERDOMAIN)rA�usernamer�split�getpass�getuser�endswithr,r�rC�environ)r;rHrK�domainZgetpass_userr?r?r@�
test_username�s


zTestProcess.test_usernamecCs |��}|��t��ksJ�dSr5)rA�cwdrC�getcwdrGr?r?r@�test_cwd�szTestProcess.test_cwdcs(tddg}|�|��t�fdd��dS)NrczFimport os, time; os.chdir('..'); [time.sleep(0.1) for x in range(100)]cs���tj�t���kSr5)rSrCr��dirnamerTr?r�r?r@�<lambda>�r�z(TestProcess.test_cwd_2.<locals>.<lambda>)r!rAr&)r;rjr?r�r@�
test_cwd_2�s�
zTestProcess.test_cwd_2cCs�t��}|��}|sJ|��|�|j|�ttd�rL|tt�|j��ksLJ�t	|�t	t
|��ksdJ�ttt	tjdd����}|D]j}|�|g�|��|gks�J�ttd�r�|��tt�|j��ks�J�t|d�r�|��d|�
�ks�J�q�|�g�t�r|��|j��k�s(J�n|��|k�s(J�ttd��rR|��tt�|j��k�sRJ�t�t��|�d�Wd�n1�s~0Y|�|�|�t
|��|�t|��dS)N�sched_getaffinityT�Zpercpu�num_cpurrf)r7r8�cpu_affinityr�r8rC�listrYr9r�r�r�r�r[r�_procZ_get_eligible_cpusr,rEr�r�)r;rH�initialZall_cpus�nr?r?r@�test_cpu_affinity�s4



*
zTestProcess.test_cpu_affinitycCs|��}ttjdd��dg}t�t��|�|�Wd�n1sH0Yt�t�� |�tdd��Wd�n1s�0Yt�t	��|�ddg�Wd�n1s�0Yt�t��|�ddg�Wd�n1s�0YdS)	NTrZ�
i'i�*r�1ry)
rAr�r7r�r,rEr{r\r�r�)r;rHZinvalid_cpur?r?r@�test_cpu_affinity_errs	s(.,z"TestProcess.test_cpu_affinity_errscCs�t��}|��}|sJ|��|�|j|�t|�dkrB|dd�}g}tt|�d�D](}t�||�D]}|rf|�t	|��qfqV|D]&}|�|�t
|���t
|�ks�J�q�dS)N�rf)r7r8r\r�r�r��	itertools�combinations�appendr]�sorted)r;rHr_Zcombosr�ZsubsetZcombor?r?r@�"test_cpu_affinity_all_combinationss
z.TestProcess.test_cpu_affinity_all_combinationsz
broken on BSDzunreliable on APPVEYORcsrt���|��}����|�vs$J�t|d���}|�d�|��t��fdd������dd��D�}tj	�
|�|vs~J�tr��D]}|j	|kr�|jdks�J�q�Wd�n1s�0Y�D]}tj	�
|j	�s�J|��q�d|}|�td	|g��td
�D]2}dd����D�}||v�r(�qLt�d��qtj	�
|�|v�sLJ�|D]}tj	�
|��sPJ|���qPdS)
Nr�sxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxcst����t��kSr5)r��
open_filesr?��filesrHr?r@rW7r�z-TestProcess.test_open_files.<locals>.<lambda>cSsg|]}tj�|j��qSr?�rCr�rr�r?r?r@r�9r�z/TestProcess.test_open_files.<locals>.<listcomp>r�zIimport time; f = open(r'%s', 'r'); [time.sleep(0.1) for x in range(100)];rcr�cSsg|]}tj�|j��qSr?rnr�r?r?r@r�Jr�rt)r7r8r�rkr�r��flushr&rCr�rr�position�isfilerAr!r�r}�sleep)r;r�r��	filenames�filerr�r?rlr@�test_open_files+s<

.��
zTestProcess.test_open_filescCst��}tjj}|��}t|d���}|��D]*}||j�||j�ksR|j	|�
�kr,qnq,|�dt|�����||j�||j�ks�J�t
r�|j	dks�J�n|j	|�
�ks�J�|��d}|d|jks�J�|d|j	ks�J�|j|��vs�J�Wd�n1�s0YdS)N�wzno file found; files=%sryrrf)r7r8rCr�rr�r�rkrz�fd�filenor~�reprr)r;rHrr��fileobjrtZntupler?r?r@�test_open_files_2Ts*���zTestProcess.test_open_files_2cCs�t��}|��}|��}t|d�}|�|j�|��|dksBJ�t��}|�|j�|��|dksjJ�|��|��|��|ks�J�dS)Nrvrfr�)r7r8r�Znum_fdsr�r��close�socket)r;rHr��startrt�sockr?r?r@�test_num_fdsss
zTestProcess.test_num_fdsz not reliable on OPENBSD & NETBSDcCsTt��}t|���}td�D](}t�d�t|���}||krdSq|�d��dS)Nr�g�������?z2num ctx switches still the same after 2 iterations)r7r8r�Znum_ctx_switchesr�r}rrr~)r;rH�beforer��afterr?r?r@�test_num_ctx_switches�s
z!TestProcess.test_num_ctx_switchescCsFt��}ttd�r&|��t��ks&J�|��}|��t��ksBJ�dS)N�getppid)r7r8r8rC�ppidr�rArDrGr?r?r@�	test_ppid�s

zTestProcess.test_ppidcCsD|��}|��jt��ksJ�t��d}t�|���dus@J�dSr�)rA�parentr9rCrDr7�pidsr8)r;rHZ
lowest_pidr?r?r@�test_parent�szTestProcess.test_parentcCs8t��}|��\}}|��|ks$J�|��|ks4J�dSr5)r7r8rur��r;r�rvrwr?r?r@�test_parent_multi�szTestProcess.test_parent_multicCs`t��}|��sJ�|��\}}|��d|ks4J�|��d|ksHJ�|��d|ks\J�dSr�)r7r8�parentsrur�r?r?r@�test_parents�szTestProcess.test_parentscCs�t��}|��gksJ�|jdd�gks,J�|jdd�}|��}|jdd�}||fD]>}t|�dkshJ�|dj|jks|J�|d��|jksTJ�qTdS)NT��	recursiver)�
creationflagsrf)r7r8�childrenrAr�r9r�)r;r�rvZ	children1Z	children2r�r?r?r@�
test_children�szTestProcess.test_childrencCsft��}|��\}}|��|gks&J�|jdd�||gks>J�|��|��|jdd�gksbJ�dS)NTr�)r7r8rur�rSrKr�r?r?r@�test_children_recursive�sz#TestProcess.test_children_recursivec	Cs�t�t�}t��D]2}z||��d7<WqtjyBYq0qt|��dd�d�dd}t	rx|dkrxt
�d��t�|�}z|j
dd	�}Wntjy�Yn0t|�tt|��ks�J�dS)
NrfcSs|dS)Nrfr?)r�r?r?r@rW�r�z6TestProcess.test_children_duplicates.<locals>.<lambda>)�keyryrzPID 0Tr�)�collections�defaultdictrr7�process_iterr��Errorri�itemsrr,r�r8r�rar�r�)r;�tablerHr9�cr?r?r@�test_children_duplicates�s


z$TestProcess.test_children_duplicatescCs|t��}|��\}}|jdd�}t|�dks0J�|d|ks@J�|d|ksPJ�|��}|d|kshJ�|d|ksxJ�dS)NTr�r�rrf)r7r8rur�r�r�)r;r�rvrwr�r�r?r?r@�test_parents_and_children�sz%TestProcess.test_parents_and_childrencCsX|��}|��td�D] }|��tjkr.q:t�d�q|��|��tjksTJ�dS)Nr�rt)	rArqr�r.r7ZSTATUS_STOPPEDr}rrrr)r;rHr�r?r?r@�test_suspend_resume�szTestProcess.test_suspend_resumecCslt�t��t�d�Wd�n1s*0Yt�t��t�d�Wd�n1s^0YdS)Nrcry)r,rEr�r7r8r{�r;r?r?r@�test_invalid_pid�s(zTestProcess.test_invalid_pidc	Cs�t��}|jddgd�}t|���ddgks0J�t�tt����}|jdgdd�}t|dt�sp|ddkspJ�t	j
ddtjd	��,|jd
gdd�d
diks�J�Wd�n1s�0Yt	j
ddt�|j
d�d	��Lt�tj��|jd
gd�Wd�n1�s0YWd�n1�s,0Yt	j
ddt�|j
d�d	��.|jd
gdd�d
dik�spJ�Wd�n1�s�0Yt	j
ddtd	��h|��}d
t|���v�s�J�t�t��|jd
gd�Wd�n1�s�0YWd�n1�s0Yt�t��|�d�Wd�n1�sF0Yt�t��|�dg�Wd�n1�s~0Yt�t��|�ddg�Wd�n1�s�0YdS)
Nrrz)�attrsZnet_connections�foo)r�Zad_valuezpsutil.Process.niceT)�creater\rArf�bar)r7r8Zas_dictri�keysr�r�r�r]r*r]rar:r9r,rEr%�NotImplementedErrorr�r{)r;rH�dr?r?r@�test_as_dict�sJ�:�N�>�N*,zTestProcess.test_as_dictc	Cs�t��}t�d��V}|��� |��|��Wd�n1sB0Y|jdksZJ�Wd�n1sn0Yt�d�� }|��|��Wd�n1s�0Y|jdks�J�dS)N�$psutil._psplatform.Process.cpu_timesrfr��r7r8r*r]�oneshotr�Z
call_count�r;rHr�r?r?r@�test_oneshot/s
&,&zTestProcess.test_oneshotcCs8t��}t�d���}t�d���}|���X|��|��|��� |��|��Wd�n1sh0YWd�n1s�0Y|jdks�J�|jdks�J�Wd�n1s�0YWd�n1s�0Yt�d�� }|��|��Wd�n1�s0Y|jdk�s4J�dS)Nr�z(psutil._psplatform.Process.oneshot_enterrfr�r�)r;rH�m1�m2r�r?r?r@�test_oneshot_twice<s

DJ(zTestProcess.test_oneshot_twicecCs�|��\}}|��}|��}||ks(J�|���0|��|ksBJ�|��|ksRJ�Wd�n1sf0Y|���0|��|ks�J�|��|ks�J�Wd�n1s�0YdSr5)rur�r�)r;�p1�p2Zp1_ppidZp2_ppidr?r?r@�test_oneshot_cachePs
.
zTestProcess.test_oneshot_cachecsn�fdd�}����������tr8t�fdd������t��}|�|j�D]\}}|||�qVdS)Ncs�z
|�}WnNtjy �Yn\tjy2YnJtjyXtrR|dvrRYdS�Yn$0trj|dvrjdS��d||f��dS)N)r�r�)rrzz+%r didn't raise NSP and returned %r instead)r7r%r:rarrr~)�funZfun_namer�r�r?r@�assert_raises_nspgs

�zFTestProcess.test_halfway_terminated_process.<locals>.assert_raises_nspcs�jt��vSr5)r9r7r�r?r�r?r@rW~r�z=TestProcess.test_halfway_terminated_process.<locals>.<lambda>)	rArSrKrr&rOr+�iter�all)r;r��nsr�rzr?)rHr;r@�test_halfway_terminated_process_s
z+TestProcess.test_halfway_terminated_processcCs|��\}}|�|�dSr5)Zspawn_zombieZassertProcessZombie)r;�_parentZzombier?r?r@�test_zombie_process�szTestProcess.test_zombie_processcCsVt��}tjdt�d�d��&}|��s*J�|js4J�Wd�n1sH0YdS)Nzpsutil.Processrr[)r7r8r*r]r%rr�r�r?r?r@�$test_zombie_process_is_running_w_exc�s
�z0TestProcess.test_zombie_process_is_running_w_exccCs\t��}tjdt�d�d��,}|��tjks0J�|js:J�Wd�n1sN0YdS)Nz!psutil._psplatform.Process.statusrr[)r7r8r*r]r%r.r/r�r�r?r?r@� test_zombie_process_status_w_exc�s�z,TestProcess.test_zombie_process_status_w_excc	
Cs�trddlm}nddlm}|��}t�|j�}|j|��df|_t	t�
��|jtjvsbJ�|��rnJ�t
j�tjdd��Ft|���}t	t�
��Wd�n1s�0YWd�n1s�0Yd|j|��vs�J�|jtjvs�J�|t�|j�k�sJ�d}t|�}|j|j|jdd	�D]p\}}|j|d
��Htjtj|d��|�Wd�n1�sx0YWd�n1�s�0Y�q4dt|�v�s�J�dt|�v�s�J�tjtj|d��|��Wd�n1�s�0Ytjtj|d��|��Wd�n1�s40Ytjtj|d��|��Wd�n1�sn0Ytjtj|d��|� �Wd�n1�s�0YdS)
Nr)�StringIOr�ZPSUTIL_DEBUGTz-refreshing Process instance for reused PID %sz4process no longer exists and its PID has been reusedF)�clear_cache)rzr�zterminated + PID reused)!r�ior�r6r7r8r9r��_identr]r�Z_pmaprr*r]�object�_commonr�getvaluer+r��settersZkillersrEr,rEr:rBryr�r�r�r�)	r;r�ZsubprHr��msgr�r�rzr?r?r@�test_reused_pid�sDH��J(((zTestProcess.test_reused_pidc	Cs�dt��vrjt�tj��t�d�Wd�n1s80Yt�d�rPJ�t�d���dksfJ�dSt�d�}tr~tj	nt
}t�|��|��Wd�n1s�0Yt�|��|��Wd�n1s�0Yt�|��|�
�Wd�n1�s0Yt�|��|��Wd�n1�sD0Yt�|��|��Wd�n1�sx0Yt�|��|�tj�Wd�n1�s�0Yt|�}|�|j|j�D]�\}}z
|�}Wntj	�y�Ynb0|dv�r|jdk�s^J�nD|dk�rFt�r.dnd}|��|k�s^J�n|dk�r�|�s�J|���q�t�s�dt��v�szJ�t�d��s�J�dS)Nrrf)r5r<rKzNT AUTHORITY\SYSTEM�rootrz)r7r�r,rEr:r8Z
pid_existsr�rrar{rKrSrqrrrJrUrLrMr+r�Zgettersr�r:rKr)r;rHr�r�r�rzr�r�r?r?r@�
test_pid_0�sH(
&&(((,



zTestProcess.test_pid_0cCsHdd�}d|_t��}||���}|tj���}tsDtrD||ksDJ�dS)NcSsFgd�}tr|�gd��|D]}|�|d�qtdd�|��D��S)N)�PLAT�HOMEZPYTEST_CURRENT_TESTZPYTEST_VERSION)Z__CF_USER_TEXT_ENCODINGZVERSIONER_PYTHON_PREFER_32_BIT�VERSIONER_PYTHON_VERSIONr�cSs8g|]0\}}|�dd��dd�|�dd��dd�f�qS)�
rZ�
)r)r��k�vr?r?r@r�
s��z@TestProcess.test_environ.<locals>.clean_dict.<locals>.<listcomp>)rr$�pop�dictr�)r��excluderzr?r?r@�
clean_dict�s�z,TestProcess.test_environ.<locals>.clean_dict)ZmaxDiffr7r8rPrC�copyr	r)r;r�rHZd1Zd2r?r?r@�test_environ�szTestProcess.test_environz<macOS 11+ can't get another process environment, issue #2084z(sometimes fails on `assert is_running()`cCs�t�d�}t|��|d�}|j|gtjtjd�}t�|j	�}t
|j	�|��sRJ�|j�
�dksdJ�tr�tr�z|��}Wq�tjy�YdS0n|��}|ddd�ks�J�|��|jdks�J�dS)	Na�
            #include <unistd.h>
            #include <fcntl.h>

            char * const argv[] = {"cat", 0};
            char * const envp[] = {"A=1", "X", "C=3", 0};

            int main(void) {
                // Close stderr on exec so parent can wait for the
                // execve to finish.
                if (fcntl(2, F_SETFD, FD_CLOEXEC) != 0)
                    return 0;
                return execve("/bin/cat", argv, envp);
            }
            )Zc_code)�stdinrer�rc�3)�A�Cr)�textwrap�dedentr(r�r6rhrir7r8r9r3rrer�rrrPra�communicate�
returncode)r;rQZcexer>rH�envr?r?r@�test_weird_environs$
�

zTestProcess.test_weird_environN)m�__name__�
__module__�__qualname__�__doc__rArIrRrTrWr,�mark�skipifr
rbrkrrsrxr|rr�r�r#r�r�rr�r�r�rr2rr�rr�rr�rr�r�r�r�r�r�r�rr�r/r1rr�r�r�rr
rrrrr"r r&r)r1rrr2r;r@rGrHrRrUrXrrardrjrrrur{r�rr�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�rr�rr�r?r?r?r@r4Ps 










)+�








/
	





B

&

&

�



4
&



..
��r4csreZdZdZeed�r&e��Ze��Z	�fdd�Z
�fdd�Z�fdd�Zd	d
�Z
ejjddd
�dd��Z�ZS)�LimitedUserTestCasez�Repeat the previous tests by using a limited user.
        Executed only on UNIX and only if the user who run the test script
        is root.
        r6csXt�j|i|��dd�t|�D�D].}t||���fdd�}t||t�||��q$dS)NcSsg|]}|�d�r|�qS)�testr�r�r?r?r@r�`r�z0LimitedUserTestCase.__init__.<locals>.<listcomp>cs$z
��WntjyYn0dSr5)r7rar���methr?r@�test_cs
z+LimitedUserTestCase.__init__.<locals>.test_)r�__init__r�r��setattr�types�
MethodType)r;r<r=�attrr���	__class__r�r@r�\s

zLimitedUserTestCase.__init__cs"t���t�d�t�d�dS)Ni�)r�setUprC�setegid�seteuidr�r�r?r@r�ks

zLimitedUserTestCase.setUpcs&t�|j�t�|j�t���dSr5)rCr��PROCESS_UIDr��PROCESS_GIDr�tearDownr�r�r?r@r�pszLimitedUserTestCase.tearDowncCs6zt���d�Wntjy&Yn0|�d��dS)Nryzexception not raised)r7r8rArar~r�r?r?r@rGus
zLimitedUserTestCase.test_niceTzcauses problem as rootrXcCsdSr5r?r�r?r?r@r�}sz'LimitedUserTestCase.test_zombie_process)r�r�r�r�r8rCr6r�r=r�r�r�r�rGr,r�r�r��
__classcell__r?r?r�r@r�Qs
r�c@s4eZdZdZedd��Zdd�Zdd�Zdd	�Zd
S)�	TestPopenzTests for psutil.Popen class.cCs
t�dSr5r-)�clsr?r?r@�
tearDownClass�szTestPopen.tearDownClassc	Cs�tddg}tj|tjtjtd��j}|��|��|jt	|�sDJ�t
�t��|j
Wd�n1sj0Y|��Wd�n1s�0Ytr�|�d�tjks�J�n|�d�tjks�J�dS)Nrc�3import time; [time.sleep(0.1) for x in range(100)];��stdoutrer�rg)r!r7�Popenrhrir"rzr�r�r�r,rErFr�rSr
rKrLrM�r;rj�procr?r?r@�	test_misc�s*��$&zTestPopen.test_misccCs|tjtdgtjtjtjtd��}|��Wd�n1s<0Y|jjsRJ�|j	js^J�|j
jsjJ�|jdksxJ�dS)Nz-V)r�rer�r�r)r7rr!rhrir"r�r��closedrer�r�)r;rr?r?r@�test_ctx_manager�s�&zTestPopen.test_ctx_managerc	Csrtddg}tj|tjtjtd���:}|��|��t�	tj
��|��Wd�n1s^0Yt�	tj
��|��Wd�n1s�0Yt�	tj
��|�t
j�Wd�n1s�0Yt�rNt�	tj
��|�t
j�Wd�n1�s
0Yt�	tj
��|�t
j�Wd�n1�sD0YWd�n1�sd0YdS)Nrcr�r�)r!r7rrhrir"rSrKr,rEr:rJrUrLrMrZCTRL_C_EVENTZCTRL_BREAK_EVENTrr?r?r@�test_kill_terminate�s0��&&*,zTestPopen.test_kill_terminateN)	r�r�r�r��classmethodr�rrrr?r?r?r@r��s
r�)Gr�r�r_rMrfrCrLr}rr,rhrr�r}r�r7rrrrrrr	r
rrZpsutil._commonr
Zpsutil._compatrrrrrZpsutil.testsrrrrrrrrrrrrrr r!r"r#r$r%r&r'r(r)r*r+r,r.r/r0r1r2r3r4r6r�r�r?r?r?r@�<module>s�6PKok\��yZ556psutil/tests/__pycache__/test_contracts.cpython-39.pycnu�[���a

��?h!1�@s�dZddlZddlZddlZddlmZddlmZddlmZddlmZddlmZddlm	Z	dd	lm
Z
dd
lmZddlmZddl
mZdd
lmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZGdd�de�ZGdd�de�ZGdd �d e�Z Gd!d"�d"e�Z!Gd#d$�d$e�Z"dS)%z�Contracts tests. These tests mainly check API sanity in terms of
returned types and APIs availability.
Some of these are duplicates of tests test_system.py and test_process.py.
�N)�AIX)�FREEBSD)�LINUX)�MACOS)�NETBSD)�OPENBSD)�POSIX)�SUNOS)�WINDOWS)�long)�GITHUB_ACTIONS)�HAS_CPU_FREQ)�HAS_NET_IO_COUNTERS)�HAS_SENSORS_FANS)�HAS_SENSORS_TEMPERATURES)�PYPY)�	QEMU_USER)�SKIP_SYSCONS)�PsutilTestCase)�create_sockets)�enum)�
is_namedtuple)�kernel_version)�pytestc@sHeZdZdd�Zdd�Zdd�Zdd�Zejj	e
o4ed	d
�dd��Zd
S)�TestAvailConstantsAPIscCsttd�tptptksJ�dS)NZPROCFS_PATH)�hasattr�psutilrr	r��self�r�G/usr/local/lib64/python3.9/site-packages/psutil/tests/test_contracts.py�test_PROCFS_PATH3sz'TestAvailConstantsAPIs.test_PROCFS_PATHcCsj|j}|ttd�t�|ttd�t�|ttd�t�|ttd�t�|ttd�t�|ttd�t�dS)NZABOVE_NORMAL_PRIORITY_CLASSZBELOW_NORMAL_PRIORITY_CLASSZHIGH_PRIORITY_CLASSZIDLE_PRIORITY_CLASSZNORMAL_PRIORITY_CLASSZREALTIME_PRIORITY_CLASS��assertEqualrrr
�rZaerrr �test_win_priority6sz(TestAvailConstantsAPIs.test_win_prioritycCsJ|j}|ttd�t�|ttd�t�|ttd�t�|ttd�t�dS)NZIOPRIO_CLASS_NONEZIOPRIO_CLASS_RTZIOPRIO_CLASS_BEZIOPRIO_CLASS_IDLE)r#rrrr$rrr �test_linux_ioprio_linux?s
z.TestAvailConstantsAPIs.test_linux_ioprio_linuxcCsJ|j}|ttd�t�|ttd�t�|ttd�t�|ttd�t�dS)NZIOPRIO_HIGHZ
IOPRIO_NORMALZ
IOPRIO_LOWZIOPRIO_VERYLOWr"r$rrr �test_linux_ioprio_windowsFs
z0TestAvailConstantsAPIs.test_linux_ioprio_windows�%unsupported on GITHUB_ACTIONS + LINUX��reasoncCs�|j}|ttd�tpt�|ttd�tp*t�|ttd�tp>t�|ttd�tpRt�|ttd�tpft�|ttd�tpzt�|ttd�tp�t�|ttd�tp�t�|ttd	�tp�t�|ttd
�tp�t�|ttd�tp�t�|ttd�t�t�r�t�d
k�r|ttd�t�t�dk�r0|ttd�t�t�dk�rL|ttd�t�t�dk�rh|ttd�t�t�d
k�r�|ttd�t�|ttd�t�|ttd�t�|ttd�t�dS)NZ
RLIM_INFINITYZ	RLIMIT_ASZRLIMIT_COREZ
RLIMIT_CPUZRLIMIT_DATAZRLIMIT_FSIZEZRLIMIT_MEMLOCKZ
RLIMIT_NOFILEZRLIMIT_NPROCZ
RLIMIT_RSSZRLIMIT_STACKZRLIMIT_LOCKS)���ZRLIMIT_MSGQUEUE)r+r,�ZRLIMIT_NICEZ
RLIMIT_RTPRIO)r+r,�Z
RLIMIT_RTTIMEZRLIMIT_SIGPENDINGZRLIMIT_SWAPZ
RLIMIT_SBSIZEZRLIMIT_NPTS)r#rrrrrrr$rrr �test_rlimitMs6z"TestAvailConstantsAPIs.test_rlimitN)
�__name__�
__module__�__qualname__r!r%r&r'r�mark�skipifrrr0rrrr r2s	�rc@s<eZdZdd�Zdd�Zdd�Zdd�Zd	d
�Zdd�Zd
S)�TestAvailSystemAPIscCsttd�tksJ�dS)NZwin_service_iter�rrr
rrrr �test_win_service_iterrsz)TestAvailSystemAPIs.test_win_service_itercCsttd�tksJ�dS)NZwin_service_getr7rrrr �test_win_service_getusz(TestAvailSystemAPIs.test_win_service_getcCs&ttd�tptptptptks"J�dS)N�cpu_freq)rrrrr
rrrrrr �
test_cpu_freqxs�z!TestAvailSystemAPIs.test_cpu_freqcCsttd�tptksJ�dS)N�sensors_temperatures)rrrrrrrr �test_sensors_temperatures}sz-TestAvailSystemAPIs.test_sensors_temperaturescCsttd�tksJ�dS)N�sensors_fans)rrrrrrr �test_sensors_fans�sz%TestAvailSystemAPIs.test_sensors_fanscCs"ttd�tptptptksJ�dS)NZsensors_battery)rrrr
rrrrrr �test_battery�s�z TestAvailSystemAPIs.test_batteryN)	r1r2r3r8r9r;r=r?r@rrrr r6qsr6c@s�eZdZdd�Zdd�Zdd�Zdd�Zd	d
�Zej	j
eo<edd�d
d��Z
dd�Zdd�Zdd�Zdd�Zdd�Zdd�ZdS)�TestAvailProcessAPIscCs4ttjd�tp(tp(tp(tp(tp(tp(t	p(t
ks0J�dS)N�environ)rr�Processrrr
rr	rrrrrrr �test_environ�s 
�������z!TestAvailProcessAPIs.test_environcCsttjd�tksJ�dS�NZuids�rrrCrrrrr �	test_uids�szTestAvailProcessAPIs.test_uidscCsttjd�tksJ�dSrErFrrrr �	test_gids�szTestAvailProcessAPIs.test_gidscCsttjd�tksJ�dS)N�terminalrFrrrr �
test_terminal�sz"TestAvailProcessAPIs.test_terminalcCsttjd�tptksJ�dS)NZionice)rrrCrr
rrrr �test_ionice�sz TestAvailProcessAPIs.test_ionicer(r)cCsttjd�tptksJ�dS)NZrlimit)rrrCrrrrrr r0�sz TestAvailProcessAPIs.test_rlimitcCs"ttjd�}|tptksJ�dS)NZio_counters)rrrCrr	�rZhasitrrr �test_io_counters�sz%TestAvailProcessAPIs.test_io_counterscCsttjd�tksJ�dS)NZnum_fdsrFrrrr �test_num_fds�sz!TestAvailProcessAPIs.test_num_fdscCsttjd�tksJ�dS)NZnum_handles)rrrCr
rrrr �test_num_handles�sz%TestAvailProcessAPIs.test_num_handlescCs ttjd�tptptksJ�dS)NZcpu_affinity)rrrCrr
rrrrr �test_cpu_affinity�s

�z&TestAvailProcessAPIs.test_cpu_affinitycCs ttjd�tptptksJ�dS)NZcpu_num)rrrCrrr	rrrr �test_cpu_num�s

�z!TestAvailProcessAPIs.test_cpu_numcCs*ttjd�}|tptptptks&J�dS)NZmemory_maps)rrrCrrrrrLrrr �test_memory_maps�sz%TestAvailProcessAPIs.test_memory_mapsN)r1r2r3rDrGrHrJrKrr4r5rrr0rMrNrOrPrQrRrrrr rA�s �
rAc@s&eZdZdZedd��Zedfdd�Zdd�Zd	d
�Z	dd�Z
d
d�Zej
jeoZe��dkdd�ej
jedd�dd���Zdd�Zdd�Zej
jedd�dd��Zdd�Zej
jedd�dd ��Zej
jedd�d!d"��Zej
jedd�d#d$��Zej
jedd�d%d&��Z d'd(�Z!d)d*�Z"d+S),�TestSystemAPITypesz�Check the return types of system related APIs.
    Mainly we want to test we never return unicode on Python 2, see:
    https://github.com/giampaolo/psutil/issues/1039.
    cCst��|_dS�N)rrC�proc)�clsrrr �
setUpClass�szTestSystemAPITypes.setUpClassTcCs8t|�sJ�|D]"}t||�s"J�|r|dksJ�qdS)Nr)r�
isinstance)r�nt�type_Zgezero�nrrr �assert_ntuple_of_nums�s
z(TestSystemAPITypes.assert_ntuple_of_numscCs.|�t���tjdd�D]}|�|�qdS)NT)Zpercpu)r\rZ	cpu_times)rrYrrr �test_cpu_times�sz!TestSystemAPITypes.test_cpu_timescCs0ttjdd�t�sJ�ttjdd�t�s,J�dS)N��intervalg�h㈵��>)rXrZcpu_percent�floatrrrr �test_cpu_percent�sz#TestSystemAPITypes.test_cpu_percentcCs(|�tjdd��|�tjdd��dS)Nr^g-C��6?)r\rZcpu_times_percentrrrr �test_cpu_times_percent�sz)TestSystemAPITypes.test_cpu_times_percentcCstt��t�sJ�dSrT)rXr�	cpu_count�intrrrr �test_cpu_count�sz!TestSystemAPITypes.test_cpu_count�arm64zskipped due to #1892r)z
not supportedcCs2t��durt�d��|jt��tttfd�dS)Nzcpu_freq() returns None�rZ)rr:r�skipr\r`rdrrrrr r;�s
z TestSystemAPITypes.test_cpu_freqcCs>tjdd���D](\}}t|t�s&J�|j|ttfd�qdS)NT)Zperdiskrg)rZdisk_io_counters�itemsrX�strr\rdr)r�k�vrrr �test_disk_io_counters�sz(TestSystemAPITypes.test_disk_io_counterscCsRt��D]D}t|jt�sJ�t|jt�s,J�t|jt�s<J�t|jt�sJ�qdSrT)rZdisk_partitionsrXZdevicerjZ
mountpointZfstype�opts)rZdiskrrr �test_disk_partitions�s
z'TestSystemAPITypes.test_disk_partitionsz
requires rootcCsbt��Ht�d�}t|�tt|��ks*J�|D]}t|�s.J�q.Wd�n1sT0YdS)N�all)rrZnet_connections�len�setr)r�ret�connrrr �test_net_connectionss

z'TestSystemAPITypes.test_net_connectionscCs�t����D]�\}}t|t�s"J�|D]t}tdurJtsJt|jtj�sZJ�nt|jt	�sZJ�t|j
t�sjJ�t|jttd�f�s�J�t|j
ttd�f�s&J�q&qdSrT)rZnet_if_addrsrirXrjrr�family�IntEnumrd�address�netmask�type�	broadcast)r�ifname�addrs�addrrrr �test_net_if_addrssz$TestSystemAPITypes.test_net_if_addrszQEMU user not supportedcCs�t����D]r\}}t|t�s"J�t|jt�s2J�tdurNt|jtj	�s^J�nt|jt
�s^J�t|jt
�snJ�t|jt
�sJ�qdSrT)
rZnet_if_statsrirXrjZisup�boolrZduplexrwrd�speedZmtu)rr|�inforrr �test_net_if_statssz$TestSystemAPITypes.test_net_if_statscCs$tjdd�D]}t|t�sJ�qdS)NT)Zpernic)rZnet_io_countersrXrj)rr|rrr �test_net_io_counters"sz'TestSystemAPITypes.test_net_io_counterscCs\t����D]J\}}t|t�s"J�|D].}t|jt�s:J�t|jttt	d�f�s&J�q&qdSrT)
rr>rirXrj�label�currentr`rdrz�r�nameZunits�unitrrr r?(s
z$TestSystemAPITypes.test_sensors_fanscCs�t����D]~\}}t|t�s"J�|D]b}t|jt�s:J�t|jttt	d�f�sTJ�t|j
ttt	d�f�snJ�t|jttt	d�f�s&J�q&qdSrT)rr<rirXrjr�r�r`rdrz�high�criticalr�rrr r=1sz,TestSystemAPITypes.test_sensors_temperaturescCstt��t�sJ�dSrT)rXrZ	boot_timer`rrrr �test_boot_time<sz!TestSystemAPITypes.test_boot_timecCsjt��D]\}t|jt�sJ�t|jttd�f�s4J�t|jttd�f�sLJ�t|jt	td�f�sJ�qdSrT)
rZusersrXr�rjrIrz�host�pidrd)r�userrrr �
test_users@s
zTestSystemAPITypes.test_usersN)#r1r2r3�__doc__�classmethodrWr`r\r]rarbrerr4r5r�platform�machiner
r;rmrorrurrr�rr�rr?rr=r�r�rrrr rS�s8
�






rSc@s&eZdZejjedd�dd��ZdS)�TestProcessWaitTypez	not POSIXr)cCs\t�|��j�}|��|��}|tjks0J�tdurJt	|tj
�sXJ�nt	|t�sXJ�dSrT)rrCZspawn_testprocr��	terminate�wait�signal�SIGTERMrrXrwrd)r�p�coderrr �test_negative_signalJsz(TestProcessWaitType.test_negative_signalN)r1r2r3rr4r5rr�rrrr r�Isr�)#r�r�r�rrrrrrrrr	r
Zpsutil._compatrZpsutil.testsrr
rrrrrrrrrrrrrr6rArSr�rrrr �<module>sB?>PKok\R4�(||1psutil/tests/__pycache__/test_misc.cpython-39.pycnu�[���a

��?h���@sfdZddlZddlZddlZddlZddlZddlZddlZddlZddl	Z	ddl
Z
ddlZ
ddl
mZddl
m
Z
ddlmZddlmZddlmZddlmZdd	lmZdd
lmZddlmZddlmZdd
lmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlm Z ddlm!Z!ddlm"Z"ddlm#Z#ddlm$Z$ddlm%Z%ddlm&Z&ddlm'Z'ddlm(Z(ddlm)Z)ddlm*Z*dd lm+Z+dd!lm,Z,dd"lm-Z-Gd#d$�d$e'�Z.Gd%d&�d&e'�Z/Gd'd(�d(e'�Z0Gd)d*�d*e'�Z1e�2d+d,�Z3Gd-d.�d.e'�Z4e*j5j6ej7�8e&�d/d0�Gd1d2�d2e'��Z9dS)3zMiscellaneous tests.�N)�POSIX)�WINDOWS)�bcat)�cat)�debug)�
isfile_strict)�memoize)�memoize_when_activated��parse_environ_block)�
supports_ipv6��wrap_numbers)�PY3)�FileNotFoundError)�redirect_stderr)�
CI_TESTING)�HAS_BATTERY)�HAS_MEMORY_MAPS)�HAS_NET_IO_COUNTERS)�HAS_SENSORS_BATTERY)�HAS_SENSORS_FANS)�HAS_SENSORS_TEMPERATURES)�
PYTHON_EXE)�PYTHON_EXE_ENV)�	QEMU_USER)�SCRIPTS_DIR)�PsutilTestCase)�mock)�process_namespace)�pytest)�
reload_module)�sh)�system_namespacec@s�eZdZdd�Zefdd�Zdd�Zdd�Zd	d
�Zdd�Z	d
d�Z
dd�Zdd�Zdd�Z
dd�Zdd�Zdd�Zdd�Zdd�ZdS) �TestSpecialMethodscCszt�t��"tjj�dd�Wd�n1s20Yt�tj��t�dd�Wd�n1sl0YdS)N��)	r �raises�
OverflowError�psutilZ_psplatformZcextZcheck_pid_range�
NoSuchProcess�Process��self�r.�B/usr/local/lib64/python3.9/site-packages/psutil/tests/test_misc.py�test_check_pid_range>s0z'TestSpecialMethods.test_check_pid_rangecCst�|��j�}||�}d|vs$J�d|j|vs6J�dt|���|�dd�vsVJ�d|vsbJ�d|vsnJ�|��|��||�}d|vs�J�d|vs�J�t	j
jtjd	t�t
���d
��Jt��}||�}d|j|vs�J�d|vs�J�d|vs�J�Wd�n1�s0Yt	j
jtjd	t�t
���d
��Pt��}||�}d|j|v�s\J�d
|v�sjJ�d|v�sxJ�Wd�n1�s�0Yt	j
jtjd	t�t
���d
��Bt��}||�}d|j|v�s�J�d|v�s�J�Wd�n1�s0YdS)Nzpsutil.Processzpid=%sz	name='%s'zname=u'zname='zstatus=z	exitcode=zstatus='terminated'�name�Zside_effectzstatus='zombie'zname=Z
terminated)r)r+Zspawn_testproc�pid�strr1�replace�	terminate�waitr�patch�object�
ZombieProcess�os�getpidr*�AccessDenied)r-�func�p�rr.r.r/�test_process__repr__DsR �,�.�z'TestSpecialMethods.test_process__repr__cCs|jtd�dS)N)r>)rAr4r,r.r.r/�test_process__str__psz&TestSpecialMethods.test_process__str__cCstt���dksJ�dS)Nzpsutil.Error())�reprr)�Errorr,r.r.r/�test_error__repr__ssz%TestSpecialMethods.test_error__repr__cCstt���dksJ�dS)N�)r4r)rDr,r.r.r/�test_error__str__vsz$TestSpecialMethods.test_error__str__cCs6tt�d��dksJ�ttjdddd��dks2J�dS)N�Az=psutil.NoSuchProcess(pid=321, msg='process no longer exists')r1�msg�r1rIz5psutil.NoSuchProcess(pid=321, name='name', msg='msg'))rCr)r*r,r.r.r/�test_no_such_process__repr__ys����z/TestSpecialMethods.test_no_such_process__repr__cCs6tt�d��dksJ�ttjdddd��dks2J�dS)NrHz"process no longer exists (pid=321)r1rIrJ�msg (pid=321, name='name'))r4r)r*r,r.r.r/�test_no_such_process__str__�s����z.TestSpecialMethods.test_no_such_process__str__cCs8tt�d��dksJ�ttjddddd��dks4J�dS)NrHzGpsutil.ZombieProcess(pid=321, msg="PID still exists but it's a zombie")r1�@�foo�r1�ppidrIz?psutil.ZombieProcess(pid=321, ppid=320, name='name', msg='foo'))rCr)r:r,r.r.r/�test_zombie_process__repr__�s����z.TestSpecialMethods.test_zombie_process__repr__cCs8tt�d��dksJ�ttjddddd��dks4J�dS)NrHz,PID still exists but it's a zombie (pid=321)r1rNrOrPz$foo (pid=321, ppid=320, name='name'))r4r)r:r,r.r.r/�test_zombie_process__str__�s����z-TestSpecialMethods.test_zombie_process__str__cCs6tt�d��dksJ�ttjdddd��dks2J�dS)NrHzpsutil.AccessDenied(pid=321)r1rIrJz4psutil.AccessDenied(pid=321, name='name', msg='msg'))rCr)r=r,r.r.r/�test_access_denied__repr__�s
��z-TestSpecialMethods.test_access_denied__repr__cCs6tt�d��dksJ�ttjdddd��dks2J�dS)NrHz	(pid=321)r1rIrJrL)r4r)r=r,r.r.r/�test_access_denied__str__�s
��z,TestSpecialMethods.test_access_denied__str__cCs6tt�d��dksJ�ttjdddd��dks2J�dS)N�z?psutil.TimeoutExpired(seconds=5, msg='timeout after 5 seconds')rHr1�r3r1zUpsutil.TimeoutExpired(pid=321, name='name', seconds=5, msg='timeout after 5 seconds'))rCr)�TimeoutExpiredr,r.r.r/�test_timeout_expired__repr__�s����z/TestSpecialMethods.test_timeout_expired__repr__cCs6tt�d��dksJ�ttjdddd��dks2J�dS)NrVztimeout after 5 secondsrHr1rWz.timeout after 5 seconds (pid=321, name='name'))r4r)rXr,r.r.r/�test_timeout_expired__str__�s
��z.TestSpecialMethods.test_timeout_expired__str__cCs>t��}t��}||ksJ�d|_||ks.J�|dks:J�dS)N)rrrO)r)r+�_ident)r-�p1�p2r.r.r/�test_process__eq__�sz%TestSpecialMethods.test_process__eq__cCs(tt��t��g�}t|�dks$J�dS)N�)�setr)r+�len)r-�sr.r.r/�test_process__hash__�sz'TestSpecialMethods.test_process__hash__N)�__name__�
__module__�__qualname__r0rCrArBrErGrKrMrRrSrTrUrYrZr^rcr.r.r.r/r$=s,


r$c@s<eZdZdd�Zdd�Zdd�Zdd�Zd	d
�Zdd�Zd
S)�TestMiscc	Cs�tt�}|D]~}|dvrq|�d�szt|�Wqty�|tjvr�tt|�}|dur^Yq|jdur�d|j��vr�|�	d|��Yq0qtjD]}||vs�J�q�dS)N)r�long�tests�test�PermissionError�ProcessLookupError�_�
deprecatedz%r not in psutil.__all__)
�dirr)�
startswith�
__import__�ImportError�__all__�getattr�__doc__�lower�fail)r-Z
dir_psutilr1�funr.r.r/�test__all__�s&


��
zTestMisc.test__all__cCs$d�dd�tjD��tjks J�dS)N�.cSsg|]}t|��qSr.)r4)�.0�xr.r.r/�
<listcomp>��z)TestMisc.test_version.<locals>.<listcomp>)�joinr)�version_info�__version__r,r.r.r/�test_version�s��zTestMisc.test_versioncCs"t��}d|_d|��vsJ�dS)N�1rO)r)r+rO�as_dict)r-r?r.r.r/�!test_process_as_dict_no_new_names�sz*TestMisc.test_process_as_dict_no_new_namesc
Cs�dd�}t��}|t�����t|�}|j|jdd�D]^\}}|j||d��8z
|�}WntjynYn
0||�Wd�q81s�0Yq8t�}|�|j�D]v\}}|dvr�q�t	r�|dkr�q�|j|d��8z
|�}Wntj
y�Yn
0||�Wd�q�1�s0Yq�t�t�
tjd	d
dd���}t|tj��sPJ�|jd	k�s`J�|jd
k�spJ�|jdk�s�J�t�t�
tjd	d
d
dd���}t|tj��s�J�|jd	k�s�J�|jd
k�s�J�|jd
k�s�J�|jdk�s�J�t�t�
tj
dd
dd���}t|tj
��sJ�|jdk�s.J�|jd
k�s>J�|jdk�sNJ�t�t�
tjdd	d
d���}t|tj��s|J�|jdk�s�J�|jd	k�s�J�|jd
k�s�J�dS)NcSs4t�t�|��t�|�}t�|�}||ks0J�dS�N)�json�loads�dumps�pickle)�ret�a�br.r.r/�checks

z*TestMisc.test_serialization.<locals>.checkT)�clear_cache)�procr1>Zwin_service_iterZwin_service_getZnet_if_stats)r1i�r1rI)r3r1rI�*)r3r1rQrI�{�!)�secondsr3r1)r)r+r�r�iterZgettersZsubTestrDr#rr=r�r�r�r*�
isinstancer3r1rIr:rQrXr�)r-r�r��nsrxr1r�r�r.r.r/�test_serializationst	
(
*�������zTestMisc.test_serializationc	Cs�tjjtjdtjd��"}t��|js*J�Wd�n1s>0Ytjjtjdt�d�d��"}t��|jsvJ�Wd�n1s�0Ytjjtjdtd��L}t	�
t��t��Wd�n1s�0Y|js�J�Wd�n1s�0Ytjjtjdt�d�d��R}|�tj��t��Wd�n1�sL0Y|j�sbJ�Wd�n1�sx0YdS)NZ
_get_identr2r_)
rr8r9r)r+r=�calledr:�
ValueErrorr r'r*�assertRaises)r-�methr.r.r/�test_ad_on_process_creationas4
�(�(�&(�(z$TestMisc.test_ad_on_process_creationc	Csztjddd��Xt�t��}tt�Wd�n1s80Ydt|j��	�vsXJ�Wd�n1sl0YdS)Nzpsutil._psplatform.cext.versionz0.0.0�Zreturn_valuezversion conflict)
rr8r r'rrr!r)r4�valuerv)r-�cmr.r.r/�test_sanity_version_check~s�&z"TestMisc.test_sanity_version_checkN)	rdrerfryr�r�r�r�r�r.r.r.r/rg�s!]rgc@sReZdZdd�ZeZddd�Zdd�Zdd	�Zd
d�Zdd
�Z	dd�Z
dd�ZdS)�TestMemoizeDecoratorcCs
g|_dSr���callsr,r.r.r/�setUp�szTestMemoizeDecorator.setUpNcCs2td�D]2}|�}|jdifgks&J�|dur||ksJ�qtd�D]:}|d�}|jdifdifgksjJ�|durD||ksDJ�qDtd�D]H}|ddd�}|jdifdifdddifgks�J�|dur�||ks�J�q�t|j�dks�J�|��|�}|du�r
||k�s
J�t|j�dk�sJ�|jd	k�s.J�dS)
Nr%r.r_�r_��barr����
My docstring.)�ranger�ra�cache_clearru)r-�obj�expected_retvalrmr�r.r.r/�run_against�s,$
z TestMemoizeDecorator.run_againstcs&t�fdd��}|�|j|dd�dS)Ncs�j�||f�dS�r���r��append��args�kwargs�Z	baseclassr.r/rO�sz/TestMemoizeDecorator.test_function.<locals>.foor��r�)rr�)r-rOr.r�r/�
test_function�sz"TestMemoizeDecorator.test_functioncs>tG�fdd�d��}|�|j|dd�|���dks:J�dS)Ncs$eZdZdZ�fdd�Zdd�ZdS)z,TestMemoizeDecorator.test_class.<locals>.Foor�cs�j�||f�dSr�r��r-r�r�r�r.r/�__init__�sz5TestMemoizeDecorator.test_class.<locals>.Foo.__init__cSsdS)Nr�r.r,r.r.r/r��sz0TestMemoizeDecorator.test_class.<locals>.Foo.barN)rdrerfrur�r�r.r�r.r/�Foo�sr�r�r�)rr�r��r-r�r.r�r/�
test_class�s
	zTestMemoizeDecorator.test_classcCs�tGdd�d��}|�|�us"J�t|��t|��ks:J�t|d��t|d��ksVJ�t|ddd��t|ddd��kszJ�t|d��t|d��ks�J�dS)Nc@seZdZdd�ZdS)z6TestMemoizeDecorator.test_class_singleton.<locals>.Barc_sdSr�r.r�r.r.r/r��sz?TestMemoizeDecorator.test_class_singleton.<locals>.Bar.__init__N)rdrerfr�r.r.r.r/�Bar�sr�r_r�)rOr%)r�id)r-r�r.r.r/�test_class_singleton�s$z)TestMemoizeDecorator.test_class_singletoncs,G�fdd�d�}|�|j|�jdd�dS)Ncs eZdZee�fdd���ZdS)z3TestMemoizeDecorator.test_staticmethod.<locals>.Foocs�j�||f�dSr�r�r�r�r.r/r��sz7TestMemoizeDecorator.test_staticmethod.<locals>.Foo.barN)rdrerf�staticmethodrr�r.r�r.r/r��sr�r�r��r�r�r�r.r�r/�test_staticmethod�sz&TestMemoizeDecorator.test_staticmethodcs,G�fdd�d�}|�|j|�jdd�dS)Ncs eZdZee�fdd���ZdS)z2TestMemoizeDecorator.test_classmethod.<locals>.Foocs�j�||f�dSr�r�)�clsr�r�r�r.r/r��sz6TestMemoizeDecorator.test_classmethod.<locals>.Foo.barN)rdrerf�classmethodrr�r.r�r.r/r��sr�r�r�r�r�r.r�r/�test_classmethod�sz%TestMemoizeDecorator.test_classmethodcst�fdd��}g�td�D].}|�}dif}||ks:J�t��dksJ�qtd�D]0}|d�}dif}||kstJ�t��dksTJ�qTtd�D]8}|ddd�}dddif}||ks�J�t��d	ks�J�q�|��|�}dif}||ks�J�t��d
ks�J�|jdk�s
J�dS)Ncs��d�||fS)�Foo docstring.N�r�r�r�r.r/rO�s
z/TestMemoizeDecorator.test_original.<locals>.foor%r.r_r�r�r�r�r�r�)rr�rar�ru)r-rOrmr��expectedr.r�r/�
test_original�s0z"TestMemoizeDecorator.test_original)N)rdrerfr��tearDownr�r�r�r�r�r�r�r.r.r.r/r��s


r�c@s<eZdZdd�Zdd�Zdd�Zdd�Zd	d
�Zdd�Zd
S)�TestCommonModulecs�G�fdd�d�}|�}g�|��|��t��dks<J�g�|j�|�|��|��t��dkslJ�g�|j�|�|��|��t��dks�J�dS)NcseZdZe�fdd��ZdS)z9TestCommonModule.test_memoize_when_activated.<locals>.Foocs��d�dSr�r�r,r�r.r/rOsz=TestCommonModule.test_memoize_when_activated.<locals>.Foo.fooN)rdrerfr	rOr.r�r.r/r�sr�r%r_)rOraZcache_activateZcache_deactivate)r-r��fr.r�r/�test_memoize_when_activateds z,TestCommonModule.test_memoize_when_activatedcCs�dd�}td�|d�diks J�td�|d�d|d�diks@J�td	�|d�d|d�d
iks`J�td�|d�d|d�diks�J�td�|d�diks�J�td
�|d�diks�J�td�|d�diks�J�dS)NcSstr|��S|Sr�)r�upper)rbr.r.r/�k0sz4TestCommonModule.test_parse_environ_block.<locals>.kza=1r�r�z	a=1b=2r��2za=1b=rFz
a=1b=2c=3zxxxa=1z	a=1=b=2za=1b=2r
)r-r�r.r.r/�test_parse_environ_block/s�
 �
z)TestCommonModule.test_parse_environ_blockc	Cs�|�tj�t��rRt�d��(}d|_t��t�r8J�Wd�n1sL0Yt��tjdtjd��$}t�rzJ�|js�J�Wd�n1s�0Yt��tjdtj	d��,}t�r�J�t��|js�J�Wd�n1s�0Yt��tjdtj	d��0}t��rJ�t��|j�s0J�Wd�n1�sF0Yn`t
�tj��Bt�tjtj
�}z|�d�W|��n
|��0Wd�n1�s�0YdS)Nzpsutil._common.socketFzpsutil._common.socket.socketr2z!psutil._common.socket.socket.bind)z::1r)Z
addCleanuprr�rr8�has_ipv6�socket�errorr��gaierrorr r'�AF_INET6�SOCK_STREAM�bind�close)r-rb�sockr.r.r/�test_supports_ipv6DsF(�
(�
(�.z#TestCommonModule.test_supports_ipv6c	Cs�tj�t�}t|�sJ�ttj�|��r,J�tjdtt	j
d�d��Bt�t��t|�Wd�n1sl0YWd�n1s�0Ytjdtt	j
d�d��Bt�t��t|�Wd�n1s�0YWd�n1s�0Ytjdtt	jd�d��t|��r"J�Wd�n1�s80Ytjddd��t|��r`J�Wd�n1�sv0YdS)Nzpsutil._common.os.statrOr2zpsutil._common.stat.S_ISREGFr�)r;�path�abspath�__file__r�dirnamerr8�OSError�errno�EPERMr r'�EACCES�ENOENT)r-Z	this_filer.r.r/�test_isfile_strictks&�D�D�.z#TestCommonModule.test_isfile_strictc	Cs�trddlm}nddlm}tj�tjdd��Lt|���"}t	d�t
j��Wd�n1sd0YWd�n1s�0Y|�
�}|�d�s�J|��d|vs�J�t�dd�|vs�J�tj�tjdd��Ht|���}t	td	��Wd�n1�s0YWd�n1�s(0Y|�
�}d
|v�sHJ�d|v�sVJ�tj�tjdd��Tt|���(}tdd
�}d|_t	|�Wd�n1�s�0YWd�n1�s�0Y|�
�}d
|v�s�J�d|v�s�J�dS)Nr)�StringIOZPSUTIL_DEBUGTZhellozpsutil-debugz.pyc�.pyzthis is an errorzignoring ValueErrorz'this is an error'r%zno such filez/foo)r�ior�rr8r9r)�_commonrr�sys�stderr�flush�getvaluerpr�r5r�r��filename)r-r�r�rI�excr.r.r/�
test_debug�s2FL
HzTestCommonModule.test_debugcCs�|��}t|d��}|�d�Wd�n1s20Yt|�dksLJ�t|�dks\J�t�t��t|d�Wd�n1s�0Yt�t��t|d�Wd�n1s�0Yt|ddd�dks�J�t|ddd�dks�J�dS)N�wrOsfooz-invalidr�)�fallback)Z
get_testfn�open�writerrr r'r)r-Ztestfnr�r.r.r/�
test_cat_bcat�s(**zTestCommonModule.test_cat_bcatN)	rdrerfr�r�r�r�r�r�r.r.r.r/r�s'!r�rOza b cc@s�eZdZdd�ZeZdd�Zdd�Zdd�Zd	d
�Zdd�Z	d
d�Z
dd�Zdd�Zdd�Z
dd�Zdd�Zdd�Zejjedd�dd��ZdS) �TestWrapNumberscCst��dSr�)rr�r,r.r.r/r��szTestWrapNumbers.setUpcCs&dtddd�i}t|d�|ks"J�dS�N�disk1rV�disk_io��ntr�r-�inputr.r.r/�test_first_call�szTestWrapNumbers.test_first_callcCs8dtddd�i}t|d�|ks"J�t|d�|ks4J�dSr�r�r�r.r.r/�test_input_hasnt_changed�sz(TestWrapNumbers.test_input_hasnt_changedcCs�dtddd�i}t|d�|ks"J�dtddd�i}t|d�|ksDJ�dtddd�i}t|d�|ksfJ�dtddd�i}t|d�|ks�J�dS)	Nr�rVr��
����r�r�r.r.r/�test_increase_but_no_wrap�sz)TestWrapNumbers.test_increase_but_no_wrapcCs�dtddd�i}t|d�|ks"J�dtddd�i}t|d�dtddd�iksPJ�dtddd�i}t|d�dtddd�iks~J�dtddd�i}t|d�dtddd�iks�J�dtddd�i}t|d�dtddd	�iks�J�dtddd�i}t|d�dtddd	�ik�s
J�dtd
dd�i}t|d�dtddd	�ik�s:J�dtddd�i}t|d�dtddd	�ik�sjJ�dtddd�i}t|d�dtddd	�ik�s�J�dS)
Nr��dr�r�n�Z�r���2��(r�r�r.r.r/�	test_wrap�s$   zTestWrapNumbers.test_wrapcCstdtddd�i}t|d�|ks"J�tddd�tddd�d�}t|d�|ksNJ�dtddd�i}t|d�|kspJ�dS)Nr�rVr���r�Zdisk2�r�r�r.r.r/�test_changing_keys�sz"TestWrapNumbers.test_changing_keyscCs0tddd�tddd�d�}t|d�|ks,J�tddd�tddd�d�}t|d�tddd�tddd�d�ksnJ�dtddd�i}t|d�|ks�J�tddd�tddd�d�}t|d�|ks�J�tddd�tddd�d�}t|d�|ks�J�tddd�tddd�d�}t|d�tddd�tddd�d�k�s,J�dS)Nrr	rr�rr
r�r�r�r.r.r/�test_changing_keys_w_wrap�s$

�

�z)TestWrapNumbers.test_changing_keys_w_wrapcCsbddddd�}t|d�|ks J�t|d�|ks2J�ddddd�}t|d�}|dd	d
ks^J�dS)N)	i,���#�R������ȷ)	i�r%itUiirrrr)	�6ri�$i�Nr��ri��)	iU	i�i"ri�i4i�i�i\)�nvme0n1Z	nvme0n1p1Z	nvme0n1p2Z	nvme0n1p3r�)	r	rrrrrrrrr"ri�r
)r-�d�outr.r.r/�test_real_datas��
zTestWrapNumbers.test_real_datacCsbdtddd�i}t|d�t��}|dd|iks6J�|ddiiksJJ�|ddiiks^J�dS)Nr�rVr�rr_r%�r�r�
cache_info�r-r�cacher.r.r/�test_cache_first_call(s
z%TestWrapNumbers.test_cache_first_callcCs�dtddd�i}t|d�dtddd�i}t|d�t��}|dd|iksPJ�|dddddd�ikslJ�|ddiiks�J�dS)	Nr�rVr�rrr_�)r�r)r�r_�r�r%r%r&r(r.r.r/�test_cache_call_twice0s

�
z%TestWrapNumbers.test_cache_call_twicecCs�dtddd�i}t|d�dtddd�i}t|d�t��}|dd|iksPJ�|dddddd�ikslJ�|dddtd	g�iiks�J�d
d�}dtddd�i}t|d�t��}|dd|iks�J�|�dtddd�i}t|d�t��}|dd|ik�sJ�|�dtddd
�i}t|d�t��}|dd|ik�sDJ�|dddddd�ik�sbJ�|dddtd	g�iik�s�J�dS)Nr�r	r�rrr_r+r%r,cSsFt��}|dddddd�iks$J�|dddtdg�iiksBJ�dS)	Nr_r�rr	r+r%r�r,)rr'r`)r)r.r.r/�check_cache_infoKs��
z9TestWrapNumbers.test_cache_wrap.<locals>.check_cache_inforrr)r�rr'r`)r-rr)r.r.r.r/�test_cache_wrap<s:

�



�zTestWrapNumbers.test_cache_wrapcCs�dtddd�i}t|d�tddd�tddd�d�}t|d�t��}|dd|iksZJ�|dddddd�iksvJ�|d	diiks�J�dS)
Nr�rVr�rrrr_r+r%r&r(r.r.r/�test_cache_changing_keysns

�
z(TestWrapNumbers.test_cache_changing_keyscCs\dtddd�i}t|d�t|d�t�d�t��iiifksDJ�t�d�t�d�dS)Nr�rVr�z?!?)r�rr�r'r�r.r.r/�test_cache_clearzs



z TestWrapNumbers.test_cache_clear�
not supported��reasoncCs�t��rt��st�d��t��t��t��}|D]}d|vsFJ�d|vs6J�q6tj��t��}|D]}d|vszJ�d|vsjJ�qjtj��t��}|iiifks�J�dS)Nzno disks or NICs availablezpsutil.disk_io_counterszpsutil.net_io_counters)r)Zdisk_io_countersZnet_io_countersr �skiprr'r�)r-�cachesr)r.r.r/�test_cache_clear_public_apis�s 


z,TestWrapNumbers.test_cache_clear_public_apisN)rdrerfr�r�rrrrrrr%r*r-r/r0r1r �mark�skipifrr7r.r.r.r/r��s 
2	r�zcan't locate scripts directoryr3c@s�eZdZdZedd��Zedd��Zdd�Zej	j
edd	�d
d��Zdd
�Z
dd�Zdd�Zdd�Zej	j
eoxe��dd	�dd��Zdd�Zdd�Zdd�Zej	j
edd	�dd��Zej	j
ed d	�d!d"��Zd#d$�Zd%d&�Zd'd(�Zd)d*�Zd+d,�Z d-d.�Z!ej	j
e"d/d	�d0d1��Z#d2d3�Z$ej	j
e%d d	�d4d5��Z&ej	j
e'd d	�d6d7��Z(ej	j
e)d d	�ej	j
e*d8d	�d9d:���Z+ej	j
e)d d	�ej	j
e*d8d	�d;d<���Z,d=S)>�TestScriptsz-Tests for scripts in the "scripts" directory.c
Os�|�dt�dtj�t|�}t|g}|D]}|�|�q*zt|fi|���	�}WnFt
y�}z.dt|�vr�t|�WYd}~S�WYd}~n
d}~00|s�J|��|S)N�envz%sr=)�
setdefaultrr;r�rrrr�r"�strip�RuntimeErrorr4)�exer�r��cmd�argr$�errr.r.r/�
assert_stdout�szTestScripts.assert_stdoutcCs\tj�t|�}trt|dd�nt|��}|��}Wd�n1sD0Yt�|�dS)N�utf8)�encoding)	r;r�rrrr��read�ast�parse)r?r��srcr.r.r/�
assert_syntax�s&zTestScripts.assert_syntaxcCsVt|�}t�t�D]>}|�d�rdtj�|�d|vr|�dtj�t|���qdS)Nr�Ztest_rzno test defined for %r script)	ror;�listdirr�endswithr��splitextrwr)r-Zmethsr1r.r.r/�
test_coverage�s
��zTestScripts.test_coveragez
POSIX onlyr3cCs`t�t�D]P\}}}|D]@}|�d�rtj�||�}tjt�|�tj@s|�	d|��qq
dS)Nr�z%r is not executable)
r;�walkrrLr�r�stat�S_IXUSR�ST_MODErw)r-�root�dirs�files�filer�r.r.r/�test_executable�s
zTestScripts.test_executablecCs|�d�dS)Nz
disk_usage.py�rCr,r.r.r/�test_disk_usage�szTestScripts.test_disk_usagecCs|�d�dS)Nzfree.pyrXr,r.r.r/�	test_free�szTestScripts.test_freecCs|�d�dS)Nz
meminfo.pyrXr,r.r.r/�test_meminfo�szTestScripts.test_meminfocCs|�dtt����dS)Nzprocinfo.py�rCr4r;r<r,r.r.r/�
test_procinfo�szTestScripts.test_procinfozno userscCs|�d�dS)Nzwho.pyrXr,r.r.r/�test_who�szTestScripts.test_whocCs|�d�dS)Nzps.pyrXr,r.r.r/�test_ps�szTestScripts.test_pscCs|�d�dS)Nz	pstree.pyrXr,r.r.r/�test_pstree�szTestScripts.test_pstreecCs|�d�dS)Nz
netstat.pyrXr,r.r.r/�test_netstat�szTestScripts.test_netstatzQEMU user not supportedcCs|�d�dS)Nzifconfig.pyrXr,r.r.r/�
test_ifconfig�szTestScripts.test_ifconfigr2cCs|�dtt����dS)Nzpmap.pyr\r,r.r.r/�	test_pmap�szTestScripts.test_pmapcCs*dt����jvrt�d��|�d�dS)NZussr2zprocsmem.py)r)r+Zmemory_full_info�_fieldsr r5rCr,r.r.r/�
test_procsmem�s
zTestScripts.test_procsmemcCs|�d�dS)Nz
killall.py�rJr,r.r.r/�test_killall�szTestScripts.test_killallcCs|�d�dS)Nz	nettop.pyrfr,r.r.r/�test_nettop�szTestScripts.test_nettopcCs|�d�dS)Nztop.pyrfr,r.r.r/�test_top�szTestScripts.test_topcCs|�d�dS)Nziotop.pyrfr,r.r.r/�
test_iotopszTestScripts.test_iotopcCs,|�dt�����}tt���|vs(J�dS)Nzpidof.py)rCr)r+r1r4r;r<)r-�outputr.r.r/�
test_pidofszTestScripts.test_pidofzWINDOWS onlycCs|�d�dS)Nzwinservices.pyrXr,r.r.r/�test_winservicesszTestScripts.test_winservicescCs|�d�dS)Nzcpu_distribution.pyrfr,r.r.r/�test_cpu_distributionsz!TestScripts.test_cpu_distributioncCs t��st�d��|�d�dS)Nzno temperaturesztemperatures.py)r)Zsensors_temperaturesr r5rCr,r.r.r/�test_temperaturess
zTestScripts.test_temperaturescCs t��st�d��|�d�dS)Nzno fanszfans.py)r)Zsensors_fansr r5rCr,r.r.r/�	test_fanss
zTestScripts.test_fansz
no batterycCs|�d�dS)Nz
battery.pyrXr,r.r.r/�test_batteryszTestScripts.test_batterycCs|�d�dS)Nz
sensors.pyrXr,r.r.r/�test_sensorsszTestScripts.test_sensorsN)-rdrerfrur�rCrJrNr r8r9rrWrYrZr[r]rr)Zusersr^r_r`rarrbrrcrergrhrirjrlrrmrnrrorrprrrqrrr.r.r.r/r:�sP








r:):rurG�collectionsr�r�r;r�r�rPr�r)Zpsutil.testsrrZpsutil._commonrrrrrr	rrrZpsutil._compatrrrrrrrrrrrrrrrrrr r!r"r#r$rgr�r��
namedtupler�r�r8r9r��existsr:r.r.r.r/�<module>st8!h�PKok\�6�r�r3psutil/tests/__pycache__/test_system.cpython-39.pycnu�[���a

��?hO��@s�dZddlZddlZddlZddlZddlZddlZddlZddlZddl	Z	ddl
Z
ddlZddlZddlm
Z
ddlmZddlmZddlmZddlmZddlmZdd	lmZdd
lmZddlmZddlmZdd
lmZddlmZddlmZddlmZddlmZddlmZddlmZddlm Z ddlm!Z!ddlm"Z"ddlm#Z#ddlm$Z$ddlm%Z%ddlm&Z&ddlm'Z'ddlm(Z(ddlm)Z)ddlm*Z*ddlm+Z+dd lm,Z,dd!lm-Z-dd"lm.Z.dd#lm/Z/dd$lm0Z0dd%lm1Z1dd&lm2Z2Gd'd(�d(e-�Z3Gd)d*�d*e-�Z4Gd+d,�d,e-�Z5Gd-d.�d.e-�Z6Gd/d0�d0e-�Z7Gd1d2�d2e-�Z8Gd3d4�d4e-�Z9Gd5d6�d6e-�Z:dS)7zTests for system APIS.�N)�AIX)�BSD)�FREEBSD)�LINUX)�MACOS)�NETBSD)�OPENBSD)�POSIX)�SUNOS)�WINDOWS)�PY3)�FileNotFoundError)�long)�ASCII_FS)�
CI_TESTING)�DEVNULL)�GITHUB_ACTIONS)�GLOBAL_TIMEOUT)�HAS_BATTERY)�HAS_CPU_FREQ)�HAS_GETLOADAVG)�HAS_NET_IO_COUNTERS)�HAS_SENSORS_BATTERY)�HAS_SENSORS_FANS)�HAS_SENSORS_TEMPERATURES)�IS_64BIT)�MACOS_12PLUS)�PYPY)�	QEMU_USER)�UNICODE_SUFFIX)�PsutilTestCase)�check_net_address)�enum)�mock)�pytest)�retry_on_failurec@s<eZdZdd�Zdd�Zdd�Zdd�Zd	d
�Zdd�Zd
S)�TestProcessItercCs~t��dd�t��D�vsJ�|��}|jdd�t��D�vsBJ�t�|j�}|��|��|jdd�t��D�vszJ�dS)NcSsg|]
}|j�qS���pid��.0�xr'r'�D/usr/local/lib64/python3.9/site-packages/psutil/tests/test_system.py�
<listcomp>C�z5TestProcessIter.test_pid_presence.<locals>.<listcomp>cSsg|]
}|j�qSr'r(r*r'r'r-r.Er/cSsg|]
}|j�qSr'r(r*r'r'r-r.Ir/)	�os�getpid�psutil�process_iter�spawn_testprocr)�Process�kill�wait��selfZsproc�pr'r'r-�test_pid_presenceBsz!TestProcessIter.test_pid_presencecCs>dd�t��D�}t|dd�d�tt|�dd�d�ks:J�dS)NcSsg|]}|�qSr'r'r*r'r'r-r.Lr/z6TestProcessIter.test_no_duplicates.<locals>.<listcomp>cSs|jS�Nr(�r,r'r'r-�<lambda>Mr/z4TestProcessIter.test_no_duplicates.<locals>.<lambda>)�keycSs|jSr<r(r=r'r'r-r>Nr/)r2r3�sorted�set)r9�lsr'r'r-�test_no_duplicatesKs�z"TestProcessIter.test_no_duplicatesc	Csztt���td�D]`}tjdt�t���d��*ttjdgd��gksLJ�Wd�n1s`0Ytj�	�qdS�N�zpsutil.Process.as_dict�Zside_effect�	cpu_times��attrs)
�listr2r3�ranger#�patchZ
NoSuchProcessr0r1�cache_clear�r9r,r'r'r-�test_emulate_nspQs�8z TestProcessIter.test_emulate_nspc
Cs�tt���td�D]�}tjdt�t���d��Nt	�
tj��"ttjdgd��Wd�n1sf0YWd�n1s�0Ytj��qdSrD)rJr2r3rKr#rL�AccessDeniedr0r1r$�raisesrMrNr'r'r-�test_emulate_access_denied[s�Nz*TestProcessIter.test_emulate_access_deniedcCs�tjdgd�D]}t|j���dgksJ�qtjdgd�D]}t|j���dgks:J�q:t�t��"ttjdgd��Wd�n1s�0Ytj	dt�
dd�d��T}tjddgd�D](}|jddus�J�|jddks�J�q�|js�J�Wd�n1�s0Ytj	dt�
dd�d��d}t�}tjddg|d	�D].}|jd|u�sXJ�|jddk�s@J��q@|j�s|J�Wd�n1�s�0YdS)
Nr)rHZfooz$psutil._psplatform.Process.cpu_timesr�rFrG)rIZad_value)
r2r3rJ�info�keysr$rQ�
ValueErrorr#rLrP�called�object)r9r:�m�flagr'r'r-�
test_attrsfs60
�*
��
zTestProcessIter.test_attrscCs.tt���tjsJ�tj��tjr*J�dSr<)rJr2r3Z_pmaprM�r9r'r'r-�test_cache_clear�s

z TestProcessIter.test_cache_clearN)	�__name__�
__module__�__qualname__r;rCrOrRr[r]r'r'r'r-r&As	
r&c@sTeZdZejjeoedd�dd��Zejjeo0edd�dd��Z	dd�Z
d	d
�ZdS)�TestProcessAPIsz-spawn_testproc() unreliable on PYPY + WINDOWS��reasoncs4�fdd�}g�|��}|��}|��}dd�|||fD�}t�t��tj|dd�Wd�n1sj0Yt�t��tj|dd�Wd�n1s�0Yt��}tj|d	|d
�\}}t��|dks�J�|gks�J�t|�dks�J��gk�sJ�|D]}	t	|	d
��r
J��q
t
d�dd��}
|��|
||�\}}|jdd�|D�v�sbJ�t
�r�|��jtjk�s�J�n|��jdk�s�J��|jgk�s�J�|D]}	t	|	d
��r�J��q�t
d�dd��}|��|��|||�\}}t��t|j|j|jg�k�sJ�|D]}	t	|	d
��sJ��qdS)Ncs��|j�dSr<)�appendr))r:��pidsr'r-�callback�sz1TestProcessAPIs.test_wait_procs.<locals>.callbackcSsg|]}t�|j��qSr'�r2r5r)r*r'r'r-r.�r/z3TestProcessAPIs.test_wait_procs.<locals>.<listcomp>���)�timeout�)rgg{�G�z�?�rjrgg�?��
returncode�cSs<tj|d|d�\}}t|�dks$J�t|�dks4J�||fS)N���Q��?rlrkrE�r2�
wait_procs�len��procsrg�gone�aliver'r'r-�test_1�s�
z/TestProcessAPIs.test_wait_procs.<locals>.test_1cSsg|]
}|j�qSr'r(r*r'r'r-r.�r/cSs<tj|d|d�\}}t|�dks$J�t|�dks4J�||fS)Nrprlrmrrqrtr'r'r-�test_2�s�
z/TestProcessAPIs.test_wait_procs.<locals>.test_2)r4r$rQrVr2rr�	TypeError�timers�hasattrr%�	terminater)r	�poprn�signal�SIGTERMrA)r9rg�sproc1�sproc2�sproc3ru�trvrwr:rxryr'rer-�test_wait_procs�sJ,,

"zTestProcessAPIs.test_wait_procscCsL|��}|��}|��}dd�|||fD�}|D]}|��q0t�|�dS)NcSsg|]}t�|j��qSr'rhr*r'r'r-r.�r/z>TestProcessAPIs.test_wait_procs_no_timeout.<locals>.<listcomp>)r4r}r2rr)r9r�r�r�rur:r'r'r-�test_wait_procs_no_timeout�s
z*TestProcessAPIs.test_wait_procs_no_timeoutcCsp|��}t�|j�sJ�t�|j�}|��|��t�|j�rDJ�t�d�rRJ�t�d�dt��vkslJ�dS)Nrir)r4r2�
pid_existsr)r5r6r7rfr8r'r'r-�test_pid_exists�szTestProcessAPIs.test_pid_existsc	Cs�t��}|D]D}zt�|�s J�WqtyNt�d�|t��vsJJ�Yq0qtt|�dt|�d�}|D]}t�|�rpJ�qpdS)Ng�������?i�:i�>)r2rfr��AssertionErrorr{�sleeprK�max)r9rfr)r'r'r-�test_pid_exists_2�s
z!TestProcessAPIs.test_pid_exists_2N)r^r_r`r$�mark�skipifrrr�r�r�r�r'r'r'r-ra�s�
;�
	
rac@sFeZdZdd�Zejjeo"e�	�dd�dd��Z
dd�Zd	d
�ZdS)�TestMiscAPIscCs6t��}t|t�sJ�|dks"J�|t��ks2J�dS�Nr)r2Z	boot_time�
isinstance�floatr{)r9Zbtr'r'r-�test_boot_time�szTestMiscAPIs.test_boot_time�unreliable on CIrbc	Cs�t��}|gksJ�|D]�}|j|d���|js4J�t|jt�sDJ�t|jttd�f�s\J�|jdur~t|jttd�f�s~J�|j|j|j	dks�J�t
j
�|j	�ts�t
r�|jdus�J�nt�|j�Wd�q1s�0YqdS)N)�user�)r2�users�subTest�namer��strZterminal�type�host�started�datetime�
fromtimestamprrr)r5)r9r�r�r'r'r-�
test_users�s 

zTestMiscAPIs.test_userscCs,tj}tt_zt��W|t_n|t_0dSr<)�sys�stdoutrr2�test)r9r�r'r'r-�	test_tests

zTestMiscAPIs.test_testcCs|gd�}|D]}ttt|�t�sJ|��qtjdk�r6tjs@J�tjrJJ�|�d�dt	j
��vrxtjslJ�|�d�n�dt	j
��vr�tj
s�J�tjtjtjg�d�dks�J�|�d	�|�d
�|�d�|�d�n\d
t	j
��vs�dt	j
��v�rtj�sJ�|�d�n&dt	j
��v�rXtj�s*J�|�d�n"tj�sBJ�tj�rNJ�|�d�|D]}tt|��r\J|���q\dS)N)	r	rrrrrrrr
�posixr	�linuxrZbsdTrkrrrr�sunos�solarisr
�darwinrr)r��getattrr2�boolr0r�r	r�remover��platform�lowerrrrrr�countr
r)r9�namesr�r'r'r-�test_os_constantssF




��


��
zTestMiscAPIs.test_os_constantsN)
r^r_r`r�r$r�r�rr2r�r�r�r�r'r'r'r-r��s�
	r�c@seZdZdd�Zdd�ZdS)�TestMemoryAPIscCs�t��}|jdksJ|��|jdks,J|��d|jkrBdksLnJ|��|jdks^J|��|jdkspJ|��|jD]l}t||�}|dkr�t	|t
tf�s�J�|dkrv|dks�|�d||f��||jkrv|�d||j||f��qvdS)Nr�d�percent�totalz%r < 0 (%s)z%r > total (total=%s, %s=%s))
r2Zvirtual_memoryr��	availabler��used�free�_fieldsr�r��intr�fail)r9�memr��valuer'r'r-�test_virtual_memoryHs& 


��z"TestMemoryAPIs.test_virtual_memorycCs�t��}|jdksJ�|jdks(J|��|jdks:J|��|jdkrX|jdksjJ|��n|jdksjJ|��d|jkr�dks�nJ|��|jdks�J|��|jdks�J|��dS)N)r�r�r�r��sin�soutrr�)	r2Zswap_memoryr�r�r�r�r�r�r�)r9r�r'r'r-�test_swap_memory\s	
 zTestMemoryAPIs.test_swap_memoryN)r^r_r`r�r�r'r'r'r-r�Gsr�c@s�eZdZdd�Zdd�Zdd�Zdd�Zd	d
�Zdd�Zd
d�Z	e
jje
oLedd�dd��Zdd�Zdd�Zdd�Zdd�Zdd�Zdd�Zdd �Ze
jjeo�e��d!kd"d�e
jjed#d�d$d%���Ze
jjed#d�d&d'��Zd(S))�TestCpuAPIscCs�t��}|dusJ�|ttjdd��ks,J�|dks8J�tj�d�r�td��}|��}Wd�n1sj0Yd|vr�t	�
d��dS)NT��percpurkz
/proc/cpuinfozphysical idz#cpuinfo doesn't include physical id)r2�	cpu_countrsrGr0�path�exists�open�readr$�skip)r9�logical�fdZcpuinfo_datar'r'r-�test_cpu_count_logicalts
&z"TestCpuAPIs.test_cpu_count_logicalcCsht��}tjdd�}|dur&t�d��trLt��dd�dkrL|dusdJ�n|dksXJ�||ksdJ�dS)NF�r�zcpu_count_cores() is NonerE)�rkrk)r2r�r$r�rr��getwindowsversion)r9r�Zcoresr'r'r-�test_cpu_count_cores�s
z TestCpuAPIs.test_cpu_count_coresc	Cs�dD]�}tjd|d��*}t��dus(J�|js2J�Wd�n1sF0Ytjd|d��.}tjdd�dustJ�|js~J�Wd�q1s�0YqdS)N)rirNz$psutil._psplatform.cpu_count_logical�Zreturn_valuez"psutil._psplatform.cpu_count_coresFr�)r#rLr2r�rW)r9�valrYr'r'r-�test_cpu_count_none�s�(�zTestCpuAPIs.test_cpu_count_nonecCsjd}t��}t|�|D]&}t|t�s*J�|dks6J�||7}qtt|t|��d�dks^J�t|�dS)Nrr�r�)r2rG�sumr�r��round�absr�)r9r��times�cp_timer'r'r-�test_cpu_times�s
zTestCpuAPIs.test_cpu_timescCsLtt���}t��t}t��|kr>tt���}||krdSq|�d��dS)Nztime remained the same)r�r2rGr{rr�)r9�t1Zstop_at�t2r'r'r-�test_cpu_times_time_increases�sz)TestCpuAPIs.test_cpu_times_time_increasescCs�tjdd�D]b}d}t|�|D]&}t|t�s2J�|dks>J�||7}q tt|t|��d�dksfJ�t|�qttjdd�d�ttjdd��ks�J�dS)NTr�rr�r�F)	r2rGr�r�r�r�r�r�rs)r9r�r�r�r'r'r-�test_per_cpu_times�s


�zTestCpuAPIs.test_per_cpu_timescCs�tjdd�}t��t}t��|kr.|�d�Stjdd�}t||�D]4\}}t�|�t�|�}}||}|dkrDdSqDqdS)NTr�rjg�������?)r2rGr{rr��zipZ_cpu_busy_time)r9Ztot1Z	giveup_atZtot2r�r��
differencer'r'r-�test_per_cpu_times_2�s
z TestCpuAPIs.test_per_cpu_times_2zunreliable on OPENBSD + CIrbc	Cs�t��}tjdd�}|�dd�t|�D��}|jD]T}|j|||d��0tt||�t||��dkshJ�Wd�q21s|0Yq2dS)NTr�cSsg|]}t|��qSr')r�)r+�numr'r'r-r.�r/z9TestCpuAPIs.test_cpu_times_comparison.<locals>.<listcomp>)�field�base�per_cpurk)r2rG�_maker�r�r�r�r�)r9r�r�Z
summed_valuesr�r'r'r-�test_cpu_times_comparison�s
��z%TestCpuAPIs.test_cpu_times_comparisonc
Cs�z>t|t�sJ�|dksJ�|dus(J�|dt��ks<J�WnBty�}z*td|t�|�t�|�f��WYd}~n
d}~00dS)Nr�g�gY@z
%s
last=%s
new=%s)r�r�r2r�r��pprint�pformat)r9r�Zlast_retZnew_ret�errr'r'r-�_test_cpu_percents��zTestCpuAPIs._test_cpu_percentcCsrtjdd�}td�D]"}tjdd�}|�|||�|}qt�t��tjdd�Wd�n1sd0YdS�N���MbP?)�intervalr�ri)r2�cpu_percentrKr�r$rQrV)r9�last�_�newr'r'r-�test_cpu_percentszTestCpuAPIs.test_cpu_percentcCs�tjddd�}t|�t��ks"J�td�D].}tjddd�}|D]}|�|||�q@|}q*t�t��tjddd�Wd�n1s�0YdS)Nr�T�r�r�r�ri)	r2r�rsr�rKr�r$rQrV�r9r�r�r�r�r'r'r-�test_per_cpu_percentsz TestCpuAPIs.test_per_cpu_percentcCs�tjdd�}td�D]>}tjdd�}|D]}|�|||�q(|�t|�||�|}qt�t��tjdd�Wd�n1s�0YdSr�)r2�cpu_times_percentrKr�r�r$rQrVr�r'r'r-�test_cpu_times_percent"sz"TestCpuAPIs.test_cpu_times_percentcCsztjddd�}t|�t��ks"J�td�D]J}tjddd�}|D].}|D]}|�|||�qH|�t|�||�q@|}q*dS)Nr�Tr�r�)r2r�rsr�rKr�r�)r9r�r�r��cpur�r'r'r-�test_per_cpu_times_percent-sz&TestCpuAPIs.test_per_cpu_times_percentcCs~tjdd�dd�tjdd�D�}tjd|d��:tjdd�D]}|D]}|�|dd�qFq>Wd�n1sp0YdS)NTr�cSs*g|]"}|�dd�tt|j��D���qS)cSsg|]}d�qS)rr'r*r'r'r-r.<r/zNTestCpuAPIs.test_per_cpu_times_percent_negative.<locals>.<listcomp>.<listcomp>)r�rKrsr�r*r'r'r-r.;s�zCTestCpuAPIs.test_per_cpu_times_percent_negative.<locals>.<listcomp>zpsutil.cpu_timesr�)r2r�rGr#rLr�)r9Z
zero_timesr�r�r'r'r-�#test_per_cpu_times_percent_negative8s
�z/TestCpuAPIs.test_per_cpu_times_percent_negativecCsTt��}|jdksJ�|jD]2}t||�}|dks6J�ts|dvr|dksJ�qdS)N)�ctx_switches�
interruptsZsoft_interruptsZsyscallsr)r�r�)r2Z	cpu_statsr�r�r)r9�infosr�r�r'r'r-�test_cpu_statsDs

zTestCpuAPIs.test_cpu_stats�arm64zskipped due to #1892�
not supportedcCs`dd�}tjdd�}tr&|s&t�d��|s2J|��|tjdd�g�tr\t|�t��ks\J�dS)NcSsl|D]b}|jdksJ�|jdkr0|j|jks0J�|jD].}t||�}t|tttf�sXJ�|dks6J�q6qdS)N)�current�minr�r�r)r�r�rr�r�r�rr�)rB�ntr�r�r'r'r-�check_lsZs


z+TestCpuAPIs.test_cpu_freq.<locals>.check_lsTr�zreturns empty list on FreeBSDF)r2Zcpu_freqrr$r�rrsr�)r9rrBr'r'r-�
test_cpu_freqUs

zTestCpuAPIs.test_cpu_freqcCs@t��}t|�dksJ�|D]}t|t�s.J�|dksJ�qdS)Nrmr�)r2�
getloadavgrsr�r�)r9Zloadavg�loadr'r'r-�test_getloadavgns
zTestCpuAPIs.test_getloadavgN)r^r_r`r�r�r�r�r�r�r�r$r�r�rrr�r�r�r�r�r�r�r�rr��machinerrrr	r'r'r'r-r�ss2
"�
	�r�c@s�eZdZejjeoedd�dd��Zejje	dd�dd��Z
dd	�Zd
d�Zejje
ofej�d�d
d�ejjeo~e��dd�dd���Zdd�ZdS)�TestDiskAPIszunreliable on PYPY32 + 32BITrbcCs>t�t���}|jdksJ�|jdks.J|��|jdks@J|��|jdksRJ|��|j|jksfJ|��|j|jkszJ|��d|jkr�dks�nJ|j��t	t
d�r�t
�t���}d}|j|jks�J�t|j|j�|ks�J�ts�t|j|j�|ks�J�|�
�}t�t��t�|�Wd�n1�s00YdS)N)r�r�r�r�rr��
disk_usageiP)r2rr0�getcwdr�r�r�r�r�r|�shutilr�rZ
get_testfnr$rQr
)r9�usageZshutil_usageZ	tolerance�fnamer'r'r-�test_disk_usagexs$"
zTestDiskAPIs.test_disk_usageznot an ASCII fscCs8t�t��t�t�Wd�n1s*0YdSr<)r$rQ�UnicodeEncodeErrorr2rrr\r'r'r-�test_disk_usage_unicode�sz$TestDiskAPIs.test_disk_usage_unicodecCst�d�dS)N�.)r2rr\r'r'r-�test_disk_usage_bytes�sz"TestDiskAPIs.test_disk_usage_bytescCs~dd�}tjdd�}|sJ�|D]b}||�tr<d|jvr<q tsXtj�|j�s^J|��n|jtj�|j	�stJ|��|j
s J|��q tjdd�}|s�J�tjdd�D]�}||�ts�|j	r�zt�|j	�Wnbt�y,}zHt
�rt�r|jtjk�rWYd}~q�|jtjtjfv�r�WYd}~q�d}~00tj�|j	�s�J|��q�dd�}|t�}d	d
�tjdd�D�}||v�szJ�dS)NcSsDt|jt�sJ�t|jt�s J�t|jt�s0J�t|jt�s@J�dSr<)r��devicer��
mountpoint�fstype�opts�rr'r'r-�check_ntuple�sz7TestDiskAPIs.test_disk_partitions.<locals>.check_ntupleF)�allZcdromTcSs.tj�|�}tj�|�s&tj�|�}q|��Sr<)r0r��abspath�ismount�dirnamer�)r�r'r'r-�find_mount_point�sz;TestDiskAPIs.test_disk_partitions.<locals>.find_mount_pointcSsg|]}|jr|j���qSr')rr�r*r'r'r-r.�s�z5TestDiskAPIs.test_disk_partitions.<locals>.<listcomp>)r2Zdisk_partitionsrrr	r0r�r�rrr�stat�OSErrorrr�errno�EIO�EPERM�EACCES�__file__)r9rrBZdiskr�r �mountZmountsr'r'r-�test_disk_partitions�s>

�z!TestDiskAPIs.test_disk_partitionsz/proc/diskstatsz3/proc/diskstats not available on this linux versionr�cCsvdd�}tjdd�}|dus$Jd��||�tjdd�}t|�tt|��ksPJ�|D]}|sdJ|��|||�qTdS)NcSs�|d|jksJ�|d|jks$J�|d|jks6J�|d|jksHJ�ts�ts�|d|jksbJ�|d|jkstJ�tr�|d|j	ks�J�|d|j
ks�J�|d	|jks�J�ntr�|d|jks�J�|j
D]}t||�dks�J|��q�dS)
NrrkrErm��r���)Z
read_countZwrite_count�
read_bytes�write_bytesrrZ	read_timeZ
write_timerZread_merged_countZwrite_merged_countZ	busy_timerr�r�)rr�r'r'r-r�s
z8TestDiskAPIs.test_disk_io_counters.<locals>.check_ntupleF�Zperdiskzno disks on this system?T)r2�disk_io_countersrsrA�r9r�retr?r'r'r-�test_disk_io_counters�sz"TestDiskAPIs.test_disk_io_counterscCsdtjdid��B}tjdd�dus$J�tjdd�iks8J�|jsBJ�Wd�n1sV0YdS)Nz#psutil._psplatform.disk_io_countersr�Fr0T)r#rLr2r1rW�r9rYr'r'r-�test_disk_io_counters_no_disks�s�z+TestDiskAPIs.test_disk_io_counters_no_disksN)r^r_r`r$r�r�rrrrrrr)rr0r�r�rr2r1r4r6r'r'r'r-rws"
�

:��rc@s�eZdZejjedd�dd��Zejjedd�dd��Zejje	dd�dd	��Z
d
d�Zejje	dd�dd
��Zejje
p�ep�edd�dd��ZdS)�TestNetAPIsrrbcCsddd�}tjdd�}||�tjdd�}|gks4J�|D]&}|sDJ�t|t�sRJ�|||�q8dS)NcSs(|d|jksJ�|d|jks$J�|d|jks6J�|d|jksHJ�|d|jksZJ�|d|jkslJ�|d|jks~J�|d|jks�J�|jdks�J|��|jdks�J|��|jdks�J|��|jdks�J|��|jdks�J|��|jdks�J|��|jdk�sJ|��|jdk�s$J|��dS)	NrrkrErmr*r+r�r,)Z
bytes_sentZ
bytes_recvZpackets_sentZpackets_recvZerrinZerroutZdropinZdropoutrr'r'r-r	s z6TestNetAPIs.test_net_io_counters.<locals>.check_ntupleF�ZpernicT)r2�net_io_countersr�r�r2r'r'r-�test_net_io_counterssz TestNetAPIs.test_net_io_counterscCsdtjdid��B}tjdd�dus$J�tjdd�iks8J�|jsBJ�Wd�n1sV0YdS)Nz"psutil._psplatform.net_io_countersr�Fr8T)r#rLr2r9rWr5r'r'r-�test_net_io_counters_no_nics$s�z(TestNetAPIs.test_net_io_counters_no_nicszQEMU user not supportedc
Cs�t��}|sJ|��t��}ttjtjtjg�}|��D�]\}}t	|t
�sPJ�tt|��t|�kshJ�|D�]�}t	|jt
�s�J�t	|jt
�s�J�t	|jt
td�f�s�J�t	|jt
td�f�s�J�|j|vs�J�tr�ts�t	|jtj�s�J�||j�r�|jtjk�rNt�|j�}t�|�� |�|jdf�Wd�n1�sB0Yn�|jtjk�r�t�|jdtjtjdtj�d}|\}	}
}}}
t�|	|
|�}t�|��|�|
�Wd�n1�s�0Y|j|j|j|jfD]*}|du�r�|jtjk�r�t||j��q�|j�r(|jdu�s<J�ql|jrl|jduslJ�qlq8t�sRt �sRt!�rrt"td��r�tjtjk�s�J�n0t#�r�tjtj$k�s�J�nt%�r�tjdk�s�J�dS)Nr�AF_LINKri)&r2�net_if_addrs�net_if_statsrA�socket�AF_INET�AF_INET6r<�itemsr�r�rs�familyr��address�netmaskr��	broadcastrrr"�IntEnum�isup�
contextlib�closing�bind�getaddrinfo�SOCK_STREAM�
AI_PASSIVEZptpr!rrr
r|r�	AF_PACKETr)r9�nicsZ	nic_statsZfamiliesZnic�addrs�addr�srT�af�socktype�protoZ
_canonname�sa�ipr'r'r-�test_net_if_addrs/sl
2��*�
zTestNetAPIs.test_net_if_addrscCs�trdtjddddfg}ndg}tjd|d��L}t��dd}|jsJJ�tr^|jdkslJ�n|jdkslJ�Wd�n1s�0YdS)	N�em1z06:3d:29)rZriz06-3d-29NNNzpsutil._psplatform.net_if_addrsr�rz06:3d:29:00:00:00z06-3d-29-00-00-00)r	r2r<r#rLr=rWrD)r9r3rYrRr'r'r-� test_net_if_addrs_mac_null_bytesus�
z,TestNetAPIs.test_net_if_addrs_mac_null_bytesc
Cs�t��}|sJ|��tjtjtjf}|��D]p\}}t|t�sBJ�|\}}}}}	t|t�s^J�||vsjJ�||vsvJ�|dks�J�|dks�J�t|	t�s,J�q,dSr�)	r2r>ZNIC_DUPLEX_FULLZNIC_DUPLEX_HALFZNIC_DUPLEX_UNKNOWNrBr�r�r�)
r9rPZall_duplexesr��statsrHZduplex�speedZmtu�flagsr'r'r-�test_net_if_stats�s�zTestNetAPIs.test_net_if_statszLINUX or BSD or MACOS specificcCsXtjdttjd�d��.}t��}|iks,J�|js6J�Wd�n1sJ0YdS)Nzpsutil._psutil_posix.net_if_mturSrF)r#rLr"r#ZENODEVr2r>rW)r9rYr3r'r'r-�test_net_if_stats_enodev�s
�z$TestNetAPIs.test_net_if_stats_enodevN)r^r_r`r$r�r�rr:r;rrYr[r_rrrr`r'r'r'r-r7s



E
�r7c@s�eZdZejjedd�dd��Zejjedd�dd��Zejje	dd�ejje
dd�dd	���Zejjedd�d
d��Z
dS)
�TestSensorsAPIsrrbcCs�t��}|��D]x\}}t|t�s&J�|D]\}t|jt�s>J�|jdurV|jdksVJ�|jdurn|jdksnJ�|jdur*|jdks*J�q*qdSr�)	r2�sensors_temperaturesrBr�r��labelr�high�critical)r9�tempsr��entries�entryr'r'r-�test_sensors_temperatures�s


z)TestSensorsAPIs.test_sensors_temperaturescCs�ddgi}tjd|d��X}tjdd�dd}|js8J�|jdksFJ�|jd	ksTJ�|jd
ksbJ�Wd�n1sv0YdS)NZcoretemp)rcgI@gN@g�Q@z'psutil._psplatform.sensors_temperaturesr�T)Z
fahrenheitrg�^@g�a@g�c@)r#rLr2rbrWrrdre)r9�drYrfr'r'r-�#test_sensors_temperatures_fahreneit�s
�
z3TestSensorsAPIs.test_sensors_temperatures_fahreneitz
no batterycCspt��}|jdksJ�|jdks$J�|jtjtjfvrF|jdks\J�n|jtjkr\|js\J�t|jt�slJ�dS)Nrr�)	r2Zsensors_batteryr�ZsecsleftZPOWER_TIME_UNKNOWNZPOWER_TIME_UNLIMITEDZ
power_pluggedr�r�)r9r3r'r'r-�test_sensors_battery�s�
z$TestSensorsAPIs.test_sensors_batterycCsht��}|��D]R\}}t|t�s&J�|D]6}t|jt�s>J�t|jttf�sRJ�|jdks*J�q*qdSr�)	r2Zsensors_fansrBr�r�rcrr�r)r9Zfansr�rgrhr'r'r-�test_sensors_fans�sz!TestSensorsAPIs.test_sensors_fansN)r^r_r`r$r�r�rrirkrrrlrrmr'r'r'r-ra�s


ra);�__doc__rIr�r#r0r�r�rrr?r�r{r2rrrrrrrr	r
rZpsutil._compatrr
rZpsutil.testsrrrrrrrrrrrrrrrrrr r!r"r#r$r%r&rar�r�r�rr7rar'r'r'r-�<module>sxHfX,#PKok\}��}}2psutil/tests/__pycache__/test_sunos.cpython-39.pycnu�[���a

��?h��@sjdZddlZddlZddlmZddlmZddlmZddlmZejj	edd�Gd	d
�d
e��Z
dS)zSun OS specific tests.�N)�SUNOS)�PsutilTestCase)�pytest)�shz
SUNOS only)�reasonc@seZdZdd�Zdd�ZdS)�SunOSSpecificTestCasec	Cs�tdtjd�}|���d�dd�}|s4td��d}}|D],}|��}t|d�d}t|d	�d}q@||}t��}|j	|ks�J�|j
|ks�J�|j|ks�J�dS)
Nz#env PATH=/usr/sbin:/sbin:%s swap -l�PATH�
�zno swap device(s) configuredr�i�)r�os�environ�strip�split�
ValueError�int�psutilZswap_memory�total�used�free)	�self�out�linesrr�line�fieldsrZpsutil_swap�r�C/usr/local/lib64/python3.9/site-packages/psutil/tests/test_sunos.py�test_swap_memorysz&SunOSSpecificTestCase.test_swap_memorycCs&td�}t��t|�d��ks"J�dS)Nz/usr/sbin/psrinfor	)rr�	cpu_count�lenr)rrrrr�test_cpu_count%sz$SunOSSpecificTestCase.test_cpu_countN)�__name__�
__module__�__qualname__rr!rrrrrsr)�__doc__r
rrZpsutil.testsrrr�markZskipifrrrrr�<module>sPKok\豍BB0psutil/tests/__pycache__/__main__.cpython-39.pycnu�[���a

��?h5�@s"dZddlmZe�gd��dS)z>Run unit tests. This is invoked by:
$ python -m psutil.tests.
�)�pytest)z-vz-sz
--tb=shortN)�__doc__Zpsutil.testsr�main�rr�A/usr/local/lib64/python3.9/site-packages/psutil/tests/__main__.py�<module>sPKok\ށ�,,4psutil/tests/__pycache__/test_unicode.cpython-39.pycnu�[���a

��?h;1�@sdZddlZddlZddlZddlZddlmZddlZddlmZddlm	Z	ddlm
Z
ddlmZddlm
Z
dd	lmZdd
lmZddlmZddlmZdd
lmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlm Z ddlm!Z!ddlm"Z"ddlm#Z#ddlm$Z$e�r�dd �Z!d!d"�Z%Gd#d$�d$e�Z&ej'j(d%d&�ej'j)ed'd(�ej'j)e�o�ed)d(�Gd*d+�d+e&����Z*ej'j)ed,d(�Gd-d.�d.e*��Z+Gd/d0�d0e&�Z,dS)1a>	Notes about unicode handling in psutil
======================================.

Starting from version 5.3.0 psutil adds unicode support, see:
https://github.com/giampaolo/psutil/issues/1040
The notes below apply to *any* API returning a string such as
process exe(), cwd() or username():

* all strings are encoded by using the OS filesystem encoding
  (sys.getfilesystemencoding()) which varies depending on the platform
  (e.g. "UTF-8" on macOS, "mbcs" on Win)
* no API call is supposed to crash with UnicodeDecodeError
* instead, in case of badly encoded data returned by the OS, the
  following error handlers are used to replace the corrupted characters in
  the string:
    * Python 3: sys.getfilesystemencodeerrors() (PY 3.6+) or
      "surrogatescape" on POSIX and "replace" on Windows
    * Python 2: "replace"
* on Python 2 all APIs return bytes (str type), never unicode
* on Python 2, you can go back to unicode by doing:

    >>> unicode(p.exe(), sys.getdefaultencoding(), errors="replace")

For a detailed explanation of how psutil handles unicode see #1040.

Tests
=====

List of APIs returning or dealing with a string:
('not tested' means they are not tested to deal with non-ASCII strings):

* Process.cmdline()
* Process.cwd()
* Process.environ()
* Process.exe()
* Process.memory_maps()
* Process.name()
* Process.net_connections('unix')
* Process.open_files()
* Process.username()             (not tested)

* disk_io_counters()             (not tested)
* disk_partitions()              (not tested)
* disk_usage(str)
* net_connections('unix')
* net_if_addrs()                 (not tested)
* net_if_stats()                 (not tested)
* net_io_counters()              (not tested)
* sensors_fans()                 (not tested)
* sensors_temperatures()         (not tested)
* users()                        (not tested)

* WindowsService.binpath()       (not tested)
* WindowsService.description()   (not tested)
* WindowsService.display_name()  (not tested)
* WindowsService.name()          (not tested)
* WindowsService.status()        (not tested)
* WindowsService.username()      (not tested)

In here we create a unicode path with a funky non-ASCII name and (where
possible) make psutil return it back (e.g. on name(), exe(), open_files(),
etc.) and make sure that:

* psutil never crashes with UnicodeDecodeError
* the returned path matches
�N)�closing)�BSD)�POSIX)�WINDOWS)�PY3)�super)�APPVEYOR)�ASCII_FS)�
CI_TESTING)�HAS_ENVIRON)�HAS_MEMORY_MAPS)�HAS_NET_CONNECTIONS_UNIX)�INVALID_UNICODE_SUFFIX)�PYPY��
TESTFN_PREFIX)�UNICODE_SUFFIX)�PsutilTestCase)�bind_unix_socket)�chdir)�copyload_shared_lib)�
create_py_exe)�
get_testfn)�pytest)�
safe_mkdir��safe_rmpath)�skip_on_access_denied)�spawn_testproc)�	terminatecCs6ddlm}z
||�WSty0t��Yn0dS)Nrr)�psutil.testsrZWindowsError�	traceback�	print_exc)�pathZrm�r$�E/usr/local/lib64/python3.9/site-packages/psutil/tests/test_unicode.pyrqs

rc	Cs�d}t|d�}z�z<t|�t|�t|gd�}t�||d�t|d�Wn2ttfy~YW|durrt|�t|�dS0W|dur�t|�t|�dSW|dur�t|�t|�n|dur�t|�t|�0dS)z`Return True if both the fs and the subprocess module can
    deal with a unicode file name.
    N��suffix)�cmdz-2FT)	rrrr�shutil�copyfile�UnicodeEncodeError�IOErrorr)r'�sprocZtestfnr$r$r%�try_unicode�s2
��
�r.cs0eZdZdZe�fdd��Z�fdd�Z�ZS)�BaseUnicodeTestNcsNt���d|_d|_|jdurJt|j�s2d|_nt|jd�|_t|j�dS)NFTr&)r�
setUpClass�
skip_tests�
funky_name�funky_suffixr.rr)�cls��	__class__r$r%r0�s


zBaseUnicodeTest.setUpClasscst���|jrt�d��dS)Nzcan't handle unicode str)r�setUpr1r�skip��selfr5r$r%r7�s
zBaseUnicodeTest.setUp)�__name__�
__module__�__qualname__r3�classmethodr0r7�
__classcell__r$r$r5r%r/�sr/�serial)�namezASCII fs��reasonztoo much trouble on PYPY2c@s�eZdZdZeZdd�Zdd�Zdd�Zdd	�Z	d
d�Z
ejj
eoDedd
�dd��Zejj
edd
�dd��Zejj
edd
�ejj
edd
�e�dd����Zdd�Zejj
edd
�ejj
edd
�ejj
edd
�dd����ZdS)�
TestFSAPIsz1Test FS APIs with a funky, valid, UTF8 path name.cCsZt|jt�rdnd}t���*t�d�|jt�|�vWd�S1sL0YdS)N�.�ignore)�
isinstancer2�str�warnings�catch_warnings�simplefilter�os�listdir)r:�herer$r$r%�expect_exact_path_match�s

z"TestFSAPIs.expect_exact_path_matchcCsb|jddg}|�|�}t�|j�}|��}t|t�s8J�|��r^t	j
�|�t	j
�|j�ks^J�dS�Nz-cz2import time; [time.sleep(0.1) for x in range(100)])r2r�psutil�Process�pid�exerGrHrOrLr#�normcase)r:r(�subp�prTr$r$r%�
test_proc_exe�s�
zTestFSAPIs.test_proc_execCsV|jddg}|�|�}t�|j���}t|t�s4J�|��rR|t	j
�|j�ksRJ�dSrP)r2rrQrRrSrArGrHrOrLr#�basename)r:r(rVrAr$r$r%�test_proc_name�s�
zTestFSAPIs.test_proc_namecCsZ|jddg}|�|�}t�|j�}|��}|D]}t|t�s.J�q.|��rV||ksVJ�dSrP)	r2rrQrRrS�cmdlinerGrHrO)r:r(rVrWr[�partr$r$r%�test_proc_cmdline�s�
zTestFSAPIs.test_proc_cmdlinecCs�|jd}|�t|�t|�t|�� t��}|��}Wd�n1sL0Yt|��t	�shJ�|�
�r|||ks|J�dS�N�2)r2�
addCleanuprrrrQrR�cwdrGrHrO)r:�dnamerWrar$r$r%�
test_proc_cwd�s

&zTestFSAPIs.test_proc_cwdzfails on PYPY + WINDOWSrBcCs�t��}t|���}t|jd��t|���}Wd�n1sB0Y||��j}t|t	�shJ�t
rz|szt�d��|�
�r�tj�|�tj�|j�ks�J�dS)N�rbzopen_files on BSD is broken)rQrR�setZ
open_files�openr2�popr#rGrHrrr8rOrLrU)r:rW�start�newr#r$r$r%�test_proc_open_files�s*
zTestFSAPIs.test_proc_open_filesz
POSIX onlycCs�|j|jd�}zt|�}Wn$ty>tr0�n
t�d��Yn0t|��@t�	��
d�d}t|jt
�slJ�|j|kszJ�Wd�n1s�0YdS)Nr&�
not supported�unixr)rr3rr+rrr8rrQrR�net_connectionsrG�laddrrH)r:rA�sock�connr$r$r%�test_proc_net_connectionss
z$TestFSAPIs.test_proc_net_connectionszcan't list UNIX socketscCs�dd�}|j|jd�}zt|�}Wn$tyFtr8�n
t�d��Yn0t|��Btj	dd�}||�}t
|jt�svJ�|j|ks�J�Wd�n1s�0YdS)NcSs2|D] }tj�|j��t�r|Sqtd��dS)Nzconnection not found)rLr#rYrn�
startswithr�
ValueError)�consrpr$r$r%�	find_socks
z2TestFSAPIs.test_net_connections.<locals>.find_sockr&rkrl)�kind)
rr3rr+rrr8rrQrmrGrnrH)r:rurArortrpr$r$r%�test_net_connectionss
zTestFSAPIs.test_net_connectionscCs,|jd}|�t|�t|�t�|�dSr^)r2r`rrrQ�
disk_usage)r:rbr$r$r%�test_disk_usage/s
zTestFSAPIs.test_disk_usagerkz&ctypes does not support unicode on PY2zunstable on PYPYcs�t|jd��h}dd���fdd�t����D�}dd�|D�}�|�|vsNJ�|D]}t|t�sRJ�qRWd�n1sz0YdS)Nr&cSstj�tj�|��S)N)rLr#�realpathrU)rWr$r$r%�normpath?sz-TestFSAPIs.test_memory_maps.<locals>.normpathcsg|]}�|j��qSr$)r#��.0�x�r{r$r%�
<listcomp>Bsz/TestFSAPIs.test_memory_maps.<locals>.<listcomp>cSsg|]}t|vr|�qSr$rr|r$r$r%r�F�)rr3rQrRZmemory_mapsrGrH)r:Z
funky_pathZlibpathsr#r$rr%�test_memory_maps5s

�zTestFSAPIs.test_memory_mapsN)r;r<r=�__doc__rr3rOrXrZr]rcr�mark�skipifrrrjrrqr
rrwryrrr�r$r$r$r%rD�s0




��rDzunreliable on CIc@seZdZdZeZdd�ZdS)�TestFSAPIsWithInvalidPathz-Test FS APIs with a funky, invalid path name.cCsdS)NTr$r9r$r$r%rORsz1TestFSAPIsWithInvalidPath.expect_exact_path_matchN)r;r<r=r�rr3rOr$r$r$r%r�Lsr�c@sJeZdZdZerendZejj	e
dd�ejj	eo4edd�dd���Z
dS)	�
TestNonFSAPISz&Unicode tests for non fs-related APIs.�èrkrBzsegfaults on PYPY + WINDOWScCsxtj��}|j|d<|j|d�}t�|j�}|��}|��D]$\}}t	|t
�sRJ�t	|t
�s<J�q<|d|jkstJ�dS)NZ	FUNNY_ARG)�env)rL�environ�copyr3rrQrRrS�itemsrGrH)r:r�r-rW�k�vr$r$r%�test_proc_environas

zTestNonFSAPIS.test_proc_environN)r;r<r=r�rrr3rr�r�rrrr�r$r$r$r%r�\s
r�)-r�rLr)r!rI�
contextlibrrQrrrZpsutil._compatrrr rr	r
rrr
rrrrrrrrrrrrrrrrr.r/r�Zxdist_groupr�rDr�r�r$r$r$r%�<module>sZCPKok\7؄؄psutil/tests/test_windows.pynu�[���#!/usr/bin/env python3
# -*- coding: UTF-8 -*

# Copyright (c) 2009, Giampaolo Rodola'. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

"""Windows specific tests."""

import datetime
import errno
import glob
import os
import platform
import re
import signal
import subprocess
import sys
import time
import warnings

import psutil
from psutil import WINDOWS
from psutil._compat import FileNotFoundError
from psutil._compat import super
from psutil._compat import which
from psutil.tests import APPVEYOR
from psutil.tests import GITHUB_ACTIONS
from psutil.tests import HAS_BATTERY
from psutil.tests import IS_64BIT
from psutil.tests import PY3
from psutil.tests import PYPY
from psutil.tests import TOLERANCE_DISK_USAGE
from psutil.tests import TOLERANCE_SYS_MEM
from psutil.tests import PsutilTestCase
from psutil.tests import mock
from psutil.tests import pytest
from psutil.tests import retry_on_failure
from psutil.tests import sh
from psutil.tests import spawn_testproc
from psutil.tests import terminate


if WINDOWS and not PYPY:
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")
        import win32api  # requires "pip install pywin32"
        import win32con
        import win32process
        import wmi  # requires "pip install wmi" / "make install-pydeps-test"

if WINDOWS:
    from psutil._pswindows import convert_oserror


cext = psutil._psplatform.cext


@pytest.mark.skipif(not WINDOWS, reason="WINDOWS only")
@pytest.mark.skipif(PYPY, reason="pywin32 not available on PYPY")
# https://github.com/giampaolo/psutil/pull/1762#issuecomment-632892692
@pytest.mark.skipif(
    GITHUB_ACTIONS and not PY3, reason="pywin32 broken on GITHUB + PY2"
)
class WindowsTestCase(PsutilTestCase):
    pass


def powershell(cmd):
    """Currently not used, but available just in case. Usage:

    >>> powershell(
        "Get-CIMInstance Win32_PageFileUsage | Select AllocatedBaseSize")
    """
    if not which("powershell.exe"):
        raise pytest.skip("powershell.exe not available")
    cmdline = (
        'powershell.exe -ExecutionPolicy Bypass -NoLogo -NonInteractive '
        + '-NoProfile -WindowStyle Hidden -Command "%s"' % cmd
    )
    return sh(cmdline)


def wmic(path, what, converter=int):
    """Currently not used, but available just in case. Usage:

    >>> wmic("Win32_OperatingSystem", "FreePhysicalMemory")
    2134124534
    """
    out = sh("wmic path %s get %s" % (path, what)).strip()
    data = "".join(out.splitlines()[1:]).strip()  # get rid of the header
    if converter is not None:
        if "," in what:
            return tuple([converter(x) for x in data.split()])
        else:
            return converter(data)
    else:
        return data


# ===================================================================
# System APIs
# ===================================================================


class TestCpuAPIs(WindowsTestCase):
    @pytest.mark.skipif(
        'NUMBER_OF_PROCESSORS' not in os.environ,
        reason="NUMBER_OF_PROCESSORS env var is not available",
    )
    def test_cpu_count_vs_NUMBER_OF_PROCESSORS(self):
        # Will likely fail on many-cores systems:
        # https://stackoverflow.com/questions/31209256
        num_cpus = int(os.environ['NUMBER_OF_PROCESSORS'])
        assert num_cpus == psutil.cpu_count()

    def test_cpu_count_vs_GetSystemInfo(self):
        # Will likely fail on many-cores systems:
        # https://stackoverflow.com/questions/31209256
        sys_value = win32api.GetSystemInfo()[5]
        psutil_value = psutil.cpu_count()
        assert sys_value == psutil_value

    def test_cpu_count_logical_vs_wmi(self):
        w = wmi.WMI()
        procs = sum(
            proc.NumberOfLogicalProcessors for proc in w.Win32_Processor()
        )
        assert psutil.cpu_count() == procs

    def test_cpu_count_cores_vs_wmi(self):
        w = wmi.WMI()
        cores = sum(proc.NumberOfCores for proc in w.Win32_Processor())
        assert psutil.cpu_count(logical=False) == cores

    def test_cpu_count_vs_cpu_times(self):
        assert psutil.cpu_count() == len(psutil.cpu_times(percpu=True))

    def test_cpu_freq(self):
        w = wmi.WMI()
        proc = w.Win32_Processor()[0]
        assert proc.CurrentClockSpeed == psutil.cpu_freq().current
        assert proc.MaxClockSpeed == psutil.cpu_freq().max


class TestSystemAPIs(WindowsTestCase):
    def test_nic_names(self):
        out = sh('ipconfig /all')
        nics = psutil.net_io_counters(pernic=True).keys()
        for nic in nics:
            if "pseudo-interface" in nic.replace(' ', '-').lower():
                continue
            if nic not in out:
                raise self.fail(
                    "%r nic wasn't found in 'ipconfig /all' output" % nic
                )

    def test_total_phymem(self):
        w = wmi.WMI().Win32_ComputerSystem()[0]
        assert int(w.TotalPhysicalMemory) == psutil.virtual_memory().total

    def test_free_phymem(self):
        w = wmi.WMI().Win32_PerfRawData_PerfOS_Memory()[0]
        assert (
            abs(int(w.AvailableBytes) - psutil.virtual_memory().free)
            < TOLERANCE_SYS_MEM
        )

    def test_total_swapmem(self):
        w = wmi.WMI().Win32_PerfRawData_PerfOS_Memory()[0]
        assert (
            int(w.CommitLimit) - psutil.virtual_memory().total
            == psutil.swap_memory().total
        )
        if psutil.swap_memory().total == 0:
            assert psutil.swap_memory().free == 0
            assert psutil.swap_memory().used == 0

    def test_percent_swapmem(self):
        if psutil.swap_memory().total > 0:
            w = wmi.WMI().Win32_PerfRawData_PerfOS_PagingFile(Name="_Total")[0]
            # convert swap usage to a percentage
            percentSwap = int(w.PercentUsage) * 100 / int(w.PercentUsage_Base)
            # exact percent may change but should be reasonable
            # assert within +/- 5% and between 0 and 100%
            assert psutil.swap_memory().percent >= 0
            assert abs(psutil.swap_memory().percent - percentSwap) < 5
            assert psutil.swap_memory().percent <= 100

    # @pytest.mark.skipif(wmi is None, reason="wmi module is not installed")
    # def test__UPTIME(self):
    #     # _UPTIME constant is not public but it is used internally
    #     # as value to return for pid 0 creation time.
    #     # WMI behaves the same.
    #     w = wmi.WMI().Win32_Process(ProcessId=self.pid)[0]
    #     p = psutil.Process(0)
    #     wmic_create = str(w.CreationDate.split('.')[0])
    #     psutil_create = time.strftime("%Y%m%d%H%M%S",
    #                                   time.localtime(p.create_time()))

    # Note: this test is not very reliable
    @pytest.mark.skipif(APPVEYOR, reason="test not reliable on appveyor")
    @retry_on_failure()
    def test_pids(self):
        # Note: this test might fail if the OS is starting/killing
        # other processes in the meantime
        w = wmi.WMI().Win32_Process()
        wmi_pids = set([x.ProcessId for x in w])
        psutil_pids = set(psutil.pids())
        assert wmi_pids == psutil_pids

    @retry_on_failure()
    def test_disks(self):
        ps_parts = psutil.disk_partitions(all=True)
        wmi_parts = wmi.WMI().Win32_LogicalDisk()
        for ps_part in ps_parts:
            for wmi_part in wmi_parts:
                if ps_part.device.replace('\\', '') == wmi_part.DeviceID:
                    if not ps_part.mountpoint:
                        # this is usually a CD-ROM with no disk inserted
                        break
                    if 'cdrom' in ps_part.opts:
                        break
                    if ps_part.mountpoint.startswith('A:'):
                        break  # floppy
                    try:
                        usage = psutil.disk_usage(ps_part.mountpoint)
                    except FileNotFoundError:
                        # usually this is the floppy
                        break
                    assert usage.total == int(wmi_part.Size)
                    wmi_free = int(wmi_part.FreeSpace)
                    assert usage.free == wmi_free
                    # 10 MB tolerance
                    if abs(usage.free - wmi_free) > 10 * 1024 * 1024:
                        raise self.fail(
                            "psutil=%s, wmi=%s" % (usage.free, wmi_free)
                        )
                    break
            else:
                raise self.fail("can't find partition %s" % repr(ps_part))

    @retry_on_failure()
    def test_disk_usage(self):
        for disk in psutil.disk_partitions():
            if 'cdrom' in disk.opts:
                continue
            sys_value = win32api.GetDiskFreeSpaceEx(disk.mountpoint)
            psutil_value = psutil.disk_usage(disk.mountpoint)
            assert abs(sys_value[0] - psutil_value.free) < TOLERANCE_DISK_USAGE
            assert (
                abs(sys_value[1] - psutil_value.total) < TOLERANCE_DISK_USAGE
            )
            assert psutil_value.used == psutil_value.total - psutil_value.free

    def test_disk_partitions(self):
        sys_value = [
            x + '\\'
            for x in win32api.GetLogicalDriveStrings().split("\\\x00")
            if x and not x.startswith('A:')
        ]
        psutil_value = [
            x.mountpoint
            for x in psutil.disk_partitions(all=True)
            if not x.mountpoint.startswith('A:')
        ]
        assert sys_value == psutil_value

    def test_net_if_stats(self):
        ps_names = set(cext.net_if_stats())
        wmi_adapters = wmi.WMI().Win32_NetworkAdapter()
        wmi_names = set()
        for wmi_adapter in wmi_adapters:
            wmi_names.add(wmi_adapter.Name)
            wmi_names.add(wmi_adapter.NetConnectionID)
        assert ps_names & wmi_names, "no common entries in %s, %s" % (
            ps_names,
            wmi_names,
        )

    def test_boot_time(self):
        wmi_os = wmi.WMI().Win32_OperatingSystem()
        wmi_btime_str = wmi_os[0].LastBootUpTime.split('.')[0]
        wmi_btime_dt = datetime.datetime.strptime(
            wmi_btime_str, "%Y%m%d%H%M%S"
        )
        psutil_dt = datetime.datetime.fromtimestamp(psutil.boot_time())
        diff = abs((wmi_btime_dt - psutil_dt).total_seconds())
        assert diff <= 5

    def test_boot_time_fluctuation(self):
        # https://github.com/giampaolo/psutil/issues/1007
        with mock.patch('psutil._pswindows.cext.boot_time', return_value=5):
            assert psutil.boot_time() == 5
        with mock.patch('psutil._pswindows.cext.boot_time', return_value=4):
            assert psutil.boot_time() == 5
        with mock.patch('psutil._pswindows.cext.boot_time', return_value=6):
            assert psutil.boot_time() == 5
        with mock.patch('psutil._pswindows.cext.boot_time', return_value=333):
            assert psutil.boot_time() == 333


# ===================================================================
# sensors_battery()
# ===================================================================


class TestSensorsBattery(WindowsTestCase):
    def test_has_battery(self):
        if win32api.GetPwrCapabilities()['SystemBatteriesPresent']:
            assert psutil.sensors_battery() is not None
        else:
            assert psutil.sensors_battery() is None

    @pytest.mark.skipif(not HAS_BATTERY, reason="no battery")
    def test_percent(self):
        w = wmi.WMI()
        battery_wmi = w.query('select * from Win32_Battery')[0]
        battery_psutil = psutil.sensors_battery()
        assert (
            abs(battery_psutil.percent - battery_wmi.EstimatedChargeRemaining)
            < 1
        )

    @pytest.mark.skipif(not HAS_BATTERY, reason="no battery")
    def test_power_plugged(self):
        w = wmi.WMI()
        battery_wmi = w.query('select * from Win32_Battery')[0]
        battery_psutil = psutil.sensors_battery()
        # Status codes:
        # https://msdn.microsoft.com/en-us/library/aa394074(v=vs.85).aspx
        assert battery_psutil.power_plugged == (battery_wmi.BatteryStatus == 2)

    def test_emulate_no_battery(self):
        with mock.patch(
            "psutil._pswindows.cext.sensors_battery",
            return_value=(0, 128, 0, 0),
        ) as m:
            assert psutil.sensors_battery() is None
            assert m.called

    def test_emulate_power_connected(self):
        with mock.patch(
            "psutil._pswindows.cext.sensors_battery", return_value=(1, 0, 0, 0)
        ) as m:
            assert (
                psutil.sensors_battery().secsleft
                == psutil.POWER_TIME_UNLIMITED
            )
            assert m.called

    def test_emulate_power_charging(self):
        with mock.patch(
            "psutil._pswindows.cext.sensors_battery", return_value=(0, 8, 0, 0)
        ) as m:
            assert (
                psutil.sensors_battery().secsleft
                == psutil.POWER_TIME_UNLIMITED
            )
            assert m.called

    def test_emulate_secs_left_unknown(self):
        with mock.patch(
            "psutil._pswindows.cext.sensors_battery",
            return_value=(0, 0, 0, -1),
        ) as m:
            assert (
                psutil.sensors_battery().secsleft == psutil.POWER_TIME_UNKNOWN
            )
            assert m.called


# ===================================================================
# Process APIs
# ===================================================================


class TestProcess(WindowsTestCase):
    @classmethod
    def setUpClass(cls):
        cls.pid = spawn_testproc().pid

    @classmethod
    def tearDownClass(cls):
        terminate(cls.pid)

    def test_issue_24(self):
        p = psutil.Process(0)
        with pytest.raises(psutil.AccessDenied):
            p.kill()

    def test_special_pid(self):
        p = psutil.Process(4)
        assert p.name() == 'System'
        # use __str__ to access all common Process properties to check
        # that nothing strange happens
        str(p)
        p.username()
        assert p.create_time() >= 0.0
        try:
            rss, _vms = p.memory_info()[:2]
        except psutil.AccessDenied:
            # expected on Windows Vista and Windows 7
            if platform.uname()[1] not in ('vista', 'win-7', 'win7'):
                raise
        else:
            assert rss > 0

    def test_send_signal(self):
        p = psutil.Process(self.pid)
        with pytest.raises(ValueError):
            p.send_signal(signal.SIGINT)

    def test_num_handles_increment(self):
        p = psutil.Process(os.getpid())
        before = p.num_handles()
        handle = win32api.OpenProcess(
            win32con.PROCESS_QUERY_INFORMATION, win32con.FALSE, os.getpid()
        )
        after = p.num_handles()
        assert after == before + 1
        win32api.CloseHandle(handle)
        assert p.num_handles() == before

    def test_ctrl_signals(self):
        p = psutil.Process(self.spawn_testproc().pid)
        p.send_signal(signal.CTRL_C_EVENT)
        p.send_signal(signal.CTRL_BREAK_EVENT)
        p.kill()
        p.wait()
        with pytest.raises(psutil.NoSuchProcess):
            p.send_signal(signal.CTRL_C_EVENT)
        with pytest.raises(psutil.NoSuchProcess):
            p.send_signal(signal.CTRL_BREAK_EVENT)

    def test_username(self):
        name = win32api.GetUserNameEx(win32con.NameSamCompatible)
        if name.endswith('$'):
            # When running as a service account (most likely to be
            # NetworkService), these user name calculations don't produce the
            # same result, causing the test to fail.
            raise pytest.skip('running as service account')
        assert psutil.Process().username() == name

    def test_cmdline(self):
        sys_value = re.sub('[ ]+', ' ', win32api.GetCommandLine()).strip()
        psutil_value = ' '.join(psutil.Process().cmdline())
        if sys_value[0] == '"' != psutil_value[0]:
            # The PyWin32 command line may retain quotes around argv[0] if they
            # were used unnecessarily, while psutil will omit them. So remove
            # the first 2 quotes from sys_value if not in psutil_value.
            # A path to an executable will not contain quotes, so this is safe.
            sys_value = sys_value.replace('"', '', 2)
        assert sys_value == psutil_value

    # XXX - occasional failures

    # def test_cpu_times(self):
    #     handle = win32api.OpenProcess(win32con.PROCESS_QUERY_INFORMATION,
    #                                   win32con.FALSE, os.getpid())
    #     self.addCleanup(win32api.CloseHandle, handle)
    #     sys_value = win32process.GetProcessTimes(handle)
    #     psutil_value = psutil.Process().cpu_times()
    #     self.assertAlmostEqual(
    #         psutil_value.user, sys_value['UserTime'] / 10000000.0,
    #         delta=0.2)
    #     self.assertAlmostEqual(
    #         psutil_value.user, sys_value['KernelTime'] / 10000000.0,
    #         delta=0.2)

    def test_nice(self):
        handle = win32api.OpenProcess(
            win32con.PROCESS_QUERY_INFORMATION, win32con.FALSE, os.getpid()
        )
        self.addCleanup(win32api.CloseHandle, handle)
        sys_value = win32process.GetPriorityClass(handle)
        psutil_value = psutil.Process().nice()
        assert psutil_value == sys_value

    def test_memory_info(self):
        handle = win32api.OpenProcess(
            win32con.PROCESS_QUERY_INFORMATION, win32con.FALSE, self.pid
        )
        self.addCleanup(win32api.CloseHandle, handle)
        sys_value = win32process.GetProcessMemoryInfo(handle)
        psutil_value = psutil.Process(self.pid).memory_info()
        assert sys_value['PeakWorkingSetSize'] == psutil_value.peak_wset
        assert sys_value['WorkingSetSize'] == psutil_value.wset
        assert (
            sys_value['QuotaPeakPagedPoolUsage']
            == psutil_value.peak_paged_pool
        )
        assert sys_value['QuotaPagedPoolUsage'] == psutil_value.paged_pool
        assert (
            sys_value['QuotaPeakNonPagedPoolUsage']
            == psutil_value.peak_nonpaged_pool
        )
        assert (
            sys_value['QuotaNonPagedPoolUsage'] == psutil_value.nonpaged_pool
        )
        assert sys_value['PagefileUsage'] == psutil_value.pagefile
        assert sys_value['PeakPagefileUsage'] == psutil_value.peak_pagefile

        assert psutil_value.rss == psutil_value.wset
        assert psutil_value.vms == psutil_value.pagefile

    def test_wait(self):
        handle = win32api.OpenProcess(
            win32con.PROCESS_QUERY_INFORMATION, win32con.FALSE, self.pid
        )
        self.addCleanup(win32api.CloseHandle, handle)
        p = psutil.Process(self.pid)
        p.terminate()
        psutil_value = p.wait()
        sys_value = win32process.GetExitCodeProcess(handle)
        assert psutil_value == sys_value

    def test_cpu_affinity(self):
        def from_bitmask(x):
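            # Expand an affinity bitmask into the list of CPU indices
            # whose bit is set, e.g. from_bitmask(0b1011) -> [0, 1, 3].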
            return [i for i in range(64) if (1 << i) & x]

        handle = win32api.OpenProcess(
            win32con.PROCESS_QUERY_INFORMATION, win32con.FALSE, self.pid
        )
        self.addCleanup(win32api.CloseHandle, handle)
        sys_value = from_bitmask(
            win32process.GetProcessAffinityMask(handle)[0]
        )
        psutil_value = psutil.Process(self.pid).cpu_affinity()
        assert psutil_value == sys_value

    def test_io_counters(self):
        handle = win32api.OpenProcess(
            win32con.PROCESS_QUERY_INFORMATION, win32con.FALSE, os.getpid()
        )
        self.addCleanup(win32api.CloseHandle, handle)
        sys_value = win32process.GetProcessIoCounters(handle)
        psutil_value = psutil.Process().io_counters()
        assert psutil_value.read_count == sys_value['ReadOperationCount']
        assert psutil_value.write_count == sys_value['WriteOperationCount']
        assert psutil_value.read_bytes == sys_value['ReadTransferCount']
        assert psutil_value.write_bytes == sys_value['WriteTransferCount']
        assert psutil_value.other_count == sys_value['OtherOperationCount']
        assert psutil_value.other_bytes == sys_value['OtherTransferCount']

    def test_num_handles(self):
        import ctypes
        import ctypes.wintypes

        PROCESS_QUERY_INFORMATION = 0x400
        handle = ctypes.windll.kernel32.OpenProcess(
            PROCESS_QUERY_INFORMATION, 0, self.pid
        )
        self.addCleanup(ctypes.windll.kernel32.CloseHandle, handle)

        hndcnt = ctypes.wintypes.DWORD()
        ctypes.windll.kernel32.GetProcessHandleCount(
            handle, ctypes.byref(hndcnt)
        )
        sys_value = hndcnt.value
        psutil_value = psutil.Process(self.pid).num_handles()
        assert psutil_value == sys_value

    def test_error_partial_copy(self):
        # https://github.com/giampaolo/psutil/issues/875
        exc = WindowsError()
        exc.winerror = 299
        with mock.patch("psutil._psplatform.cext.proc_cwd", side_effect=exc):
            with mock.patch("time.sleep") as m:
                p = psutil.Process()
                with pytest.raises(psutil.AccessDenied):
                    p.cwd()
        assert m.call_count >= 5

    def test_exe(self):
        # NtQuerySystemInformation succeeds even if the process is gone.
        # Make sure it raises NoSuchProcess for a nonexistent PID.
        pid = psutil.pids()[-1] + 99999
        proc = psutil._psplatform.Process(pid)
        with pytest.raises(psutil.NoSuchProcess):
            proc.exe()


class TestProcessWMI(WindowsTestCase):
    """Compare Process API results with WMI."""

    @classmethod
    def setUpClass(cls):
        cls.pid = spawn_testproc().pid

    @classmethod
    def tearDownClass(cls):
        terminate(cls.pid)

    def test_name(self):
        w = wmi.WMI().Win32_Process(ProcessId=self.pid)[0]
        p = psutil.Process(self.pid)
        assert p.name() == w.Caption

    # This fails on GitHub Actions because the test environment
    # uses a virtualenv.
    @pytest.mark.skipif(
        GITHUB_ACTIONS, reason="unreliable path on GITHUB_ACTIONS"
    )
    def test_exe(self):
        w = wmi.WMI().Win32_Process(ProcessId=self.pid)[0]
        p = psutil.Process(self.pid)
        # Note: wmi reports the exe as a lower case string.
        # Since Windows paths are case-insensitive, we ignore case.
        assert p.exe().lower() == w.ExecutablePath.lower()

    def test_cmdline(self):
        w = wmi.WMI().Win32_Process(ProcessId=self.pid)[0]
        p = psutil.Process(self.pid)
        assert ' '.join(p.cmdline()) == w.CommandLine.replace('"', '')

    def test_username(self):
        w = wmi.WMI().Win32_Process(ProcessId=self.pid)[0]
        p = psutil.Process(self.pid)
        domain, _, username = w.GetOwner()
        username = "%s\\%s" % (domain, username)
        assert p.username() == username

    @retry_on_failure()
    def test_memory_rss(self):
        w = wmi.WMI().Win32_Process(ProcessId=self.pid)[0]
        p = psutil.Process(self.pid)
        rss = p.memory_info().rss
        assert rss == int(w.WorkingSetSize)

    @retry_on_failure()
    def test_memory_vms(self):
        w = wmi.WMI().Win32_Process(ProcessId=self.pid)[0]
        p = psutil.Process(self.pid)
        vms = p.memory_info().vms
        # http://msdn.microsoft.com/en-us/library/aa394372(VS.85).aspx
        # ...claims that PageFileUsage is expressed in kilobytes, but
        # on certain platforms it is returned in bytes instead.
        wmi_usage = int(w.PageFileUsage)
        if vms not in (wmi_usage, wmi_usage * 1024):
            self.fail("wmi=%s, psutil=%s" % (wmi_usage, vms))

    def test_create_time(self):
        w = wmi.WMI().Win32_Process(ProcessId=self.pid)[0]
        p = psutil.Process(self.pid)
        wmic_create = str(w.CreationDate.split('.')[0])
        psutil_create = time.strftime(
            "%Y%m%d%H%M%S", time.localtime(p.create_time())
        )
        assert wmic_create == psutil_create


# ---


@pytest.mark.skipif(not WINDOWS, reason="WINDOWS only")
class TestDualProcessImplementation(PsutilTestCase):
    """Certain APIs on Windows have 2 internal implementations: one
    based on the documented Windows APIs, and another based on
    NtQuerySystemInformation(), which is used as a fallback in case
    the first fails due to a limited-permission error.
    Here we test that the two methods return the exact same value;
    see:
    https://github.com/giampaolo/psutil/issues/304.
    """
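    # A minimal sketch of the fallback pattern under test (names below are
    # illustrative, not psutil's actual internals):
    #
    #   def proc_info(pid):
    #       try:
    #           return documented_win_api(pid)
    #       except OSError as err:
    #           if err.errno in (errno.EPERM, errno.EACCES):
    #               return nt_query_system_information(pid)
    #           raise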

    @classmethod
    def setUpClass(cls):
        cls.pid = spawn_testproc().pid

    @classmethod
    def tearDownClass(cls):
        terminate(cls.pid)

    def test_memory_info(self):
        mem_1 = psutil.Process(self.pid).memory_info()
        with mock.patch(
            "psutil._psplatform.cext.proc_memory_info",
            side_effect=OSError(errno.EPERM, "msg"),
        ) as fun:
            mem_2 = psutil.Process(self.pid).memory_info()
            assert len(mem_1) == len(mem_2)
            for i in range(len(mem_1)):
                assert mem_1[i] >= 0
                assert mem_2[i] >= 0
                assert abs(mem_1[i] - mem_2[i]) < 512
            assert fun.called

    def test_create_time(self):
        ctime = psutil.Process(self.pid).create_time()
        with mock.patch(
            "psutil._psplatform.cext.proc_times",
            side_effect=OSError(errno.EPERM, "msg"),
        ) as fun:
            assert psutil.Process(self.pid).create_time() == ctime
            assert fun.called

    def test_cpu_times(self):
        cpu_times_1 = psutil.Process(self.pid).cpu_times()
        with mock.patch(
            "psutil._psplatform.cext.proc_times",
            side_effect=OSError(errno.EPERM, "msg"),
        ) as fun:
            cpu_times_2 = psutil.Process(self.pid).cpu_times()
            assert fun.called
            assert abs(cpu_times_1.user - cpu_times_2.user) < 0.01
            assert abs(cpu_times_1.system - cpu_times_2.system) < 0.01

    def test_io_counters(self):
        io_counters_1 = psutil.Process(self.pid).io_counters()
        with mock.patch(
            "psutil._psplatform.cext.proc_io_counters",
            side_effect=OSError(errno.EPERM, "msg"),
        ) as fun:
            io_counters_2 = psutil.Process(self.pid).io_counters()
            for i in range(len(io_counters_1)):
                assert abs(io_counters_1[i] - io_counters_2[i]) < 5
            assert fun.called

    def test_num_handles(self):
        num_handles = psutil.Process(self.pid).num_handles()
        with mock.patch(
            "psutil._psplatform.cext.proc_num_handles",
            side_effect=OSError(errno.EPERM, "msg"),
        ) as fun:
            assert psutil.Process(self.pid).num_handles() == num_handles
            assert fun.called

    def test_cmdline(self):
        for pid in psutil.pids():
            try:
                a = cext.proc_cmdline(pid, use_peb=True)
                b = cext.proc_cmdline(pid, use_peb=False)
            except OSError as err:
                err = convert_oserror(err)
                if not isinstance(
                    err, (psutil.AccessDenied, psutil.NoSuchProcess)
                ):
                    raise
            else:
                assert a == b


@pytest.mark.skipif(not WINDOWS, reason="WINDOWS only")
class RemoteProcessTestCase(PsutilTestCase):
    """Certain functions require calling ReadProcessMemory.
    This trivially works when called on the current process.
    Check that this works on other processes, especially when they
    have a different bitness.
    """

    @staticmethod
    def find_other_interpreter():
        # find a python interpreter that is of the opposite bitness from us
        code = "import sys; sys.stdout.write(str(sys.maxsize > 2**32))"

        # XXX: a different and probably more stable approach might be to
        # access the registry, but accessing 64-bit paths from a 32-bit
        # process is not straightforward.
        for filename in glob.glob(r"C:\Python*\python.exe"):
            proc = subprocess.Popen(
                args=[filename, "-c", code],
                stdout=subprocess.PIPE,
                stderr=subprocess.STDOUT,
            )
            output, _ = proc.communicate()
            proc.wait()
            if output.decode() == str(not IS_64BIT):
                return filename

    test_args = ["-c", "import sys; sys.stdin.read()"]

    def setUp(self):
        super().setUp()

        other_python = self.find_other_interpreter()
        if other_python is None:
            pytest.skip("could not find interpreter with opposite bitness")
        if IS_64BIT:
            self.python64 = sys.executable
            self.python32 = other_python
        else:
            self.python64 = other_python
            self.python32 = sys.executable

        env = os.environ.copy()
        env["THINK_OF_A_NUMBER"] = str(os.getpid())
        self.proc32 = self.spawn_testproc(
            [self.python32] + self.test_args, env=env, stdin=subprocess.PIPE
        )
        self.proc64 = self.spawn_testproc(
            [self.python64] + self.test_args, env=env, stdin=subprocess.PIPE
        )

    def tearDown(self):
        super().tearDown()
        self.proc32.communicate()
        self.proc64.communicate()

    def test_cmdline_32(self):
        p = psutil.Process(self.proc32.pid)
        assert len(p.cmdline()) == 3
        assert p.cmdline()[1:] == self.test_args

    def test_cmdline_64(self):
        p = psutil.Process(self.proc64.pid)
        assert len(p.cmdline()) == 3
        assert p.cmdline()[1:] == self.test_args

    def test_cwd_32(self):
        p = psutil.Process(self.proc32.pid)
        assert p.cwd() == os.getcwd()

    def test_cwd_64(self):
        p = psutil.Process(self.proc64.pid)
        assert p.cwd() == os.getcwd()

    def test_environ_32(self):
        p = psutil.Process(self.proc32.pid)
        e = p.environ()
        assert "THINK_OF_A_NUMBER" in e
        assert e["THINK_OF_A_NUMBER"] == str(os.getpid())

    def test_environ_64(self):
        p = psutil.Process(self.proc64.pid)
        try:
            p.environ()
        except psutil.AccessDenied:
            pass


# ===================================================================
# Windows services
# ===================================================================


@pytest.mark.skipif(not WINDOWS, reason="WINDOWS only")
class TestServices(PsutilTestCase):
    def test_win_service_iter(self):
        valid_start_types = set(["automatic", "manual", "disabled"])
        valid_statuses = set([
            "running",
            "paused",
            "start_pending",
            "pause_pending",
            "continue_pending",
            "stop_pending",
            "stopped",
        ])
        for serv in psutil.win_service_iter():
            data = serv.as_dict()
            assert isinstance(data['name'], str)
            assert data['name'].strip()
            assert isinstance(data['display_name'], str)
            assert isinstance(data['username'], str)
            assert data['status'] in valid_statuses
            if data['pid'] is not None:
                psutil.Process(data['pid'])
            assert isinstance(data['binpath'], str)
            assert isinstance(data['start_type'], str)
            assert data['start_type'] in valid_start_types
            assert isinstance(data['description'], str)
            pid = serv.pid()
            if pid is not None:
                p = psutil.Process(pid)
                assert p.is_running()
            # win_service_get
            s = psutil.win_service_get(serv.name())
            # test __eq__
            assert serv == s

    def test_win_service_get(self):
        ERROR_SERVICE_DOES_NOT_EXIST = (
            psutil._psplatform.cext.ERROR_SERVICE_DOES_NOT_EXIST
        )
        ERROR_ACCESS_DENIED = psutil._psplatform.cext.ERROR_ACCESS_DENIED

        name = next(psutil.win_service_iter()).name()
        with pytest.raises(psutil.NoSuchProcess) as cm:
            psutil.win_service_get(name + '???')
        assert cm.value.name == name + '???'

        # test NoSuchProcess
        service = psutil.win_service_get(name)
        if PY3:
            args = (0, "msg", 0, ERROR_SERVICE_DOES_NOT_EXIST)
        else:
            args = (ERROR_SERVICE_DOES_NOT_EXIST, "msg")
        exc = WindowsError(*args)
        with mock.patch(
            "psutil._psplatform.cext.winservice_query_status", side_effect=exc
        ):
            with pytest.raises(psutil.NoSuchProcess):
                service.status()
        with mock.patch(
            "psutil._psplatform.cext.winservice_query_config", side_effect=exc
        ):
            with pytest.raises(psutil.NoSuchProcess):
                service.username()

        # test AccessDenied
        if PY3:
            args = (0, "msg", 0, ERROR_ACCESS_DENIED)
        else:
            args = (ERROR_ACCESS_DENIED, "msg")
        exc = WindowsError(*args)
        with mock.patch(
            "psutil._psplatform.cext.winservice_query_status", side_effect=exc
        ):
            with pytest.raises(psutil.AccessDenied):
                service.status()
        with mock.patch(
            "psutil._psplatform.cext.winservice_query_config", side_effect=exc
        ):
            with pytest.raises(psutil.AccessDenied):
                service.username()

        # test __str__ and __repr__
        assert service.name() in str(service)
        assert service.display_name() in str(service)
        assert service.name() in repr(service)
        assert service.display_name() in repr(service)

# psutil/tests/test_testutils.py
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

# Copyright (c) 2009, Giampaolo Rodola'. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

"""Tests for testing utils (psutil.tests namespace)."""

import collections
import contextlib
import errno
import os
import socket
import stat
import subprocess
import textwrap
import unittest
import warnings

import psutil
import psutil.tests
from psutil import FREEBSD
from psutil import NETBSD
from psutil import POSIX
from psutil._common import open_binary
from psutil._common import open_text
from psutil._common import supports_ipv6
from psutil._compat import PY3
from psutil.tests import CI_TESTING
from psutil.tests import COVERAGE
from psutil.tests import HAS_NET_CONNECTIONS_UNIX
from psutil.tests import HERE
from psutil.tests import PYTHON_EXE
from psutil.tests import PYTHON_EXE_ENV
from psutil.tests import PsutilTestCase
from psutil.tests import TestMemoryLeak
from psutil.tests import bind_socket
from psutil.tests import bind_unix_socket
from psutil.tests import call_until
from psutil.tests import chdir
from psutil.tests import create_sockets
from psutil.tests import fake_pytest
from psutil.tests import filter_proc_net_connections
from psutil.tests import get_free_port
from psutil.tests import is_namedtuple
from psutil.tests import mock
from psutil.tests import process_namespace
from psutil.tests import pytest
from psutil.tests import reap_children
from psutil.tests import retry
from psutil.tests import retry_on_failure
from psutil.tests import safe_mkdir
from psutil.tests import safe_rmpath
from psutil.tests import system_namespace
from psutil.tests import tcp_socketpair
from psutil.tests import terminate
from psutil.tests import unix_socketpair
from psutil.tests import wait_for_file
from psutil.tests import wait_for_pid


# ===================================================================
# --- Unit tests for test utilities.
# ===================================================================


class TestRetryDecorator(PsutilTestCase):
    @mock.patch('time.sleep')
    def test_retry_success(self, sleep):
        # Fail 3 times out of 5; make sure the decorated fun returns.

        @retry(retries=5, interval=1, logfun=None)
        def foo():
            while queue:
                queue.pop()
                1 / 0  # noqa
            return 1

        queue = list(range(3))
        assert foo() == 1
        assert sleep.call_count == 3

    @mock.patch('time.sleep')
    def test_retry_failure(self, sleep):
        # Fail 6 times out of 5; the function is supposed to raise.
        @retry(retries=5, interval=1, logfun=None)
        def foo():
            while queue:
                queue.pop()
                1 / 0  # noqa
            return 1

        queue = list(range(6))
        with pytest.raises(ZeroDivisionError):
            foo()
        assert sleep.call_count == 5

    @mock.patch('time.sleep')
    def test_exception_arg(self, sleep):
        @retry(exception=ValueError, interval=1)
        def foo():
            raise TypeError

        with pytest.raises(TypeError):
            foo()
        assert sleep.call_count == 0

    @mock.patch('time.sleep')
    def test_no_interval_arg(self, sleep):
        # if interval is not specified sleep is not supposed to be called

        @retry(retries=5, interval=None, logfun=None)
        def foo():
            1 / 0  # noqa

        with pytest.raises(ZeroDivisionError):
            foo()
        assert sleep.call_count == 0

    @mock.patch('time.sleep')
    def test_retries_arg(self, sleep):
        @retry(retries=5, interval=1, logfun=None)
        def foo():
            1 / 0  # noqa

        with pytest.raises(ZeroDivisionError):
            foo()
        assert sleep.call_count == 5

    @mock.patch('time.sleep')
    def test_retries_and_timeout_args(self, sleep):
        with pytest.raises(ValueError):
            retry(retries=5, timeout=1)


class TestSyncTestUtils(PsutilTestCase):
    def test_wait_for_pid(self):
        wait_for_pid(os.getpid())
        nopid = max(psutil.pids()) + 99999
        with mock.patch('psutil.tests.retry.__iter__', return_value=iter([0])):
            with pytest.raises(psutil.NoSuchProcess):
                wait_for_pid(nopid)

    def test_wait_for_file(self):
        testfn = self.get_testfn()
        with open(testfn, 'w') as f:
            f.write('foo')
        wait_for_file(testfn)
        assert not os.path.exists(testfn)

    def test_wait_for_file_empty(self):
        testfn = self.get_testfn()
        with open(testfn, 'w'):
            pass
        wait_for_file(testfn, empty=True)
        assert not os.path.exists(testfn)

    def test_wait_for_file_no_file(self):
        testfn = self.get_testfn()
        with mock.patch('psutil.tests.retry.__iter__', return_value=iter([0])):
            with pytest.raises(IOError):
                wait_for_file(testfn)

    def test_wait_for_file_no_delete(self):
        testfn = self.get_testfn()
        with open(testfn, 'w') as f:
            f.write('foo')
        wait_for_file(testfn, delete=False)
        assert os.path.exists(testfn)

    def test_call_until(self):
        call_until(lambda: 1)
        # TODO: test for timeout


class TestFSTestUtils(PsutilTestCase):
    def test_open_text(self):
        with open_text(__file__) as f:
            assert f.mode == 'r'

    def test_open_binary(self):
        with open_binary(__file__) as f:
            assert f.mode == 'rb'

    def test_safe_mkdir(self):
        testfn = self.get_testfn()
        safe_mkdir(testfn)
        assert os.path.isdir(testfn)
        safe_mkdir(testfn)
        assert os.path.isdir(testfn)

    def test_safe_rmpath(self):
        # test file is removed
        testfn = self.get_testfn()
        open(testfn, 'w').close()
        safe_rmpath(testfn)
        assert not os.path.exists(testfn)
        # test no exception if path does not exist
        safe_rmpath(testfn)
        # test dir is removed
        os.mkdir(testfn)
        safe_rmpath(testfn)
        assert not os.path.exists(testfn)
        # test other exceptions are raised
        with mock.patch(
            'psutil.tests.os.stat', side_effect=OSError(errno.EINVAL, "")
        ) as m:
            with pytest.raises(OSError):
                safe_rmpath(testfn)
            assert m.called

    def test_chdir(self):
        testfn = self.get_testfn()
        base = os.getcwd()
        os.mkdir(testfn)
        with chdir(testfn):
            assert os.getcwd() == os.path.join(base, testfn)
        assert os.getcwd() == base


class TestProcessUtils(PsutilTestCase):
    def test_reap_children(self):
        subp = self.spawn_testproc()
        p = psutil.Process(subp.pid)
        assert p.is_running()
        reap_children()
        assert not p.is_running()
        assert not psutil.tests._pids_started
        assert not psutil.tests._subprocesses_started

    def test_spawn_children_pair(self):
        child, grandchild = self.spawn_children_pair()
        assert child.pid != grandchild.pid
        assert child.is_running()
        assert grandchild.is_running()
        children = psutil.Process().children()
        assert children == [child]
        children = psutil.Process().children(recursive=True)
        assert len(children) == 2
        assert child in children
        assert grandchild in children
        assert child.ppid() == os.getpid()
        assert grandchild.ppid() == child.pid

        terminate(child)
        assert not child.is_running()
        assert grandchild.is_running()

        terminate(grandchild)
        assert not grandchild.is_running()

    @pytest.mark.skipif(not POSIX, reason="POSIX only")
    def test_spawn_zombie(self):
        _parent, zombie = self.spawn_zombie()
        assert zombie.status() == psutil.STATUS_ZOMBIE

    def test_terminate(self):
        # by subprocess.Popen
        p = self.spawn_testproc()
        terminate(p)
        self.assertPidGone(p.pid)
        terminate(p)
        # by psutil.Process
        p = psutil.Process(self.spawn_testproc().pid)
        terminate(p)
        self.assertPidGone(p.pid)
        terminate(p)
        # by psutil.Popen
        cmd = [
            PYTHON_EXE,
            "-c",
            "import time; [time.sleep(0.1) for x in range(100)];",
        ]
        p = psutil.Popen(
            cmd,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            env=PYTHON_EXE_ENV,
        )
        terminate(p)
        self.assertPidGone(p.pid)
        terminate(p)
        # by PID
        pid = self.spawn_testproc().pid
        terminate(pid)
        self.assertPidGone(pid)
        terminate(pid)
        # zombie
        if POSIX:
            parent, zombie = self.spawn_zombie()
            terminate(parent)
            terminate(zombie)
            self.assertPidGone(parent.pid)
            self.assertPidGone(zombie.pid)


class TestNetUtils(PsutilTestCase):
    def test_bind_socket(self):
        port = get_free_port()
        with contextlib.closing(bind_socket(addr=('', port))) as s:
            assert s.getsockname()[1] == port

    @pytest.mark.skipif(not POSIX, reason="POSIX only")
    def test_bind_unix_socket(self):
        name = self.get_testfn()
        sock = bind_unix_socket(name)
        with contextlib.closing(sock):
            assert sock.family == socket.AF_UNIX
            assert sock.type == socket.SOCK_STREAM
            assert sock.getsockname() == name
            assert os.path.exists(name)
            assert stat.S_ISSOCK(os.stat(name).st_mode)
        # UDP
        name = self.get_testfn()
        sock = bind_unix_socket(name, type=socket.SOCK_DGRAM)
        with contextlib.closing(sock):
            assert sock.type == socket.SOCK_DGRAM

    def test_tcp_socketpair(self):
        addr = ("127.0.0.1", get_free_port())
        server, client = tcp_socketpair(socket.AF_INET, addr=addr)
        with contextlib.closing(server):
            with contextlib.closing(client):
                # Ensure they are connected and the positions are
                # correct.
                assert server.getsockname() == addr
                assert client.getpeername() == addr
                assert client.getsockname() != addr

    @pytest.mark.skipif(not POSIX, reason="POSIX only")
    @pytest.mark.skipif(
        NETBSD or FREEBSD, reason="/var/run/log UNIX socket opened by default"
    )
    def test_unix_socketpair(self):
        p = psutil.Process()
        num_fds = p.num_fds()
        assert (
            filter_proc_net_connections(p.net_connections(kind='unix')) == []
        )
        name = self.get_testfn()
        server, client = unix_socketpair(name)
        try:
            assert os.path.exists(name)
            assert stat.S_ISSOCK(os.stat(name).st_mode)
            assert p.num_fds() - num_fds == 2
            assert (
                len(
                    filter_proc_net_connections(p.net_connections(kind='unix'))
                )
                == 2
            )
            assert server.getsockname() == name
            assert client.getpeername() == name
        finally:
            client.close()
            server.close()

    def test_create_sockets(self):
        with create_sockets() as socks:
            fams = collections.defaultdict(int)
            types = collections.defaultdict(int)
            for s in socks:
                fams[s.family] += 1
                # work around http://bugs.python.org/issue30204
                types[s.getsockopt(socket.SOL_SOCKET, socket.SO_TYPE)] += 1
            assert fams[socket.AF_INET] >= 2
            if supports_ipv6():
                assert fams[socket.AF_INET6] >= 2
            if POSIX and HAS_NET_CONNECTIONS_UNIX:
                assert fams[socket.AF_UNIX] >= 2
            assert types[socket.SOCK_STREAM] >= 2
            assert types[socket.SOCK_DGRAM] >= 2


@pytest.mark.xdist_group(name="serial")
class TestMemLeakClass(TestMemoryLeak):
    @retry_on_failure()
    def test_times(self):
        def fun():
            cnt['cnt'] += 1

        cnt = {'cnt': 0}
        self.execute(fun, times=10, warmup_times=15)
        assert cnt['cnt'] == 26

    def test_param_err(self):
        with pytest.raises(ValueError):
            self.execute(lambda: 0, times=0)
        with pytest.raises(ValueError):
            self.execute(lambda: 0, times=-1)
        with pytest.raises(ValueError):
            self.execute(lambda: 0, warmup_times=-1)
        with pytest.raises(ValueError):
            self.execute(lambda: 0, tolerance=-1)
        with pytest.raises(ValueError):
            self.execute(lambda: 0, retries=-1)

    @retry_on_failure()
    @pytest.mark.skipif(CI_TESTING, reason="skipped on CI")
    @pytest.mark.skipif(COVERAGE, reason="skipped during test coverage")
    def test_leak_mem(self):
        ls = []

        def fun(ls=ls):
            ls.append("x" * 248 * 1024)

        try:
            # will consume around 60M in total
            with pytest.raises(AssertionError, match="extra-mem"):
                self.execute(fun, times=100)
        finally:
            del ls

    def test_unclosed_files(self):
        def fun():
            f = open(__file__)
            self.addCleanup(f.close)
            box.append(f)

        box = []
        kind = "fd" if POSIX else "handle"
        with pytest.raises(AssertionError, match="unclosed " + kind):
            self.execute(fun)

    def test_tolerance(self):
        def fun():
            ls.append("x" * 24 * 1024)

        ls = []
        times = 100
        self.execute(
            fun, times=times, warmup_times=0, tolerance=200 * 1024 * 1024
        )
        assert len(ls) == times + 1

    def test_execute_w_exc(self):
        def fun_1():
            1 / 0  # noqa

        self.execute_w_exc(ZeroDivisionError, fun_1)
        with pytest.raises(ZeroDivisionError):
            self.execute_w_exc(OSError, fun_1)

        def fun_2():
            pass

        with pytest.raises(AssertionError):
            self.execute_w_exc(ZeroDivisionError, fun_2)


class TestFakePytest(PsutilTestCase):
    def run_test_class(self, klass):
        suite = unittest.TestSuite()
        suite.addTest(klass)
        runner = unittest.TextTestRunner()
        result = runner.run(suite)
        return result

    def test_raises(self):
        with fake_pytest.raises(ZeroDivisionError) as cm:
            1 / 0  # noqa
        assert isinstance(cm.value, ZeroDivisionError)

        with fake_pytest.raises(ValueError, match="foo") as cm:
            raise ValueError("foo")

        try:
            with fake_pytest.raises(ValueError, match="foo") as cm:
                raise ValueError("bar")
        except AssertionError as err:
            assert str(err) == '"foo" does not match "bar"'
        else:
            self.fail("exception not raised")

    def test_mark(self):
        @fake_pytest.mark.xdist_group(name="serial")
        def foo():
            return 1

        assert foo() == 1

        @fake_pytest.mark.xdist_group(name="serial")
        class Foo:
            def bar(self):
                return 1

        assert Foo().bar() == 1

    def test_skipif(self):
        class TestCase(unittest.TestCase):
            @fake_pytest.mark.skipif(True, reason="reason")
            def foo(self):
                assert 1 == 1  # noqa

        result = self.run_test_class(TestCase("foo"))
        assert result.wasSuccessful()
        assert len(result.skipped) == 1
        assert result.skipped[0][1] == "reason"

        class TestCase(unittest.TestCase):
            @fake_pytest.mark.skipif(False, reason="reason")
            def foo(self):
                assert 1 == 1  # noqa

        result = self.run_test_class(TestCase("foo"))
        assert result.wasSuccessful()
        assert len(result.skipped) == 0

    @pytest.mark.skipif(not PY3, reason="not PY3")
    def test_skip(self):
        class TestCase(unittest.TestCase):
            def foo(self):
                fake_pytest.skip("reason")
                assert 1 == 0  # noqa

        result = self.run_test_class(TestCase("foo"))
        assert result.wasSuccessful()
        assert len(result.skipped) == 1
        assert result.skipped[0][1] == "reason"

    def test_main(self):
        tmpdir = self.get_testfn(dir=HERE)
        os.mkdir(tmpdir)
        with open(os.path.join(tmpdir, "__init__.py"), "w"):
            pass
        with open(os.path.join(tmpdir, "test_file.py"), "w") as f:
            f.write(textwrap.dedent("""\
                import unittest

                class TestCase(unittest.TestCase):
                    def test_passed(self):
                        pass
                """).lstrip())
        with mock.patch.object(psutil.tests, "HERE", tmpdir):
            with self.assertWarnsRegex(
                UserWarning, "Fake pytest module was used"
            ):
                suite = fake_pytest.main()
                assert suite.countTestCases() == 1

    def test_warns(self):
        # success
        with fake_pytest.warns(UserWarning):
            warnings.warn("foo", UserWarning, stacklevel=1)

        # failure
        try:
            with fake_pytest.warns(UserWarning):
                warnings.warn("foo", DeprecationWarning, stacklevel=1)
        except AssertionError:
            pass
        else:
            raise self.fail("exception not raised")

        # match success
        with fake_pytest.warns(UserWarning, match="foo"):
            warnings.warn("foo", UserWarning, stacklevel=1)

        # match failure
        try:
            with fake_pytest.warns(UserWarning, match="foo"):
                warnings.warn("bar", UserWarning, stacklevel=1)
        except AssertionError:
            pass
        else:
            raise self.fail("exception not raised")


class TestTestingUtils(PsutilTestCase):
    def test_process_namespace(self):
        p = psutil.Process()
        ns = process_namespace(p)
        ns.test()
        fun = [x for x in ns.iter(ns.getters) if x[1] == 'ppid'][0][0]
        assert fun() == p.ppid()

    def test_system_namespace(self):
        ns = system_namespace()
        fun = [x for x in ns.iter(ns.getters) if x[1] == 'net_if_addrs'][0][0]
        assert fun() == psutil.net_if_addrs()


class TestOtherUtils(PsutilTestCase):
    def test_is_namedtuple(self):
        assert is_namedtuple(collections.namedtuple('foo', 'a b c')(1, 2, 3))
        assert not is_namedtuple(tuple())
# File: psutil/tests/test_unicode.py
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

# Copyright (c) 2009, Giampaolo Rodola'. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

"""Notes about unicode handling in psutil
=======================================

Starting from version 5.3.0 psutil adds unicode support, see:
https://github.com/giampaolo/psutil/issues/1040
The notes below apply to *any* API returning a string such as
process exe(), cwd() or username():

* all strings are encoded by using the OS filesystem encoding
  (sys.getfilesystemencoding()) which varies depending on the platform
  (e.g. "UTF-8" on macOS, "mbcs" on Win)
* no API call is supposed to crash with UnicodeDecodeError
* instead, in case of badly encoded data returned by the OS, the
  following error handlers are used to replace the corrupted characters in
  the string:
    * Python 3: sys.getfilesystemencodeerrors() (PY 3.6+) or
      "surrogatescape" on POSIX and "replace" on Windows
    * Python 2: "replace"
* on Python 2 all APIs return bytes (str type), never unicode
* on Python 2, you can go back to unicode by doing:

    >>> unicode(p.exe(), sys.getdefaultencoding(), errors="replace")

For a detailed explanation of how psutil handles unicode see #1040.

Tests
=====

List of APIs returning or dealing with a string ('not tested' means
they are not tested to deal with non-ASCII strings):

* Process.cmdline()
* Process.cwd()
* Process.environ()
* Process.exe()
* Process.memory_maps()
* Process.name()
* Process.net_connections('unix')
* Process.open_files()
* Process.username()             (not tested)

* disk_io_counters()             (not tested)
* disk_partitions()              (not tested)
* disk_usage(str)
* net_connections('unix')
* net_if_addrs()                 (not tested)
* net_if_stats()                 (not tested)
* net_io_counters()              (not tested)
* sensors_fans()                 (not tested)
* sensors_temperatures()         (not tested)
* users()                        (not tested)

* WindowsService.binpath()       (not tested)
* WindowsService.description()   (not tested)
* WindowsService.display_name()  (not tested)
* WindowsService.name()          (not tested)
* WindowsService.status()        (not tested)
* WindowsService.username()      (not tested)

In here we create a unicode path with a funky non-ASCII name and (where
possible) make psutil return it back (e.g. on name(), exe(), open_files(),
etc.) and make sure that:

* psutil never crashes with UnicodeDecodeError
* the returned path matches
"""

import os
import shutil
import traceback
import warnings
from contextlib import closing

import psutil
from psutil import BSD
from psutil import POSIX
from psutil import WINDOWS
from psutil._compat import PY3
from psutil._compat import super
from psutil.tests import APPVEYOR
from psutil.tests import ASCII_FS
from psutil.tests import CI_TESTING
from psutil.tests import HAS_ENVIRON
from psutil.tests import HAS_MEMORY_MAPS
from psutil.tests import HAS_NET_CONNECTIONS_UNIX
from psutil.tests import INVALID_UNICODE_SUFFIX
from psutil.tests import PYPY
from psutil.tests import TESTFN_PREFIX
from psutil.tests import UNICODE_SUFFIX
from psutil.tests import PsutilTestCase
from psutil.tests import bind_unix_socket
from psutil.tests import chdir
from psutil.tests import copyload_shared_lib
from psutil.tests import create_py_exe
from psutil.tests import get_testfn
from psutil.tests import pytest
from psutil.tests import safe_mkdir
from psutil.tests import safe_rmpath
from psutil.tests import skip_on_access_denied
from psutil.tests import spawn_testproc
from psutil.tests import terminate


if APPVEYOR:

    def safe_rmpath(path):  # NOQA
        # TODO - this is quite random and I'm not sure why it happens,
        # nor can I reproduce it locally:
        # https://ci.appveyor.com/project/giampaolo/psutil/build/job/
        #     jiq2cgd6stsbtn60
        # safe_rmpath() happens after reap_children() so this is weird
        # Perhaps wait_procs() on Windows is broken? Maybe because
        # of STILL_ACTIVE?
        # https://github.com/giampaolo/psutil/blob/
        #     68c7a70728a31d8b8b58f4be6c4c0baa2f449eda/psutil/arch/
        #     windows/process_info.c#L146
        from psutil.tests import safe_rmpath as rm

        try:
            return rm(path)
        except WindowsError:
            traceback.print_exc()


def try_unicode(suffix):
    """Return True if both the fs and the subprocess module can
    deal with a unicode file name.
    """
    sproc = None
    testfn = get_testfn(suffix=suffix)
    try:
        safe_rmpath(testfn)
        create_py_exe(testfn)
        sproc = spawn_testproc(cmd=[testfn])
        shutil.copyfile(testfn, testfn + '-2')
        safe_rmpath(testfn + '-2')
    except (UnicodeEncodeError, IOError):
        return False
    else:
        return True
    finally:
        if sproc is not None:
            terminate(sproc)
        safe_rmpath(testfn)


# ===================================================================
# FS APIs
# ===================================================================


class BaseUnicodeTest(PsutilTestCase):
    funky_suffix = None

    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        cls.skip_tests = False
        cls.funky_name = None
        if cls.funky_suffix is not None:
            if not try_unicode(cls.funky_suffix):
                cls.skip_tests = True
            else:
                cls.funky_name = get_testfn(suffix=cls.funky_suffix)
                create_py_exe(cls.funky_name)

    def setUp(self):
        super().setUp()
        if self.skip_tests:
            raise pytest.skip("can't handle unicode str")


@pytest.mark.xdist_group(name="serial")
@pytest.mark.skipif(ASCII_FS, reason="ASCII fs")
@pytest.mark.skipif(PYPY and not PY3, reason="too much trouble on PYPY2")
class TestFSAPIs(BaseUnicodeTest):
    """Test FS APIs with a funky, valid, UTF8 path name."""

    funky_suffix = UNICODE_SUFFIX

    def expect_exact_path_match(self):
        # Do not expect psutil to correctly handle unicode paths on
        # Python 2 if os.listdir() is not able either.
        here = '.' if isinstance(self.funky_name, str) else u'.'
        with warnings.catch_warnings():
            warnings.simplefilter("ignore")
            return self.funky_name in os.listdir(here)

    # ---

    def test_proc_exe(self):
        cmd = [
            self.funky_name,
            "-c",
            "import time; [time.sleep(0.1) for x in range(100)]",
        ]
        subp = self.spawn_testproc(cmd)
        p = psutil.Process(subp.pid)
        exe = p.exe()
        assert isinstance(exe, str)
        if self.expect_exact_path_match():
            assert os.path.normcase(exe) == os.path.normcase(self.funky_name)

    def test_proc_name(self):
        cmd = [
            self.funky_name,
            "-c",
            "import time; [time.sleep(0.1) for x in range(100)]",
        ]
        subp = self.spawn_testproc(cmd)
        name = psutil.Process(subp.pid).name()
        assert isinstance(name, str)
        if self.expect_exact_path_match():
            assert name == os.path.basename(self.funky_name)

    def test_proc_cmdline(self):
        cmd = [
            self.funky_name,
            "-c",
            "import time; [time.sleep(0.1) for x in range(100)]",
        ]
        subp = self.spawn_testproc(cmd)
        p = psutil.Process(subp.pid)
        cmdline = p.cmdline()
        for part in cmdline:
            assert isinstance(part, str)
        if self.expect_exact_path_match():
            assert cmdline == cmd

    def test_proc_cwd(self):
        dname = self.funky_name + "2"
        self.addCleanup(safe_rmpath, dname)
        safe_mkdir(dname)
        with chdir(dname):
            p = psutil.Process()
            cwd = p.cwd()
        assert isinstance(p.cwd(), str)
        if self.expect_exact_path_match():
            assert cwd == dname

    @pytest.mark.skipif(PYPY and WINDOWS, reason="fails on PYPY + WINDOWS")
    def test_proc_open_files(self):
        p = psutil.Process()
        start = set(p.open_files())
        with open(self.funky_name, 'rb'):
            new = set(p.open_files())
        path = (new - start).pop().path
        assert isinstance(path, str)
        if BSD and not path:
            # XXX - see https://github.com/giampaolo/psutil/issues/595
            raise pytest.skip("open_files on BSD is broken")
        if self.expect_exact_path_match():
            assert os.path.normcase(path) == os.path.normcase(self.funky_name)

    @pytest.mark.skipif(not POSIX, reason="POSIX only")
    def test_proc_net_connections(self):
        name = self.get_testfn(suffix=self.funky_suffix)
        try:
            sock = bind_unix_socket(name)
        except UnicodeEncodeError:
            if PY3:
                raise
            else:
                raise pytest.skip("not supported")
        with closing(sock):
            conn = psutil.Process().net_connections('unix')[0]
            assert isinstance(conn.laddr, str)
            assert conn.laddr == name

    @pytest.mark.skipif(not POSIX, reason="POSIX only")
    @pytest.mark.skipif(
        not HAS_NET_CONNECTIONS_UNIX, reason="can't list UNIX sockets"
    )
    @skip_on_access_denied()
    def test_net_connections(self):
        def find_sock(cons):
            for conn in cons:
                if os.path.basename(conn.laddr).startswith(TESTFN_PREFIX):
                    return conn
            raise ValueError("connection not found")

        name = self.get_testfn(suffix=self.funky_suffix)
        try:
            sock = bind_unix_socket(name)
        except UnicodeEncodeError:
            if PY3:
                raise
            else:
                raise pytest.skip("not supported")
        with closing(sock):
            cons = psutil.net_connections(kind='unix')
            conn = find_sock(cons)
            assert isinstance(conn.laddr, str)
            assert conn.laddr == name

    def test_disk_usage(self):
        dname = self.funky_name + "2"
        self.addCleanup(safe_rmpath, dname)
        safe_mkdir(dname)
        psutil.disk_usage(dname)

    @pytest.mark.skipif(not HAS_MEMORY_MAPS, reason="not supported")
    @pytest.mark.skipif(
        not PY3, reason="ctypes does not support unicode on PY2"
    )
    @pytest.mark.skipif(PYPY, reason="unstable on PYPY")
    def test_memory_maps(self):
        # XXX: on Python 2, using ctypes.CDLL with a unicode path
        # opens a message box which blocks the test run.
        with copyload_shared_lib(suffix=self.funky_suffix) as funky_path:

            def normpath(p):
                return os.path.realpath(os.path.normcase(p))

            libpaths = [
                normpath(x.path) for x in psutil.Process().memory_maps()
            ]
            # ...just to have a clearer msg in case of failure
            libpaths = [x for x in libpaths if TESTFN_PREFIX in x]
            assert normpath(funky_path) in libpaths
            for path in libpaths:
                assert isinstance(path, str)


@pytest.mark.skipif(CI_TESTING, reason="unreliable on CI")
class TestFSAPIsWithInvalidPath(TestFSAPIs):
    """Test FS APIs with a funky, invalid path name."""

    funky_suffix = INVALID_UNICODE_SUFFIX

    def expect_exact_path_match(self):
        # Invalid unicode names are supposed to work on Python 2.
        return True


# ===================================================================
# Non fs APIs
# ===================================================================


class TestNonFSAPIS(BaseUnicodeTest):
    """Unicode tests for non fs-related APIs."""

    funky_suffix = UNICODE_SUFFIX if PY3 else 'è'

    @pytest.mark.skipif(not HAS_ENVIRON, reason="not supported")
    @pytest.mark.skipif(PYPY and WINDOWS, reason="segfaults on PYPY + WINDOWS")
    def test_proc_environ(self):
        # Note: unlike the others, this test does not deal with fs
        # paths. On Python 2 the subprocess module is broken, as it
        # can't handle non-ASCII env vars, so we use "è", which is
        # part of the extended ASCII table (unicode code point <= 255).
        env = os.environ.copy()
        env['FUNNY_ARG'] = self.funky_suffix
        sproc = self.spawn_testproc(env=env)
        p = psutil.Process(sproc.pid)
        env = p.environ()
        for k, v in env.items():
            assert isinstance(k, str)
            assert isinstance(v, str)
        assert env['FUNNY_ARG'] == self.funky_suffix
# File: psutil/tests/test_linux.py
#!/usr/bin/env python3

# Copyright (c) 2009, Giampaolo Rodola'. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

"""Linux specific tests."""

from __future__ import division

import collections
import contextlib
import errno
import glob
import io
import os
import re
import shutil
import socket
import struct
import textwrap
import time
import warnings

import psutil
from psutil import LINUX
from psutil._compat import PY3
from psutil._compat import FileNotFoundError
from psutil._compat import basestring
from psutil.tests import AARCH64
from psutil.tests import GITHUB_ACTIONS
from psutil.tests import GLOBAL_TIMEOUT
from psutil.tests import HAS_BATTERY
from psutil.tests import HAS_CPU_FREQ
from psutil.tests import HAS_GETLOADAVG
from psutil.tests import HAS_RLIMIT
from psutil.tests import PYPY
from psutil.tests import PYTEST_PARALLEL
from psutil.tests import QEMU_USER
from psutil.tests import TOLERANCE_DISK_USAGE
from psutil.tests import TOLERANCE_SYS_MEM
from psutil.tests import PsutilTestCase
from psutil.tests import ThreadTask
from psutil.tests import call_until
from psutil.tests import mock
from psutil.tests import pytest
from psutil.tests import reload_module
from psutil.tests import retry_on_failure
from psutil.tests import safe_rmpath
from psutil.tests import sh
from psutil.tests import skip_on_not_implemented
from psutil.tests import which


if LINUX:
    from psutil._pslinux import CLOCK_TICKS
    from psutil._pslinux import RootFsDeviceFinder
    from psutil._pslinux import calculate_avail_vmem
    from psutil._pslinux import open_binary


HERE = os.path.abspath(os.path.dirname(__file__))
SIOCGIFADDR = 0x8915
SIOCGIFCONF = 0x8912
SIOCGIFHWADDR = 0x8927
SIOCGIFNETMASK = 0x891B
SIOCGIFBRDADDR = 0x8919
if LINUX:
    SECTOR_SIZE = 512
EMPTY_TEMPERATURES = not glob.glob('/sys/class/hwmon/hwmon*')


# =====================================================================
# --- utils
# =====================================================================


def get_ipv4_address(ifname):
    import fcntl

    ifname = ifname[:15]
    if PY3:
        ifname = bytes(ifname, 'ascii')
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    with contextlib.closing(s):
        return socket.inet_ntoa(
            fcntl.ioctl(s.fileno(), SIOCGIFADDR, struct.pack('256s', ifname))[
                20:24
            ]
        )


def get_ipv4_netmask(ifname):
    import fcntl

    ifname = ifname[:15]
    if PY3:
        ifname = bytes(ifname, 'ascii')
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    with contextlib.closing(s):
        return socket.inet_ntoa(
            fcntl.ioctl(
                s.fileno(), SIOCGIFNETMASK, struct.pack('256s', ifname)
            )[20:24]
        )


def get_ipv4_broadcast(ifname):
    import fcntl

    ifname = ifname[:15]
    if PY3:
        ifname = bytes(ifname, 'ascii')
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    with contextlib.closing(s):
        return socket.inet_ntoa(
            fcntl.ioctl(
                s.fileno(), SIOCGIFBRDADDR, struct.pack('256s', ifname)
            )[20:24]
        )


def get_ipv6_addresses(ifname):
    with open("/proc/net/if_inet6") as f:
        all_fields = []
        for line in f:
            fields = line.split()
            if fields[-1] == ifname:
                all_fields.append(fields)

        if len(all_fields) == 0:
            raise ValueError("could not find interface %r" % ifname)

    for i in range(len(all_fields)):
        unformatted = all_fields[i][0]
        groups = []
        for j in range(0, len(unformatted), 4):
            groups.append(unformatted[j : j + 4])
        formatted = ":".join(groups)
        packed = socket.inet_pton(socket.AF_INET6, formatted)
        all_fields[i] = socket.inet_ntop(socket.AF_INET6, packed)
    return all_fields
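The normalization performed above can be sketched with a hypothetical /proc/net/if_inet6 entry: the file stores each address as 32 contiguous hex digits, and splitting into 4-digit groups then round-tripping through inet_pton/inet_ntop yields the canonical compressed form.

```python
import socket

unformatted = "fe800000000000000000000000000001"  # hypothetical entry
groups = [unformatted[j : j + 4] for j in range(0, len(unformatted), 4)]
formatted = ":".join(groups)
packed = socket.inet_pton(socket.AF_INET6, formatted)
# inet_ntop compresses runs of zero groups into "::"
assert socket.inet_ntop(socket.AF_INET6, packed) == "fe80::1"
```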


def get_mac_address(ifname):
    import fcntl

    ifname = ifname[:15]
    if PY3:
        ifname = bytes(ifname, 'ascii')
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    with contextlib.closing(s):
        info = fcntl.ioctl(
            s.fileno(), SIOCGIFHWADDR, struct.pack('256s', ifname)
        )
        if PY3:

            def ord(x):
                return x

        else:
            import __builtin__

            ord = __builtin__.ord
        return ''.join(['%02x:' % ord(char) for char in info[18:24]])[:-1]
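The MAC formatting above can be sketched in isolation, assuming Python 3 where iterating over bytes yields ints: six raw hardware-address bytes are joined as colon-separated lowercase hex pairs.

```python
raw = b"\x00\x1a\x2b\x3c\x4d\x5e"  # hypothetical SIOCGIFHWADDR payload
mac = ":".join("%02x" % byte for byte in raw)
assert mac == "00:1a:2b:3c:4d:5e"
```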


def free_swap():
    """Parse 'free' cmd and return swap memory's s total, used and free
    values.
    """
    out = sh(["free", "-b"], env={"LANG": "C.UTF-8"})
    lines = out.split('\n')
    for line in lines:
        if line.startswith('Swap'):
            _, total, used, free = line.split()
            nt = collections.namedtuple('free', 'total used free')
            return nt(int(total), int(used), int(free))
    raise ValueError(
        "can't find 'Swap' in 'free' output:\n%s" % '\n'.join(lines)
    )


def free_physmem():
    """Parse 'free' cmd and return physical memory's total, used
    and free values.
    """
    # Note: 'free' can have 2 different output formats; the 'shared'
    # and 'cached' columns may be in different positions, so we do
    # not return them.
    # https://github.com/giampaolo/psutil/issues/538#issuecomment-57059946
    out = sh(["free", "-b"], env={"LANG": "C.UTF-8"})
    lines = out.split('\n')
    for line in lines:
        if line.startswith('Mem'):
            total, used, free, shared = (int(x) for x in line.split()[1:5])
            nt = collections.namedtuple(
                'free', 'total used free shared output'
            )
            return nt(total, used, free, shared, out)
    raise ValueError(
        "can't find 'Mem' in 'free' output:\n%s" % '\n'.join(lines)
    )


def vmstat(stat):
    out = sh(["vmstat", "-s"], env={"LANG": "C.UTF-8"})
    for line in out.split("\n"):
        line = line.strip()
        if stat in line:
            return int(line.split(' ')[0])
    raise ValueError("can't find %r in 'vmstat' output" % stat)


def get_free_version_info():
    out = sh(["free", "-V"]).strip()
    if 'UNKNOWN' in out:
        raise pytest.skip("can't determine free version")
    return tuple(map(int, re.findall(r'\d+', out.split()[-1])))
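The version parsing above can be sketched against a hypothetical `free -V` output string: the last whitespace-separated token holds the dotted version, and the runs of digits become an int tuple.

```python
import re

out = "free from procps-ng 3.3.17"  # hypothetical `free -V` output
version = tuple(map(int, re.findall(r"\d+", out.split()[-1])))
assert version == (3, 3, 17)
```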


@contextlib.contextmanager
def mock_open_content(pairs):
    """Mock open() builtin and forces it to return a certain content
    for a given path. `pairs` is a {"path": "content", ...} dict.
    """

    def open_mock(name, *args, **kwargs):
        if name in pairs:
            content = pairs[name]
            if PY3:
                if isinstance(content, basestring):
                    return io.StringIO(content)
                else:
                    return io.BytesIO(content)
            else:
                return io.BytesIO(content)
        else:
            return orig_open(name, *args, **kwargs)

    orig_open = open
    patch_point = 'builtins.open' if PY3 else '__builtin__.open'
    with mock.patch(patch_point, create=True, side_effect=open_mock) as m:
        yield m


@contextlib.contextmanager
def mock_open_exception(for_path, exc):
    """Mock open() builtin and raises `exc` if the path being opened
    matches `for_path`.
    """

    def open_mock(name, *args, **kwargs):
        if name == for_path:
            raise exc
        else:
            return orig_open(name, *args, **kwargs)

    orig_open = open
    patch_point = 'builtins.open' if PY3 else '__builtin__.open'
    with mock.patch(patch_point, create=True, side_effect=open_mock) as m:
        yield m


# =====================================================================
# --- system virtual memory
# =====================================================================


@pytest.mark.skipif(not LINUX, reason="LINUX only")
class TestSystemVirtualMemoryAgainstFree(PsutilTestCase):
    def test_total(self):
        cli_value = free_physmem().total
        psutil_value = psutil.virtual_memory().total
        assert cli_value == psutil_value

    @retry_on_failure()
    def test_used(self):
        # Older versions of procps used slab memory to calculate used memory.
        # This got changed in:
        # https://gitlab.com/procps-ng/procps/commit/
        #     05d751c4f076a2f0118b914c5e51cfbb4762ad8e
        # Newer versions of procps are using yet another way to compute used
        # memory.
        # https://gitlab.com/procps-ng/procps/commit/
        #     2184e90d2e7cdb582f9a5b706b47015e56707e4d
        if get_free_version_info() < (3, 3, 12):
            raise pytest.skip("free version too old")
        if get_free_version_info() >= (4, 0, 0):
            raise pytest.skip("free version too recent")
        cli_value = free_physmem().used
        psutil_value = psutil.virtual_memory().used
        assert abs(cli_value - psutil_value) < TOLERANCE_SYS_MEM

    @retry_on_failure()
    def test_free(self):
        cli_value = free_physmem().free
        psutil_value = psutil.virtual_memory().free
        assert abs(cli_value - psutil_value) < TOLERANCE_SYS_MEM

    @retry_on_failure()
    def test_shared(self):
        free = free_physmem()
        free_value = free.shared
        if free_value == 0:
            raise pytest.skip("free does not support 'shared' column")
        psutil_value = psutil.virtual_memory().shared
        assert (
            abs(free_value - psutil_value) < TOLERANCE_SYS_MEM
        ), '%s %s \n%s' % (free_value, psutil_value, free.output)

    @retry_on_failure()
    def test_available(self):
        # "free" output format has changed at some point:
        # https://github.com/giampaolo/psutil/issues/538#issuecomment-147192098
        out = sh(["free", "-b"])
        lines = out.split('\n')
        if 'available' not in lines[0]:
            raise pytest.skip("free does not support 'available' column")
        else:
            free_value = int(lines[1].split()[-1])
            psutil_value = psutil.virtual_memory().available
            assert (
                abs(free_value - psutil_value) < TOLERANCE_SYS_MEM
            ), '%s %s \n%s' % (free_value, psutil_value, out)


@pytest.mark.skipif(not LINUX, reason="LINUX only")
class TestSystemVirtualMemoryAgainstVmstat(PsutilTestCase):
    def test_total(self):
        vmstat_value = vmstat('total memory') * 1024
        psutil_value = psutil.virtual_memory().total
        assert abs(vmstat_value - psutil_value) < TOLERANCE_SYS_MEM

    @retry_on_failure()
    def test_used(self):
        # Older versions of procps used slab memory to calculate used memory.
        # This got changed in:
        # https://gitlab.com/procps-ng/procps/commit/
        #     05d751c4f076a2f0118b914c5e51cfbb4762ad8e
        # Newer versions of procps are using yet another way to compute used
        # memory.
        # https://gitlab.com/procps-ng/procps/commit/
        #     2184e90d2e7cdb582f9a5b706b47015e56707e4d
        if get_free_version_info() < (3, 3, 12):
            raise pytest.skip("free version too old")
        if get_free_version_info() >= (4, 0, 0):
            raise pytest.skip("free version too recent")
        vmstat_value = vmstat('used memory') * 1024
        psutil_value = psutil.virtual_memory().used
        assert abs(vmstat_value - psutil_value) < TOLERANCE_SYS_MEM

    @retry_on_failure()
    def test_free(self):
        vmstat_value = vmstat('free memory') * 1024
        psutil_value = psutil.virtual_memory().free
        assert abs(vmstat_value - psutil_value) < TOLERANCE_SYS_MEM

    @retry_on_failure()
    def test_buffers(self):
        vmstat_value = vmstat('buffer memory') * 1024
        psutil_value = psutil.virtual_memory().buffers
        assert abs(vmstat_value - psutil_value) < TOLERANCE_SYS_MEM

    @retry_on_failure()
    def test_active(self):
        vmstat_value = vmstat('active memory') * 1024
        psutil_value = psutil.virtual_memory().active
        assert abs(vmstat_value - psutil_value) < TOLERANCE_SYS_MEM

    @retry_on_failure()
    def test_inactive(self):
        vmstat_value = vmstat('inactive memory') * 1024
        psutil_value = psutil.virtual_memory().inactive
        assert abs(vmstat_value - psutil_value) < TOLERANCE_SYS_MEM


@pytest.mark.skipif(not LINUX, reason="LINUX only")
class TestSystemVirtualMemoryMocks(PsutilTestCase):
    def test_warnings_on_misses(self):
        # Emulate a case where /proc/meminfo provides little info.
        # psutil is supposed to set the missing fields to 0 and
        # raise a warning.
        content = textwrap.dedent("""\
            Active(anon):    6145416 kB
            Active(file):    2950064 kB
            Inactive(anon):   574764 kB
            Inactive(file):  1567648 kB
            MemAvailable:         -1 kB
            MemFree:         2057400 kB
            MemTotal:       16325648 kB
            SReclaimable:     346648 kB
            """).encode()
        with mock_open_content({'/proc/meminfo': content}) as m:
            with warnings.catch_warnings(record=True) as ws:
                warnings.simplefilter("always")
                ret = psutil.virtual_memory()
                assert m.called
                assert len(ws) == 1
                w = ws[0]
                assert "memory stats couldn't be determined" in str(w.message)
                assert "cached" in str(w.message)
                assert "shared" in str(w.message)
                assert "active" in str(w.message)
                assert "inactive" in str(w.message)
                assert "buffers" in str(w.message)
                assert "available" in str(w.message)
                assert ret.cached == 0
                assert ret.active == 0
                assert ret.inactive == 0
                assert ret.shared == 0
                assert ret.buffers == 0
                assert ret.available == 0
                assert ret.slab == 0

    @retry_on_failure()
    def test_avail_old_percent(self):
        # Make sure that our calculation of avail mem for old kernels
        # is off by max 15%.
        mems = {}
        with open_binary('/proc/meminfo') as f:
            for line in f:
                fields = line.split()
                mems[fields[0]] = int(fields[1]) * 1024

        a = calculate_avail_vmem(mems)
        if b'MemAvailable:' in mems:
            b = mems[b'MemAvailable:']
            diff_percent = abs(a - b) / a * 100
            assert diff_percent < 15

    def test_avail_old_comes_from_kernel(self):
        # Make sure "MemAvailable:" coluimn is used instead of relying
        # on our internal algorithm to calculate avail mem.
        content = textwrap.dedent("""\
            Active:          9444728 kB
            Active(anon):    6145416 kB
            Active(file):    2950064 kB
            Buffers:          287952 kB
            Cached:          4818144 kB
            Inactive(file):  1578132 kB
            Inactive(anon):   574764 kB
            Inactive(file):  1567648 kB
            MemAvailable:    6574984 kB
            MemFree:         2057400 kB
            MemTotal:       16325648 kB
            Shmem:            577588 kB
            SReclaimable:     346648 kB
            """).encode()
        with mock_open_content({'/proc/meminfo': content}) as m:
            with warnings.catch_warnings(record=True) as ws:
                ret = psutil.virtual_memory()
            assert m.called
            assert ret.available == 6574984 * 1024
            w = ws[0]
            assert "inactive memory stats couldn't be determined" in str(
                w.message
            )

    def test_avail_old_missing_fields(self):
        # Remove Active(file), Inactive(file) and SReclaimable
        # from /proc/meminfo and make sure the fallback is used
        # (free + cached).
        content = textwrap.dedent("""\
            Active:          9444728 kB
            Active(anon):    6145416 kB
            Buffers:          287952 kB
            Cached:          4818144 kB
            Inactive(file):  1578132 kB
            Inactive(anon):   574764 kB
            MemFree:         2057400 kB
            MemTotal:       16325648 kB
            Shmem:            577588 kB
            """).encode()
        with mock_open_content({"/proc/meminfo": content}) as m:
            with warnings.catch_warnings(record=True) as ws:
                ret = psutil.virtual_memory()
            assert m.called
            assert ret.available == 2057400 * 1024 + 4818144 * 1024
            w = ws[0]
            assert "inactive memory stats couldn't be determined" in str(
                w.message
            )

    def test_avail_old_missing_zoneinfo(self):
        # Remove /proc/zoneinfo file. Make sure fallback is used
        # (free + cached).
        content = textwrap.dedent("""\
            Active:          9444728 kB
            Active(anon):    6145416 kB
            Active(file):    2950064 kB
            Buffers:          287952 kB
            Cached:          4818144 kB
            Inactive(file):  1578132 kB
            Inactive(anon):   574764 kB
            Inactive(file):  1567648 kB
            MemFree:         2057400 kB
            MemTotal:       16325648 kB
            Shmem:            577588 kB
            SReclaimable:     346648 kB
            """).encode()
        with mock_open_content({"/proc/meminfo": content}):
            with mock_open_exception(
                "/proc/zoneinfo",
                IOError(errno.ENOENT, 'no such file or directory'),
            ):
                with warnings.catch_warnings(record=True) as ws:
                    ret = psutil.virtual_memory()
                    assert ret.available == 2057400 * 1024 + 4818144 * 1024
                    w = ws[0]
                    assert (
                        "inactive memory stats couldn't be determined"
                        in str(w.message)
                    )

    def test_virtual_memory_mocked(self):
        # Emulate /proc/meminfo because neither vmstat nor free return slab.
        content = textwrap.dedent("""\
            MemTotal:              100 kB
            MemFree:               2 kB
            MemAvailable:          3 kB
            Buffers:               4 kB
            Cached:                5 kB
            SwapCached:            6 kB
            Active:                7 kB
            Inactive:              8 kB
            Active(anon):          9 kB
            Inactive(anon):        10 kB
            Active(file):          11 kB
            Inactive(file):        12 kB
            Unevictable:           13 kB
            Mlocked:               14 kB
            SwapTotal:             15 kB
            SwapFree:              16 kB
            Dirty:                 17 kB
            Writeback:             18 kB
            AnonPages:             19 kB
            Mapped:                20 kB
            Shmem:                 21 kB
            Slab:                  22 kB
            SReclaimable:          23 kB
            SUnreclaim:            24 kB
            KernelStack:           25 kB
            PageTables:            26 kB
            NFS_Unstable:          27 kB
            Bounce:                28 kB
            WritebackTmp:          29 kB
            CommitLimit:           30 kB
            Committed_AS:          31 kB
            VmallocTotal:          32 kB
            VmallocUsed:           33 kB
            VmallocChunk:          34 kB
            HardwareCorrupted:     35 kB
            AnonHugePages:         36 kB
            ShmemHugePages:        37 kB
            ShmemPmdMapped:        38 kB
            CmaTotal:              39 kB
            CmaFree:               40 kB
            HugePages_Total:       41 kB
            HugePages_Free:        42 kB
            HugePages_Rsvd:        43 kB
            HugePages_Surp:        44 kB
            Hugepagesize:          45 kB
            DirectMap4k:           46 kB
            DirectMap2M:           47 kB
            DirectMap1G:           48 kB
            """).encode()
        with mock_open_content({"/proc/meminfo": content}) as m:
            mem = psutil.virtual_memory()
            assert m.called
            assert mem.total == 100 * 1024
            assert mem.free == 2 * 1024
            assert mem.buffers == 4 * 1024
            # cached mem also includes reclaimable memory
            assert mem.cached == (5 + 23) * 1024
            assert mem.shared == 21 * 1024
            assert mem.active == 7 * 1024
            assert mem.inactive == 8 * 1024
            assert mem.slab == 22 * 1024
            assert mem.available == 3 * 1024
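

# The virtual-memory tests above feed fake "/proc/meminfo" content to
# psutil. A standalone sketch of the parsing they exercise (a hypothetical
# helper for illustration only, not psutil's actual implementation): each
# "Field:  <n> kB" line becomes a byte count.
def _parse_meminfo_kb(content):
    fields = {}
    for line in content.decode().splitlines():
        name, value = line.split(':', 1)
        # values are expressed in kibibytes, e.g. "MemTotal:  100 kB"
        fields[name.strip()] = int(value.split()[0]) * 1024
    return fields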


# =====================================================================
# --- system swap memory
# =====================================================================


@pytest.mark.skipif(not LINUX, reason="LINUX only")
class TestSystemSwapMemory(PsutilTestCase):
    @staticmethod
    def meminfo_has_swap_info():
        """Return True if /proc/meminfo provides swap metrics."""
        with open("/proc/meminfo") as f:
            data = f.read()
        return 'SwapTotal:' in data and 'SwapFree:' in data

    def test_total(self):
        free_value = free_swap().total
        psutil_value = psutil.swap_memory().total
        assert abs(free_value - psutil_value) < TOLERANCE_SYS_MEM

    @retry_on_failure()
    def test_used(self):
        free_value = free_swap().used
        psutil_value = psutil.swap_memory().used
        assert abs(free_value - psutil_value) < TOLERANCE_SYS_MEM

    @retry_on_failure()
    def test_free(self):
        free_value = free_swap().free
        psutil_value = psutil.swap_memory().free
        assert abs(free_value - psutil_value) < TOLERANCE_SYS_MEM

    def test_missing_sin_sout(self):
        with mock.patch('psutil._common.open', create=True) as m:
            with warnings.catch_warnings(record=True) as ws:
                warnings.simplefilter("always")
                ret = psutil.swap_memory()
                assert m.called
                assert len(ws) == 1
                w = ws[0]
                assert (
                    "'sin' and 'sout' swap memory stats couldn't be determined"
                    in str(w.message)
                )
                assert ret.sin == 0
                assert ret.sout == 0

    def test_no_vmstat_mocked(self):
        # see https://github.com/giampaolo/psutil/issues/722
        with mock_open_exception(
            "/proc/vmstat", IOError(errno.ENOENT, 'no such file or directory')
        ) as m:
            with warnings.catch_warnings(record=True) as ws:
                warnings.simplefilter("always")
                ret = psutil.swap_memory()
                assert m.called
                assert len(ws) == 1
                w = ws[0]
                assert (
                    "'sin' and 'sout' swap memory stats couldn't "
                    "be determined and were set to 0"
                    in str(w.message)
                )
                assert ret.sin == 0
                assert ret.sout == 0

    def test_meminfo_against_sysinfo(self):
        # Make sure the content of /proc/meminfo about swap memory
        # matches sysinfo() syscall, see:
        # https://github.com/giampaolo/psutil/issues/1015
        if not self.meminfo_has_swap_info():
            raise pytest.skip("/proc/meminfo has no swap metrics")
        with mock.patch('psutil._pslinux.cext.linux_sysinfo') as m:
            swap = psutil.swap_memory()
        assert not m.called
        import psutil._psutil_linux as cext

        _, _, _, _, total, free, unit_multiplier = cext.linux_sysinfo()
        total *= unit_multiplier
        free *= unit_multiplier
        assert swap.total == total
        assert abs(swap.free - free) < TOLERANCE_SYS_MEM

    def test_emulate_meminfo_has_no_metrics(self):
        # Emulate a case where /proc/meminfo provides no swap metrics
        # in which case sysinfo() syscall is supposed to be used
        # as a fallback.
        with mock_open_content({"/proc/meminfo": b""}) as m:
            psutil.swap_memory()
            assert m.called


# =====================================================================
# --- system CPU
# =====================================================================


@pytest.mark.skipif(not LINUX, reason="LINUX only")
class TestSystemCPUTimes(PsutilTestCase):
    def test_fields(self):
        fields = psutil.cpu_times()._fields
        kernel_ver = re.findall(r'\d+\.\d+\.\d+', os.uname()[2])[0]
        kernel_ver_info = tuple(map(int, kernel_ver.split('.')))
        if kernel_ver_info >= (2, 6, 11):
            assert 'steal' in fields
        else:
            assert 'steal' not in fields
        if kernel_ver_info >= (2, 6, 24):
            assert 'guest' in fields
        else:
            assert 'guest' not in fields
        if kernel_ver_info >= (3, 2, 0):
            assert 'guest_nice' in fields
        else:
            assert 'guest_nice' not in fields
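

# test_fields above derives a (major, minor, patch) tuple from the kernel
# release string in order to decide which CPU-time fields should exist.
# A standalone sketch of that step (hypothetical helper; the real test
# inlines this logic):
def _kernel_version_tuple(release):
    import re
    # e.g. "5.14.0-570.12.1.el9_6.x86_64" -> (5, 14, 0); only the leading
    # x.y.z portion of the release string is used.
    ver = re.findall(r'\d+\.\d+\.\d+', release)[0]
    return tuple(map(int, ver.split('.')))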


@pytest.mark.skipif(not LINUX, reason="LINUX only")
class TestSystemCPUCountLogical(PsutilTestCase):
    @pytest.mark.skipif(
        not os.path.exists("/sys/devices/system/cpu/online"),
        reason="/sys/devices/system/cpu/online does not exist",
    )
    def test_against_sysdev_cpu_online(self):
        with open("/sys/devices/system/cpu/online") as f:
            value = f.read().strip()
        if "-" in str(value):
            value = int(value.split('-')[1]) + 1
            assert psutil.cpu_count() == value

    @pytest.mark.skipif(
        not os.path.exists("/sys/devices/system/cpu"),
        reason="/sys/devices/system/cpu does not exist",
    )
    def test_against_sysdev_cpu_num(self):
        ls = os.listdir("/sys/devices/system/cpu")
        count = len([x for x in ls if re.search(r"cpu\d+$", x) is not None])
        assert psutil.cpu_count() == count

    @pytest.mark.skipif(
        not which("nproc"), reason="nproc utility not available"
    )
    def test_against_nproc(self):
        num = int(sh("nproc --all"))
        assert psutil.cpu_count(logical=True) == num

    @pytest.mark.skipif(
        not which("lscpu"), reason="lscpu utility not available"
    )
    def test_against_lscpu(self):
        out = sh("lscpu -p")
        num = len([x for x in out.split('\n') if not x.startswith('#')])
        assert psutil.cpu_count(logical=True) == num

    def test_emulate_fallbacks(self):
        import psutil._pslinux

        original = psutil._pslinux.cpu_count_logical()
        # Here we want to mock os.sysconf("SC_NPROCESSORS_ONLN") in
        # order to cause the parsing of /proc/cpuinfo and /proc/stat.
        with mock.patch(
            'psutil._pslinux.os.sysconf', side_effect=ValueError
        ) as m:
            assert psutil._pslinux.cpu_count_logical() == original
            assert m.called

            # Let's have open() return empty data and make sure None is
            # returned ('cause we mimic os.cpu_count()).
            with mock.patch('psutil._common.open', create=True) as m:
                assert psutil._pslinux.cpu_count_logical() is None
                assert m.call_count == 2
                # /proc/stat should be the last one
                assert m.call_args[0][0] == '/proc/stat'

            # Let's push this a bit further and make sure /proc/cpuinfo
            # parsing works as expected.
            with open('/proc/cpuinfo', 'rb') as f:
                cpuinfo_data = f.read()
            fake_file = io.BytesIO(cpuinfo_data)
            with mock.patch(
                'psutil._common.open', return_value=fake_file, create=True
            ) as m:
                assert psutil._pslinux.cpu_count_logical() == original

            # Finally, let's make /proc/cpuinfo return meaningless data;
            # this way we'll fall back on relying on /proc/stat
            with mock_open_content({"/proc/cpuinfo": b""}) as m:
                assert psutil._pslinux.cpu_count_logical() == original
                assert m.called
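

# test_against_sysdev_cpu_online above converts the range string found in
# /sys/devices/system/cpu/online (e.g. "0-7") into a logical CPU count.
# A standalone sketch that also handles comma-separated lists (hypothetical
# helper; the real test only handles a single "low-high" range):
def _cpus_from_online(value):
    # "0-7" -> 8; "0" -> 1; "0-1,3-4" -> 4
    count = 0
    for part in value.strip().split(','):
        if '-' in part:
            low, high = part.split('-')
            count += int(high) - int(low) + 1
        else:
            count += 1
    return count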


@pytest.mark.skipif(not LINUX, reason="LINUX only")
class TestSystemCPUCountCores(PsutilTestCase):
    @pytest.mark.skipif(
        not which("lscpu"), reason="lscpu utility not available"
    )
    def test_against_lscpu(self):
        out = sh("lscpu -p")
        core_ids = set()
        for line in out.split('\n'):
            if not line.startswith('#'):
                fields = line.split(',')
                core_ids.add(fields[1])
        assert psutil.cpu_count(logical=False) == len(core_ids)

    def test_method_2(self):
        meth_1 = psutil._pslinux.cpu_count_cores()
        with mock.patch('glob.glob', return_value=[]) as m:
            meth_2 = psutil._pslinux.cpu_count_cores()
            assert m.called
        if meth_1 is not None:
            assert meth_1 == meth_2

    def test_emulate_none(self):
        with mock.patch('glob.glob', return_value=[]) as m1:
            with mock.patch('psutil._common.open', create=True) as m2:
                assert psutil._pslinux.cpu_count_cores() is None
        assert m1.called
        assert m2.called


@pytest.mark.skipif(not LINUX, reason="LINUX only")
class TestSystemCPUFrequency(PsutilTestCase):
    @pytest.mark.skipif(not HAS_CPU_FREQ, reason="not supported")
    def test_emulate_use_second_file(self):
        # https://github.com/giampaolo/psutil/issues/981
        def path_exists_mock(path):
            if path.startswith("/sys/devices/system/cpu/cpufreq/policy"):
                return False
            else:
                return orig_exists(path)

        orig_exists = os.path.exists
        with mock.patch(
            "os.path.exists", side_effect=path_exists_mock, create=True
        ):
            assert psutil.cpu_freq()

    @pytest.mark.skipif(not HAS_CPU_FREQ, reason="not supported")
    @pytest.mark.skipif(
        AARCH64, reason="aarch64 does not report mhz in /proc/cpuinfo"
    )
    def test_emulate_use_cpuinfo(self):
        # Emulate a case where /sys/devices/system/cpu/cpufreq* does not
        # exist and /proc/cpuinfo is used instead.
        def path_exists_mock(path):
            if path.startswith('/sys/devices/system/cpu/'):
                return False
            else:
                return os_path_exists(path)

        os_path_exists = os.path.exists
        try:
            with mock.patch("os.path.exists", side_effect=path_exists_mock):
                reload_module(psutil._pslinux)
                ret = psutil.cpu_freq()
                assert ret, ret
                assert ret.max == 0.0
                assert ret.min == 0.0
                for freq in psutil.cpu_freq(percpu=True):
                    assert freq.max == 0.0
                    assert freq.min == 0.0
        finally:
            reload_module(psutil._pslinux)
            reload_module(psutil)

    @pytest.mark.skipif(not HAS_CPU_FREQ, reason="not supported")
    def test_emulate_data(self):
        def open_mock(name, *args, **kwargs):
            if name.endswith('/scaling_cur_freq') and name.startswith(
                "/sys/devices/system/cpu/cpufreq/policy"
            ):
                return io.BytesIO(b"500000")
            elif name.endswith('/scaling_min_freq') and name.startswith(
                "/sys/devices/system/cpu/cpufreq/policy"
            ):
                return io.BytesIO(b"600000")
            elif name.endswith('/scaling_max_freq') and name.startswith(
                "/sys/devices/system/cpu/cpufreq/policy"
            ):
                return io.BytesIO(b"700000")
            elif name == '/proc/cpuinfo':
                return io.BytesIO(b"cpu MHz     : 500")
            else:
                return orig_open(name, *args, **kwargs)

        orig_open = open
        patch_point = 'builtins.open' if PY3 else '__builtin__.open'
        with mock.patch(patch_point, side_effect=open_mock):
            with mock.patch('os.path.exists', return_value=True):
                freq = psutil.cpu_freq()
                assert freq.current == 500.0
                # when /proc/cpuinfo is used min and max frequencies are not
                # available and are set to 0.
                if freq.min != 0.0:
                    assert freq.min == 600.0
                if freq.max != 0.0:
                    assert freq.max == 700.0

    @pytest.mark.skipif(not HAS_CPU_FREQ, reason="not supported")
    def test_emulate_multi_cpu(self):
        def open_mock(name, *args, **kwargs):
            n = name
            if n.endswith('/scaling_cur_freq') and n.startswith(
                "/sys/devices/system/cpu/cpufreq/policy0"
            ):
                return io.BytesIO(b"100000")
            elif n.endswith('/scaling_min_freq') and n.startswith(
                "/sys/devices/system/cpu/cpufreq/policy0"
            ):
                return io.BytesIO(b"200000")
            elif n.endswith('/scaling_max_freq') and n.startswith(
                "/sys/devices/system/cpu/cpufreq/policy0"
            ):
                return io.BytesIO(b"300000")
            elif n.endswith('/scaling_cur_freq') and n.startswith(
                "/sys/devices/system/cpu/cpufreq/policy1"
            ):
                return io.BytesIO(b"400000")
            elif n.endswith('/scaling_min_freq') and n.startswith(
                "/sys/devices/system/cpu/cpufreq/policy1"
            ):
                return io.BytesIO(b"500000")
            elif n.endswith('/scaling_max_freq') and n.startswith(
                "/sys/devices/system/cpu/cpufreq/policy1"
            ):
                return io.BytesIO(b"600000")
            elif name == '/proc/cpuinfo':
                return io.BytesIO(b"cpu MHz     : 100\ncpu MHz     : 400")
            else:
                return orig_open(name, *args, **kwargs)

        orig_open = open
        patch_point = 'builtins.open' if PY3 else '__builtin__.open'
        with mock.patch(patch_point, side_effect=open_mock):
            with mock.patch('os.path.exists', return_value=True):
                with mock.patch(
                    'psutil._pslinux.cpu_count_logical', return_value=2
                ):
                    freq = psutil.cpu_freq(percpu=True)
                    assert freq[0].current == 100.0
                    if freq[0].min != 0.0:
                        assert freq[0].min == 200.0
                    if freq[0].max != 0.0:
                        assert freq[0].max == 300.0
                    assert freq[1].current == 400.0
                    if freq[1].min != 0.0:
                        assert freq[1].min == 500.0
                    if freq[1].max != 0.0:
                        assert freq[1].max == 600.0

    @pytest.mark.skipif(not HAS_CPU_FREQ, reason="not supported")
    def test_emulate_no_scaling_cur_freq_file(self):
        # See: https://github.com/giampaolo/psutil/issues/1071
        def open_mock(name, *args, **kwargs):
            if name.endswith('/scaling_cur_freq'):
                raise IOError(errno.ENOENT, "")
            elif name.endswith('/cpuinfo_cur_freq'):
                return io.BytesIO(b"200000")
            elif name == '/proc/cpuinfo':
                return io.BytesIO(b"cpu MHz     : 200")
            else:
                return orig_open(name, *args, **kwargs)

        orig_open = open
        patch_point = 'builtins.open' if PY3 else '__builtin__.open'
        with mock.patch(patch_point, side_effect=open_mock):
            with mock.patch('os.path.exists', return_value=True):
                with mock.patch(
                    'psutil._pslinux.cpu_count_logical', return_value=1
                ):
                    freq = psutil.cpu_freq()
                    assert freq.current == 200
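

# The cpufreq tests above mock sysfs files that report frequencies in kHz,
# while psutil.cpu_freq() exposes MHz. A sketch of that conversion
# (hypothetical helper mirroring the mocked values):
def _khz_to_mhz(raw):
    # b"500000" (kHz, as read from scaling_cur_freq) -> 500.0 (MHz)
    return int(raw.decode().strip()) / 1000.0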


@pytest.mark.skipif(not LINUX, reason="LINUX only")
class TestSystemCPUStats(PsutilTestCase):

    # XXX: fails too often.
    # def test_ctx_switches(self):
    #     vmstat_value = vmstat("context switches")
    #     psutil_value = psutil.cpu_stats().ctx_switches
    #     self.assertAlmostEqual(vmstat_value, psutil_value, delta=500)

    def test_interrupts(self):
        vmstat_value = vmstat("interrupts")
        psutil_value = psutil.cpu_stats().interrupts
        assert abs(vmstat_value - psutil_value) < 500


@pytest.mark.skipif(not LINUX, reason="LINUX only")
class TestLoadAvg(PsutilTestCase):
    @pytest.mark.skipif(not HAS_GETLOADAVG, reason="not supported")
    def test_getloadavg(self):
        psutil_value = psutil.getloadavg()
        with open("/proc/loadavg") as f:
            proc_value = f.read().split()

        assert abs(float(proc_value[0]) - psutil_value[0]) < 1
        assert abs(float(proc_value[1]) - psutil_value[1]) < 1
        assert abs(float(proc_value[2]) - psutil_value[2]) < 1


# =====================================================================
# --- system network
# =====================================================================


@pytest.mark.skipif(not LINUX, reason="LINUX only")
class TestSystemNetIfAddrs(PsutilTestCase):
    def test_ips(self):
        for name, addrs in psutil.net_if_addrs().items():
            for addr in addrs:
                if addr.family == psutil.AF_LINK:
                    assert addr.address == get_mac_address(name)
                elif addr.family == socket.AF_INET:
                    assert addr.address == get_ipv4_address(name)
                    assert addr.netmask == get_ipv4_netmask(name)
                    if addr.broadcast is not None:
                        assert addr.broadcast == get_ipv4_broadcast(name)
                    else:
                        assert get_ipv4_broadcast(name) == '0.0.0.0'
                elif addr.family == socket.AF_INET6:
                    # IPv6 addresses can have a percent symbol at the end.
                    # E.g. these 2 are equivalent:
                    # "fe80::1ff:fe23:4567:890a"
                    # "fe80::1ff:fe23:4567:890a%eth0"
                    # That is the "zone id" portion, which usually is the name
                    # of the network interface.
                    address = addr.address.split('%')[0]
                    assert address in get_ipv6_addresses(name)

    # XXX - not reliable when having virtual NICs installed by Docker.
    # @pytest.mark.skipif(not which('ip'), reason="'ip' utility not available")
    # def test_net_if_names(self):
    #     out = sh("ip addr").strip()
    #     nics = [x for x in psutil.net_if_addrs().keys() if ':' not in x]
    #     found = 0
    #     for line in out.split('\n'):
    #         line = line.strip()
    #         if re.search(r"^\d+:", line):
    #             found += 1
    #             name = line.split(':')[1].strip()
    #             self.assertIn(name, nics)
    #     self.assertEqual(len(nics), found, msg="%s\n---\n%s" % (
    #         pprint.pformat(nics), out))
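

# test_ips above strips the IPv6 "zone id" (a "%<interface>" suffix on
# link-local addresses) before comparing. A standalone sketch (hypothetical
# helper):
def _strip_zone_id(address):
    # "fe80::1ff:fe23:4567:890a%eth0" -> "fe80::1ff:fe23:4567:890a"
    return address.split('%')[0]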


@pytest.mark.skipif(not LINUX, reason="LINUX only")
@pytest.mark.skipif(QEMU_USER, reason="QEMU user not supported")
class TestSystemNetIfStats(PsutilTestCase):
    @pytest.mark.skipif(
        not which("ifconfig"), reason="ifconfig utility not available"
    )
    def test_against_ifconfig(self):
        for name, stats in psutil.net_if_stats().items():
            try:
                out = sh("ifconfig %s" % name)
            except RuntimeError:
                pass
            else:
                assert stats.isup == ('RUNNING' in out), out
                assert stats.mtu == int(
                    re.findall(r'(?i)MTU[: ](\d+)', out)[0]
                )

    def test_mtu(self):
        for name, stats in psutil.net_if_stats().items():
            with open("/sys/class/net/%s/mtu" % name) as f:
                assert stats.mtu == int(f.read().strip())

    @pytest.mark.skipif(
        not which("ifconfig"), reason="ifconfig utility not available"
    )
    def test_flags(self):
        # first line looks like this:
        # "eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500"
        matches_found = 0
        for name, stats in psutil.net_if_stats().items():
            try:
                out = sh("ifconfig %s" % name)
            except RuntimeError:
                pass
            else:
                match = re.search(r"flags=(\d+)?<(.*?)>", out)
                if match and len(match.groups()) >= 2:
                    matches_found += 1
                    ifconfig_flags = set(match.group(2).lower().split(","))
                    psutil_flags = set(stats.flags.split(","))
                    assert ifconfig_flags == psutil_flags
                else:
                    # ifconfig has a different output on CentOS 6
                    # let's try that
                    match = re.search(r"(.*)  MTU:(\d+)  Metric:(\d+)", out)
                    if match and len(match.groups()) >= 3:
                        matches_found += 1
                        ifconfig_flags = set(match.group(1).lower().split())
                        psutil_flags = set(stats.flags.split(","))
                        assert ifconfig_flags == psutil_flags

        if not matches_found:
            raise self.fail("no matches were found")
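

# test_flags above extracts interface flags from the first line of ifconfig
# output, e.g. "eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500".
# A standalone sketch of the modern-format extraction (hypothetical helper;
# the CentOS 6 fallback format is omitted):
def _ifconfig_flags(line):
    import re
    match = re.search(r"flags=(\d+)?<(.*?)>", line)
    return set(match.group(2).lower().split(",")) if match else set()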


@pytest.mark.skipif(not LINUX, reason="LINUX only")
class TestSystemNetIOCounters(PsutilTestCase):
    @pytest.mark.skipif(
        not which("ifconfig"), reason="ifconfig utility not available"
    )
    @retry_on_failure()
    def test_against_ifconfig(self):
        def ifconfig(nic):
            ret = {}
            out = sh("ifconfig %s" % nic)
            ret['packets_recv'] = int(
                re.findall(r'RX packets[: ](\d+)', out)[0]
            )
            ret['packets_sent'] = int(
                re.findall(r'TX packets[: ](\d+)', out)[0]
            )
            ret['errin'] = int(re.findall(r'errors[: ](\d+)', out)[0])
            ret['errout'] = int(re.findall(r'errors[: ](\d+)', out)[1])
            ret['dropin'] = int(re.findall(r'dropped[: ](\d+)', out)[0])
            ret['dropout'] = int(re.findall(r'dropped[: ](\d+)', out)[1])
            ret['bytes_recv'] = int(
                re.findall(r'RX (?:packets \d+ +)?bytes[: ](\d+)', out)[0]
            )
            ret['bytes_sent'] = int(
                re.findall(r'TX (?:packets \d+ +)?bytes[: ](\d+)', out)[0]
            )
            return ret

        nio = psutil.net_io_counters(pernic=True, nowrap=False)
        for name, stats in nio.items():
            try:
                ifconfig_ret = ifconfig(name)
            except RuntimeError:
                continue
            assert (
                abs(stats.bytes_recv - ifconfig_ret['bytes_recv']) < 1024 * 10
            )
            assert (
                abs(stats.bytes_sent - ifconfig_ret['bytes_sent']) < 1024 * 10
            )
            assert (
                abs(stats.packets_recv - ifconfig_ret['packets_recv']) < 1024
            )
            assert (
                abs(stats.packets_sent - ifconfig_ret['packets_sent']) < 1024
            )
            assert abs(stats.errin - ifconfig_ret['errin']) < 10
            assert abs(stats.errout - ifconfig_ret['errout']) < 10
            assert abs(stats.dropin - ifconfig_ret['dropin']) < 10
            assert abs(stats.dropout - ifconfig_ret['dropout']) < 10


@pytest.mark.skipif(not LINUX, reason="LINUX only")
class TestSystemNetConnections(PsutilTestCase):
    @mock.patch('psutil._pslinux.socket.inet_ntop', side_effect=ValueError)
    @mock.patch('psutil._pslinux.supports_ipv6', return_value=False)
    def test_emulate_ipv6_unsupported(self, supports_ipv6, inet_ntop):
        # see: https://github.com/giampaolo/psutil/issues/623
        try:
            s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
            self.addCleanup(s.close)
            s.bind(("::1", 0))
        except socket.error:
            pass
        psutil.net_connections(kind='inet6')

    def test_emulate_unix(self):
        content = textwrap.dedent("""\
            0: 00000003 000 000 0001 03 462170 @/tmp/dbus-Qw2hMPIU3n
            0: 00000003 000 000 0001 03 35010 @/tmp/dbus-tB2X8h69BQ
            0: 00000003 000 000 0001 03 34424 @/tmp/dbus-cHy80Y8O
            000000000000000000000000000000000000000000000000000000
            """)
        with mock_open_content({"/proc/net/unix": content}) as m:
            psutil.net_connections(kind='unix')
            assert m.called


# =====================================================================
# --- system disks
# =====================================================================


@pytest.mark.skipif(not LINUX, reason="LINUX only")
class TestSystemDiskPartitions(PsutilTestCase):
    @pytest.mark.skipif(
        not hasattr(os, 'statvfs'), reason="os.statvfs() not available"
    )
    @skip_on_not_implemented()
    def test_against_df(self):
        # test psutil.disk_usage() and psutil.disk_partitions()
        # against "df -a"
        def df(path):
            out = sh('df -P -B 1 "%s"' % path).strip()
            lines = out.split('\n')
            lines.pop(0)
            line = lines.pop(0)
            dev, total, used, free = line.split()[:4]
            if dev == 'none':
                dev = ''
            total, used, free = int(total), int(used), int(free)
            return dev, total, used, free

        for part in psutil.disk_partitions(all=False):
            usage = psutil.disk_usage(part.mountpoint)
            _, total, used, free = df(part.mountpoint)
            assert usage.total == total
            assert abs(usage.free - free) < TOLERANCE_DISK_USAGE
            assert abs(usage.used - used) < TOLERANCE_DISK_USAGE

    def test_zfs_fs(self):
        # Test that ZFS partitions are returned.
        with open("/proc/filesystems") as f:
            data = f.read()
        if 'zfs' in data:
            for part in psutil.disk_partitions():
                if part.fstype == 'zfs':
                    break
            else:
                raise self.fail("couldn't find any ZFS partition")
        else:
            # No ZFS partitions on this system. Let's fake one.
            fake_file = io.StringIO(u"nodev\tzfs\n")
            with mock.patch(
                'psutil._common.open', return_value=fake_file, create=True
            ) as m1:
                with mock.patch(
                    'psutil._pslinux.cext.disk_partitions',
                    return_value=[('/dev/sdb3', '/', 'zfs', 'rw')],
                ) as m2:
                    ret = psutil.disk_partitions()
                    assert m1.called
                    assert m2.called
                    assert ret
                    assert ret[0].fstype == 'zfs'

    def test_emulate_realpath_fail(self):
        # See: https://github.com/giampaolo/psutil/issues/1307
        try:
            with mock.patch(
                'os.path.realpath', return_value='/non/existent'
            ) as m:
                with pytest.raises(FileNotFoundError):
                    psutil.disk_partitions()
                assert m.called
        finally:
            psutil.PROCFS_PATH = "/proc"


@pytest.mark.skipif(not LINUX, reason="LINUX only")
class TestSystemDiskIoCounters(PsutilTestCase):
    def test_emulate_kernel_2_4(self):
        # Tests /proc/diskstats parsing format for 2.4 kernels, see:
        # https://github.com/giampaolo/psutil/issues/767
        content = "   3     0   1 hda 2 3 4 5 6 7 8 9 10 11 12"
        with mock_open_content({'/proc/diskstats': content}):
            with mock.patch(
                'psutil._pslinux.is_storage_device', return_value=True
            ):
                ret = psutil.disk_io_counters(nowrap=False)
                assert ret.read_count == 1
                assert ret.read_merged_count == 2
                assert ret.read_bytes == 3 * SECTOR_SIZE
                assert ret.read_time == 4
                assert ret.write_count == 5
                assert ret.write_merged_count == 6
                assert ret.write_bytes == 7 * SECTOR_SIZE
                assert ret.write_time == 8
                assert ret.busy_time == 10

    def test_emulate_kernel_2_6_full(self):
        # Tests /proc/diskstats parsing format for 2.6 kernels,
        # lines reporting all metrics:
        # https://github.com/giampaolo/psutil/issues/767
        content = "   3    0   hda 1 2 3 4 5 6 7 8 9 10 11"
        with mock_open_content({"/proc/diskstats": content}):
            with mock.patch(
                'psutil._pslinux.is_storage_device', return_value=True
            ):
                ret = psutil.disk_io_counters(nowrap=False)
                assert ret.read_count == 1
                assert ret.read_merged_count == 2
                assert ret.read_bytes == 3 * SECTOR_SIZE
                assert ret.read_time == 4
                assert ret.write_count == 5
                assert ret.write_merged_count == 6
                assert ret.write_bytes == 7 * SECTOR_SIZE
                assert ret.write_time == 8
                assert ret.busy_time == 10

    def test_emulate_kernel_2_6_limited(self):
        # Tests /proc/diskstats parsing format for 2.6 kernels,
        # where one line of /proc/partitions return a limited
        # amount of metrics when it bumps into a partition
        # (instead of a disk). See:
        # https://github.com/giampaolo/psutil/issues/767
        with mock_open_content({"/proc/diskstats": "   3    1   hda 1 2 3 4"}):
            with mock.patch(
                'psutil._pslinux.is_storage_device', return_value=True
            ):
                ret = psutil.disk_io_counters(nowrap=False)
                assert ret.read_count == 1
                assert ret.read_bytes == 2 * SECTOR_SIZE
                assert ret.write_count == 3
                assert ret.write_bytes == 4 * SECTOR_SIZE

                assert ret.read_merged_count == 0
                assert ret.read_time == 0
                assert ret.write_merged_count == 0
                assert ret.write_time == 0
                assert ret.busy_time == 0
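
# The two tests above pin down both /proc/diskstats line formats. A minimal
# standalone sketch of the parsing they rely on; the field layout and the
# fixed 512-byte sector convention are taken from the fixtures above, not
# from psutil's actual implementation:

```python
SECTOR_SIZE = 512  # the kernel reports sector counts in 512-byte units

def parse_diskstats_line(line):
    """Parse one /proc/diskstats line into a dict of I/O counters.

    Full (2.6+) lines carry 11 metric fields after the device name;
    truncated partition lines carry only 4, in which case the missing
    metrics default to 0.
    """
    fields = line.split()
    name = fields[2]
    nums = [int(x) for x in fields[3:]]
    if len(nums) >= 11:
        (reads, reads_merged, rsect, rtime,
         writes, writes_merged, wsect, wtime,
         _in_flight, busy_time, _weighted) = nums[:11]
    else:  # limited 4-field format (partitions on old kernels)
        reads, rsect, writes, wsect = nums[:4]
        reads_merged = rtime = writes_merged = wtime = busy_time = 0
    return {
        'name': name,
        'read_count': reads,
        'read_merged_count': reads_merged,
        'read_bytes': rsect * SECTOR_SIZE,
        'read_time': rtime,
        'write_count': writes,
        'write_merged_count': writes_merged,
        'write_bytes': wsect * SECTOR_SIZE,
        'write_time': wtime,
        'busy_time': busy_time,
    }
```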

    def test_emulate_include_partitions(self):
        # Make sure that when perdisk=True disk partitions are returned,
        # see:
        # https://github.com/giampaolo/psutil/pull/1313#issuecomment-408626842
        content = textwrap.dedent("""\
            3    0   nvme0n1 1 2 3 4 5 6 7 8 9 10 11
            3    0   nvme0n1p1 1 2 3 4 5 6 7 8 9 10 11
            """)
        with mock_open_content({"/proc/diskstats": content}):
            with mock.patch(
                'psutil._pslinux.is_storage_device', return_value=False
            ):
                ret = psutil.disk_io_counters(perdisk=True, nowrap=False)
                assert len(ret) == 2
                assert ret['nvme0n1'].read_count == 1
                assert ret['nvme0n1p1'].read_count == 1
                assert ret['nvme0n1'].write_count == 5
                assert ret['nvme0n1p1'].write_count == 5

    def test_emulate_exclude_partitions(self):
        # Make sure that when perdisk=False partitions (e.g. 'sda1',
        # 'nvme0n1p1') are skipped and not included in the total count.
        # https://github.com/giampaolo/psutil/pull/1313#issuecomment-408626842
        content = textwrap.dedent("""\
            3    0   nvme0n1 1 2 3 4 5 6 7 8 9 10 11
            3    0   nvme0n1p1 1 2 3 4 5 6 7 8 9 10 11
            """)
        with mock_open_content({"/proc/diskstats": content}):
            with mock.patch(
                'psutil._pslinux.is_storage_device', return_value=False
            ):
                ret = psutil.disk_io_counters(perdisk=False, nowrap=False)
                assert ret is None

        def is_storage_device(name):
            return name == 'nvme0n1'

        content = textwrap.dedent("""\
            3    0   nvme0n1 1 2 3 4 5 6 7 8 9 10 11
            3    0   nvme0n1p1 1 2 3 4 5 6 7 8 9 10 11
            """)
        with mock_open_content({"/proc/diskstats": content}):
            with mock.patch(
                'psutil._pslinux.is_storage_device',
                create=True,
                side_effect=is_storage_device,
            ):
                ret = psutil.disk_io_counters(perdisk=False, nowrap=False)
                assert ret.read_count == 1
                assert ret.write_count == 5
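
# psutil decides disk-vs-partition via is_storage_device(), which consults
# sysfs. A purely illustrative name-based heuristic with the same effect for
# the common naming schemes used in these fixtures (this is NOT how psutil
# does it):

```python
import re

def looks_like_partition(name):
    """Heuristic: 'sda1' and 'nvme0n1p1' are partitions of 'sda' and
    'nvme0n1'. Illustrative only; psutil checks /sys/block instead."""
    return bool(
        re.fullmatch(r"(sd[a-z]+|hd[a-z]+)\d+", name)
        or re.fullmatch(r"nvme\d+n\d+p\d+", name)
    )
```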

    def test_emulate_use_sysfs(self):
        def exists(path):
            return path == '/proc/diskstats'

        wprocfs = psutil.disk_io_counters(perdisk=True)
        with mock.patch(
            'psutil._pslinux.os.path.exists', create=True, side_effect=exists
        ):
            wsysfs = psutil.disk_io_counters(perdisk=True)
        assert len(wprocfs) == len(wsysfs)

    def test_emulate_not_impl(self):
        def exists(path):
            return False

        with mock.patch(
            'psutil._pslinux.os.path.exists', create=True, side_effect=exists
        ):
            with pytest.raises(NotImplementedError):
                psutil.disk_io_counters()


@pytest.mark.skipif(not LINUX, reason="LINUX only")
class TestRootFsDeviceFinder(PsutilTestCase):
    def setUp(self):
        dev = os.stat("/").st_dev
        self.major = os.major(dev)
        self.minor = os.minor(dev)

    def test_call_methods(self):
        finder = RootFsDeviceFinder()
        if os.path.exists("/proc/partitions"):
            finder.ask_proc_partitions()
        else:
            with pytest.raises(FileNotFoundError):
                finder.ask_proc_partitions()
        if os.path.exists(
            "/sys/dev/block/%s:%s/uevent" % (self.major, self.minor)
        ):
            finder.ask_sys_dev_block()
        else:
            with pytest.raises(FileNotFoundError):
                finder.ask_sys_dev_block()
        finder.ask_sys_class_block()

    @pytest.mark.skipif(GITHUB_ACTIONS, reason="unsupported on GITHUB_ACTIONS")
    def test_comparisons(self):
        finder = RootFsDeviceFinder()
        assert finder.find() is not None

        a = b = c = None
        if os.path.exists("/proc/partitions"):
            a = finder.ask_proc_partitions()
        if os.path.exists(
            "/sys/dev/block/%s:%s/uevent" % (self.major, self.minor)
        ):
            b = finder.ask_sys_dev_block()
        c = finder.ask_sys_class_block()

        base = a or b or c
        if base and a:
            assert base == a
        if base and b:
            assert base == b
        if base and c:
            assert base == c

    @pytest.mark.skipif(
        not which("findmnt"), reason="findmnt utility not available"
    )
    @pytest.mark.skipif(GITHUB_ACTIONS, reason="unsupported on GITHUB_ACTIONS")
    def test_against_findmnt(self):
        psutil_value = RootFsDeviceFinder().find()
        findmnt_value = sh("findmnt -o SOURCE -rn /")
        assert psutil_value == findmnt_value

    def test_disk_partitions_mocked(self):
        with mock.patch(
            'psutil._pslinux.cext.disk_partitions',
            return_value=[('/dev/root', '/', 'ext4', 'rw')],
        ) as m:
            part = psutil.disk_partitions()[0]
            assert m.called
            if not GITHUB_ACTIONS:
                assert part.device != "/dev/root"
                assert part.device == RootFsDeviceFinder().find()
            else:
                assert part.device == "/dev/root"


# =====================================================================
# --- misc
# =====================================================================


@pytest.mark.skipif(not LINUX, reason="LINUX only")
class TestMisc(PsutilTestCase):
    def test_boot_time(self):
        vmstat_value = vmstat('boot time')
        psutil_value = psutil.boot_time()
        assert int(vmstat_value) == int(psutil_value)
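
# Boot time comes from the 'btime' line of /proc/stat (seconds since the
# epoch). A minimal sketch of that lookup, including the RuntimeError the
# mocked test below expects when the line is missing (illustrative, not
# psutil's literal code):

```python
def parse_boot_time(stat_text):
    """Extract the boot timestamp from /proc/stat content."""
    for line in stat_text.splitlines():
        if line.startswith('btime'):
            return float(line.strip().split()[1])
    raise RuntimeError("line 'btime' not found in /proc/stat")
```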

    def test_no_procfs_on_import(self):
        my_procfs = self.get_testfn()
        os.mkdir(my_procfs)

        with open(os.path.join(my_procfs, 'stat'), 'w') as f:
            f.write('cpu   0 0 0 0 0 0 0 0 0 0\n')
            f.write('cpu0  0 0 0 0 0 0 0 0 0 0\n')
            f.write('cpu1  0 0 0 0 0 0 0 0 0 0\n')

        try:
            orig_open = open

            def open_mock(name, *args, **kwargs):
                if name.startswith('/proc'):
                    raise IOError(errno.ENOENT, 'rejecting access for test')
                return orig_open(name, *args, **kwargs)

            patch_point = 'builtins.open' if PY3 else '__builtin__.open'
            with mock.patch(patch_point, side_effect=open_mock):
                reload_module(psutil)

                with pytest.raises(IOError):
                    psutil.cpu_times()
                with pytest.raises(IOError):
                    psutil.cpu_times(percpu=True)
                with pytest.raises(IOError):
                    psutil.cpu_percent()
                with pytest.raises(IOError):
                    psutil.cpu_percent(percpu=True)
                with pytest.raises(IOError):
                    psutil.cpu_times_percent()
                with pytest.raises(IOError):
                    psutil.cpu_times_percent(percpu=True)

                psutil.PROCFS_PATH = my_procfs

                assert psutil.cpu_percent() == 0
                assert sum(psutil.cpu_times_percent()) == 0

                # since we don't know the number of CPUs at import time,
                # we awkwardly say there are none until the second call
                per_cpu_percent = psutil.cpu_percent(percpu=True)
                assert sum(per_cpu_percent) == 0

                # ditto awkward length
                per_cpu_times_percent = psutil.cpu_times_percent(percpu=True)
                assert sum(map(sum, per_cpu_times_percent)) == 0

                # much user, very busy
                with open(os.path.join(my_procfs, 'stat'), 'w') as f:
                    f.write('cpu   1 0 0 0 0 0 0 0 0 0\n')
                    f.write('cpu0  1 0 0 0 0 0 0 0 0 0\n')
                    f.write('cpu1  1 0 0 0 0 0 0 0 0 0\n')

                assert psutil.cpu_percent() != 0
                assert sum(psutil.cpu_percent(percpu=True)) != 0
                assert sum(psutil.cpu_times_percent()) != 0
                assert (
                    sum(map(sum, psutil.cpu_times_percent(percpu=True))) != 0
                )
        finally:
            shutil.rmtree(my_procfs)
            reload_module(psutil)

        assert psutil.PROCFS_PATH == '/proc'

    def test_cpu_steal_decrease(self):
        # Test cumulative cpu stats decrease. We should ignore this.
        # See issue #1210.
        content = textwrap.dedent("""\
            cpu   0 0 0 0 0 0 0 1 0 0
            cpu0  0 0 0 0 0 0 0 1 0 0
            cpu1  0 0 0 0 0 0 0 1 0 0
            """).encode()
        with mock_open_content({"/proc/stat": content}) as m:
            # first call to "percent" functions should read the new stat file
            # and compare to the "real" file read at import time - so the
            # values are meaningless
            psutil.cpu_percent()
            assert m.called
            psutil.cpu_percent(percpu=True)
            psutil.cpu_times_percent()
            psutil.cpu_times_percent(percpu=True)

        content = textwrap.dedent("""\
            cpu   1 0 0 0 0 0 0 0 0 0
            cpu0  1 0 0 0 0 0 0 0 0 0
            cpu1  1 0 0 0 0 0 0 0 0 0
            """).encode()
        with mock_open_content({"/proc/stat": content}) as m:
            # Increase "user" while steal goes "backwards" to zero.
            cpu_percent = psutil.cpu_percent()
            assert m.called
            cpu_percent_percpu = psutil.cpu_percent(percpu=True)
            cpu_times_percent = psutil.cpu_times_percent()
            cpu_times_percent_percpu = psutil.cpu_times_percent(percpu=True)
            assert cpu_percent != 0
            assert sum(cpu_percent_percpu) != 0
            assert sum(cpu_times_percent) != 0
            assert sum(cpu_times_percent) != 100.0
            assert sum(map(sum, cpu_times_percent_percpu)) != 0
            assert sum(map(sum, cpu_times_percent_percpu)) != 100.0
            assert cpu_times_percent.steal == 0
            assert cpu_times_percent.user != 0
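
# The fix for issue #1210 boils down to clamping per-field deltas at zero
# when a cumulative counter goes backwards. A sketch of that idea (field
# names are assumptions, not psutil's internal representation):

```python
def cpu_times_delta(t1, t2):
    """Per-field elapsed time between two cpu-times samples. A field
    that decreased (e.g. 'steal' going backwards after a VM migration)
    yields 0 instead of a negative delta."""
    return {name: max(0, t2[name] - t1[name]) for name in t1}
```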

    def test_boot_time_mocked(self):
        with mock.patch('psutil._common.open', create=True) as m:
            with pytest.raises(RuntimeError):
                psutil._pslinux.boot_time()
            assert m.called

    def test_users(self):
        # Make sure the C extension converts ':0' and ':0.0' to
        # 'localhost'.
        for user in psutil.users():
            assert user.host not in (":0", ":0.0")

    def test_procfs_path(self):
        tdir = self.get_testfn()
        os.mkdir(tdir)
        try:
            psutil.PROCFS_PATH = tdir
            with pytest.raises(IOError):
                psutil.virtual_memory()
            with pytest.raises(IOError):
                psutil.cpu_times()
            with pytest.raises(IOError):
                psutil.cpu_times(percpu=True)
            with pytest.raises(IOError):
                psutil.boot_time()
            # self.assertRaises(IOError, psutil.pids)
            with pytest.raises(IOError):
                psutil.net_connections()
            with pytest.raises(IOError):
                psutil.net_io_counters()
            with pytest.raises(IOError):
                psutil.net_if_stats()
            # self.assertRaises(IOError, psutil.disk_io_counters)
            with pytest.raises(IOError):
                psutil.disk_partitions()
            with pytest.raises(psutil.NoSuchProcess):
                psutil.Process()
        finally:
            psutil.PROCFS_PATH = "/proc"

    @retry_on_failure()
    @pytest.mark.skipif(PYTEST_PARALLEL, reason="skip if pytest-parallel")
    def test_issue_687(self):
        # In case of thread ID:
        # - pid_exists() is supposed to return False
        # - Process(tid) is supposed to work
        # - pids() should not return the TID
        # See: https://github.com/giampaolo/psutil/issues/687
        with ThreadTask():
            p = psutil.Process()
            threads = p.threads()
            assert len(threads) == (3 if QEMU_USER else 2)
            tid = sorted(threads, key=lambda x: x.id)[1].id
            assert p.pid != tid
            pt = psutil.Process(tid)
            pt.as_dict()
            assert tid not in psutil.pids()

    def test_pid_exists_no_proc_status(self):
        # Internally pid_exists() relies on /proc/{pid}/status.
        # Emulate a case where this file is empty, in which case
        # psutil is supposed to fall back on using pids().
        with mock_open_content({"/proc/%s/status" % os.getpid(): ""}) as m:
            assert psutil.pid_exists(os.getpid())
            assert m.called


# =====================================================================
# --- sensors
# =====================================================================


@pytest.mark.skipif(not LINUX, reason="LINUX only")
@pytest.mark.skipif(not HAS_BATTERY, reason="no battery")
class TestSensorsBattery(PsutilTestCase):
    @pytest.mark.skipif(not which("acpi"), reason="acpi utility not available")
    def test_percent(self):
        out = sh("acpi -b")
        acpi_value = int(out.split(",")[1].strip().replace('%', ''))
        psutil_value = psutil.sensors_battery().percent
        assert abs(acpi_value - psutil_value) < 1

    def test_emulate_power_plugged(self):
        # Pretend the AC power cable is connected.
        def open_mock(name, *args, **kwargs):
            if name.endswith(('AC0/online', 'AC/online')):
                return io.BytesIO(b"1")
            else:
                return orig_open(name, *args, **kwargs)

        orig_open = open
        patch_point = 'builtins.open' if PY3 else '__builtin__.open'
        with mock.patch(patch_point, side_effect=open_mock) as m:
            assert psutil.sensors_battery().power_plugged is True
            assert (
                psutil.sensors_battery().secsleft
                == psutil.POWER_TIME_UNLIMITED
            )
            assert m.called

    def test_emulate_power_plugged_2(self):
        # Same as above but pretend /AC0/online does not exist, in
        # which case the code relies on the /status file.
        def open_mock(name, *args, **kwargs):
            if name.endswith(('AC0/online', 'AC/online')):
                raise IOError(errno.ENOENT, "")
            elif name.endswith("/status"):
                return io.StringIO(u"charging")
            else:
                return orig_open(name, *args, **kwargs)

        orig_open = open
        patch_point = 'builtins.open' if PY3 else '__builtin__.open'
        with mock.patch(patch_point, side_effect=open_mock) as m:
            assert psutil.sensors_battery().power_plugged is True
            assert m.called

    def test_emulate_power_not_plugged(self):
        # Pretend the AC power cable is not connected.
        def open_mock(name, *args, **kwargs):
            if name.endswith(('AC0/online', 'AC/online')):
                return io.BytesIO(b"0")
            else:
                return orig_open(name, *args, **kwargs)

        orig_open = open
        patch_point = 'builtins.open' if PY3 else '__builtin__.open'
        with mock.patch(patch_point, side_effect=open_mock) as m:
            assert psutil.sensors_battery().power_plugged is False
            assert m.called

    def test_emulate_power_not_plugged_2(self):
        # Same as above but pretend /AC0/online does not exist, in
        # which case the code relies on the /status file.
        def open_mock(name, *args, **kwargs):
            if name.endswith(('AC0/online', 'AC/online')):
                raise IOError(errno.ENOENT, "")
            elif name.endswith("/status"):
                return io.StringIO(u"discharging")
            else:
                return orig_open(name, *args, **kwargs)

        orig_open = open
        patch_point = 'builtins.open' if PY3 else '__builtin__.open'
        with mock.patch(patch_point, side_effect=open_mock) as m:
            assert psutil.sensors_battery().power_plugged is False
            assert m.called

    def test_emulate_power_undetermined(self):
        # Pretend we can't tell whether the AC power cable is
        # connected (power_plugged should fall back to None).
        def open_mock(name, *args, **kwargs):
            if name.startswith((
                '/sys/class/power_supply/AC0/online',
                '/sys/class/power_supply/AC/online',
            )):
                raise IOError(errno.ENOENT, "")
            elif name.startswith("/sys/class/power_supply/BAT0/status"):
                return io.BytesIO(b"???")
            else:
                return orig_open(name, *args, **kwargs)

        orig_open = open
        patch_point = 'builtins.open' if PY3 else '__builtin__.open'
        with mock.patch(patch_point, side_effect=open_mock) as m:
            assert psutil.sensors_battery().power_plugged is None
            assert m.called

    def test_emulate_energy_full_0(self):
        # Emulate a case where the energy_full file returns 0.
        with mock_open_content(
            {"/sys/class/power_supply/BAT0/energy_full": b"0"}
        ) as m:
            assert psutil.sensors_battery().percent == 0
            assert m.called

    def test_emulate_energy_full_not_avail(self):
        # Emulate a case where energy_full file does not exist.
        # Expected fallback on /capacity.
        with mock_open_exception(
            "/sys/class/power_supply/BAT0/energy_full",
            IOError(errno.ENOENT, ""),
        ):
            with mock_open_exception(
                "/sys/class/power_supply/BAT0/charge_full",
                IOError(errno.ENOENT, ""),
            ):
                with mock_open_content(
                    {"/sys/class/power_supply/BAT0/capacity": b"88"}
                ):
                    assert psutil.sensors_battery().percent == 88
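
# The two tests above exercise a fallback order in sysfs: compute the
# percentage from energy_now/energy_full (or charge_now/charge_full), and
# only read the precomputed /capacity file when those are unavailable. A
# hedged sketch of that order (file names match the mocks above; the
# function itself is illustrative, not psutil's implementation):

```python
import os

def battery_percent(base):
    """Battery charge as a 0-100 float from a power_supply sysfs dir,
    following the fallback order exercised by the tests above."""
    for num, den in (("energy_now", "energy_full"),
                     ("charge_now", "charge_full")):
        try:
            with open(os.path.join(base, num)) as f1:
                now = int(f1.read())
            with open(os.path.join(base, den)) as f2:
                full = int(f2.read())
        except (OSError, ValueError):
            continue  # pair missing or unreadable: try the next source
        if full == 0:
            return 0.0
        return now * 100.0 / full
    # last resort: the kernel's own precomputed percentage
    with open(os.path.join(base, "capacity")) as f:
        return float(f.read())
```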

    def test_emulate_no_power(self):
        # Emulate a case where neither the /AC0/online nor the
        # /BAT0/status file exists.
        with mock_open_exception(
            "/sys/class/power_supply/AC/online", IOError(errno.ENOENT, "")
        ):
            with mock_open_exception(
                "/sys/class/power_supply/AC0/online", IOError(errno.ENOENT, "")
            ):
                with mock_open_exception(
                    "/sys/class/power_supply/BAT0/status",
                    IOError(errno.ENOENT, ""),
                ):
                    assert psutil.sensors_battery().power_plugged is None


@pytest.mark.skipif(not LINUX, reason="LINUX only")
class TestSensorsBatteryEmulated(PsutilTestCase):
    def test_it(self):
        def open_mock(name, *args, **kwargs):
            if name.endswith("/energy_now"):
                return io.StringIO(u"60000000")
            elif name.endswith("/power_now"):
                return io.StringIO(u"0")
            elif name.endswith("/energy_full"):
                return io.StringIO(u"60000001")
            else:
                return orig_open(name, *args, **kwargs)

        orig_open = open
        patch_point = 'builtins.open' if PY3 else '__builtin__.open'
        with mock.patch('os.listdir', return_value=["BAT0"]) as mlistdir:
            with mock.patch(patch_point, side_effect=open_mock) as mopen:
                assert psutil.sensors_battery() is not None
        assert mlistdir.called
        assert mopen.called


@pytest.mark.skipif(not LINUX, reason="LINUX only")
class TestSensorsTemperatures(PsutilTestCase):
    def test_emulate_class_hwmon(self):
        def open_mock(name, *args, **kwargs):
            if name.endswith('/name'):
                return io.StringIO(u"name")
            elif name.endswith('/temp1_label'):
                return io.StringIO(u"label")
            elif name.endswith('/temp1_input'):
                return io.BytesIO(b"30000")
            elif name.endswith('/temp1_max'):
                return io.BytesIO(b"40000")
            elif name.endswith('/temp1_crit'):
                return io.BytesIO(b"50000")
            else:
                return orig_open(name, *args, **kwargs)

        orig_open = open
        patch_point = 'builtins.open' if PY3 else '__builtin__.open'
        with mock.patch(patch_point, side_effect=open_mock):
            # Test case with /sys/class/hwmon
            with mock.patch(
                'glob.glob', return_value=['/sys/class/hwmon/hwmon0/temp1']
            ):
                temp = psutil.sensors_temperatures()['name'][0]
                assert temp.label == 'label'
                assert temp.current == 30.0
                assert temp.high == 40.0
                assert temp.critical == 50.0
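
# The hwmon files mocked above hold integer millidegrees Celsius, which is
# why b"30000" reads back as 30.0. The unit conversion the assertions
# depend on, as a one-liner:

```python
def hwmon_to_celsius(raw):
    """hwmon temp*_input/_max/_crit files store integer millidegrees
    Celsius; accept str or bytes content and return degrees."""
    return int(raw) / 1000.0
```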

    def test_emulate_class_thermal(self):
        def open_mock(name, *args, **kwargs):
            if name.endswith('0_temp'):
                return io.BytesIO(b"50000")
            elif name.endswith('temp'):
                return io.BytesIO(b"30000")
            elif name.endswith('0_type'):
                return io.StringIO(u"critical")
            elif name.endswith('type'):
                return io.StringIO(u"name")
            else:
                return orig_open(name, *args, **kwargs)

        def glob_mock(path):
            if path == '/sys/class/hwmon/hwmon*/temp*_*':  # noqa
                return []
            elif path == '/sys/class/hwmon/hwmon*/device/temp*_*':
                return []
            elif path == '/sys/class/thermal/thermal_zone*':
                return ['/sys/class/thermal/thermal_zone0']
            elif path == '/sys/class/thermal/thermal_zone0/trip_point*':
                return [
                    '/sys/class/thermal/thermal_zone0/trip_point_0_type',
                    '/sys/class/thermal/thermal_zone0/trip_point_0_temp',
                ]
            return []

        orig_open = open
        patch_point = 'builtins.open' if PY3 else '__builtin__.open'
        with mock.patch(patch_point, side_effect=open_mock):
            with mock.patch('glob.glob', create=True, side_effect=glob_mock):
                temp = psutil.sensors_temperatures()['name'][0]
                assert temp.label == ''  # noqa
                assert temp.current == 30.0
                assert temp.high == 50.0
                assert temp.critical == 50.0


@pytest.mark.skipif(not LINUX, reason="LINUX only")
class TestSensorsFans(PsutilTestCase):
    def test_emulate_data(self):
        def open_mock(name, *args, **kwargs):
            if name.endswith('/name'):
                return io.StringIO(u"name")
            elif name.endswith('/fan1_label'):
                return io.StringIO(u"label")
            elif name.endswith('/fan1_input'):
                return io.StringIO(u"2000")
            else:
                return orig_open(name, *args, **kwargs)

        orig_open = open
        patch_point = 'builtins.open' if PY3 else '__builtin__.open'
        with mock.patch(patch_point, side_effect=open_mock):
            with mock.patch(
                'glob.glob', return_value=['/sys/class/hwmon/hwmon2/fan1']
            ):
                fan = psutil.sensors_fans()['name'][0]
                assert fan.label == 'label'
                assert fan.current == 2000


# =====================================================================
# --- test process
# =====================================================================


@pytest.mark.skipif(not LINUX, reason="LINUX only")
class TestProcess(PsutilTestCase):
    @retry_on_failure()
    def test_parse_smaps_vs_memory_maps(self):
        sproc = self.spawn_testproc()
        uss, pss, swap = psutil._pslinux.Process(sproc.pid)._parse_smaps()
        maps = psutil.Process(sproc.pid).memory_maps(grouped=False)
        assert (
            abs(uss - sum([x.private_dirty + x.private_clean for x in maps]))
            < 4096
        )
        assert abs(pss - sum([x.pss for x in maps])) < 4096
        assert abs(swap - sum([x.swap for x in maps])) < 4096

    def test_parse_smaps_mocked(self):
        # See: https://github.com/giampaolo/psutil/issues/1222
        content = textwrap.dedent("""\
            fffff0 r-xp 00000000 00:00 0                  [vsyscall]
            Size:                  1 kB
            Rss:                   2 kB
            Pss:                   3 kB
            Shared_Clean:          4 kB
            Shared_Dirty:          5 kB
            Private_Clean:         6 kB
            Private_Dirty:         7 kB
            Referenced:            8 kB
            Anonymous:             9 kB
            LazyFree:              10 kB
            AnonHugePages:         11 kB
            ShmemPmdMapped:        12 kB
            Shared_Hugetlb:        13 kB
            Private_Hugetlb:       14 kB
            Swap:                  15 kB
            SwapPss:               16 kB
            KernelPageSize:        17 kB
            MMUPageSize:           18 kB
            Locked:                19 kB
            VmFlags: rd ex
            """).encode()
        with mock_open_content({"/proc/%s/smaps" % os.getpid(): content}) as m:
            p = psutil._pslinux.Process(os.getpid())
            uss, pss, swap = p._parse_smaps()
            assert m.called
            assert uss == (6 + 7 + 14) * 1024
            assert pss == 3 * 1024
            assert swap == 15 * 1024
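
# A standalone sketch of the smaps accounting the assertions above check:
# USS is the memory private to the process (Private_Clean + Private_Dirty
# + Private_Hugetlb), while PSS and Swap are summed directly. The field
# selection matches the expected values above; the parser itself is
# illustrative, not psutil's _parse_smaps():

```python
def parse_smaps(text):
    """Return (uss, pss, swap) in bytes from /proc/<pid>/smaps content."""
    totals = {}
    for line in text.splitlines():
        parts = line.split()
        # metric lines look like "Pss:  3 kB"; mapping headers and
        # VmFlags lines don't match this shape and are skipped
        if len(parts) == 3 and parts[2] == 'kB':
            key = parts[0].rstrip(':')
            totals[key] = totals.get(key, 0) + int(parts[1]) * 1024
    uss = (totals.get('Private_Clean', 0)
           + totals.get('Private_Dirty', 0)
           + totals.get('Private_Hugetlb', 0))
    return uss, totals.get('Pss', 0), totals.get('Swap', 0)
```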

    # On PYPY file descriptors are not closed fast enough.
    @pytest.mark.skipif(PYPY, reason="unreliable on PYPY")
    def test_open_files_mode(self):
        def get_test_file(fname):
            p = psutil.Process()
            giveup_at = time.time() + GLOBAL_TIMEOUT
            while time.time() <= giveup_at:
                for file in p.open_files():
                    if file.path == os.path.abspath(fname):
                        return file
            raise RuntimeError("timeout looking for test file")

        testfn = self.get_testfn()
        with open(testfn, "w"):
            assert get_test_file(testfn).mode == "w"
        with open(testfn):
            assert get_test_file(testfn).mode == "r"
        with open(testfn, "a"):
            assert get_test_file(testfn).mode == "a"
        with open(testfn, "r+"):
            assert get_test_file(testfn).mode == "r+"
        with open(testfn, "w+"):
            assert get_test_file(testfn).mode == "r+"
        with open(testfn, "a+"):
            assert get_test_file(testfn).mode == "a+"
        # note: "x" bit is not supported
        if PY3:
            safe_rmpath(testfn)
            with open(testfn, "x"):
                assert get_test_file(testfn).mode == "w"
            safe_rmpath(testfn)
            with open(testfn, "x+"):
                assert get_test_file(testfn).mode == "r+"
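
# A sketch of the mapping the assertions above pin down: the open(2) flag
# bits (exposed via /proc/<pid>/fdinfo/<fd>) reduce to the Python-style
# mode strings. This assumes fdinfo's flags field drives the result; it is
# not psutil's literal implementation:

```python
import os

O_ACCMODE = 0o3  # POSIX access-mode mask (not exported by the os module)

def flags_to_mode(flags):
    """Map open(2) flag bits to 'r'/'w'/'a'/'r+'/'a+'."""
    acc = flags & O_ACCMODE
    if acc == os.O_RDONLY:
        return 'r'
    if acc == os.O_WRONLY:
        return 'a' if flags & os.O_APPEND else 'w'
    return 'a+' if flags & os.O_APPEND else 'r+'
```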

    def test_open_files_file_gone(self):
        # simulates a file which gets deleted during open_files()
        # execution
        p = psutil.Process()
        files = p.open_files()
        with open(self.get_testfn(), 'w'):
            # give the kernel some time to see the new file
            call_until(lambda: len(p.open_files()) != len(files))
            with mock.patch(
                'psutil._pslinux.os.readlink',
                side_effect=OSError(errno.ENOENT, ""),
            ) as m:
                assert p.open_files() == []
                assert m.called
            # also simulate the case where os.readlink() returns EINVAL
            # in which case psutil is supposed to 'continue'
            with mock.patch(
                'psutil._pslinux.os.readlink',
                side_effect=OSError(errno.EINVAL, ""),
            ) as m:
                assert p.open_files() == []
                assert m.called

    def test_open_files_fd_gone(self):
        # Simulate a case where /proc/{pid}/fdinfo/{fd} disappears
        # while iterating through fds.
        # https://travis-ci.org/giampaolo/psutil/jobs/225694530
        p = psutil.Process()
        files = p.open_files()
        with open(self.get_testfn(), 'w'):
            # give the kernel some time to see the new file
            call_until(lambda: len(p.open_files()) != len(files))
            patch_point = 'builtins.open' if PY3 else '__builtin__.open'
            with mock.patch(
                patch_point, side_effect=IOError(errno.ENOENT, "")
            ) as m:
                assert p.open_files() == []
                assert m.called

    def test_open_files_enametoolong(self):
        # Simulate a case where /proc/{pid}/fd/{fd} symlink
        # points to a file with full path longer than PATH_MAX, see:
        # https://github.com/giampaolo/psutil/issues/1940
        p = psutil.Process()
        files = p.open_files()
        with open(self.get_testfn(), 'w'):
            # give the kernel some time to see the new file
            call_until(lambda: len(p.open_files()) != len(files))
            patch_point = 'psutil._pslinux.os.readlink'
            with mock.patch(
                patch_point, side_effect=OSError(errno.ENAMETOOLONG, "")
            ) as m:
                with mock.patch("psutil._pslinux.debug"):
                    assert p.open_files() == []
                    assert m.called

    # --- mocked tests

    def test_terminal_mocked(self):
        with mock.patch(
            'psutil._pslinux._psposix.get_terminal_map', return_value={}
        ) as m:
            assert psutil._pslinux.Process(os.getpid()).terminal() is None
            assert m.called

    # TODO: re-enable this test.
    # def test_num_ctx_switches_mocked(self):
    #     with mock.patch('psutil._common.open', create=True) as m:
    #         self.assertRaises(
    #             NotImplementedError,
    #             psutil._pslinux.Process(os.getpid()).num_ctx_switches)
    #         assert m.called

    def test_cmdline_mocked(self):
        # see: https://github.com/giampaolo/psutil/issues/639
        p = psutil.Process()
        fake_file = io.StringIO(u'foo\x00bar\x00')
        with mock.patch(
            'psutil._common.open', return_value=fake_file, create=True
        ) as m:
            assert p.cmdline() == ['foo', 'bar']
            assert m.called
        fake_file = io.StringIO(u'foo\x00bar\x00\x00')
        with mock.patch(
            'psutil._common.open', return_value=fake_file, create=True
        ) as m:
            assert p.cmdline() == ['foo', 'bar', '']
            assert m.called

    def test_cmdline_spaces_mocked(self):
        # see: https://github.com/giampaolo/psutil/issues/1179
        p = psutil.Process()
        fake_file = io.StringIO(u'foo bar ')
        with mock.patch(
            'psutil._common.open', return_value=fake_file, create=True
        ) as m:
            assert p.cmdline() == ['foo', 'bar']
            assert m.called
        fake_file = io.StringIO(u'foo bar  ')
        with mock.patch(
            'psutil._common.open', return_value=fake_file, create=True
        ) as m:
            assert p.cmdline() == ['foo', 'bar', '']
            assert m.called

    def test_cmdline_mixed_separators(self):
        # https://github.com/giampaolo/psutil/issues/
        #    1179#issuecomment-552984549
        p = psutil.Process()
        fake_file = io.StringIO(u'foo\x20bar\x00')
        with mock.patch(
            'psutil._common.open', return_value=fake_file, create=True
        ) as m:
            assert p.cmdline() == ['foo', 'bar']
            assert m.called
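
# The three cmdline tests above cover: NUL-separated argv (the norm per
# proc(5)), space-separated argv (processes that rewrite their argv, issue
# #1179), and space-separated argv with a trailing NUL. A sketch consistent
# with exactly those cases (illustrative, not psutil's literal code):

```python
def parse_cmdline(data):
    """Split raw /proc/<pid>/cmdline content into an argv list."""
    # args are normally NUL-separated with a trailing NUL; a missing
    # trailing NUL signals space-separated rewritten argv
    sep = '\x00' if data.endswith('\x00') else ' '
    if data.endswith(sep):
        data = data[:-1]
    args = data.split(sep)
    # a lone NUL-terminated element containing spaces means the
    # process rewrote its argv with space separators
    if sep == '\x00' and len(args) == 1 and ' ' in data:
        args = data.split(' ')
    return args
```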

    def test_readlink_path_deleted_mocked(self):
        with mock.patch(
            'psutil._pslinux.os.readlink', return_value='/home/foo (deleted)'
        ):
            assert psutil.Process().exe() == "/home/foo"
            assert psutil.Process().cwd() == "/home/foo"

    def test_threads_mocked(self):
        # Test the case where os.listdir() returns a file (thread)
        # which no longer exists by the time we open() it (race
        # condition). threads() is supposed to ignore that instead
        # of raising NSP.
        def open_mock_1(name, *args, **kwargs):
            if name.startswith('/proc/%s/task' % os.getpid()):
                raise IOError(errno.ENOENT, "")
            else:
                return orig_open(name, *args, **kwargs)

        orig_open = open
        patch_point = 'builtins.open' if PY3 else '__builtin__.open'
        with mock.patch(patch_point, side_effect=open_mock_1) as m:
            ret = psutil.Process().threads()
            assert m.called
            assert ret == []

        # ...but if it bumps into something != ENOENT we want an
        # exception.
        def open_mock_2(name, *args, **kwargs):
            if name.startswith('/proc/%s/task' % os.getpid()):
                raise IOError(errno.EPERM, "")
            else:
                return orig_open(name, *args, **kwargs)

        with mock.patch(patch_point, side_effect=open_mock_2):
            with pytest.raises(psutil.AccessDenied):
                psutil.Process().threads()

    def test_exe_mocked(self):
        with mock.patch(
            'psutil._pslinux.readlink', side_effect=OSError(errno.ENOENT, "")
        ) as m:
            # deactivate guessing from cmdline()
            with mock.patch(
                'psutil._pslinux.Process.cmdline', return_value=[]
            ):
                ret = psutil.Process().exe()
                assert m.called
                assert ret == ""  # noqa

    def test_issue_1014(self):
        # Emulates a case where smaps file does not exist. In this case
        # wrap_exception decorator should not raise NoSuchProcess.
        with mock_open_exception(
            '/proc/%s/smaps' % os.getpid(), IOError(errno.ENOENT, "")
        ) as m:
            p = psutil.Process()
            with pytest.raises(FileNotFoundError):
                p.memory_maps()
            assert m.called

    @pytest.mark.skipif(not HAS_RLIMIT, reason="not supported")
    def test_rlimit_zombie(self):
        # Emulate a case where rlimit() raises ENOSYS, which may
        # happen in case of zombie process:
        # https://travis-ci.org/giampaolo/psutil/jobs/51368273
        with mock.patch(
            "psutil._pslinux.prlimit", side_effect=OSError(errno.ENOSYS, "")
        ) as m1:
            with mock.patch(
                "psutil._pslinux.Process._is_zombie", return_value=True
            ) as m2:
                p = psutil.Process()
                p.name()
                with pytest.raises(psutil.ZombieProcess) as cm:
                    p.rlimit(psutil.RLIMIT_NOFILE)
        assert m1.called
        assert m2.called
        assert cm.value.pid == p.pid
        assert cm.value.name == p.name()

    def test_stat_file_parsing(self):
        args = [
            "0",  # pid
            "(cat)",  # name
            "Z",  # status
            "1",  # ppid
            "0",  # pgrp
            "0",  # session
            "0",  # tty
            "0",  # tpgid
            "0",  # flags
            "0",  # minflt
            "0",  # cminflt
            "0",  # majflt
            "0",  # cmajflt
            "2",  # utime
            "3",  # stime
            "4",  # cutime
            "5",  # cstime
            "0",  # priority
            "0",  # nice
            "0",  # num_threads
            "0",  # itrealvalue
            "6",  # starttime
            "0",  # vsize
            "0",  # rss
            "0",  # rsslim
            "0",  # startcode
            "0",  # endcode
            "0",  # startstack
            "0",  # kstkesp
            "0",  # kstkeip
            "0",  # signal
            "0",  # blocked
            "0",  # sigignore
            "0",  # sigcatch
            "0",  # wchan
            "0",  # nswap
            "0",  # cnswap
            "0",  # exit_signal
            "6",  # processor
            "0",  # rt priority
            "0",  # policy
            "7",  # delayacct_blkio_ticks
        ]
        content = " ".join(args).encode()
        with mock_open_content({"/proc/%s/stat" % os.getpid(): content}):
            p = psutil.Process()
            assert p.name() == 'cat'
            assert p.status() == psutil.STATUS_ZOMBIE
            assert p.ppid() == 1
            assert p.create_time() == 6 / CLOCK_TICKS + psutil.boot_time()
            cpu = p.cpu_times()
            assert cpu.user == 2 / CLOCK_TICKS
            assert cpu.system == 3 / CLOCK_TICKS
            assert cpu.children_user == 4 / CLOCK_TICKS
            assert cpu.children_system == 5 / CLOCK_TICKS
            assert cpu.iowait == 7 / CLOCK_TICKS
            assert p.cpu_num() == 6

    def test_status_file_parsing(self):
        content = textwrap.dedent("""\
            Uid:\t1000\t1001\t1002\t1003
            Gid:\t1004\t1005\t1006\t1007
            Threads:\t66
            Cpus_allowed:\tf
            Cpus_allowed_list:\t0-7
            voluntary_ctxt_switches:\t12
            nonvoluntary_ctxt_switches:\t13""").encode()
        with mock_open_content({"/proc/%s/status" % os.getpid(): content}):
            p = psutil.Process()
            assert p.num_ctx_switches().voluntary == 12
            assert p.num_ctx_switches().involuntary == 13
            assert p.num_threads() == 66
            uids = p.uids()
            assert uids.real == 1000
            assert uids.effective == 1001
            assert uids.saved == 1002
            gids = p.gids()
            assert gids.real == 1004
            assert gids.effective == 1005
            assert gids.saved == 1006
            assert p._proc._get_eligible_cpus() == list(range(8))

    def test_net_connections_enametoolong(self):
        # Simulate a case where /proc/{pid}/fd/{fd} symlink points to
        # a file with full path longer than PATH_MAX, see:
        # https://github.com/giampaolo/psutil/issues/1940
        with mock.patch(
            'psutil._pslinux.os.readlink',
            side_effect=OSError(errno.ENAMETOOLONG, ""),
        ) as m:
            p = psutil.Process()
            with mock.patch("psutil._pslinux.debug"):
                assert p.net_connections() == []
                assert m.called


@pytest.mark.skipif(not LINUX, reason="LINUX only")
class TestProcessAgainstStatus(PsutilTestCase):
    """/proc/pid/stat and /proc/pid/status have many values in common.
    Whenever possible, psutil uses /proc/pid/stat (it's faster).
    For all those cases we check that the value found in
    /proc/pid/stat (by psutil) matches the one found in
    /proc/pid/status.
    """

    @classmethod
    def setUpClass(cls):
        cls.proc = psutil.Process()

    def read_status_file(self, linestart):
        with psutil._psplatform.open_text(
            '/proc/%s/status' % self.proc.pid
        ) as f:
            for line in f:
                line = line.strip()
                if line.startswith(linestart):
                    value = line.partition('\t')[2]
                    try:
                        return int(value)
                    except ValueError:
                        return value
            raise ValueError("can't find %r" % linestart)

    def test_name(self):
        value = self.read_status_file("Name:")
        assert self.proc.name() == value

    @pytest.mark.skipif(QEMU_USER, reason="QEMU user not supported")
    def test_status(self):
        value = self.read_status_file("State:")
        value = value[value.find('(') + 1 : value.rfind(')')]
        value = value.replace(' ', '-')
        assert self.proc.status() == value

    def test_ppid(self):
        value = self.read_status_file("PPid:")
        assert self.proc.ppid() == value

    def test_num_threads(self):
        value = self.read_status_file("Threads:")
        assert self.proc.num_threads() == value

    def test_uids(self):
        value = self.read_status_file("Uid:")
        value = tuple(map(int, value.split()[1:4]))
        assert self.proc.uids() == value

    def test_gids(self):
        value = self.read_status_file("Gid:")
        value = tuple(map(int, value.split()[1:4]))
        assert self.proc.gids() == value

    @retry_on_failure()
    def test_num_ctx_switches(self):
        value = self.read_status_file("voluntary_ctxt_switches:")
        assert self.proc.num_ctx_switches().voluntary == value
        value = self.read_status_file("nonvoluntary_ctxt_switches:")
        assert self.proc.num_ctx_switches().involuntary == value

    def test_cpu_affinity(self):
        value = self.read_status_file("Cpus_allowed_list:")
        if '-' in str(value):
            min_, max_ = map(int, value.split('-'))
            assert self.proc.cpu_affinity() == list(range(min_, max_ + 1))

    def test_cpu_affinity_eligible_cpus(self):
        value = self.read_status_file("Cpus_allowed_list:")
        with mock.patch("psutil._pslinux.per_cpu_times") as m:
            self.proc._proc._get_eligible_cpus()
        if '-' in str(value):
            assert not m.called
        else:
            assert m.called


# =====================================================================
# --- test utils
# =====================================================================


@pytest.mark.skipif(not LINUX, reason="LINUX only")
class TestUtils(PsutilTestCase):
    def test_readlink(self):
        with mock.patch("os.readlink", return_value="foo (deleted)") as m:
            assert psutil._psplatform.readlink("bar") == "foo"
            assert m.called
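The test above exercises psutil's handling of the " (deleted)" suffix that Linux appends to `/proc` symlink targets of unlinked files. A minimal standalone sketch of that suffix stripping (`strip_deleted` is a hypothetical helper for illustration; the real logic lives in `psutil._psplatform.readlink`):

```python
def strip_deleted(path):
    # Linux appends " (deleted)" to /proc/*/exe, cwd, fd symlinks
    # whose target file has been unlinked; strip it if present.
    suffix = " (deleted)"
    if path.endswith(suffix):
        return path[: -len(suffix)]
    return path
```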

# psutil/tests/test_sunos.py
#!/usr/bin/env python3

# Copyright (c) 2009, Giampaolo Rodola'. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

"""Sun OS specific tests."""

import os

import psutil
from psutil import SUNOS
from psutil.tests import PsutilTestCase
from psutil.tests import pytest
from psutil.tests import sh


@pytest.mark.skipif(not SUNOS, reason="SUNOS only")
class SunOSSpecificTestCase(PsutilTestCase):
    def test_swap_memory(self):
        out = sh('env PATH=/usr/sbin:/sbin:%s swap -l' % os.environ['PATH'])
        lines = out.strip().split('\n')[1:]
        if not lines:
            raise ValueError('no swap device(s) configured')
        total = free = 0
        for line in lines:
            fields = line.split()
            # "swap -l" reports sizes in 512-byte blocks; accumulate
            # across all configured swap devices
            total += int(fields[3]) * 512
            free += int(fields[4]) * 512
        used = total - free

        psutil_swap = psutil.swap_memory()
        assert psutil_swap.total == total
        assert psutil_swap.used == used
        assert psutil_swap.free == free

    def test_cpu_count(self):
        out = sh("/usr/sbin/psrinfo")
        assert psutil.cpu_count() == len(out.split('\n'))

# psutil/tests/test_posix.py
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

# Copyright (c) 2009, Giampaolo Rodola'. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

"""POSIX specific tests."""

import datetime
import errno
import os
import re
import subprocess
import time

import psutil
from psutil import AIX
from psutil import BSD
from psutil import LINUX
from psutil import MACOS
from psutil import OPENBSD
from psutil import POSIX
from psutil import SUNOS
from psutil.tests import AARCH64
from psutil.tests import HAS_NET_IO_COUNTERS
from psutil.tests import PYTHON_EXE
from psutil.tests import QEMU_USER
from psutil.tests import PsutilTestCase
from psutil.tests import mock
from psutil.tests import pytest
from psutil.tests import retry_on_failure
from psutil.tests import sh
from psutil.tests import skip_on_access_denied
from psutil.tests import spawn_testproc
from psutil.tests import terminate
from psutil.tests import which


if POSIX:
    import mmap
    import resource

    from psutil._psutil_posix import getpagesize


def ps(fmt, pid=None):
    """Wrapper for calling the ps command with a little bit of cross-platform
    support for a narrow range of features.
    """

    cmd = ['ps']

    if LINUX:
        cmd.append('--no-headers')

    if pid is not None:
        cmd.extend(['-p', str(pid)])
    else:
        if SUNOS or AIX:
            cmd.append('-A')
        else:
            cmd.append('ax')

    if SUNOS:
        fmt = fmt.replace("start", "stime")

    cmd.extend(['-o', fmt])

    output = sh(cmd)

    # on Linux "--no-headers" already removed the header line;
    # elsewhere drop the first (header) line ourselves
    output = output.splitlines() if LINUX else output.splitlines()[1:]

    all_output = []
    for line in output:
        line = line.strip()

        try:
            line = int(line)
        except ValueError:
            pass

        all_output.append(line)

    if pid is None:
        return all_output
    else:
        return all_output[0]


# ps "-o" field names differ wildly between platforms.
# "comm" means "only executable name" but is not available on BSD platforms.
# "args" means "command with all its arguments", and is also not available
# on BSD platforms.
# "command" is like "args" on most platforms, but like "comm" on AIX,
# and not available on SUNOS.
# so for the executable name we can use "comm" on Solaris and split "command"
# on other platforms.
# to get the cmdline (with args) we have to use "args" on AIX and
# Solaris, and can use "command" on all others.
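The field selection described above can be sketched as one small helper (`ps_field_for` is hypothetical, for illustration only; the `ps_name()` and `ps_args()` helpers below hard-code the same choices):

```python
def ps_field_for(what, platform):
    """Return the 'ps -o' field to use for `what` ('name' or
    'cmdline') on `platform` ('linux', 'bsd', 'macos', 'aix',
    'sunos'), following the rules in the comment above."""
    if what == "name":
        # SUNOS has no usable "command"; use "comm" there and split
        # "command" everywhere else.
        return "comm" if platform == "sunos" else "command"
    elif what == "cmdline":
        # AIX treats "command" like "comm", and SUNOS lacks it, so
        # both need "args"; "command" works on the rest.
        return "args" if platform in ("aix", "sunos") else "command"
    raise ValueError(what)
```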


def ps_name(pid):
    field = "command"
    if SUNOS:
        field = "comm"
    command = ps(field, pid).split()
    if QEMU_USER:
        assert "/bin/qemu-" in command[0]
        return command[1]
    return command[0]


def ps_args(pid):
    field = "command"
    if AIX or SUNOS:
        field = "args"
    out = ps(field, pid)
    # observed on BSD + Github CI: '/usr/local/bin/python3 -E -O (python3.9)'
    out = re.sub(r"\(python.*?\)$", "", out)
    return out.strip()


def ps_rss(pid):
    field = "rss"
    if AIX:
        field = "rssize"
    return ps(field, pid)


def ps_vsz(pid):
    field = "vsz"
    if AIX:
        field = "vsize"
    return ps(field, pid)


def df(device):
    try:
        out = sh("df -k %s" % device).strip()
    except RuntimeError as err:
        if "device busy" in str(err).lower():
            raise pytest.skip("df returned EBUSY")
        raise
    line = out.split('\n')[1]
    fields = line.split()
    sys_total = int(fields[1]) * 1024
    sys_used = int(fields[2]) * 1024
    sys_free = int(fields[3]) * 1024
    sys_percent = float(fields[4].replace('%', ''))
    return (sys_total, sys_used, sys_free, sys_percent)
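As a quick illustration of the field layout `df()` relies on, here is the same parsing applied to a hypothetical `df -k` output (the sample line and its values are invented for illustration):

```python
# Hypothetical "df -k" output: sizes are in 1K blocks and the fields
# are filesystem, total, used, available, percent, mountpoint.
sample = (
    "Filesystem 1K-blocks    Used Available Use% Mounted on\n"
    "/dev/sda1   41152736 9389056  29650196  25% /"
)
line = sample.split("\n")[1]
fields = line.split()
sys_total = int(fields[1]) * 1024  # convert 1K blocks to bytes
sys_used = int(fields[2]) * 1024
sys_free = int(fields[3]) * 1024
sys_percent = float(fields[4].replace("%", ""))
```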


@pytest.mark.skipif(not POSIX, reason="POSIX only")
class TestProcess(PsutilTestCase):
    """Compare psutil results against 'ps' command line utility (mainly)."""

    @classmethod
    def setUpClass(cls):
        cls.pid = spawn_testproc(
            [PYTHON_EXE, "-E", "-O"], stdin=subprocess.PIPE
        ).pid

    @classmethod
    def tearDownClass(cls):
        terminate(cls.pid)

    def test_ppid(self):
        ppid_ps = ps('ppid', self.pid)
        ppid_psutil = psutil.Process(self.pid).ppid()
        assert ppid_ps == ppid_psutil

    def test_uid(self):
        uid_ps = ps('uid', self.pid)
        uid_psutil = psutil.Process(self.pid).uids().real
        assert uid_ps == uid_psutil

    def test_gid(self):
        gid_ps = ps('rgid', self.pid)
        gid_psutil = psutil.Process(self.pid).gids().real
        assert gid_ps == gid_psutil

    def test_username(self):
        username_ps = ps('user', self.pid)
        username_psutil = psutil.Process(self.pid).username()
        assert username_ps == username_psutil

    def test_username_no_resolution(self):
        # Emulate a case where the system can't resolve the uid to
        # a username in which case psutil is supposed to return
        # the stringified uid.
        p = psutil.Process()
        with mock.patch("psutil.pwd.getpwuid", side_effect=KeyError) as fun:
            assert p.username() == str(p.uids().real)
            assert fun.called

    @skip_on_access_denied()
    @retry_on_failure()
    def test_rss_memory(self):
        # give python interpreter some time to properly initialize
        # so that the results are the same
        time.sleep(0.1)
        rss_ps = ps_rss(self.pid)
        rss_psutil = psutil.Process(self.pid).memory_info()[0] / 1024
        assert rss_ps == rss_psutil

    @skip_on_access_denied()
    @retry_on_failure()
    def test_vsz_memory(self):
        # give python interpreter some time to properly initialize
        # so that the results are the same
        time.sleep(0.1)
        vsz_ps = ps_vsz(self.pid)
        vsz_psutil = psutil.Process(self.pid).memory_info()[1] / 1024
        assert vsz_ps == vsz_psutil

    def test_name(self):
        name_ps = ps_name(self.pid)
        # remove path if there is any, from the command
        name_ps = os.path.basename(name_ps).lower()
        name_psutil = psutil.Process(self.pid).name().lower()
        # ...because of how we calculate PYTHON_EXE; on MACOS this may
        # be "pythonX.Y".
        name_ps = re.sub(r"\d\.\d", "", name_ps)
        name_psutil = re.sub(r"\d\.\d", "", name_psutil)
        # ...may also be "python.X"
        name_ps = re.sub(r"\d", "", name_ps)
        name_psutil = re.sub(r"\d", "", name_psutil)
        assert name_ps == name_psutil

    def test_name_long(self):
        # On UNIX the kernel truncates the name to the first 15
        # characters. In such a case psutil tries to determine the
        # full name from the cmdline.
        name = "long-program-name"
        cmdline = ["long-program-name-extended", "foo", "bar"]
        with mock.patch("psutil._psplatform.Process.name", return_value=name):
            with mock.patch(
                "psutil._psplatform.Process.cmdline", return_value=cmdline
            ):
                p = psutil.Process()
                assert p.name() == "long-program-name-extended"

    def test_name_long_cmdline_ad_exc(self):
        # Same as above but emulates a case where cmdline() raises
        # AccessDenied in which case psutil is supposed to return
        # the truncated name instead of crashing.
        name = "long-program-name"
        with mock.patch("psutil._psplatform.Process.name", return_value=name):
            with mock.patch(
                "psutil._psplatform.Process.cmdline",
                side_effect=psutil.AccessDenied(0, ""),
            ):
                p = psutil.Process()
                assert p.name() == "long-program-name"

    def test_name_long_cmdline_nsp_exc(self):
        # Same as above but emulates a case where cmdline() raises NSP
        # which is supposed to propagate.
        name = "long-program-name"
        with mock.patch("psutil._psplatform.Process.name", return_value=name):
            with mock.patch(
                "psutil._psplatform.Process.cmdline",
                side_effect=psutil.NoSuchProcess(0, ""),
            ):
                p = psutil.Process()
                with pytest.raises(psutil.NoSuchProcess):
                    p.name()

    @pytest.mark.skipif(MACOS or BSD, reason="ps -o start not available")
    def test_create_time(self):
        time_ps = ps('start', self.pid)
        time_psutil = psutil.Process(self.pid).create_time()
        time_psutil_tstamp = datetime.datetime.fromtimestamp(
            time_psutil
        ).strftime("%H:%M:%S")
        # sometimes ps shows the time rounded up instead of down, so we check
        # for both possible values
        round_time_psutil = round(time_psutil)
        round_time_psutil_tstamp = datetime.datetime.fromtimestamp(
            round_time_psutil
        ).strftime("%H:%M:%S")
        assert time_ps in [time_psutil_tstamp, round_time_psutil_tstamp]

    def test_exe(self):
        ps_pathname = ps_name(self.pid)
        psutil_pathname = psutil.Process(self.pid).exe()
        try:
            assert ps_pathname == psutil_pathname
        except AssertionError:
            # certain platforms such as BSD are more accurate returning:
            # "/usr/local/bin/python2.7"
            # ...instead of:
            # "/usr/local/bin/python"
            # We do not want to consider this difference in accuracy
            # an error.
            # Truncate the more accurate psutil path to the length of
            # the ps one before comparing.
            adjusted_psutil_pathname = psutil_pathname[: len(ps_pathname)]
            assert ps_pathname == adjusted_psutil_pathname

    # On macOS the official python installer exposes a python wrapper
    # that executes a python executable hidden inside an application
    # bundle inside the Python framework.
    # There's a race condition between the ps call and the psutil call
    # below, depending on when the execve call completes, so retry on
    # failure.
    @retry_on_failure()
    def test_cmdline(self):
        ps_cmdline = ps_args(self.pid)
        psutil_cmdline = " ".join(psutil.Process(self.pid).cmdline())
        if AARCH64 and len(ps_cmdline) < len(psutil_cmdline):
            assert psutil_cmdline.startswith(ps_cmdline)
        else:
            assert ps_cmdline == psutil_cmdline

    # On SUNOS "ps" reads niceness /proc/pid/psinfo which returns an
    # incorrect value (20); the real deal is getpriority(2) which
    # returns 0; psutil relies on it, see:
    # https://github.com/giampaolo/psutil/issues/1082
    # AIX has the same issue
    @pytest.mark.skipif(SUNOS, reason="not reliable on SUNOS")
    @pytest.mark.skipif(AIX, reason="not reliable on AIX")
    def test_nice(self):
        ps_nice = ps('nice', self.pid)
        psutil_nice = psutil.Process().nice()
        assert ps_nice == psutil_nice


@pytest.mark.skipif(not POSIX, reason="POSIX only")
class TestSystemAPIs(PsutilTestCase):
    """Test some system APIs."""

    @retry_on_failure()
    def test_pids(self):
        # Note: this test might fail if the OS is starting/killing
        # other processes in the meantime
        pids_ps = sorted(ps("pid"))
        pids_psutil = psutil.pids()

        # on MACOS and OPENBSD ps doesn't show pid 0
        if MACOS or (OPENBSD and 0 not in pids_ps):
            pids_ps.insert(0, 0)

        # There will often be one more process in pids_ps for ps itself
        if len(pids_ps) - len(pids_psutil) > 1:
            difference = [x for x in pids_psutil if x not in pids_ps] + [
                x for x in pids_ps if x not in pids_psutil
            ]
            self.fail("difference: " + str(difference))

    # for some reason ifconfig -a does not report all interfaces
    # returned by psutil
    @pytest.mark.skipif(SUNOS, reason="unreliable on SUNOS")
    @pytest.mark.skipif(not which('ifconfig'), reason="no ifconfig cmd")
    @pytest.mark.skipif(not HAS_NET_IO_COUNTERS, reason="not supported")
    def test_nic_names(self):
        output = sh("ifconfig -a")
        for nic in psutil.net_io_counters(pernic=True):
            for line in output.split():
                if line.startswith(nic):
                    break
            else:
                self.fail(
                    "couldn't find %s nic in 'ifconfig -a' output\n%s"
                    % (nic, output)
                )

    # @pytest.mark.skipif(CI_TESTING and not psutil.users(),
    #                     reason="unreliable on CI")
    @retry_on_failure()
    def test_users(self):
        out = sh("who -u")
        if not out.strip():
            raise pytest.skip("no users on this system")
        lines = out.split('\n')
        users = [x.split()[0] for x in lines]
        terminals = [x.split()[1] for x in lines]
        assert len(users) == len(psutil.users())
        with self.subTest(psutil=psutil.users(), who=out):
            for idx, u in enumerate(psutil.users()):
                assert u.name == users[idx]
                assert u.terminal == terminals[idx]
                if u.pid is not None:  # None on OpenBSD
                    psutil.Process(u.pid)

    @retry_on_failure()
    def test_users_started(self):
        out = sh("who -u")
        if not out.strip():
            raise pytest.skip("no users on this system")
        tstamp = None
        # '2023-04-11 09:31' (Linux)
        started = re.findall(r"\d\d\d\d-\d\d-\d\d \d\d:\d\d", out)
        if started:
            tstamp = "%Y-%m-%d %H:%M"
        else:
            # 'Apr 10 22:27' (macOS)
            started = re.findall(r"[A-Z][a-z][a-z] \d\d \d\d:\d\d", out)
            if started:
                tstamp = "%b %d %H:%M"
            else:
                # 'Apr 10'
                started = re.findall(r"[A-Z][a-z][a-z] \d\d", out)
                if started:
                    tstamp = "%b %d"
                else:
                    # 'apr 10' (sunOS)
                    started = re.findall(r"[a-z][a-z][a-z] \d\d", out)
                    if started:
                        tstamp = "%b %d"
                        started = [x.capitalize() for x in started]

        if not tstamp:
            raise pytest.skip(
                "cannot interpret tstamp in who output\n%s" % (out)
            )

        with self.subTest(psutil=psutil.users(), who=out):
            for idx, u in enumerate(psutil.users()):
                psutil_value = datetime.datetime.fromtimestamp(
                    u.started
                ).strftime(tstamp)
                assert psutil_value == started[idx]

    def test_pid_exists_let_raise(self):
        # According to "man 2 kill" possible error values for kill
        # are (EINVAL, EPERM, ESRCH). Test that any other errno
        # results in an exception.
        with mock.patch(
            "psutil._psposix.os.kill", side_effect=OSError(errno.EBADF, "")
        ) as m:
            with pytest.raises(OSError):
                psutil._psposix.pid_exists(os.getpid())
            assert m.called

    def test_os_waitpid_let_raise(self):
        # os.waitpid() is supposed to catch EINTR and ECHILD only.
        # Test that any other errno results in an exception.
        with mock.patch(
            "psutil._psposix.os.waitpid", side_effect=OSError(errno.EBADF, "")
        ) as m:
            with pytest.raises(OSError):
                psutil._psposix.wait_pid(os.getpid())
            assert m.called

    def test_os_waitpid_eintr(self):
        # os.waitpid() is supposed to "retry" on EINTR.
        with mock.patch(
            "psutil._psposix.os.waitpid", side_effect=OSError(errno.EINTR, "")
        ) as m:
            with pytest.raises(psutil._psposix.TimeoutExpired):
                psutil._psposix.wait_pid(os.getpid(), timeout=0.01)
            assert m.called

    def test_os_waitpid_bad_ret_status(self):
        # Simulate os.waitpid() returning a bad status.
        with mock.patch(
            "psutil._psposix.os.waitpid", return_value=(1, -1)
        ) as m:
            with pytest.raises(ValueError):
                psutil._psposix.wait_pid(os.getpid())
            assert m.called

    # AIX can return '-' in df output instead of numbers, e.g. for /proc
    @pytest.mark.skipif(AIX, reason="unreliable on AIX")
    @retry_on_failure()
    def test_disk_usage(self):
        tolerance = 4 * 1024 * 1024  # 4MB
        for part in psutil.disk_partitions(all=False):
            usage = psutil.disk_usage(part.mountpoint)
            try:
                sys_total, sys_used, sys_free, sys_percent = df(part.device)
            except RuntimeError as err:
                # see:
                # https://travis-ci.org/giampaolo/psutil/jobs/138338464
                # https://travis-ci.org/giampaolo/psutil/jobs/138343361
                err = str(err).lower()
                if (
                    "no such file or directory" in err
                    or "raw devices not supported" in err
                    or "permission denied" in err
                ):
                    continue
                raise
            else:
                assert abs(usage.total - sys_total) < tolerance
                assert abs(usage.used - sys_used) < tolerance
                assert abs(usage.free - sys_free) < tolerance
                assert abs(usage.percent - sys_percent) <= 1


@pytest.mark.skipif(not POSIX, reason="POSIX only")
class TestMisc(PsutilTestCase):
    def test_getpagesize(self):
        pagesize = getpagesize()
        assert pagesize > 0
        assert pagesize == resource.getpagesize()
        assert pagesize == mmap.PAGESIZE

# psutil/tests/test_aix.py
#!/usr/bin/env python3

# Copyright (c) 2009, Giampaolo Rodola'
# Copyright (c) 2017, Arnon Yaari
# All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

"""AIX specific tests."""

import re

import psutil
from psutil import AIX
from psutil.tests import PsutilTestCase
from psutil.tests import pytest
from psutil.tests import sh


@pytest.mark.skipif(not AIX, reason="AIX only")
class AIXSpecificTestCase(PsutilTestCase):
    def test_virtual_memory(self):
        out = sh('/usr/bin/svmon -O unit=KB')
        re_pattern = r"memory\s*"
        for field in ("size inuse free pin virtual available mmode").split():
            re_pattern += r"(?P<%s>\S+)\s+" % (field,)
        matchobj = re.search(re_pattern, out)

        assert matchobj is not None

        KB = 1024
        total = int(matchobj.group("size")) * KB
        available = int(matchobj.group("available")) * KB
        used = int(matchobj.group("inuse")) * KB
        free = int(matchobj.group("free")) * KB

        psutil_result = psutil.virtual_memory()

        # TOLERANCE_SYS_MEM from psutil.tests is not enough. For some reason
        # we're seeing differences of ~1.2 MB. 2 MB is still a good tolerance
        # when compared to GBs.
        TOLERANCE_SYS_MEM = 2 * KB * KB  # 2 MB
        assert psutil_result.total == total
        assert abs(psutil_result.used - used) < TOLERANCE_SYS_MEM
        assert abs(psutil_result.available - available) < TOLERANCE_SYS_MEM
        assert abs(psutil_result.free - free) < TOLERANCE_SYS_MEM

    def test_swap_memory(self):
        out = sh('/usr/sbin/lsps -a')
        # From the man page, "The size is given in megabytes" so we assume
        # we'll always have 'MB' in the result
        # TODO maybe try to use "swap -l" to check "used" too, but its units
        # are not guaranteed to be "MB" so parsing may not be consistent
        matchobj = re.search(
            r"(?P<space>\S+)\s+"
            r"(?P<vol>\S+)\s+"
            r"(?P<vg>\S+)\s+"
            r"(?P<size>\d+)MB",
            out,
        )

        assert matchobj is not None

        total_mb = int(matchobj.group("size"))
        MB = 1024**2
        psutil_result = psutil.swap_memory()
        # we divide our result by MB instead of multiplying the lsps value by
        # MB because lsps may round down, so we round down too
        assert int(psutil_result.total / MB) == total_mb

    def test_cpu_stats(self):
        out = sh('/usr/bin/mpstat -a')

        re_pattern = r"ALL\s*"
        for field in (
            "min maj mpcs mpcr dev soft dec ph cs ics bound rq "
            "push S3pull S3grd S0rd S1rd S2rd S3rd S4rd S5rd "
            "sysc"
        ).split():
            re_pattern += r"(?P<%s>\S+)\s+" % (field,)
        matchobj = re.search(re_pattern, out)

        assert matchobj is not None

        # numbers are usually in the millions so 1000 is ok for tolerance
        CPU_STATS_TOLERANCE = 1000
        psutil_result = psutil.cpu_stats()
        assert (
            abs(psutil_result.ctx_switches - int(matchobj.group("cs")))
            < CPU_STATS_TOLERANCE
        )
        assert (
            abs(psutil_result.syscalls - int(matchobj.group("sysc")))
            < CPU_STATS_TOLERANCE
        )
        assert (
            abs(psutil_result.interrupts - int(matchobj.group("dev")))
            < CPU_STATS_TOLERANCE
        )
        assert (
            abs(psutil_result.soft_interrupts - int(matchobj.group("soft")))
            < CPU_STATS_TOLERANCE
        )
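The dynamically built pattern above can be sanity-checked against a synthetic mpstat `ALL` row; the 22 counter values below are placeholders, not real mpstat output:

```python
import re

fields = (
    "min maj mpcs mpcr dev soft dec ph cs ics bound rq "
    "push S3pull S3grd S0rd S1rd S2rd S3rd S4rd S5rd "
    "sysc"
).split()

# Build the pattern the same way the test does: one named group per field.
re_pattern = r"ALL\s*"
for field in fields:
    re_pattern += r"(?P<%s>\S+)\s+" % (field,)

# Hypothetical mpstat "ALL" row: one placeholder value per counter column.
row = "ALL " + " ".join(str(n) for n in range(1, len(fields) + 1)) + "\n"
m = re.search(re_pattern, row)
assert m is not None
assert m.group("cs") == "9"       # 9th column maps to context switches
assert m.group("sysc") == "22"    # last column maps to syscalls
```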

    def test_cpu_count_logical(self):
        out = sh('/usr/bin/mpstat -a')
        mpstat_lcpu = int(re.search(r"lcpu=(\d+)", out).group(1))
        psutil_lcpu = psutil.cpu_count(logical=True)
        assert mpstat_lcpu == psutil_lcpu

    def test_net_if_addrs_names(self):
        out = sh('/etc/ifconfig -l')
        ifconfig_names = set(out.split())
        psutil_names = set(psutil.net_if_addrs().keys())
        assert ifconfig_names == psutil_names
# ======================= psutil/__init__.py =======================
# -*- coding: utf-8 -*-

# Copyright (c) 2009, Giampaolo Rodola'. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

"""psutil is a cross-platform library for retrieving information on
running processes and system utilization (CPU, memory, disks, network,
sensors) in Python. Supported platforms:

 - Linux
 - Windows
 - macOS
 - FreeBSD
 - OpenBSD
 - NetBSD
 - Sun Solaris
 - AIX

Works with Python versions 2.7 and 3.6+.
"""

from __future__ import division

import collections
import contextlib
import datetime
import functools
import os
import signal
import subprocess
import sys
import threading
import time


try:
    import pwd
except ImportError:
    pwd = None

from . import _common
from ._common import AIX
from ._common import BSD
from ._common import CONN_CLOSE
from ._common import CONN_CLOSE_WAIT
from ._common import CONN_CLOSING
from ._common import CONN_ESTABLISHED
from ._common import CONN_FIN_WAIT1
from ._common import CONN_FIN_WAIT2
from ._common import CONN_LAST_ACK
from ._common import CONN_LISTEN
from ._common import CONN_NONE
from ._common import CONN_SYN_RECV
from ._common import CONN_SYN_SENT
from ._common import CONN_TIME_WAIT
from ._common import FREEBSD  # NOQA
from ._common import LINUX
from ._common import MACOS
from ._common import NETBSD  # NOQA
from ._common import NIC_DUPLEX_FULL
from ._common import NIC_DUPLEX_HALF
from ._common import NIC_DUPLEX_UNKNOWN
from ._common import OPENBSD  # NOQA
from ._common import OSX  # deprecated alias
from ._common import POSIX  # NOQA
from ._common import POWER_TIME_UNKNOWN
from ._common import POWER_TIME_UNLIMITED
from ._common import STATUS_DEAD
from ._common import STATUS_DISK_SLEEP
from ._common import STATUS_IDLE
from ._common import STATUS_LOCKED
from ._common import STATUS_PARKED
from ._common import STATUS_RUNNING
from ._common import STATUS_SLEEPING
from ._common import STATUS_STOPPED
from ._common import STATUS_TRACING_STOP
from ._common import STATUS_WAITING
from ._common import STATUS_WAKING
from ._common import STATUS_ZOMBIE
from ._common import SUNOS
from ._common import WINDOWS
from ._common import AccessDenied
from ._common import Error
from ._common import NoSuchProcess
from ._common import TimeoutExpired
from ._common import ZombieProcess
from ._common import debug
from ._common import memoize_when_activated
from ._common import wrap_numbers as _wrap_numbers
from ._compat import PY3 as _PY3
from ._compat import PermissionError
from ._compat import ProcessLookupError
from ._compat import SubprocessTimeoutExpired as _SubprocessTimeoutExpired
from ._compat import long


if LINUX:
    # This is public API and it will be retrieved from _pslinux.py
    # via sys.modules.
    PROCFS_PATH = "/proc"

    from . import _pslinux as _psplatform
    from ._pslinux import IOPRIO_CLASS_BE  # NOQA
    from ._pslinux import IOPRIO_CLASS_IDLE  # NOQA
    from ._pslinux import IOPRIO_CLASS_NONE  # NOQA
    from ._pslinux import IOPRIO_CLASS_RT  # NOQA

elif WINDOWS:
    from . import _pswindows as _psplatform
    from ._psutil_windows import ABOVE_NORMAL_PRIORITY_CLASS  # NOQA
    from ._psutil_windows import BELOW_NORMAL_PRIORITY_CLASS  # NOQA
    from ._psutil_windows import HIGH_PRIORITY_CLASS  # NOQA
    from ._psutil_windows import IDLE_PRIORITY_CLASS  # NOQA
    from ._psutil_windows import NORMAL_PRIORITY_CLASS  # NOQA
    from ._psutil_windows import REALTIME_PRIORITY_CLASS  # NOQA
    from ._pswindows import CONN_DELETE_TCB  # NOQA
    from ._pswindows import IOPRIO_HIGH  # NOQA
    from ._pswindows import IOPRIO_LOW  # NOQA
    from ._pswindows import IOPRIO_NORMAL  # NOQA
    from ._pswindows import IOPRIO_VERYLOW  # NOQA

elif MACOS:
    from . import _psosx as _psplatform

elif BSD:
    from . import _psbsd as _psplatform

elif SUNOS:
    from . import _pssunos as _psplatform
    from ._pssunos import CONN_BOUND  # NOQA
    from ._pssunos import CONN_IDLE  # NOQA

    # This is public writable API which is read from _pslinux.py and
    # _pssunos.py via sys.modules.
    PROCFS_PATH = "/proc"

elif AIX:
    from . import _psaix as _psplatform

    # This is public API and it will be retrieved from _psaix.py
    # via sys.modules.
    PROCFS_PATH = "/proc"

else:  # pragma: no cover
    raise NotImplementedError('platform %s is not supported' % sys.platform)


# fmt: off
__all__ = [
    # exceptions
    "Error", "NoSuchProcess", "ZombieProcess", "AccessDenied",
    "TimeoutExpired",

    # constants
    "version_info", "__version__",

    "STATUS_RUNNING", "STATUS_IDLE", "STATUS_SLEEPING", "STATUS_DISK_SLEEP",
    "STATUS_STOPPED", "STATUS_TRACING_STOP", "STATUS_ZOMBIE", "STATUS_DEAD",
    "STATUS_WAKING", "STATUS_LOCKED", "STATUS_WAITING", "STATUS_LOCKED",
    "STATUS_PARKED",

    "CONN_ESTABLISHED", "CONN_SYN_SENT", "CONN_SYN_RECV", "CONN_FIN_WAIT1",
    "CONN_FIN_WAIT2", "CONN_TIME_WAIT", "CONN_CLOSE", "CONN_CLOSE_WAIT",
    "CONN_LAST_ACK", "CONN_LISTEN", "CONN_CLOSING", "CONN_NONE",
    # "CONN_IDLE", "CONN_BOUND",

    "AF_LINK",

    "NIC_DUPLEX_FULL", "NIC_DUPLEX_HALF", "NIC_DUPLEX_UNKNOWN",

    "POWER_TIME_UNKNOWN", "POWER_TIME_UNLIMITED",

    "BSD", "FREEBSD", "LINUX", "NETBSD", "OPENBSD", "MACOS", "OSX", "POSIX",
    "SUNOS", "WINDOWS", "AIX",

    # "RLIM_INFINITY", "RLIMIT_AS", "RLIMIT_CORE", "RLIMIT_CPU", "RLIMIT_DATA",
    # "RLIMIT_FSIZE", "RLIMIT_LOCKS", "RLIMIT_MEMLOCK", "RLIMIT_NOFILE",
    # "RLIMIT_NPROC", "RLIMIT_RSS", "RLIMIT_STACK", "RLIMIT_MSGQUEUE",
    # "RLIMIT_NICE", "RLIMIT_RTPRIO", "RLIMIT_RTTIME", "RLIMIT_SIGPENDING",

    # classes
    "Process", "Popen",

    # functions
    "pid_exists", "pids", "process_iter", "wait_procs",             # proc
    "virtual_memory", "swap_memory",                                # memory
    "cpu_times", "cpu_percent", "cpu_times_percent", "cpu_count",   # cpu
    "cpu_stats",  # "cpu_freq", "getloadavg"
    "net_io_counters", "net_connections", "net_if_addrs",           # network
    "net_if_stats",
    "disk_io_counters", "disk_partitions", "disk_usage",            # disk
    # "sensors_temperatures", "sensors_battery", "sensors_fans"     # sensors
    "users", "boot_time",                                           # others
]
# fmt: on


__all__.extend(_psplatform.__extra__all__)

# Linux, FreeBSD
if hasattr(_psplatform.Process, "rlimit"):
    # Populate global namespace with RLIM* constants.
    from . import _psutil_posix

    _globals = globals()
    _name = None
    for _name in dir(_psutil_posix):
        if _name.startswith('RLIM') and _name.isupper():
            _globals[_name] = getattr(_psutil_posix, _name)
            __all__.append(_name)
    del _globals, _name

AF_LINK = _psplatform.AF_LINK

__author__ = "Giampaolo Rodola'"
__version__ = "6.1.0"
version_info = tuple([int(num) for num in __version__.split('.')])

_timer = getattr(time, 'monotonic', time.time)
_TOTAL_PHYMEM = None
_LOWEST_PID = None
_SENTINEL = object()

# Sanity check in case the user messed up with psutil installation
# or did something weird with sys.path. In this case we might end
# up importing a python module using a C extension module which
# was compiled for a different version of psutil.
# We want to prevent that by failing sooner rather than later.
# See: https://github.com/giampaolo/psutil/issues/564
if int(__version__.replace('.', '')) != getattr(
    _psplatform.cext, 'version', None
):
    msg = "version conflict: %r C extension " % _psplatform.cext.__file__
    msg += "module was built for another version of psutil"
    if hasattr(_psplatform.cext, 'version'):
        msg += " (%s instead of %s)" % (
            '.'.join([x for x in str(_psplatform.cext.version)]),
            __version__,
        )
    else:
        msg += " (different than %s)" % __version__
    msg += "; you may try to 'pip uninstall psutil', manually remove %s" % (
        getattr(
            _psplatform.cext,
            "__file__",
            "the existing psutil install directory",
        )
    )
    msg += " or clean the virtual env somehow, then reinstall"
    raise ImportError(msg)


# =====================================================================
# --- Utils
# =====================================================================


if hasattr(_psplatform, 'ppid_map'):
    # Faster version (Windows and Linux).
    _ppid_map = _psplatform.ppid_map
else:  # pragma: no cover

    def _ppid_map():
        """Return a {pid: ppid, ...} dict for all running processes in
        one shot. Used to speed up Process.children().
        """
        ret = {}
        for pid in pids():
            try:
                ret[pid] = _psplatform.Process(pid).ppid()
            except (NoSuchProcess, ZombieProcess):
                pass
        return ret


def _pprint_secs(secs):
    """Format seconds in a human readable form."""
    now = time.time()
    secs_ago = int(now - secs)
    fmt = "%H:%M:%S" if secs_ago < 60 * 60 * 24 else "%Y-%m-%d %H:%M:%S"
    return datetime.datetime.fromtimestamp(secs).strftime(fmt)
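To illustrate the one-day cut-off, the same logic is reproduced standalone below (so the snippet runs on its own): recent timestamps render as time-only (always 8 characters), older ones include the date (always 19 characters):

```python
import datetime
import time

# Same logic as _pprint_secs(), reproduced so the snippet is standalone.
def pprint_secs(secs):
    secs_ago = int(time.time() - secs)
    fmt = "%H:%M:%S" if secs_ago < 60 * 60 * 24 else "%Y-%m-%d %H:%M:%S"
    return datetime.datetime.fromtimestamp(secs).strftime(fmt)

recent = pprint_secs(time.time() - 60)        # one minute ago: time only
old = pprint_secs(time.time() - 3 * 86400)    # three days ago: date + time
assert len(recent) == 8    # "%H:%M:%S" is always 8 chars
assert len(old) == 19      # "%Y-%m-%d %H:%M:%S" is always 19 chars
```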


# =====================================================================
# --- Process class
# =====================================================================


class Process(object):  # noqa: UP004
    """Represents an OS process with the given PID.
    If PID is omitted, the current process PID (os.getpid()) is used.
    Raise NoSuchProcess if PID does not exist.

    Note that most of the methods of this class do not check whether
    the PID of the process being queried has been reused. That means
    you may end up retrieving information for another process.

    The only exceptions for which process identity is pre-emptively
    checked and guaranteed are:

     - parent()
     - children()
     - nice() (set)
     - ionice() (set)
     - rlimit() (set)
     - cpu_affinity (set)
     - suspend()
     - resume()
     - send_signal()
     - terminate()
     - kill()

    To prevent this problem for all other methods you can use
    is_running() before querying the process.
    """

    def __init__(self, pid=None):
        self._init(pid)

    def _init(self, pid, _ignore_nsp=False):
        if pid is None:
            pid = os.getpid()
        else:
            if not _PY3 and not isinstance(pid, (int, long)):
                msg = "pid must be an integer (got %r)" % pid
                raise TypeError(msg)
            if pid < 0:
                msg = "pid must be a positive integer (got %s)" % pid
                raise ValueError(msg)
            try:
                _psplatform.cext.check_pid_range(pid)
            except OverflowError:
                msg = "process PID out of range (got %s)" % pid
                raise NoSuchProcess(pid, msg=msg)

        self._pid = pid
        self._name = None
        self._exe = None
        self._create_time = None
        self._gone = False
        self._pid_reused = False
        self._hash = None
        self._lock = threading.RLock()
        # used for caching on Windows only (on POSIX ppid may change)
        self._ppid = None
        # platform-specific modules define an _psplatform.Process
        # implementation class
        self._proc = _psplatform.Process(pid)
        self._last_sys_cpu_times = None
        self._last_proc_cpu_times = None
        self._exitcode = _SENTINEL
        self._ident = (self.pid, None)
        try:
            self._ident = self._get_ident()
        except AccessDenied:
            # This should happen on Windows only, since we use the fast
            # create time method. AFAIK, on all other platforms we are
            # able to get create time for all PIDs.
            pass
        except ZombieProcess:
            # Zombies can still be queried by this class (although
            # not always) and pids() returns them, so just go on.
            pass
        except NoSuchProcess:
            if not _ignore_nsp:
                msg = "process PID not found"
                raise NoSuchProcess(pid, msg=msg)
            else:
                self._gone = True

    def _get_ident(self):
        """Return a (pid, uid) tuple which is supposed to identify a
        Process instance univocally over time. The PID alone is not
        enough, as it can be assigned to a new process after this one
        terminates, so we add process creation time to the mix. We need
        this in order to prevent killing the wrong process later on.
        This is also known as PID reuse or PID recycling problem.

        The reliability of this strategy mostly depends on
        create_time() precision, which is 0.01 secs on Linux. The
        assumption is that, after a process terminates, the kernel
        won't reuse the same PID after such a short period of time
        (0.01 secs). Technically this is inherently racy, but
        practically it should be good enough.
        """
        if WINDOWS:
            # Use create_time() fast method in order to speedup
            # `process_iter()`. This means we'll get AccessDenied for
            # most ADMIN processes, but that's fine since it means
            # we'll also get AccessDenied on kill().
            # https://github.com/giampaolo/psutil/issues/2366#issuecomment-2381646555
            self._create_time = self._proc.create_time(fast_only=True)
            return (self.pid, self._create_time)
        else:
            return (self.pid, self.create_time())
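A tiny illustration of why the (pid, create_time) pair is used as identity; the two tuples share a PID but not a creation time, and the values are invented:

```python
# The first tuple belongs to a process that exited, the second to the
# process that later received the same PID. Values are invented.
old_ident = (1234, 1700000000.25)   # (pid, create_time) of the dead process
new_ident = (1234, 1700000312.75)   # same PID after the kernel reused it

assert old_ident[0] == new_ident[0]   # PID alone cannot tell them apart
assert old_ident != new_ident         # the full identity can
```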

    def __str__(self):
        info = collections.OrderedDict()
        info["pid"] = self.pid
        if self._name:
            info['name'] = self._name
        with self.oneshot():
            if self._pid_reused:
                info["status"] = "terminated + PID reused"
            else:
                try:
                    info["name"] = self.name()
                    info["status"] = self.status()
                except ZombieProcess:
                    info["status"] = "zombie"
                except NoSuchProcess:
                    info["status"] = "terminated"
                except AccessDenied:
                    pass

            if self._exitcode not in (_SENTINEL, None):
                info["exitcode"] = self._exitcode
            if self._create_time is not None:
                info['started'] = _pprint_secs(self._create_time)

            return "%s.%s(%s)" % (
                self.__class__.__module__,
                self.__class__.__name__,
                ", ".join(["%s=%r" % (k, v) for k, v in info.items()]),
            )

    __repr__ = __str__

    def __eq__(self, other):
        # Test for equality with another Process object based
        # on PID and creation time.
        if not isinstance(other, Process):
            return NotImplemented
        if OPENBSD or NETBSD:  # pragma: no cover
            # Zombie processes on Open/NetBSD have a creation time of
            # 0.0. This covers the case when a process started normally
            # (so it has a ctime), then it turned into a zombie. It's
            # important to do this because is_running() depends on
            # __eq__.
            pid1, ident1 = self._ident
            pid2, ident2 = other._ident
            if pid1 == pid2:
                if ident1 and not ident2:
                    try:
                        return self.status() == STATUS_ZOMBIE
                    except Error:
                        pass
        return self._ident == other._ident

    def __ne__(self, other):
        return not self == other

    def __hash__(self):
        if self._hash is None:
            self._hash = hash(self._ident)
        return self._hash

    def _raise_if_pid_reused(self):
        """Raises NoSuchProcess in case process PID has been reused."""
        if not self.is_running() and self._pid_reused:
            # We may directly raise NSP in here already if PID is just
            # not running, but I prefer NSP to be raised naturally by
            # the actual Process API call. This way unit tests will tell
            # us if the API is broken (aka don't raise NSP when it
            # should). We also remain consistent with all other "get"
            # APIs which don't use _raise_if_pid_reused().
            msg = "process no longer exists and its PID has been reused"
            raise NoSuchProcess(self.pid, self._name, msg=msg)

    @property
    def pid(self):
        """The process PID."""
        return self._pid

    # --- utility methods

    @contextlib.contextmanager
    def oneshot(self):
        """Utility context manager which considerably speeds up the
        retrieval of multiple process information at the same time.

        Internally, different process info (e.g. name, ppid, uids,
        gids, ...) may be fetched by using the same routine, but
        only one piece of information is returned and the others are
        discarded. When using this context manager the internal
        routine is executed once (in the example below, for name())
        and the other info is cached.

        The cache is cleared when exiting the context manager block.
        The advice is to use this every time you retrieve more than
        one piece of information about the process. If you're lucky,
        you'll get a hell of a speedup.

        >>> import psutil
        >>> p = psutil.Process()
        >>> with p.oneshot():
        ...     p.name()  # collect multiple info
        ...     p.cpu_times()  # return cached value
        ...     p.cpu_percent()  # return cached value
        ...     p.create_time()  # return cached value
        ...
        >>>
        """
        with self._lock:
            if hasattr(self, "_cache"):
                # NOOP: this covers the use case where the user enters the
                # context twice:
                #
                # >>> with p.oneshot():
                # ...    with p.oneshot():
                # ...
                #
                # Also, since as_dict() internally uses oneshot()
                # I expect that the code below will be a pretty common
                # "mistake" that the user will make, so let's guard
                # against that:
                #
                # >>> with p.oneshot():
                # ...    p.as_dict()
                # ...
                yield
            else:
                try:
                    # cached in case cpu_percent() is used
                    self.cpu_times.cache_activate(self)
                    # cached in case memory_percent() is used
                    self.memory_info.cache_activate(self)
                    # cached in case parent() is used
                    self.ppid.cache_activate(self)
                    # cached in case username() is used
                    if POSIX:
                        self.uids.cache_activate(self)
                    # specific implementation cache
                    self._proc.oneshot_enter()
                    yield
                finally:
                    self.cpu_times.cache_deactivate(self)
                    self.memory_info.cache_deactivate(self)
                    self.ppid.cache_deactivate(self)
                    if POSIX:
                        self.uids.cache_deactivate(self)
                    self._proc.oneshot_exit()

    def as_dict(self, attrs=None, ad_value=None):
        """Utility method returning process information as a
        hashable dictionary.
        If *attrs* is specified it must be a list of strings
        reflecting available Process class' attribute names
        (e.g. ['cpu_times', 'name']) else all public (read
        only) attributes are assumed.
        *ad_value* is the value which gets assigned in case
        AccessDenied or ZombieProcess exception is raised when
        retrieving that particular process information.
        """
        valid_names = _as_dict_attrnames
        if attrs is not None:
            if not isinstance(attrs, (list, tuple, set, frozenset)):
                msg = "invalid attrs type %s" % type(attrs)
                raise TypeError(msg)
            attrs = set(attrs)
            invalid_names = attrs - valid_names
            if invalid_names:
                msg = "invalid attr name%s %s" % (
                    "s" if len(invalid_names) > 1 else "",
                    ", ".join(map(repr, invalid_names)),
                )
                raise ValueError(msg)

        retdict = {}
        ls = attrs or valid_names
        with self.oneshot():
            for name in ls:
                try:
                    if name == 'pid':
                        ret = self.pid
                    else:
                        meth = getattr(self, name)
                        ret = meth()
                except (AccessDenied, ZombieProcess):
                    ret = ad_value
                except NotImplementedError:
                    # in case of not implemented functionality (may happen
                    # on old or exotic systems) we want to crash only if
                    # the user explicitly asked for that particular attr
                    if attrs:
                        raise
                    continue
                retdict[name] = ret
        return retdict
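A sketch of the attrs validation performed above, using a small made-up subset of valid names; the real valid set is derived from the Process class's public attributes:

```python
# Names not in the valid set are rejected before any process info is
# fetched. The valid set here is a tiny made-up subset.
valid_names = {"pid", "name", "cpu_times", "memory_info"}

attrs = {"name", "cpu_times", "made_up_attr"}
invalid_names = attrs - valid_names
assert invalid_names == {"made_up_attr"}
```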

    def parent(self):
        """Return the parent process as a Process object pre-emptively
        checking whether PID has been reused.
        If no parent is known return None.
        """
        lowest_pid = _LOWEST_PID if _LOWEST_PID is not None else pids()[0]
        if self.pid == lowest_pid:
            return None
        ppid = self.ppid()
        if ppid is not None:
            ctime = self.create_time()
            try:
                parent = Process(ppid)
                if parent.create_time() <= ctime:
                    return parent
                # ...else ppid has been reused by another process
            except NoSuchProcess:
                pass

    def parents(self):
        """Return the parents of this process as a list of Process
        instances. If no parents are known return an empty list.
        """
        parents = []
        proc = self.parent()
        while proc is not None:
            parents.append(proc)
            proc = proc.parent()
        return parents

    def is_running(self):
        """Return whether this process is running.

        It also checks if PID has been reused by another process, in
        which case it will remove the process from `process_iter()`
        internal cache and return False.
        """
        if self._gone or self._pid_reused:
            return False
        try:
            # Checking if PID is alive is not enough as the PID might
            # have been reused by another process. Process identity /
            # uniqueness over time is guaranteed by (PID + creation
            # time) and that is verified in __eq__.
            self._pid_reused = self != Process(self.pid)
            if self._pid_reused:
                _pids_reused.add(self.pid)
                raise NoSuchProcess(self.pid)
            return True
        except ZombieProcess:
            # We should never get here as it's already handled in
            # Process.__init__; here just for extra safety.
            return True
        except NoSuchProcess:
            self._gone = True
            return False

    # --- actual API

    @memoize_when_activated
    def ppid(self):
        """The process parent PID.
        On Windows the return value is cached after first call.
        """
        # On POSIX we don't want to cache the ppid as it may unexpectedly
        # change to 1 (init) in case this process turns into a zombie:
        # https://github.com/giampaolo/psutil/issues/321
        # http://stackoverflow.com/questions/356722/

        # XXX should we check creation time here rather than in
        # Process.parent()?
        self._raise_if_pid_reused()
        if POSIX:
            return self._proc.ppid()
        else:  # pragma: no cover
            self._ppid = self._ppid or self._proc.ppid()
            return self._ppid

    def name(self):
        """The process name. The return value is cached after first call."""
        # Process name is only cached on Windows as on POSIX it may
        # change, see:
        # https://github.com/giampaolo/psutil/issues/692
        if WINDOWS and self._name is not None:
            return self._name
        name = self._proc.name()
        if POSIX and len(name) >= 15:
            # On UNIX the name gets truncated to the first 15 characters.
            # If it matches the first part of the cmdline we return that
            # one instead, because it's usually more descriptive.
            # Examples are "gnome-keyring-d" vs. "gnome-keyring-daemon".
            try:
                cmdline = self.cmdline()
            except (AccessDenied, ZombieProcess):
                # Just pass and return the truncated name: it's better
                # than nothing. Note: there are actual cases where a
                # zombie process can return a name() but not a
                # cmdline(), see:
                # https://github.com/giampaolo/psutil/issues/2239
                pass
            else:
                if cmdline:
                    extended_name = os.path.basename(cmdline[0])
                    if extended_name.startswith(name):
                        name = extended_name
        self._name = name
        self._proc._name = name
        return name
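The truncation workaround above, walked through with the example from the comment ("gnome-keyring-d" vs. "gnome-keyring-daemon"); the cmdline path is hypothetical:

```python
import os

# The kernel reports a name truncated to 15 chars; cmdline[0] (the path
# is hypothetical) provides the full name when its basename extends it.
name = "gnome-keyring-d"                      # exactly 15 chars
cmdline = ["/usr/bin/gnome-keyring-daemon"]

if len(name) >= 15 and cmdline:
    extended_name = os.path.basename(cmdline[0])
    if extended_name.startswith(name):
        name = extended_name

assert name == "gnome-keyring-daemon"
```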

    def exe(self):
        """The process executable as an absolute path.
        May also be an empty string.
        The return value is cached after first call.
        """

        def guess_it(fallback):
            # try to guess exe from cmdline[0] in absence of a native
            # exe representation
            cmdline = self.cmdline()
            if cmdline and hasattr(os, 'access') and hasattr(os, 'X_OK'):
                exe = cmdline[0]  # the possible exe
                # Attempt to guess only in case of an absolute path.
                # It is not safe otherwise as the process might have
                # changed cwd.
                if (
                    os.path.isabs(exe)
                    and os.path.isfile(exe)
                    and os.access(exe, os.X_OK)
                ):
                    return exe
            if isinstance(fallback, AccessDenied):
                raise fallback
            return fallback

        if self._exe is None:
            try:
                exe = self._proc.exe()
            except AccessDenied as err:
                return guess_it(fallback=err)
            else:
                if not exe:
                    # underlying implementation can legitimately return an
                    # empty string; if that's the case we don't want to
                    # raise AD while guessing from the cmdline
                    try:
                        exe = guess_it(fallback=exe)
                    except AccessDenied:
                        pass
                self._exe = exe
        return self._exe
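The three checks guess_it() applies to cmdline[0] can be demonstrated with sys.executable, which on a normal install is an absolute path to an existing executable file:

```python
import os
import sys

# sys.executable satisfies all three conditions applied to a candidate
# exe path: absolute, an existing regular file, and executable by us.
exe = sys.executable
assert os.path.isabs(exe)
assert os.path.isfile(exe)
assert os.access(exe, os.X_OK)
```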

    def cmdline(self):
        """The command line this process has been called with."""
        return self._proc.cmdline()

    def status(self):
        """The process current status as a STATUS_* constant."""
        try:
            return self._proc.status()
        except ZombieProcess:
            return STATUS_ZOMBIE

    def username(self):
        """The name of the user that owns the process.
        On UNIX this is calculated by using *real* process uid.
        """
        if POSIX:
            if pwd is None:
                # might happen if python was installed from sources
                msg = "requires pwd module shipped with standard python"
                raise ImportError(msg)
            real_uid = self.uids().real
            try:
                return pwd.getpwuid(real_uid).pw_name
            except KeyError:
                # the uid can't be resolved by the system
                return str(real_uid)
        else:
            return self._proc.username()

    def create_time(self):
        """The process creation time as a floating point number
        expressed in seconds since the epoch.
        The return value is cached after first call.
        """
        if self._create_time is None:
            self._create_time = self._proc.create_time()
        return self._create_time

    def cwd(self):
        """Process current working directory as an absolute path."""
        return self._proc.cwd()

    def nice(self, value=None):
        """Get or set process niceness (priority)."""
        if value is None:
            return self._proc.nice_get()
        else:
            self._raise_if_pid_reused()
            self._proc.nice_set(value)

    if POSIX:

        @memoize_when_activated
        def uids(self):
            """Return process UIDs as a (real, effective, saved)
            namedtuple.
            """
            return self._proc.uids()

        def gids(self):
            """Return process GIDs as a (real, effective, saved)
            namedtuple.
            """
            return self._proc.gids()

        def terminal(self):
            """The terminal associated with this process, if any,
            else None.
            """
            return self._proc.terminal()

        def num_fds(self):
            """Return the number of file descriptors opened by this
            process (POSIX only).
            """
            return self._proc.num_fds()

    # Linux, BSD, AIX and Windows only
    if hasattr(_psplatform.Process, "io_counters"):

        def io_counters(self):
            """Return process I/O statistics as a
            (read_count, write_count, read_bytes, write_bytes)
            namedtuple.
            Those are the number of read/write calls performed and the
            amount of bytes read and written by the process.
            """
            return self._proc.io_counters()

    # Linux and Windows
    if hasattr(_psplatform.Process, "ionice_get"):

        def ionice(self, ioclass=None, value=None):
            """Get or set process I/O niceness (priority).

            On Linux *ioclass* is one of the IOPRIO_CLASS_* constants.
            *value* is a number which goes from 0 to 7. The higher the
            value, the lower the I/O priority of the process.

            On Windows only *ioclass* is used and it can be set to 2
            (normal), 1 (low) or 0 (very low).

            Available on Linux and Windows > Vista only.
            """
            if ioclass is None:
                if value is not None:
                    msg = "'ioclass' argument must be specified"
                    raise ValueError(msg)
                return self._proc.ionice_get()
            else:
                self._raise_if_pid_reused()
                return self._proc.ionice_set(ioclass, value)

    # Linux / FreeBSD only
    if hasattr(_psplatform.Process, "rlimit"):

        def rlimit(self, resource, limits=None):
            """Get or set process resource limits as a (soft, hard)
            tuple.

            *resource* is one of the RLIMIT_* constants.
            *limits* is supposed to be a (soft, hard) tuple.

            See "man prlimit" for further info.
            Available on Linux and FreeBSD only.
            """
            if limits is not None:
                self._raise_if_pid_reused()
            return self._proc.rlimit(resource, limits)
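The (soft, hard) tuple convention matches the stdlib `resource` module, which exposes the same limits for the current process (POSIX only); shown here as a point of comparison, not via psutil:

```python
import resource  # POSIX-only stdlib module

# The stdlib reports limits for the current process with the same
# (soft, hard) tuple shape; RLIMIT_NOFILE is the max-open-files limit.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
assert isinstance(soft, int) and isinstance(hard, int)
assert soft <= hard or hard == resource.RLIM_INFINITY
```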

    # Windows, Linux and FreeBSD only
    if hasattr(_psplatform.Process, "cpu_affinity_get"):

        def cpu_affinity(self, cpus=None):
            """Get or set process CPU affinity.
            If specified, *cpus* must be a list of CPUs for which you
            want to set the affinity (e.g. [0, 1]).
            If an empty list is passed, all eligible CPUs are assumed
            (and set).
            (Windows, Linux and BSD only).
            """
            if cpus is None:
                return sorted(set(self._proc.cpu_affinity_get()))
            else:
                self._raise_if_pid_reused()
                if not cpus:
                    if hasattr(self._proc, "_get_eligible_cpus"):
                        cpus = self._proc._get_eligible_cpus()
                    else:
                        cpus = tuple(range(len(cpu_times(percpu=True))))
                self._proc.cpu_affinity_set(list(set(cpus)))

    # Linux, FreeBSD, SunOS
    if hasattr(_psplatform.Process, "cpu_num"):

        def cpu_num(self):
            """Return what CPU this process is currently running on.
            The returned number should be <= psutil.cpu_count()
            and <= len(psutil.cpu_percent(percpu=True)).
            It may be used in conjunction with
            psutil.cpu_percent(percpu=True) to observe the system
            workload distributed across CPUs.
            """
            return self._proc.cpu_num()

    # All platforms have it, but maybe not in the future.
    if hasattr(_psplatform.Process, "environ"):

        def environ(self):
            """The environment variables of the process as a dict.  Note: this
            might not reflect changes made after the process started.
            """
            return self._proc.environ()

    if WINDOWS:

        def num_handles(self):
            """Return the number of handles opened by this process
            (Windows only).
            """
            return self._proc.num_handles()

    def num_ctx_switches(self):
        """Return the number of voluntary and involuntary context
        switches performed by this process.
        """
        return self._proc.num_ctx_switches()

    def num_threads(self):
        """Return the number of threads used by this process."""
        return self._proc.num_threads()

    if hasattr(_psplatform.Process, "threads"):

        def threads(self):
            """Return threads opened by process as a list of
            (id, user_time, system_time) namedtuples representing
            thread id and thread CPU times (user/system).
            On OpenBSD this method requires root access.
            """
            return self._proc.threads()

    def children(self, recursive=False):
        """Return the children of this process as a list of Process
        instances, pre-emptively checking whether PID has been reused.
        If *recursive* is True also return the grandchildren and all
        further descendants.

        Example (A == this process):

         A ─┐
            │
            ├─ B (child) ─┐
            │             └─ X (grandchild) ─┐
            │                                └─ Y (great grandchild)
            ├─ C (child)
            └─ D (child)

        >>> import psutil
        >>> p = psutil.Process()
        >>> p.children()
        B, C, D
        >>> p.children(recursive=True)
        B, X, Y, C, D

        Note that in the example above if process X disappears
        process Y won't be listed as the reference to process A
        is lost.
        """
        self._raise_if_pid_reused()
        ppid_map = _ppid_map()
        ret = []
        if not recursive:
            for pid, ppid in ppid_map.items():
                if ppid == self.pid:
                    try:
                        child = Process(pid)
                        # if child happens to be older than its parent
                        # (self) it means child's PID has been reused
                        if self.create_time() <= child.create_time():
                            ret.append(child)
                    except (NoSuchProcess, ZombieProcess):
                        pass
        else:
            # Construct a {pid: [child pids]} dict
            reverse_ppid_map = collections.defaultdict(list)
            for pid, ppid in ppid_map.items():
                reverse_ppid_map[ppid].append(pid)
            # Recursively traverse that dict, starting from self.pid,
            # such that we only call Process() on actual children
            seen = set()
            stack = [self.pid]
            while stack:
                pid = stack.pop()
                if pid in seen:
                    # Since pids can be reused while the ppid_map is
                    # constructed, there may be rare instances where
                    # there's a cycle in the recorded process "tree".
                    continue
                seen.add(pid)
                for child_pid in reverse_ppid_map[pid]:
                    try:
                        child = Process(child_pid)
                        # if child happens to be older than its parent
                        # (self) it means child's PID has been reused
                        intime = self.create_time() <= child.create_time()
                        if intime:
                            ret.append(child)
                            stack.append(child_pid)
                    except (NoSuchProcess, ZombieProcess):
                        pass
        return ret

    def cpu_percent(self, interval=None):
        """Return a float representing the current process CPU
        utilization as a percentage.

        When *interval* is 0.0 or None (default) compares process times
        to system CPU times elapsed since last call, returning
        immediately (non-blocking). That means that the first time
        this is called it will return a meaningful 0.0 value.

        When *interval* is > 0.0 compares process times to system CPU
        times elapsed before and after the interval (blocking).

        In this case it is recommended, for accuracy, that this function
        be called with at least 0.1 seconds between calls.

        A value > 100.0 can be returned in case of processes running
        multiple threads on different CPU cores.

        The returned value is explicitly NOT split evenly between
        all available logical CPUs. This means that a busy loop process
        running on a system with 2 logical CPUs will be reported as
        having 100% CPU utilization instead of 50%.

        Examples:

          >>> import psutil
          >>> p = psutil.Process(os.getpid())
          >>> # blocking
          >>> p.cpu_percent(interval=1)
          2.0
          >>> # non-blocking (percentage since last call)
          >>> p.cpu_percent(interval=None)
          2.9
          >>>
        """
        blocking = interval is not None and interval > 0.0
        if interval is not None and interval < 0:
            msg = "interval is not positive (got %r)" % interval
            raise ValueError(msg)
        num_cpus = cpu_count() or 1

        def timer():
            return _timer() * num_cpus

        if blocking:
            st1 = timer()
            pt1 = self._proc.cpu_times()
            time.sleep(interval)
            st2 = timer()
            pt2 = self._proc.cpu_times()
        else:
            st1 = self._last_sys_cpu_times
            pt1 = self._last_proc_cpu_times
            st2 = timer()
            pt2 = self._proc.cpu_times()
            if st1 is None or pt1 is None:
                self._last_sys_cpu_times = st2
                self._last_proc_cpu_times = pt2
                return 0.0

        delta_proc = (pt2.user - pt1.user) + (pt2.system - pt1.system)
        delta_time = st2 - st1
        # reset values for next call in case of interval == None
        self._last_sys_cpu_times = st2
        self._last_proc_cpu_times = pt2

        try:
            # This is the utilization split evenly between all CPUs.
            # E.g. a busy loop process on a 2-CPU-cores system at this
            # point is reported as 50% instead of 100%.
            overall_cpus_percent = (delta_proc / delta_time) * 100
        except ZeroDivisionError:
            # interval was too low
            return 0.0
        else:
            # Note 1:
            # in order to emulate "top" we multiply the value for the num
            # of CPU cores. This way the busy process will be reported as
            # having 100% (or more) usage.
            #
            # Note 2:
            # taskmgr.exe on Windows differs in that it will show 50%
            # instead.
            #
            # Note 3:
            # a percentage > 100 is legitimate as it can result from a
            # process with multiple threads running on different CPU
            # cores (top does the same), see:
            # http://stackoverflow.com/questions/1032357
            # https://github.com/giampaolo/psutil/issues/474
            single_cpu_percent = overall_cpus_percent * num_cpus
            return round(single_cpu_percent, 1)

    @memoize_when_activated
    def cpu_times(self):
        """Return a (user, system, children_user, children_system)
        namedtuple representing the accumulated process time, in
        seconds.
        This is similar to os.times() but per-process.
        On macOS and Windows children_user and children_system are
        always set to 0.
        """
        return self._proc.cpu_times()

    @memoize_when_activated
    def memory_info(self):
        """Return a namedtuple with variable fields depending on the
        platform, representing memory information about the process.

        The "portable" fields available on all platforms are `rss` and `vms`.

        All numbers are expressed in bytes.
        """
        return self._proc.memory_info()

    @_common.deprecated_method(replacement="memory_info")
    def memory_info_ex(self):
        return self.memory_info()

    def memory_full_info(self):
        """This method returns the same information as memory_info(),
        plus, on some platforms (Linux, macOS, Windows), also provides
        additional metrics (USS, PSS and swap).
        The additional metrics provide a better representation of actual
        process memory usage.

        Namely USS is the memory which is unique to a process and which
        would be freed if the process was terminated right now.

        It does so by passing through the whole process address space.
        As such it usually requires higher user privileges than
        memory_info() and is considerably slower.
        """
        return self._proc.memory_full_info()

    def memory_percent(self, memtype="rss"):
        """Compare process memory to total physical system memory and
        calculate process memory utilization as a percentage.
        *memtype* argument is a string that dictates what type of
        process memory you want to compare against (defaults to "rss").
        The list of available strings can be obtained like this:

        >>> psutil.Process().memory_info()._fields
        ('rss', 'vms', 'shared', 'text', 'lib', 'data', 'dirty', 'uss', 'pss')
        """
        valid_types = list(_psplatform.pfullmem._fields)
        if memtype not in valid_types:
            msg = "invalid memtype %r; valid types are %r" % (
                memtype,
                tuple(valid_types),
            )
            raise ValueError(msg)
        fun = (
            self.memory_info
            if memtype in _psplatform.pmem._fields
            else self.memory_full_info
        )
        metrics = fun()
        value = getattr(metrics, memtype)

        # use cached value if available
        total_phymem = _TOTAL_PHYMEM or virtual_memory().total
        if not total_phymem > 0:
            # we should never get here
            msg = (
                "can't calculate process memory percent because total physical"
                " system memory is not positive (%r)" % (total_phymem)
            )
            raise ValueError(msg)
        return (value / float(total_phymem)) * 100

    if hasattr(_psplatform.Process, "memory_maps"):

        def memory_maps(self, grouped=True):
            """Return process' mapped memory regions as a list of namedtuples
            whose fields are variable depending on the platform.

            If *grouped* is True the mapped regions with the same 'path'
            are grouped together and the different memory fields are summed.

            If *grouped* is False every mapped region is shown as a single
            entity and the namedtuple will also include the mapped region's
            address space ('addr') and permission set ('perms').
            """
            it = self._proc.memory_maps()
            if grouped:
                d = {}
                for tupl in it:
                    path = tupl[2]
                    nums = tupl[3:]
                    try:
                        d[path] = map(lambda x, y: x + y, d[path], nums)
                    except KeyError:
                        d[path] = nums
                nt = _psplatform.pmmap_grouped
                return [nt(path, *d[path]) for path in d]  # NOQA
            else:
                nt = _psplatform.pmmap_ext
                return [nt(*x) for x in it]

    def open_files(self):
        """Return files opened by process as a list of
        (path, fd) namedtuples including the absolute file name
        and file descriptor number.
        """
        return self._proc.open_files()

    def net_connections(self, kind='inet'):
        """Return socket connections opened by process as a list of
        (fd, family, type, laddr, raddr, status) namedtuples.
        The *kind* parameter filters for connections that match the
        following criteria:

        +------------+----------------------------------------------------+
        | Kind Value | Connections using                                  |
        +------------+----------------------------------------------------+
        | inet       | IPv4 and IPv6                                      |
        | inet4      | IPv4                                               |
        | inet6      | IPv6                                               |
        | tcp        | TCP                                                |
        | tcp4       | TCP over IPv4                                      |
        | tcp6       | TCP over IPv6                                      |
        | udp        | UDP                                                |
        | udp4       | UDP over IPv4                                      |
        | udp6       | UDP over IPv6                                      |
        | unix       | UNIX socket (both UDP and TCP protocols)           |
        | all        | the sum of all the possible families and protocols |
        +------------+----------------------------------------------------+
        """
        return self._proc.net_connections(kind)

    @_common.deprecated_method(replacement="net_connections")
    def connections(self, kind="inet"):
        return self.net_connections(kind=kind)

    # --- signals

    if POSIX:

        def _send_signal(self, sig):
            assert not self.pid < 0, self.pid
            self._raise_if_pid_reused()
            if self.pid == 0:
                # see "man 2 kill"
                msg = (
                    "preventing sending signal to process with PID 0 as it "
                    "would affect every process in the process group of the "
                    "calling process (os.getpid()) instead of PID 0"
                )
                raise ValueError(msg)
            try:
                os.kill(self.pid, sig)
            except ProcessLookupError:
                if OPENBSD and pid_exists(self.pid):
                    # We do this because os.kill() lies in case of
                    # zombie processes.
                    raise ZombieProcess(self.pid, self._name, self._ppid)
                else:
                    self._gone = True
                    raise NoSuchProcess(self.pid, self._name)
            except PermissionError:
                raise AccessDenied(self.pid, self._name)

    def send_signal(self, sig):
        """Send a signal *sig* to process pre-emptively checking
        whether PID has been reused (see signal module constants) .
        On Windows only SIGTERM is valid and is treated as an alias
        for kill().
        """
        if POSIX:
            self._send_signal(sig)
        else:  # pragma: no cover
            self._raise_if_pid_reused()
            if sig != signal.SIGTERM and not self.is_running():
                msg = "process no longer exists"
                raise NoSuchProcess(self.pid, self._name, msg=msg)
            self._proc.send_signal(sig)

    def suspend(self):
        """Suspend process execution with SIGSTOP pre-emptively checking
        whether PID has been reused.
        On Windows this has the effect of suspending all process threads.
        """
        if POSIX:
            self._send_signal(signal.SIGSTOP)
        else:  # pragma: no cover
            self._raise_if_pid_reused()
            self._proc.suspend()

    def resume(self):
        """Resume process execution with SIGCONT pre-emptively checking
        whether PID has been reused.
        On Windows this has the effect of resuming all process threads.
        """
        if POSIX:
            self._send_signal(signal.SIGCONT)
        else:  # pragma: no cover
            self._raise_if_pid_reused()
            self._proc.resume()

    def terminate(self):
        """Terminate the process with SIGTERM pre-emptively checking
        whether PID has been reused.
        On Windows this is an alias for kill().
        """
        if POSIX:
            self._send_signal(signal.SIGTERM)
        else:  # pragma: no cover
            self._raise_if_pid_reused()
            self._proc.kill()

    def kill(self):
        """Kill the current process with SIGKILL pre-emptively checking
        whether PID has been reused.
        """
        if POSIX:
            self._send_signal(signal.SIGKILL)
        else:  # pragma: no cover
            self._raise_if_pid_reused()
            self._proc.kill()

    def wait(self, timeout=None):
        """Wait for process to terminate and, if process is a children
        of os.getpid(), also return its exit code, else None.
        On Windows there's no such limitation (exit code is always
        returned).

        If the process is already terminated immediately return None
        instead of raising NoSuchProcess.

        If *timeout* (in seconds) is specified and process is still
        alive raise TimeoutExpired.

        To wait for multiple Process(es) use psutil.wait_procs().
        """
        if timeout is not None and not timeout >= 0:
            msg = "timeout must be a positive integer"
            raise ValueError(msg)
        if self._exitcode is not _SENTINEL:
            return self._exitcode
        self._exitcode = self._proc.wait(timeout)
        return self._exitcode

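
# Illustrative sketch (not part of psutil's API): the reverse-ppid-map
# traversal used by Process.children(recursive=True) above, shown on a
# plain {pid: ppid} dict. The "seen" set mirrors the cycle guard in the
# real implementation (cycles can appear if a PID is reused while the
# map is being built).
def _demo_descendants(ppid_map, root):
    import collections

    reverse_map = collections.defaultdict(list)
    for pid, ppid in ppid_map.items():
        reverse_map[ppid].append(pid)
    found, seen, stack = [], set(), [root]
    while stack:
        pid = stack.pop()
        if pid in seen:  # guard against cycles caused by PID reuse
            continue
        seen.add(pid)
        for child_pid in reverse_map[pid]:
            found.append(child_pid)
            stack.append(child_pid)
    return found
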

# The valid attr names which can be processed by Process.as_dict().
# fmt: off
_as_dict_attrnames = set(
    [x for x in dir(Process) if not x.startswith('_') and x not in
     {'send_signal', 'suspend', 'resume', 'terminate', 'kill', 'wait',
      'is_running', 'as_dict', 'parent', 'parents', 'children', 'rlimit',
      'memory_info_ex', 'connections', 'oneshot'}])
# fmt: on


# =====================================================================
# --- Popen class
# =====================================================================


class Popen(Process):
    """Same as subprocess.Popen, but in addition it provides all
    psutil.Process methods in a single class.
    For the following methods which are common to both classes, psutil
    implementation takes precedence:

    * send_signal()
    * terminate()
    * kill()

    This is done in order to avoid killing another process in case its
    PID has been reused, fixing BPO-6973.

      >>> import psutil
      >>> from subprocess import PIPE
      >>> p = psutil.Popen(["python", "-c", "print 'hi'"], stdout=PIPE)
      >>> p.name()
      'python'
      >>> p.uids()
      user(real=1000, effective=1000, saved=1000)
      >>> p.username()
      'giampaolo'
      >>> p.communicate()
      ('hi', None)
      >>> p.terminate()
      >>> p.wait(timeout=2)
      0
      >>>
    """

    def __init__(self, *args, **kwargs):
        # Explicitly avoid to raise NoSuchProcess in case the process
        # spawned by subprocess.Popen terminates too quickly, see:
        # https://github.com/giampaolo/psutil/issues/193
        self.__subproc = subprocess.Popen(*args, **kwargs)
        self._init(self.__subproc.pid, _ignore_nsp=True)

    def __dir__(self):
        return sorted(set(dir(Popen) + dir(subprocess.Popen)))

    def __enter__(self):
        if hasattr(self.__subproc, '__enter__'):
            self.__subproc.__enter__()
        return self

    def __exit__(self, *args, **kwargs):
        if hasattr(self.__subproc, '__exit__'):
            return self.__subproc.__exit__(*args, **kwargs)
        else:
            if self.stdout:
                self.stdout.close()
            if self.stderr:
                self.stderr.close()
            try:
                # Flushing a BufferedWriter may raise an error.
                if self.stdin:
                    self.stdin.close()
            finally:
                # Wait for the process to terminate, to avoid zombies.
                self.wait()

    def __getattribute__(self, name):
        try:
            return object.__getattribute__(self, name)
        except AttributeError:
            try:
                return object.__getattribute__(self.__subproc, name)
            except AttributeError:
                msg = "%s instance has no attribute '%s'" % (
                    self.__class__.__name__,
                    name,
                )
                raise AttributeError(msg)

    def wait(self, timeout=None):
        if self.__subproc.returncode is not None:
            return self.__subproc.returncode
        ret = super(Popen, self).wait(timeout)  # noqa
        self.__subproc.returncode = ret
        return ret

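
# Illustrative sketch (not part of psutil's API): the attribute-delegation
# pattern used by Popen.__getattribute__ above, with a toy wrapper class.
# Lookups try the wrapper first, then fall back to the wrapped object; a
# miss on both raises AttributeError naming the wrapper's class.
class _DemoDelegate:
    def __init__(self, inner):
        self._inner = inner

    def __getattribute__(self, name):
        try:
            return object.__getattribute__(self, name)
        except AttributeError:
            # object.__getattribute__ avoids recursing into this method.
            inner = object.__getattribute__(self, "_inner")
            try:
                return getattr(inner, name)
            except AttributeError:
                msg = "%s instance has no attribute %r" % (
                    type(self).__name__,
                    name,
                )
                raise AttributeError(msg)
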

# =====================================================================
# --- system processes related functions
# =====================================================================


def pids():
    """Return a list of current running PIDs."""
    global _LOWEST_PID
    ret = sorted(_psplatform.pids())
    _LOWEST_PID = ret[0]
    return ret


def pid_exists(pid):
    """Return True if given PID exists in the current process list.
    This is faster than doing "pid in psutil.pids()" and
    should be preferred.
    """
    if pid < 0:
        return False
    elif pid == 0 and POSIX:
        # On POSIX we use os.kill() to determine PID existence.
        # According to "man 2 kill" PID 0 has a special meaning
        # though: it refers to <<every process in the process
        # group of the calling process>> and that is not what we
        # want to do here.
        return pid in pids()
    else:
        return _psplatform.pid_exists(pid)


_pmap = {}
_pids_reused = set()


def process_iter(attrs=None, ad_value=None):
    """Return a generator yielding a Process instance for all
    running processes.

    Every new Process instance is only created once and then cached
    into an internal table which is updated every time this is used.
    Cache can optionally be cleared via `process_iter.cache_clear()`.

    The sorting order in which processes are yielded is based on
    their PIDs.

    *attrs* and *ad_value* have the same meaning as in
    Process.as_dict(). If *attrs* is specified as_dict() is called
    and the resulting dict is stored as a 'info' attribute attached
    to returned Process instance.
    If *attrs* is an empty list it will retrieve all process info
    (slow).
    """
    global _pmap

    def add(pid):
        proc = Process(pid)
        pmap[proc.pid] = proc
        return proc

    def remove(pid):
        pmap.pop(pid, None)

    pmap = _pmap.copy()
    a = set(pids())
    b = set(pmap.keys())
    new_pids = a - b
    gone_pids = b - a
    for pid in gone_pids:
        remove(pid)
    while _pids_reused:
        pid = _pids_reused.pop()
        debug("refreshing Process instance for reused PID %s" % pid)
        remove(pid)
    try:
        ls = sorted(list(pmap.items()) + list(dict.fromkeys(new_pids).items()))
        for pid, proc in ls:
            try:
                if proc is None:  # new process
                    proc = add(pid)
                if attrs is not None:
                    proc.info = proc.as_dict(attrs=attrs, ad_value=ad_value)
                yield proc
            except NoSuchProcess:
                remove(pid)
    finally:
        _pmap = pmap


process_iter.cache_clear = lambda: _pmap.clear()  # noqa
process_iter.cache_clear.__doc__ = "Clear process_iter() internal cache."
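

# Illustrative sketch (not part of psutil's API): the snapshot diffing done
# by process_iter() above, reduced to plain sets. PIDs present now but not
# in the cache are "new"; cached PIDs that vanished are "gone" and get
# evicted.
def _demo_diff_pids(cached_pids, current_pids):
    a = set(current_pids)
    b = set(cached_pids)
    new_pids = a - b
    gone_pids = b - a
    return sorted(new_pids), sorted(gone_pids)
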


def wait_procs(procs, timeout=None, callback=None):
    """Convenience function which waits for a list of processes to
    terminate.

    Return a (gone, alive) tuple indicating which processes
    are gone and which ones are still alive.

    The gone ones will have a new *returncode* attribute indicating
    process exit status (may be None).

    *callback* is a function which gets called every time a process
    terminates (a Process instance is passed as callback argument).

    Function will return as soon as all processes terminate or when
    *timeout* occurs.
    Unlike Process.wait(), it will not raise TimeoutExpired if
    *timeout* occurs.

    Typical use case is:

     - send SIGTERM to a list of processes
     - give them some time to terminate
     - send SIGKILL to those ones which are still alive

    Example:

    >>> def on_terminate(proc):
    ...     print("process {} terminated".format(proc))
    ...
    >>> for p in procs:
    ...    p.terminate()
    ...
    >>> gone, alive = wait_procs(procs, timeout=3, callback=on_terminate)
    >>> for p in alive:
    ...     p.kill()
    """

    def check_gone(proc, timeout):
        try:
            returncode = proc.wait(timeout=timeout)
        except TimeoutExpired:
            pass
        except _SubprocessTimeoutExpired:
            pass
        else:
            if returncode is not None or not proc.is_running():
                # Set new Process instance attribute.
                proc.returncode = returncode
                gone.add(proc)
                if callback is not None:
                    callback(proc)

    if timeout is not None and not timeout >= 0:
        msg = "timeout must be a positive integer, got %s" % timeout
        raise ValueError(msg)
    gone = set()
    alive = set(procs)
    if callback is not None and not callable(callback):
        msg = "callback %r is not a callable" % callback
        raise TypeError(msg)
    if timeout is not None:
        deadline = _timer() + timeout

    while alive:
        if timeout is not None and timeout <= 0:
            break
        for proc in alive:
            # Make sure that every complete iteration (all processes)
            # will last max 1 sec.
            # We do this because we don't want to wait too long on a
            # single process: in case it terminates too late other
            # processes may disappear in the meantime and their PID
            # reused.
            max_timeout = 1.0 / len(alive)
            if timeout is not None:
                timeout = min((deadline - _timer()), max_timeout)
                if timeout <= 0:
                    break
                check_gone(proc, timeout)
            else:
                check_gone(proc, max_timeout)
        alive = alive - gone  # noqa: PLR6104

    if alive:
        # Last attempt over processes survived so far.
        # timeout == 0 won't make this function wait any further.
        for proc in alive:
            check_gone(proc, 0)
        alive = alive - gone  # noqa: PLR6104

    return (list(gone), list(alive))

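
# Illustrative sketch (not part of psutil's API): the per-process timeout
# computed by the wait_procs() loop above. Each full pass over the alive
# processes is capped at ~1 second total, and the wait never exceeds the
# time left before the global deadline.
def _demo_per_proc_timeout(num_alive, remaining=None):
    max_timeout = 1.0 / num_alive
    if remaining is None:
        # No global timeout: only the per-pass cap applies.
        return max_timeout
    return min(remaining, max_timeout)
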

# =====================================================================
# --- CPU related functions
# =====================================================================


def cpu_count(logical=True):
    """Return the number of logical CPUs in the system (same as
    os.cpu_count() in Python 3.4).

    If *logical* is False return the number of physical cores only
    (e.g. hyper thread CPUs are excluded).

    Return None if undetermined.

    The return value is cached after first call.
    If desired cache can be cleared like this:

    >>> psutil.cpu_count.cache_clear()
    """
    if logical:
        ret = _psplatform.cpu_count_logical()
    else:
        ret = _psplatform.cpu_count_cores()
    if ret is not None and ret < 1:
        ret = None
    return ret


def cpu_times(percpu=False):
    """Return system-wide CPU times as a namedtuple.
    Every CPU time represents the seconds the CPU has spent in the
    given mode. The availability of the namedtuple's fields varies
    depending on the platform:

     - user
     - system
     - idle
     - nice (UNIX)
     - iowait (Linux)
     - irq (Linux, FreeBSD)
     - softirq (Linux)
     - steal (Linux >= 2.6.11)
     - guest (Linux >= 2.6.24)
     - guest_nice (Linux >= 3.2.0)

    When *percpu* is True return a list of namedtuples for each CPU.
    First element of the list refers to first CPU, second element
    to second CPU and so on.
    The order of the list is consistent across calls.
    """
    if not percpu:
        return _psplatform.cpu_times()
    else:
        return _psplatform.per_cpu_times()


try:
    _last_cpu_times = {threading.current_thread().ident: cpu_times()}
except Exception:  # noqa: BLE001
    # Don't want to crash at import time.
    _last_cpu_times = {}

try:
    _last_per_cpu_times = {
        threading.current_thread().ident: cpu_times(percpu=True)
    }
except Exception:  # noqa: BLE001
    # Don't want to crash at import time.
    _last_per_cpu_times = {}


def _cpu_tot_time(times):
    """Given a cpu_time() ntuple calculates the total CPU time
    (including idle time).
    """
    tot = sum(times)
    if LINUX:
        # On Linux guest times are already accounted in "user" or
        # "nice" times, so we subtract them from total.
        # Htop does the same. References:
        # https://github.com/giampaolo/psutil/pull/940
        # http://unix.stackexchange.com/questions/178045
        # https://github.com/torvalds/linux/blob/
        #     447976ef4fd09b1be88b316d1a81553f1aa7cd07/kernel/sched/
        #     cputime.c#L158
        tot -= getattr(times, "guest", 0)  # Linux 2.6.24+
        tot -= getattr(times, "guest_nice", 0)  # Linux 3.2.0+
    return tot


def _cpu_busy_time(times):
    """Given a cpu_time() ntuple calculates the busy CPU time.
    We do so by subtracting all idle CPU times.
    """
    busy = _cpu_tot_time(times)
    busy -= times.idle
    # Linux: "iowait" is time during which the CPU does not do anything
    # (waits for IO to complete). On Linux IO wait is *not* accounted
    # in "idle" time so we subtract it. Htop does the same.
    # References:
    # https://github.com/torvalds/linux/blob/
    #     447976ef4fd09b1be88b316d1a81553f1aa7cd07/kernel/sched/cputime.c#L244
    busy -= getattr(times, "iowait", 0)
    return busy

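
# Illustrative sketch (not part of psutil's API): _cpu_tot_time() and
# _cpu_busy_time() above, exercised on a fabricated Linux-style times
# tuple. Guest time is folded out of the total; idle and iowait are
# folded out of the busy time.
def _demo_busy_fraction():
    import collections

    fake = collections.namedtuple(
        "fake", ["user", "system", "idle", "iowait", "guest"]
    )
    times = fake(user=4.0, system=2.0, idle=3.0, iowait=1.0, guest=0.0)
    tot = sum(times) - times.guest           # 10.0
    busy = tot - times.idle - times.iowait   # 6.0
    return busy / tot
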

def _cpu_times_deltas(t1, t2):
    assert t1._fields == t2._fields, (t1, t2)
    field_deltas = []
    for field in _psplatform.scputimes._fields:
        field_delta = getattr(t2, field) - getattr(t1, field)
        # CPU times are always supposed to increase over time
        # or at least remain the same and that's because time
        # cannot go backwards.
        # Surprisingly sometimes this might not be the case (at
        # least on Windows and Linux), see:
        # https://github.com/giampaolo/psutil/issues/392
        # https://github.com/giampaolo/psutil/issues/645
        # https://github.com/giampaolo/psutil/issues/1210
        # Trim negative deltas to zero to ignore decreasing fields.
        # top does the same. Reference:
        # https://gitlab.com/procps-ng/procps/blob/v3.3.12/top/top.c#L5063
        field_delta = max(0, field_delta)
        field_deltas.append(field_delta)
    return _psplatform.scputimes(*field_deltas)
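The clamping done above can be shown in isolation with plain tuples (hypothetical sample values, not real /proc readings):

```python
# Sketch of the per-field delta computation: negative deltas (a CPU
# time that appears to go backwards) are trimmed to zero, as top does.
def times_deltas(t1, t2):
    return tuple(max(0, b - a) for a, b in zip(t1, t2))

# The middle field decreased between samples; its delta is clamped to 0.
print(times_deltas((10, 50, 5), (12, 49, 7)))  # (2, 0, 2)
```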


def cpu_percent(interval=None, percpu=False):
    """Return a float representing the current system-wide CPU
    utilization as a percentage.

    When *interval* is > 0.0 compares system CPU times elapsed before
    and after the interval (blocking).

    When *interval* is 0.0 or None compares system CPU times elapsed
    since last call or module import, returning immediately
    (non-blocking). That means the first time this is called it will
    return a meaningless 0.0 value which you should ignore.
    In this case it is recommended for accuracy that this function be
    called with at least 0.1 seconds between calls.

    When *percpu* is True returns a list of floats representing the
    utilization as a percentage for each CPU.
    First element of the list refers to first CPU, second element
    to second CPU and so on.
    The order of the list is consistent across calls.

    Examples:

      >>> # blocking, system-wide
      >>> psutil.cpu_percent(interval=1)
      2.0
      >>>
      >>> # blocking, per-cpu
      >>> psutil.cpu_percent(interval=1, percpu=True)
      [2.0, 1.0]
      >>>
      >>> # non-blocking (percentage since last call)
      >>> psutil.cpu_percent(interval=None)
      2.9
      >>>
    """
    tid = threading.current_thread().ident
    blocking = interval is not None and interval > 0.0
    if interval is not None and interval < 0:
        msg = "interval is negative (got %r)" % interval
        raise ValueError(msg)

    def calculate(t1, t2):
        times_delta = _cpu_times_deltas(t1, t2)
        all_delta = _cpu_tot_time(times_delta)
        busy_delta = _cpu_busy_time(times_delta)

        try:
            busy_perc = (busy_delta / all_delta) * 100
        except ZeroDivisionError:
            return 0.0
        else:
            return round(busy_perc, 1)

    # system-wide usage
    if not percpu:
        if blocking:
            t1 = cpu_times()
            time.sleep(interval)
        else:
            t1 = _last_cpu_times.get(tid) or cpu_times()
        _last_cpu_times[tid] = cpu_times()
        return calculate(t1, _last_cpu_times[tid])
    # per-cpu usage
    else:
        ret = []
        if blocking:
            tot1 = cpu_times(percpu=True)
            time.sleep(interval)
        else:
            tot1 = _last_per_cpu_times.get(tid) or cpu_times(percpu=True)
        _last_per_cpu_times[tid] = cpu_times(percpu=True)
        for t1, t2 in zip(tot1, _last_per_cpu_times[tid]):
            ret.append(calculate(t1, t2))
        return ret
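The inner calculate() above boils down to busy-over-total percentage math; a minimal standalone sketch (hypothetical delta numbers):

```python
# busy_delta / all_delta * 100, guarding against a zero total delta,
# which happens on the first (meaningless) non-blocking call.
def busy_percent(busy_delta, all_delta):
    try:
        return round((busy_delta / all_delta) * 100, 1)
    except ZeroDivisionError:
        return 0.0

print(busy_percent(3.0, 100.0))  # 3.0
print(busy_percent(0.0, 0.0))    # 0.0 instead of a ZeroDivisionError
```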


# Use a separate dict for cpu_times_percent(), so it's independent from
# cpu_percent() and they can both be used within the same program.
_last_cpu_times_2 = _last_cpu_times.copy()
_last_per_cpu_times_2 = _last_per_cpu_times.copy()


def cpu_times_percent(interval=None, percpu=False):
    """Same as cpu_percent() but provides utilization percentages
    for each specific CPU time as is returned by cpu_times().
    For instance, on Linux we'll get:

      >>> cpu_times_percent()
      cpupercent(user=4.8, nice=0.0, system=4.8, idle=90.5, iowait=0.0,
                 irq=0.0, softirq=0.0, steal=0.0, guest=0.0, guest_nice=0.0)
      >>>

    *interval* and *percpu* arguments have the same meaning as in
    cpu_percent().
    """
    tid = threading.current_thread().ident
    blocking = interval is not None and interval > 0.0
    if interval is not None and interval < 0:
        msg = "interval is negative (got %r)" % interval
        raise ValueError(msg)

    def calculate(t1, t2):
        nums = []
        times_delta = _cpu_times_deltas(t1, t2)
        all_delta = _cpu_tot_time(times_delta)
        # "scale" is the value to multiply each delta with to get percentages.
        # We use "max" to avoid division by zero (if all_delta is 0, then all
        # fields are 0 so percentages will be 0 too. all_delta cannot be a
        # fraction because cpu times are integers)
        scale = 100.0 / max(1, all_delta)
        for field_delta in times_delta:
            field_perc = field_delta * scale
            field_perc = round(field_perc, 1)
            # make sure we don't return negative values or values over 100%
            field_perc = min(max(0.0, field_perc), 100.0)
            nums.append(field_perc)
        return _psplatform.scputimes(*nums)

    # system-wide usage
    if not percpu:
        if blocking:
            t1 = cpu_times()
            time.sleep(interval)
        else:
            t1 = _last_cpu_times_2.get(tid) or cpu_times()
        _last_cpu_times_2[tid] = cpu_times()
        return calculate(t1, _last_cpu_times_2[tid])
    # per-cpu usage
    else:
        ret = []
        if blocking:
            tot1 = cpu_times(percpu=True)
            time.sleep(interval)
        else:
            tot1 = _last_per_cpu_times_2.get(tid) or cpu_times(percpu=True)
        _last_per_cpu_times_2[tid] = cpu_times(percpu=True)
        for t1, t2 in zip(tot1, _last_per_cpu_times_2[tid]):
            ret.append(calculate(t1, t2))
        return ret
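The per-field scaling used by cpu_times_percent() can be sketched with plain lists (the helper name and delta values are hypothetical; the real code uses _cpu_tot_time() rather than a bare sum):

```python
# Each field delta is multiplied by a single scale factor so the fields
# sum to ~100%; max(1, ...) avoids dividing by zero since CPU times are
# integers and an all-zero delta yields all-zero percentages anyway.
def field_percents(deltas):
    all_delta = sum(deltas)
    scale = 100.0 / max(1, all_delta)
    # clamp each rounded value into the [0.0, 100.0] range
    return [min(max(0.0, round(d * scale, 1)), 100.0) for d in deltas]

print(field_percents([5, 0, 95]))  # [5.0, 0.0, 95.0]
print(field_percents([0, 0, 0]))   # [0.0, 0.0, 0.0], no division error
```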


def cpu_stats():
    """Return CPU statistics."""
    return _psplatform.cpu_stats()


if hasattr(_psplatform, "cpu_freq"):

    def cpu_freq(percpu=False):
        """Return CPU frequency as a namedtuple including current,
        min and max frequency expressed in MHz.

        If *percpu* is True and the system supports per-cpu frequency
        retrieval (Linux only) a list of frequencies is returned for
        each CPU. Otherwise a list with a single element is returned.
        """
        ret = _psplatform.cpu_freq()
        if percpu:
            return ret
        else:
            num_cpus = float(len(ret))
            if num_cpus == 0:
                return None
            elif num_cpus == 1:
                return ret[0]
            else:
                currs, mins, maxs = 0.0, 0.0, 0.0
                set_none = False
                for cpu in ret:
                    currs += cpu.current
                    # On Linux if /proc/cpuinfo is used min/max are set
                    # to None.
                    if LINUX and cpu.min is None:
                        set_none = True
                        continue
                    mins += cpu.min
                    maxs += cpu.max

                current = currs / num_cpus

                if set_none:
                    min_ = max_ = None
                else:
                    min_ = mins / num_cpus
                    max_ = maxs / num_cpus

                return _common.scpufreq(current, min_, max_)

    __all__.append("cpu_freq")
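The averaging branch above (percpu=False with multiple CPUs) can be sketched on bare (current, min, max) tuples; the values are hypothetical MHz figures:

```python
# System-wide frequency is the arithmetic mean of the per-CPU values.
def average_freqs(freqs):
    n = float(len(freqs))
    currs = sum(f[0] for f in freqs) / n
    mins = sum(f[1] for f in freqs) / n
    maxs = sum(f[2] for f in freqs) / n
    return (currs, mins, maxs)

print(average_freqs([(1800.0, 800.0, 3600.0), (2200.0, 800.0, 3600.0)]))
# (2000.0, 800.0, 3600.0)
```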


if hasattr(os, "getloadavg") or hasattr(_psplatform, "getloadavg"):
    # Perform this hasattr check once at import time to either use the
    # platform-specific code or proxy straight to the os module.
    if hasattr(os, "getloadavg"):
        getloadavg = os.getloadavg
    else:
        getloadavg = _psplatform.getloadavg

    __all__.append("getloadavg")


# =====================================================================
# --- system memory related functions
# =====================================================================


def virtual_memory():
    """Return statistics about system memory usage as a namedtuple
    including the following fields, expressed in bytes:

     - total:
       total physical memory available.

     - available:
       the memory that can be given instantly to processes without the
       system going into swap.
       This is calculated by summing different memory values depending
       on the platform and it is supposed to be used to monitor actual
       memory usage in a cross platform fashion.

     - percent:
       the percentage usage calculated as (total - available) / total * 100

     - used:
        memory used, calculated differently depending on the platform and
        designed for informational purposes only:
        macOS: active + wired
        BSD: active + wired + cached
        Linux: total - free

     - free:
       memory not being used at all (zeroed) that is readily available;
       note that this doesn't reflect the actual memory available
       (use 'available' instead)

    Platform-specific fields:

     - active (UNIX):
       memory currently in use or very recently used, and so it is in RAM.

     - inactive (UNIX):
       memory that is marked as not used.

     - buffers (BSD, Linux):
       cache for things like file system metadata.

     - cached (BSD, macOS):
       cache for various things.

     - wired (macOS, BSD):
       memory that is marked to always stay in RAM. It is never moved to disk.

     - shared (BSD):
       memory that may be simultaneously accessed by multiple processes.

    The sum of 'used' and 'available' does not necessarily equal total.
    On Windows 'available' and 'free' are the same.
    """
    global _TOTAL_PHYMEM
    ret = _psplatform.virtual_memory()
    # cached for later use in Process.memory_percent()
    _TOTAL_PHYMEM = ret.total
    return ret


def swap_memory():
    """Return system swap memory statistics as a namedtuple including
    the following fields:

     - total:   total swap memory in bytes
     - used:    used swap memory in bytes
     - free:    free swap memory in bytes
     - percent: the percentage usage
     - sin:     no. of bytes the system has swapped in from disk (cumulative)
     - sout:    no. of bytes the system has swapped out from disk (cumulative)

    'sin' and 'sout' on Windows are meaningless and always set to 0.
    """
    return _psplatform.swap_memory()


# =====================================================================
# --- disks/partitions related functions
# =====================================================================


def disk_usage(path):
    """Return disk usage statistics about the given *path* as a
    namedtuple including total, used and free space expressed in bytes
    plus the percentage usage.
    """
    return _psplatform.disk_usage(path)


def disk_partitions(all=False):
    """Return mounted partitions as a list of
    (device, mountpoint, fstype, opts) namedtuple.
    'opts' field is a raw string separated by commas indicating mount
    options which may vary depending on the platform.

    If the *all* parameter is False return physical devices only and
    ignore all others.
    """
    return _psplatform.disk_partitions(all)


def disk_io_counters(perdisk=False, nowrap=True):
    """Return system disk I/O statistics as a namedtuple including
    the following fields:

     - read_count:  number of reads
     - write_count: number of writes
     - read_bytes:  number of bytes read
     - write_bytes: number of bytes written
     - read_time:   time spent reading from disk (in ms)
     - write_time:  time spent writing to disk (in ms)

    Platform specific:

     - busy_time: (Linux, FreeBSD) time spent doing actual I/Os (in ms)
     - read_merged_count (Linux): number of merged reads
     - write_merged_count (Linux): number of merged writes

    If *perdisk* is True return the same information for every
    physical disk installed on the system as a dictionary
    with partition names as the keys and the namedtuple
    described above as the values.

    If *nowrap* is True it detects and adjusts the numbers which
    overflow and wrap (restart from 0), adding "old value" to
    "new value" so that the returned numbers will always be increasing
    or remain the same, but never decrease.
    "disk_io_counters.cache_clear()" can be used to invalidate the
    cache.

    On recent Windows versions 'diskperf -y' command may need to be
    executed first otherwise this function won't find any disk.
    """
    kwargs = dict(perdisk=perdisk) if LINUX else {}
    rawdict = _psplatform.disk_io_counters(**kwargs)
    if not rawdict:
        return {} if perdisk else None
    if nowrap:
        rawdict = _wrap_numbers(rawdict, 'psutil.disk_io_counters')
    nt = getattr(_psplatform, "sdiskio", _common.sdiskio)
    if perdisk:
        for disk, fields in rawdict.items():
            rawdict[disk] = nt(*fields)
        return rawdict
    else:
        return nt(*(sum(x) for x in zip(*rawdict.values())))


disk_io_counters.cache_clear = functools.partial(
    _wrap_numbers.cache_clear, 'psutil.disk_io_counters'
)
disk_io_counters.cache_clear.__doc__ = "Clears nowrap argument cache"
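The non-perdisk branch of disk_io_counters() collapses all disks into one tuple via `zip(*...)`; here is that idiom in isolation (two hypothetical disks with made-up counters):

```python
# zip(*values) pairs the i-th field of every disk together; sum() then
# adds them, producing one system-wide counter tuple.
per_disk = {
    "sda": (100, 50, 4096, 2048),  # read_count, write_count, rbytes, wbytes
    "sdb": (10, 5, 512, 256),
}
totals = tuple(sum(x) for x in zip(*per_disk.values()))
print(totals)  # (110, 55, 4608, 2304)
```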


# =====================================================================
# --- network related functions
# =====================================================================


def net_io_counters(pernic=False, nowrap=True):
    """Return network I/O statistics as a namedtuple including
    the following fields:

     - bytes_sent:   number of bytes sent
     - bytes_recv:   number of bytes received
     - packets_sent: number of packets sent
     - packets_recv: number of packets received
     - errin:        total number of errors while receiving
     - errout:       total number of errors while sending
     - dropin:       total number of incoming packets which were dropped
     - dropout:      total number of outgoing packets which were dropped
                     (always 0 on macOS and BSD)

    If *pernic* is True return the same information for every
    network interface installed on the system as a dictionary
    with network interface names as the keys and the namedtuple
    described above as the values.

    If *nowrap* is True it detects and adjusts the numbers which
    overflow and wrap (restart from 0), adding "old value" to
    "new value" so that the returned numbers will always be increasing
    or remain the same, but never decrease.
    "net_io_counters.cache_clear()" can be used to invalidate the
    cache.
    """
    rawdict = _psplatform.net_io_counters()
    if not rawdict:
        return {} if pernic else None
    if nowrap:
        rawdict = _wrap_numbers(rawdict, 'psutil.net_io_counters')
    if pernic:
        for nic, fields in rawdict.items():
            rawdict[nic] = _common.snetio(*fields)
        return rawdict
    else:
        return _common.snetio(*[sum(x) for x in zip(*rawdict.values())])


net_io_counters.cache_clear = functools.partial(
    _wrap_numbers.cache_clear, 'psutil.net_io_counters'
)
net_io_counters.cache_clear.__doc__ = "Clears nowrap argument cache"


def net_connections(kind='inet'):
    """Return system-wide socket connections as a list of
    (fd, family, type, laddr, raddr, status, pid) namedtuples.
    In case of limited privileges 'fd' and 'pid' may be set to -1
    and None respectively.
    The *kind* parameter filters for connections that fit the
    following criteria:

    +------------+----------------------------------------------------+
    | Kind Value | Connections using                                  |
    +------------+----------------------------------------------------+
    | inet       | IPv4 and IPv6                                      |
    | inet4      | IPv4                                               |
    | inet6      | IPv6                                               |
    | tcp        | TCP                                                |
    | tcp4       | TCP over IPv4                                      |
    | tcp6       | TCP over IPv6                                      |
    | udp        | UDP                                                |
    | udp4       | UDP over IPv4                                      |
    | udp6       | UDP over IPv6                                      |
    | unix       | UNIX socket (both UDP and TCP protocols)           |
    | all        | the sum of all the possible families and protocols |
    +------------+----------------------------------------------------+

    On macOS this function requires root privileges.
    """
    return _psplatform.net_connections(kind)


def net_if_addrs():
    """Return the addresses associated to each NIC (network interface
    card) installed on the system as a dictionary whose keys are the
    NIC names and value is a list of namedtuples for each address
    assigned to the NIC. Each namedtuple includes 5 fields:

     - family: can be either socket.AF_INET, socket.AF_INET6 or
               psutil.AF_LINK, which refers to a MAC address.
     - address: the primary address; it is always set.
     - netmask: the netmask address; may be None.
     - ptp: stands for "point to point"; references the destination
            address on a point to point interface (typically a VPN).
            May be None.
     - broadcast: the broadcast address; may be None. *broadcast* and
                  *ptp* are mutually exclusive.

    Note: you can have more than one address of the same family
    associated with each interface.
    """
    has_enums = _PY3
    if has_enums:
        import socket
    rawlist = _psplatform.net_if_addrs()
    rawlist.sort(key=lambda x: x[1])  # sort by family
    ret = collections.defaultdict(list)
    for name, fam, addr, mask, broadcast, ptp in rawlist:
        if has_enums:
            try:
                fam = socket.AddressFamily(fam)
            except ValueError:
                if WINDOWS and fam == -1:
                    fam = _psplatform.AF_LINK
                elif (
                    hasattr(_psplatform, "AF_LINK")
                    and fam == _psplatform.AF_LINK
                ):
                    # Linux defines AF_LINK as an alias for AF_PACKET.
                    # We re-set the family here so that repr(family)
                    # will show AF_LINK rather than AF_PACKET
                    fam = _psplatform.AF_LINK
        if fam == _psplatform.AF_LINK:
            # The underlying C function may return an incomplete MAC
            # address in which case we fill it with null bytes, see:
            # https://github.com/giampaolo/psutil/issues/786
            separator = ":" if POSIX else "-"
            while addr.count(separator) < 5:
                addr += "%s00" % separator
        ret[name].append(_common.snicaddr(fam, addr, mask, broadcast, ptp))
    return dict(ret)
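The MAC-padding loop above, extracted into a standalone sketch (the helper name is hypothetical): an incomplete address is extended with "00" groups until it has the usual six octets, i.e. five separators.

```python
# Pad a truncated MAC address with zero octets, mirroring the loop in
# net_if_addrs() (see psutil issue #786).
def pad_mac(addr, separator=":"):
    while addr.count(separator) < 5:
        addr += "%s00" % separator
    return addr

print(pad_mac("00:0c:29"))  # 00:0c:29:00:00:00
```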


def net_if_stats():
    """Return information about each NIC (network interface card)
    installed on the system as a dictionary whose keys are the
    NIC names and value is a namedtuple with the following fields:

     - isup: whether the interface is up (bool)
     - duplex: can be either NIC_DUPLEX_FULL, NIC_DUPLEX_HALF or
               NIC_DUPLEX_UNKNOWN
     - speed: the NIC speed expressed in megabits (Mb); if it can't
              be determined (e.g. 'localhost') it will be set to 0.
     - mtu: the maximum transmission unit expressed in bytes.
    """
    return _psplatform.net_if_stats()


# =====================================================================
# --- sensors
# =====================================================================


# Linux, macOS
if hasattr(_psplatform, "sensors_temperatures"):

    def sensors_temperatures(fahrenheit=False):
        """Return hardware temperatures. Each entry is a namedtuple
        representing a certain hardware sensor (it may be a CPU, an
        hard disk or something else, depending on the OS and its
        configuration).
        All temperatures are expressed in celsius unless *fahrenheit*
        is set to True.
        """

        def convert(n):
            if n is not None:
                return (float(n) * 9 / 5) + 32 if fahrenheit else n

        ret = collections.defaultdict(list)
        rawdict = _psplatform.sensors_temperatures()

        for name, values in rawdict.items():
            while values:
                label, current, high, critical = values.pop(0)
                current = convert(current)
                high = convert(high)
                critical = convert(critical)

                if high and not critical:
                    critical = high
                elif critical and not high:
                    high = critical

                ret[name].append(
                    _common.shwtemp(label, current, high, critical)
                )

        return dict(ret)

    __all__.append("sensors_temperatures")
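The inner convert() helper above is a plain celsius-to-fahrenheit conversion that passes None through for missing sensor readings; standalone:

```python
# Convert celsius to fahrenheit, leaving None (sensor value unknown)
# untouched, as sensors_temperatures(fahrenheit=True) does.
def to_fahrenheit(n):
    if n is not None:
        return (float(n) * 9 / 5) + 32

print(to_fahrenheit(100))   # 212.0
print(to_fahrenheit(0))     # 32.0
print(to_fahrenheit(None))  # None
```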


# Linux
if hasattr(_psplatform, "sensors_fans"):

    def sensors_fans():
        """Return fans speed. Each entry is a namedtuple
        representing a certain hardware sensor.
        All speeds are expressed in RPM (revolutions per minute).
        """
        return _psplatform.sensors_fans()

    __all__.append("sensors_fans")


# Linux, Windows, FreeBSD, macOS
if hasattr(_psplatform, "sensors_battery"):

    def sensors_battery():
        """Return battery information. If no battery is installed
        returns None.

         - percent: battery power left as a percentage.
         - secsleft: a rough approximation of how many seconds are left
                     before the battery runs out of power. May be
                     POWER_TIME_UNLIMITED or POWER_TIME_UNKNOWN.
         - power_plugged: True if the AC power cable is connected.
        """
        return _psplatform.sensors_battery()

    __all__.append("sensors_battery")


# =====================================================================
# --- other system related functions
# =====================================================================


def boot_time():
    """Return the system boot time expressed in seconds since the epoch."""
    # Note: we are not caching this because it is subject to
    # system clock updates.
    return _psplatform.boot_time()


def users():
    """Return users currently connected on the system as a list of
    namedtuples including the following fields.

     - user: the name of the user
     - terminal: the tty or pseudo-tty associated with the user, if any.
     - host: the host name associated with the entry, if any.
     - started: the creation time as a floating point number expressed in
       seconds since the epoch.
    """
    return _psplatform.users()


# =====================================================================
# --- Windows services
# =====================================================================


if WINDOWS:

    def win_service_iter():
        """Return a generator yielding a WindowsService instance for all
        Windows services installed.
        """
        return _psplatform.win_service_iter()

    def win_service_get(name):
        """Get a Windows service by *name*.
        Raise NoSuchProcess if no service with such name exists.
        """
        return _psplatform.win_service_get(name)


# =====================================================================


def _set_debug(value):
    """Enable or disable PSUTIL_DEBUG option, which prints debugging
    messages to stderr.
    """
    import psutil._common

    psutil._common.PSUTIL_DEBUG = bool(value)
    _psplatform.cext.set_debug(bool(value))


def test():  # pragma: no cover
    from ._common import bytes2human
    from ._compat import get_terminal_size

    today_day = datetime.date.today()
    # fmt: off
    templ = "%-10s %5s %5s %7s %7s %5s %6s %6s %6s  %s"
    attrs = ['pid', 'memory_percent', 'name', 'cmdline', 'cpu_times',
             'create_time', 'memory_info', 'status', 'nice', 'username']
    print(templ % ("USER", "PID", "%MEM", "VSZ", "RSS", "NICE",  # NOQA
                   "STATUS", "START", "TIME", "CMDLINE"))
    # fmt: on
    for p in process_iter(attrs, ad_value=None):
        if p.info['create_time']:
            ctime = datetime.datetime.fromtimestamp(p.info['create_time'])
            if ctime.date() == today_day:
                ctime = ctime.strftime("%H:%M")
            else:
                ctime = ctime.strftime("%b%d")
        else:
            ctime = ''
        if p.info['cpu_times']:
            cputime = time.strftime(
                "%M:%S", time.localtime(sum(p.info['cpu_times']))
            )
        else:
            cputime = ''

        user = p.info['username'] or ''
        if not user and POSIX:
            try:
                user = p.uids()[0]
            except Error:
                pass
        if user and WINDOWS and '\\' in user:
            user = user.split('\\')[1]
        user = user[:9]
        vms = (
            bytes2human(p.info['memory_info'].vms)
            if p.info['memory_info'] is not None
            else ''
        )
        rss = (
            bytes2human(p.info['memory_info'].rss)
            if p.info['memory_info'] is not None
            else ''
        )
        memp = (
            round(p.info['memory_percent'], 1)
            if p.info['memory_percent'] is not None
            else ''
        )
        nice = int(p.info['nice']) if p.info['nice'] else ''
        if p.info['cmdline']:
            cmdline = ' '.join(p.info['cmdline'])
        else:
            cmdline = p.info['name']
        status = p.info['status'][:5] if p.info['status'] else ''

        line = templ % (
            user[:10],
            p.info['pid'],
            memp,
            vms,
            rss,
            nice,
            status,
            ctime,
            cputime,
            cmdline,
        )
        print(line[: get_terminal_size()[0]])  # NOQA


del memoize_when_activated, division
if sys.version_info[0] < 3:
    del num, x  # noqa

if __name__ == "__main__":
    test()
# psutil/_psposix.py
# Copyright (c) 2009, Giampaolo Rodola'. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

"""Routines common to all posix systems."""

import glob
import os
import signal
import sys
import time

from ._common import MACOS
from ._common import TimeoutExpired
from ._common import memoize
from ._common import sdiskusage
from ._common import usage_percent
from ._compat import PY3
from ._compat import ChildProcessError
from ._compat import FileNotFoundError
from ._compat import InterruptedError
from ._compat import PermissionError
from ._compat import ProcessLookupError
from ._compat import unicode


if MACOS:
    from . import _psutil_osx


if PY3:
    import enum
else:
    enum = None


__all__ = ['pid_exists', 'wait_pid', 'disk_usage', 'get_terminal_map']


def pid_exists(pid):
    """Check whether pid exists in the current process table."""
    if pid == 0:
        # According to "man 2 kill" PID 0 has a special meaning:
        # it refers to <<every process in the process group of the
        # calling process>> so we don't want to go any further.
        # If we get here it means this UNIX platform *does* have
        # a process with id 0.
        return True
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False
    except PermissionError:
        # EPERM clearly means there's a process to deny access to
        return True
    # According to "man 2 kill" possible error values are
    # (EINVAL, EPERM, ESRCH)
    else:
        return True
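The "signal 0" probe used by pid_exists() can be demonstrated on its own (simplified sketch, omitting psutil's extra handling): os.kill(pid, 0) sends no signal but still performs the existence and permission checks.

```python
import os

# Simplified existence check via kill(pid, 0) (POSIX only).
def pid_exists_sketch(pid):
    if pid == 0:
        return True  # PID 0 means "this process group" to kill(2)
    try:
        os.kill(pid, 0)
    except ProcessLookupError:  # ESRCH: no such process
        return False
    except PermissionError:  # EPERM: it exists, we just can't signal it
        return True
    return True

print(pid_exists_sketch(os.getpid()))  # True: this process exists
```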


# Python 3.5 signals enum (contributed by me ^^):
# https://bugs.python.org/issue21076
if enum is not None and hasattr(signal, "Signals"):
    Negsignal = enum.IntEnum(
        'Negsignal', dict([(x.name, -x.value) for x in signal.Signals])
    )

    def negsig_to_enum(num):
        """Convert a negative signal value to an enum."""
        try:
            return Negsignal(num)
        except ValueError:
            return num

else:  # pragma: no cover

    def negsig_to_enum(num):
        return num
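The Negsignal construction above uses the enum functional API with negated signal values; a self-contained sketch of the same idea (the Negsig name here is illustrative):

```python
import enum
import signal

# Build an IntEnum mapping each signal name to its negated value, so
# that a negative exit status like -15 reprs as a readable signal name.
Negsig = enum.IntEnum(
    "Negsig", {s.name: -s.value for s in signal.Signals}
)

print(Negsig(-signal.SIGTERM))  # <Negsig.SIGTERM: -15> on Linux
```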


def wait_pid(
    pid,
    timeout=None,
    proc_name=None,
    _waitpid=os.waitpid,
    _timer=getattr(time, 'monotonic', time.time),  # noqa: B008
    _min=min,
    _sleep=time.sleep,
    _pid_exists=pid_exists,
):
    """Wait for a process PID to terminate.

    If the process terminated normally by calling exit(3) or _exit(2),
    or by returning from main(), the return value is the positive integer
    passed to *exit().

    If it was terminated by a signal it returns the negated value of the
    signal which caused the termination (e.g. -SIGTERM).

    If PID is not a child of os.getpid() (current process) just
    wait until the process disappears and return None.

    If PID does not exist at all return None immediately.

    If *timeout* != None and process is still alive raise TimeoutExpired.
    timeout=0 is also possible (either return immediately or raise).
    """
    if pid <= 0:
        # see "man waitpid"
        msg = "can't wait for PID 0"
        raise ValueError(msg)
    interval = 0.0001
    flags = 0
    if timeout is not None:
        flags |= os.WNOHANG
        stop_at = _timer() + timeout

    def sleep(interval):
        # Sleep for some time and return a new increased interval.
        if timeout is not None:
            if _timer() >= stop_at:
                raise TimeoutExpired(timeout, pid=pid, name=proc_name)
        _sleep(interval)
        return _min(interval * 2, 0.04)

    # See: https://linux.die.net/man/2/waitpid
    while True:
        try:
            retpid, status = os.waitpid(pid, flags)
        except InterruptedError:
            interval = sleep(interval)
        except ChildProcessError:
            # This has two meanings:
            # - PID is not a child of os.getpid() in which case
            #   we keep polling until it's gone
            # - PID never existed in the first place
            # In both cases we'll eventually return None as we
            # can't determine its exit status code.
            while _pid_exists(pid):
                interval = sleep(interval)
            return
        else:
            if retpid == 0:
                # WNOHANG flag was used and PID is still running.
                interval = sleep(interval)
                continue

            if os.WIFEXITED(status):
                # Process terminated normally by calling exit(3) or _exit(2),
                # or by returning from main(). The return value is the
                # positive integer passed to *exit().
                return os.WEXITSTATUS(status)
            elif os.WIFSIGNALED(status):
                # Process exited due to a signal. Return the negative value
                # of that signal.
                return negsig_to_enum(-os.WTERMSIG(status))
            # elif os.WIFSTOPPED(status):
            #     # Process was stopped via SIGSTOP or is being traced, and
            #     # waitpid() was called with WUNTRACED flag. PID is still
            #     # alive. From now on waitpid() will keep returning (0, 0)
            #     # until the process state doesn't change.
            #     # It may make sense to catch/enable this since stopped PIDs
            #     # ignore SIGTERM.
            #     interval = sleep(interval)
            #     continue
            # elif os.WIFCONTINUED(status):
            #     # Process was resumed via SIGCONT and waitpid() was called
            #     # with WCONTINUED flag.
            #     interval = sleep(interval)
            #     continue
            else:
                # Should never happen.
                raise ValueError("unknown process exit status %r" % status)


def disk_usage(path):
    """Return disk usage associated with path.
    Note: UNIX usually reserves 5% of the disk space, which is not
    accessible to regular users. In this function "total" and "used"
    reflect the total and used disk space whereas "free" and "percent"
    represent the free space and "used percent" of the user-accessible
    disk space.
    """
    if PY3:
        st = os.statvfs(path)
    else:  # pragma: no cover
        # os.statvfs() does not support unicode on Python 2:
        # - https://github.com/giampaolo/psutil/issues/416
        # - http://bugs.python.org/issue18695
        try:
            st = os.statvfs(path)
        except UnicodeEncodeError:
            if isinstance(path, unicode):
                try:
                    path = path.encode(sys.getfilesystemencoding())
                except UnicodeEncodeError:
                    pass
                st = os.statvfs(path)
            else:
                raise

    # Total space which is only available to root (unless changed
    # at system level).
    total = st.f_blocks * st.f_frsize
    # Remaining free space usable by root.
    avail_to_root = st.f_bfree * st.f_frsize
    # Remaining free space usable by user.
    avail_to_user = st.f_bavail * st.f_frsize
    # Total space being used in general.
    used = total - avail_to_root
    if MACOS:
        # see: https://github.com/giampaolo/psutil/pull/2152
        used = _psutil_osx.disk_usage_used(path, used)
    # Total space which is available to user (same as 'total' but
    # for the user).
    total_user = used + avail_to_user
    # User usage percent compared to the total amount of space
    # the user can use. This number would be higher if compared
    # to root's because the user has less space (usually -5%).
    usage_percent_user = usage_percent(used, total_user, round_=1)

    # NB: the percentage is ~5% lower than what is shown by df due to
    # reserved blocks that we are currently not considering:
    # https://github.com/giampaolo/psutil/issues/829#issuecomment-223750462
    return sdiskusage(
        total=total, used=used, free=avail_to_user, percent=usage_percent_user
    )
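A quick worked check of the arithmetic above, using fabricated statvfs numbers (all values hypothetical, not from any real filesystem):

```python
from collections import namedtuple

# Hypothetical statvfs result: 4 KiB fragments, 1000 total blocks,
# 200 blocks free to root, 150 blocks free to the unprivileged user.
FakeStatvfs = namedtuple('FakeStatvfs', 'f_blocks f_frsize f_bfree f_bavail')
st = FakeStatvfs(f_blocks=1000, f_frsize=4096, f_bfree=200, f_bavail=150)

total = st.f_blocks * st.f_frsize          # 4_096_000
avail_to_root = st.f_bfree * st.f_frsize   # 819_200
avail_to_user = st.f_bavail * st.f_frsize  # 614_400
used = total - avail_to_root               # 3_276_800
total_user = used + avail_to_user          # 3_891_200
percent = round(used / total_user * 100, 1)

assert total - used == avail_to_root
# 84.2: higher than used/total (80.0) because the user sees less space.
assert percent == 84.2
```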


@memoize
def get_terminal_map():
    """Get a map of device-id -> path as a dict.
    Used by Process.terminal().
    """
    ret = {}
    ls = glob.glob('/dev/tty*') + glob.glob('/dev/pts/*')
    for name in ls:
        assert name not in ret, name
        try:
            ret[os.stat(name).st_rdev] = name
        except FileNotFoundError:
            pass
    return ret
# Copyright (c) 2009, Giampaolo Rodola'. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

"""Linux platform implementation."""

from __future__ import division

import base64
import collections
import errno
import functools
import glob
import os
import re
import socket
import struct
import sys
import warnings
from collections import defaultdict
from collections import namedtuple

from . import _common
from . import _psposix
from . import _psutil_linux as cext
from . import _psutil_posix as cext_posix
from ._common import NIC_DUPLEX_FULL
from ._common import NIC_DUPLEX_HALF
from ._common import NIC_DUPLEX_UNKNOWN
from ._common import AccessDenied
from ._common import NoSuchProcess
from ._common import ZombieProcess
from ._common import bcat
from ._common import cat
from ._common import debug
from ._common import decode
from ._common import get_procfs_path
from ._common import isfile_strict
from ._common import memoize
from ._common import memoize_when_activated
from ._common import open_binary
from ._common import open_text
from ._common import parse_environ_block
from ._common import path_exists_strict
from ._common import supports_ipv6
from ._common import usage_percent
from ._compat import PY3
from ._compat import FileNotFoundError
from ._compat import PermissionError
from ._compat import ProcessLookupError
from ._compat import b
from ._compat import basestring


if PY3:
    import enum
else:
    enum = None


# fmt: off
__extra__all__ = [
    'PROCFS_PATH',
    # io prio constants
    "IOPRIO_CLASS_NONE", "IOPRIO_CLASS_RT", "IOPRIO_CLASS_BE",
    "IOPRIO_CLASS_IDLE",
    # connection status constants
    "CONN_ESTABLISHED", "CONN_SYN_SENT", "CONN_SYN_RECV", "CONN_FIN_WAIT1",
    "CONN_FIN_WAIT2", "CONN_TIME_WAIT", "CONN_CLOSE", "CONN_CLOSE_WAIT",
    "CONN_LAST_ACK", "CONN_LISTEN", "CONN_CLOSING",
]
# fmt: on


# =====================================================================
# --- globals
# =====================================================================


POWER_SUPPLY_PATH = "/sys/class/power_supply"
HAS_PROC_SMAPS = os.path.exists('/proc/%s/smaps' % os.getpid())
HAS_PROC_SMAPS_ROLLUP = os.path.exists('/proc/%s/smaps_rollup' % os.getpid())
HAS_PROC_IO_PRIORITY = hasattr(cext, "proc_ioprio_get")
HAS_CPU_AFFINITY = hasattr(cext, "proc_cpu_affinity_get")

# Number of clock ticks per second
CLOCK_TICKS = os.sysconf("SC_CLK_TCK")
PAGESIZE = cext_posix.getpagesize()
BOOT_TIME = None  # set later
LITTLE_ENDIAN = sys.byteorder == 'little'

# "man iostat" states that sectors are equivalent to blocks and have
# a size of 512 bytes. Although this value can be queried at runtime
# via /sys/block/{DISK}/queue/hw_sector_size and results may vary
# between 1k, 2k, or 4k, 512 appears to be a magic constant used
# throughout Linux source code:
# * https://stackoverflow.com/a/38136179/376587
# * https://lists.gt.net/linux/kernel/2241060
# * https://github.com/giampaolo/psutil/issues/1305
# * https://github.com/torvalds/linux/blob/
#     4f671fe2f9523a1ea206f63fe60a7c7b3a56d5c7/include/linux/bio.h#L99
# * https://lkml.org/lkml/2015/8/17/234
DISK_SECTOR_SIZE = 512

if enum is None:
    AF_LINK = socket.AF_PACKET
else:
    AddressFamily = enum.IntEnum(
        'AddressFamily', {'AF_LINK': int(socket.AF_PACKET)}
    )
    AF_LINK = AddressFamily.AF_LINK

# ioprio_* constants http://linux.die.net/man/2/ioprio_get
if enum is None:
    IOPRIO_CLASS_NONE = 0
    IOPRIO_CLASS_RT = 1
    IOPRIO_CLASS_BE = 2
    IOPRIO_CLASS_IDLE = 3
else:

    class IOPriority(enum.IntEnum):
        IOPRIO_CLASS_NONE = 0
        IOPRIO_CLASS_RT = 1
        IOPRIO_CLASS_BE = 2
        IOPRIO_CLASS_IDLE = 3

    globals().update(IOPriority.__members__)

# See:
# https://github.com/torvalds/linux/blame/master/fs/proc/array.c
# ...and (TASK_* constants):
# https://github.com/torvalds/linux/blob/master/include/linux/sched.h
PROC_STATUSES = {
    "R": _common.STATUS_RUNNING,
    "S": _common.STATUS_SLEEPING,
    "D": _common.STATUS_DISK_SLEEP,
    "T": _common.STATUS_STOPPED,
    "t": _common.STATUS_TRACING_STOP,
    "Z": _common.STATUS_ZOMBIE,
    "X": _common.STATUS_DEAD,
    "x": _common.STATUS_DEAD,
    "K": _common.STATUS_WAKE_KILL,
    "W": _common.STATUS_WAKING,
    "I": _common.STATUS_IDLE,
    "P": _common.STATUS_PARKED,
}

# https://github.com/torvalds/linux/blob/master/include/net/tcp_states.h
TCP_STATUSES = {
    "01": _common.CONN_ESTABLISHED,
    "02": _common.CONN_SYN_SENT,
    "03": _common.CONN_SYN_RECV,
    "04": _common.CONN_FIN_WAIT1,
    "05": _common.CONN_FIN_WAIT2,
    "06": _common.CONN_TIME_WAIT,
    "07": _common.CONN_CLOSE,
    "08": _common.CONN_CLOSE_WAIT,
    "09": _common.CONN_LAST_ACK,
    "0A": _common.CONN_LISTEN,
    "0B": _common.CONN_CLOSING,
}
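As an illustration of how this table is consumed: the fourth whitespace-separated field of each /proc/net/tcp row is the hex state code (the sample row below is fabricated):

```python
# Fabricated /proc/net/tcp row: addresses are "hexip:hexport" pairs,
# the 4th column ("0A") is the connection state key for TCP_STATUSES.
line = (b"   0: 00000000:0016 00000000:0000 0A 00000000:00000000 "
        b"00:00000000 00000000     0        0 12345 1 0000000000000000")
_, laddr, raddr, status = line.split()[:4]
assert status == b"0A"  # would map to _common.CONN_LISTEN
```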


# =====================================================================
# --- named tuples
# =====================================================================


# fmt: off
# psutil.virtual_memory()
svmem = namedtuple(
    'svmem', ['total', 'available', 'percent', 'used', 'free',
              'active', 'inactive', 'buffers', 'cached', 'shared', 'slab'])
# psutil.disk_io_counters()
sdiskio = namedtuple(
    'sdiskio', ['read_count', 'write_count',
                'read_bytes', 'write_bytes',
                'read_time', 'write_time',
                'read_merged_count', 'write_merged_count',
                'busy_time'])
# psutil.Process().open_files()
popenfile = namedtuple(
    'popenfile', ['path', 'fd', 'position', 'mode', 'flags'])
# psutil.Process().memory_info()
pmem = namedtuple('pmem', 'rss vms shared text lib data dirty')
# psutil.Process().memory_full_info()
pfullmem = namedtuple('pfullmem', pmem._fields + ('uss', 'pss', 'swap'))
# psutil.Process().memory_maps(grouped=True)
pmmap_grouped = namedtuple(
    'pmmap_grouped',
    ['path', 'rss', 'size', 'pss', 'shared_clean', 'shared_dirty',
     'private_clean', 'private_dirty', 'referenced', 'anonymous', 'swap'])
# psutil.Process().memory_maps(grouped=False)
pmmap_ext = namedtuple(
    'pmmap_ext', 'addr perms ' + ' '.join(pmmap_grouped._fields))
# psutil.Process.io_counters()
pio = namedtuple('pio', ['read_count', 'write_count',
                         'read_bytes', 'write_bytes',
                         'read_chars', 'write_chars'])
# psutil.Process.cpu_times()
pcputimes = namedtuple('pcputimes',
                       ['user', 'system', 'children_user', 'children_system',
                        'iowait'])
# fmt: on


# =====================================================================
# --- utils
# =====================================================================


def readlink(path):
    """Wrapper around os.readlink()."""
    assert isinstance(path, basestring), path
    path = os.readlink(path)
    # readlink() might return paths containing null bytes ('\x00')
    # resulting in "TypeError: must be encoded string without NULL
    # bytes, not str" errors when the string is passed to other
    # fs-related functions (os.*, open(), ...).
    # Apparently everything after '\x00' is garbage (we can have
    # ' (deleted)', 'new' and possibly others), see:
    # https://github.com/giampaolo/psutil/issues/717
    path = path.split('\x00')[0]
    # Certain paths have ' (deleted)' appended. Usually this is
    # bogus as the file actually exists. Even if it doesn't we
    # don't care.
    if path.endswith(' (deleted)') and not path_exists_strict(path):
        path = path[:-10]
    return path


def file_flags_to_mode(flags):
    """Convert file's open() flags into a readable string.
    Used by Process.open_files().
    """
    modes_map = {os.O_RDONLY: 'r', os.O_WRONLY: 'w', os.O_RDWR: 'w+'}
    mode = modes_map[flags & (os.O_RDONLY | os.O_WRONLY | os.O_RDWR)]
    if flags & os.O_APPEND:
        mode = mode.replace('w', 'a', 1)
    mode = mode.replace('w+', 'r+')
    # possible values: r, w, a, r+, a+
    return mode


def is_storage_device(name):
    """Return True if the given name refers to a root device (e.g.
    "sda", "nvme0n1") as opposed to a logical partition (e.g. "sda1",
    "nvme0n1p1"). If name is a virtual device (e.g. "loop1", "ram")
    return True.
    """
    """
    # Re-adapted from iostat source code, see:
    # https://github.com/sysstat/sysstat/blob/
    #     97912938cd476645b267280069e83b1c8dc0e1c7/common.c#L208
    # Some devices may have a slash in their name (e.g. cciss/c0d0...).
    name = name.replace('/', '!')
    including_virtual = True
    if including_virtual:
        path = "/sys/block/%s" % name
    else:
        path = "/sys/block/%s/device" % name
    return os.access(path, os.F_OK)


@memoize
def set_scputimes_ntuple(procfs_path):
    """Set a namedtuple of variable fields depending on the CPU times
    available on this Linux kernel version which may be:
    (user, nice, system, idle, iowait, irq, softirq, [steal, [guest,
     [guest_nice]]])
    Used by cpu_times() function.
    """
    global scputimes
    with open_binary('%s/stat' % procfs_path) as f:
        values = f.readline().split()[1:]
    fields = ['user', 'nice', 'system', 'idle', 'iowait', 'irq', 'softirq']
    vlen = len(values)
    if vlen >= 8:
        # Linux >= 2.6.11
        fields.append('steal')
    if vlen >= 9:
        # Linux >= 2.6.24
        fields.append('guest')
    if vlen >= 10:
        # Linux >= 3.2.0
        fields.append('guest_nice')
    scputimes = namedtuple('scputimes', fields)


try:
    set_scputimes_ntuple("/proc")
except Exception as err:  # noqa: BLE001
    # Don't want to crash at import time.
    debug("ignoring exception on import: %r" % err)
    scputimes = namedtuple('scputimes', 'user system idle')(0.0, 0.0, 0.0)


# =====================================================================
# --- prlimit
# =====================================================================

# Backport of resource.prlimit() for Python 2. Originally this was done
# in C, but CentOS-6 which we use to create manylinux wheels is too old
# and does not support prlimit() syscall. As such the resulting wheel
# would not include prlimit(), even when installed on newer systems.
# This is the only part of psutil using ctypes.

prlimit = None
try:
    from resource import prlimit  # python >= 3.4
except ImportError:
    import ctypes

    libc = ctypes.CDLL(None, use_errno=True)

    if hasattr(libc, "prlimit"):

        def prlimit(pid, resource_, limits=None):
            class StructRlimit(ctypes.Structure):
                _fields_ = [
                    ('rlim_cur', ctypes.c_longlong),
                    ('rlim_max', ctypes.c_longlong),
                ]

            current = StructRlimit()
            if limits is None:
                # get
                ret = libc.prlimit(pid, resource_, None, ctypes.byref(current))
            else:
                # set
                new = StructRlimit()
                new.rlim_cur = limits[0]
                new.rlim_max = limits[1]
                ret = libc.prlimit(
                    pid, resource_, ctypes.byref(new), ctypes.byref(current)
                )

            if ret != 0:
                errno_ = ctypes.get_errno()
                raise OSError(errno_, os.strerror(errno_))
            return (current.rlim_cur, current.rlim_max)


if prlimit is not None:
    __extra__all__.extend(
        [x for x in dir(cext) if x.startswith('RLIM') and x.isupper()]
    )
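Whichever implementation wins (stdlib resource.prlimit on Python >= 3.4, or the ctypes backport above), the call shape is the same; a minimal usage sketch on Linux, where pid 0 means the calling process:

```python
import resource

# Read the current process' max-open-files limits without changing them
# (Linux-only; resource.prlimit requires Python >= 3.4 or the backport).
soft, hard = resource.prlimit(0, resource.RLIMIT_NOFILE)
assert isinstance(soft, int) and isinstance(hard, int)

# Setting works by passing the (soft, hard) pair back as the third
# argument, e.g. resource.prlimit(0, resource.RLIMIT_NOFILE, (soft, hard)).
```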


# =====================================================================
# --- system memory
# =====================================================================


def calculate_avail_vmem(mems):
    """Fallback for kernels < 3.14 where /proc/meminfo does not provide
    "MemAvailable", see:
    https://blog.famzah.net/2014/09/24/.

    This code reimplements the algorithm outlined here:
    https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/
        commit/?id=34e431b0ae398fc54ea69ff85ec700722c9da773

    We use this function also when "MemAvailable" returns 0 (possibly a
    kernel bug, see: https://github.com/giampaolo/psutil/issues/1915).
    In that case this routine matches "free" CLI tool result ("available"
    column).

    XXX: on recent kernels this calculation may differ by ~1.5% compared
    to "MemAvailable:", as it's calculated slightly differently.
    It is still way more realistic than doing (free + cached) though.
    See:
    * https://gitlab.com/procps-ng/procps/issues/42
    * https://github.com/famzah/linux-memavailable-procfs/issues/2
    """
    # Note about "fallback" value. According to:
    # https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/
    #     commit/?id=34e431b0ae398fc54ea69ff85ec700722c9da773
    # ...long ago "available" memory was calculated as (free + cached).
    # We use the fallback when one of these is missing from /proc/meminfo:
    # "Active(file)": introduced in 2.6.28 / Dec 2008
    # "Inactive(file)": introduced in 2.6.28 / Dec 2008
    # "SReclaimable": introduced in 2.6.19 / Nov 2006
    # /proc/zoneinfo: introduced in 2.6.13 / Aug 2005
    free = mems[b'MemFree:']
    fallback = free + mems.get(b"Cached:", 0)
    try:
        lru_active_file = mems[b'Active(file):']
        lru_inactive_file = mems[b'Inactive(file):']
        slab_reclaimable = mems[b'SReclaimable:']
    except KeyError as err:
        debug(
            "%s is missing from /proc/meminfo; using an approximation for "
            "calculating available memory"
            % err.args[0]
        )
        return fallback
    try:
        f = open_binary('%s/zoneinfo' % get_procfs_path())
    except IOError:
        return fallback  # kernel 2.6.13

    watermark_low = 0
    with f:
        for line in f:
            line = line.strip()
            if line.startswith(b'low'):
                watermark_low += int(line.split()[1])
    watermark_low *= PAGESIZE

    avail = free - watermark_low
    pagecache = lru_active_file + lru_inactive_file
    pagecache -= min(pagecache / 2, watermark_low)
    avail += pagecache
    avail += slab_reclaimable - min(slab_reclaimable / 2.0, watermark_low)
    return int(avail)
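The watermark adjustment above can be checked with made-up numbers (all hypothetical, expressed in bytes):

```python
# Hypothetical /proc/meminfo and /proc/zoneinfo figures, in bytes.
free = 400 * 1024**2           # MemFree
watermark_low = 50 * 1024**2   # sum of per-zone "low" watermarks * PAGESIZE
lru_active_file = 300 * 1024**2
lru_inactive_file = 100 * 1024**2
slab_reclaimable = 80 * 1024**2

avail = free - watermark_low                      # 350 MB
pagecache = lru_active_file + lru_inactive_file   # 400 MB
pagecache -= min(pagecache / 2, watermark_low)    # 350 MB
avail += pagecache                                # 700 MB
avail += slab_reclaimable - min(slab_reclaimable / 2.0, watermark_low)
assert int(avail) == 740 * 1024**2  # less than the naive free + cache sum
```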


def virtual_memory():
    """Report virtual memory stats.
    This implementation mimics procps-ng-3.3.12, aka "free" CLI tool:
    https://gitlab.com/procps-ng/procps/blob/
        24fd2605c51fccc375ab0287cec33aa767f06718/proc/sysinfo.c#L778-791
    The returned values are supposed to match both "free" and "vmstat -s"
    CLI tools.
    """
    missing_fields = []
    mems = {}
    with open_binary('%s/meminfo' % get_procfs_path()) as f:
        for line in f:
            fields = line.split()
            mems[fields[0]] = int(fields[1]) * 1024

    # /proc doc states that the available fields in /proc/meminfo vary
    # by architecture and compile options, but these 3 values are also
    # returned by sysinfo(2); as such we assume they are always there.
    total = mems[b'MemTotal:']
    free = mems[b'MemFree:']
    try:
        buffers = mems[b'Buffers:']
    except KeyError:
        # https://github.com/giampaolo/psutil/issues/1010
        buffers = 0
        missing_fields.append('buffers')
    try:
        cached = mems[b"Cached:"]
    except KeyError:
        cached = 0
        missing_fields.append('cached')
    else:
        # "free" cmdline utility sums reclaimable to cached.
        # Older versions of procps used to add slab memory instead.
        # This got changed in:
        # https://gitlab.com/procps-ng/procps/commit/
        #     05d751c4f076a2f0118b914c5e51cfbb4762ad8e
        cached += mems.get(b"SReclaimable:", 0)  # since kernel 2.6.19

    try:
        shared = mems[b'Shmem:']  # since kernel 2.6.32
    except KeyError:
        try:
            shared = mems[b'MemShared:']  # kernels 2.4
        except KeyError:
            shared = 0
            missing_fields.append('shared')

    try:
        active = mems[b"Active:"]
    except KeyError:
        active = 0
        missing_fields.append('active')

    try:
        inactive = mems[b"Inactive:"]
    except KeyError:
        try:
            inactive = (
                mems[b"Inact_dirty:"]
                + mems[b"Inact_clean:"]
                + mems[b"Inact_laundry:"]
            )
        except KeyError:
            inactive = 0
            missing_fields.append('inactive')

    try:
        slab = mems[b"Slab:"]
    except KeyError:
        slab = 0

    used = total - free - cached - buffers
    if used < 0:
        # May be symptomatic of running within an LXC container where such
        # values will be dramatically distorted over those of the host.
        used = total - free

    # - starting from 4.4.0 we match free's "available" column.
    #   Before 4.4.0 we calculated it as (free + buffers + cached)
    #   which matched htop.
    # - free and htop available memory differs as per:
    #   http://askubuntu.com/a/369589
    #   http://unix.stackexchange.com/a/65852/168884
    # - MemAvailable has been introduced in kernel 3.14
    try:
        avail = mems[b'MemAvailable:']
    except KeyError:
        avail = calculate_avail_vmem(mems)
    else:
        if avail == 0:
            # Yes, it can happen (probably a kernel bug):
            # https://github.com/giampaolo/psutil/issues/1915
            # In this case "free" CLI tool makes an estimate. We do the same,
            # and it matches "free" CLI tool.
            avail = calculate_avail_vmem(mems)

    if avail < 0:
        avail = 0
        missing_fields.append('available')
    elif avail > total:
        # If avail is greater than total or our calculation overflows,
        # that's symptomatic of running within an LXC container where such
        # values will be dramatically distorted over those of the host.
        # https://gitlab.com/procps-ng/procps/blob/
        #     24fd2605c51fccc375ab0287cec33aa767f06718/proc/sysinfo.c#L764
        avail = free

    percent = usage_percent((total - avail), total, round_=1)

    # Warn about missing metrics which are set to 0.
    if missing_fields:
        msg = "%s memory stats couldn't be determined and %s set to 0" % (
            ", ".join(missing_fields),
            "was" if len(missing_fields) == 1 else "were",
        )
        warnings.warn(msg, RuntimeWarning, stacklevel=2)

    return svmem(
        total,
        avail,
        percent,
        used,
        free,
        active,
        inactive,
        buffers,
        cached,
        shared,
        slab,
    )


def swap_memory():
    """Return swap memory metrics."""
    mems = {}
    with open_binary('%s/meminfo' % get_procfs_path()) as f:
        for line in f:
            fields = line.split()
            mems[fields[0]] = int(fields[1]) * 1024
    # We prefer /proc/meminfo over sysinfo() syscall so that
    # psutil.PROCFS_PATH can be used in order to allow retrieval
    # for linux containers, see:
    # https://github.com/giampaolo/psutil/issues/1015
    try:
        total = mems[b'SwapTotal:']
        free = mems[b'SwapFree:']
    except KeyError:
        _, _, _, _, total, free, unit_multiplier = cext.linux_sysinfo()
        total *= unit_multiplier
        free *= unit_multiplier

    used = total - free
    percent = usage_percent(used, total, round_=1)
    # get pgin/pgouts
    try:
        f = open_binary("%s/vmstat" % get_procfs_path())
    except IOError as err:
        # see https://github.com/giampaolo/psutil/issues/722
        msg = (
            "'sin' and 'sout' swap memory stats couldn't "
            + "be determined and were set to 0 (%s)" % str(err)
        )
        warnings.warn(msg, RuntimeWarning, stacklevel=2)
        sin = sout = 0
    else:
        with f:
            sin = sout = None
            for line in f:
                # values are expressed in 4 KiB units; we want
                # bytes instead
                if line.startswith(b'pswpin'):
                    sin = int(line.split(b' ')[1]) * 4 * 1024
                elif line.startswith(b'pswpout'):
                    sout = int(line.split(b' ')[1]) * 4 * 1024
                if sin is not None and sout is not None:
                    break
            else:
                # we might get here when dealing with exotic Linux
                # flavors, see:
                # https://github.com/giampaolo/psutil/issues/313
                msg = "'sin' and 'sout' swap memory stats couldn't "
                msg += "be determined and were set to 0"
                warnings.warn(msg, RuntimeWarning, stacklevel=2)
                sin = sout = 0
    return _common.sswap(total, used, free, percent, sin, sout)
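The pswpin/pswpout parsing above, demonstrated on fabricated /proc/vmstat lines (the counters are in 4 KiB pages, hence the `* 4 * 1024`):

```python
# Fabricated /proc/vmstat content; only pswpin/pswpout are of interest.
lines = [b"pgpgin 123\n", b"pswpin 7\n", b"pswpout 3\n"]
sin = sout = None
for line in lines:
    if line.startswith(b"pswpin"):
        sin = int(line.split(b" ")[1]) * 4 * 1024   # pages -> bytes
    elif line.startswith(b"pswpout"):
        sout = int(line.split(b" ")[1]) * 4 * 1024  # pages -> bytes
assert (sin, sout) == (7 * 4096, 3 * 4096)
```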


# =====================================================================
# --- CPU
# =====================================================================


def cpu_times():
    """Return a named tuple representing the following system-wide
    CPU times:
    (user, nice, system, idle, iowait, irq, softirq [steal, [guest,
     [guest_nice]]])
    Last 3 fields may not be available on all Linux kernel versions.
    """
    procfs_path = get_procfs_path()
    set_scputimes_ntuple(procfs_path)
    with open_binary('%s/stat' % procfs_path) as f:
        values = f.readline().split()
    fields = values[1 : len(scputimes._fields) + 1]
    fields = [float(x) / CLOCK_TICKS for x in fields]
    return scputimes(*fields)


def per_cpu_times():
    """Return a list of namedtuple representing the CPU times
    for every CPU available on the system.
    """
    procfs_path = get_procfs_path()
    set_scputimes_ntuple(procfs_path)
    cpus = []
    with open_binary('%s/stat' % procfs_path) as f:
        # get rid of the first line which refers to system wide CPU stats
        f.readline()
        for line in f:
            if line.startswith(b'cpu'):
                values = line.split()
                fields = values[1 : len(scputimes._fields) + 1]
                fields = [float(x) / CLOCK_TICKS for x in fields]
                entry = scputimes(*fields)
                cpus.append(entry)
        return cpus


def cpu_count_logical():
    """Return the number of logical CPUs in the system."""
    try:
        return os.sysconf("SC_NPROCESSORS_ONLN")
    except ValueError:
        # as a second fallback we try to parse /proc/cpuinfo
        num = 0
        with open_binary('%s/cpuinfo' % get_procfs_path()) as f:
            for line in f:
                if line.lower().startswith(b'processor'):
                    num += 1

        # unknown format (e.g. armel/sparc architectures), see:
        # https://github.com/giampaolo/psutil/issues/200
        # try to parse /proc/stat as a last resort
        if num == 0:
            search = re.compile(r'cpu\d')
            with open_text('%s/stat' % get_procfs_path()) as f:
                for line in f:
                    line = line.split(' ')[0]
                    if search.match(line):
                        num += 1

        if num == 0:
            # mimic os.cpu_count()
            return None
        return num


def cpu_count_cores():
    """Return the number of CPU cores in the system."""
    # Method #1
    ls = set()
    # These 2 files are the same but */core_cpus_list is newer while
    # */thread_siblings_list is deprecated and may disappear in the future.
    # https://www.kernel.org/doc/Documentation/admin-guide/cputopology.rst
    # https://github.com/giampaolo/psutil/pull/1727#issuecomment-707624964
    # https://lkml.org/lkml/2019/2/26/41
    p1 = "/sys/devices/system/cpu/cpu[0-9]*/topology/core_cpus_list"
    p2 = "/sys/devices/system/cpu/cpu[0-9]*/topology/thread_siblings_list"
    for path in glob.glob(p1) or glob.glob(p2):
        with open_binary(path) as f:
            ls.add(f.read().strip())
    result = len(ls)
    if result != 0:
        return result

    # Method #2
    mapping = {}
    current_info = {}
    with open_binary('%s/cpuinfo' % get_procfs_path()) as f:
        for line in f:
            line = line.strip().lower()
            if not line:
                # new section
                try:
                    mapping[current_info[b'physical id']] = current_info[
                        b'cpu cores'
                    ]
                except KeyError:
                    pass
                current_info = {}
            else:
                # ongoing section
                if line.startswith((b'physical id', b'cpu cores')):
                    key, value = line.split(b'\t:', 1)
                    current_info[key] = int(value)

    result = sum(mapping.values())
    return result or None  # mimic os.cpu_count()
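Method #2 above keys on "physical id" / "cpu cores" pairs; a sketch of the same parse on a fabricated two-socket /proc/cpuinfo:

```python
# Fabricated /proc/cpuinfo: socket 0 has 4 cores, socket 1 has 2.
data = (b"processor\t: 0\nphysical id\t: 0\ncpu cores\t: 4\n\n"
        b"processor\t: 1\nphysical id\t: 0\ncpu cores\t: 4\n\n"
        b"processor\t: 2\nphysical id\t: 1\ncpu cores\t: 2\n\n")
mapping = {}
current = {}
for line in data.splitlines(True):
    line = line.strip().lower()
    if not line:
        # blank line == end of a per-processor section
        if b'physical id' in current and b'cpu cores' in current:
            mapping[current[b'physical id']] = current[b'cpu cores']
        current = {}
    elif line.startswith((b'physical id', b'cpu cores')):
        key, value = line.split(b'\t:', 1)
        current[key] = int(value)
assert sum(mapping.values()) == 6  # 4 cores on socket 0 + 2 on socket 1
```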


def cpu_stats():
    """Return various CPU stats as a named tuple."""
    with open_binary('%s/stat' % get_procfs_path()) as f:
        ctx_switches = None
        interrupts = None
        soft_interrupts = None
        for line in f:
            if line.startswith(b'ctxt'):
                ctx_switches = int(line.split()[1])
            elif line.startswith(b'intr'):
                interrupts = int(line.split()[1])
            elif line.startswith(b'softirq'):
                soft_interrupts = int(line.split()[1])
            if (
                ctx_switches is not None
                and soft_interrupts is not None
                and interrupts is not None
            ):
                break
    syscalls = 0
    return _common.scpustats(
        ctx_switches, interrupts, soft_interrupts, syscalls
    )


def _cpu_get_cpuinfo_freq():
    """Return current CPU frequency from cpuinfo if available."""
    ret = []
    with open_binary('%s/cpuinfo' % get_procfs_path()) as f:
        for line in f:
            if line.lower().startswith(b'cpu mhz'):
                ret.append(float(line.split(b':', 1)[1]))
    return ret


if os.path.exists("/sys/devices/system/cpu/cpufreq/policy0") or os.path.exists(
    "/sys/devices/system/cpu/cpu0/cpufreq"
):

    def cpu_freq():
        """Return frequency metrics for all CPUs.
        Unlike other OSes, Linux updates these values in
        real time.
        """
        cpuinfo_freqs = _cpu_get_cpuinfo_freq()
        paths = glob.glob(
            "/sys/devices/system/cpu/cpufreq/policy[0-9]*"
        ) or glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq")
        paths.sort(key=lambda x: int(re.search(r"[0-9]+", x).group()))
        ret = []
        pjoin = os.path.join
        for i, path in enumerate(paths):
            if len(paths) == len(cpuinfo_freqs):
                # take cached value from cpuinfo if available, see:
                # https://github.com/giampaolo/psutil/issues/1851
                curr = cpuinfo_freqs[i] * 1000
            else:
                curr = bcat(pjoin(path, "scaling_cur_freq"), fallback=None)
            if curr is None:
                # Likely an old RedHat, see:
                # https://github.com/giampaolo/psutil/issues/1071
                curr = bcat(pjoin(path, "cpuinfo_cur_freq"), fallback=None)
                if curr is None:
                    online_path = (
                        "/sys/devices/system/cpu/cpu{}/online".format(i)
                    )
                    # if cpu core is offline, set to all zeroes
                    if cat(online_path, fallback=None) == "0\n":
                        ret.append(_common.scpufreq(0.0, 0.0, 0.0))
                        continue
                    msg = "can't find current frequency file"
                    raise NotImplementedError(msg)
            curr = int(curr) / 1000
            max_ = int(bcat(pjoin(path, "scaling_max_freq"))) / 1000
            min_ = int(bcat(pjoin(path, "scaling_min_freq"))) / 1000
            ret.append(_common.scpufreq(curr, min_, max_))
        return ret

else:

    def cpu_freq():
        """Alternate implementation using /proc/cpuinfo.
        min and max frequencies are not available and are set to 0.0.
        """
        return [_common.scpufreq(x, 0.0, 0.0) for x in _cpu_get_cpuinfo_freq()]


# =====================================================================
# --- network
# =====================================================================


net_if_addrs = cext_posix.net_if_addrs


class _Ipv6UnsupportedError(Exception):
    pass


class NetConnections:
    """A wrapper on top of /proc/net/* files, retrieving per-process
    and system-wide open connections (TCP, UDP, UNIX) similarly to
    "netstat -an".

    Note: in case of UNIX sockets we're only able to determine the
    local endpoint/path, not the one it's connected to.
    According to [1] it would be possible but not easily.

    [1] http://serverfault.com/a/417946
    """

    def __init__(self):
        # The string represents the basename of the corresponding
        # /proc/net/{proto_name} file.
        tcp4 = ("tcp", socket.AF_INET, socket.SOCK_STREAM)
        tcp6 = ("tcp6", socket.AF_INET6, socket.SOCK_STREAM)
        udp4 = ("udp", socket.AF_INET, socket.SOCK_DGRAM)
        udp6 = ("udp6", socket.AF_INET6, socket.SOCK_DGRAM)
        unix = ("unix", socket.AF_UNIX, None)
        self.tmap = {
            "all": (tcp4, tcp6, udp4, udp6, unix),
            "tcp": (tcp4, tcp6),
            "tcp4": (tcp4,),
            "tcp6": (tcp6,),
            "udp": (udp4, udp6),
            "udp4": (udp4,),
            "udp6": (udp6,),
            "unix": (unix,),
            "inet": (tcp4, tcp6, udp4, udp6),
            "inet4": (tcp4, udp4),
            "inet6": (tcp6, udp6),
        }
        self._procfs_path = None

    def get_proc_inodes(self, pid):
        inodes = defaultdict(list)
        for fd in os.listdir("%s/%s/fd" % (self._procfs_path, pid)):
            try:
                inode = readlink("%s/%s/fd/%s" % (self._procfs_path, pid, fd))
            except (FileNotFoundError, ProcessLookupError):
                # ENOENT == file which is gone in the meantime;
                # os.stat('/proc/%s' % self.pid) will be done later
                # to force NSP (if it's the case)
                continue
            except OSError as err:
                if err.errno == errno.EINVAL:
                    # not a link
                    continue
                if err.errno == errno.ENAMETOOLONG:
                    # file name too long
                    debug(err)
                    continue
                raise
            else:
                if inode.startswith('socket:['):
                    # the process is using a socket
                    inode = inode[8:][:-1]
                    inodes[inode].append((pid, int(fd)))
        return inodes

    def get_all_inodes(self):
        inodes = {}
        for pid in pids():
            try:
                inodes.update(self.get_proc_inodes(pid))
            except (FileNotFoundError, ProcessLookupError, PermissionError):
                # os.listdir() will raise many access denied
                # exceptions for an unprivileged user; that's fine,
                # as we'll just end up returning a connection with PID
                # and fd set to None anyway.
                # Both netstat -an and lsof do the same, so it's
                # unlikely we can do any better.
                # ENOENT just means a PID disappeared on us.
                continue
        return inodes

    @staticmethod
    def decode_address(addr, family):
        """Accept an "ip:port" address as displayed in /proc/net/*
        and convert it into a human readable form, like:

        "0500000A:0016" -> ("10.0.0.5", 22)
        "0000000000000000FFFF00000100007F:9E49" -> ("::ffff:127.0.0.1", 40521)

        The IP address portion is a hexadecimal number in host byte
        order (four bytes for IPv4, sixteen for IPv6); on little-endian
        machines the least significant byte comes first, so the byte
        order must be reversed to obtain the IP address.
        The port is represented as a two-byte hexadecimal number.

        Reference:
        http://linuxdevcenter.com/pub/a/linux/2000/11/16/LinuxAdmin.html
        """
        ip, port = addr.split(':')
        port = int(port, 16)
        # this usually refers to a local socket in listen mode with
        # no end-points connected
        if not port:
            return ()
        if PY3:
            ip = ip.encode('ascii')
        if family == socket.AF_INET:
            # see: https://github.com/giampaolo/psutil/issues/201
            if LITTLE_ENDIAN:
                ip = socket.inet_ntop(family, base64.b16decode(ip)[::-1])
            else:
                ip = socket.inet_ntop(family, base64.b16decode(ip))
        else:  # IPv6
            ip = base64.b16decode(ip)
            try:
                # see: https://github.com/giampaolo/psutil/issues/201
                if LITTLE_ENDIAN:
                    ip = socket.inet_ntop(
                        socket.AF_INET6,
                        struct.pack('>4I', *struct.unpack('<4I', ip)),
                    )
                else:
                    ip = socket.inet_ntop(
                        socket.AF_INET6,
                        struct.pack('<4I', *struct.unpack('<4I', ip)),
                    )
            except ValueError:
                # see: https://github.com/giampaolo/psutil/issues/623
                if not supports_ipv6():
                    raise _Ipv6UnsupportedError
                else:
                    raise
        return _common.addr(ip, port)
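The little-endian conversion described in the docstring above can be sketched in isolation. The helper below is an illustrative, standalone reimplementation of the AF_INET branch only (not part of psutil's API), assuming a little-endian host, which is the common case on x86/ARM Linux:

```python
import base64
import socket


def decode_ipv4_endpoint(addr):
    # Illustrative sketch of decode_address()'s AF_INET branch,
    # assuming a little-endian host.
    ip_hex, port_hex = addr.split(":")
    port = int(port_hex, 16)  # the port is plain big-endian hex
    # /proc/net/tcp stores the IPv4 address in host byte order, so on
    # little-endian machines the four bytes must be reversed first.
    ip = socket.inet_ntop(socket.AF_INET, base64.b16decode(ip_hex)[::-1])
    return ip, port


print(decode_ipv4_endpoint("0500000A:0016"))  # -> ('10.0.0.5', 22)
```

Note that the real method also handles the port-0 listen case and big-endian hosts, which this sketch omits.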

    @staticmethod
    def process_inet(file, family, type_, inodes, filter_pid=None):
        """Parse /proc/net/tcp* and /proc/net/udp* files."""
        if file.endswith('6') and not os.path.exists(file):
            # IPv6 not supported
            return
        with open_text(file) as f:
            f.readline()  # skip the first line
            for lineno, line in enumerate(f, 1):
                try:
                    _, laddr, raddr, status, _, _, _, _, _, inode = (
                        line.split()[:10]
                    )
                except ValueError:
                    raise RuntimeError(
                        "error while parsing %s; malformed line %s %r"
                        % (file, lineno, line)
                    )
                if inode in inodes:
                    # # We assume inet sockets are unique, so we error
                    # # out if there are multiple references to the
                    # # same inode. We won't do this for UNIX sockets.
                    # if len(inodes[inode]) > 1 and family != socket.AF_UNIX:
                    #     raise ValueError("ambiguous inode with multiple "
                    #                      "PIDs references")
                    pid, fd = inodes[inode][0]
                else:
                    pid, fd = None, -1
                if filter_pid is not None and filter_pid != pid:
                    continue
                else:
                    if type_ == socket.SOCK_STREAM:
                        status = TCP_STATUSES[status]
                    else:
                        status = _common.CONN_NONE
                    try:
                        laddr = NetConnections.decode_address(laddr, family)
                        raddr = NetConnections.decode_address(raddr, family)
                    except _Ipv6UnsupportedError:
                        continue
                    yield (fd, family, type_, laddr, raddr, status, pid)

    @staticmethod
    def process_unix(file, family, inodes, filter_pid=None):
        """Parse /proc/net/unix files."""
        with open_text(file) as f:
            f.readline()  # skip the first line
            for line in f:
                tokens = line.split()
                try:
                    _, _, _, _, type_, _, inode = tokens[0:7]
                except ValueError:
                    if ' ' not in line:
                        # see: https://github.com/giampaolo/psutil/issues/766
                        continue
                    raise RuntimeError(
                        "error while parsing %s; malformed line %r"
                        % (file, line)
                    )
                if inode in inodes:  # noqa
                    # With UNIX sockets we can have a single inode
                    # referencing many file descriptors.
                    pairs = inodes[inode]
                else:
                    pairs = [(None, -1)]
                for pid, fd in pairs:
                    if filter_pid is not None and filter_pid != pid:
                        continue
                    else:
                        path = tokens[-1] if len(tokens) == 8 else ''
                        type_ = _common.socktype_to_enum(int(type_))
                        # XXX: determining the remote endpoint of a
                        # UNIX socket on Linux is not possible, see:
                        # https://serverfault.com/questions/252723/
                        raddr = ""
                        status = _common.CONN_NONE
                        yield (fd, family, type_, path, raddr, status, pid)

    def retrieve(self, kind, pid=None):
        if kind not in self.tmap:
            raise ValueError(
                "invalid %r kind argument; choose between %s"
                % (kind, ', '.join([repr(x) for x in self.tmap]))
            )
        self._procfs_path = get_procfs_path()
        if pid is not None:
            inodes = self.get_proc_inodes(pid)
            if not inodes:
                # no connections for this process
                return []
        else:
            inodes = self.get_all_inodes()
        ret = set()
        for proto_name, family, type_ in self.tmap[kind]:
            path = "%s/net/%s" % (self._procfs_path, proto_name)
            if family in (socket.AF_INET, socket.AF_INET6):
                ls = self.process_inet(
                    path, family, type_, inodes, filter_pid=pid
                )
            else:
                ls = self.process_unix(path, family, inodes, filter_pid=pid)
            for fd, family, type_, laddr, raddr, status, bound_pid in ls:
                if pid:
                    conn = _common.pconn(
                        fd, family, type_, laddr, raddr, status
                    )
                else:
                    conn = _common.sconn(
                        fd, family, type_, laddr, raddr, status, bound_pid
                    )
                ret.add(conn)
        return list(ret)


_net_connections = NetConnections()


def net_connections(kind='inet'):
    """Return system-wide open connections."""
    return _net_connections.retrieve(kind)


def net_io_counters():
    """Return network I/O statistics for every network interface
    installed on the system as a dict of raw tuples.
    """
    with open_text("%s/net/dev" % get_procfs_path()) as f:
        lines = f.readlines()
    retdict = {}
    for line in lines[2:]:
        colon = line.rfind(':')
        assert colon > 0, repr(line)
        name = line[:colon].strip()
        fields = line[colon + 1 :].strip().split()

        (
            # in
            bytes_recv,
            packets_recv,
            errin,
            dropin,
            _fifoin,  # unused
            _framein,  # unused
            _compressedin,  # unused
            _multicastin,  # unused
            # out
            bytes_sent,
            packets_sent,
            errout,
            dropout,
            _fifoout,  # unused
            _collisionsout,  # unused
            _carrierout,  # unused
            _compressedout,  # unused
        ) = map(int, fields)

        retdict[name] = (
            bytes_sent,
            bytes_recv,
            packets_sent,
            packets_recv,
            errin,
            errout,
            dropin,
            dropout,
        )
    return retdict
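The colon-based splitting performed by net_io_counters() can be shown on a single line of /proc/net/dev. This is a hypothetical helper for demonstration only; the sample line and the subset of counters returned are illustrative, not psutil's actual return shape:

```python
def parse_dev_line(line):
    # Illustrative sketch of net_io_counters()'s per-line parsing:
    # rfind(':') separates the interface name from the 16 counter
    # columns (8 receive, then 8 transmit) of /proc/net/dev.
    colon = line.rfind(':')
    name = line[:colon].strip()
    fields = [int(x) for x in line[colon + 1:].split()]
    bytes_recv, packets_recv = fields[0], fields[1]
    bytes_sent, packets_sent = fields[8], fields[9]
    return name, bytes_recv, packets_recv, bytes_sent, packets_sent


# A made-up sample line in /proc/net/dev's format:
sample = "  eth0: 1000 10 0 0 0 0 0 0 2000 20 0 0 0 0 0 0"
print(parse_dev_line(sample))  # -> ('eth0', 1000, 10, 2000, 20)
```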


def net_if_stats():
    """Get NIC stats (isup, duplex, speed, mtu)."""
    duplex_map = {
        cext.DUPLEX_FULL: NIC_DUPLEX_FULL,
        cext.DUPLEX_HALF: NIC_DUPLEX_HALF,
        cext.DUPLEX_UNKNOWN: NIC_DUPLEX_UNKNOWN,
    }
    names = net_io_counters().keys()
    ret = {}
    for name in names:
        try:
            mtu = cext_posix.net_if_mtu(name)
            flags = cext_posix.net_if_flags(name)
            duplex, speed = cext.net_if_duplex_speed(name)
        except OSError as err:
            # https://github.com/giampaolo/psutil/issues/1279
            if err.errno != errno.ENODEV:
                raise
            else:
                debug(err)
        else:
            output_flags = ','.join(flags)
            isup = 'running' in flags
            ret[name] = _common.snicstats(
                isup, duplex_map[duplex], speed, mtu, output_flags
            )
    return ret


# =====================================================================
# --- disks
# =====================================================================


disk_usage = _psposix.disk_usage


def disk_io_counters(perdisk=False):
    """Return disk I/O statistics for every disk installed on the
    system as a dict of raw tuples.
    """

    def read_procfs():
        # OK, this is a bit confusing. The format of /proc/diskstats
        # varies across kernel versions.
        # On Linux 2.4 each line always has 15 fields, e.g.:
        # "3     0   8 hda 8 8 8 8 8 8 8 8 8 8 8"
        # On Linux 2.6+ each line *usually* has 14 fields, and the disk
        # name is in another position, like this:
        # "3    0   hda 8 8 8 8 8 8 8 8 8 8 8"
        # ...unless (Linux 2.6) the line refers to a partition instead
        # of a disk, in which case it has fewer fields (7):
        # "3    1   hda1 8 8 8 8"
        # Linux 4.18+ added 4 fields:
        # "3    0   hda 8 8 8 8 8 8 8 8 8 8 8 0 0 0 0"
        # Linux 5.5 added 2 more.
        # See:
        # https://www.kernel.org/doc/Documentation/iostats.txt
        # https://www.kernel.org/doc/Documentation/ABI/testing/procfs-diskstats
        with open_text("%s/diskstats" % get_procfs_path()) as f:
            lines = f.readlines()
        for line in lines:
            fields = line.split()
            flen = len(fields)
            # fmt: off
            if flen == 15:
                # Linux 2.4
                name = fields[3]
                reads = int(fields[2])
                (reads_merged, rbytes, rtime, writes, writes_merged,
                    wbytes, wtime, _, busy_time, _) = map(int, fields[4:14])
            elif flen == 14 or flen >= 18:
                # Linux 2.6+, line referring to a disk
                name = fields[2]
                (reads, reads_merged, rbytes, rtime, writes, writes_merged,
                    wbytes, wtime, _, busy_time, _) = map(int, fields[3:14])
            elif flen == 7:
                # Linux 2.6+, line referring to a partition
                name = fields[2]
                reads, rbytes, writes, wbytes = map(int, fields[3:])
                rtime = wtime = reads_merged = writes_merged = busy_time = 0
            else:
                raise ValueError("not sure how to interpret line %r" % line)
            yield (name, reads, writes, rbytes, wbytes, rtime, wtime,
                   reads_merged, writes_merged, busy_time)
            # fmt: on

    def read_sysfs():
        for block in os.listdir('/sys/block'):
            for root, _, files in os.walk(os.path.join('/sys/block', block)):
                if 'stat' not in files:
                    continue
                with open_text(os.path.join(root, 'stat')) as f:
                    fields = f.read().strip().split()
                name = os.path.basename(root)
                # fmt: off
                (reads, reads_merged, rbytes, rtime, writes, writes_merged,
                    wbytes, wtime, _, busy_time) = map(int, fields[:10])
                yield (name, reads, writes, rbytes, wbytes, rtime,
                       wtime, reads_merged, writes_merged, busy_time)
                # fmt: on

    if os.path.exists('%s/diskstats' % get_procfs_path()):
        gen = read_procfs()
    elif os.path.exists('/sys/block'):
        gen = read_sysfs()
    else:
        raise NotImplementedError(
            "neither %s/diskstats nor /sys/block are available on this "
            "system"
            % get_procfs_path()
        )

    retdict = {}
    for entry in gen:
        # fmt: off
        (name, reads, writes, rbytes, wbytes, rtime, wtime, reads_merged,
            writes_merged, busy_time) = entry
        if not perdisk and not is_storage_device(name):
            # perdisk=False means we want to calculate totals so we skip
            # partitions (e.g. 'sda1', 'nvme0n1p1') and only include
            # base disk devices (e.g. 'sda', 'nvme0n1'). Base disks
            # include a total of all their partitions + some extra size
            # of their own:
            #     $ cat /proc/diskstats
            #     259       0 sda 10485760 ...
            #     259       1 sda1 5186039 ...
            #     259       1 sda2 5082039 ...
            # See:
            # https://github.com/giampaolo/psutil/pull/1313
            continue

        rbytes *= DISK_SECTOR_SIZE
        wbytes *= DISK_SECTOR_SIZE
        retdict[name] = (reads, writes, rbytes, wbytes, rtime, wtime,
                         reads_merged, writes_merged, busy_time)
        # fmt: on

    return retdict
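The field-count dispatch that read_procfs() uses to recognize the /proc/diskstats variants can be illustrated on its own. The helper below is hypothetical and only classifies a line; it does not extract any counters:

```python
def classify_diskstats_line(line):
    # Hypothetical helper mirroring read_procfs()'s dispatch: the
    # number of whitespace-separated fields identifies the kernel
    # generation and whether the line describes a disk or a partition.
    flen = len(line.split())
    if flen == 15:
        return "Linux 2.4 disk"
    elif flen == 14 or flen >= 18:
        return "Linux 2.6+ disk"
    elif flen == 7:
        return "Linux 2.6 partition"
    raise ValueError("not sure how to interpret line %r" % line)


print(classify_diskstats_line("3 1 hda1 8 8 8 8"))  # -> Linux 2.6 partition
```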


class RootFsDeviceFinder:
    """disk_partitions() may return partitions with device == "/dev/root"
    or "rootfs". This container class uses different strategies to try to
    obtain the real device path. Resources:
    https://bootlin.com/blog/find-root-device/
    https://www.systutorials.com/how-to-find-the-disk-where-root-is-on-in-bash-on-linux/.
    """

    __slots__ = ['major', 'minor']

    def __init__(self):
        dev = os.stat("/").st_dev
        self.major = os.major(dev)
        self.minor = os.minor(dev)

    def ask_proc_partitions(self):
        with open_text("%s/partitions" % get_procfs_path()) as f:
            for line in f.readlines()[2:]:
                fields = line.split()
                if len(fields) < 4:  # just for extra safety
                    continue
                major = int(fields[0]) if fields[0].isdigit() else None
                minor = int(fields[1]) if fields[1].isdigit() else None
                name = fields[3]
                if major == self.major and minor == self.minor:
                    if name:  # just for extra safety
                        return "/dev/%s" % name

    def ask_sys_dev_block(self):
        path = "/sys/dev/block/%s:%s/uevent" % (self.major, self.minor)
        with open_text(path) as f:
            for line in f:
                if line.startswith("DEVNAME="):
                    name = line.strip().rpartition("DEVNAME=")[2]
                    if name:  # just for extra safety
                        return "/dev/%s" % name

    def ask_sys_class_block(self):
        needle = "%s:%s" % (self.major, self.minor)
        files = glob.iglob("/sys/class/block/*/dev")
        for file in files:
            try:
                f = open_text(file)
            except FileNotFoundError:  # race condition
                continue
            else:
                with f:
                    data = f.read().strip()
                    if data == needle:
                        name = os.path.basename(os.path.dirname(file))
                        return "/dev/%s" % name

    def find(self):
        try:
            path = self.ask_proc_partitions()
        except (IOError, OSError) as err:
            debug(err)
            path = None
        if path is None:
            try:
                path = self.ask_sys_dev_block()
            except (IOError, OSError) as err:
                debug(err)
        if path is None:
            try:
                path = self.ask_sys_class_block()
            except (IOError, OSError) as err:
                debug(err)
        # We use exists() because the "/dev/*" part of the path is hard
        # coded, so we want to be sure.
        if path is not None and os.path.exists(path):
            return path


def disk_partitions(all=False):
    """Return mounted disk partitions as a list of namedtuples."""
    fstypes = set()
    procfs_path = get_procfs_path()
    if not all:
        with open_text("%s/filesystems" % procfs_path) as f:
            for line in f:
                line = line.strip()
                if not line.startswith("nodev"):
                    fstypes.add(line.strip())
                else:
                    # ignore all lines starting with "nodev" except "nodev zfs"
                    fstype = line.split("\t")[1]
                    if fstype == "zfs":
                        fstypes.add("zfs")

    # See: https://github.com/giampaolo/psutil/issues/1307
    if procfs_path == "/proc" and os.path.isfile('/etc/mtab'):
        mounts_path = os.path.realpath("/etc/mtab")
    else:
        mounts_path = os.path.realpath("%s/self/mounts" % procfs_path)

    retlist = []
    partitions = cext.disk_partitions(mounts_path)
    for partition in partitions:
        device, mountpoint, fstype, opts = partition
        if device == 'none':
            device = ''
        if device in ("/dev/root", "rootfs"):
            device = RootFsDeviceFinder().find() or device
        if not all:
            if not device or fstype not in fstypes:
                continue
        ntuple = _common.sdiskpart(device, mountpoint, fstype, opts)
        retlist.append(ntuple)

    return retlist


# =====================================================================
# --- sensors
# =====================================================================


def sensors_temperatures():
    """Return hardware (CPU and others) temperatures as a dict
    including hardware name, label, current, max and critical
    temperatures.

    Implementation notes:
    - /sys/class/hwmon looks like the most recent interface to
      retrieve this info, and this implementation relies on it
      only (old distros will probably use something else)
    - lm-sensors on Ubuntu 16.04 relies on /sys/class/hwmon
    - /sys/class/thermal/thermal_zone* is another one but it's more
      difficult to parse
    """
    ret = collections.defaultdict(list)
    basenames = glob.glob('/sys/class/hwmon/hwmon*/temp*_*')
    # CentOS has an intermediate /device directory:
    # https://github.com/giampaolo/psutil/issues/971
    # https://github.com/nicolargo/glances/issues/1060
    basenames.extend(glob.glob('/sys/class/hwmon/hwmon*/device/temp*_*'))
    basenames = sorted(set([x.split('_')[0] for x in basenames]))

    # Only add the coretemp hwmon entries if they're not already in
    # /sys/class/hwmon/
    # https://github.com/giampaolo/psutil/issues/1708
    # https://github.com/giampaolo/psutil/pull/1648
    basenames2 = glob.glob(
        '/sys/devices/platform/coretemp.*/hwmon/hwmon*/temp*_*'
    )
    repl = re.compile('/sys/devices/platform/coretemp.*/hwmon/')
    for name in basenames2:
        altname = repl.sub('/sys/class/hwmon/', name)
        if altname not in basenames:
            basenames.append(name)

    for base in basenames:
        try:
            path = base + '_input'
            current = float(bcat(path)) / 1000.0
            path = os.path.join(os.path.dirname(base), 'name')
            unit_name = cat(path).strip()
        except (IOError, OSError, ValueError):
            # A lot of things can go wrong here, so let's just skip the
            # whole entry. Linux's /sys/class/hwmon interface is
            # notoriously inconsistent across drivers and kernels.
            # https://github.com/giampaolo/psutil/issues/1009
            # https://github.com/giampaolo/psutil/issues/1101
            # https://github.com/giampaolo/psutil/issues/1129
            # https://github.com/giampaolo/psutil/issues/1245
            # https://github.com/giampaolo/psutil/issues/1323
            continue

        high = bcat(base + '_max', fallback=None)
        critical = bcat(base + '_crit', fallback=None)
        label = cat(base + '_label', fallback='').strip()

        if high is not None:
            try:
                high = float(high) / 1000.0
            except ValueError:
                high = None
        if critical is not None:
            try:
                critical = float(critical) / 1000.0
            except ValueError:
                critical = None

        ret[unit_name].append((label, current, high, critical))

    # Indication that no sensors were detected in /sys/class/hwmon/
    if not basenames:
        basenames = glob.glob('/sys/class/thermal/thermal_zone*')
        basenames = sorted(set(basenames))

        for base in basenames:
            try:
                path = os.path.join(base, 'temp')
                current = float(bcat(path)) / 1000.0
                path = os.path.join(base, 'type')
                unit_name = cat(path).strip()
            except (IOError, OSError, ValueError) as err:
                debug(err)
                continue

            trip_paths = glob.glob(base + '/trip_point*')
            trip_points = set([
                '_'.join(os.path.basename(p).split('_')[0:3])
                for p in trip_paths
            ])
            critical = None
            high = None
            for trip_point in trip_points:
                path = os.path.join(base, trip_point + "_type")
                trip_type = cat(path, fallback='').strip()
                if trip_type == 'critical':
                    critical = bcat(
                        os.path.join(base, trip_point + "_temp"), fallback=None
                    )
                elif trip_type == 'high':
                    high = bcat(
                        os.path.join(base, trip_point + "_temp"), fallback=None
                    )

                if high is not None:
                    try:
                        high = float(high) / 1000.0
                    except ValueError:
                        high = None
                if critical is not None:
                    try:
                        critical = float(critical) / 1000.0
                    except ValueError:
                        critical = None

            ret[unit_name].append(('', current, high, critical))

    return dict(ret)


def sensors_fans():
    """Return hardware fans info (for CPU and other peripherals) as a
    dict including hardware label and current speed.

    Implementation notes:
    - /sys/class/hwmon looks like the most recent interface to
      retrieve this info, and this implementation relies on it
      only (old distros will probably use something else)
    - lm-sensors on Ubuntu 16.04 relies on /sys/class/hwmon
    """
    ret = collections.defaultdict(list)
    basenames = glob.glob('/sys/class/hwmon/hwmon*/fan*_*')
    if not basenames:
        # CentOS has an intermediate /device directory:
        # https://github.com/giampaolo/psutil/issues/971
        basenames = glob.glob('/sys/class/hwmon/hwmon*/device/fan*_*')

    basenames = sorted(set([x.split('_')[0] for x in basenames]))
    for base in basenames:
        try:
            current = int(bcat(base + '_input'))
        except (IOError, OSError) as err:
            debug(err)
            continue
        unit_name = cat(os.path.join(os.path.dirname(base), 'name')).strip()
        label = cat(base + '_label', fallback='').strip()
        ret[unit_name].append(_common.sfan(label, current))

    return dict(ret)


def sensors_battery():
    """Return battery information.
    Implementation note: it appears /sys/class/power_supply/BAT0/
    directory structure may vary and provide files with the same
    meaning but under different names, see:
    https://github.com/giampaolo/psutil/issues/966.
    """
    null = object()

    def multi_bcat(*paths):
        """Attempt to read the content of multiple files which may
        not exist. If none of them exist return None.
        """
        for path in paths:
            ret = bcat(path, fallback=null)
            if ret is not null:
                try:
                    return int(ret)
                except ValueError:
                    return ret.strip()
        return None

    bats = [
        x
        for x in os.listdir(POWER_SUPPLY_PATH)
        if x.startswith('BAT') or 'battery' in x.lower()
    ]
    if not bats:
        return None
    # Get the first available battery. Usually this is "BAT0", except
    # some rare exceptions:
    # https://github.com/giampaolo/psutil/issues/1238
    root = os.path.join(POWER_SUPPLY_PATH, sorted(bats)[0])

    # Base metrics.
    energy_now = multi_bcat(root + "/energy_now", root + "/charge_now")
    power_now = multi_bcat(root + "/power_now", root + "/current_now")
    energy_full = multi_bcat(root + "/energy_full", root + "/charge_full")
    time_to_empty = multi_bcat(root + "/time_to_empty_now")

    # Percent. If we have energy_full the percentage will be more
    # accurate compared to reading /capacity file (float vs. int).
    if energy_full is not None and energy_now is not None:
        try:
            percent = 100.0 * energy_now / energy_full
        except ZeroDivisionError:
            percent = 0.0
    else:
        percent = int(cat(root + "/capacity", fallback=-1))
        if percent == -1:
            return None

    # Is AC power cable plugged in?
    # Note: AC0 is not always available and sometimes (e.g. CentOS7)
    # it's called "AC".
    power_plugged = None
    online = multi_bcat(
        os.path.join(POWER_SUPPLY_PATH, "AC0/online"),
        os.path.join(POWER_SUPPLY_PATH, "AC/online"),
    )
    if online is not None:
        power_plugged = online == 1
    else:
        status = cat(root + "/status", fallback="").strip().lower()
        if status == "discharging":
            power_plugged = False
        elif status in ("charging", "full"):
            power_plugged = True

    # Seconds left.
    # Note to self: we may also calculate the charging ETA as per:
    # https://github.com/thialfihar/dotfiles/blob/
    #     013937745fd9050c30146290e8f963d65c0179e6/bin/battery.py#L55
    if power_plugged:
        secsleft = _common.POWER_TIME_UNLIMITED
    elif energy_now is not None and power_now is not None:
        try:
            secsleft = int(energy_now / power_now * 3600)
        except ZeroDivisionError:
            secsleft = _common.POWER_TIME_UNKNOWN
    elif time_to_empty is not None:
        secsleft = int(time_to_empty * 60)
        if secsleft < 0:
            secsleft = _common.POWER_TIME_UNKNOWN
    else:
        secsleft = _common.POWER_TIME_UNKNOWN

    return _common.sbattery(percent, secsleft, power_plugged)


# =====================================================================
# --- other system functions
# =====================================================================


def users():
    """Return currently connected users as a list of namedtuples."""
    retlist = []
    rawlist = cext.users()
    for item in rawlist:
        user, tty, hostname, tstamp, pid = item
        nt = _common.suser(user, tty or None, hostname, tstamp, pid)
        retlist.append(nt)
    return retlist


def boot_time():
    """Return the system boot time expressed in seconds since the epoch."""
    global BOOT_TIME
    path = '%s/stat' % get_procfs_path()
    with open_binary(path) as f:
        for line in f:
            if line.startswith(b'btime'):
                ret = float(line.strip().split()[1])
                BOOT_TIME = ret
                return ret
        raise RuntimeError("line 'btime' not found in %s" % path)


# =====================================================================
# --- processes
# =====================================================================


def pids():
    """Returns a list of PIDs currently running on the system."""
    return [int(x) for x in os.listdir(b(get_procfs_path())) if x.isdigit()]


def pid_exists(pid):
    """Check for the existence of a unix PID. Linux TIDs are not
    supported (always return False).
    """
    if not _psposix.pid_exists(pid):
        return False
    else:
        # Linux apparently does not distinguish between PIDs and TIDs
        # (thread IDs).
        # listdir("/proc") won't show any TID (only PIDs) but
        # os.stat("/proc/{tid}") will succeed if {tid} exists.
        # os.kill() can also be passed a TID. This is quite confusing.
        # In here we want to enforce this distinction and support PIDs
        # only, see:
        # https://github.com/giampaolo/psutil/issues/687
        try:
            # Note: already checked that this is faster than using a
            # regular expr. Also (a lot) faster than doing
            # 'return pid in pids()'
            path = "%s/%s/status" % (get_procfs_path(), pid)
            with open_binary(path) as f:
                for line in f:
                    if line.startswith(b"Tgid:"):
                        tgid = int(line.split()[1])
                        # If tgid and pid are the same then we're
                        # dealing with a process PID.
                        return tgid == pid
                raise ValueError("'Tgid' line not found in %s" % path)
        except (EnvironmentError, ValueError):
            return pid in pids()
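The Tgid check that pid_exists() relies on can be shown in isolation. The helper below is an illustrative sketch operating on /proc/{id}/status content passed in as bytes; the sample content is made up:

```python
def is_process_pid(status_text, candidate):
    # Illustrative sketch of pid_exists()'s Tgid check: in
    # /proc/{id}/status, Tgid == id means {id} is a process PID;
    # Tgid != id means {id} is a thread ID (TID).
    for line in status_text.splitlines():
        if line.startswith(b"Tgid:"):
            return int(line.split()[1]) == candidate
    raise ValueError("'Tgid' line not found")


print(is_process_pid(b"Name:\tbash\nTgid:\t100\nPid:\t100\n", 100))  # -> True
```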


def ppid_map():
    """Obtain a {pid: ppid, ...} dict for all running processes in
    one shot. Used to speed up Process.children().
    """
    ret = {}
    procfs_path = get_procfs_path()
    for pid in pids():
        try:
            with open_binary("%s/%s/stat" % (procfs_path, pid)) as f:
                data = f.read()
        except (FileNotFoundError, ProcessLookupError):
            # Note: we should be able to access /stat for all
            # processes, so it's unlikely we'll bump into EPERM,
            # which is good.
            pass
        else:
            rpar = data.rfind(b')')
            dset = data[rpar + 2 :].split()
            ppid = int(dset[1])
            ret[pid] = ppid
    return ret
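The /proc/{pid}/stat parsing trick used by ppid_map() (and again by _is_zombie() below) deserves a worked example: searching backwards for ')' is what makes process names containing parentheses safe to parse. The payload below is made up:

```python
def ppid_from_stat(data):
    # Sketch of ppid_map()'s parsing: the process name is enclosed in
    # parentheses and may itself contain ')', so rfind(b')') locates
    # its true end; after the one-letter state field, the next field
    # is the parent PID.
    rpar = data.rfind(b')')
    fields = data[rpar + 2:].split()
    return int(fields[1])


# A made-up /proc/{pid}/stat payload with a tricky process name:
print(ppid_from_stat(b"42 (my (app)) S 7 42 42 0"))  # -> 7
```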


def wrap_exceptions(fun):
    """Decorator which translates bare OSError and IOError exceptions
    into NoSuchProcess and AccessDenied.
    """

    @functools.wraps(fun)
    def wrapper(self, *args, **kwargs):
        try:
            return fun(self, *args, **kwargs)
        except PermissionError:
            raise AccessDenied(self.pid, self._name)
        except ProcessLookupError:
            self._raise_if_zombie()
            raise NoSuchProcess(self.pid, self._name)
        except FileNotFoundError:
            self._raise_if_zombie()
            if not os.path.exists("%s/%s" % (self._procfs_path, self.pid)):
                raise NoSuchProcess(self.pid, self._name)
            raise

    return wrapper


class Process:
    """Linux process implementation."""

    __slots__ = ["_cache", "_name", "_ppid", "_procfs_path", "pid"]

    def __init__(self, pid):
        self.pid = pid
        self._name = None
        self._ppid = None
        self._procfs_path = get_procfs_path()

    def _is_zombie(self):
        # Note: most of the time Linux is able to return info about the
        # process even if it's a zombie, and /proc/{pid} will exist.
        # There are some exceptions though, like exe(), cmdline() and
        # memory_maps(). In these cases /proc/{pid}/{file} exists but
        # it's empty. Instead of returning a "null" value we'll raise an
        # exception.
        try:
            data = bcat("%s/%s/stat" % (self._procfs_path, self.pid))
        except (IOError, OSError):
            return False
        else:
            rpar = data.rfind(b')')
            status = data[rpar + 2 : rpar + 3]
            return status == b"Z"

    def _raise_if_zombie(self):
        if self._is_zombie():
            raise ZombieProcess(self.pid, self._name, self._ppid)

    def _raise_if_not_alive(self):
        """Raise NSP if the process disappeared on us."""
        # For those C functions which do not raise NSP and may otherwise
        # return an incorrect or incomplete result.
        os.stat('%s/%s' % (self._procfs_path, self.pid))

    @wrap_exceptions
    @memoize_when_activated
    def _parse_stat_file(self):
        """Parse /proc/{pid}/stat file and return a dict with various
        process info.
        Using "man proc" as a reference: where "man proc" refers to
        position N, always subtract 3 (e.g. ppid is position 4 in
        'man proc' == position 1 in here).
        The return value is cached in case oneshot() ctx manager is
        in use.
        """
        data = bcat("%s/%s/stat" % (self._procfs_path, self.pid))
        # Process name is between parentheses. It can contain spaces and
        # other parentheses. This is taken into account by looking for
        # the first occurrence of "(" and the last occurrence of ")".
        rpar = data.rfind(b')')
        name = data[data.find(b'(') + 1 : rpar]
        fields = data[rpar + 2 :].split()

        ret = {}
        ret['name'] = name
        ret['status'] = fields[0]
        ret['ppid'] = fields[1]
        ret['ttynr'] = fields[4]
        ret['utime'] = fields[11]
        ret['stime'] = fields[12]
        ret['children_utime'] = fields[13]
        ret['children_stime'] = fields[14]
        ret['create_time'] = fields[19]
        ret['cpu_num'] = fields[36]
        try:
            ret['blkio_ticks'] = fields[39]  # aka 'delayacct_blkio_ticks'
        except IndexError:
            # https://github.com/giampaolo/psutil/issues/2455
            debug("can't get blkio_ticks, set iowait to 0")
            ret['blkio_ticks'] = 0

        return ret

    @wrap_exceptions
    @memoize_when_activated
    def _read_status_file(self):
        """Read /proc/{pid}/status file and return its content.
        The return value is cached in case oneshot() ctx manager is
        in use.
        """
        with open_binary("%s/%s/status" % (self._procfs_path, self.pid)) as f:
            return f.read()

    @wrap_exceptions
    @memoize_when_activated
    def _read_smaps_file(self):
        with open_binary("%s/%s/smaps" % (self._procfs_path, self.pid)) as f:
            return f.read().strip()

    def oneshot_enter(self):
        self._parse_stat_file.cache_activate(self)
        self._read_status_file.cache_activate(self)
        self._read_smaps_file.cache_activate(self)

    def oneshot_exit(self):
        self._parse_stat_file.cache_deactivate(self)
        self._read_status_file.cache_deactivate(self)
        self._read_smaps_file.cache_deactivate(self)

    @wrap_exceptions
    def name(self):
        name = self._parse_stat_file()['name']
        if PY3:
            name = decode(name)
        # XXX - gets changed later and probably needs refactoring
        return name

    @wrap_exceptions
    def exe(self):
        try:
            return readlink("%s/%s/exe" % (self._procfs_path, self.pid))
        except (FileNotFoundError, ProcessLookupError):
            self._raise_if_zombie()
            # no such file error; may also be raised even if the
            # path actually exists, for system processes with
            # low pids (about 0-20)
            if os.path.lexists("%s/%s" % (self._procfs_path, self.pid)):
                return ""
            raise

    @wrap_exceptions
    def cmdline(self):
        with open_text("%s/%s/cmdline" % (self._procfs_path, self.pid)) as f:
            data = f.read()
        if not data:
            # may happen in case of zombie process
            self._raise_if_zombie()
            return []
        # 'man proc' states that args are separated by null bytes '\0'
        # and last char is supposed to be a null byte. Nevertheless
        # some processes may change their cmdline after being started
        # (via setproctitle() or similar), they are usually not
        # compliant with this rule and use spaces instead. Google
        # Chrome process is an example. See:
        # https://github.com/giampaolo/psutil/issues/1179
        sep = '\x00' if data.endswith('\x00') else ' '
        if data.endswith(sep):
            data = data[:-1]
        cmdline = data.split(sep)
        # Sometimes last char is a null byte '\0' but the args are
        # separated by spaces, see: https://github.com/giampaolo/psutil/
        # issues/1179#issuecomment-552984549
        if sep == '\x00' and len(cmdline) == 1 and ' ' in data:
            cmdline = data.split(' ')
        return cmdline
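
    # A few concrete cases of the separator heuristic above
    # (illustrative inputs, not taken from a real process):
    #
    #     b"arg1\0arg2\0"  -> sep '\0' -> ["arg1", "arg2"]
    #     b"arg1 arg2 "    -> sep ' '  -> ["arg1", "arg2"]
    #     b"arg1 arg2\0"   -> sep '\0' yields a single item containing
    #                         a space, so fall back to splitting on ' '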

    @wrap_exceptions
    def environ(self):
        with open_text("%s/%s/environ" % (self._procfs_path, self.pid)) as f:
            data = f.read()
        return parse_environ_block(data)

    @wrap_exceptions
    def terminal(self):
        tty_nr = int(self._parse_stat_file()['ttynr'])
        tmap = _psposix.get_terminal_map()
        try:
            return tmap[tty_nr]
        except KeyError:
            return None

    # May not be available on old kernels.
    if os.path.exists('/proc/%s/io' % os.getpid()):

        @wrap_exceptions
        def io_counters(self):
            fname = "%s/%s/io" % (self._procfs_path, self.pid)
            fields = {}
            with open_binary(fname) as f:
                for line in f:
                    # https://github.com/giampaolo/psutil/issues/1004
                    line = line.strip()
                    if line:
                        try:
                            name, value = line.split(b': ')
                        except ValueError:
                            # https://github.com/giampaolo/psutil/issues/1004
                            continue
                        else:
                            fields[name] = int(value)
            if not fields:
                raise RuntimeError("%s file was empty" % fname)
            try:
                return pio(
                    fields[b'syscr'],  # read syscalls
                    fields[b'syscw'],  # write syscalls
                    fields[b'read_bytes'],  # read bytes
                    fields[b'write_bytes'],  # write bytes
                    fields[b'rchar'],  # read chars
                    fields[b'wchar'],  # write chars
                )
            except KeyError as err:
                raise ValueError(
                    "%r field was not found in %s; found fields are %r"
                    % (err.args[0], fname, fields)
                )
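
        # For reference, /proc/{pid}/io typically looks like this
        # (values made up for illustration):
        #
        #     rchar: 300844
        #     wchar: 0
        #     syscr: 202
        #     syscw: 0
        #     read_bytes: 45056
        #     write_bytes: 0
        #     cancelled_write_bytes: 0
        #
        # Each "name: value" line is parsed into the `fields` dict above.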

    @wrap_exceptions
    def cpu_times(self):
        values = self._parse_stat_file()
        utime = float(values['utime']) / CLOCK_TICKS
        stime = float(values['stime']) / CLOCK_TICKS
        children_utime = float(values['children_utime']) / CLOCK_TICKS
        children_stime = float(values['children_stime']) / CLOCK_TICKS
        iowait = float(values['blkio_ticks']) / CLOCK_TICKS
        return pcputimes(utime, stime, children_utime, children_stime, iowait)

    @wrap_exceptions
    def cpu_num(self):
        """What CPU the process is on."""
        return int(self._parse_stat_file()['cpu_num'])

    @wrap_exceptions
    def wait(self, timeout=None):
        return _psposix.wait_pid(self.pid, timeout, self._name)

    @wrap_exceptions
    def create_time(self):
        ctime = float(self._parse_stat_file()['create_time'])
        # According to the documentation, starttime is in field 21 and
        # the unit is jiffies (clock ticks).
        # We first divide it by clock ticks and then add the boot time,
        # returning seconds since the epoch.
        # Also use cached value if available.
        bt = BOOT_TIME or boot_time()
        return (ctime / CLOCK_TICKS) + bt
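
    # Example with made-up numbers: with CLOCK_TICKS == 100, a raw
    # starttime of 123456 jiffies and a boot time of 1700000000.0
    # (seconds since the epoch), the value returned above is:
    #
    #     123456 / 100 + 1700000000.0 == 1700001234.56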

    @wrap_exceptions
    def memory_info(self):
        #  ============================================================
        # | FIELD  | DESCRIPTION                         | AKA  | TOP  |
        #  ============================================================
        # | rss    | resident set size                   |      | RES  |
        # | vms    | total program size                  | size | VIRT |
        # | shared | shared pages (from shared mappings) |      | SHR  |
        # | text   | text ('code')                       | trs  | CODE |
        # | lib    | library (unused in Linux 2.6)       | lrs  |      |
        # | data   | data + stack                        | drs  | DATA |
        # | dirty  | dirty pages (unused in Linux 2.6)   | dt   |      |
        #  ============================================================
        with open_binary("%s/%s/statm" % (self._procfs_path, self.pid)) as f:
            vms, rss, shared, text, lib, data, dirty = (
                int(x) * PAGESIZE for x in f.readline().split()[:7]
            )
        return pmem(rss, vms, shared, text, lib, data, dirty)

    if HAS_PROC_SMAPS_ROLLUP or HAS_PROC_SMAPS:

        def _parse_smaps_rollup(self):
            # /proc/pid/smaps_rollup was added to Linux in 2017. Faster
            # than /proc/pid/smaps. It reports higher PSS than */smaps
            # (from 1k up to 200k higher; tested against all processes).
            # IMPORTANT: /proc/pid/smaps_rollup is weird, because it
            # raises ESRCH / ENOENT for many PIDs, even if they're alive
            # (also as root). In that case we'll use /proc/pid/smaps as
            # fallback, which is slower but has a +50% success rate
            # compared to /proc/pid/smaps_rollup.
            uss = pss = swap = 0
            with open_binary(
                "{}/{}/smaps_rollup".format(self._procfs_path, self.pid)
            ) as f:
                for line in f:
                    if line.startswith(b"Private_"):
                        # Private_Clean, Private_Dirty, Private_Hugetlb
                        uss += int(line.split()[1]) * 1024
                    elif line.startswith(b"Pss:"):
                        pss = int(line.split()[1]) * 1024
                    elif line.startswith(b"Swap:"):
                        swap = int(line.split()[1]) * 1024
            return (uss, pss, swap)

        @wrap_exceptions
        def _parse_smaps(
            self,
            # Gets Private_Clean, Private_Dirty, Private_Hugetlb.
            _private_re=re.compile(br"\nPrivate.*:\s+(\d+)"),
            _pss_re=re.compile(br"\nPss\:\s+(\d+)"),
            _swap_re=re.compile(br"\nSwap\:\s+(\d+)"),
        ):
            # /proc/pid/smaps does not exist on kernels < 2.6.14 or if
            # CONFIG_MMU kernel configuration option is not enabled.

            # Note: using 3 regexes is faster than reading the file
            # line by line.
            # XXX: on Python 3 the regexes are 30% slower than on
            # Python 2 though. Figure out why.
            #
            # You might be tempted to calculate USS by subtracting
            # the "shared" value from the "resident" value in
            # /proc/<pid>/statm. But at least on Linux, statm's "shared"
            # value actually counts pages backed by files, which has
            # little to do with whether the pages are actually shared.
            # /proc/self/smaps on the other hand appears to give us the
            # correct information.
            smaps_data = self._read_smaps_file()
            # Note: the smaps file can be empty for certain processes.
            # The code below will not crash though, and will just
            # return 0 for all values.
            uss = sum(map(int, _private_re.findall(smaps_data))) * 1024
            pss = sum(map(int, _pss_re.findall(smaps_data))) * 1024
            swap = sum(map(int, _swap_re.findall(smaps_data))) * 1024
            return (uss, pss, swap)
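
        # Sketch of the regex summation above, on a made-up two-mapping
        # excerpt of smaps data:
        #
        #     b"...\nPrivate_Clean:  4 kB\n...\nPrivate_Dirty:  8 kB\n..."
        #
        # _private_re matches both values, so uss would be
        # (4 + 8) * 1024 == 12288 bytes; Pss and Swap are summed the
        # same way across all mappings.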

        @wrap_exceptions
        def memory_full_info(self):
            if HAS_PROC_SMAPS_ROLLUP:  # faster
                try:
                    uss, pss, swap = self._parse_smaps_rollup()
                except (ProcessLookupError, FileNotFoundError):
                    uss, pss, swap = self._parse_smaps()
            else:
                uss, pss, swap = self._parse_smaps()
            basic_mem = self.memory_info()
            return pfullmem(*basic_mem + (uss, pss, swap))

    else:
        memory_full_info = memory_info

    if HAS_PROC_SMAPS:

        @wrap_exceptions
        def memory_maps(self):
            """Return process's mapped memory regions as a list of named
            tuples. Fields are explained in 'man proc'; here is an updated
            (Apr 2012) version: http://goo.gl/fmebo.

            /proc/{PID}/smaps does not exist on kernels < 2.6.14 or if
            CONFIG_MMU kernel configuration option is not enabled.
            """

            def get_blocks(lines, current_block):
                data = {}
                for line in lines:
                    fields = line.split(None, 5)
                    if not fields[0].endswith(b':'):
                        # new block section
                        yield (current_block.pop(), data)
                        current_block.append(line)
                    else:
                        try:
                            data[fields[0]] = int(fields[1]) * 1024
                        except ValueError:
                            if fields[0].startswith(b'VmFlags:'):
                                # see issue #369
                                continue
                            else:
                                raise ValueError(
                                    "don't know how to interpret line %r"
                                    % line
                                )
                yield (current_block.pop(), data)

            data = self._read_smaps_file()
            # Note: smaps file can be empty for certain processes or for
            # zombies.
            if not data:
                self._raise_if_zombie()
                return []
            lines = data.split(b'\n')
            ls = []
            first_line = lines.pop(0)
            current_block = [first_line]
            for header, data in get_blocks(lines, current_block):
                hfields = header.split(None, 5)
                try:
                    addr, perms, _offset, _dev, _inode, path = hfields
                except ValueError:
                    addr, perms, _offset, _dev, _inode, path = hfields + ['']
                if not path:
                    path = '[anon]'
                else:
                    if PY3:
                        path = decode(path)
                    path = path.strip()
                    if path.endswith(' (deleted)') and not path_exists_strict(
                        path
                    ):
                        path = path[:-10]
                item = (
                    decode(addr),
                    decode(perms),
                    path,
                    data.get(b'Rss:', 0),
                    data.get(b'Size:', 0),
                    data.get(b'Pss:', 0),
                    data.get(b'Shared_Clean:', 0),
                    data.get(b'Shared_Dirty:', 0),
                    data.get(b'Private_Clean:', 0),
                    data.get(b'Private_Dirty:', 0),
                    data.get(b'Referenced:', 0),
                    data.get(b'Anonymous:', 0),
                    data.get(b'Swap:', 0),
                )
                ls.append(item)
            return ls

    @wrap_exceptions
    def cwd(self):
        return readlink("%s/%s/cwd" % (self._procfs_path, self.pid))

    @wrap_exceptions
    def num_ctx_switches(
        self, _ctxsw_re=re.compile(br'ctxt_switches:\t(\d+)')
    ):
        data = self._read_status_file()
        ctxsw = _ctxsw_re.findall(data)
        if not ctxsw:
            raise NotImplementedError(
                "'voluntary_ctxt_switches' and 'nonvoluntary_ctxt_switches' "
                "lines were not found in %s/%s/status; the kernel is "
                "probably older than 2.6.23" % (self._procfs_path, self.pid)
            )
        else:
            return _common.pctxsw(int(ctxsw[0]), int(ctxsw[1]))

    @wrap_exceptions
    def num_threads(self, _num_threads_re=re.compile(br'Threads:\t(\d+)')):
        # Note: on Python 3 using a regex is faster than iterating over
        # the file line by line. On Python 2 it's the exact opposite, and
        # iterating over a file on Python 3 is slower than on Python 2.
        data = self._read_status_file()
        return int(_num_threads_re.findall(data)[0])

    @wrap_exceptions
    def threads(self):
        thread_ids = os.listdir("%s/%s/task" % (self._procfs_path, self.pid))
        thread_ids.sort()
        retlist = []
        hit_enoent = False
        for thread_id in thread_ids:
            fname = "%s/%s/task/%s/stat" % (
                self._procfs_path,
                self.pid,
                thread_id,
            )
            try:
                with open_binary(fname) as f:
                    st = f.read().strip()
            except (FileNotFoundError, ProcessLookupError):
                # no such file or directory or no such process;
                # it means thread disappeared on us
                hit_enoent = True
                continue
            # ignore the first two values ("pid (exe)")
            st = st[st.find(b')') + 2 :]
            values = st.split(b' ')
            utime = float(values[11]) / CLOCK_TICKS
            stime = float(values[12]) / CLOCK_TICKS
            ntuple = _common.pthread(int(thread_id), utime, stime)
            retlist.append(ntuple)
        if hit_enoent:
            self._raise_if_not_alive()
        return retlist

    @wrap_exceptions
    def nice_get(self):
        # with open_text('%s/%s/stat' % (self._procfs_path, self.pid)) as f:
        #   data = f.read()
        #   return int(data.split()[18])

        # Use C implementation
        return cext_posix.getpriority(self.pid)

    @wrap_exceptions
    def nice_set(self, value):
        return cext_posix.setpriority(self.pid, value)

    # starting from CentOS 6.
    if HAS_CPU_AFFINITY:

        @wrap_exceptions
        def cpu_affinity_get(self):
            return cext.proc_cpu_affinity_get(self.pid)

        def _get_eligible_cpus(
            self, _re=re.compile(br"Cpus_allowed_list:\t(\d+)-(\d+)")
        ):
            # See: https://github.com/giampaolo/psutil/issues/956
            data = self._read_status_file()
            match = _re.findall(data)
            if match:
                return list(range(int(match[0][0]), int(match[0][1]) + 1))
            else:
                return list(range(len(per_cpu_times())))

        @wrap_exceptions
        def cpu_affinity_set(self, cpus):
            try:
                cext.proc_cpu_affinity_set(self.pid, cpus)
            except (OSError, ValueError) as err:
                if isinstance(err, ValueError) or err.errno == errno.EINVAL:
                    eligible_cpus = self._get_eligible_cpus()
                    all_cpus = tuple(range(len(per_cpu_times())))
                    for cpu in cpus:
                        if cpu not in all_cpus:
                            raise ValueError(
                                "invalid CPU number %r; choose between %s"
                                % (cpu, eligible_cpus)
                            )
                        if cpu not in eligible_cpus:
                            raise ValueError(
                                "CPU number %r is not eligible; choose "
                                "between %s" % (cpu, eligible_cpus)
                            )
                raise

    # only starting from kernel 2.6.13
    if HAS_PROC_IO_PRIORITY:

        @wrap_exceptions
        def ionice_get(self):
            ioclass, value = cext.proc_ioprio_get(self.pid)
            if enum is not None:
                ioclass = IOPriority(ioclass)
            return _common.pionice(ioclass, value)

        @wrap_exceptions
        def ionice_set(self, ioclass, value):
            if value is None:
                value = 0
            if value and ioclass in (IOPRIO_CLASS_IDLE, IOPRIO_CLASS_NONE):
                raise ValueError("%r ioclass accepts no value" % ioclass)
            if value < 0 or value > 7:
                msg = "value not in 0-7 range"
                raise ValueError(msg)
            return cext.proc_ioprio_set(self.pid, ioclass, value)

    if prlimit is not None:

        @wrap_exceptions
        def rlimit(self, resource_, limits=None):
            # If pid is 0 prlimit() applies to the calling process and
            # we don't want that. We should never get here though as
            # PID 0 is not supported on Linux.
            if self.pid == 0:
                msg = "can't use prlimit() against PID 0 process"
                raise ValueError(msg)
            try:
                if limits is None:
                    # get
                    return prlimit(self.pid, resource_)
                else:
                    # set
                    if len(limits) != 2:
                        msg = (
                            "second argument must be a (soft, hard) "
                            + "tuple, got %s" % repr(limits)
                        )
                        raise ValueError(msg)
                    prlimit(self.pid, resource_, limits)
            except OSError as err:
                if err.errno == errno.ENOSYS:
                    # I saw this happening on Travis:
                    # https://travis-ci.org/giampaolo/psutil/jobs/51368273
                    self._raise_if_zombie()
                raise

    @wrap_exceptions
    def status(self):
        letter = self._parse_stat_file()['status']
        if PY3:
            letter = letter.decode()
        # XXX is '?' legit? (we're not supposed to return it anyway)
        return PROC_STATUSES.get(letter, '?')

    @wrap_exceptions
    def open_files(self):
        retlist = []
        files = os.listdir("%s/%s/fd" % (self._procfs_path, self.pid))
        hit_enoent = False
        for fd in files:
            file = "%s/%s/fd/%s" % (self._procfs_path, self.pid, fd)
            try:
                path = readlink(file)
            except (FileNotFoundError, ProcessLookupError):
                # ENOENT == file which is gone in the meantime
                hit_enoent = True
                continue
            except OSError as err:
                if err.errno == errno.EINVAL:
                    # not a link
                    continue
                if err.errno == errno.ENAMETOOLONG:
                    # file name too long
                    debug(err)
                    continue
                raise
            else:
                # If path is not absolute there's no way to tell
                # whether it's a regular file or not, so we skip it.
                # A regular file is always supposed to have an
                # absolute path though.
                if path.startswith('/') and isfile_strict(path):
                    # Get file position and flags.
                    file = "%s/%s/fdinfo/%s" % (
                        self._procfs_path,
                        self.pid,
                        fd,
                    )
                    try:
                        with open_binary(file) as f:
                            pos = int(f.readline().split()[1])
                            flags = int(f.readline().split()[1], 8)
                    except (FileNotFoundError, ProcessLookupError):
                        # fd gone in the meantime; process may
                        # still be alive
                        hit_enoent = True
                    else:
                        mode = file_flags_to_mode(flags)
                        ntuple = popenfile(
                            path, int(fd), int(pos), mode, flags
                        )
                        retlist.append(ntuple)
        if hit_enoent:
            self._raise_if_not_alive()
        return retlist

    @wrap_exceptions
    def net_connections(self, kind='inet'):
        ret = _net_connections.retrieve(kind, self.pid)
        self._raise_if_not_alive()
        return ret

    @wrap_exceptions
    def num_fds(self):
        return len(os.listdir("%s/%s/fd" % (self._procfs_path, self.pid)))

    @wrap_exceptions
    def ppid(self):
        return int(self._parse_stat_file()['ppid'])

    @wrap_exceptions
    def uids(self, _uids_re=re.compile(br'Uid:\t(\d+)\t(\d+)\t(\d+)')):
        data = self._read_status_file()
        real, effective, saved = _uids_re.findall(data)[0]
        return _common.puids(int(real), int(effective), int(saved))

    @wrap_exceptions
    def gids(self, _gids_re=re.compile(br'Gid:\t(\d+)\t(\d+)\t(\d+)')):
        data = self._read_status_file()
        real, effective, saved = _gids_re.findall(data)[0]
        return _common.pgids(int(real), int(effective), int(saved))
valuecould not allocate a large enough CPU setsequence argument expected, got %R:0.0localhostOOOdi(kkkkkkI)socket()ioctl(SIOCETHTOOL)[ii];�l��<��,���ll�������l�����,���D���p����������$����L���L��\�|�$��<L�|\�����|���,|�H���<����4��dzRx�$`��FJw�?;*3$"<D���B�B�D �G�j
 CBBHO CBB<�h���B�M�A �G�L
 CABGO CAB<�����B�M�A �G�L
 CABGO CAB��jD q
KF
J$X��WD F
FFD���)Dd(\����A�PP{
ACXA0�4���A�PP�
AIX
AGYA����aD H
D`�4��K�D�E �B(�A0�A8�G�\�H�P�Z
8A0A(B BBBI�������X<����B�D�B �B(�A0�A8�D`]hLpXhA`l
8A0A(B BBBH0�D��vA�D�D0S
AABSAA@�����B�D�A �G`y
 DBBA�
 DBBKL��
$H��DP<P��WTB<T���iA�D�G O
DAHO
DALXCA����B�������B��
FHL�8�zB�D�B �B(�A0�D8�DP�
8D0A(B BBBD$h�tD W
ESD��yD \
HH`,�&B�N�H �H(�A0�A8�DP�
8D0A(B BBBAL���B�N�B �B(�A0�A8�G��
8A0A(B BBBCL�P�HB�D�B �B(�A0�A8�D@A
8D0A(B BBBB,LP�dD�X�F�_�A�D
JS@|��QB�M�G��
DBCp
DEGZDB�$�$�m�� 
E�m�m���o`� 
�p �(	���ox���o�o����o�m6 F V f v � � � � � � � � !!&!6!F!V!f!v!�!�!�!�!�!�!�!�!""&"6"F"V"f"v"�"�"�"�"�"�"�"�"##&#6#F#V#f#v#�#�#�#�#�#�#�#�#$$&$MR���������r[R6gR�.sR�*�R/�R`(�R�'�R@)�R�������� t�RSS)S?SOSUSiSwS�SGCC: (GNU) 10.2.1 20210130 (Red Hat 10.2.1-11),�$�,p�'Q,�?9�,�E�9z,6P0<�,�_�@H,[h@Cd,�k�CQl�] �$������XintW�x�W��c�����fvW�zbj}\T�gG�8+:	W;	W�	5j;=l�um@
);�nF��F��R;�R�	
W
g	^�}	�u�
3����,W�R�R�R�*W	R�LR�N3RR:vRPR��lWm���
���W�2
��W�')���'�U	P��R 'W�t}�Rb�,RC=��R�hx�	W��:'�fU�TT	RQ�hH'���R�&j�}�"R��b�2R��pid���l�&��U�TT	�TQ�l'�T	�PP�R &��(��GCexc�R��msg�
(��w ��!m�R��"�!��R
	�&	UvB&P�U��wT	`PQ�Ub&3T	PQ=R��wp&Tv#�9$-�NsR�%��Ls�GCexctR��msgu
(��w `�!mzR��"�!�zR
	&	Uv�%PU��wT	0PQ�U�%37T	PQ3R��w�%Tv%\R�$��\4�IC�]
(��wexcdR�� �!mhR��"0!�hR"�%	Uv%�%m(%P4U��wT	PR|G%3ZT	PR��wV%Tv8A�F �'Qo��9�����Xintc�x�cE
�N��o�������R�N��f�c�zbvG
h�}	T�
NI/�;�h�Y�	R
��	�
�
��
4�	�q�� �
��_
�
 �!L
"�#1$�%�&O'�(�)*,+�,}-�
.�/v0�1�2�3�405|6$7H89L:p;�<,<�=�	>?�@�A�BC3D�E�F�G�H]I�J�K�L/Mk
N�O�PQR#S�T
UFVMW�X�Y7Z�[�\�]�^�_�`�ab�cdCeq	f�g�h%i�
j^k�lm�n�o�p>q	rs�t�uv�wx3yyz�{$|k}~	~K����E������
�����q�<�U	��
�6
�&�����~��	�`����=�K�{�q�����
���:�]
� �����f�����8��
��p�x���	�	�3�$���%
����
������L�'	�����P��4���O���k���^��
�f���U�����h�Y�������������
������0@�1G<3N\T�gG�	8�
+	:	c
	;	cm�5
j�
=
lS
u
m�)��
n�������
�c �
�,2cF��^
�RXcq� �	�
?~����
���& �A1���9 6
�
7�
�8�
]9c
;�9=��(
+U
�
,�
�
-Z
^
.S
�
/
� �U�
0
<�
�
=	c
�
>�Th
J	
�
K`
�
L�(
j
M�0
�

NS8
�
O	@
�
P	H
�
QFP
1
R�X

Sq`l������?N�	�6����Q�	���
8

.8WgfN$�	z�XL)�	�N?
y�
Ik�9

H��G�s

w�?

��
x
K
��
9
K
�
��
�
�
J
�
�
�
|�
�
�
9�
�
�
��)
��?

#�-
z�
h	���
�
.����?
�-G��PG99�'
��������
���\���'���
s
�
&�
1�
<�
G)R�]�h�s�~�����@�9
N+1
d
�3�
- �@g��^`�Q ]@���o�

Oq9
qr9
�sGirqt@dmau@
�v@��
���
��
9�MH�K
b�K
��K

�K
�K
!�\��c�
�c��1
w��
��
���(~u
��	�

&�	�
&��,�
�-�
8
�
!	�
�"N
�$�

X
%�
 
5	.u(
�8	�0�EG{ -
�	"�x;�gA�
g��w=	��/w9;�9G�9���
r�	9�	Gk�#3
F
G
�
�
�c
�G
�@

�@
��gj�P9 �g@	�r y�	 r!�^c��	�"�lc���#"@�c�!	�	�	c$E
�!�Bc!"9c4��"&��K�%L
^�!I4�tS!k(���#"�ac�c!j )c�c9#!�!rc�ccc!t
bc��	�c!*c��#!�
Kc:���!a"�Po!�
Lcp��o!/����c�"�����#!Nc���!d���!�����&�$2
�!#cvc"Uko.c'���7�w(mod��\X(v����)7p�*U	 r*T3)37P�*U|*T	lQ*Q9)O7P�*U|*T	vQ*Q4)h7P*U|*T	�Q*Q0)�7PE*U|*T	�Q*Q2)�7Po*U|*T	�Q*Q1)�7P�*U|*T	�Q*Q8)�7P�*U|*T	�Q*Q7)�7P�*U|*T	�Q*Q6)8P*U|*T	�Q*Q5),8PA*U|*T	�Q*Q3)H8Pk*U|*T	�Q*Q:)d8P�*U|*T	�Q*Q<)�8P�*U|*T	R*Q=)�8P�*U|*T	R*Q>)�8P*U|*T	R*Q?)�8P=*U|*T	-R*Q;)�8:U*U	�+�8*U|*T	?R,��`(��-}�$���-b�4��� �����.��	cRF(ret�	c��/ifr�M�@0��){(�E*U�T*T	�R*Q��)�(�f*U2*T2*Q0)�(8�*U�@*Q?)�(��*Uv*T
�)�(��*Uv)�(t�*U	R1)�1)�+5)t*U	R,�	��/���*-}��-b�/�WM �����.��	c��(ret�	c��/ifr�M��.�����.��\G?0�u2`&.�v���)�0K*U|+3K*U|3�*T0�K54�*		4�*W	S	5�6�*�	�	7�*��6�*


+�5K*Uv8�*�0�6�*/
-
9�*�06�*T
R
+�0K*Uv)`04*U	RQ+w0*U|*Tv:�*�0�0P�:4�*y
w
4�*�
�
6�*�
�
8�*�0�;�*8�*�0!6�*209�*�0!6�*WU+�0K*U})�04*U	�P+�0*U|*T}:�*11P�?4�*|z4�*��6�*��8�*%1�;�*8�*+1%6�*539�*+1%6�*ZX+A1K*U})14$*U	�P+#1*U|*T}:�*P1P1P�D4�*}4�*��6�*��8�*u1�;�*8�*{1%
6�*8
6
9�*{1%6�*]
[
+�1K*U})\14)*U	�S+s1*U|*T}3�*�5ES4�*�
�
4�*�
�
56�*�
�
7�*@�6�*;9+�1K*U}8�*�56�*`^9�*�56�*��+�5K*U})�547*U	HQ+�5*U|*T}:�*22PX4�*��4�*��6�*�8�*52�;�*8�*?2!6�*ca9�*?2!6�*��+U2K*U})24=*U	�P+32*U|*T}:�*`2`2P	] 4�*��4�*��6�*�8�*�2�;�*8�*�2!# 6�*fd9�*�2!6�*��+�2K*U})l24B *U	�P+�2*U|*T}:�*�2�2J3b!4�*��4�*��6�*	8�*�2� ;�*8�*�2(!6�*ig9�*�26�*��+�2K*U})�24G!*U	0Q+�2*U|*T}:�*33P9g"4�*��4�*��6�*8�*53�!;�*8�*?3!-"6�*lj9�*?3!6�*��+U3K*U})34L"*U	6Q+33*U|*T}:�*`3`3P?l#4�*��4�*��6�*8�*�3�";�*8�*�3!2#6�*om9�*�3!6�*��+�3K*U})l34Q#*U	@Q+�3*U|*T}:�*�3�3Pq$4�*��4�*��6�*
8�*�3�#;�*8�*�3!7$6�*rp9�*�3!6�*��+�3K*U})�34V$*U	Q+�3*U|*T}:�*44Pv%4�*��4�*��6�*
8�*%4�$;�*8�*/4!<%6�*us9�*/4!6�*��+E4K*U})44[%*U	�R+#4*U|*T}:�*P4P4P{&4�*��4�*��6�*8�*u4�%;�*8�*4!A&6�*xv9�*4!6�*��+�4K*U})\44`&*U	Q+s4*U|*T}:�*�4�4P!�'4�*��4�*��6�*8�*�4�&;�*8�*�4!F'6�*{y9�*�4!6�*��+�4K*U})�44e'*U	Q+�4*U|*T}:�*�4�4P'�(4�*��4�*��6�*8�*5�';�*8�*5!K(6�*~|9�*5!6�*��+55K*U})�44j(*U	 Q+5*U|*T}:�*@5@5P-�)4�*��4�*��6�*!8�*e5�(;�*8�*o5!P)6�*�9�*o5!6�*��+�5K*U})L54o)*U	)Q+c5*U|*T})/^�)*U0)1/��)*Uv*T	�R*Q��)a/��)*U2*T2*Q0)�/8**U��*Q?)�/�+**Uv*T
�)�/�C**Uv)�1�b**U	�P)�1��**U	�P+�1�*Uv<I	�c+=���=��0�>		��?�*>��	�@>���@>���,����'��~,-}����-b�-� �����.��	cbX(ret�	c��/ifr�M�@0��)�'��+*U�T*T	�R*Q��)�'��+*U2*T2*Q0)(8
,*U�@*Q?)(�,,*Uv*T
!�)&(�D,*Uv)8(tc,*U	�T1E(�1T(�,�*��*���1-}*�
-b*/�NJ B+��(ifa+��.�,	c��.�.�jd.9/���.�	0���.1���.r2�
 �.�3�D!*!A�v/,2P�-.�l	�Y"Q"5P.�l	��"�"+�,K*U��2�3..�m	�%#!#5�.�m	�_#[#+�,K*U|2��..�n	��#�#5�.�n	��#�#+�,K*Us2��..�o	�
$	$5�.�o	�G$C$+-K*U2/.�p	��$}$5.�p	��$�$+"-K*U}2@R/.�y��$�$+�-K*U��2p�/.mz�J%D%5�.�z��%�%1�-K2��/.m{��%�%5.�{� &&+�-K*U|2@50.m|�\&V&5p.�|��&�&+�-K*Us2��0.m}��&�&5�.�}�2'.'+.K*U2�0.m~�l'h'50.�~��'�'+�-K*U})�*^�0*U0)�*�1*U��)M+t81*U	�P*Q~*R|*Xs*Y)l+Z1*U��*T��)�+�1r1*T~)�+�1�1*T~),�1�1*T~1B,�)=-��1*Uv)T-�1�1*T~1v.�B��x2C��(�
C��2cDbuf�
x2Derr�	cE,�	cDn�-Dlen�-E���Dptr��@>k�2��2F9�Bg
���2C}�$�Cb�4�Dpid��E��	cE 	�	cB���3C}�$�Cb�4�Dpid��E��	cGo��6��3H}�(��'�'Hb�8�((16|4I.6t*U	FQJ���6i�R4Kpid�od(V(H�&�))1�6�L�6��3*U�T)�6R44*UsL�6�/4*U�TI�6�*T	ZQ*Q�TM�]c|4Npid]�Dret^	cO�Ho6
��4I
6*UNP�2@)a��54�2�)�)4�20***Q�2�hQ�2�l6�2�*|*R�2�)�)�a54�2�*�*4�2�*�*;�2;�2;�21�)�)_)��5*U�T*T	�S*Q�h*R�l+x)�*U0S�1�)�c6T�1UT
2TQ2��w;"2;.2;:2;D2;P2;\2U�15* �V
2V�15 Q2��w;"2;.2;:2QD2^;P2;\29h25*;i2P�2�.v�p74�2
++4�2M+C+Q3�\63�+�+R�2�.�.�74�2,,4�2;,9,;3;31�.�1�.�)�.�=7*Uv*T	�T*Q�\)�.~T7*U0+�.t*U	�TPR406W�84c4b,^,6o4�,�,RR4i6i6]�74c4�,�,;o41x6�)K6�8*T01]6�Wi_%�r� 9�5�����XintW���c�x�fl�}\T�gG�8�	+:	W	;	W�
�5j	=l�	um)�n�$<0P
0
0�\bWq
0�}�W�
0
j^���W�
0
q
j�?���
j�69 6.	�
7�	�8�	]9W	;�9=��(+|	�,$	�-�	^.�	�/
0 0|�0:<�	�=	W	�>jThJ>	�K�	�L�(	jM�0	�
N�8	�O>@	�PDH	�Q�P	1RPX	
S�`.�g	^�.g-
�W	 t6�	�s�	k0�
0
0�	j0�
0
0�
	0�
0
0�	0�
0
0j	0
0
0w
	05
0
0�0P
0
0M0k
0
0�0�
0
0�0�
0
0	lW�
LW�
0
�
c/�0�
�
W�	B09��modJ0-�,#9�DU	�sT3?9�pU|T	�RQ
bT9��U|T	�RQ0l9��U|T	�RQ1�9��U|T	�RQ��9��
��� �9z���9�����Xintc�3�o"�o��o����0�d��Y	��c	��	�	��	�	a�	�	��	� 
�	�(
�	�0
�	�8
5	�@
~	�H
;	�P
�	�X
�	�`
,�h
�
cp
Act
P
vx
�G�
oU�
���
���
T&��
�/	��
0	��
1	��
�2	��
3
-�
[5c�
�7�������	���	X��	�ca���
9Y��
9���f�����}\T�gG�8;	+:	c	;	c;5jn	=l�	ums)n�nFy\	R�_ts�
���
����m(5	�7�	�8�	9�	5:�	�;	c 	*<	c$�
cgj�jWcJ��G``��Ly�"
9c���k(����4����
�����
�����"dc���))��B�J��t�)*cr��I
4���w��9z��
}"�P-L-b2��-�-~��-�-^`L.@.Q���I��.�.N��/t/9�l0Z0��11+1�Cp�K)�1z1 �9J!:/�"T	.Q#:"Uv��<	��1�1$��<	�22#�:f"UvX�=	�,2(2$�=	�f2b2#�:f"U���>	��2�2$��>	��2�2#�:f"Us@mF�33$��F��3�3!�;f�"Uv#%<f"Uv�TmG��3�3$P�G�X4V4#�;f"U��mH��4{4%~;�H��4�4#�;f"Us&5;��I��4�4#D;f"U|!�9r	"U0!�9V-	"Uv"T	�R"Q��!8:JE	"U} L:� a:�!�:��	"U~"Tv"Q!�:y�	"U|"Ts!�:J�	"U}!;4�	"U} 5;�!h;4�	"U}!�;4
"U}!�;�@
"T	�S"Q	�S"R#!�;�
i
"U	�S"T1"QB!
<�
�
"U:#<4"U}'wm'H>p�
�� 0<���9�����Xintc�3�o"�ox�c��o����0�d��e	��c	��	�	��	�	a�	�	��	� 
�	�(
�	�0
�	�8
5	�@
~	�H
;	�P
�	�X
�	�`
,�h
�
cp
Act
P
vx
�G�
oU�
���
���
T&��
�/	��
0	��
1	��
�2	��
3
-�
[5c�
�7�������	���	X��	�cm���
9e��
9���f����c�zb�}\T�gG�8Y	+:	c	;	c1Y5	j�	=	l	u	m�)��	nd��	���
���
��( 
��;
�����w9�~		���$
9 ��$�
cgj�Nct8 
uc��-�0Y
V�)o���.������
�����	c�L	%�"9c@��a�Vo �cq-�I4���
R����2
�I�
��$7
yc��-�

��"dc�����!-8o9ok(�P��
��f�*c������`?���}�(�55b�8�Y5S5��$��~ len�-�5�5!pid���~ i��5�5"��9636�����~#@	7"h���6�6"��o�6�6#�	�"m�	�P7J7$�	"��	��7�7%N@&Uv#�	�"��	-�7�7'�?�(,@��&Ts(<@�&Uv'U@�%�@�&T	T(�?fk&U�T&T	T&Q��~&R��~'�?�'�?�(�@��&T	PT(�@c�&T�&Q}'�@PMR�0=&�w}R(�8
8bR8�P8J8 cpuS	c�8�8"�Sc(99"" Sc�9�9"�Sc�:�:!pidT��"�U-�:�:"�V�<;.;"�W��;�;)��#	�	"�x
-�<�<#�
"�z�x=n=#@+
"����=�=%>&U~#p]
"��
�>>%�>&U~(�>@{
&Us $ &%�>%&U}&T~#��
"m��L>F>$�"����>�>%Q?&U}(`=f&U�T&T	�T&Q��(='&UA(�=�E&T|&Qv(�=�]&Uv'�=�(�=��&T	 T(>��&Uv'6>�(G>q�&U0(^>V�&U|&Tv(�>��&Uv'�>P(?�:&T	�S&Q	�S&Rb(7?]c&U	�S&T1&QB%D?h&U:�<��<y��
}<"��>�>b<2�?
?!pid=�d"�>	cc?_? >c�hm>c�l" 	?	c�?�?*��<�Fp
+&�?�?+%@#@+K@I@%�<!&U�&T1(�<f�
&U�T&T	�S&Q�d&R�h&X�l''=P�*�0<t��}*"�s@o@b*2��@�@!pid+�l"�,	cA�@" ,cjAbA"m,c�A�A,3T<T</�+P/B-B+DUBSB%g<!&U�&T1(J<f�&U�T&T	�T&Q�l(�<9�&U	�S'�<P-�c3.qc/whoc.�$c-�c].qc/whoc0wm0H>��
��  �@H�!��9��@���Xinth�x�h��t����9��9��f��zb{=�h}�9\T�gG	�8I
+:	h
;	h!I	5j|
=l
um�)|�nT���9�	g 	2�
?!	4a
!	5a
	I�
s 	K
�
7!	L
�� �	:�
� 	<
a
!	=	�
_ 	>�
� 	?�(
� 	@�,
� 	A�L&!	B�L!	H�P� 	M�T� 	S�\V 	T�l��9go�� 
D
.!
>��L��"
9h���k(���4�+�� 
A
I
4�Ij��@H��}�}ByBb(��B�But��B�B��xClC9�D�CJ �E�D� ��E�E� �YFCF�?aA�!$
-z $
-EG?GaA� $
��G�G� $
hKHCHsA�!$*-z $*-�H�H � $*�� $*h!0
�7	�
I	I"0
�7	�GICI#�B�$U!`
j�8	��I}I"`
�8	��I�I#xB�$U~!�
��9	��I�I"�
�9	�/J+J#XB�$Uv!�	�:	�iJeJ"
�:	��J�J% B�$U|#kB�$U|!�
~m@��J�J"�
�@�.K(K%C�h$U#6C�$U!�mA�}KwK"@�A��K�K#C�$U~!pmB�LL"��B�L{L#&C�$Uv!�bmC��L�L" �C�	MM#�B�$U|!P��D�GM?M&�B�%A3�$U0&A+&%A�%@A�$Us,%UA�$Us%�A
$U	xT%�A�>$U	�T$T$Q~$Rv%�A�\$U}$T|&%B�&=B�%�B�$Us�&�B�&�B�GB���! @Cd<'�����XintW���c�v}�fj�}\T�gG�8�+:	W;	W�	�5j=l�um
)�n�"��.�	;S!Bx!c�!-�p	3q!	XY!		3h!	
d �!	d(�!	d0�!	
d8�!	d@�!	dHM!	@Ppad	@R�!	dX_!	d`�!	Lh_f	Cl
dC-
vS-g
^�k
(.w���.�.�W��p�.@Cd�} .�M�Mb0.�M�M#p��LC�U���C`7U	�T�Cw�5�(# �CQ�(��9�����Xintc���o�����fxc�}<3N\T�gG�8
	+:	c	;	c�

5j=	=l�	umB)=�nH�N�
y
�

I

k
�
9

H��	G
��	w
��	�
�
����9
�
����
J

|%%
/9::
D�OO
Y�dd
n'
yy
����
��
��
�\��
�'��
�
��
��
�


*
?
#T
.i
9~
D�
O�
Z�
e�
p�
{�"@�	GS!N'	�	q#(N	�!)N	|#*G�#+�-	%	q#.N	�!/N	|#0G	�#1Nd#2�4	U	�"5G	h"6G;"719	�	�":N	�";N	�#<N	�#=N	�#>Nlmi?Gdce@GI#AaC	�	S"DNo"E�G		S"HN	!#I�)9�"J�L	Y	#MN	�"NN�"O5�
��	O
�9	q
�9	�
�Girq
�@dma
�@	�
�@
�@#
�R#
�$fr
�*�#
�0�#
�6�#
�<te1
�BUY��)�%X#
�}	x
�N	�

�N	_"
��
���
�
�9H
��b
���
��

��
��!
�	\�
�c�

�c�
�ew
�
�
�
v$"
�H�(
�a	�
�}	&
���",aLcmdb�	�#c�	�"d�	"e�	�#f�	�g�	�"h�	�#i�	�"j�	�!k�	�"l�	2"m�	J"n�	�"o�	#p�	�"q� 	|"rQ$a�a9gj��ac�ck(T���2
�j)c�c9�#>v�vc-dT���rc	ccc*c6	T��:T�CQ��}:&T!NNb:6T`NZN�;���~ �<	c�N�N!ret=	cTOPO �#>	c�O�O �#?��O�O "@	c�O�O"ifrA9��X"Ba�� �CTP
P#�t�D$�iD�W_
%�uPsP&�C	�
'U�T'T	�R'Q��~&�C��
'U2'T2'Q|&D��
'U��'Q?&UD��
'Uv'T
F�&�D�
'U	�T&�Dn"'Uv(�D�&�D�N'U	�T&�D�m'U	�T)�Dn'Uv*"#��+�##4�L,i_%$>$>&I:;9II:;9
:;9I8	7I
<4:;9I?<4:;9I?<
4:;9I4G:;9.?:;9'I<I.?:;9'<.?:;9'I<.?:;9'<.?:;9'I<.?:;9'I@�B��1���B.?:;9'I@�B:;9I�B4:;9I4:;9I�B��1��14:;9I U!4:;9I�B"U#I$!I/%.?:;9'I@�B%:;9I$>$>&II7I	:;9I
>I:;9(:;9

:;9I8<'II'4:;9I?<4:;9I?<'I>I:;9>I:;9((I!I/
:;9I8
:;9I8:;9
:;9I4:;9I 4:;9I!.?:;9'I<".?:;9'I<#$.?:;9'<%.?:;9'<&.?:;9'I<'.?:;9'I@�B(4:;9I�B)��1*���B+��1,.:;9'I@�B-:;9I�B.4:;9I�B/4:;9I0
:;91��12U31R�BUXYW41�B5U641�B71U8191:1R�BXYW;41<.:;9'I =:;9I>4:;9I?@A
:;9B.:;9'I C:;9ID4:;9IE4:;9IF!I/G.:;9'I@�BH:;9I�BI���B1J.?:;9'@�BK:;9I�BL���B1M.?:;9'I N:;9IO.?:;9'I@�BP.1@�BQ41R1R�BXYWS.1@�BT1U1R�BUXYWV1W.?<n:;%$>$>&I:;9II:;9	
:;9I8
7I<'I
I:;9I''I4:;9II!I/4:;9I.?:;9'I<.?:;9'I<.?:;9'I@�B4:;9I�B��1���B��1%:;9I$>$>&II:;9	
:;9I8

:;9I8:;9I
!I/7I4:;9I?<<<4:;9I.?:;9'I<I.?:;9'<.?:;9'I<.?:;9'<.?:;9'I<.?:;9'I@�B:;9I�B4:;9I�B4:;9I
:;9U ��1!��1"���B#��1$U%&'.?<n:;%:;9I$>$>&II:;9	
:;9I8

:;9I8:;9I
!I/7I4:;9I?<<4:;9I?<:;94:;9I>I:;9(.?:;9'I<I.?:;9'I<.?:;9'I<.?:;9'<.?:;9'<.?:;9'I@�B:;9I�B4:;9I 4:;9I�B!4:;9I"4:;9I�B#U$U%��1&���B'��1(��1)
:;9*1R�BUXYW+1�B,1R�BXYW-.:;9'I .:;9I/:;9I0.?<n:;%:;9I$>&I$>I!I/I	:;9

:;9I87I<
:;9:;9
:;9I84:;9I.?:;9'<.?:;9'I<.?:;9'<I.?:;9'I<.?:;9'I<.?:;9'I@�B:;9I�B4:;9I�B4:;9I�B
:;94:;9I !U"U#��1$���B%��1&��1%$>$>&I:;9II:;9
:;9I8	7I
<4:;9I?<
:;9I8
I!I/!I74:;9I.?:;9'I<I.?:;9'I@�B:;9I�B4:;9I��1���B��1%:;9I$>$>&II:;9	
:;9I8
7I<>I:;9
(((I!I/:;9
:;9I8:;9
:;9I
:;9I4:;9I.?:;9'I<I.?:;9'I<.?:;9'I<.?:;9'I@�B:;9I�B4:;9I 4:;9I�B!4:;9I�B"4:;9I#
:;9$1R�BUXYW%1�B&��1'���B(��1)��1*.:;9'I +:;9I,.?<n:;k.�
psutil/usr/include/bits/usr/include/usr/include/sys/opt/python/cp36-cp36m/include/python3.6m_psutil_common.ctypes.hstdio.htypes.hpyport.htime.hobject.hpyerrors.h_psutil_common.hstdlib.hmodsupport.habstract.hstring.herrno.h=	�$�=w:	�X:X<���I=�Xt����$n$\$����;=�Xt����#�#\#����;=�Xt����8�8yJ
Ct�L��z�4	�Y2�2J
@t�L	�
M
t��t�.��	K�	Y
���
psutil/opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/usr/include/bits/usr/include/usr/include/sys/opt/python/cp36-cp36m/include/python3.6m/usr/include/netinet/usr/include/net/usr/include/asm-generic/usr/include/linux_psutil_posix.cstddef.htypes.hstdio.htypes.hunistd.hstdint.hpyport.htime.hobject.hboolobject.hmethodobject.hmoduleobject.hpyerrors.hresource.hresource.hsocket_type.hsockaddr.hsocket.hin.hconfname.hif.hifaddrs.hint-ll64.h	types.h
if_packet.h
_psutil_common.hnetdb.hlistobject.hunicodeobject.hmodsupport.hioctl.hsocket.hlongobject.hsignal.herrno.h<built-in>3	�'�3v
J3v�
J�=-[/_fsY�	t.Z]'	Y�X:�t:zP:z�P�=-[/[fsY�	u.[Y	uR�	Y�X	ztPX:�|f:
JAt=-^�ZtKp�	t�:

�:s
.:s.
.:s<

�	Y�8	@	�	-��
^�
^X	#
	Y
Y��	�L
IY	\K�	Y
`�	lt�95`525*�XY�	��XJ
��	;	B	
' 	�
	������%L)J	�J	Z	K�q	?	0	��	=	�<
�
�tK	<�<XX	KXK�X�X�X�X�X��	l����������%L)J7�X�
]�
�tK		<.tXJt�����������z���	A�o;;-	D�v	
<:�~t:�Z�h>�-^�.L�fp.	�f5��5
35)�Y�t��J�=�~�/�fsY��u	X���������������	��~��	�	/Ij��t��	�~@��	�	/Ij���	7���	�	/I2���	=���	�	/I2���	� �����	/	;Z�	�����	�	/Ij���	����	�	/Ij���	��~��	�	/Ij��X��	B��~��	�	/Ij���	��~��	�	/Ij���	����	�	/Ij���	����	�	/Ij���	����	�	/Ij���	����	�	/Ij���	����	�	/Ij���	����	�	/Ij���	��~��	�	/Ij��X	���}�>��>KXuI��L.j�lJt=W	[
X.\dX
&�h�./�/	�X	g	EA 	
�
t	K	G? 	�=	I/ 	X	7�0	�	X�	<�	�	<�	<�	<�	<�	<�	<�	<�	<�	<�	<N	<N	<N	<J	
�	Y����<�J�
psutil/usr/include/bits/usr/include/opt/python/cp36-cp36m/include/python3.6m/usr/include/syspsutil/arch/linux_psutil_linux.ctypes.hstdio.hpyport.htime.hobject.hmethodobject.hmoduleobject.h_psutil_common.hmem.hnet.husers.hdisk.hproc.hmodsupport.h	9�0	_ 	XK	K	<K	<LZX	nf<��
psutil/arch/linux/opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/usr/include/bits/usr/include/opt/python/cp36-cp36m/include/python3.6m/usr/include/syspsutil/arch/linux/../..disk.cstddef.htypes.hstdio.hlibio.hpyport.htime.hobject.hpystate.hpyerrors.hmntent.h_psutil_common.hlistobject.hmodsupport.hunicodeobject.hceval.h<built-in>8	�98y58y.�Z�
t��Y�lt=Y;=X	.j�	��	=	��	=	�.X	A	�
�	�tf�J�f�fj���	b�=t�=yJ	��fXf��	�t��	\$�!.	�X����
psutil/arch/linux/opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/usr/include/bits/usr/include/usr/include/sys/opt/python/cp36-cp36m/include/python3.6mpsutil/arch/linux/../..proc.cstddef.htypes.hstdio.hlibio.htypes.hpyport.htime.hobject.hpyerrors.hsched.h_psutil_common.hsched.hlongobject.habstract.hlistobject.herrno.hunistd.hmodsupport.h<built-in>8	0<)8J?t�=-Zg.Z
-0�
e>Y	�t�88
JAtX>,\U)WJ)�<WJX+ZtK	�t>
>y
_>y�
{�	�o<	uWu;	K	�
�.	��
X	�
g�
:�#q ��	��
Sft��t�[
hJX='.�1'<	
<X"!X
@
��/;
j�.r1'<J.	��
[ 
f�^$�, >m>y
_>y�
���
�����ff	.�zfX	]�JpJ	Y	�	\	�T	@.�.
XXb+.	X<"(	Nu�.�	���
psutil/arch/linux/opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/usr/include/bits/usr/include/usr/include/sys/opt/python/cp36-cp36m/include/python3.6mpsutil/arch/linux/../..users.cstddef.htypes.hstdio.htypes.hpyport.htime.hobject.hutmp.hlistobject.hmodsupport.hunicodeobject.hutmp.h_psutil_common.h.	�@.0.,�X=�Y	 �`�	�	h3JX	=	�.JX	=	�
ff*fJ<'.
�	�	��	�	�
�	��ft��`�"�Y
	r.�����
p�	=u�#<ffXtXttY<d>:XYU���f��f��o�
psutil/arch/linux/usr/include/bits/usr/include/opt/python/cp36-cp36m/include/python3.6m/usr/include/sys/usr/include/asm-generic/usr/include/linuxpsutil/arch/linux/../..mem.ctypes.hstdio.hpyport.htime.hobject.hint-ll64.hposix_types.hsysinfo.hmodsupport.hpyerrors.hsysinfo.h_psutil_common.h6	@C
6	M�M
2	t��i?�
psutil/arch/linux/opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/usr/include/bits/usr/include/opt/python/cp36-cp36m/include/python3.6m/usr/include/sys/usr/include/asm-generic/usr/include/linux/hdlc/usr/include/linuxpsutil/arch/linux/../..net.cstddef.htypes.hstdio.hstdint.hpyport.htime.hobject.hsockaddr.hsocket.hint-ll64.hioctl.hif.h	ethtool.h	unistd.hmodsupport.hioctl.hstring.h_psutil_common.h
socket.hsocket_type.herrno.h<built-in><	�C9<uX<u� <t<J��/�iMzt<�YzX[�[Z	YS)XX-	[L	1O���=Zu	h�X.�-	e�.�RJ.< 
q��	/	;Y�tz_dsttimesyscallpsutil_setup/project_typeobject_objectob_refcntdoubleNoSuchProcessPy_ssize_tfloat_py_xdecref_tmplong long unsigned inttimezoneunsigned charpsutil_set_debugstrerror_py_decref_tmpPSUTIL_DEBUGshort unsigned intPyObject_IsTruepsutil_PyErr_SetFromOSErrnoWithSyscalltz_minuteswestPyObject_CallFunctionAccessDeniedpsutil/_psutil_common.cob_typeselfPyErr_SetObject__errno_locationPyObject__ssize_tfullmsglong long intGNU C17 10.2.1 20210130 (Red Hat 10.2.1-11) -mtune=generic -march=x86-64 -g -O3 -fwrapv -fPICPyArg_ParseTuplePyExc_ValueError_Py_Deallocshort intargsPSUTIL_CONN_NONE__pid_t_Py_NoneStructpsutil_check_pid_rangePyErr_SetStringsprintfgetenvPyExc_OSError_SC_EQUIV_CLASS_MAXifr_ifrnm_copyifa_addr_SC_THREAD_PRIO_PROTECTsin6_flowinfoifr_ifruPyModule_Create2getnameinfo__priority_which_t_SC_VERSIONPy_BuildValue_SC_NL_NMAXm_methods_longobjectslot_SC_SYNCHRONIZED_IO_SC_THREAD_PRIORITY_SCHEDULING_SC_NPROCESSORS_ONLNsa_datam_base_SC_TIMEOUTSsockaddr_SC_BASE_SC_PII_OSI_COTSsockaddr_un_SC_THREAD_SAFE_FUNCTIONS_SC_ATEXIT_MAXsockaddr_ns_SC_STREAM_MAXifru_slavepsutil_posix_getpriority_SC_PRIORITIZED_IOpy_ptp_SC_V6_ILP32_OFF32_SC_THREAD_SPORADIC_SERVERsll_pkttype__socket_typeifu_dstaddr_SC_SHRT_MIN_SC_USHRT_MAX_SC_NL_TEXTMAX_SC_STREAMS__rlimit_resourceIFF_PORTSEL_SC_THREAD_DESTRUCTOR_ITERATIONS_SC_PIPE_SC_BC_DIM_MAXpsutil_getpagesize_SC_MAPPED_FILES_SC_2_C_BIND_SC_MQ_OPEN_MAX_SC_XOPEN_SHM_SC_INT_MAX_SC_2_FORT_DEVkill_SC_XOPEN_XPG2_SC_NZERO_SC_XOPEN_XPG4IFF_NOTRAILERS_SC_CPUTIME_SC_PII_INTERNET_SC_V7_LP64_OFF64_SC_LEVEL3_CACHE_SIZE_SC_MB_LEN_MAXifreq__u6_addr16__caddr_t_py_tmpsockaddr_ipxsll_hatypegetifaddrsifru_broadaddr_SC_REALTIME_SIGNALS__RLIMIT_LOCKSm_freemoduledefpy_netmask_SC_DEVICE_SPECIFIC_Ruint32_tin_addr_tmem_start_SC_SAVED_IDS__RLIM_NLIMITS_SC_2_C_DEV_SC_XBS5_LPBIG_OFFBIGob_base_SC_2_C_VERSION_SC_SEM_VALUE_MAX_SC_SCHAR_MAXifru_map_SC_SSIZE_MAX_SC_2_UPEPyExc_RuntimeErrorfreeifaddrsRLIMIT_COREifa_name_SC_INT_MIN_SC_BC_BASE_MAX__u6_addr8_SC_POLLsyscon
fml_flags_SC_SYSTEM_DATABASE_Rsockaddr_dlsll_halen__RLIMIT_NPROC_SC_2_LOCALEDEFIFF_MULTICASTsin_family_SC_XOPEN_XPG3_SC_T_IOV_MAX_SC_LEVEL1_ICACHE_ASSOCin_port_tpy_str_SC_SYMLOOP_MAXretval_SC_TRACE_LOGifa_ifu__u6_addr32append_flag_SC_THREAD_CPUTIMEsin_zero_SC_CHAR_MAX_SC_XBS5_ILP32_OFFBIG_SC_PII_INTERNET_DGRAMs_addrpy_address_SC_2_PBS_TRACK__u16psutil_net_if_flags_SC_FILE_ATTRIBUTES_SC_ASYNCHRONOUS_IO_SC_FSYNC__RLIMIT_NICEsockaddr_inarp_SC_DEVICE_SPECIFICsockaddr_ax25ifa_netmaskIFF_UP_SC_THREAD_ATTR_STACKSIZE_SC_REGEX_VERSION_SC_MEMLOCK_SC_DELAYTIMER_MAX_SC_LONG_BIT_SC_SEM_NSEMS_MAXsockaddr_isoPyModule_AddObjectPyModule_AddIntConstant_SC_XOPEN_STREAMS_SC_LEVEL1_ICACHE_LINESIZE_SC_SIGNALSsll_family__RLIMIT_OFILEIFF_SLAVEioctl_SC_2_PBS_ACCOUNTING_SC_AIO_MAX_SC_LEVEL2_CACHE_LINESIZE_SC_XOPEN_VERSIONmod_methodssa_family_t_SC_CLK_TCK_SC_SHELLsll_protocol_SC_UIO_MAXIOV_SC_TZNAME_MAXifru_data_SC_SPORADIC_SERVER_SC_MEMLOCK_RANGE_SC_AVPHYS_PAGES__RLIMIT_NLIMITS_SC_V7_ILP32_OFFBIG_SC_PII_XTIclosesll_ifindexflag_namesock_SC_V7_LPBIG_OFFBIG_SC_C_LANG_SUPPORT_RSOCK_DCCP_SC_LEVEL3_CACHE_ASSOCuint8_t_SC_FILE_SYSTEMifru_netmask_SC_PAGESIZE_SC_V6_ILP32_OFFBIGSOCK_PACKET__id_t_SC_SIGQUEUE_MAX_SC_SPAWNpsutil_posix_setpriority_SC_DEVICE_IO_SC_V6_LPBIG_OFFBIG_SC_2_VERSIONifru_mtu_SC_LEVEL4_CACHE_SIZEml_nameSOCK_DGRAMm_sizeIFF_LOOPBACK_SC_USER_GROUPS_RPyModuleDef_Slotvisitprocsin_port_SC_LINE_MAXsockaddr_eonpsutil/_psutil_posix.cIFF_AUTOMEDIAlladdrpy_broadcast_SC_LEVEL1_DCACHE_ASSOC_SC_HOST_NAME_MAX_SC_C_LANG_SUPPORT__RLIMIT_RSS_SC_THREAD_STACK_MINIFF_RUNNING_SC_SEMAPHORES_SC_UINT_MAX_SC_CHILD_MAX__RLIMIT_MSGQUEUEPyUnicode_FromString_SC_NGROUPS_MAX_SC_SINGLE_PROCESSIFF_ALLMULTI__be16__in6_usin_addr_SC_TTY_NAME_MAX_SC_PII_INTERNET_STREAMRLIMIT_AS_SC_MEMORY_PROTECTIONPyModuleDef_Basem_name_SC_XOPEN_CRYPTPyCFunctionpsutil_net_if_is_runningifru_newname_Py_FalseStructRLIMIT_FSIZE_SC_CHAR_BIT_SC_LEVEL1_DCACHE_SIZEIFF_NOARP_SC_CLOCK_SELECTIONm_initifrn_namesin6_addrpsutil_net_if_mtu_SC_TIMERSfr
eefunc_SC_BC_SCALE_MAX_SC_ULONG_MAXuint16_tsin6_port_SC_MQ_PRIO_MAX_SC_TRACE_SC_SPIN_LOCKSifru_flagsaddrlen_SC_LEVEL1_DCACHE_LINESIZE_SC_BC_STRING_MAXPyLong_FromLongmem_endSOCK_STREAMRLIMIT_DATA_SC_V7_ILP32_OFF32_SC_TRACE_SYS_MAX_SC_FD_MGMT_SC_REGEXPsin6_familyIFF_PROMISCRLIMIT_CPUPyInit__psutil_posix_SC_LEVEL1_ICACHE_SIZE_SC_ADVISORY_INFO__RLIMIT_RTPRIO_SC_SHRT_MAX_SC_XBS5_LP64_OFF64__builtin_strncpy_SC_READER_WRITER_LOCKS_SC_SYSTEM_DATABASE_SC_XOPEN_REALTIME_THREADS_SC_THREAD_ROBUST_PRIO_PROTECT_SC_2_CHAR_TERM_SC_PASS_MAX_SC_FIFOm_slots_SC_ARG_MAXifru_hwaddrml_doc_SC_2_PBS_CHECKPOINTPyMethodDef_SC_XOPEN_REALTIMEPRIO_USERifru_dstaddrpsutil_getpagesize_pywrapper_SC_2_FORT_RUNsocket_SC_TRACE_EVENT_FILTERpsutil_raise_for_pid_SC_RE_DUP_MAX_SC_SCHAR_MINsockaddr_in_SC_THREAD_ROBUST_PRIO_INHERIT_SC_THREADSPyList_Append_SC_PII__RLIMIT_SIGPENDING_SC_TRACE_INHERIT_SC_WORD_BIT_SC_XBS5_ILP32_OFF32_SC_PII_OSI_MRLIMIT_NOFILEm_traverse_SC_2_SW_DEV_SC_OPEN_MAX_SC_XOPEN_UNIXsockaddr_in6_SC_AIO_LISTIO_MAXPyErr_SetFromErrno_SC_PII_OSI_SC_UCHAR_MAX_SC_MONOTONIC_CLOCKm_clearpy_tupleifaddrSOCK_RAW_SC_PRIORITY_SCHEDULINGm_doc_SC_SELECT_SC_NETWORKING_SC_TIMER_MAX_SC_TRACE_EVENT_NAME_MAX_SC_V6_LP64_OFF64_SC_GETGR_R_SIZE_MAX_SC_LEVEL2_CACHE_SIZE_SC_LOGIN_NAME_MAX_SC_EXPR_NEST_MAXIFF_POINTOPOINT_SC_NPROCESSORS_CONF__RLIMIT_RTTIMEifru_addr__socklen_ttraverseprocSOCK_SEQPACKETPRIO_PROCESSsockaddr_atpsutil_pid_existsifa_flags_SC_SS_REPL_MAX_SC_RAW_SOCKETS_SC_BARRIERS_SC_CHAR_MIN_SC_LEVEL4_CACHE_LINESIZEpsutil_convert_ipaddr_SC_THREAD_PROCESS_SHARED_SC_NL_MSGMAX_Py_TrueStructsin6_scope_idm_index_SC_LEVEL4_CACHE_ASSOC_SC_CHARCLASS_NAME_MAX_SC_PII_OSI_CLTSpsutil_net_if_addrspy_retlist_SC_TYPED_MEMORY_OBJECTSml_meth__RLIMIT_MEMLOCKbase_addr_SC_2_PBS_SC_PHYS_PAGESIFF_MASTER_SC_PII_SOCKETIFF_DEBUG_SC_MULTI_PROCESS_SC_LEVEL2_CACHE_ASSOC__priority_whichsa_familyifmap_SC_THREAD_KEYS_MAXPyErr_FormatIFF_BROADCAST_SC_THREAD_THREADS_MAX_SC_GETPW_R_SIZE_MAXifa_dataSOCK_NONBLOCKPRIO_PGRP_SC_2_PBS_MESSAGE_SC_R
TSIG_MAXsockaddr_ll_SC_THREAD_ATTR_STACKADDRPyList_NewPyModuleDef_SC_FILE_LOCKING_SC_SHARED_MEMORY_OBJECTS_SC_TRACE_NAME_MAX_SC_COLL_WEIGHTS_MAX_SC_XOPEN_ENH_I18N_SC_XOPEN_LEGACY_SC_JOB_CONTROLIFF_DYNAMICifa_nextRLIMIT_STACK_SC_MESSAGE_PASSING_SC_NL_LANGMAX_SC_IOV_MAX_SC_USER_GROUPSSOCK_CLOEXEC_SC_IPV6_SC_LEVEL3_CACHE_LINESIZE_SC_2_PBS_LOCATEifru_ivalue_SC_NL_SETMAX_SC_NL_ARGMAXifu_broadaddrnic_name_SC_THREAD_PRIO_INHERITsll_addr_SC_TRACE_USER_EVENT_MAXinquirysockaddr_x25SOCK_RDM_SC_AIO_PRIO_DELTA_MAX_SC_XOPEN_XCU_VERSIONpsutil_proc_cpu_affinity_getpsutil_userspsutil_disk_partitionspsutil_proc_ioprio_setpsutil_net_if_duplex_speedpsutil_linux_sysinfopsutil_proc_ioprio_getpsutil_proc_cpu_affinity_setPyInit__psutil_linuxpsutil/_psutil_linux.c_IO_buf_end_flags2py_dev_old_offsetPyThreadStateendmntentPyEval_SaveThread_IO_save_end_IO_write_endmnt_dir_IO_write_ptr_IO_buf_base_markers_IO_read_endstderrsetmntent_lock_cur_columnmnt_fsnamePyEval_RestoreThread_posfprintfmnt_passnomnt_opts__builtin_fputcpy_mountp_sbufentry_IO_FILE__builtin_fwritefile__pad4_IO_marker_shortbuf_IO_write_base_unused2_IO_read_ptrPyUnicode_DecodeFSDefaultPyErr_SetFromErrnoWithFilename__pad1__pad2__pad3__pad5mnt_type__off64_t_chain__off_t_IO_backup_base_savemtab_path_mode_IO_read_base_vtable_offset_IO_save_base_filenopsutil/arch/linux/disk.c_IO_lock_tgetmntentmnt_freqPySequence_GetItemncpuscpucount_s__bitsPyExc_OverflowErrorPyErr_NoMemoryPySequence_SizePyLong_AsLongsched_getaffinity__sched_cpufreePyErr_OccurreditemiodataIOPRIO_WHO_PROCESSpy_listcpu_numsetsizepy_cpu_setseq_len__cpu_maskioprio__sched_cpuallocPySequence_Check__cpupsutil/arch/linux/proc.ccpu_set_tioclass__sched_cpucountPyExc_TypeErrorsched_setaffinitypy_username__unusedut_lineexit_statustv_sec__s2_lenpsutil/arch/linux/users.c__s1ut_idut_hostut_addr_v6ut_typesetutentut_userpy_hostname__resultut_tvendutentutmppy_ttye_exitut_sessionut_pid__s1_lenut_exitgetutenttv_usece_terminationprocs__u32loadsfreehightotalramuptime__kernel_long_tbufferramsha
redramtotalhighfreeswap__kernel_ulong_ttotalswapmem_unitfreerampsutil/arch/linux/mem.cclock_typemdio_supportpsutil_ethtool_cmd_speedifru_settingsmaxrxpktraw_hdlc_protospeed_hidlciethcmdifs_ifsuparityfr_proto_pvcreservedautonegmaxtxpkt__u8timeoutphy_addresst391t392cisco_protoethtool_cmdlp_advertisingencodingeth_tp_mdixfr_proto_pvc_infoeth_tp_mdix_ctrlintervalmasterpsutil/arch/linux/net.craw_hdlcfr_protociscoif_settingste1_settingsclock_rateloopbackecmdsupportedn391n392n393slot_mapsync_serial_settingsfr_pvc_infotransceiverduplexmemsetfr_pvcuint_speedsync07U7��U�0>T>IUI��T�XcP��U�*�U���T��U�*�T�0QUQ��U�xP�V��V��V��V��V��V��U�&�U���P�V
#V�V
#V��V
#VUG\G��U�^ePe|V��Vf|V��VkrV��Vc�P�A\+>P��U���U���T��U���T���	���PBVHTUTiVj�V%PHTPPTUTE�U�PbTb�V���T���V�E�T�P�	����P��V�
	��

P
Q
VV
J	��JWVWE	����P!
3
PcrPr�\�D
\V
E\��V�
VV
JVW*V��\��\4
D
\JW\��
RQ�*@
RQ���\*@\��0���P��V��0�*@V*EV��V��V�E	
�P��E	\�	0�	"	P"	/	]/	E	0�/	J	]/	J	]J	�	
�P�J	�	\J	_	0�_	r	Pr	{	]{	�	0�{	�	]{	�	]�	�	
�S��	�	\�	�	0��	�	P�	�	]�	�	0��	�	]�	�	]�
%
HQ��
%\�
�
0��
P]%0��	
]*]*]V
�

�P�V
�
\V
o
0�o
�
P�
�
]�
�
0��
�
]�
�
]�
�

�P��
�
\�
�
0��
�
P�
�
]�
�
0��
�
]�
�
]�
E
0Q��
E\�
0�"P"/]/E0�/J]/J]W�
6Q�W�\Wo0�o�P��]��0���]��]��
@Q���\��0���P��]��0���]��]�E
Q��E\�0�"P"/]/E0�/J]/J]J�
�R�J�\J_0�_rPr]�0��]�]��
Q���\��0���P��]��0���]��]�5

Q��5
\��0��
P

]
5
0�
:
]
:
]:
�

 Q�:
�
\:
O
0�O
b
Pb
o
]o
�
0�o
�
]o
�
]�
�

)Q��
�
\�
�
0��
�
P�
�
]�
�
0��
�
]�
�
]U��U�
T
U��T�8	��8SPS�V��U��VjuP��PU��U�$T$��T�KV�V��Vm�VY�^^^��^m�^*<P<�����*�0���P��Z�����0��Z�����0�m�0���S��0���Z��0�*Y0�Y�\��0�(>P>�\(\(0��m\m�P��\��0�*Y0�Y�S�B0�BNPN�S;S;�0��mSm�0���S��P��0�*s0�y�_�e0�elPl|_�_M_M�0��m_m�0���S��0���Z��0�*s0�y�]�p0��]_]_�0���P�m]m�0���S��0���Z��0���Z����'Z'����Z����'Z'����\(\		��\(\��S;S	��S;S��_M_		��_M_��]_]		��]_]������Q�m����Z��Z�Z��Z�Z��\��\m\��\6\��S��S(mS��S6VS��_��_Hm_��_Vm_��]��]��]��]`hUh~�U�`hTh~�T���U��S��U�S!�U�!CSCI�U���T�VU�T�V U !�T�!DVDHQHI�T���U���U���T��U���T���P��P���U����T���U�F�U���T�.V./�T�/EVEF�T�P'T/>T/?�U�/?V��U���U���P��X���U�.P.�\��PUz�U�T]V]z�T�h0�hlPl�]�z]��P��SGWPWZS�S_mS�0���P��V��0��Vm0�mzV�0���P��Q��_�0�(_(�0���_�m0�mz_�0���P�S0�6S6�0���S�m0�mzS+P+�\�z\]aPavVZ�V_V�PV�PV�_(P_		�_(P_S6GS		S6GS��0���V�Vhm0�mzV��V�VmzV	��0���_0�hm0�uz_��_��0���Shm0���S��\05U5��U�0>T>[U[��T�c���������0��*S��S��P�*\c�\��V�P*V��V��_P*_��_��V*V��V��V*V��V��_U&�U�T/U/&�T���S170�7�S��S&S9�]��A��	]	A�])A���]��A��]��\17P7�\��|���\��\&\17P9C\]�\�1\��\�\90�9CV]tPt�V��V��P�&V�0���]�0�-P-�]��0���]�0�&]��	s $ &�7CUFaUa�	s $ &���	s $ &���	s $ &�&	s $ &���^exPx�^��^&^��^��^��^��]��0�&]��]&]��U���U���T��U���T����h�=$�l�!���R��P��P���h�=$�l�!���R���d��1�Ut�U�TUt�T�7IPIOQORT\gP<Ip=&�IOq=&�ORt=&�RVT<Ip
��IOq
��OVQ$6�l$71�UH�U�TH�T�8OPO8S8LPLMS_HS%P%M]_�]��]��UH]�0���P�\M0�_h\h}0�}�\��0���\\8F0�S0�SdPd_M0�_�0��H_h0�h�P�^M0�_�0���^^8F0��0���P�VM0�_}0�}�V��0���P��VV8F0�qA2�_�2�82�q�s����X�8s��_�s����X��s��8s��qws���:�w}s���0�}�	s������	s������4���4�A__�_A__�_A^_�^	A^_�^AV_}V	AV_}VA\h}\		A\h}\��__8F_��_8_8H_��^.^FF0���^^8^
��VV.8VFH0���V.8V��\\FH0���\\��]��]��U
]Ud�U�Td�T�UQ�U�TUQ�T�;0�;ZPZ�V�
V
P$V*NV��P��P��T���L�
��@$���
��!���Q
�0���P��\�<0������fr��kr��������������[f���� (��(;��;M��M_��_r������������(�� (��(H��@H��Hp��`p���������	4
?
JR��*E��*E�	
�
*�	
	BSZm�� 6<�� (�(6�����hhmu���mu������hhuz��������hm��������]�����{~��������&��&%)C]���*�������� 000h�����`h��8F��.FF��.��.8FF��.8��
FH��
������
������8` ��x� 	 
  0$E
P�T�U�m�m�m�m�op r�u�0$`$!�$7�uC�mj�$v�m������'��`(��@)a��)�*�$�.v=/�Q6n rhx�r��n�shx t`������������\[���m��m��T��u�p#.?Vn��� ���9�.>Z0=&w`?��E��CQ��7���'6
:M`s������0<t'<La p��������6i��<y�$�;Qh�����06W��9z	�%�3BYo 'W����� &����u�@H%DQfz ����&j�"�	 	�')	#	@Cd8	crtstuff.cderegister_tm_clones__do_global_dtors_auxcompleted.0__do_global_dtors_aux_fini_array_entryframe_dummy__frame_dummy_init_array_entry_psutil_common.c_psutil_posix.cpsutil_net_if_mtupsutil_net_if_is_runningpsutil_posix_setprioritypsutil_convert_ipaddrpsutil_net_if_addrspsutil_posix_getprioritypsutil_net_if_flagspsutil_getpagesize_pywrappermoduledefmod_methods_psutil_linux.cdisk.cproc.cusers.cmem.cnet.c__FRAME_END____dso_handle_DYNAMIC__GNU_EH_FRAME_HDR__TMC_END___GLOBAL_OFFSET_TABLE_getenv@@GLIBC_2.2.5PyList_NewPySequence_Checkendmntent@@GLIBC_2.2.5PyModule_AddIntConstant__errno_location@@GLIBC_2.2.5strncpy@@GLIBC_2.2.5getpriority@@GLIBC_2.2.5_ITM_deregisterTMCloneTablePyEval_RestoreThreadPyErr_SetFromErrnoPyInit__psutil_linux_Py_DeallocPyErr_NoMemoryPyErr_SetObject__sched_cpucount@@GLIBC_2.6psutil_proc_cpu_affinity_getpsutil_proc_cpu_affinity_set_finipsutil_net_if_duplex_speedPyExc_RuntimeErrorPyInit__psutil_posixPyErr_SetStringPyExc_ValueErrorPyExc_TypeErrorsetpriority@@GLIBC_2.2.5psutil_getpagesizeioctl@@GLIBC_2.2.5close@@GLIBC_2.2.5PySequence_GetItemgetnameinfo@@GLIBC_2.2.5sched_setaffinity@@GLIBC_2.3.4fputc@@GLIBC_2.2.5endutent@@GLIBC_2.2.5PyLong_FromLongPySequence_Sizepsutil_proc_ioprio_getPyList_AppendPyExc_OSErrorfprintf@@GLIBC_2.2.5_Py_FalseStructsyscall@@GLIBC_2.2.5__gmon_start__PyObject_CallFunctionPyExc_OverflowErrorPy_BuildValuekill@@GLIBC_2.2.5PyErr_OccurredPyModule_Create2PyLong_AsLongpsutil_raise_for_pidpsutil_proc_ioprio_setpsutil_PyErr_SetFromOSErrnoWithSyscallgetutent@@GLIBC_2.2.5getmntent@@GLIBC_2.2.5setmntent@@GLIBC_2.2.5_Py_NoneStructgetifaddrs@@GLIBC_2.3__sched_cpufree@@GLIBC_2.7PyObject_IsTruePyArg_ParseTuplepsutil_pid_existspsutil_disk_partitionsNoSuchProcess__sched_cpualloc@@GLIBC_2.7_Py_TrueStructfreeifaddrs@@GLIBC_2.3setutent@@GLIBC_2.2.5psutil_set_debugPyUnicode_DecodeFSDefaultPyUnicode_FromStringPyEval_SaveThreadPyModule_AddO
bjectAccessDeniedsysconf@@GLIBC_2.2.5PSUTIL_DEBUGpsutil_userssysinfo@@GLIBC_2.2.5PyErr_SetFromErrnoWithFilenamePyErr_Formatsprintf@@GLIBC_2.2.5fwrite@@GLIBC_2.2.5_ITM_registerTMCloneTablesched_getaffinity@@GLIBC_2.3.4strerror@@GLIBC_2.2.5psutil_check_pid_range__cxa_finalize@@GLIBC_2.2.5_initpsutil_setupstderr@@GLIBC_2.2.5psutil_linux_sysinfosocket@@GLIBC_2.2.5.symtab.strtab.shstrtab.note.gnu.build-id.gnu.hash.dynsym.dynstr.gnu.version.gnu.version_r.rela.dyn.rela.plt.init.text.fini.rodata.eh_frame_hdr.eh_frame.init_array.fini_array.data.rel.ro.dynamic.got.got.plt.data.bss.comment.debug_aranges.debug_info.debug_abbrev.debug_line.debug_str.debug_loc.debug_ranges88$.���o``�8  �@���H���o���U���oxx�d��(nB  x  s    ~0$0$� �EE	�2PP���T�T���U�U���m�]��m�]��m�]��m�]���o�_p�p`� r b` ��u�e�0�e/��e�	/gSw���#9�c,/0��#:�B�PE5���!D	�L	d�SPKok\��G��r�r,psutil/__pycache__/_pswindows.cpython-39.pycnu�[���a

��?h��@s�dZddlZddlZddlZddlZddlZddlZddlZddlm	Z	ddl
mZddlmZddlm
Z
ddlmZdd	lmZdd
lmZddlmZddlmZdd
lmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlm Z ddlm!Z!ddlm"Z"ddlm#Z#ddlm$Z$ddlm%Z%zddl
mZ&Wnle'�y�Z(zRe)e(��*��+d��r�e�,�dd k�r�d!Z-e-d"7Z-e-d#7Z-e.e-��n�WYdZ([(n
dZ([(00e�rddl/Z/ndZ/gd$�Z0d%Z1d&Z2d'ej3vZ4e/du�r0d(Z5ne/�6d)d*d(i�Z7e7j5Z5e&j8ej9e&j:ej;e&j<ej=e&j>ej?e&j@ejAe&jBejCe&jDejEe&jFejGe&jHejIe&jJejKe&jLejMe&jNe1e&jOejPi
ZQe/du�r�Gd+d,�d,e/j6�ZReS��TeRjU�e/du�r�dZVdZWd-ZXd.ZYn Gd/d0�d0e/j6�ZZeS��TeZjU�e[ddd-d.d1d2d d3d4d5d6d7d8d9d:d;d<d=d>d?d@dAdB�Z\e	dCgdD��Z]e	dEgdF��Z^e	dGgdH��Z_e	dIe_j`dJ�Zae	dKdLdMg�Zbe	dNdOdP�cebj`��Zde	dQgdR��ZeedSdT�dUdV��ZfdWdX�ZgedYdZ��Zhd[d\�Zid]d^�Zje&jkZkd_d`�Zldadb�Zmdcdd�Zndedf�Zodgdh�Zpdidj�Zqdkdl�Zrdmdn�Zsdoatdpdq�Zud�drds�Zvdtdu�Zwdvdw�Zxdxdy�Zydzd{�Zzda{d|d}�Z|d~d�Z}d�d��Z~d�d��ZGd�d��d��Z�e&j�Z�e&j�Z�e&j�Z�d�d��Z�d�d�d��Z�d�d��Z�d�d��Z�Gd�d��d��Z�dS)�z Windows platform implementation.�N)�
namedtuple�)�_common)�ENCODING)�
ENCODING_ERRS)�AccessDenied)�
NoSuchProcess)�TimeoutExpired)�	conn_tmap)�conn_to_ntuple)�debug)�
isfile_strict)�memoize)�memoize_when_activated)�parse_environ_block)�
usage_percent)�PY3)�long)�	lru_cache��range)�unicode)�ABOVE_NORMAL_PRIORITY_CLASS)�BELOW_NORMAL_PRIORITY_CLASS)�HIGH_PRIORITY_CLASS)�IDLE_PRIORITY_CLASS)�NORMAL_PRIORITY_CLASS)�REALTIME_PRIORITY_CLASS)�_psutil_windowszdll load failed�z3this Windows version is too old (< Windows Vista); z:psutil 3.4.2 is the latest version which supports Windows z2000, XP and 2003 server)�win_service_iter�win_service_getrrrrrr�IOPRIO_VERYLOW�
IOPRIO_LOW�
IOPRIO_NORMAL�IOPRIO_HIGH�CONN_DELETE_TCB�AF_LINKZ
DELETE_TCBi+Z__pypy__����
AddressFamilyr'c@s$eZdZeZeZeZeZeZeZdS)�PriorityN)	�__name__�
__module__�__qualname__rrrrrr�r.r.�=/usr/local/lib64/python3.9/site-packages/psutil/_pswindows.pyr*ssr*��c@seZdZdZdZdZdZdS)�
IOPriorityrrr0r1N)r+r,r-r"r#r$r%r.r.r.r/r2�sr2�����	�
���
��������)�num_handles�ctx_switches�	user_time�kernel_time�create_time�num_threads�	io_rcount�	io_wcount�	io_rbytes�	io_wbytes�io_count_others�io_bytes_others�num_page_faults�	peak_wset�wset�peak_paged_pool�
paged_pool�peak_non_paged_pool�non_paged_pool�pagefile�
peak_pagefile�mem_private�	scputimes)�user�system�idle�	interrupt�dpc�svmem)�total�	available�percent�used�free�pmem)�rss�vmsrPrQrRrSrTZpeak_nonpaged_poolZ
nonpaged_poolrWrX�private�pfullmem)�uss�
pmmap_grouped�pathrg�	pmmap_extzaddr perms � �pio)Z
read_countZwrite_count�
read_bytes�write_bytes�other_countZother_bytesi)�maxsizecCs@d�|�d�dd��}t�|�}|t|�d�}tj�||�S)z�Convert paths using native DOS format like:
        "\Device\HarddiskVolume1\Windows\systemew\file.txt"
    into:
        "C:\Windows\systemew\file.txt".
    �\Nr1)�join�split�cextZQueryDosDevice�len�osrm)�sZrawdriveZdriveletter�	remainderr.r.r/�convert_dos_path�s
r}cCs&tr|St|t�r|S|�tt�SdS)zmEncode a unicode string to a byte string by using the default fs
    encoding + "replace" error handler.
    N)r�
isinstance�str�encoderr)r{r.r.r/�
py2_strencode�s

r�cCst��S�N)rx�getpagesizer.r.r.r/r��sr�c
CsJt��}|\}}}}|}|}|}||}t|||dd�}	t|||	||�S)z&System virtual memory as a namedtuple.r�Zround_)rx�virtual_memrr`)
�memZtotphysZ	availphysZ_totsysZ	_availsysraZavailrerdrcr.r.r/�virtual_memory�sr�cCspt��}|d}|d}||}|dkrBt��}td||�}nd}d}||}t|d�}t�||||dd�S)z=Swap system memory as a (total, used, free, sin, sout) tuple.rr0g{�G�z�?�r)rxr�Zswap_percent�int�roundrZsswap)r�Z
total_physZtotal_systemraZpercentswaprdrercr.r.r/�swap_memory�s
r�cCsPtrt|t�r|jtdd�}t�|�\}}||}t||dd�}t�	||||�S)z'Return disk usage associated with path.�strict)�errorsrr�)
rr~�bytes�decoderrx�
disk_usagerrZ
sdiskusage)rmrarerdrcr.r.r/r�sr�cCst�|�}dd�|D�S)zReturn disk partitions.cSsg|]}tj|��qSr.)rZ	sdiskpart��.0�xr.r.r/�
<listcomp>+�z#disk_partitions.<locals>.<listcomp>)rx�disk_partitions)�all�rawlistr.r.r/r�(s
r�cCs<t��\}}}tdd�tt���D��}t||||j|j�S)z)Return system CPU times as a named tuple.cSsg|]}t|��qSr.)�sum)r��nr.r.r/r�9r�zcpu_times.<locals>.<listcomp>)rx�	cpu_timesrZ�zip�
per_cpu_timesr^r_)r[r\r]Z
percpu_summedr.r.r/r�3s
�r�cCs:g}t��D](\}}}}}t|||||�}|�|�q|S)z6Return system per-CPU times as a list of named tuples.)rxr�rZ�append)�retr[r\r]r^r_�itemr.r.r/r�?s
r�cCst��S)z0Return the number of logical CPUs in the system.)rx�cpu_count_logicalr.r.r.r/r�Hsr�cCst��S)z-Return the number of CPU cores in the system.)rx�cpu_count_coresr.r.r.r/r�Msr�cCs$t��\}}}}d}t�||||�S)zReturn CPU statistics.r)rx�	cpu_statsrZ	scpustats)rEZ
interruptsZ_dpcsZsyscallsZsoft_interruptsr.r.r/r�Rs
�r�cCs(t��\}}d}t�t|�|t|��gS)zMReturn CPU frequency.
    On Windows per-cpu frequency is not supported.
    r�)rx�cpu_freqrZscpufreq�float)�currZmax_Zmin_r.r.r/r�[sr�FcCs*tst��dat��}tdd�|D��S)z�Return the number of processes in the system run queue averaged
    over the last 1, 5, and 15 minutes respectively as a tuple.
    TcSsg|]}t|d��qS)r0)r�)r��loadr.r.r/r�sr�zgetloadavg.<locals>.<listcomp>)�_loadavg_inititializedrxZinit_loadavg_counter�
getloadavg�tuple)Z	raw_loadsr.r.r/r�gs
r�cCs�|tvr(td|d�dd�tD��f��t|\}}t�|||�}t�}|D]D}|\}}}	}
}}}
t|||	|
||t|dkr~|
ndd�}|�|�qLt	|�S)z�Return socket connections.  If pid == -1 return system-wide
    connections (as opposed to connections opened by one process only).
    z+invalid %r kind argument; choose between %sz, cSsg|]}t|��qSr.)�reprr�r.r.r/r��r�z#net_connections.<locals>.<listcomp>r(N)�pid)
r
�
ValueErrorrvrx�net_connections�setr�TCP_STATUSES�add�list)�kind�_pidZfamilies�typesr�r�r��fd�fam�type�laddr�raddr�statusr��ntr.r.r/r�{s.���
r�cCszi}t��}|��D]`\}}ts>t|t�s6Jt|���t|�}|\}}}}tt	d�r^t	�
|�}t	�||||d�||<q|S)z)Get NIC stats (isup, duplex, speed, mtu).�	NicDuplex�)rx�net_if_stats�itemsrr~rr�r��hasattrrr�Z	snicstats)r�Zrawdict�namer�ZisupZduplex�speedZmtur.r.r/r��s

r�cCst��}tdd�|��D��S)zsReturn network I/O statistics for every network interface
    installed on the system as a dict of raw tuples.
    cSsg|]\}}t|�|f�qSr.�r�)r��k�vr.r.r/r��r�z#net_io_counters.<locals>.<listcomp>)rx�net_io_counters�dictr��r�r.r.r/r��sr�cCs8g}t��D]&}t|�}t|d�|d<|�|�q|S)z,Return the addresses associated to each NIC.r)rx�net_if_addrsr�r�r�)r�r�r.r.r/r��sr�cCsdt��\}}}}|dk}t|d@�}t|d@�}|r8dS|s@|rHtj}n|dkrVtj}t�|||�S)zReturn battery information.r�r6Nr()rx�sensors_battery�boolrZPOWER_TIME_UNLIMITEDZPOWER_TIME_UNKNOWNZsbattery)Z
acline_status�flagsrcZsecsleftZ
power_pluggedZ
no_batteryZchargingr.r.r/r��sr�cCs,tt���}t|t�dkr tS|a|SdS)z:The system boot time expressed in seconds since the epoch.rN)r�rx�	boot_time�abs�_last_btimer�r.r.r/r��s
r�cCsHg}t��}|D]2}|\}}}t|�}t�|d||d�}|�|�q|S)z:Return currently connected users as a list of namedtuples.N)rx�usersr�rZsuserr�)�retlistr�r�r[�hostnameZtstampr�r.r.r/r��s
r�ccs*t��D]\}}tt|�t|��VqdS)z*Yields a list of WindowsService instances.N)rxZwinservice_enumerate�WindowsServicer�)r��display_namer.r.r/r �sr cCst|d�}|��d|_|S)zBOpen a Windows service and return it as a WindowsService instance.Nr�)r��
_query_config�
_display_name)r�Zservicer.r.r/r!�s
r!c@s�eZdZdZdd�Zdd�Zdd�Zdd	�Zd
d�Zdd
�Z	dd�Z
ejdd��Z
dd�Zdd�Zdd�Zdd�Zdd�Zdd�Zdd�Zd d!�Zd"d#�Zd$S)%r�z(Represents an installed Windows service.cCs||_||_dSr�)�_namer�)�selfr�r�r.r.r/�__init__szWindowsService.__init__cCs d|j|jf}d|jj|fS)Nz(name=%r, display_name=%r)z%s%s)r�r��	__class__r+)r��detailsr.r.r/�__str__s
�zWindowsService.__str__cCsd|��t|�fS)Nz
<%s at %s>)r��id�r�r.r.r/�__repr__szWindowsService.__repr__cCst|t�stS|j|jkSr�)r~r��NotImplementedr��r��otherr.r.r/�__eq__s
zWindowsService.__eq__cCs
||kSr�r.r�r.r.r/�__ne__szWindowsService.__ne__cCs\|���$t�|j�\}}}}Wd�n1s20Ytt|�t|�t|�t|�d�S)N)r��binpath�username�
start_type)�_wrap_exceptionsrxZwinservice_query_configr�r�r�)r�r�r�r�r�r.r.r/r� s

�(�zWindowsService._query_configcCsP|��� t�|j�\}}Wd�n1s.0Y|dkrDd}t||d�S)Nr)r�r�)r�rxZwinservice_query_statusr�r�)r�r�r�r.r.r/�
_query_status-s

.zWindowsService._query_statusc
cs�z
dVWnxty�}z`t|�r>d|j}td|j|d��n0|jtjtjfvrld|j}td|j|d��n�WYd}~n
d}~00dS)z{Ctx manager which translates bare OSError and WindowsError
        exceptions into NoSuchProcess and AccessDenied.
        Nz2service %r is not querable (not enough privileges)�r�r��msgzservice %r does not exist)	�OSError�is_permission_errr�r�winerrorrxZERROR_INVALID_NAMEZERROR_SERVICE_DOES_NOT_EXISTr)r��errr�r.r.r/r�4s 
���
zWindowsService._wrap_exceptionscCs|jS)z�The service name. This string is how a service is referenced
        and can be passed to win_service_get() to get a new
        WindowsService instance.
        )r�r�r.r.r/r�MszWindowsService.namecCs|jS)z_The service display name. The value is cached when this class
        is instantiated.
        )r�r�r.r.r/r�TszWindowsService.display_namecCs|��dS)zwThe fully qualified path to the service binary/exe file as
        a string, including command line arguments.
        r��r�r�r.r.r/r�ZszWindowsService.binpathcCs|��dS)z,The name of the user that owns this service.r�r�r�r.r.r/r�`szWindowsService.usernamecCs|��dS)zRA string which can either be "automatic", "manual" or
        "disabled".
        r�r�r�r.r.r/r�dszWindowsService.start_typecCs|��dS)zzThe process PID, if any, else None. This can be passed
        to Process class to control the service's process.
        r��r�r�r.r.r/r�lszWindowsService.pidcCs|��dS)zService status as a string.r�r�r�r.r.r/r�rszWindowsService.statuscCstt�|����S)zService long description.)r�rxZwinservice_query_descrr�r�r.r.r/�descriptionvszWindowsService.descriptioncCs>|��}|�|���|��|d<|��|d<|��|d<|S)zUUtility method retrieving all the information above as a
        dictionary.
        r�r�r�)r��updater�r�r�r�)r��dr.r.r/�as_dict|szWindowsService.as_dictN)r+r,r-�__doc__r�r�r�r�r�r�r��
contextlib�contextmanagerr�r�r�r�r�r�r�r�r�r�r.r.r.r/r�s&

r�cCs@t|t�sJ|��|jtjtjfvr(dSt|dd�tjtjfvS)z*Return True if this is a permission error.Tr�r()	r~r��errno�EPERM�EACCES�getattrrxZERROR_ACCESS_DENIEDZERROR_PRIVILEGE_NOT_HELD)�excr.r.r/r��s
�r�cCsFt|t�sJ|��t|�r&t||d�S|jtjkr>t||d�S|�dS)z3Convert OSError into NoSuchProcess or AccessDenied.�r�r�N)r~r�r�rr�ZESRCHr)r�r�r�r.r.r/�convert_oserror�srcst����fdd��}|S)zDDecorator which converts OSError into NoSuchProcess or AccessDenied.c
sTz�|g|�Ri|��WStyN}zt||j|jd��WYd}~n
d}~00dS)Nr)r�rr�r�)r��args�kwargsr���funr.r/�wrapper�sz wrap_exceptions.<locals>.wrapper��	functools�wraps�rrr.rr/�wrap_exceptions�srcst����fdd��}|S)z�Workaround for https://github.com/giampaolo/psutil/issues/875.
    See: https://stackoverflow.com/questions/4457745#4457745.
    cs�d}d}t|�D]z}z�|g|�Ri|��WSty�}z@|}|jtkrrt�|�t|dd�}WYd}~q�WYd}~qd}~00qd��||�}t|j	|j
|d��dS)N�-C��6?�!r0�{�G�z�?zH{} retried {} times, converted to AccessDenied as it's stillreturning {}r�)r�WindowsErrorr��ERROR_PARTIAL_COPY�time�sleep�min�formatrr�r�)r�rr�delay�times�_r�r�rr.r/r�s"

��z)retry_error_partial_copy.<locals>.wrapperrr
r.rr/�retry_error_partial_copy�src@s�eZdZdZgd�Zdd�Zdd�Zdd�Zed	d
��Z	dd�Z
eed
d���Zee
dd���Zee
dd���Zdd�Zdd�Zedd��Zedd��Zdd�Zedd��Zedd ��ZedNd"d#��Zed$d%��ZedOd'd(��Zed)d*��Zed+d,��Zed-d.��Zed/d0��Zed1d2��Zee
d3d4���Zed5d6��Z edPd8d9��Z!ed:d;��Z"ed<d=��Z#ed>d?��Z$ed@dA��Z%edBdC��Z&edDdE��Z'edFdG��Z(edHdI��Z)edJdK��Z*edLdM��Z+d!S)Q�Processz1Wrapper class around underlying C implementation.)�_cacher��_ppidr�cCs||_d|_d|_dSr�)r�r�r)r�r�r.r.r/r��szProcess.__init__cCs|j�|�|j�|�dSr�)�
_proc_infoZcache_activate�exer�r.r.r/�
oneshot_enterszProcess.oneshot_entercCs|j�|�|j�|�dSr�)rZcache_deactivaterr�r.r.r/�oneshot_exit
szProcess.oneshot_exitcCs$t�|j�}t|�tt�ks J�|S)zOReturn multiple information about this process as a
        raw tuple.
        )rxZ	proc_infor�ry�	pinfo_map�r�r�r.r.r/rszProcess._proc_infocCs,|jdkrdS|jdkrdStj�|���S)zbReturn process name, which on Windows is always the final
        part of the executable.
        rzSystem Idle Processr3�System)r�rzrm�basenamerr�r.r.r/r�s


zProcess.namec
Cs�trbzt�|j�}Wqnty^}z2|jdkrHtd|�t|j|j���WYd}~qnd}~00nt�|j�}t	szt
|�}|�d�r�t|�S|S)N�z%r translated into AccessDeniedru)
�PYPYrxZproc_exer�rr�rrr�rr��
startswithr})r�rr�r.r.r/r#s

zProcess.exec
Cs�tjtjkrdztj|jdd�}Wqtty`}z(t|�rJtj|jdd�}n�WYd}~qtd}~00ntj|jdd�}tr||Sdd�|D�SdS)NT)Zuse_pebFcSsg|]}t|��qSr.r�)r�r{r.r.r/r�Jr�z#Process.cmdline.<locals>.<listcomp>)rxZWINVERZWINDOWS_8_1Zproc_cmdliner�r�r�r)r�r�r�r.r.r/�cmdline8szProcess.cmdlinecCs6t�|j�}|r*ts*t|t�s*Jt|���tt|��Sr�)	rxZproc_environr�rr~rr�rr�)r�Zustrr.r.r/�environLszProcess.environcCs4zt�|jWSty.t|j|j��Yn0dSr�)�ppid_mapr��KeyErrorrr�r�r.r.r/�ppidTszProcess.ppidcCs�zt�|j�WSty�}z�t|�r�td�|��}|td|td|td|td|td|td|td|td	|td
|tdf
WYd}~S�WYd}~n
d}~00dS)Nz*attempting memory_info() fallback (slower)rPrQrRrSrTrUrVrWrXrY)rxZproc_memory_infor�r�r�rrr )r�r��infor.r.r/�_get_raw_meminfoZs$









�zProcess._get_raw_meminfocCs(|��}|d}|d}t||f|�S)Nr0r5)r-rf)r��trgrhr.r.r/�memory_infoqszProcess.memory_infocCs,|��}t�|j�}|t�9}t||f�Sr�)r/rxZproc_memory_ussr�r�rj)r�Z	basic_memrkr.r.r/�memory_full_info{s
zProcess.memory_full_infoc
cs�zt�|j�}Wn4tyD}zt||j|j��WYd}~nFd}~00|D]6\}}}}t|�}tsjt|�}t	|�}||||fVqJdSr�)
rxZproc_memory_mapsr�r�rr�r}rr��hex)r��rawr��addr�permrmrgr.r.r/�memory_maps�s&zProcess.memory_mapscCst�|j�Sr�)rx�	proc_killr�r�r.r.r/�kill�szProcess.killcCsX|tjkrt�|j�n<|ttdt��ttdt��fvrHt�|j|�nd}t	|��dS)NZCTRL_C_EVENTZCTRL_BREAK_EVENTzPonly SIGTERM, CTRL_C_EVENT and CTRL_BREAK_EVENT signals are supported on Windows)
�signal�SIGTERMrxr6r�r��objectrzr7r�)r��sigr�r.r.r/�send_signal�s
��zProcess.send_signalNcCs�|durtj}nt|d�}ttdtj�}|dur<|�|nd}zt�|j|�}Wn:tjyvt||j|j��Yntj	y�d}Yn0d}t
|j�s�|S|r�|�|kr�t||j|jd��t�|�t|dd�}q�dS)Ni��	monotonicrrr0r)
rxZINFINITEr�r�rZ	proc_waitr�r	r�ZTimeoutAbandoned�
pid_existsrr)r��timeoutZcext_timeout�timerZstop_atZ	exit_coderr.r.r/�wait�s$	


zProcess.waitcCs2|jdvrdSt�|j�\}}t|�dt|�S)N�rr3zNT AUTHORITY\SYSTEMru)r�rxZ
proc_usernamer�)r��domainr[r.r.r/r��s
zProcess.usernameFc
Csvzt�|j�\}}}|WStyp}z@t|�rZ|r6�td�|��tdWYd}~S�WYd}~n
d}~00dS)Nz*attempting create_time() fallback (slower)rH)rx�
proc_timesr�r�r�rrr )r�Z	fast_only�_userZ_system�createdr�r.r.r/rH�szProcess.create_timecCs|��tdS)NrI)rr r�r.r.r/rI�szProcess.num_threadscCs<t�|j�}g}|D]"\}}}t�|||�}|�|�q|Sr�)rxZproc_threadsr�rZpthreadr�)r�r�r��	thread_id�utimeZstime�ntupler.r.r/�threads�szProcess.threadsc
Cs~zt�|j�\}}}WnVtyl}z>t|�s0�td�|��}|td}|td}WYd}~n
d}~00t�	||dd�S)Nz(attempting cpu_times() fallback (slower)rFrGr�)
rxrDr�r�r�rrr rZ	pcputimes)r�r[r\Z_createdr�r,r.r.r/r��s"zProcess.cpu_timescCst�|jd�dS)NT�rxZproc_suspend_or_resumer�r�r.r.r/�suspendszProcess.suspendcCst�|jd�dS)NFrKr�r.r.r/�resumeszProcess.resumecCs4|jdvrt|j|j��t�|j�}ttj�|��S)NrB)	r�rr�rxZproc_cwdr�rzrm�normpath)r�rmr.r.r/�cwd
s
zProcess.cwdcCsd|jdvrgSt�}t�|j�}|D]6}t|�}t|�r$tsDt|�}t�	|d�}|�
|�q$t|�S)NrBr()r�r�rxZproc_open_filesr}r
rr�rZ	popenfiler�r�)r�r�Zraw_file_names�_filerIr.r.r/�
open_filess
zProcess.open_files�inetcCst||jd�S)N)r�)r�r�)r�r�r.r.r/r�'szProcess.net_connectionscCs t�|j�}tdurt|�}|Sr�)rxZproc_priority_getr��enumr*�r��valuer.r.r/�nice_get+szProcess.nice_getcCst�|j|�Sr�)rxZproc_priority_setr�rTr.r.r/�nice_set2szProcess.nice_setcCs t�|j�}tdurt|�}|Sr�)rxZproc_io_priority_getr�rSr2r!r.r.r/�
ionice_get6szProcess.ionice_getcCs>|rd}t|��|ttttfvr,td|��t�|j|�dS)Nz&value argument not accepted on Windowsz%s is not a valid priority)	�	TypeErrorr"r#r$r%r�rxZproc_io_priority_setr�)r�ZioclassrUr�r.r.r/�
ionice_set=s�zProcess.ionice_setcCs�zt�|j�}Wn~ty�}zft|�s*�td�|��}|td|td|td|td|td|tdf}WYd}~n
d}~00t|�S)Nz*attempting io_counters() fallback (slower)rJrKrLrMrNrO)	rxZproc_io_countersr�r�r�rrr rp)r�r�r�r,r.r.r/�io_countersKs





�zProcess.io_counterscCs t�|j�}|rtjStjSdSr�)rxZproc_is_suspendedr�rZSTATUS_STOPPEDZSTATUS_RUNNING)r�Z	suspendedr.r.r/r�^szProcess.statuscCsdd�}t�|j�}||�S)Ncs�fdd�td�D�S)Ncsg|]}d|>�@r|�qS)rr.)r��i�r�r.r/r�ir�zBProcess.cpu_affinity_get.<locals>.from_bitmask.<locals>.<listcomp>�@rr]r.r]r/�from_bitmaskhsz.Process.cpu_affinity_get.<locals>.from_bitmask)rxZproc_cpu_affinity_getr�)r�r_�bitmaskr.r.r/�cpu_affinity_getfszProcess.cpu_affinity_getcCsndd�}tttt����}|D]4}||vrt|ttf�sFtd|��qtd|��q||�}t	�
|j|�dS)NcSs.|std|��d}|D]}|d|O}q|S)Nzinvalid argument %rrr0)r�)Zls�out�br.r.r/�
to_bitmaskpsz,Process.cpu_affinity_set.<locals>.to_bitmaskz&invalid CPU %r; an integer is requiredzinvalid CPU %r)r�rryr�r~r�rrYr�rxZproc_cpu_affinity_setr�)r�rUrdZallcpus�cpur`r.r.r/�cpu_affinity_setns�zProcess.cpu_affinity_setc
Csfzt�|j�WSty`}z:t|�rJtd�|��tdWYd}~S�WYd}~n
d}~00dS)Nz*attempting num_handles() fallback (slower)rD)rxZproc_num_handlesr�r�r�rrr )r�r�r.r.r/rD�szProcess.num_handlescCs|��td}t�|d�S)NrEr)rr rZpctxsw)r�rEr.r.r/�num_ctx_switches�szProcess.num_ctx_switches)N)F)rR),r+r,r-r��	__slots__r�rrrrr�rrrr'r(r+r-r/r0r5r7r<rAr�rHrIrJr�rLrMrOrQr�rVrWrXrZr[r�rarfrDrgr.r.r.r/r�s�

	


*

















	r)r()NN)�r�r�r�rrzr8�sysr�collectionsrr�rrrrrr	r
rrr
rrrrZ_compatrrrrrrrrrrrrrx�ImportErrorr�r�lowerr&�getwindowsversionr��RuntimeErrorrSZ__extra__all__r&r�builtin_module_namesr%r'�IntEnumr)ZMIB_TCP_STATE_ESTABZCONN_ESTABLISHEDZMIB_TCP_STATE_SYN_SENTZ
CONN_SYN_SENTZMIB_TCP_STATE_SYN_RCVDZ
CONN_SYN_RECVZMIB_TCP_STATE_FIN_WAIT1ZCONN_FIN_WAIT1ZMIB_TCP_STATE_FIN_WAIT2ZCONN_FIN_WAIT2ZMIB_TCP_STATE_TIME_WAITZCONN_TIME_WAITZMIB_TCP_STATE_CLOSEDZ
CONN_CLOSEZMIB_TCP_STATE_CLOSE_WAITZCONN_CLOSE_WAITZMIB_TCP_STATE_LAST_ACKZ
CONN_LAST_ACKZMIB_TCP_STATE_LISTENZCONN_LISTENZMIB_TCP_STATE_CLOSINGZCONN_CLOSINGZMIB_TCP_STATE_DELETE_TCBZPSUTIL_CONN_NONEZ	CONN_NONEr�r*�globalsr��__members__r"r#r$r%r2r�r rZr`rf�_fieldsrjrlrvrnrpr}r�r�r�r�Zdisk_io_countersr�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r r!r�Zpidsr>r)r�rrrrr.r.r.r/�<module>s6��



�

�!���


				
0


PKok\�U��*psutil/__pycache__/__init__.cpython-39.pycnu�[���a

��?hB\�@s�dZddlmZddlZddlZddlZddlZddlZddlZddl	Z	ddl
Z
ddlZddlZzddl
Z
Wney�dZ
Yn0ddlmZddlmZddlmZddlmZdd	lmZdd
lmZddlmZddlmZdd
lmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlm Z ddlm!Z!ddlm"Z"ddlm#Z#ddlm$Z$ddlm%Z%ddlm&Z&ddlm'Z'ddlm(Z(ddlm)Z)ddlm*Z*dd lm+Z+dd!lm,Z,dd"lm-Z-dd#lm.Z.dd$lm/Z/dd%lm0Z0dd&lm1Z1dd'lm2Z2dd(lm3Z3dd)lm4Z4dd*lm5Z5dd+lm6Z6dd,lm7Z7dd-lm8Z8dd.lm9Z9dd/lm:Z:dd0lm;Z;dd1lm<Z<dd2lm=Z=dd3lm>Z>dd4lm?Z?dd5lm@ZAdd6lBmCZDdd7lBmEZEdd8lBmFZFdd9lBmGZHdd:lBmIZIe �rVd;ZJdd<lmKZLdd=lKmMZMdd>lKmNZNdd?lKmOZOdd@lKmPZP�ne8�r�ddAlmQZLddBlRmSZSddClRmTZTddDlRmUZUddElRmVZVddFlRmWZWddGlRmXZXddHlQmYZYddIlQmZZZddJlQm[Z[ddKlQm\Z\ddLlQm]Z]n~e!�rddMlm^ZLnje�rddNlm_ZLnVe7�rFddOlm`ZLddPl`maZaddQl`mbZbd;ZJn&e�r^ddRlmcZLd;ZJneddSe
je��gdT�Zfef�geLjh�eieLjjdU��r�ddVlmkZkel�ZmdZneoek�D]4Znen�pdW��r�en�q��r�ereken�emen<ef�sen��q�[m[neLjtZtdXZudYZvewdZd[�ev�xd\�D��Zyered]ej�Zzda{da|e}�Z~eev��d\d^��ereLj�d_d�k�r�d`eLj�j�Z�e�da7Z�eieLj�d_��r�e�dbd\��dcd[�e�eLj�j��D��evf7Z�ne�ddev7Z�e�deereLj�dfdg�7Z�e�dh7Z�ee���eieLdi��r�eLj�Z�ndjdk�Z�dldm�Z�Gdndo�doe}�Zje�dpd[�eoej�D��Z�Gdqdr�drej�Z�dsdt�Z�dudv�Z�ia�e��Z�d�dwdx�Z�dydz�e�_�d{e�j�_d�d|d}�Z�d�dd��Z�d�d�d��Z�ze���j�e��iZ�Wne��y�iZ�Yn0ze���j�e�d~d��iZ�Wne��y�iZ�Yn0d�d��Z�d�d��Z�d�d��Z�d�d�d��Z�e����Z�e����Z�d�d�d��Z�d�d��Z�eieLd���r8d�d�d��Z�ef�sd��eied���sPeieLd���rteied���rdej�Z�neLj�Z�ef�sd��d�d��Z�d�d��Z�d�d��Z�d�d�d��Z�d�d�d��Z�e��eAj�d��e�_�d�e�j�_d�d�d��Z�e��eAj�d��e�_�d�e�j�_d�d�d��Z�d�d��Z�d�d��Z�eieLd���rd�d�d��Z�ef�sd��eieLd���r2d�d��Z�ef�sd��eieLd���rPd�d��Z�ef�sd��d�d��Z�d�d��Z�e8�rvd�d��Z�d�d��Z�d�d��Z�d�d��Z�[?[e
jydd�k�r�[�[�e�d�k�r�e��dS)�a/psutil is a cross-platform library for retrieving information on
running processes and system utilization (CPU, memory, disks, network,
sensors) in Python. Supported platforms:

 - Linux
 - Windows
 - macOS
 - FreeBSD
 - OpenBSD
 - NetBSD
 - Sun Solaris
 - AIX

Works with Python versions 2.7 and 3.6+.
�)�divisionN�)�_common)�AIX)�BSD)�
CONN_CLOSE)�CONN_CLOSE_WAIT)�CONN_CLOSING)�CONN_ESTABLISHED)�CONN_FIN_WAIT1)�CONN_FIN_WAIT2)�
CONN_LAST_ACK)�CONN_LISTEN)�	CONN_NONE)�
CONN_SYN_RECV)�
CONN_SYN_SENT)�CONN_TIME_WAIT)�FREEBSD)�LINUX)�MACOS)�NETBSD)�NIC_DUPLEX_FULL)�NIC_DUPLEX_HALF)�NIC_DUPLEX_UNKNOWN)�OPENBSD)�OSX)�POSIX)�POWER_TIME_UNKNOWN)�POWER_TIME_UNLIMITED)�STATUS_DEAD)�STATUS_DISK_SLEEP)�STATUS_IDLE)�
STATUS_LOCKED)�
STATUS_PARKED)�STATUS_RUNNING)�STATUS_SLEEPING)�STATUS_STOPPED)�STATUS_TRACING_STOP)�STATUS_WAITING)�
STATUS_WAKING)�
STATUS_ZOMBIE)�SUNOS)�WINDOWS)�AccessDenied)�Error)�
NoSuchProcess)�TimeoutExpired)�
ZombieProcess)�debug)�memoize_when_activated)�wrap_numbers)�PY3)�PermissionError)�ProcessLookupError)�SubprocessTimeoutExpired)�longz/proc)�_pslinux)�IOPRIO_CLASS_BE)�IOPRIO_CLASS_IDLE)�IOPRIO_CLASS_NONE)�IOPRIO_CLASS_RT)�
_pswindows)�ABOVE_NORMAL_PRIORITY_CLASS)�BELOW_NORMAL_PRIORITY_CLASS)�HIGH_PRIORITY_CLASS)�IDLE_PRIORITY_CLASS)�NORMAL_PRIORITY_CLASS)�REALTIME_PRIORITY_CLASS)�CONN_DELETE_TCB)�IOPRIO_HIGH)�
IOPRIO_LOW)�
IOPRIO_NORMAL)�IOPRIO_VERYLOW)�_psosx)�_psbsd)�_pssunos)�
CONN_BOUND)�	CONN_IDLE)�_psaixzplatform %s is not supported)Gr.r/r1r-r0�version_info�__version__r$r!r%r r&r'r*rr)r"r(r"r#r
rrrrrrrr
rr	r�AF_LINKrrrrrrrrrrrrrr+r,r�Process�Popen�
pid_exists�pids�process_iter�
wait_procs�virtual_memory�swap_memory�	cpu_times�cpu_percent�cpu_times_percent�	cpu_count�	cpu_stats�net_io_counters�net_connections�net_if_addrs�net_if_stats�disk_io_counters�disk_partitions�
disk_usage�users�	boot_time�rlimit)�
_psutil_posixZRLIMzGiampaolo Rodola'z6.1.0cCsg|]}t|��qS�)�int)�.0�numrlrl�;/usr/local/lib64/python3.9/site-packages/psutil/__init__.py�
<listcomp>��rq�.�	monotonic��versionz!version conflict: %r C extension z.module was built for another version of psutilz (%s instead of %s)cCsg|]}|�qSrlrl�rn�xrlrlrprq�rrz (different than %s)z;; you may try to 'pip uninstall psutil', manually remove %s�__file__z%the existing psutil install directoryz1 or clean the virtual env somehow, then reinstall�ppid_mapc
CsBi}t�D]2}zt�|���||<Wq
ttfy:Yq
0q
|S)z{Return a {pid: ppid, ...} dict for all running processes in
        one shot. Used to speed up Process.children().
        )rW�_psplatformrT�ppidr/r1)�ret�pidrlrlrp�	_ppid_maps
rcCs6t��}t||�}|dkr dnd}tj�|��|�S)z(Format seconds in a human readable form.i�Qz%H:%M:%Sz%Y-%m-%d %H:%M:%S)�timerm�datetime�
fromtimestamp�strftime)Zsecs�nowZsecs_ago�fmtrlrlrp�_pprint_secssr�c@s�eZdZdZd�dd�Zd�dd�Zdd	�Zd
d�ZeZdd
�Z	dd�Z
dd�Zdd�Ze
dd��Zejdd��Zd�dd�Zdd�Zdd�Zdd�Zed d!��Zd"d#�Zd$d%�Zd&d'�Zd(d)�Zd*d+�Zd,d-�Zd.d/�Zd�d0d1�Ze �red2d3��Z!d4d5�Z"d6d7�Z#d8d9�Z$e%e&j'd:��rd;d<�Z(e%e&j'd=��r6d�d>d?�Z)e%e&j'd@��rNd�dAdB�Z*e%e&j'dC��rfd�dDdE�Z+e%e&j'dF��r|dGdH�Z,e%e&j'dI��r�dJdK�Z-e.�r�dLdM�Z/dNdO�Z0dPdQ�Z1e%e&j'dR��r�dSdT�Z2d�dUdV�Z3d�dWdX�Z4edYdZ��Z5ed[d\��Z6e7j8d]d^�d_d`��Z9dadb�Z:d�ddde�Z;e%e&j'df��r0d�dhdi�Z<djdk�Z=d�dmdn�Z>e7j8dod^�d�dpdq��Z?e �rfdrds�Z@dtdu�ZAdvdw�ZBdxdy�ZCdzd{�ZDd|d}�ZEd�d~d�ZFdS)�rTaRepresents an OS process with the given PID.
    If PID is omitted current process PID (os.getpid()) is used.
    Raise NoSuchProcess if PID does not exist.

    Note that most of the methods of this class do not make sure that
    the PID of the process being queried has been reused. That means
    that you may end up retrieving information for another process.

    The only exceptions for which process identity is pre-emptively
    checked and guaranteed are:

     - parent()
     - children()
     - nice() (set)
     - ionice() (set)
     - rlimit() (set)
     - cpu_affinity (set)
     - suspend()
     - resume()
     - send_signal()
     - terminate()
     - kill()

    To prevent this problem for all other methods you can use
    is_running() before querying the process.
    NcCs|�|�dS�N)�_init)�selfr~rlrlrp�__init__>szProcess.__init__FcCsR|durt��}nrts4t|ttf�s4d|}t|��|dkrLd|}t|��ztj	�
|�Wn&ty�d|}t||d��Yn0||_
d|_d|_d|_d|_d|_d|_t��|_d|_t�|�|_d|_d|_t|_|jdf|_z|� �|_WnVt!�y
YnDt"�yYn2t�yL|�sBd}t||d��nd|_Yn0dS)	Nzpid must be an integer (got %r)rz'pid must be a positive integer (got %s)z!process PID out of range (got %s)��msgFzprocess PID not foundT)#�os�getpid�_PY3�
isinstancermr9�	TypeError�
ValueErrorr{�cextZcheck_pid_range�
OverflowErrorr/�_pid�_name�_exe�_create_time�_gone�_pid_reused�_hash�	threading�RLock�_lock�_ppidrT�_proc�_last_sys_cpu_times�_last_proc_cpu_times�	_SENTINEL�	_exitcoder~�_ident�
_get_identr-r1)r�r~�_ignore_nspr�rlrlrpr�AsL

z
Process._initcCs2tr |jjdd�|_|j|jfS|j|��fSdS)aReturn a (pid, uid) tuple which is supposed to identify a
        Process instance univocally over time. The PID alone is not
        enough, as it can be assigned to a new process after this one
        terminates, so we add process creation time to the mix. We need
        this in order to prevent killing the wrong process later on.
        This is also known as PID reuse or PID recycling problem.

        The reliability of this strategy mostly depends on
        create_time() precision, which is 0.01 secs on Linux. The
        assumption is that, after a process terminates, the kernel
        won't reuse the same PID after such a short period of time
        (0.01 secs). Technically this is inherently racy, but
        practically it should be good enough.
        T)Z	fast_onlyN)r,r��create_timer�r~�r�rlrlrpr�tszProcess._get_identc	Cst��}|j|d<|jr"|j|d<|����|jr<d|d<n`z|��|d<|��|d<WnBtyrd|d<Yn*t	y�d|d<Ynt
y�Yn0|jtdfvr�|j|d<|j
dur�t|j
�|d<d	|jj|jjd
�dd�|��D��fWd�S1�s0YdS)
Nr~�namezterminated + PID reused�statusZzombieZ
terminated�exitcode�startedz	%s.%s(%s)�, cSsg|]\}}d||f�qS)z%s=%rrl)rn�k�vrlrlrprq�rrz#Process.__str__.<locals>.<listcomp>)�collections�OrderedDictr~r��oneshotr�r�r�r1r/r-r�r�r�r��	__class__�
__module__�__name__�join�items)r��inforlrlrp�__str__�s2





�zProcess.__str__cCsht|t�stStstr\|j\}}|j\}}||kr\|r\|s\z|��tkWStyZYn0|j|jkSr�)	r�rT�NotImplementedrrr�r�r*r.)r��otherZpid1Zident1Zpid2Zident2rlrlrp�__eq__�s


zProcess.__eq__cCs
||kSr�rl)r�r�rlrlrp�__ne__�szProcess.__ne__cCs|jdurt|j�|_|jSr�)r��hashr�r�rlrlrp�__hash__�s
zProcess.__hash__cCs.|js|��s*|jr*d}t|j|j|d��dS)z9Raises NoSuchProcess in case process PID has been reused.z4process no longer exists and its PID has been reusedr�N)r��
is_runningr/r~r�)r�r�rlrlrp�_raise_if_pid_reused�szProcess._raise_if_pid_reusedcCs|jS)zThe process PID.)r�r�rlrlrpr~�szProcess.pidc
cs|j��t|d�rdVn�z�|j�|�|j�|�|j�|�trP|j�|�|j�	�dVW|j�
|�|j�
|�|j�
|�tr�|j�
|�|j��n@|j�
|�|j�
|�|j�
|�tr�|j�
|�|j��0Wd�n1s�0YdS)a#Utility context manager which considerably speeds up the
        retrieval of multiple process information at the same time.

        Internally different process info (e.g. name, ppid, uids,
        gids, ...) may be fetched by using the same routine, but
        only one information is returned and the others are discarded.
        When using this context manager the internal routine is
        executed once (in the example below on name()) and the
        other info are cached.

        The cache is cleared when exiting the context manager block.
        The advice is to use this every time you retrieve more than
        one information about the process. If you're lucky, you'll
        get a hell of a speedup.

        >>> import psutil
        >>> p = psutil.Process()
        >>> with p.oneshot():
        ...     p.name()  # collect multiple info
        ...     p.cpu_times()  # return cached value
        ...     p.cpu_percent()  # return cached value
        ...     p.create_time()  # return cached value
        ...
        >>>
        �_cacheN)r��hasattrr\Zcache_activate�memory_infor|r�uidsr�Z
oneshot_enterZcache_deactivateZoneshot_exitr�rlrlrpr��s.

�zProcess.oneshotcCst}|durvt|ttttf�s2dt|�}t|��t|�}||}|rvdt|�dkrXdndd�	t
t|��f}t|��i}|p�|}|�
��||D]f}z$|dkr�|j}	nt||�}
|
�}	Wn4ttfy�|}	Ynty�|r�Yq�Yn0|	||<q�Wd�n1�s0Y|S)	a�Utility method returning process information as a
        hashable dictionary.
        If *attrs* is specified it must be a list of strings
        reflecting available Process class' attribute names
        (e.g. ['cpu_times', 'name']) else all public (read
        only) attributes are assumed.
        *ad_value* is the value which gets assigned in case
        AccessDenied or ZombieProcess exception is raised when
        retrieving that particular process information.
        Nzinvalid attrs type %szinvalid attr name%s %sr�srur�r~)�_as_dict_attrnamesr��list�tuple�set�	frozenset�typer��lenr��map�reprr�r�r~�getattrr-r1�NotImplementedError)r��attrs�ad_valueZvalid_namesr��
invalid_namesZretdict�lsr�r}�methrlrlrp�as_dict!s<�



*zProcess.as_dictcCsrtdurtnt�d}|j|kr$dS|��}|durn|��}zt|�}|��|krX|WSWntylYn0dS)z�Return the parent process as a Process object pre-emptively
        checking whether PID has been reused.
        If no parent is known return None.
        Nr)�_LOWEST_PIDrWr~r|r�rTr/)r�Z
lowest_pidr|�ctime�parentrlrlrpr�Ps

zProcess.parentcCs,g}|��}|dur(|�|�|��}q|S)z�Return the parents of this process as a list of Process
        instances. If no parents are known return an empty list.
        N)r��append)r��parents�procrlrlrpr�cs

zProcess.parentscCst|js|jrdSz2|t|j�k|_|jr>t�|j�t|j��WdStyVYdStynd|_YdS0dS)z�Return whether this process is running.

        It also checks if PID has been reused by another process, in
        which case it will remove the process from `process_iter()`
        internal cache and return False.
        FTN)r�r�rTr~�_pids_reused�addr/r1r�rlrlrpr�ns
zProcess.is_runningcCs2|��tr|j��S|jp$|j��|_|jSdS)z`The process parent PID.
        On Windows the return value is cached after first call.
        N)r�rr�r|r�r�rlrlrpr|�s

zProcess.ppidc	Cs�tr|jdur|jS|j��}trtt|�dkrtz|��}WnttfyPYn$0|rtt	j
�|d�}|�|�rt|}||_||j_|S)z>The process name. The return value is cached after first call.N�r)
r,r�r�r�rr��cmdliner-r1r��path�basename�
startswith)r�r�r�Z
extended_namerlrlrpr��s

zProcess.namec
s��fdd�}�jdur�z�j��}Wn.tyR}z||d�WYd}~Sd}~00|szz||d�}WntyxYn0|�_�jS)z�The process executable as an absolute path.
        May also be an empty string.
        The return value is cached after first call.
        csd���}|rRttd�rRttd�rR|d}tj�|�rRtj�|�rRt�|tj�rR|St|t	�r`|�|S)N�access�X_OKr)
r�r�r�r��isabs�isfiler�r�r�r-)�fallbackr��exer�rlrp�guess_it�s
�
��
zProcess.exe.<locals>.guess_itN)r�)r�r�r�r-)r�r�r��errrlr�rpr��s
 zProcess.execCs
|j��S)z3The command line this process has been called with.)r�r�r�rlrlrpr��szProcess.cmdlinecCs(z|j��WSty"tYS0dS)z2The process current status as a STATUS_* constant.N)r�r�r1r*r�rlrlrpr��szProcess.statuscCs\trNtdurd}t|��|��j}zt�|�jWStyJt|�YS0n
|j	�
�SdS)ztThe name of the user that owns the process.
        On UNIX this is calculated by using *real* process uid.
        Nz0requires pwd module shipped with standard python)r�pwd�ImportErrorr��real�getpwuid�pw_name�KeyError�strr��username)r�r�Zreal_uidrlrlrpr��s
zProcess.usernamecCs|jdur|j��|_|jS)z�The process creation time as a floating point number
        expressed in seconds since the epoch.
        The return value is cached after first call.
        N)r�r�r�r�rlrlrpr�s
zProcess.create_timecCs
|j��S)z6Process current working directory as an absolute path.)r��cwdr�rlrlrpr�
szProcess.cwdcCs*|dur|j��S|��|j�|�dS)z'Get or set process niceness (priority).N)r�Znice_getr�Znice_set)r��valuerlrlrp�nices
zProcess.nicecCs
|j��S)zVReturn process UIDs as a (real, effective, saved)
            namedtuple.
            )r�r�r�rlrlrpr�szProcess.uidscCs
|j��S)zVReturn process GIDs as a (real, effective, saved)
            namedtuple.
            )r��gidsr�rlrlrpr�"szProcess.gidscCs
|j��S)zVThe terminal associated with this process, if any,
            else None.
            )r��terminalr�rlrlrpr�(szProcess.terminalcCs
|j��S)zcReturn the number of file descriptors opened by this
            process (POSIX only).
            )r��num_fdsr�rlrlrpr�.szProcess.num_fds�io_counterscCs
|j��S)a
Return process I/O statistics as a
            (read_count, write_count, read_bytes, write_bytes)
            namedtuple.
            Those are the number of read/write calls performed and the
            amount of bytes read and written by the process.
            )r�r�r�rlrlrpr�7szProcess.io_counters�
ionice_getcCs@|dur&|durd}t|��|j��S|��|j�||�SdS)a�Get or set process I/O niceness (priority).

            On Linux *ioclass* is one of the IOPRIO_CLASS_* constants.
            *value* is a number which goes from 0 to 7. The higher the
            value, the lower the I/O priority of the process.

            On Windows only *ioclass* is used and it can be set to 2
            (normal), 1 (low) or 0 (very low).

            Available on Linux and Windows > Vista only.
            Nz$'ioclass' argument must be specified)r�r�r�r�Z
ionice_set)r�Zioclassr�r�rlrlrp�ioniceCs
zProcess.ionicerjcCs|dur|��|j�||�S)a-Get or set process resource limits as a (soft, hard)
            tuple.

            *resource* is one of the RLIMIT_* constants.
            *limits* is supposed to be a (soft, hard) tuple.

            See "man prlimit" for further info.
            Available on Linux and FreeBSD only.
            N)r�r�rj)r��resourceZlimitsrlrlrprj[s
zProcess.rlimit�cpu_affinity_getcCsl|durtt|j����S|��|sTt|jd�r>|j��}nttt	t
dd����}|j�tt|���dS)a-Get or set process CPU affinity.
            If specified, *cpus* must be a list of CPUs for which you
            want to set the affinity (e.g. [0, 1]).
            If an empty list is passed, all egible CPUs are assumed
            (and set).
            (Windows, Linux and BSD only).
            N�_get_eligible_cpusT��percpu)
�sortedr�r�rr�r�rr��ranger�r\Zcpu_affinity_setr�)r�Zcpusrlrlrp�cpu_affinitylszProcess.cpu_affinity�cpu_numcCs
|j��S)aZReturn what CPU this process is currently running on.
            The returned number should be <= psutil.cpu_count()
            and <= len(psutil.cpu_percent(percpu=True)).
            It may be used in conjunction with
            psutil.cpu_percent(percpu=True) to observe the system
            workload distributed across CPUs.
            )r�rr�rlrlrpr�szProcess.cpu_num�environcCs
|j��S)z�The environment variables of the process as a dict.  Note: this
            might not reflect changes made after the process started.
            )r�rr�rlrlrpr�szProcess.environcCs
|j��S)z\Return the number of handles opened by this process
            (Windows only).
            )r��num_handlesr�rlrlrpr	�szProcess.num_handlescCs
|j��S)zkReturn the number of voluntary and involuntary context
        switches performed by this process.
        )r��num_ctx_switchesr�rlrlrpr
�szProcess.num_ctx_switchescCs
|j��S)z2Return the number of threads used by this process.)r��num_threadsr�rlrlrpr�szProcess.num_threads�threadscCs
|j��S)z�Return threads opened by process as a list of
            (id, user_time, system_time) namedtuples representing
            thread id and thread CPU times (user/system).
            On OpenBSD this method requires root access.
            )r�rr�rlrlrpr�szProcess.threadsc
Cs0|��t�}g}|sr|��D]P\}}||jkrz&t|�}|��|��krT|�|�WqttfylYq0qn�t	�
t�}|��D]\}}||�|�q�t�}|jg}	|	�r,|	�
�}||vr�q�|�|�||D]T}
z6t|
�}|��|��k}|�r|�|�|	�|
�Wq�ttf�y&Yq�0q�q�|S)u(Return the children of this process as a list of Process
        instances, pre-emptively checking whether PID has been reused.
        If *recursive* is True return all the parent descendants.

        Example (A == this process):

         A ─┐
            │
            ├─ B (child) ─┐
            │             └─ X (grandchild) ─┐
            │                                └─ Y (great grandchild)
            ├─ C (child)
            └─ D (child)

        >>> import psutil
        >>> p = psutil.Process()
        >>> p.children()
        B, C, D
        >>> p.children(recursive=True)
        B, X, Y, C, D

        Note that in the example above if process X disappears
        process Y won't be listed as the reference to process A
        is lost.
        )r�rr�r~rTr�r�r/r1r��defaultdictr�r��popr�)r��	recursiverzr}r~r|�childZreverse_ppid_map�seen�stackZ	child_pidZintimerlrlrp�children�s@





zProcess.childrenc
s|duo|dk}|dur0|dkr0d|}t|��t�p8d��fdd�}|rv|�}|j��}t�|�|�}|j��}n<|j}|j}|�}|j��}|dus�|dur�||_||_dS|j|j|j	|j	}	||}
||_||_z|	|
d}Wnt
�yYdS0|�}t|d�SdS)	aReturn a float representing the current process CPU
        utilization as a percentage.

        When *interval* is 0.0 or None (default) compares process times
        to system CPU times elapsed since last call, returning
        immediately (non-blocking). That means that the first time
        this is called it will return a meaningful 0.0 value.

        When *interval* is > 0.0 compares process times to system CPU
        times elapsed before and after the interval (blocking).

        In this case is recommended for accuracy that this function
        be called with at least 0.1 seconds between calls.

        A value > 100.0 can be returned in case of processes running
        multiple threads on different CPU cores.

        The returned value is explicitly NOT split evenly between
        all available logical CPUs. This means that a busy loop process
        running on a system with 2 logical CPUs will be reported as
        having 100% CPU utilization instead of 50%.

        Examples:

          >>> import psutil
          >>> p = psutil.Process(os.getpid())
          >>> # blocking
          >>> p.cpu_percent(interval=1)
          2.0
          >>> # non-blocking (percentage since last call)
          >>> p.cpu_percent(interval=None)
          2.9
          >>>
        N�r�!interval is not positive (got %r)rcs
t��Sr�)�_timerrl��num_cpusrlrp�timer sz"Process.cpu_percent.<locals>.timer�d)r�r_r�r\r��sleepr�r��user�system�ZeroDivisionError�round)
r��interval�blockingr�rZst1Zpt1Zst2Zpt2Z
delta_procZ
delta_timeZoverall_cpus_percentZsingle_cpu_percentrlrrpr]�s<#



zProcess.cpu_percentcCs
|j��S)a%Return a (user, system, children_user, children_system)
        namedtuple representing the accumulated process time, in
        seconds.
        This is similar to os.times() but per-process.
        On macOS and Windows children_user and children_system are
        always set to 0.
        )r�r\r�rlrlrpr\Ts	zProcess.cpu_timescCs
|j��S)aReturn a namedtuple with variable fields depending on the
        platform, representing memory information about the process.

        The "portable" fields available on all platforms are `rss` and `vms`.

        All numbers are expressed in bytes.
        )r�r�r�rlrlrpr�_s	zProcess.memory_infor�)�replacementcCs|��Sr�)r�r�rlrlrp�memory_info_exjszProcess.memory_info_excCs
|j��S)a]This method returns the same information as memory_info(),
        plus, on some platform (Linux, macOS, Windows), also provides
        additional metrics (USS, PSS and swap).
        The additional metrics provide a better representation of actual
        process memory usage.

        Namely USS is the memory which is unique to a process and which
        would be freed if the process was terminated right now.

        It does so by passing through the whole process address.
        As such it usually requires higher user privileges than
        memory_info() and is considerably slower.
        )r��memory_full_infor�rlrlrpr$nszProcess.memory_full_info�rsscCs�ttjj�}||vr,d|t|�f}t|��|tjjvr>|jn|j}|�}t	||�}t
p^t�j}|dksxd|}t|��|t
|�dS)a�Compare process memory to total physical system memory and
        calculate process memory utilization as a percentage.
        *memtype* argument is a string that dictates what type of
        process memory you want to compare against (defaults to "rss").
        The list of available strings can be obtained like this:

        >>> psutil.Process().memory_info()._fields
        ('rss', 'vms', 'shared', 'text', 'lib', 'data', 'dirty', 'uss', 'pss')
        z&invalid memtype %r; valid types are %rrz`can't calculate process memory percent because total physical system memory is not positive (%r)r)r�r{Zpfullmem�_fieldsr�r�Zpmemr�r$r��
_TOTAL_PHYMEMrZ�total�float)r�ZmemtypeZvalid_typesr�ZfunZmetricsr�Ztotal_phymemrlrlrp�memory_percent~s*
�
��
��zProcess.memory_percent�memory_mapsTc	s�|j��}|r�i�|D]P}|d}|dd�}ztdd��||��|<Wqtyd|�|<Yq0qtj���fdd��D�Stj��fdd�|D�SdS)	a�Return process' mapped memory regions as a list of namedtuples
            whose fields are variable depending on the platform.

            If *grouped* is True the mapped regions with the same 'path'
            are grouped together and the different memory fields are summed.

            If *grouped* is False every mapped region is shown as a single
            entity and the namedtuple will also include the mapped region's
            address space ('addr') and permission set ('perms').
            ��NcSs||Sr�rl)rx�yrlrlrp�<lambda>�rrz%Process.memory_maps.<locals>.<lambda>cs g|]}�|g�|�R��qSrlrl)rnr���d�ntrlrprq�rrz'Process.memory_maps.<locals>.<listcomp>csg|]}�|��qSrlrlrw)r2rlrprq�rr)r�r+r�r�r{Z
pmmap_groupedZ	pmmap_ext)r�Zgrouped�itZtuplr��numsrlr0rpr+�s
zProcess.memory_mapscCs
|j��S)z�Return files opened by process as a list of
        (path, fd) namedtuples including the absolute file name
        and file descriptor number.
        )r��
open_filesr�rlrlrpr5�szProcess.open_files�inetcCs|j�|�S)aTReturn socket connections opened by process as a list of
        (fd, family, type, laddr, raddr, status) namedtuples.
        The *kind* parameter filters for connections that match the
        following criteria:

        +------------+----------------------------------------------------+
        | Kind Value | Connections using                                  |
        +------------+----------------------------------------------------+
        | inet       | IPv4 and IPv6                                      |
        | inet4      | IPv4                                               |
        | inet6      | IPv6                                               |
        | tcp        | TCP                                                |
        | tcp4       | TCP over IPv4                                      |
        | tcp6       | TCP over IPv6                                      |
        | udp        | UDP                                                |
        | udp4       | UDP over IPv4                                      |
        | udp6       | UDP over IPv6                                      |
        | unix       | UNIX socket (both UDP and TCP protocols)           |
        | all        | the sum of all the possible families and protocols |
        +------------+----------------------------------------------------+
        )r�rb�r��kindrlrlrprb�szProcess.net_connectionsrbcCs|j|d�S)N�r8)rbr7rlrlrp�connections�szProcess.connectionscCs�|jdkrJ|j��|��|jdkr2d}t|��zt�|j|�Wnfty�trtt|j�rtt|j|j	|j
��nd|_t|j|j	��Yn t
y�t|j|j	��Yn0dS)Nrz�preventing sending signal to process with PID 0 as it would affect every process in the process group of the calling process (os.getpid()) instead of PID 0T)r~r�r�r��killr7rrVr1r�r�r�r/r6r-�r��sigr�rlrlrp�_send_signal�s
�zProcess._send_signalcCsPtr|�|�n<|��|tjkr@|��s@d}t|j|j|d��|j	�
|�dS)z�Send a signal *sig* to process pre-emptively checking
        whether PID has been reused (see signal module constants) .
        On Windows only SIGTERM is valid and is treated as an alias
        for kill().
        zprocess no longer existsr�N)rr>r��signal�SIGTERMr�r/r~r�r��send_signalr<rlrlrprA�szProcess.send_signalcCs(tr|�tj�n|��|j��dS)z�Suspend process execution with SIGSTOP pre-emptively checking
        whether PID has been reused.
        On Windows this has the effect of suspending all process threads.
        N)rr>r?�SIGSTOPr�r��suspendr�rlrlrprC
szProcess.suspendcCs(tr|�tj�n|��|j��dS)z�Resume process execution with SIGCONT pre-emptively checking
        whether PID has been reused.
        On Windows this has the effect of resuming all process threads.
        N)rr>r?�SIGCONTr�r��resumer�rlrlrprEszProcess.resumecCs(tr|�tj�n|��|j��dS)z�Terminate the process with SIGTERM pre-emptively checking
        whether PID has been reused.
        On Windows this is an alias for kill().
        N)rr>r?r@r�r�r;r�rlrlrp�	terminate#szProcess.terminatecCs(tr|�tj�n|��|j��dS)zjKill the current process with SIGKILL pre-emptively checking
        whether PID has been reused.
        N)rr>r?�SIGKILLr�r�r;r�rlrlrpr;.szProcess.killcCs@|dur|dksd}t|��|jtur,|jS|j�|�|_|jS)a�Wait for process to terminate and, if process is a children
        of os.getpid(), also return its exit code, else None.
        On Windows there's no such limitation (exit code is always
        returned).

        If the process is already terminated immediately return None
        instead of raising NoSuchProcess.

        If *timeout* (in seconds) is specified and process is still
        alive raise TimeoutExpired.

        To wait for multiple Process(es) use psutil.wait_procs().
        Nrz"timeout must be a positive integer)r�r�r�r��wait)r��timeoutr�rlrlrprH8s
zProcess.wait)N)F)NN)N)NN)N)N)F)N)r%)T)r6)r6)N)Gr�r��__qualname__�__doc__r�r�r�r��__repr__r�r�r�r��propertyr~�
contextlib�contextmanagerr�r�r�r�r�r3r|r�r�r�r�r�r�r�r�rr�r�r�r�r�r{rTr�r�rjrrrr,r	r
rrrr]r\r�rZdeprecated_methodr#r$r*r+r5rbr:r>rArCrErFr;rHrlrlrlrprT"s�

3

B
/
*	






F
]






$



rTcCs"g|]}|�d�s|dvr|�qS)�_>r�r�r:rjrEr#r�r�rArHrr�rCr;rF)r�rwrlrlrprqRs�csJeZdZdZdd�Zdd�Zdd�Zdd	�Zd
d�Zd�fd
d�	Z	�Z
S)rUaSame as subprocess.Popen, but in addition it provides all
    psutil.Process methods in a single class.
    For the following methods which are common to both classes, psutil
    implementation takes precedence:

    * send_signal()
    * terminate()
    * kill()

    This is done in order to avoid killing another process in case its
    PID has been reused, fixing BPO-6973.

      >>> import psutil
      >>> from subprocess import PIPE
      >>> p = psutil.Popen(["python", "-c", "print 'hi'"], stdout=PIPE)
      >>> p.name()
      'python'
      >>> p.uids()
      user(real=1000, effective=1000, saved=1000)
      >>> p.username()
      'giampaolo'
      >>> p.communicate()
      ('hi', None)
      >>> p.terminate()
      >>> p.wait(timeout=2)
      0
      >>>
    cOs(tj|i|��|_|j|jjdd�dS)NT)r�)�
subprocessrU�_Popen__subprocr�r~�r��args�kwargsrlrlrpr�|szPopen.__init__cCstttt�ttj���Sr�)rr��dirrUrQr�rlrlrp�__dir__�sz
Popen.__dir__cCst|jd�r|j��|S)N�	__enter__)r�rRrXr�rlrlrprX�s
zPopen.__enter__cOsjt|jd�r|jj|i|��S|jr.|j��|jr>|j��z|jrP|j��W|��n
|��0dS)N�__exit__)r�rRrY�stdout�close�stderr�stdinrHrSrlrlrprY�s

zPopen.__exit__cCsfzt�||�WSty`zt�|j|�WYStyZd|jj|f}t|��Yn0Yn0dS)Nz!%s instance has no attribute '%s')�object�__getattribute__�AttributeErrorrRr�r�)r�r�r�rlrlrpr_�s�zPopen.__getattribute__Ncs0|jjdur|jjStt|��|�}||j_|Sr�)rR�
returncode�superrUrH)r�rIr}�r�rlrprH�s
z
Popen.wait)N)r�r�rJrKr�rWrXrYr_rH�
__classcell__rlrlrcrprU^s
rUcCstt���}|da|S)z&Return a list of current running PIDs.r)rr{rWr��r}rlrlrprW�srWcCs0|dkrdS|dkr"tr"|t�vSt�|�SdS)z�Return True if given PID exists in the current process list.
    This is faster than doing "pid in psutil.pids()" and
    should be preferred.
    rFN)rrWr{rV�r~rlrlrprV�s

rVc	#s�fdd�}�fdd�}t���tt��}t����}||}||}|D]}||�qJtrzt��}td|�||�qXz�tt	��
��t	t�|��
���}	|	D]V\}}
z2|
dur�||�}
|dur�|
j
||d�|
_|
VWq�ty�||�Yq�0q�W�an�a0dS)a�Return a generator yielding a Process instance for all
    running processes.

    Every new Process instance is only created once and then cached
    into an internal table which is updated every time this is used.
    Cache can optionally be cleared via `process_iter.clear_cache()`.

    The sorting order in which processes are yielded is based on
    their PIDs.

    *attrs* and *ad_value* have the same meaning as in
    Process.as_dict(). If *attrs* is specified as_dict() is called
    and the resulting dict is stored as a 'info' attribute attached
    to returned Process instance.
    If *attrs* is an empty list it will retrieve all process info
    (slow).
    cst|�}|�|j<|Sr�)rTr~)r~r��Zpmaprlrpr��s
zprocess_iter.<locals>.addcs��|d�dSr�)rrfrgrlrp�remove�szprocess_iter.<locals>.removez-refreshing Process instance for reused PID %sN)r�r�)�_pmap�copyr�rW�keysr�rr2rr�r��dict�fromkeysr�r�r/)r�r�r�rh�a�bZnew_pidsZ	gone_pidsr~r�r�rlrgrprX�s2


"
rXcCst��Sr�)ri�clearrlrlrlrpr/	rrr/z$Clear process_iter() internal cache.c	s��fdd�}|dur.|dks.d|}t|��t��t|�}�dur\t��s\d�}t|��|durnt�|}|r�|dur�|dkr�q�|D]J}dt|�}|dur�t|t�|�}|dkr�q�|||�q�|||�q�|�}qn|r�|D]}||d�q�|�}t��t|�fS)a,Convenience function which waits for a list of processes to
    terminate.

    Return a (gone, alive) tuple indicating which processes
    are gone and which ones are still alive.

    The gone ones will have a new *returncode* attribute indicating
    process exit status (may be None).

    *callback* is a function which gets called every time a process
    terminates (a Process instance is passed as callback argument).

    Function will return as soon as all processes terminate or when
    *timeout* occurs.
    Differently from Process.wait() it will not raise TimeoutExpired if
    *timeout* occurs.

    Typical use case is:

     - send SIGTERM to a list of processes
     - give them some time to terminate
     - send SIGKILL to those ones which are still alive

    Example:

    >>> def on_terminate(proc):
    ...     print("process {} terminated".format(proc))
    ...
    >>> for p in procs:
    ...    p.terminate()
    ...
    >>> gone, alive = wait_procs(procs, timeout=3, callback=on_terminate)
    >>> for p in alive:
    ...     p.kill()
    cshz|j|d�}Wn"ty"YnBty2Yn20|dusD|��sd||_��|��durd�|�dS)N)rI)rHr0�_SubprocessTimeoutExpiredr�rar�)r�rIra��callback�gonerlrp�
check_gone2s
zwait_procs.<locals>.check_goneNrz*timeout must be a positive integer, got %szcallback %r is not a callableg�?)r�r��callabler�rr��minr�)	ZprocsrIrsrur��alive�deadliner�Zmax_timeoutrlrrrprY
s8%

rYTcCs.|rt��}nt��}|dur*|dkr*d}|S)azReturn the number of logical CPUs in the system (same as
    os.cpu_count() in Python 3.4).

    If *logical* is False return the number of physical cores only
    (e.g. hyper thread CPUs are excluded).

    Return None if undetermined.

    The return value is cached after first call.
    If desired cache can be cleared like this:

    >>> psutil.cpu_count.cache_clear()
    Nr)r{Zcpu_count_logicalZcpu_count_cores)�logicalr}rlrlrpr_os
r_FcCs|st��St��SdS)a�Return system-wide CPU times as a namedtuple.
    Every CPU time represents the seconds the CPU has spent in the
    given mode. The namedtuple's fields availability varies depending on the
    platform:

     - user
     - system
     - idle
     - nice (UNIX)
     - iowait (Linux)
     - irq (Linux, FreeBSD)
     - softirq (Linux)
     - steal (Linux >= 2.6.11)
     - guest (Linux >= 2.6.24)
     - guest_nice (Linux >= 3.2.0)

    When *percpu* is True return a list of namedtuples for each CPU.
    First element of the list refers to first CPU, second element
    to second CPU and so on.
    The order of the list is consistent across calls.
    N)r{r\Z
per_cpu_timesrrlrlrpr\�sr\rcCs0t|�}tr,|t|dd�8}|t|dd�8}|S)zWGiven a cpu_time() ntuple calculates the total CPU time
    (including idle time).
    ZguestrZ
guest_nice)�sumrr�)�timesZtotrlrlrp�
_cpu_tot_time�s
	r}cCs&t|�}||j8}|t|dd�8}|S)zlGiven a cpu_time() ntuple calculates the busy CPU time.
    We do so by subtracting all idle CPU times.
    Ziowaitr)r}�idler�)r|�busyrlrlrp�_cpu_busy_time�s
r�cCs\|j|jksJ||f��g}tjjD],}t||�t||�}td|�}|�|�q$tj|�S)Nr)r&r{�	scputimesr��maxr�)�t1�t2Zfield_deltas�field�field_deltarlrlrp�_cpu_times_deltas�s
r�c
Cs�t��j}|duo|dk}|dur:|dkr:d|}t|��dd�}|s�|r\t�}t�|�nt�|�pjt�}t�t|<||t|�Sg}|r�tdd�}t�|�nt	�|�p�tdd�}tdd�t	|<t
|t	|�D]\}}	|�|||	��q�|SdS)	a�Return a float representing the current system-wide CPU
    utilization as a percentage.

    When *interval* is > 0.0 compares system CPU times elapsed before
    and after the interval (blocking).

    When *interval* is 0.0 or None compares system CPU times elapsed
    since last call or module import, returning immediately (non
    blocking). That means the first time this is called it will
    return a meaningless 0.0 value which you should ignore.
    In this case is recommended for accuracy that this function be
    called with at least 0.1 seconds between calls.

    When *percpu* is True returns a list of floats representing the
    utilization as a percentage for each CPU.
    First element of the list refers to first CPU, second element
    to second CPU and so on.
    The order of the list is consistent across calls.

    Examples:

      >>> # blocking, system-wide
      >>> psutil.cpu_percent(interval=1)
      2.0
      >>>
      >>> # blocking, per-cpu
      >>> psutil.cpu_percent(interval=1, percpu=True)
      [2.0, 1.0]
      >>>
      >>> # non-blocking (percentage since last call)
      >>> psutil.cpu_percent(interval=None)
      2.9
      >>>
    NrrrcSsNt||�}t|�}t|�}z||d}Wnty>YdS0t|d�SdS)Nrrr)r�r}r�rr)r�r��times_delta�	all_deltaZ
busy_deltaZ	busy_percrlrlrp�	calculates
zcpu_percent.<locals>.calculateTr)r��current_thread�identr�r\r�r�_last_cpu_times�get�_last_per_cpu_times�zipr��
r r�tidr!r�r�r�r}Ztot1r�rlrlrpr]�s,#



r]c
Cs�t��j}|duo|dk}|dur:|dkr:d|}t|��dd�}|s�|r\t�}t�|�nt�|�pjt�}t�t|<||t|�Sg}|r�tdd�}t�|�nt	�|�p�tdd�}tdd�t	|<t
|t	|�D]\}}	|�|||	��q�|SdS)	a�Same as cpu_percent() but provides utilization percentages
    for each specific CPU time as is returned by cpu_times().
    For instance, on Linux we'll get:

      >>> cpu_times_percent()
      cpupercent(user=4.8, nice=0.0, system=4.8, idle=90.5, iowait=0.0,
                 irq=0.0, softirq=0.0, steal=0.0, guest=0.0, guest_nice=0.0)
      >>>

    *interval* and *percpu* arguments have the same meaning as in
    cpu_percent().
    NrrrcSsdg}t||�}t|�}dtd|�}|D]0}||}t|d�}ttd|�d�}|�|�q(tj|�S)NgY@rr)r�r}r�rrwr�r{r�)r�r�r4r�r�Zscaler�Z
field_percrlrlrpr�Ns

z$cpu_times_percent.<locals>.calculateTr)r�r�r�r�r\r�r�_last_cpu_times_2r��_last_per_cpu_times_2r�r�r�rlrlrpr^;s,


r^cCst��S)zReturn CPU statistics.)r{r`rlrlrlrpr`vsr`�cpu_freqcCs�t��}|r|Stt|��}|dkr(dS|dkr8|dSd\}}}d}|D]6}||j7}trl|jdurld}qJ||j7}||j7}qJ||}|r�d}	}
n||}	||}
t�	||	|
�SdS)a:Return CPU frequency as a namedtuple including current,
        min and max frequency expressed in Mhz.

        If *percpu* is True and the system supports per-cpu frequency
        retrieval (Linux only) a list of frequencies is returned for
        each CPU. If not a list with one element is returned.
        rNr)rrrFT)
r{r�r)r��currentrrwr�rZscpufreq)rr}rZcurrsZminsZmaxsZset_none�cpur�Zmin_Zmax_rlrlrpr�}s.



�
getloadavgcCst��}|ja|S)a�Return statistics about system memory usage as a namedtuple
    including the following fields, expressed in bytes:

     - total:
       total physical memory available.

     - available:
       the memory that can be given instantly to processes without the
       system going into swap.
       This is calculated by summing different memory values depending
       on the platform and it is supposed to be used to monitor actual
       memory usage in a cross platform fashion.

     - percent:
       the percentage usage calculated as (total - available) / total * 100

     - used:
        memory used, calculated differently depending on the platform and
        designed for informational purposes only:
        macOS: active + wired
        BSD: active + wired + cached
        Linux: total - free

     - free:
       memory not being used at all (zeroed) that is readily available;
       note that this doesn't reflect the actual memory available
       (use 'available' instead)

    Platform-specific fields:

     - active (UNIX):
       memory currently in use or very recently used, and so it is in RAM.

     - inactive (UNIX):
       memory that is marked as not used.

     - buffers (BSD, Linux):
       cache for things like file system metadata.

     - cached (BSD, macOS):
       cache for various things.

     - wired (macOS, BSD):
       memory that is marked to always stay in RAM. It is never moved to disk.

     - shared (BSD):
       memory that may be simultaneously accessed by multiple processes.

    The sum of 'used' and 'available' does not necessarily equal total.
    On Windows 'available' and 'free' are the same.
    )r{rZr(r'rerlrlrprZ�s5rZcCst��S)a�Return system swap memory statistics as a namedtuple including
    the following fields:

     - total:   total swap memory in bytes
     - used:    used swap memory in bytes
     - free:    free swap memory in bytes
     - percent: the percentage usage
     - sin:     no. of bytes the system has swapped in from disk (cumulative)
     - sout:    no. of bytes the system has swapped out from disk (cumulative)

    'sin' and 'sout' on Windows are meaningless and always set to 0.
    )r{r[rlrlrlrpr[�s
r[cCs
t�|�S)z�Return disk usage statistics about the given *path* as a
    namedtuple including total, used and free space expressed in bytes
    plus the percentage usage.
    )r{rg)r�rlrlrprgsrgcCs
t�|�S)a3Return mounted partitions as a list of
    (device, mountpoint, fstype, opts) namedtuple.
    'opts' field is a raw string separated by commas indicating mount
    options which may vary depending on the platform.

    If *all* parameter is False return physical devices only and ignore
    all others.
    )r{rf)�allrlrlrprfs	rfcCs�trt|d�ni}tjfi|��}|s2|r.iSdS|r@t|d�}ttdtj�}|rt|��D]\}}||�||<qZ|S|dd�t	|�
��D��SdS)a�Return system disk I/O statistics as a namedtuple including
    the following fields:

     - read_count:  number of reads
     - write_count: number of writes
     - read_bytes:  number of bytes read
     - write_bytes: number of bytes written
     - read_time:   time spent reading from disk (in ms)
     - write_time:  time spent writing to disk (in ms)

    Platform specific:

     - busy_time: (Linux, FreeBSD) time spent doing actual I/Os (in ms)
     - read_merged_count (Linux): number of merged reads
     - write_merged_count (Linux): number of merged writes

    If *perdisk* is True return the same information for every
    physical disk installed on the system as a dictionary
    with partition names as the keys and the namedtuple
    described above as the values.

    If *nowrap* is True it detects and adjust the numbers which overflow
    and wrap (restart from 0) and add "old value" to "new value" so that
    the returned numbers will always be increasing or remain the same,
    but never decrease.
    "disk_io_counters.cache_clear()" can be used to invalidate the
    cache.

    On recent Windows versions 'diskperf -y' command may need to be
    executed first otherwise this function won't find any disk.
    )�perdiskN�psutil.disk_io_counters�sdiskiocss|]}t|�VqdSr��r{rwrlrlrp�	<genexpr>Hrrz#disk_io_counters.<locals>.<genexpr>)rrlr{re�
_wrap_numbersr�rr�r�r��values)r��nowraprU�rawdictr2Zdisk�fieldsrlrlrpres 
rer�zClears nowrap argument cachecCsnt��}|s|riSdS|r&t|d�}|rN|��D]\}}tj|�||<q2|Stjdd�t|���D��SdS)acReturn network I/O statistics as a namedtuple including
    the following fields:

     - bytes_sent:   number of bytes sent
     - bytes_recv:   number of bytes received
     - packets_sent: number of packets sent
     - packets_recv: number of packets received
     - errin:        total number of errors while receiving
     - errout:       total number of errors while sending
     - dropin:       total number of incoming packets which were dropped
     - dropout:      total number of outgoing packets which were dropped
                     (always 0 on macOS and BSD)

    If *pernic* is True return the same information for every
    network interface installed on the system as a dictionary
    with network interface names as the keys and the namedtuple
    described above as the values.

    If *nowrap* is True it detects and adjust the numbers which overflow
    and wrap (restart from 0) and add "old value" to "new value" so that
    the returned numbers will always be increasing or remain the same,
    but never decrease.
    "net_io_counters.cache_clear()" can be used to invalidate the
    cache.
    N�psutil.net_io_counterscSsg|]}t|��qSrlr�rwrlrlrprqzrrz#net_io_counters.<locals>.<listcomp>)r{rar�r�rZsnetior�r�)Zpernicr�r�Znicr�rlrlrpraVs
rar�r6cCs
t�|�S)a�Return system-wide socket connections as a list of
    (fd, family, type, laddr, raddr, status, pid) namedtuples.
    In case of limited privileges 'fd' and 'pid' may be set to -1
    and None respectively.
    The *kind* parameter filters for connections that fit the
    following criteria:

    +------------+----------------------------------------------------+
    | Kind Value | Connections using                                  |
    +------------+----------------------------------------------------+
    | inet       | IPv4 and IPv6                                      |
    | inet4      | IPv4                                               |
    | inet6      | IPv6                                               |
    | tcp        | TCP                                                |
    | tcp4       | TCP over IPv4                                      |
    | tcp6       | TCP over IPv6                                      |
    | udp        | UDP                                                |
    | udp4       | UDP over IPv4                                      |
    | udp6       | UDP over IPv6                                      |
    | unix       | UNIX socket (both UDP and TCP protocols)           |
    | all        | the sum of all the possible families and protocols |
    +------------+----------------------------------------------------+

    On macOS this function requires root privileges.
    )r{rbr9rlrlrprb�srbc
Cs�t}|rddl}t��}|jdd�d�t�t�}|D]�\}}}}}}	|r�z|�|�}Wn@t	y�t
rz|dkrztj}nttd�r�|tjkr�tj}Yn0|tjkr�t
r�dnd	}
|�|
�d
kr�|d|
7}q�||�t�|||||	��q6t|�S)a*Return the addresses associated to each NIC (network interface
    card) installed on the system as a dictionary whose keys are the
    NIC names and value is a list of namedtuples for each address
    assigned to the NIC. Each namedtuple includes 5 fields:

     - family: can be either socket.AF_INET, socket.AF_INET6 or
               psutil.AF_LINK, which refers to a MAC address.
     - address: is the primary address and it is always set.
     - netmask: and 'broadcast' and 'ptp' may be None.
     - ptp: stands for "point to point" and references the
            destination address on a point to point interface
            (typically a VPN).
     - broadcast: and *ptp* are mutually exclusive.

    Note: you can have more than one address of the same family
    associated with each interface.
    rNcSs|dS)Nrrl)rxrlrlrpr/�rrznet_if_addrs.<locals>.<lambda>)�key���rS�:�-�z%s00)r��socketr{rc�sortr�r
r��
AddressFamilyr�r,rSr�r�countr�rZsnicaddrrl)Z	has_enumsr�Zrawlistr}r��fam�addr�mask�	broadcastZptp�	separatorrlrlrprc�s0
��
rccCst��S)aReturn information about each NIC (network interface card)
    installed on the system as a dictionary whose keys are the
    NIC names and value is a namedtuple with the following fields:

     - isup: whether the interface is up (bool)
     - duplex: can be either NIC_DUPLEX_FULL, NIC_DUPLEX_HALF or
               NIC_DUPLEX_UNKNOWN
     - speed: the NIC speed expressed in mega bits (MB); if it can't
              be determined (e.g. 'localhost') it will be set to 0.
     - mtu: the maximum transmission unit expressed in bytes.
    )r{rdrlrlrlrprd�srd�sensors_temperaturesc
	s��fdd�}t�t�}t��}|��D]l\}}|r&|�d�\}}}}	||�}||�}||	�}	|rj|	sj|}	n|	rv|sv|	}||�t�	||||	��q.q&t
|�S)a<Return hardware temperatures. Each entry is a namedtuple
        representing a certain hardware sensor (it may be a CPU, an
        hard disk or something else, depending on the OS and its
        configuration).
        All temperatures are expressed in celsius unless *fahrenheit*
        is set to True.
        cs(|dur$�r t|�dddS|SdS)N�	r�� )r))�n��
fahrenheitrlrp�convert�sz%sensors_temperatures.<locals>.convertr)r�r
r�r{r�r�rr�rZshwtemprl)
r�r�r}r�r�r��labelr��high�criticalrlr�rpr��s"	
��sensors_fanscCst��S)z�Return fans speed. Each entry is a namedtuple
        representing a certain hardware sensor.
        All speeds are expressed in RPM (rounds per minute).
        )r{r�rlrlrlrpr�	s�sensors_batterycCst��S)a�Return battery information. If no battery is installed
        returns None.

         - percent: battery power left as a percentage.
         - secsleft: a rough approximation of how many seconds are left
                     before the battery runs out of power. May be
                     POWER_TIME_UNKNOWN or POWER_TIME_UNLIMITED.
         - power_plugged: True if the AC power cable is connected.
        )r{r�rlrlrlrpr�	s
cCst��S)zAReturn the system boot time expressed in seconds since the epoch.)r{rirlrlrlrpri2	sricCst��S)a�Return users currently connected on the system as a list of
    namedtuples including the following fields.

     - user: the name of the user
     - terminal: the tty or pseudo-tty associated with the user, if any.
     - host: the host name associated with the entry, if any.
     - started: the creation time as a floating point number expressed in
       seconds since the epoch.
    )r{rhrlrlrlrprh9	s
rhcCst��S)zjReturn a generator yielding a WindowsService instance for all
        Windows services installed.
        )r{�win_service_iterrlrlrlrpr�M	sr�cCs
t�|�S)zjGet a Windows service by *name*.
        Raise NoSuchProcess if no service with such name exists.
        )r{�win_service_get)r�rlrlrpr�S	sr�cCs(ddl}t|�|j_tj�t|��dS)zZEnable or disable PSUTIL_DEBUG option, which prints debugging
    messages to stderr.
    rN)Zpsutil._common�boolrZPSUTIL_DEBUGr{r��	set_debug)r�Zpsutilrlrlrp�
_set_debug]	sr�cCs@ddlm}ddlm}tj��}d}gd�}t|d�t|dd�D�]�}|j	dr�tj�
|j	d�}|��|kr�|�d	�}q�|�d
�}nd}|j	dr�t�d
t�
t|j	d���}nd}|j	dp�d}|s�tr�z|��d}Wnty�Yn0|�rt�rd|v�r|�d�d}|dd�}|j	ddu�rF||j	dj�nd}	|j	ddu�rj||j	dj�nd}
|j	ddu�r�t|j	dd�nd}|j	d�r�t|j	d�nd}|j	d�r�d�|j	d�}
n
|j	d}
|j	d�r�|j	ddd�nd}||dd�|j	d||	|
|||||
f
}t|d|�d��qFdS)Nr)�bytes2human)�get_terminal_sizez)%-10s %5s %5s %7s %7s %5s %6s %6s %6s  %s)
r~r*r�r�r\r�r�r�r�r�)
�USERZPIDz%MEMZVSZZRSSZNICEZSTATUS�STARTZTIMEZCMDLINE)r�r�z%H:%Mz%b%drur\z%M:%Sr�r�\r�r�r*r�r�� r�r�r��
r~)rr��_compatr�r��date�today�printrXr�r�r�r��	localtimer{rr�r.r,�split�vmsr%rrmr�)r�r�Z	today_dayZtemplr��pr�Zcputimerr�r%Zmempr�r�r��linerlrlrp�testg	sr


�������
"
�r�r-�__main__)NN)NN)T)F)NF)NF)F)F)FT)FT)r6)F)�rK�
__future__rr�rNr��	functoolsr�r?rQ�sysr�r�r�r�rurrrrrr	r
rrr
rrrrrrrrrrrrrrrrrrr r!r"r#r$r%r&r'r(r)r*r+r,r-r.r/r0r1r2r3r4r�r�r5r�r6r7r8rqr9ZPROCFS_PATHr:r{r;r<r=r>r?Z_psutil_windowsr@rArBrCrDrErFrGrHrIrJrKrLrMrNrOrPr��platform�__all__�extendZ__extra__all__r�rTrk�globalsZ_globalsr�rVr��isupperr�r�rS�
__author__rRr�r�rQrr'r�r^r�rm�replacer�ryr�r�r�rvrzrr�r�r�rUrWrVrir�rX�cache_clearrYr_r\r�r�r��	Exceptionr�r}r�r�r]rjr�r�r^r`r�r�rZr[rgrfre�partialrarbrcrdr�r�r�rirhr�r�r�r�rorxr�rlrlrlrp�<module>s�
1��
��

7�
W
6

b


�

N
;
(

;

/�
'�
2
"




J
PKok\E��55(psutil/__pycache__/_psosx.cpython-39.pycnu�[���a

��?h?�@s�dZddlZddlZddlZddlmZddlmZddlmZddlm	Z
ddlmZdd	lm
Z
dd
lmZddlmZddlmZdd
lmZddlmZddlmZddlmZddlmZddlmZddlmZgZe��ZejZe
jeje
jej e
j!ej"e
j#ej$e
j%ej&e
j'ej(e
j)ej*e
j+ej,e
j-ej.e
j/ej0e
j1ej2e
j3ej4iZ5e
j6ej7e
j8ej9e
j:ej;e
j<ej=e
j>ej?iZ@eAdddddddddddd�ZBeAddddddddd�ZCedgd ��ZDed!gd"��ZEed#gd$��ZFed%eFjGd&�ZHd'd(�ZId)d*�ZJd+d,�ZKd-d.�ZLd/d0�ZMd1d2�ZNd3d4�ZOd5d6�ZPejQZQe
jRZRdMd8d9�ZSd:d;�ZTe
jUZUejVZVdNd=d>�ZWd?d@�ZXdAdB�ZYdCdD�ZZdEdF�Z[ej\Z\dGdH�Z]dIdJ�Z^GdKdL�dL�Z_dS)OzmacOS platform implementation.�N)�
namedtuple�)�_common)�_psposix)�_psutil_osx)�
_psutil_posix)�AccessDenied)�
NoSuchProcess)�
ZombieProcess)�	conn_tmap)�conn_to_ntuple)�
isfile_strict)�memoize_when_activated)�parse_environ_block)�
usage_percent)�PermissionError)�ProcessLookupError��������	�
)�ppid�ruid�euid�suid�rgid�egid�sgid�ttynr�ctime�status�name)�cpuutime�cpustime�rss�vms�pfaults�pageins�
numthreads�volctxsw�	scputimes��user�nice�system�idle�svmem)�total�	available�percent�used�free�active�inactive�wired�pmem)r)r*r+r,�pfullmem)�ussc		CsTt��\}}}}}}||}||}||8}t|||dd�}t||||||||�S)z&System virtual memory as a namedtuple.r�Zround_)�cextZvirtual_memrr5)	r6r;r<r=r:ZspeculativeZavailr9r8�rC�9/usr/local/lib64/python3.9/site-packages/psutil/_psosx.py�virtual_memoryqsrEcCs4t��\}}}}}t||dd�}t�||||||�S)z=Swap system memory as a (total, used, free, sin, sout) tuple.rrA)rBZswap_memrrZsswap)r6r9r:�sinZsoutr8rCrCrD�swap_memory�srGcCst��\}}}}t||||�S)z(Return system CPU times as a namedtuple.)rB�	cpu_timesr/r0rCrCrDrH�srHcCs:g}t��D](}|\}}}}t||||�}|�|�q|S)z)Return system CPU times as a named tuple.)rB�
per_cpu_timesr/�append)�retZcpu_tr1r2r3r4�itemrCrCrDrI�srIcCst��S)z0Return the number of logical CPUs in the system.)rB�cpu_count_logicalrCrCrCrDrM�srMcCst��S)z-Return the number of CPU cores in the system.)rB�cpu_count_coresrCrCrCrDrN�srNcCs"t��\}}}}}t�||||�S�N)rB�	cpu_statsrZ	scpustats)Zctx_switchesZ
interruptsZsoft_interruptsZsyscallsZ_trapsrCrCrDrP�s
��rPcCst��\}}}t�|||�gS)z�Return CPU frequency.
    On macOS per-cpu frequency is not supported.
    Also, the returned frequency never changes, see:
    https://arstechnica.com/civis/viewtopic.php?f=19&t=465002.
    )rB�cpu_freqrZscpufreq)�currZmin_Zmax_rCrCrDrQ�srQFc	Csjg}t��}|D]T}|\}}}}|dkr,d}|sJtj�|�rtj�|�sJqt�||||�}|�|�q|S)z8Return mounted disk partitions as a list of namedtuples.�none�)	rB�disk_partitions�os�path�isabs�existsrZ	sdiskpartrJ)	�all�retlistZ
partitions�	partitionZdeviceZ
mountpointZfstype�opts�ntuplerCrCrDrU�srUcCsbzt��\}}}Wnty&YdS0|dk}|r<tj}n|dkrLtj}n|d}t�|||�S)zReturn battery information.Nr����<)rB�sensors_battery�NotImplementedErrorrZPOWER_TIME_UNLIMITEDZPOWER_TIME_UNKNOWNZsbattery)r8ZminsleftZ
power_pluggedZsecsleftrCrCrDra�sra�inetc	Csjg}t�D]Z}zt|��|�}Wnty6Yq
Yq
0|r
|D]"}t|�|g}|�tj|��q@q
|S)z System-wide network connections.)�pids�Process�net_connectionsr	�listrJrZsconn)�kindrK�pidZcons�crCrCrDrf�s

rfc
Cs�t���}i}|D]�}z&t�|�}t�|�}t�|�\}}Wn2tyn}z|jtjkrZ�WYd}~qd}~00t	t
d�r�t
�|�}d�|�}d|v}	t
�
|	||||�||<q|S)z)Get NIC stats (isup, duplex, speed, mtu).N�	NicDuplex�,�running)�net_io_counters�keys�
cext_posixZ
net_if_mtuZnet_if_flagsZnet_if_duplex_speed�OSError�errnoZENODEV�hasattrrrk�joinZ	snicstats)
�namesrKr&Zmtu�flagsZduplex�speed�errZoutput_flagsZisuprCrCrD�net_if_statss$






�
rycCst��S)z:The system boot time expressed in seconds since the epoch.)rB�	boot_timerCrCrCrDrz!srzc	Cs\g}t��}|D]F}|\}}}}}|dkr,q|s2qt�||p>d|pDd||�}|�|�q|S)z:Return currently connected users as a list of namedtuples.�~N)rB�usersrZsuserrJ)	r[�rawlistrLr1�tty�hostnameZtstampri�ntrCrCrDr|&sr|cCs`t��}d|vr\ztd���|�dd�Wn.ty>YntyZ|�dd�Yn0|S�Nr)rBrdre�create_time�insertr	r)ZlsrCrCrDrd:srdcCs8zt�|�td}|tjkWSty2YdS0dS)Nr%F)rB�proc_kinfo_oneshot�kinfo_proc_map�SZOMBrq)ri�strCrCrD�	is_zombieMs
r�cst����fdd��}|S)z`Decorator which translates bare OSError exceptions into
    NoSuchProcess and AccessDenied.
    cszz�|g|�Ri|��WStyVt|j�rDt|j|j|j��nt|j|j��Yn tytt|j|j��Yn0dSrO)	rr�rir
�_name�_ppidr	rr)�self�args�kwargs��funrCrD�wrapperZs
z wrap_exceptions.<locals>.wrapper)�	functools�wraps)r�r�rCr�rD�wrap_exceptionsUsr�c@sheZdZdZgd�Zdd�Zeedd���Zeedd���Z	d	d
�Z
dd�Zed
d��Zedd��Z
edd��Zedd��Zedd��Zedd��Zedd��Zedd��Zedd��Zedd ��Zed!d"��Zed#d$��Zed%d&��Zed'd(��Zed)d*��Zed+d,��Zed=d.d/��Zed0d1��Zed>d3d4��Zed5d6��Zed7d8��Z ed9d:��Z!ed;d<��Z"d2S)?rez1Wrapper class around underlying C implementation.)�_cacher�r�ricCs||_d|_d|_dSrO)rir�r�)r�rirCrCrD�__init__nszProcess.__init__cCs$t�|j�}t|�tt�ks J�|SrO)rBr�ri�lenr��r�rKrCrCrD�_get_kinfo_procsszProcess._get_kinfo_proccCs$t�|j�}t|�tt�ks J�|SrO)rBZproc_pidtaskinfo_oneshotrir��pidtaskinfo_mapr�rCrCrD�_get_pidtaskinfo{szProcess._get_pidtaskinfocCs|j�|�|j�|�dSrO)r�Zcache_activater��r�rCrCrD�
oneshot_enter�szProcess.oneshot_entercCs|j�|�|j�|�dSrO)r�Zcache_deactivater�r�rCrCrD�oneshot_exit�szProcess.oneshot_exitcCs(|��td}|dur|St�|j�S)Nr&)r�r�rBZ	proc_nameri)r�r&rCrCrDr&�szProcess.namecCst�|j�SrO)rBZproc_exerir�rCrCrD�exe�szProcess.execCst�|j�SrO)rBZproc_cmdlinerir�rCrCrD�cmdline�szProcess.cmdlinecCstt�|j��SrO)rrBZproc_environrir�rCrCrD�environ�szProcess.environcCs|��td|_|jS)Nr)r�r�r�r�rCrCrDr�szProcess.ppidcCst�|j�SrO)rBZproc_cwdrir�rCrCrD�cwd�szProcess.cwdcCs.|��}t�|td|td|td�S)Nrrr�r�rZpuidsr��r�ZrawtuplerCrCrD�uids�s


�zProcess.uidscCs.|��}t�|td|td|td�S)Nr r!r"r�r�rCrCrD�gids�s


�zProcess.gidscCs<|��td}t��}z
||WSty6YdS0dS)Nr#)r�r�rZget_terminal_map�KeyError)r�Ztty_nrZtmaprCrCrD�terminal�s
zProcess.terminalcCs6|��}t|td|td|td|td�S)Nr)r*r+r,)r�r>r�r�rCrCrD�memory_info�s



�zProcess.memory_infocCs"|��}t�|j�}t||f�SrO)r�rBZproc_memory_ussrir?)r�Z	basic_memr@rCrCrD�memory_full_info�szProcess.memory_full_infocCs(|��}t�|td|tddd�S)Nr'r(g)r�rZ	pcputimesr�r�rCrCrDrH�s

�zProcess.cpu_timescCs|��tdS)Nr$)r�r�r�rCrCrDr��szProcess.create_timecCs|��td}t�|d�S)Nr.r)r�r�rZpctxsw)r�ZvolrCrCrD�num_ctx_switches�szProcess.num_ctx_switchescCs|��tdS)Nr-)r�r�r�rCrCrD�num_threads�szProcess.num_threadscCsN|jdkrgSg}t�|j�}|D]&\}}t|�r"t�||�}|�|�q"|Sr�)rirBZproc_open_filesr
rZ	popenfilerJ)r��filesr}rW�fdr^rCrCrD�
open_files�s
zProcess.open_filesrcc	Cs�|tvr(td|d�dd�tD��f��t|\}}t�|j||�}g}|D]2}|\}}}	}
}}t|||	|
||t�}
|�|
�qL|S)Nz+invalid %r kind argument; choose between %sz, cSsg|]}t|��qSrC)�repr)�.0�xrCrCrD�
<listcomp>��z+Process.net_connections.<locals>.<listcomp>)	r�
ValueErrorrtrBZproc_net_connectionsrir�TCP_STATUSESrJ)r�rhZfamilies�typesr}rKrLr��fam�type�laddr�raddrr%r�rCrCrDrf�s ���zProcess.net_connectionscCs|jdkrdSt�|j�Sr�)rirBZproc_num_fdsr�rCrCrD�num_fds	s
zProcess.num_fdsNcCst�|j||j�SrO)rZwait_pidrir�)r��timeoutrCrCrD�waitszProcess.waitcCst�|j�SrO)rp�getpriorityrir�rCrCrD�nice_getszProcess.nice_getcCst�|j|�SrO)rp�setpriorityri)r��valuerCrCrD�nice_setszProcess.nice_setcCs|��td}t�|d�S)Nr%�?)r�r��
PROC_STATUSES�get)r��coderCrCrDr%szProcess.statuscCs<t�|j�}g}|D]"\}}}t�|||�}|�|�q|SrO)rBZproc_threadsrirZpthreadrJ)r�r}r[�	thread_id�utimeZstimer^rCrCrD�threads!szProcess.threads)rc)N)#�__name__�
__module__�__qualname__�__doc__�	__slots__r�r�rr�r�r�r�r&r�r�r�rr�r�r�r�r�r�rHr�r�r�r�rfr�r�r�r�r%r�rCrCrCrDreisr









	










re)F)rc)`r�rrr�rV�collectionsrrTrrrrBrrprr	r
rrr
rrrZ_compatrrZ__extra__all__ZgetpagesizeZPAGESIZEZAF_LINKZTCPS_ESTABLISHEDZCONN_ESTABLISHEDZ
TCPS_SYN_SENTZ
CONN_SYN_SENTZTCPS_SYN_RECEIVEDZ
CONN_SYN_RECVZTCPS_FIN_WAIT_1ZCONN_FIN_WAIT1ZTCPS_FIN_WAIT_2ZCONN_FIN_WAIT2ZTCPS_TIME_WAITZCONN_TIME_WAITZTCPS_CLOSEDZ
CONN_CLOSEZTCPS_CLOSE_WAITZCONN_CLOSE_WAITZ
TCPS_LAST_ACKZ
CONN_LAST_ACKZTCPS_LISTENZCONN_LISTENZTCPS_CLOSINGZCONN_CLOSINGZPSUTIL_CONN_NONEZ	CONN_NONEr�ZSIDLZSTATUS_IDLEZSRUNZSTATUS_RUNNINGZSSLEEPZSTATUS_SLEEPINGZSSTOPZSTATUS_STOPPEDr�Z
STATUS_ZOMBIEr��dictr�r�r/r5r>�_fieldsr?rErGrHrIrMrNrPrQ�
disk_usageZdisk_io_countersrUrarnZnet_if_addrsrfryrzr|rdZ
pid_existsr�r�rerCrCrCrD�<module>s������	
	

PKok\�DB�g[g[)psutil/__pycache__/_common.cpython-39.pycnu�[���a

��?h+t�
@shdZddlmZddlmZddlZddlZddlZddlZddlZddl	Z	ddl
Z
ddlZddlZddl
Z
ddlmZddl	mZddl	mZddl	mZzdd	l	mZWney�dZYn0zdd
l	mZWney�dZYn0ejddkZe�r
ddlZndZee�d��Ze�Zgd
�ZejdkZejdkZ ej!�"d�Z#ej!�"d�Z$e$Z%ej!�"d�Z&ej!�"d�Z'ej!�"d�Z(e&�p�e'�p�e(Z)ej!�"d�Z*ej!�"d�Z+dZ,dZ-dZ.dZ/dZ0dZ1dZ2dZ3dZ4d Z5d!Z6d"Z7d#Z8d$Z9d%Z:d&Z;d'Z<d(Z=d)Z>d*Z?d+Z@d,ZAd-ZBd.ZCd/ZDd0ZEedu�r&d1ZFd2ZGdZHn Gd3d4�d4ejI�ZJeK��LeJjM�edu�rZd5ZNd6ZOn Gd7d8�d8ejI�ZPeK��LePjM�e�Q�ZRe�s�d9ZSn0ze�T�ZSWn"eU�y�e�r�d:nd9ZSYn0ed;gd<��ZVed=gd>��ZWed?gd@��ZXedAgdB��ZYedCgdD��ZZedEgdF��Z[edGgdH��Z\edIgdJ��Z]edKgdL��Z^edMgdN��Z_edOgdP��Z`edQgdR��ZaedSgdT��ZbedUdVdWg�ZcedXgdY��ZdedZd[d\g�Zeed]gd^��Zfed_gd`��Zgedagd`��Zhedbgdc��Ziedddedfg�Zjedgdhdig�Zkedjgdk��Zledldmdng�Zmeeegeegfeegegfegegfeegegfegegfeegeegfegeegfegeegfdo�Znedu�r�en�Legegfegegfdp��edu�r�en�Ldqegeegfi�Gdrds�dseo�ZpGdtdu�duep�ZqGdvdw�dweq�ZrGdxdy�dyep�ZsGdzd{�d{ep�Zte�rBeuevew��r.evd|Zxn
eyevd|�Zxexd}�nd~d�Zzd�d�d��Z{d�d��Z|d�d��Z}d�d��Z~d�d��Ze|d�d���Z�d�d��Z�d�d��Z�d�d��Z�d�d�d��Z�d�d��Z�Gd�d��d��Z�d�d��Z�e��Z�e�j�e�_�e�j�e�_�d�Z�d�d��Z�d�d��Z�ee�fd�d��Z�efd�d��Z�d�d�d��Z�d�d��Z�e�r&d�d��Z�nd�d��Z�e|ej�fd�d���Z�d�d�d��Z�dd�ej�fd�d��Z�d�d��Z�dS)�z9Common objects shared by __init__.py and _ps*.py modules.�)�division)�print_functionN)�
namedtuple)�AF_INET)�
SOCK_DGRAM)�SOCK_STREAM)�AF_INET6)�AF_UNIX��PSUTIL_DEBUG)R�FREEBSD�BSD�LINUX�NETBSD�OPENBSD�MACOS�OSX�POSIX�SUNOS�WINDOWS�
CONN_CLOSE�CONN_CLOSE_WAIT�CONN_CLOSING�CONN_ESTABLISHED�CONN_FIN_WAIT1�CONN_FIN_WAIT2�
CONN_LAST_ACK�CONN_LISTEN�	CONN_NONE�
CONN_SYN_RECV�
CONN_SYN_SENT�CONN_TIME_WAIT�NIC_DUPLEX_FULL�NIC_DUPLEX_HALF�NIC_DUPLEX_UNKNOWN�STATUS_DEAD�STATUS_DISK_SLEEP�STATUS_IDLE�
STATUS_LOCKED�STATUS_RUNNING�STATUS_SLEEPING�STATUS_STOPPED�STATUS_SUSPENDED�STATUS_TRACING_STOP�STATUS_WAITING�STATUS_WAKE_KILL�
STATUS_WAKING�
STATUS_ZOMBIE�
STATUS_PARKED�ENCODING�
ENCODING_ERRSr�pconn�	pcputimes�pctxsw�pgids�pio�pionice�	popenfile�pthread�puids�sconn�	scpustats�sdiskio�	sdiskpart�
sdiskusage�snetio�snicaddr�	snicstats�sswap�suser�	conn_tmap�deprecated_method�
isfile_strict�memoize�parse_environ_block�path_exists_strict�
usage_percent�
supports_ipv6�sockfam_to_enum�socktype_to_enum�wrap_numbers�	open_text�open_binary�cat�bcat�bytes2human�conn_to_ntuple�debug�hilite�term_supports_colors�print_color�posix�nt�linux�darwin)ZfreebsdZmidnightbsdZopenbsdZnetbsd)�sunos�solaris�aix�runningZsleepingz
disk-sleep�stoppedztracing-stopZzombieZdeadz	wake-killZwaking�idle�lockedZwaitingZ	suspendedZparkedZESTABLISHEDZSYN_SENTZSYN_RECVZ	FIN_WAIT1Z	FIN_WAIT2Z	TIME_WAITZCLOSEZ
CLOSE_WAITZLAST_ACKZLISTENZCLOSING�NONE��c@seZdZdZdZdZdS)�	NicDuplexrirjrN)�__name__�
__module__�__qualname__r"r#r$�roro�:/usr/local/lib64/python3.9/site-packages/psutil/_common.pyrk�srk������c@seZdZdZdZdS)�BatteryTimerqrrN)rlrmrn�POWER_TIME_UNKNOWN�POWER_TIME_UNLIMITEDrorororprs�srs�replace�surrogateescaperF)�total�used�free�percent�sinZsoutrB)rxryrzr{r@)�
read_count�write_count�
read_bytes�write_bytesZ	read_timeZ
write_timerA)ZdeviceZ
mountpointZfstype�optsrC)Z
bytes_sentZ
bytes_recvZpackets_sentZpackets_recvZerrinZerroutZdropinZdropoutrG)�nameZterminal�host�started�pidr>)�fd�family�type�laddr�raddr�statusr�rD)r��address�netmask�	broadcastZptprE)ZisupZduplex�speedZmtu�flagsr?)Zctx_switchesZ
interruptsZsoft_interruptsZsyscalls�scpufreq)�current�min�max�shwtemp)�labelr��high�critical�sbattery)r{ZsecsleftZ
power_plugged�sfanr�r�r6)�user�system�
children_user�children_systemr;�pathr�r<)�idZ	user_timeZsystem_timer=)�realZ	effectiveZsavedr8r9)r}r~rr�r:Zioclass�valuer7Z	voluntaryZinvoluntaryr5)r�r�r�r�r�r��addr�ip�port)�allZtcpZtcp4ZudpZudp4ZinetZinet4Zinet6)Ztcp6Zudp6�unixc@s,eZdZdZdZdd�Zdd�Zdd�Zd	S)
�ErrorzQBase exception class. All other psutil exceptions inherit
    from this one.
    �psutilcCsHt��}|D]6}t||d�}|r*|||<q|dkr|dkr|||<q|S)Nr�r)�collections�OrderedDict�getattr)�self�attrs�infor�r�rororp�	_infodict$s

zError._infodictcCsP|�d�}|r,dd�dd�|��D��}nd}d�dd�t|dd	�|fD��S)
N)r��ppidr�z(%s)�, cSsg|]\}}d||f�qS�z%s=%rro��.0�k�vrororp�
<listcomp>3�z!Error.__str__.<locals>.<listcomp>� cSsg|]}|r|�qSroro)r��xrororpr�7r��msg�)r��join�itemsr��r�r��detailsrororp�__str__.s
�z
Error.__str__cCs2|�d�}d�dd�|��D��}d|jj|fS)N)r�r�r��secondsr�r�cSsg|]\}}d||f�qSr�ror�rororpr�<r�z"Error.__repr__.<locals>.<listcomp>z
psutil.%s(%s))r�r�r��	__class__rlr�rororp�__repr__9s
zError.__repr__N)rlrmrn�__doc__r�r�r�rorororpr�s

r�c@s&eZdZdZdZddd�Zdd�ZdS)	�
NoSuchProcesszXException raised when a process with a certain PID doesn't
    or no longer exists.
    r�NcCs$t�|�||_||_|pd|_dS)Nzprocess no longer exists�r��__init__r�r�r��r�r�r�r�rororpr�Gs
zNoSuchProcess.__init__cCs|j|j|j|jffS�N�r�r�r�r��r�rororp�
__reduce__MszNoSuchProcess.__reduce__)NN�rlrmrnr�r�r�rorororpr�@s
r�c@s&eZdZdZdZddd�Zdd�ZdS)	�
ZombieProcessa1Exception raised when querying a zombie process. This is
    raised on macOS, BSD and Solaris only, and not always: depending
    on the query the OS may be able to succeed anyway.
    On Linux all zombie processes are querable (hence this is never
    raised). Windows doesn't have zombie processes.
    r�NcCs$t�||||�||_|pd|_dS)Nz"PID still exists but it's a zombie)r�r�r�r�)r�r�r�r�r�rororpr�[szZombieProcess.__init__cCs|j|j|j|j|jffSr�)r�r�r�r�r�r�rororpr�`szZombieProcess.__reduce__)NNNr�rorororpr�Qs
r�c@s&eZdZdZdZddd�Zdd�ZdS)	�AccessDeniedz@Exception raised when permission to perform an action is denied.r�NcCs$t�|�||_||_|pd|_dS)Nr�r�r�rororpr�is
zAccessDenied.__init__cCs|j|j|j|jffSr�r�r�rororpr�oszAccessDenied.__reduce__)NNNr�rorororpr�ds
r�c@s&eZdZdZdZddd�Zdd�ZdS)	�TimeoutExpiredzWRaised on Process.wait(timeout) if timeout expires and process
    is still alive.
    r�NcCs*t�|�||_||_||_d||_dS)Nztimeout after %s seconds)r�r�r�r�r�r�)r�r�r�r�rororpr�zs

zTimeoutExpired.__init__cCs|j|j|j|jffSr�)r�r�r�r�r�rororpr��szTimeoutExpired.__reduce__)NNr�rorororpr�ss
r��execzvdef raise_from(value, from_value):
    try:
        raise value from from_value
    finally:
        value = None
    cCs|�dSr�ro)r��
from_valuerororp�
raise_from�sr�cCsDzt|�|d}Wnty(YdS0|dur<t||�}|SdS)z5Calculate percentage usage of 'used' against 'total'.�dgN)�float�ZeroDivisionError�round)ryrxZround_�retrororprN�s
rNcs2t�����fdd��}�fdd�}i�||_|S)a�A simple memoize decorator for functions supporting (hashable)
    positional arguments.
    It also provides a cache_clear() function for clearing the cache:

    >>> @memoize
    ... def foo()
    ...     return 1
        ...
    >>> foo()
    1
    >>> foo.cache_clear()
    >>>

    It supports:
     - functions
     - classes (acts as a @singleton)
     - staticmethods
     - classmethods

    It does NOT support:
     - methods
    c
s�|tt|����f}z
�|WSty~z�|i|��}�|<Wn.tyt}zt|d��WYd}~n
d}~00|YS0dSr�)�	frozenset�sortedr��KeyError�	Exceptionr�)�args�kwargs�keyr��err��cache�funrorp�wrapper�s
 zmemoize.<locals>.wrappercs���dS)zClear cache.N)�clearro)r�rorp�cache_clear�szmemoize.<locals>.cache_clear)�	functools�wrapsr�)r�r�r�ror�rprK�srKcs6t����fdd��}dd�}dd�}||_||_|S)a�A memoize decorator which is disabled by default. It can be
    activated and deactivated on request.
    For efficiency reasons it can be used only against class methods
    accepting no arguments.

    >>> class Foo:
    ...     @memoize
    ...     def foo()
    ...         print(1)
    ...
    >>> f = Foo()
    >>> # deactivated (default)
    >>> foo()
    1
    >>> foo()
    1
    >>>
    >>> # activated
    >>> foo.cache_activate(self)
    >>> foo()
    1
    >>> foo()
    >>> foo()
    >>>
    c
s�z|j�}Wn�ty^z�|�WYStyX}zt|d��WYd}~n
d}~00Ynpty�z�|�}Wn.ty�}zt|d��WYd}~n
d}~00z||j�<Wnty�Yn0Yn0|Sr�)�_cache�AttributeErrorr�r�r�)r�r�r��r�rorpr��s"$ z'memoize_when_activated.<locals>.wrappercSs
i|_dS)zsActivate cache. Expects a Process instance. Cache will be
        stored as a "_cache" instance attribute.
        N)r���procrororp�cache_activatesz.memoize_when_activated.<locals>.cache_activatecSs z|`WntyYn0dS)zDeactivate and clear cache.N)r�r�r�rororp�cache_deactivatesz0memoize_when_activated.<locals>.cache_deactivate)r�r�r�r�)r�r�r�r�ror�rp�memoize_when_activated�sr�c
CsZzt�|�}Wn:tyH}z"|jtjtjfvr2�WYd}~dSd}~00t�|j�SdS)z�Same as os.path.isfile() but does not swallow EACCES / EPERM
    exceptions, see:
    http://mail.python.org/pipermail/python-dev/2012-June/120787.html.
    NF)�os�stat�OSError�errno�EPERM�EACCES�S_ISREG�st_mode)r��str�rororprJsrJc
CsRzt�|�Wn:tyH}z"|jtjtjfvr2�WYd}~dSd}~00dSdS)z�Same as os.path.exists() but does not swallow EACCES / EPERM
    exceptions. See:
    http://mail.python.org/pipermail/python-dev/2012-June/120787.html.
    NFT)r�r�r�r�r�r�)r�r�rororprM-srMcCsvtjrtdurdSzHt�ttj�}t�|��|�d�Wd�n1sL0YWdStjypYdS0dS)z2Return True if IPv6 is supported on this platform.NF)z::1rT)�socket�has_ipv6rr�
contextlib�closing�bind�error)�sockrororprO<s(rOcCsvi}d}t}|�d|�}||kr"qr|�d||�}||krh|||�}||d|�}|r`|��}|||<|d}q|S)zCParse a C environ block of environment variables into a dictionary.r��=rj)r�find�upper)�datar��posZWINDOWS_Znext_posZ	equal_posr�r�rororprLJs
rLcCs4tdur|Szt�|�WSty.|YS0dS)z�Convert a numeric socket family value to an IntEnum member.
    If it's not a known member, return the numeric value itself.
    N)�enumr��
AddressFamily�
ValueError��numrororprPfsrPcCs4tdur|Szt�|�WSty.|YS0dS)zConvert a numeric socket type value to an IntEnum member.
    If it's not a known member, return the numeric value itself.
    N)r	r��
SocketKindrrrororprQssrQcCs�|tjtfvr&|rt|�}|r&t|�}|tjkrJ|ttfvrJ|�|t�}nt}t|�}t|�}|durxt	||||||�St
|||||||�SdS)z2Convert a raw connection tuple to a proper ntuple.N)r�rrr�r�getrrPrQr5r>)r��fam�type_r�r�r�Z
status_mapr�rororprX�srXcs�fdd�}|S)z�A decorator which can be used to mark a method as deprecated
    'replcement' is the method name which will be called instead.
    cs:d|j�f�|jdur�|_t�|���fdd��}|S)Nz8%s() is deprecated and will be removed; use %s() insteadcs$tj�tdd�t|��|i|��S)Nri)�category�
stacklevel)�warnings�warn�DeprecationWarningr�)r�r�r�)r��replacementrorp�inner�sz/deprecated_method.<locals>.outer.<locals>.inner)rlr�r�r�)r�r�r)r�rp�outer�s�
z deprecated_method.<locals>.outerro)rrrorrprI�srIc@sBeZdZdZdd�Zdd�Zdd�Zdd	�Zddd�Zd
d�Z	d
S)�_WrapNumberszNWatches numbers so that they don't overflow and wrap
    (reset to zero).
    cCs t��|_i|_i|_i|_dSr�)�	threading�Lock�lockr��	reminders�
reminder_keysr�rororpr��s
z_WrapNumbers.__init__cCsX||jvsJ�||jvsJ�||jvs*J�||j|<t�t�|j|<t�t�|j|<dSr�)r�rr r��defaultdict�int�set)r��
input_dictr�rororp�	_add_dict�s
z_WrapNumbers._add_dictcCs\|j|}t|���t|���}|D]0}|j||D]}|j||=q8|j||=q&dS)z�In case the number of keys changed between calls (e.g. a
        disk disappears) this removes the entry from self.reminders.
        N)r�r#�keysr r)r�r$r��old_dictZ	gone_keysZgone_key�remkeyrororp�_remove_dead_reminders�s
z#_WrapNumbers._remove_dead_remindersc
	Cs||jvr|�||�|S|�||�|j|}i}|D]�}||}z||}Wntyn|||<Yq8Yn0g}tt|��D]f}	||	}
||	}||	f}|
|kr�|j|||7<|j||�|�|�	|
|j||�q�t
|�||<q8||j|<|S)zlCache dict and sum numbers which overflow and wrap.
        Return an updated copy of `input_dict`.
        )r�r%r)r��range�lenrr �add�append�tuple)
r�r$r�r'Znew_dictr�Zinput_tupleZ	old_tuple�bits�iZinput_value�	old_valuer(rororp�run�s2



z_WrapNumbers.runNcCs||j�b|dur0|j��|j��|j��n*|j�|d�|j�|d�|j�|d�Wd�n1sn0YdS)z>Clear the internal cache, optionally only for function 'name'.N)rr�r�rr �pop)r�r�rororpr��s

z_WrapNumbers.cache_clearcCs:|j� |j|j|jfWd�S1s,0YdS)z5Return internal cache dicts as a tuple of 3 elements.N)rr�rr r�rororp�
cache_info�sz_WrapNumbers.cache_info)N)
rlrmrnr�r�r%r)r2r�r4rorororpr�s'
rcCs6tj�t�||�Wd�S1s(0YdS)z�Given an `input_dict` and a function `name`, adjust the numbers
    which "wrap" (restart from zero) across different calls by adding
    "old value" to "new value" and return an updated dict.
    N)�_wnrr2)r$r�rororprRsrRi�cCst|dtd�S)N�rb��	buffering)�open�FILE_READ_BUFFER_SIZE)�fnamerororprTsrTcCs\tst|td�St|tttd�}z
t|_Wn,ty<YntyV|���Yn0|S)z�On Python 3 opens a file in text mode by using fs encoding and
    a proper en/decoding errors handler.
    On Python 2 this is just an alias for open(name, 'rt').
    r7)r8�encoding�errors)	�PY3r9r:r3r4�_CHUNK_SIZEr�r��close)r;�fobjrororprS"s �
rSc	Cs�|tur:||��}|��Wd�S1s.0YnRz6||��}|��Wd�WS1sd0YWnttfy�|YS0dS)z�Read entire file content and return it as a string. File is
    opened in text mode. If specified, `fallback` is the value
    returned in case of error, either if the file does not exist or
    it can't be read().
    N)�_DEFAULT�read�IOErrorr�)r;�fallback�_open�frororprU@s
(
,rUcCst||td�S)z,Same as above but opens file in binary mode.)rErF)rUrT)r;rErororprVQsrV�%(value).1f%(symbol)scCs�d}i}t|dd��D]\}}d|dd>||<qt|dd��D]2}t|�||krFt|�||}|t�SqF|t|d|d�S)z�Used by various scripts. See: http://goo.gl/zeJZl.

    >>> bytes2human(10000)
    '9.8K'
    >>> bytes2human(100001221)
    '95.4M'
    )	�B�K�M�G�T�P�E�Z�YrjN�
r)�symbolr�)�	enumerate�reversed�absr��locals�dict)�n�format�symbols�prefixr0�srSr�rororprWVsrWcCstjdjS)z+Return updated psutil.PROCFS_PATH constant.r�)�sys�modulesZPROCFS_PATHrorororp�get_procfs_pathisr`cCs|jttd�S)N)r<r=)�decoder3r4�r]rororprapsracCs|Sr�rorbrororprauscCs^tjdkrdSz2ddl}|��s$J�|��|�d�dks>J�WntyTYdS0dSdS)Nr^Tr�colorsF)r�r��curses�isattyZ	setuptermZtigetnumr�)�filerdrororpr[~s
r[FcCs�t�s
|Sg}tddddddddd	d
�	}d|d<z||}Wn&tybtd
t|�����Yn0|�|�|r||�d�dd�|�|fS)z*Return an highlighted version of 'string'.Z34Z33Z30Z32Z37Z36Z91Z35Z93)	�blue�brownZdarkgrey�greenZgreyZ	lightblue�redZviolet�yellowZ29Nz#invalid color %r; choose between %s�1z[%sm%s�;)r[rXr�r�listr&r-r�)r]�color�bold�attrrcrororprZ�s2��


rZc	Cs
t�st||d�n�tr.tt|||�|d�n�ddl}d}|jjj}|jjj}t	ddddd�}||d<z||}Wn*t
y�td	|t|�
��f��Yn0|r�|dkr�|d
7}|tjur�dnd}	|j|_||	�}
||
|�zt||d�W||
|�n||
|�0dS)
z$Print a colorized version of string.�rfrN�ri��)rirjrhrkz#invalid color %r; choose between %r�i�i����)r[�printrrZ�ctypes�windllZKernel32�GetStdHandle�SetConsoleTextAttributerXr�rrnr&r^�stderr�c_ulong�restype)r]rorprfrxZ
DEFAULT_COLORrzr{rcZ	handle_id�handlerororpr\�s8
���

r\cCsntrjddl}|�|��j�\}}}}}t|t�rRt|ttt	f�rJd|}nd|}t
d|||ftjd�dS)z@If PSUTIL_DEBUG env var is set, print a debug message to stderr.rNzignoring %szignoring %rzpsutil-debug [%s:%s]> %srr)
r�inspect�getframeinfo�currentframe�f_back�
isinstancer�r�rD�EnvironmentErrorrwr^r|)r�r�r;�lineno�_�_lines�_indexrororprY�s�

�rY)N)N)rH)NF)�r��
__future__rrr�r�r�r�r�r�r�r^rrrrrrr�ImportErrorr	�version_infor>r	�bool�getenvr�objectrB�__all__r�rr�platform�
startswithrrrrrrr
rZAIXr)r*r&r+r-r1r%r/r0r'r(r.r,r2rr rrrr!rrrrrrr"r#r$�IntEnumrk�globals�update�__members__rtrurs�getfilesystemencodingr3r4�getfilesystemencodeerrorsr�rFrBr@rArCrGr>rDrEr?r�r�r�r�r6r;r<r=r8r9r:r7r5r�rHr�r�r�r�r�r�r��__builtins__rX�exec_r�r�rNrKr�rJrMrOrLrPrQrXrIrrRr5r�r4r:rTrSrUrVrWr`ra�stdoutr[rZr\rYrorororp�<module>sp


&

	

�������	

�


�
#



-G



W	

	
�
'PKok\]2�مG�G*psutil/__pycache__/_pssunos.cpython-39.pycnu�[���a

��?h�c�@s6dZddlZddlZddlZddlZddlZddlZddlmZddlm	Z	ddl
mZddl
mZddl
m
Zdd	l
mZdd
lmZddlmZddlmZdd
lmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlm Z ddlm!Z!gd�Z"e�#�Z$ej%Z%ej&dkZ'dZ(dZ)ej*ej+ej,ej-ej.ej/ej0ej1ej2ej3ej4ej-ej5ej6iZ7ej8ej9ej:ej;ej<ej=ej>ej?ej@ejAejBejCejDejEejFejGejHejIejJejKejLejMejNejOejPe(ejQe)iZReSddddd d!d"d#d$d%d&d'd(�ZTed)gd*��ZUed+gd,��ZVed-gd.��ZWed/d0d1g�ZXeXZYed2gd3��ZZed4d5d6�[eZj\��Z]d7d8�Z^d9d:�Z_d;d<�Z`d=d>�Zad?d@�ZbdAdB�ZcdCdD�ZdejeZeejfZfdYdFdG�ZgejhZhejiZidZdIdJ�ZjdKdL�ZkdMdN�ZldOdP�ZmdQdR�ZndSdT�ZodUdV�ZpGdWdX�dX�ZqdS)[z'Sun OS Solaris platform implementation.�N)�
namedtuple)�AF_INET�)�_common)�_psposix)�
_psutil_posix)�
_psutil_sunos)�AF_INET6)�AccessDenied)�
NoSuchProcess)�
ZombieProcess)�debug)�get_procfs_path)�
isfile_strict)�memoize_when_activated)�sockfam_to_enum)�socktype_to_enum)�
usage_percent)�PY3)�FileNotFoundError)�PermissionError)�ProcessLookupError)�b)�	CONN_IDLE�
CONN_BOUNDZPROCFS_PATHlZIDLEZBOUND��������	�
�)�ppid�rss�vms�create_time�nice�num_threads�status�ttynr�uid�euid�gid�egid�	scputimes)�user�system�idleZiowait�	pcputimes)r2r3�
children_user�children_system�svmem)�total�	available�percent�used�free�pmemr&r'�
pmmap_grouped)�pathr&Z	anonymous�locked�	pmmap_extzaddr perms � cCsFt�d�t}t�d�t}}||}t||dd�}t|||||�S)zReport virtual memory metrics.�
SC_PHYS_PAGES�SC_AVPHYS_PAGESr�Zround_)�os�sysconf�	PAGE_SIZErr8)r9r=Zavailr<r;�rJ�;/usr/local/lib64/python3.9/site-packages/psutil/_pssunos.py�virtual_memorys
rLc	Cst��\}}tjddtjdddgtjd�}|��\}}trL|�	t
jj�}|j
dkrdtd|j
��|���d	�d
d�}|s�d}t|��d}}|D]D}	|	��}	|	d
d�\}
}|tt|
�d�7}|tt|�d�7}q�||}t||d
d�}
t�||||
[Unrecoverable binary content: this span contained raw compiled-bytecode data from `site-packages.zip` — an ELF shared object (`netifaces.cpython-39-x86_64-linux-gnu.so`) and CPython 3.9 bytecode caches (`psutil/__pycache__/_psaix.cpython-39.pyc`, `psutil/__pycache__/_pslinux.cpython-39.pyc`) — not editable text.]
partitions�	partitionZdeviceZ
mountpoint�opts�ntupler?r?r@r�5s8
*
r�cCs`t�t�}t�d�}|�t�d��ttdd�|D���}t�d�}t�d�}|D]"}|�	d|�}||vrR|�
|�qR|D�]}z>|d}tt|��d	}t
j�t
j�|�d
�}t|���}	Wntttfy�YqzYn0t|ddd
�}
t|ddd
�}t|ddd
���}|
du�rDzt|
�d	}
Wnt�yBd}
Yn0|du�rxzt|�d	}Wnt�yvd}Yn0||	�
|||
|f�qz|�sXt�d�}tt|��}|D�]�}z<t
j�|d�}tt|��d	}t
j�|d�}t|���}	WnBtttf�y4}
z"t|
�WYd}
~
�q�WYd}
~
n
d}
~
00t�|d�}tdd�|D��}d}d}
|D]�}t
j�||d�}t|dd
���}|dk�r�tt
j�||d�dd
�}n$|dk�r�tt
j�||d�dd
�}
|
du�rzt|
�d	}
Wnt�yd}
Yn0|du�rbzt|�d	}Wnt�y8d}Yn0�qb||	�
d||
|f��q�t|�S)a�Return hardware (CPU and others) temperatures as a dict
    including hardware name, label, current, max and critical
    temperatures.

    Implementation notes:
    - /sys/class/hwmon looks like the most recent interface to
      retrieve this info, and this implementation relies on it
      only (old distros will probably use something else)
    - lm-sensors on Ubuntu 16.04 relies on /sys/class/hwmon
    - /sys/class/thermal/thermal_zone* is another one but it's more
      difficult to parse
    z/sys/class/hwmon/hwmon*/temp*_*z&/sys/class/hwmon/hwmon*/device/temp*_*cSsg|]}|�d�d�qS�r�r�r}r�r?r?r@r�tr�z(sensors_temperatures.<locals>.<listcomp>z5/sys/devices/platform/coretemp.*/hwmon/hwmon*/temp*_*z'/sys/devices/platform/coretemp.*/hwmon/z/sys/class/hwmon/�_inputg@�@r�Z_maxNrZ_crit�_labelrTz /sys/class/thermal/thermal_zone*�temp�typez/trip_point*cSs,g|]$}d�tj�|��d�dd���qS)r�rr:)r�r{rarr})r��pr?r?r@r��s��_type�critical�_temp�high)�collectionsrr)r��extend�sortedr�r�r��subr�r�rr{rar�r�rr�r�r�r�r�dict)r��	basenamesZ
basenames2�replr�Zaltname�baserar��	unit_namer�r��labelr�Z
trip_pathsZtrip_pointsZ
trip_pointZ	trip_typer?r?r@�sensors_temperaturesas�

�

	






$�
�
�


r�cCs�t�t�}t�d�}|s"t�d�}ttdd�|D���}|D]�}ztt|d��}Wn<tt	fy�}z t
|�WYd}~q<WYd}~n
d}~00ttj
�tj
�|�d����}t|dd	d
���}||�t�||��q<t|�S)a�Return hardware fans info (for CPU and other peripherals) as a
    dict including hardware label and current speed.

    Implementation notes:
    - /sys/class/hwmon looks like the most recent interface to
      retrieve this info, and this implementation relies on it
      only (old distros will probably use something else)
    - lm-sensors on Ubuntu 16.04 relies on /sys/class/hwmon
    z/sys/class/hwmon/hwmon*/fan*_*z%/sys/class/hwmon/hwmon*/device/fan*_*cSsg|]}|�d�d�qSr�r�r�r?r?r@r��r�z sensors_fans.<locals>.<listcomp>r�Nr�r�rTr)r�rr)r�r�r�r�rr�r�rrr{rar�r�r�r�rZsfanr�)r�r�r�r�r�r�r�r?r?r@�sensors_fans�s



"r�cs�t���fdd�}dd�t�t�D�}|s.dStj�tt|�d�}||d|d�}||d	|d
�}||d|d�}||d
�}|dur�|dur�zd||}Wq�ty�d}Yq�0n tt	|ddd��}|dkr�dSd}|tj�td�tj�td��}	|	du�r|	dk}n6t	|ddd��
���}
|
dk�r>d}n|
dv�rLd}|�rZtj
}nt|du�r�|du�r�zt||d�}Wnt�y�tj}Yn0n.|du�r�t|d�}|dk�r�tj}ntj}t�|||�S)aReturn battery information.
    Implementation note: it appears /sys/class/power_supply/BAT0/
    directory structure may vary and provide files with the same
    meaning but under different names, see:
    https://github.com/giampaolo/psutil/issues/966.
    c	sP|D]F}t|�d�}|�krzt|�WStyH|��YS0qdS)zvAttempt to read the content of multiple files which may
        not exist. If none of them exist return None.
        rN)rr�r�r�)rrar���nullr?r@�
multi_bcat�sz#sensors_battery.<locals>.multi_bcatcSs&g|]}|�d�sd|��vr|�qS)ZBATZbattery)r�r�r�r?r?r@r�s�z#sensors_battery.<locals>.<listcomp>Nrz/energy_nowz/charge_nowz
/power_nowz/current_nowz/energy_fullz/charge_fullz/time_to_empty_nowgY@r�z	/capacityr(rz
AC0/onlinez	AC/onlinerz/statusrTZdischargingF)Zcharging�fullTi�<)�objectr{r*�POWER_SUPPLY_PATHrar�r��ZeroDivisionErrorr�rr�r�rZPOWER_TIME_UNLIMITEDZPOWER_TIME_UNKNOWNZsbattery)r�Zbatsr�Z
energy_nowZ	power_nowZenergy_fullZ
time_to_emptyrQZ
power_pluggedZonlinerQZsecsleftr?r�r@�sensors_battery�sZ
��





r�c	CsHg}t��}|D]2}|\}}}}}t�||p.d|||�}|�|�q|S)z:Return currently connected users as a list of namedtuples.N)r��usersrZsuserr�)	r�Zrawlist�itemrr�tty�hostnameZtstampr��ntr?r?r@r�Usr�cCs�dt�}t|��^}|D]<}|�d�rt|����d�}|a|Wd�Sqtd|��Wd�n1sv0YdS)zAReturn the system boot time expressed in seconds since the epoch.r�sbtimerNzline 'btime' not found in %s)rrr�r�r�r}�	BOOT_TIMErH)rar�r�r�r?r?r@�	boot_time`s


r�cCsdd�t�tt���D�S)z7Returns a list of PIDs currently running on the system.cSsg|]}|��rt|��qSr?)r�r�r�r?r?r@r�tr�zpids.<locals>.<listcomp>)r{r*r"rr?r?r?r@r2rsr2c	Cs�t�|�sdSz�dt�|f}t|��\}|D]:}|�d�r,t|��d�}||kWd�WSq,td|��Wd�n1s�0YWn ttfy�|t	�vYS0dS)zcCheck for the existence of a unix PID. Linux TIDs are not
    supported (always return False).
    F�%s/%s/statussTgid:rNz'Tgid' line not found in %s)
r�
pid_existsrrr�r�r}r��EnvironmentErrorr2)r�rar�r�Ztgidr?r?r@r�ws



.r�c
Cs�i}t�}t�D]�}z<td||f��}|��}Wd�n1sD0YWnttfyfYq0|�d�}||dd���}t|d�}|||<q|S)zsObtain a {pid: ppid, ...} dict for all running processes in
    one shot. Used to speed up Process.children().
    �
%s/%s/statN�)r9r)	rr2rr�rr!rbr}r�)r�r�r�r�r��rparZdset�ppidr?r?r@�ppid_map�s
*

r�cst����fdd��}|S)zlDecorator which translates bare OSError and IOError exceptions
    into NoSuchProcess and AccessDenied.
    cs�z�|g|�Ri|��WSty8t|j|j��Ynhty^|��t|j|j��YnBty�|��tj	�
d|j|jf�s�t|j|j���Yn0dS)N�%s/%s)r r
r��_namer!�_raise_if_zombierrr{rarGr#)r$r��kwargs��funr?r@�wrapper�sz wrap_exceptions.<locals>.wrapper)�	functools�wraps)r�r�r?r�r@�wrap_exceptions�sr�c@s�eZdZdZgd�Zdd�Zdd�Zdd�Zd	d
�Ze	e
dd���Ze	e
d
d���Ze	e
dd���Z
dd�Zdd�Ze	dd��Ze	dd��Ze	dd��Ze	dd��Ze	dd��Zej�de���r�e	d d!��Ze	d"d#��Ze	d$d%��Ze	ddd'd(��Ze	d)d*��Ze	d+d,��Ze�se �rVd-d.�Z!e	e"�#d/�e"�#d0�e"�#d1�fd2d3��Z$e	d4d5��Z%neZ%e �rle	d6d7��Z&e	d8d9��Z'e	e"�#d:�fd;d<��Z(e	e"�#d=�fd>d?��Z)e	d@dA��Z*e	dBdC��Z+e	dDdE��Z,e-�r�e	dFdG��Z.e"�#dH�fdIdJ�Z/e	dKdL��Z0e1�re	dMdN��Z2e	dOdP��Z3e4d&u�r.e	dedQdR��Z5e	dSdT��Z6e	dUdV��Z7e	dfdXdY��Z8e	dZd[��Z9e	d\d]��Z:e	e"�#d^�fd_d`��Z;e	e"�#da�fdbdc��Z<d&S)g�ProcesszLinux process implementation.)�_cacher��_ppidr#r�cCs||_d|_d|_t�|_dSr1)r�r�r�rr#)r$r�r?r?r@r%�szProcess.__init__c	Cs\ztd|j|jf�}Wnttfy0YdS0|�d�}||d|d�}|dkSdS)Nr�Fr�r9r:�Z)rr#r�r�r�rb)r$r�r�rQr?r?r@�
_is_zombie�s
zProcess._is_zombiecCs|��rt|j|j|j��dSr1)r�rr�r�r��r$r?r?r@r��szProcess._raise_if_zombiecCst�d|j|jf�dS)z+Raise NSP if the process disappeared on us.r�N)r{r}r#r�r�r?r?r@�_raise_if_not_alive�szProcess._raise_if_not_alivecCs�td|j|jf�}|�d�}||�d�d|�}||dd���}i}||d<|d|d	<|d|d
<|d|d<|d
|d<|d|d<|d|d<|d|d<|d|d<|d|d<z|d|d<Wn"ty�td�d|d<Yn0|S)aYParse /proc/{pid}/stat file and return a dict with various
        process info.
        Using "man proc" as a reference: where "man proc" refers to
        position N always subtract 3 (e.g ppid position 4 in
        'man proc' == position 1 in here).
        The return value is cached in case oneshot() ctx manager is
        in use.
        r�r��(rr9Nr�rrQr�r��ttynr��utime��stime�
�children_utimerq�children_stime��create_time�$�cpu_num�'�blkio_ticksz&can't get blkio_ticks, set iowait to 0)rr#r�rbr�r}�
IndexErrorr)r$r�r�r�r�r�r?r?r@�_parse_stat_file�s*
zProcess._parse_stat_filecCs@td|j|jf��}|��Wd�S1s20YdS)z�Read /proc/{pid}/stat file and return its content.
        The return value is cached in case oneshot() ctx manager is
        in use.
        r�N)rr#r�r��r$r�r?r?r@�_read_status_fileszProcess._read_status_filecCsDtd|j|jf��}|����Wd�S1s60YdS)Nz%s/%s/smaps)rr#r�r�r�r�r?r?r@�_read_smaps_fileszProcess._read_smaps_filecCs(|j�|�|j�|�|j�|�dSr1)r�Zcache_activater�r�r�r?r?r@�
oneshot_enter szProcess.oneshot_entercCs(|j�|�|j�|�|j�|�dSr1)r�Zcache_deactivater�r�r�r?r?r@�oneshot_exit%szProcess.oneshot_exitcCs|��d}trt|�}|S)Nr�)r�rr)r$r�r?r?r@r�*szProcess.namec	CsZztd|j|jf�WSttfyT|��tj�d|j|jf�rNYdS�Yn0dS)Nz	%s/%s/exer�rT)	r|r#r�rr!r�r{ra�lexistsr�r?r?r@�exe2szProcess.execCs�td|j|jf��}|��}Wd�n1s20Y|sL|��gS|�d�rZdnd}|�|�rt|dd�}|�|�}|dkr�t|�dkr�d|vr�|�d�}|S)Nz
%s/%s/cmdlinerwror(r)rr#r�r�r�r~r}r�)r$r�r��sep�cmdliner?r?r@r�?s&


zProcess.cmdlinecCsDtd|j|jf��}|��}Wd�n1s20Yt|�S)Nz
%s/%s/environ)rr#r�r�r)r$r�r�r?r?r@�environYs&zProcess.environcCs<t|��d�}t��}z
||WSty6YdS0dS)Nr�)r�r�rZget_terminal_mapr�)r$Ztty_nrr"r?r?r@�terminal_s
zProcess.terminalz/proc/%s/ioc
Csd|j|jf}i}t|��\}|D]F}|��}|r"z|�d�\}}WntyZYq"Yq"0t|�||<q"Wd�n1s~0Y|s�td|��z,t|d|d|d|d|d|d	�WSt	�y}z$td
|j
d||f��WYd}~n
d}~00dS)Nz%s/%s/ios: z%s file was emptyssyscrssyscws
read_bytesswrite_bytessrcharswcharz1%r field was not found in %s; found fields are %rr)r#r�rr�r}r�r�rHrpr�r�)r$�fnamer�r�r�r�r�r�r?r?r@�io_countersks8

,���zProcess.io_counterscCsh|��}t|d�t}t|d�t}t|d�t}t|d�t}t|d�t}t|||||�S)Nr�r�r�r�r�)r�r�r�rq)r$r�r�r�r�r�rvr?r?r@r��szProcess.cpu_timescCst|��d�S)zWhat CPU the process is on.r��r�r�r�r?r?r@r��szProcess.cpu_numNcCst�|j||j�Sr1)rZwait_pidr�r�)r$�timeoutr?r?r@�wait�szProcess.waitcCs&t|��d�}tpt�}|t|S)Nr�)r�r�r�r�r�)r$�ctimeZbtr?r?r@r��s
zProcess.create_timec	Csttd|j|jf��<}dd�|����dd�D�\}}}}}}}Wd�n1sV0Yt|||||||�S)Nz%s/%s/statmcss|]}t|�tVqdSr1)r�r�r�r?r?r@�	<genexpr>�sz&Process.memory_info.<locals>.<genexpr>rS)rr#r�r�r}rf)	r$r�ZvmsrlrX�text�libr�Zdirtyr?r?r@�memory_info�s

�2zProcess.memory_infocCs�d}}}td�|j|j���|}|D]f}|�d�rN|t|��d�d7}q&|�d�rnt|��d�d}q&|�d�r&t|��d�d}q&Wd�n1s�0Y|||fS)Nrz{}/{}/smaps_rollupsPrivate_rr��Pss:�Swap:)rr	r#r�r�r�r})r$rhrirjr�r�r?r?r@�_parse_smaps_rollup�s	�


4zProcess._parse_smaps_rollups\nPrivate.*:\s+(\d+)s\nPss\:\s+(\d+)s\nSwap\:\s+(\d+)cCsZ|��}ttt|�|���d}ttt|�|���d}ttt|�|���d}|||fS)Nr�)r�r�rcr��findall)r$Z_private_reZ_pss_reZ_swap_reZ
smaps_datarhrirjr?r?r@�_parse_smaps�s
zProcess._parse_smapsc	Csftr>z|��\}}}WqLttfy:|��\}}}YqL0n|��\}}}|��}t||||f�Sr1)�HAS_PROC_SMAPS_ROLLUPr	r!rrrrg)r$rhrirjZ	basic_memr?r?r@�memory_full_info�szProcess.memory_full_infocCs^dd�}|��}|s |��gS|�d�}g}|�d�}|g}|||�D�]\}}|�dd�}z|\}	}
}}}
}Wn(ty�|dg\}	}
}}}
}Yn0|s�d}n2tr�t|�}|��}|�d	�r�t	|�s�|dd
�}t|	�t|
�||�
dd�|�
dd�|�
d
d�|�
dd�|�
dd�|�
dd�|�
dd�|�
dd�|�
dd�|�
dd�f
}|�|�qH|S)aQReturn process's mapped memory regions as a list of named
            tuples. Fields are explained in 'man proc'; here is an updated
            (Apr 2012) version: http://goo.gl/fmebo.

            /proc/{PID}/smaps does not exist on kernels < 2.6.14 or if
            CONFIG_MMU kernel configuration option is not enabled.
            c	ss�i}|D]�}|�dd�}|d�d�s@|��|fV|�|�qzt|d�d||d<Wqty�|d�d�r~Yqntd|��Yq0q|��|fVdS)N�rrrr�sVmFlags:z#don't know how to interpret line %r)r}r~�popr�r�r�r�)rd�
current_blockr�r�r�r?r?r@�
get_blockss"��z'Process.memory_maps.<locals>.get_blocks�
rNrrTz[anon]rxrysRss:sSize:rs
Shared_Clean:s
Shared_Dirty:sPrivate_Clean:sPrivate_Dirty:sReferenced:s
Anonymous:r)r�r�r}rr�rrr�r~rr�r�)r$rr�rdr��
first_liner�headerZhfieldsr@Zperms�_offset�_devZ_inoderar�r?r?r@�memory_mapssP


�









�zProcess.memory_mapscCstd|j|jf�S)Nz	%s/%s/cwd)r|r#r�r�r?r?r@�cwdRszProcess.cwdsctxt_switches:\t(\d+)cCsL|��}|�|�}|s,td|j|jf��nt�t|d�t|d��SdS)Nz�'voluntary_ctxt_switches' and 'nonvoluntary_ctxt_switches'lines were not found in %s/%s/status; the kernel is probably older than 2.6.23rr)r�r
rr#r�rZpctxswr�)r$Z	_ctxsw_rer�Zctxswr?r?r@�num_ctx_switchesVs

��zProcess.num_ctx_switchessThreads:\t(\d+)cCs|��}t|�|�d�S�Nr)r�r�r
)r$Z_num_threads_rer�r?r?r@�num_threadseszProcess.num_threadsc
Cst�d|j|jf�}|��g}d}|D]�}d|j|j|f}z8t|��}|����}Wd�n1sl0YWntt	fy�d}Yq*Yn0||�
d�dd�}|�d�}t|d�t
}	t|d	�t
}
t�t|�|	|
�}|�|�q*|�r|��|S)
Nz
%s/%s/taskFz%s/%s/task/%s/statTr�r9r�r�r�)r{r*r#r�rrr�r�rr!r�r}r�r�rZpthreadr�r�r�)r$Z
thread_idsr��
hit_enoent�	thread_idr�r��str�r�r�r�r?r?r@�threadsms2�
.

zProcess.threadscCst�|j�Sr1)rk�getpriorityr�r�r?r?r@�nice_get�szProcess.nice_getcCst�|j|�Sr1)rk�setpriorityr�)r$r�r?r?r@�nice_set�szProcess.nice_setcCst�|j�Sr1)r�r4r�r�r?r?r@�cpu_affinity_get�szProcess.cpu_affinity_getsCpus_allowed_list:\t(\d+)-(\d+)cCsV|��}|�|�}|r@ttt|dd�t|dd�d��Stttt����SdS)Nrr)r�r
r)�ranger�r�r�)r$�_rer�r�r?r?r@�_get_eligible_cpus�s

*zProcess._get_eligible_cpusc
Cs�zt�|j|�Wn�ttfy�}zxt|t�s<|jtjkr�|��}t	t
tt����}|D]4}||vrvtd||f��||vrZtd||f��qZ�WYd}~n
d}~00dS)Nz(invalid CPU number %r; choose between %sz0CPU number %r is not eligible; choose between %s)
r�Zproc_cpu_affinity_setr�r�r�rzr+r,r'�tupler%r�r�)r$r�r�Z
eligible_cpusZall_cpus�cpur?r?r@�cpu_affinity_set�s(����zProcess.cpu_affinity_setcCs,t�|j�\}}tdur t|�}t�||�Sr1)r�r3r��enumr;rZpionice)r$�ioclassr�r?r?r@�
ionice_get�szProcess.ionice_getcCsT|durd}|r(|ttfvr(td|��|dks8|dkrDd}t|��t�|j||�S)Nrz%r ioclass accepts no valuerSzvalue not in 0-7 range)r'r$r�r�Zproc_ioprio_setr�)r$r,r�r�r?r?r@�
ionice_set�szProcess.ionice_setc
Cs�|jdkrd}t|��zL|dur.t|j|�WSt|�dkrRddt|�}t|��t|j||�Wn:ty�}z"|jtjkr�|���WYd}~n
d}~00dS)Nrz)can't use prlimit() against PID 0 processr9z'second argument must be a (soft, hard) z
tuple, got %s)	r�r�r�r�rXr�r+ZENOSYSr�)r$r�r�r�r�r?r?r@�rlimit�s"

��zProcess.rlimitcCs$|��d}tr|��}t�|d�S)NrQ�?)r�rr�
PROC_STATUSESr�)r$�letterr?r?r@rQ�szProcess.statusc
Cs�g}t�d|j|jf�}d}|D�]b}d|j|j|f}zt|�}Wnzttfydd}Yq"Yq"ty�}zF|jtj	kr�WYd}~q"|jtj
kr�t|�WYd}~q"�WYd}~q"d}~00|�d�r"t
|�r"d|j|j|f}zXt|��:}t|����d�}	t|����dd�}
Wd�n1�s20YWnttf�yZd}Yq"0t|
�}t|t|�t|	�||
�}|�|�q"|�r�|��|S)	Nr&Fr'Tr�z%s/%s/fdinfo/%srr�)r{r*r#r�r|rr!r�r+r,r-rr�rrr�r�r}r�r`r�r�)
r$r�r�rrbrKrar�r��posrerdr�r?r?r@�
open_files�sJ
�
:
�zProcess.open_filesrcCst�||j�}|��|Sr1)r_r\r�r�)r$rZr�r?r?r@r`/	szProcess.net_connectionscCstt�d|j|jf��S)Nr&)r�r{r*r#r�r�r?r?r@�num_fds5	szProcess.num_fdscCst|��d�S)Nr�r�r�r?r?r@r�9	szProcess.ppidsUid:\t(\d+)\t(\d+)\t(\d+)cCs6|��}|�|�d\}}}t�t|�t|�t|��Sr)r�r
rZpuidsr�)r$Z_uids_rer��real�	effective�savedr?r?r@�uids=	szProcess.uidssGid:\t(\d+)\t(\d+)\t(\d+)cCs6|��}|�|�d\}}}t�t|�t|�t|��Sr)r�r
rZpgidsr�)r$Z_gids_rer�r6r7r8r?r?r@�gidsC	szProcess.gids)N)N)r)=r<r=r>r]r�r%r�r�r�r�rr�r�r�r�r�r�r�r�r�r�r{rarG�getpidr�r�r�rr�rr�HAS_PROC_SMAPSr	r�r�rr
rrrrrr!r#�HAS_CPU_AFFINITYr$r'r*�HAS_PROC_IO_PRIORITYr-r.r�r/rQr4r`r5r�r9r:r?r?r?r@r��s�%




	
 
	



�
K
�



�







3

r�)N)r)F)F)�r]�
__future__rr;r�r+r�r�r{r�rr=�sysr�rrrTrrrr�r	rkr
rrr
rrrrrrrrrrrrrrrrZ_compatrrr r!r"r#r+Z__extra__all__r�rarGr;r<r�hasattrr>r=r�r�Zgetpagesizer�r��	byteorderr9r��	AF_PACKETr8�IntEnumr�r7r$r%r&r'r;�globalsr3�__members__ZSTATUS_RUNNINGZSTATUS_SLEEPINGZSTATUS_DISK_SLEEPZSTATUS_STOPPEDZSTATUS_TRACING_STOPZ
STATUS_ZOMBIEZSTATUS_DEADZSTATUS_WAKE_KILLZ
STATUS_WAKINGZSTATUS_IDLEZ
STATUS_PARKEDr1r(r)r*r+r,r-r.r/r0r1r2rIrNrZr`rfr�rgrkr�rnrprqr|r�r�r��	Exceptionr�r�r��resource�ImportErrorr��CDLLr�r��dirr�r�r�r�r�r�rrrrZnet_if_addrsrrr_r`rgrn�
disk_usager�r�r�r�r�r�r�r�r2r�r�r�r�r?r?r?r@�<module>sb






�
�����
���

(

�
@<+
�
)u
/"
hK
,v_ PKok\�Dˤ��*psutil/__pycache__/_psposix.cpython-39.pycnu�[���a

��?h+ �@spdZddlZddlZddlZddlZddlZddlmZddlmZddlm	Z	ddlm
Z
ddlmZdd	lm
Z
dd
lmZddlmZddlmZdd
lmZddlmZddlmZer�ddlmZe
r�ddlZndZgd�Zdd�Zedu�r*eed��r*e�dedd�ejD���Zdd�Zndd�Zddeje edej�e!ej"efdd�Z#dd�Z$e	d d!��Z%dS)"z%Routines common to all posix systems.�N�)�MACOS��TimeoutExpired)�memoize)�
sdiskusage)�
usage_percent)�PY3)�ChildProcessError)�FileNotFoundError)�InterruptedError)�PermissionError)�ProcessLookupError)�unicode)�_psutil_osx)�
pid_exists�wait_pid�
disk_usage�get_terminal_mapcCsL|dkrdSzt�|d�Wn&ty0YdStyBYdS0dSdS)z6Check whether pid exists in the current process table.rTFN)�os�killrr
)�pid�r�;/usr/local/lib64/python3.9/site-packages/psutil/_psposix.pyr(sr�Signals�	NegsignalcCsg|]}|j|jf�qSr)�name�value)�.0�xrrr�
<listcomp>B�r cCs&z
t|�WSty |YS0dS)z+Convert a negative signal value to an enum.N)r�
ValueError��numrrr�negsig_to_enumEs
r%cCs|S)Nrr#rrrr%Ns�	monotoniccs��dkrd}t|��d}	d}
�dur8|
tjO}
�����������fdd�}zt��|
�\}}
Wn>ty~||	�}	YqPty�|��r�||	�}	q�YdS0|dkr�||	�}	qPt�|
�r�t�|
�St�|
�r�t	t�
|
��Std|
��qPdS)a�Wait for a process PID to terminate.

    If the process terminated normally by calling exit(3) or _exit(2),
    or by returning from main(), the return value is the positive integer
    passed to *exit().

    If it was terminated by a signal it returns the negated value of the
    signal which caused the termination (e.g. -SIGTERM).

    If PID is not a children of os.getpid() (current process) just
    wait until the process disappears and return None.

    If PID does not exist at all return None immediately.

    If *timeout* != None and process is still alive raise TimeoutExpired.
    timeout=0 is also possible (either return immediately or raise).
    rzcan't wait for PID 0g-C��6?Ncs6�dur ���kr t���d���|��|dd�S)N)rr�g{�G�z�?r)�interval��_min�_sleep�_timerr�	proc_nameZstop_at�timeoutrr�sleepws

zwait_pid.<locals>.sleepzunknown process exit status %r)r"r�WNOHANG�waitpidrr
�	WIFEXITED�WEXITSTATUS�WIFSIGNALEDr%�WTERMSIG)rr.r-�_waitpidr,r*r+Z_pid_exists�msgr(�flagsr/Zretpid�statusrr)rrRs2






rcCs�trt�|�}n`zt�|�}WnPtynt|t�rhz|�t���}WntyZYn0t�|�}n�Yn0|j	|j
}|j|j
}|j|j
}||}t
r�t�||�}||}t||dd�}t||||d�S)a.Return disk usage associated with path.
    Note: UNIX usually reserves 5% disk space which is not accessible
    by user. In this function "total" and "used" values reflect the
    total and used disk space whereas "free" and "percent" represent
    the "free" and "used percent" user disk space.
    r)Zround_)�total�used�free�percent)r	r�statvfs�UnicodeEncodeError�
isinstancer�encode�sys�getfilesystemencoding�f_blocks�f_frsize�f_bfree�f_bavailrrZdisk_usage_usedrr)�path�str:Z
avail_to_rootZ
avail_to_userr;Z
total_userZusage_percent_userrrrr�s.
�rc	Cs^i}t�d�t�d�}|D]<}||vs0J|��z||t�|�j<WqtyVYq0q|S)zNGet a map of device-id -> path as a dict.
    Used by Process.terminal().
    z	/dev/tty*z
/dev/pts/*)�globr�stat�st_rdevr)�retZlsrrrrr�sr)&�__doc__rJr�signalrB�time�_commonrrrrrZ_compatr	r
rrr
rr�r�enum�__all__r�hasattr�IntEnum�dictrrr%r1�getattr�minr/rrrrrrr�<module>sR
�
	�
_5PKok\q�-��-�-)psutil/__pycache__/_compat.cpython-39.pycnu�[���a

��?h�;�@sdZddlZddlZddlZddlZddlZddlZddlZgd�Zej	ddkZ
e�Ze
rze
ZeZeZeZeZdd�ZneZeZeZeZdd�Ze
r�eZneZeedfd	d
�Ze
r�eZeZeZeZeZeZn�ddlZefdd�Zee �d
d��Zee �dd��Zee �dd��Zee �dd��Zee �dd��Zee �dd��Ze�!�dk�r�ze"ej#d��Wn2e�ytYn e"�y�dZ$e%e$��Yn0zddlm&Z&Wn�e'�y<zddl(m)Z)Wn e'�y�ddl*m)Z)Yn0e�+dgd��Z,Gd d!�d!e-�Z.efe/e
ee0e1d�f�e2e3e1e4fd"d#�Z5d5d&d'�Z&Yn0zdd(l6m7Z7Wn*e'�yxej8ej9Bdfd)d*�Z7Yn0zdd+l6m:Z:Wne'�y�d6d-d.�Z:Yn0zdd/l;m<Z=Wn$e'�y�Gd0d1�d1e�Z=Yn0zdd2lm>Z>Wn"e'�yej?d3d4��Z>Yn0dS)7z�Module which provides compatibility with older Python versions.
This is more future-compatible rather than the opposite (prefer latest
Python 3 way of doing things).
�N)�PY3�long�range�super�unicode�
basestring�b�	lru_cache�which�get_terminal_size�redirect_stderr�FileNotFoundError�PermissionError�ProcessLookupError�InterruptedError�ChildProcessError�FileExistsError�cCs
|�d�S)Nzlatin-1)�encode��s�r�:/usr/local/lib64/python3.9/site-packages/psutil/_compat.pyr2srcCs|S�Nrrrrrr;s�cCsd|tu�rHt�|�}z|j|jjd}Wn"ttfyLd}t|��Yn0z
|j	}WnBt
tfy�z|jj	}Wnt
y�d}t|��Yn0Yn0|D]�}|j�
�D]�}zNt|tj�s�t|t�r�|j}q�z
|j}Wq�t
y�|�||�}Yq�0q�Wnt
tf�yYq�Yn0|j|jur��q4q�q��qHq�d}t|��|tu�r\t||�St|�S)zuLike Python 3 builtin super(). If called without any arguments
        it attempts to infer them at runtime.
        rz'super() used in a function with no argsz$super() used in a non-newstyle classzsuper() called outside a method)�	_SENTINEL�sys�	_getframe�f_locals�f_code�co_varnames�
IndexError�KeyError�RuntimeError�__mro__�AttributeError�	__class__�__dict__�values�
isinstance�types�FunctionType�property�fget�__func__�__get__�	TypeError�	func_code�_builtin_super)�type_Ztype_or_objZ
framedepth�f�msg�mro�methrrrrJsH







rcs�fdd�}|S)Ncs*G��fdd�d����j�_�j�_�S)Ncs2eZdZ��fdd�ZG�fdd�de�Z�ZS)zE_instance_checking_exception.<locals>.wrapped.<locals>.TemporaryClasscsht|�dkrNt|d��rN|d}t|�D] }|�d�s*t||t||��q*nt�|�j|i|��dS)Nrr�__)�lenr)�dir�
startswith�setattr�getattrr�__init__)�self�args�kwargsZ	unwrap_me�attr)�TemporaryClassr&rrr>�s

��zN_instance_checking_exception.<locals>.wrapped.<locals>.TemporaryClass.__init__cs eZdZ�fdd�Zdd�ZdS)zS_instance_checking_exception.<locals>.wrapped.<locals>.TemporaryClass.__metaclass__cs�|�Srr)�cls�inst��instance_checkerrr�__instancecheck__�sze_instance_checking_exception.<locals>.wrapped.<locals>.TemporaryClass.__metaclass__.__instancecheck__cSst��d}t||�S�Nr)r�exc_infor))rDZ	classinfo�valuerrr�__subclasscheck__�sze_instance_checking_exception.<locals>.wrapped.<locals>.TemporaryClass.__metaclass__.__subclasscheck__N)�__name__�
__module__�__qualname__rHrLrrFrr�
__metaclass__�srP)rMrNrOr>�typerP�
__classcell__r�rCrG)r&rrC�srC)rM�__doc__rF��base_exceptionrSr�wrapped�sz-_instance_checking_exception.<locals>.wrappedr)rVrWrrUr�_instance_checking_exception�srXcCst|dt�tjkS�N�errno)r=rrZ�ENOENT�rErrrr
�sr
cCst|dt�tjkSrY)r=rrZZESRCHr\rrrr�srcCst|dt�tjtjfvSrY)r=rrZ�EACCES�EPERMr\rrrr�srcCst|dt�tjkSrY)r=rrZZEINTRr\rrrr�srcCst|dt�tjkSrY)r=rrZ�ECHILDr\rrrr�srcCst|dt�tjkSrY)r=rrZ�EEXISTr\rrrr�sr�CPython�permzbbroken or incompatible Python implementation, see: https://github.com/giampaolo/psutil/issues/1659)r	)�RLock�	CacheInfo)�hits�misses�maxsize�currsizec@s$eZdZdZefdd�Zdd�ZdS)�
_HashedSeq��	hashvaluecCs||dd�<||�|_dSrrj)r?�tup�hashrrrr>�sz_HashedSeq.__init__cCs|jSrrj)r?rrr�__hash__�sz_HashedSeq.__hash__N)rMrNrO�	__slots__rmr>rnrrrrri�sric	s�|}	|r.||���}
|	|7}	|
D]}|	|7}	q |rl|	|�fdd�|D��7}	|r�|	|�fdd�|
D��7}	n$||	�dkr��|	d�|vr�|	dSt|	�S)Nc3s|]}�|�VqdSrr)�.0�v�rQrr�	<genexpr>�z_make_key.<locals>.<genexpr>c3s|]\}}�|�VqdSrr)rp�krqrrrrrsrtrr)�itemsri)r@�kwds�typed�kwd_mark�	fasttypes�sorted�tuplerQr9�key�sorted_items�itemrrrr�	_make_key�s
r��dFcs��fdd�}|S)zLeast-recently-used cache decorator, see:
        http://docs.python.org/3/library/functools.html#functools.lru_cache.
        csi�ddg�
d\��t�
�j�t�t��	g���ddg�dd�<�g�d\�����dkrl��
�fdd�}nP�dur������
��
��f	dd�}n*����������	�
���
��fdd�}����	��
fdd	�}��	��
fd
d�}�|_||_||_t�|��S)Nr)rr)rr�rcs"�|i|��}��d7<|SrIr)r@rw�result)�MISSES�stats�
user_functionrr�wrappersz7lru_cache.<locals>.decorating_function.<locals>.wrappercs\�||��}�|��}|�ur2��d7<|S�|i|��}|�|<��d7<|SrIr)r@rwr}r�)	�HITSr��cache�	cache_get�make_key�rootr�rxr�rrr�!s
cs�|s�r�
||��}n|}�	��z��|�}|dur��\}|\}}}}||�<||�<|�}||�<|�<||�<||�<�
�d7<|W�	��SW�	��n
�	��0�|i|��}�	��z��\}|�vr�n�����k�r4|}	||	�<||	�<|	�}�d<|�}
d|�<|�<�|
=|	�|<n,|�}||||g}||�<|�<�|<�
�d7<W�	��n
�	��0|S)Nrr)�acquire�release)r@rwr}�linkr�Z	link_prevZ	link_nextr��lastZoldrootZoldkey)r��KEYr��NEXT�PREV�RESULT�_lenr�r��lockr�rg�
nonlocal_rootr�rxr�rrr�.sR�
cs<���z$t�����t���W���S���0dS)zReport cache statistics.N)r��
_CacheInfor9r�r)r�r�r�r�rgr�rr�
cache_info[s��z:lru_cache.<locals>.decorating_function.<locals>.cache_infocsX���z@����d}||ddg|dd�<ddg�dd�<W���n
���0dS)z%Clear the cache and cache statistics.rN)r��clearr�)r�)r�r�r�r�rr�cache_clearesz;lru_cache.<locals>.decorating_function.<locals>.cache_clear)	r��getr9rc�__wrapped__r�r��	functools�update_wrapper)r�r�r�r��rgrx)r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�decorating_functions,
*-
z&lru_cache.<locals>.decorating_functionr)rgrxr�rr�rr	sir	)r
cs"dd�}tj���r&|�|�r"�SdS|dur>tj�dtj�}|sFdS|�tj�}tj	dkr�tj
|vrt|�dtj
�tj�dd��tj�}t�fd	d
�|D��r��g}q‡fdd�|D�}n�g}t
�}|D]P}tj�|�}||vr�|�|�|D](}	tj�||	�}
||
|�r�|
Sq�q�dS)
aJGiven a command, mode, and a PATH string, return the path which
        conforms to the given mode on the PATH, or None if there is no such
        file.

        `mode` defaults to os.F_OK | os.X_OK. `path` defaults to the result
        of os.environ.get("PATH"), or can be overridden with a custom search
        path.
        cSs&tj�|�o$t�||�o$tj�|�Sr)�os�path�exists�access�isdir)�fn�moderrr�
_access_check�s

��zwhich.<locals>._access_checkN�PATH�win32r�PATHEXT�c3s |]}����|���VqdSr)�lower�endswith�rp�ext��cmdrrrs�rtzwhich.<locals>.<genexpr>csg|]}�|�qSrrr�r�rr�
<listcomp>�rtzwhich.<locals>.<listcomp>)r�r��dirname�environr��defpath�split�pathsepr�platform�curdir�insert�any�set�normcase�add�join)r�r�r�r��pathext�files�seenr:�normdir�thefile�namerr�rr
}s8





r
)r��P�cCszzddl}ddl}ddl}Wnty2|YS0z*|�d|�d|jd��}|d|dfWStyt|YS0dS)Nr�hhrZ1234)�fcntl�struct�termios�ImportError�unpack�ioctl�
TIOCGWINSZ�	Exception)�fallbackr�r�r��resrrrr�s
�r)�TimeoutExpiredc@seZdZdS)�SubprocessTimeoutExpiredN)rMrNrOrrrrr��sr�)rccs*tj}z|t_|VW|t_n|t_0dSr)r�stderr)�
new_target�originalrrrr�s
r)r�F)r�)@rT�collections�
contextlibrZr�r�rr*�__all__�version_infor�objectr�intrr�xrange�strrrrrr2r
rrrrrr�r�rX�EnvironmentError�python_implementation�OSErrorr`r5r#r	r��	threadingrcZdummy_threading�
namedtupler��listrir��	frozensetrQr{r|r9r��shutilr
�F_OK�X_OKr�
subprocessr�r�r�contextmanagerrrrr�<module>s�

;





�	��
r5PKok\<�RR(psutil/__pycache__/_psbsd.cpython-39.pycnu�[���a

��?h�}�@s�dZddlZddlZddlZddlZddlmZddlmZddlm	Z	ddl
mZddl
mZdd	l
m
Zdd
l
mZddlmZddlmZdd
lmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlmZddlm Z ddlm!Z!gZ"e�r�ej#ej$ej%ej&ej'ej(ej)ej*ej+ej,ej-ej.ej/ej0iZ1n~e�r�ej#ej$ej'ej(ej)ej*ej2ej,ej+ej,ej%ej3ej4ej&iZ1n:e�r�ej#ej$ej'ej(ej)ej*ej+ej,ej%ej3ej4ej&iZ1ej5ej6ej7ej8ej9ej:ej;ej<ej=ej>ej?ej@ejAejBejCejDejEejFejGejHejIejJejKejLiZMe�N�ZOejPZPeQed�ZReQed�ZSeQed�ZTeQed�ZUeVdddd d!d"d#d$d%d&d'd(d)d*d+d,d-d.d/d0d1d2d3d4d5d6�ZWed7gd8��ZXed9gd:��ZYed;gd<��ZZeZZ[ed=gd>��Z\ed?d@�Z]edAdB�Z^e�r8edCgdD��Z_nedCgdE��Z_dFdG�Z`dHdI�ZadJdK�ZbeR�rndLd�ZcndMd�ZcdNec_ddOdP�Zee�s�e�r�dQdR�ZfndSdR�ZfdTdU�Zge�r�dVdW�Zhne�r�dXdW�ZhdvdYdZ�ZiejjZjejkZkejlZlejmZmd[d\�Znd]d^�Zoe�rd_d`�Zpdadb�Zqdcdd�Zrdedf�Zsedgdh��Ztdidj�Zue�rDdkdl�Zvne�rTdmdl�ZvnejvZvdndo�Zwdpdq�Zxejydrds��ZzGdtdu�du�Z{dS)wz5FreeBSD, OpenBSD and NetBSD platforms implementation.�N)�defaultdict)�
namedtuple)�ElementTree�)�_common)�_psposix)�_psutil_bsd)�
_psutil_posix)�FREEBSD)�NETBSD)�OPENBSD)�AccessDenied)�
NoSuchProcess)�
ZombieProcess)�	conn_tmap)�conn_to_ntuple)�debug)�memoize)�memoize_when_activated)�
usage_percent)�FileNotFoundError)�PermissionError)�ProcessLookupError)�which�
per_cpu_times�proc_num_threads�proc_open_files�proc_num_fds��������	�
���
�����������)�ppid�status�real_uid�
effective_uid�	saved_uid�real_gid�
effective_gid�	saved_gid�ttynr�create_time�ctx_switches_vol�ctx_switches_unvol�
read_io_count�write_io_count�	user_time�sys_time�ch_user_time�ch_sys_time�rss�vms�memtext�memdata�memstack�cpunum�name�svmem)�total�	available�percent�used�free�active�inactive�buffers�cached�shared�wired�	scputimes��user�nice�system�idle�irq�pmem)rGrH�text�data�stack�	pcputimes)r\r^�
children_user�children_system�
pmmap_grouped�*path rss, private, ref_count, shadow_count�	pmmap_ext�6addr, perms path rss, private, ref_count, shadow_count�sdiskio)�
read_count�write_count�
read_bytes�write_bytesZ	read_timeZ
write_timeZ	busy_time)rmrnrorpcCs�t��}tr�|\}}}}}}tdd��X}|D]B}|�d�rPt|��d�d}	q,|�d�r,t|��d�d}
q,Wd�n1s�0Y||}||}n,|\}}}}}}}	}
|||}|||}t|||dd�}
t|||
|||||	||
|�S)Nz
/proc/meminfo�rbsBuffers:ris
MemShared:�Zround_)	�cextZvirtual_memr�open�
startswith�int�splitrrN)ZmemrOrSrTrUrYrW�f�linerVrXrRZavailrQ�rz�9/usr/local/lib64/python3.9/site-packages/psutil/_psbsd.py�virtual_memory�s8

4
�r|cCs4t��\}}}}}t||dd�}t�||||||�S)z@System swap memory as (total, used, free, sin, sout) namedtuple.rrr)rsZswap_memrrZsswap)rOrRrS�sinZsoutrQrzrzr{�swap_memory�sr~cCs"t��\}}}}}t|||||�S)z,Return system per-CPU times as a namedtuple.)rs�	cpu_timesrZr[rzrzr{r�srcCs>g}t��D],}|\}}}}}t|||||�}|�|�q|S)�(Return system CPU times as a namedtuple.)rsrrZ�append)�retZcpu_tr\r]r^r_r`�itemrzrzr{r�scCs2t�dkrt�gStjr$d}t|��dt_t�gS)r�r�&supported only starting from FreeBSD 8T)�cpu_count_logicalrr�
__called__�NotImplementedError)�msgrzrzr{r
s
FcCst��S)z0Return the number of logical CPUs in the system.)rsr�rzrzrzr{r�sr�cCst�dkrdSdS)Nr)r�rzrzrzr{�cpu_count_coressr�cCs�d}t��}|durj|�d�}|dkrj|d|d�}t�|�}zt|�d��pRd}W|��n
|��0|s|t�dkr|dS|S)z-Return the number of CPU cores in the system.Nz	</groups>���r%zgroup/children/group/cpur)	rsZcpu_topology�rfindr�
fromstring�len�findall�clearr�)r��s�index�rootrzrzr{r�$s


c	Cs�trt��\}}}}}n�tr�t��\}}}}}}}tdd��4}|D]}|�d�rBt|��d�}qBWd�q�1sv0Yntr�t��\}}}}}}}t	�
||||�S)z*Return various CPU stats as a named tuple.z
/proc/statrqsintrrN)r
rs�	cpu_statsrrtrurvrwrrZ	scpustats)	ZctxswZintrsZ
soft_intrsZsyscallsZ_trapsZ_faults�_forksrxryrzrzr{r�?s�
2�r�c
Cs�g}t�}t|�D]�}zt�|�\}}Wnty>YqYn0|r�z t|�d�d�d�d�}Wnttfy~d}Yn0z t|�d�d�d�d�}Wnttfy�d}Yn0|�	t
�|||��q|S)z�Return frequency metrics for CPUs. As of Dec 2018 only
        CPU 0 appears to be supported by FreeBSD and all other cores
        match the frequency of CPU 0.
        � r��/rN)r��rangers�cpu_freqr�rvrw�
IndexError�
# Copyright (c) 2009, Giampaolo Rodola'. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

"""Common objects shared by __init__.py and _ps*.py modules."""

# Note: this module is imported by setup.py so it should not import
# psutil or third-party modules.

from __future__ import division
from __future__ import print_function

import collections
import contextlib
import errno
import functools
import os
import socket
import stat
import sys
import threading
import warnings
from collections import namedtuple
from socket import AF_INET
from socket import SOCK_DGRAM
from socket import SOCK_STREAM


try:
    from socket import AF_INET6
except ImportError:
    AF_INET6 = None
try:
    from socket import AF_UNIX
except ImportError:
    AF_UNIX = None


# can't take it from _common.py as this script is imported by setup.py
PY3 = sys.version_info[0] >= 3
if PY3:
    import enum
else:
    enum = None


PSUTIL_DEBUG = bool(os.getenv('PSUTIL_DEBUG'))
_DEFAULT = object()

# fmt: off
__all__ = [
    # OS constants
    'FREEBSD', 'BSD', 'LINUX', 'NETBSD', 'OPENBSD', 'MACOS', 'OSX', 'POSIX',
    'SUNOS', 'WINDOWS',
    # connection constants
    'CONN_CLOSE', 'CONN_CLOSE_WAIT', 'CONN_CLOSING', 'CONN_ESTABLISHED',
    'CONN_FIN_WAIT1', 'CONN_FIN_WAIT2', 'CONN_LAST_ACK', 'CONN_LISTEN',
    'CONN_NONE', 'CONN_SYN_RECV', 'CONN_SYN_SENT', 'CONN_TIME_WAIT',
    # net constants
    'NIC_DUPLEX_FULL', 'NIC_DUPLEX_HALF', 'NIC_DUPLEX_UNKNOWN',
    # process status constants
    'STATUS_DEAD', 'STATUS_DISK_SLEEP', 'STATUS_IDLE', 'STATUS_LOCKED',
    'STATUS_RUNNING', 'STATUS_SLEEPING', 'STATUS_STOPPED', 'STATUS_SUSPENDED',
    'STATUS_TRACING_STOP', 'STATUS_WAITING', 'STATUS_WAKE_KILL',
    'STATUS_WAKING', 'STATUS_ZOMBIE', 'STATUS_PARKED',
    # other constants
    'ENCODING', 'ENCODING_ERRS', 'AF_INET6',
    # named tuples
    'pconn', 'pcputimes', 'pctxsw', 'pgids', 'pio', 'pionice', 'popenfile',
    'pthread', 'puids', 'sconn', 'scpustats', 'sdiskio', 'sdiskpart',
    'sdiskusage', 'snetio', 'snicaddr', 'snicstats', 'sswap', 'suser',
    # utility functions
    'conn_tmap', 'deprecated_method', 'isfile_strict', 'memoize',
    'parse_environ_block', 'path_exists_strict', 'usage_percent',
    'supports_ipv6', 'sockfam_to_enum', 'socktype_to_enum', "wrap_numbers",
    'open_text', 'open_binary', 'cat', 'bcat',
    'bytes2human', 'conn_to_ntuple', 'debug',
    # shell utils
    'hilite', 'term_supports_colors', 'print_color',
]
# fmt: on


# ===================================================================
# --- OS constants
# ===================================================================


POSIX = os.name == "posix"
WINDOWS = os.name == "nt"
LINUX = sys.platform.startswith("linux")
MACOS = sys.platform.startswith("darwin")
OSX = MACOS  # deprecated alias
FREEBSD = sys.platform.startswith(("freebsd", "midnightbsd"))
OPENBSD = sys.platform.startswith("openbsd")
NETBSD = sys.platform.startswith("netbsd")
BSD = FREEBSD or OPENBSD or NETBSD
SUNOS = sys.platform.startswith(("sunos", "solaris"))
AIX = sys.platform.startswith("aix")


# ===================================================================
# --- API constants
# ===================================================================


# Process.status()
STATUS_RUNNING = "running"
STATUS_SLEEPING = "sleeping"
STATUS_DISK_SLEEP = "disk-sleep"
STATUS_STOPPED = "stopped"
STATUS_TRACING_STOP = "tracing-stop"
STATUS_ZOMBIE = "zombie"
STATUS_DEAD = "dead"
STATUS_WAKE_KILL = "wake-kill"
STATUS_WAKING = "waking"
STATUS_IDLE = "idle"  # Linux, macOS, FreeBSD
STATUS_LOCKED = "locked"  # FreeBSD
STATUS_WAITING = "waiting"  # FreeBSD
STATUS_SUSPENDED = "suspended"  # NetBSD
STATUS_PARKED = "parked"  # Linux

# Process.net_connections() and psutil.net_connections()
CONN_ESTABLISHED = "ESTABLISHED"
CONN_SYN_SENT = "SYN_SENT"
CONN_SYN_RECV = "SYN_RECV"
CONN_FIN_WAIT1 = "FIN_WAIT1"
CONN_FIN_WAIT2 = "FIN_WAIT2"
CONN_TIME_WAIT = "TIME_WAIT"
CONN_CLOSE = "CLOSE"
CONN_CLOSE_WAIT = "CLOSE_WAIT"
CONN_LAST_ACK = "LAST_ACK"
CONN_LISTEN = "LISTEN"
CONN_CLOSING = "CLOSING"
CONN_NONE = "NONE"

# net_if_stats()
if enum is None:
    NIC_DUPLEX_FULL = 2
    NIC_DUPLEX_HALF = 1
    NIC_DUPLEX_UNKNOWN = 0
else:

    class NicDuplex(enum.IntEnum):
        NIC_DUPLEX_FULL = 2
        NIC_DUPLEX_HALF = 1
        NIC_DUPLEX_UNKNOWN = 0

    globals().update(NicDuplex.__members__)

# sensors_battery()
if enum is None:
    POWER_TIME_UNKNOWN = -1
    POWER_TIME_UNLIMITED = -2
else:

    class BatteryTime(enum.IntEnum):
        POWER_TIME_UNKNOWN = -1
        POWER_TIME_UNLIMITED = -2

    globals().update(BatteryTime.__members__)

# --- others

ENCODING = sys.getfilesystemencoding()
if not PY3:
    ENCODING_ERRS = "replace"
else:
    try:
        ENCODING_ERRS = sys.getfilesystemencodeerrors()  # py 3.6
    except AttributeError:
        ENCODING_ERRS = "surrogateescape" if POSIX else "replace"


# ===================================================================
# --- namedtuples
# ===================================================================

# --- for system functions

# fmt: off
# psutil.swap_memory()
sswap = namedtuple('sswap', ['total', 'used', 'free', 'percent', 'sin',
                             'sout'])
# psutil.disk_usage()
sdiskusage = namedtuple('sdiskusage', ['total', 'used', 'free', 'percent'])
# psutil.disk_io_counters()
sdiskio = namedtuple('sdiskio', ['read_count', 'write_count',
                                 'read_bytes', 'write_bytes',
                                 'read_time', 'write_time'])
# psutil.disk_partitions()
sdiskpart = namedtuple('sdiskpart', ['device', 'mountpoint', 'fstype', 'opts'])
# psutil.net_io_counters()
snetio = namedtuple('snetio', ['bytes_sent', 'bytes_recv',
                               'packets_sent', 'packets_recv',
                               'errin', 'errout',
                               'dropin', 'dropout'])
# psutil.users()
suser = namedtuple('suser', ['name', 'terminal', 'host', 'started', 'pid'])
# psutil.net_connections()
sconn = namedtuple('sconn', ['fd', 'family', 'type', 'laddr', 'raddr',
                             'status', 'pid'])
# psutil.net_if_addrs()
snicaddr = namedtuple('snicaddr',
                      ['family', 'address', 'netmask', 'broadcast', 'ptp'])
# psutil.net_if_stats()
snicstats = namedtuple('snicstats',
                       ['isup', 'duplex', 'speed', 'mtu', 'flags'])
# psutil.cpu_stats()
scpustats = namedtuple(
    'scpustats', ['ctx_switches', 'interrupts', 'soft_interrupts', 'syscalls'])
# psutil.cpu_freq()
scpufreq = namedtuple('scpufreq', ['current', 'min', 'max'])
# psutil.sensors_temperatures()
shwtemp = namedtuple(
    'shwtemp', ['label', 'current', 'high', 'critical'])
# psutil.sensors_battery()
sbattery = namedtuple('sbattery', ['percent', 'secsleft', 'power_plugged'])
# psutil.sensors_fans()
sfan = namedtuple('sfan', ['label', 'current'])
# fmt: on

# --- for Process methods

# psutil.Process.cpu_times()
pcputimes = namedtuple(
    'pcputimes', ['user', 'system', 'children_user', 'children_system']
)
# psutil.Process.open_files()
popenfile = namedtuple('popenfile', ['path', 'fd'])
# psutil.Process.threads()
pthread = namedtuple('pthread', ['id', 'user_time', 'system_time'])
# psutil.Process.uids()
puids = namedtuple('puids', ['real', 'effective', 'saved'])
# psutil.Process.gids()
pgids = namedtuple('pgids', ['real', 'effective', 'saved'])
# psutil.Process.io_counters()
pio = namedtuple(
    'pio', ['read_count', 'write_count', 'read_bytes', 'write_bytes']
)
# psutil.Process.ionice()
pionice = namedtuple('pionice', ['ioclass', 'value'])
# psutil.Process.ctx_switches()
pctxsw = namedtuple('pctxsw', ['voluntary', 'involuntary'])
# psutil.Process.net_connections()
pconn = namedtuple(
    'pconn', ['fd', 'family', 'type', 'laddr', 'raddr', 'status']
)

# psutil.net_connections() and psutil.Process.net_connections()
addr = namedtuple('addr', ['ip', 'port'])


# ===================================================================
# --- Process.net_connections() 'kind' parameter mapping
# ===================================================================


conn_tmap = {
    "all": ([AF_INET, AF_INET6, AF_UNIX], [SOCK_STREAM, SOCK_DGRAM]),
    "tcp": ([AF_INET, AF_INET6], [SOCK_STREAM]),
    "tcp4": ([AF_INET], [SOCK_STREAM]),
    "udp": ([AF_INET, AF_INET6], [SOCK_DGRAM]),
    "udp4": ([AF_INET], [SOCK_DGRAM]),
    "inet": ([AF_INET, AF_INET6], [SOCK_STREAM, SOCK_DGRAM]),
    "inet4": ([AF_INET], [SOCK_STREAM, SOCK_DGRAM]),
    "inet6": ([AF_INET6], [SOCK_STREAM, SOCK_DGRAM]),
}

if AF_INET6 is not None:
    conn_tmap.update({
        "tcp6": ([AF_INET6], [SOCK_STREAM]),
        "udp6": ([AF_INET6], [SOCK_DGRAM]),
    })

if AF_UNIX is not None:
    conn_tmap.update({"unix": ([AF_UNIX], [SOCK_STREAM, SOCK_DGRAM])})
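conn_tmap drives the `kind` argument of net_connections(): each key selects a (families, types) pair. A minimal sketch of how that pair filters raw entries (the `filter_conns` helper is hypothetical, for illustration only):

```python
from socket import AF_INET, SOCK_DGRAM, SOCK_STREAM

# Subset of the conn_tmap above: a `kind` string selects which socket
# families and socket types a connection listing should include.
conn_tmap = {
    "tcp4": ([AF_INET], [SOCK_STREAM]),
    "udp4": ([AF_INET], [SOCK_DGRAM]),
    "inet4": ([AF_INET], [SOCK_STREAM, SOCK_DGRAM]),
}

def filter_conns(conns, kind):
    # Keep only (family, type, port) entries matching the requested kind.
    families, types = conn_tmap[kind]
    return [c for c in conns if c[0] in families and c[1] in types]

conns = [
    (AF_INET, SOCK_STREAM, 8080),  # a TCP socket
    (AF_INET, SOCK_DGRAM, 53),     # a UDP socket
]
print(len(filter_conns(conns, "tcp4")))   # 1
print(len(filter_conns(conns, "inet4")))  # 2
```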


# =====================================================================
# --- Exceptions
# =====================================================================


class Error(Exception):
    """Base exception class. All other psutil exceptions inherit
    from this one.
    """

    __module__ = 'psutil'

    def _infodict(self, attrs):
        info = collections.OrderedDict()
        for name in attrs:
            value = getattr(self, name, None)
            if value:  # noqa
                info[name] = value
            elif name == "pid" and value == 0:
                info[name] = value
        return info

    def __str__(self):
        # invoked on `raise Error`
        info = self._infodict(("pid", "ppid", "name"))
        if info:
            details = "(%s)" % ", ".join(
                ["%s=%r" % (k, v) for k, v in info.items()]
            )
        else:
            details = None
        return " ".join([x for x in (getattr(self, "msg", ""), details) if x])

    def __repr__(self):
        # invoked on `repr(Error)`
        info = self._infodict(("pid", "ppid", "name", "seconds", "msg"))
        details = ", ".join(["%s=%r" % (k, v) for k, v in info.items()])
        return "psutil.%s(%s)" % (self.__class__.__name__, details)


class NoSuchProcess(Error):
    """Exception raised when a process with a certain PID doesn't
    or no longer exists.
    """

    __module__ = 'psutil'

    def __init__(self, pid, name=None, msg=None):
        Error.__init__(self)
        self.pid = pid
        self.name = name
        self.msg = msg or "process no longer exists"

    def __reduce__(self):
        return (self.__class__, (self.pid, self.name, self.msg))


class ZombieProcess(NoSuchProcess):
    """Exception raised when querying a zombie process. This is
    raised on macOS, BSD and Solaris only, and not always: depending
    on the query the OS may be able to succeed anyway.
    On Linux all zombie processes are querable (hence this is never
    raised). Windows doesn't have zombie processes.
    """

    __module__ = 'psutil'

    def __init__(self, pid, name=None, ppid=None, msg=None):
        NoSuchProcess.__init__(self, pid, name, msg)
        self.ppid = ppid
        self.msg = msg or "PID still exists but it's a zombie"

    def __reduce__(self):
        return (self.__class__, (self.pid, self.name, self.ppid, self.msg))


class AccessDenied(Error):
    """Exception raised when permission to perform an action is denied."""

    __module__ = 'psutil'

    def __init__(self, pid=None, name=None, msg=None):
        Error.__init__(self)
        self.pid = pid
        self.name = name
        self.msg = msg or ""

    def __reduce__(self):
        return (self.__class__, (self.pid, self.name, self.msg))


class TimeoutExpired(Error):
    """Raised on Process.wait(timeout) if timeout expires and process
    is still alive.
    """

    __module__ = 'psutil'

    def __init__(self, seconds, pid=None, name=None):
        Error.__init__(self)
        self.seconds = seconds
        self.pid = pid
        self.name = name
        self.msg = "timeout after %s seconds" % seconds

    def __reduce__(self):
        return (self.__class__, (self.seconds, self.pid, self.name))


# ===================================================================
# --- utils
# ===================================================================


# This should be in _compat.py rather than here, but does not work well
# with setup.py importing this module via a sys.path trick.
if PY3:
    if isinstance(__builtins__, dict):  # cpython
        exec_ = __builtins__["exec"]
    else:  # pypy
        exec_ = getattr(__builtins__, "exec")  # noqa

    exec_("""def raise_from(value, from_value):
    try:
        raise value from from_value
    finally:
        value = None
    """)
else:

    def raise_from(value, from_value):
        raise value


def usage_percent(used, total, round_=None):
    """Calculate percentage usage of 'used' against 'total'."""
    try:
        ret = (float(used) / total) * 100
    except ZeroDivisionError:
        return 0.0
    else:
        if round_ is not None:
            ret = round(ret, round_)
        return ret
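As a quick illustration, a standalone copy of the function above behaves as follows:

```python
def usage_percent(used, total, round_=None):
    # Same logic as above: percentage of `used` against `total`,
    # returning 0.0 when total is zero instead of raising.
    try:
        ret = (float(used) / total) * 100
    except ZeroDivisionError:
        return 0.0
    if round_ is not None:
        ret = round(ret, round_)
    return ret

print(usage_percent(50, 200))          # 25.0
print(usage_percent(1, 0))             # 0.0 (avoids ZeroDivisionError)
print(usage_percent(1, 3, round_=2))   # 33.33
```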


def memoize(fun):
    """A simple memoize decorator for functions supporting (hashable)
    positional arguments.
    It also provides a cache_clear() function for clearing the cache:

    >>> @memoize
    ... def foo():
    ...     return 1
    ...
    >>> foo()
    1
    >>> foo.cache_clear()
    >>>

    It supports:
     - functions
     - classes (acts as a @singleton)
     - staticmethods
     - classmethods

    It does NOT support:
     - methods
    """

    @functools.wraps(fun)
    def wrapper(*args, **kwargs):
        key = (args, frozenset(sorted(kwargs.items())))
        try:
            return cache[key]
        except KeyError:
            try:
                ret = cache[key] = fun(*args, **kwargs)
            except Exception as err:  # noqa: BLE001
                raise raise_from(err, None)
            return ret

    def cache_clear():
        """Clear cache."""
        cache.clear()

    cache = {}
    wrapper.cache_clear = cache_clear
    return wrapper
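A minimal standalone sketch of the decorator above (results keyed on positional and keyword arguments, with a `cache_clear()` attribute) shows that repeated calls are served from the cache:

```python
import functools

def memoize(fun):
    cache = {}

    @functools.wraps(fun)
    def wrapper(*args, **kwargs):
        # Hashable key built from both positional and keyword arguments.
        key = (args, frozenset(sorted(kwargs.items())))
        if key not in cache:
            cache[key] = fun(*args, **kwargs)
        return cache[key]

    wrapper.cache_clear = cache.clear
    return wrapper

calls = []

@memoize
def square(x):
    calls.append(x)  # records real invocations, not cache hits
    return x * x

square(3)
square(3)                # served from cache; `calls` still has one entry
assert len(calls) == 1
square.cache_clear()
square(3)                # cache emptied, so the function runs again
assert len(calls) == 2
```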


def memoize_when_activated(fun):
    """A memoize decorator which is disabled by default. It can be
    activated and deactivated on request.
    For efficiency reasons it can be used only against class methods
    accepting no arguments.

    >>> class Foo:
    ...     @memoize_when_activated
    ...     def foo(self):
    ...         print(1)
    ...
    >>> f = Foo()
    >>> # deactivated (default)
    >>> f.foo()
    1
    >>> f.foo()
    1
    >>>
    >>> # activated
    >>> f.foo.cache_activate(f)
    >>> f.foo()
    1
    >>> f.foo()
    >>>
    """

    @functools.wraps(fun)
    def wrapper(self):
        try:
            # case 1: we previously entered oneshot() ctx
            ret = self._cache[fun]
        except AttributeError:
            # case 2: we never entered oneshot() ctx
            try:
                return fun(self)
            except Exception as err:  # noqa: BLE001
                raise raise_from(err, None)
        except KeyError:
            # case 3: we entered oneshot() ctx but there's no cache
            # for this entry yet
            try:
                ret = fun(self)
            except Exception as err:  # noqa: BLE001
                raise raise_from(err, None)
            try:
                self._cache[fun] = ret
            except AttributeError:
                # multi-threading race condition, see:
                # https://github.com/giampaolo/psutil/issues/1948
                pass
        return ret

    def cache_activate(proc):
        """Activate cache. Expects a Process instance. Cache will be
        stored as a "_cache" instance attribute.
        """
        proc._cache = {}

    def cache_deactivate(proc):
        """Deactivate and clear cache."""
        try:
            del proc._cache
        except AttributeError:
            pass

    wrapper.cache_activate = cache_activate
    wrapper.cache_deactivate = cache_deactivate
    return wrapper


def isfile_strict(path):
    """Same as os.path.isfile() but does not swallow EACCES / EPERM
    exceptions, see:
    http://mail.python.org/pipermail/python-dev/2012-June/120787.html.
    """
    try:
        st = os.stat(path)
    except OSError as err:
        if err.errno in (errno.EPERM, errno.EACCES):
            raise
        return False
    else:
        return stat.S_ISREG(st.st_mode)


def path_exists_strict(path):
    """Same as os.path.exists() but does not swallow EACCES / EPERM
    exceptions. See:
    http://mail.python.org/pipermail/python-dev/2012-June/120787.html.
    """
    try:
        os.stat(path)
    except OSError as err:
        if err.errno in (errno.EPERM, errno.EACCES):
            raise
        return False
    else:
        return True


@memoize
def supports_ipv6():
    """Return True if IPv6 is supported on this platform."""
    if not socket.has_ipv6 or AF_INET6 is None:
        return False
    try:
        sock = socket.socket(AF_INET6, socket.SOCK_STREAM)
        with contextlib.closing(sock):
            sock.bind(("::1", 0))
        return True
    except socket.error:
        return False


def parse_environ_block(data):
    """Parse a C environ block of environment variables into a dictionary."""
    # The block is usually raw data from the target process.  It might contain
    # trailing garbage and lines that do not look like assignments.
    ret = {}
    pos = 0

    # localize global variable to speed up access.
    WINDOWS_ = WINDOWS
    while True:
        next_pos = data.find("\0", pos)
        # nul byte at the beginning or double nul byte means finish
        if next_pos <= pos:
            break
        # there might not be an equals sign
        equal_pos = data.find("=", pos, next_pos)
        if equal_pos > pos:
            key = data[pos:equal_pos]
            value = data[equal_pos + 1 : next_pos]
            # Windows expects environment variables to be uppercase only
            if WINDOWS_:
                key = key.upper()
            ret[key] = value
        pos = next_pos + 1

    return ret
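A standalone copy of parse_environ_block() (the POSIX case, without the Windows key-uppercasing), fed a sample nul-separated block, skips entries with no `=` and stops at the double nul byte:

```python
def parse_environ_block(data):
    ret = {}
    pos = 0
    while True:
        next_pos = data.find("\0", pos)
        # nul byte at the start, or a double nul byte, ends the block
        if next_pos <= pos:
            break
        equal_pos = data.find("=", pos, next_pos)
        if equal_pos > pos:
            ret[data[pos:equal_pos]] = data[equal_pos + 1:next_pos]
        pos = next_pos + 1
    return ret

block = "PATH=/usr/bin\0HOME=/root\0no-assignment\0\0trailing-garbage"
env = parse_environ_block(block)
print(env)  # {'PATH': '/usr/bin', 'HOME': '/root'}
```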


def sockfam_to_enum(num):
    """Convert a numeric socket family value to an IntEnum member.
    If it's not a known member, return the numeric value itself.
    """
    if enum is None:
        return num
    else:  # pragma: no cover
        try:
            return socket.AddressFamily(num)
        except ValueError:
            return num


def socktype_to_enum(num):
    """Convert a numeric socket type value to an IntEnum member.
    If it's not a known member, return the numeric value itself.
    """
    if enum is None:
        return num
    else:  # pragma: no cover
        try:
            return socket.SocketKind(num)
        except ValueError:
            return num


def conn_to_ntuple(fd, fam, type_, laddr, raddr, status, status_map, pid=None):
    """Convert a raw connection tuple to a proper ntuple."""
    if fam in (socket.AF_INET, AF_INET6):
        if laddr:
            laddr = addr(*laddr)
        if raddr:
            raddr = addr(*raddr)
    if type_ == socket.SOCK_STREAM and fam in (AF_INET, AF_INET6):
        status = status_map.get(status, CONN_NONE)
    else:
        status = CONN_NONE  # ignore whatever C returned to us
    fam = sockfam_to_enum(fam)
    type_ = socktype_to_enum(type_)
    if pid is None:
        return pconn(fd, fam, type_, laddr, raddr, status)
    else:
        return sconn(fd, fam, type_, laddr, raddr, status, pid)


def deprecated_method(replacement):
    """A decorator which can be used to mark a method as deprecated.
    'replacement' is the name of the method which will be called instead.
    """

    def outer(fun):
        msg = "%s() is deprecated and will be removed; use %s() instead" % (
            fun.__name__,
            replacement,
        )
        if fun.__doc__ is None:
            fun.__doc__ = msg

        @functools.wraps(fun)
        def inner(self, *args, **kwargs):
            warnings.warn(msg, category=DeprecationWarning, stacklevel=2)
            return getattr(self, replacement)(*args, **kwargs)

        return inner

    return outer


class _WrapNumbers:
    """Watches numbers so that they don't overflow and wrap
    (reset to zero).
    """

    def __init__(self):
        self.lock = threading.Lock()
        self.cache = {}
        self.reminders = {}
        self.reminder_keys = {}

    def _add_dict(self, input_dict, name):
        assert name not in self.cache
        assert name not in self.reminders
        assert name not in self.reminder_keys
        self.cache[name] = input_dict
        self.reminders[name] = collections.defaultdict(int)
        self.reminder_keys[name] = collections.defaultdict(set)

    def _remove_dead_reminders(self, input_dict, name):
        """In case the number of keys changed between calls (e.g. a
        disk disappears) this removes the entry from self.reminders.
        """
        old_dict = self.cache[name]
        gone_keys = set(old_dict.keys()) - set(input_dict.keys())
        for gone_key in gone_keys:
            for remkey in self.reminder_keys[name][gone_key]:
                del self.reminders[name][remkey]
            del self.reminder_keys[name][gone_key]

    def run(self, input_dict, name):
        """Cache dict and sum numbers which overflow and wrap.
        Return an updated copy of `input_dict`.
        """
        if name not in self.cache:
            # This was the first call.
            self._add_dict(input_dict, name)
            return input_dict

        self._remove_dead_reminders(input_dict, name)

        old_dict = self.cache[name]
        new_dict = {}
        for key in input_dict:
            input_tuple = input_dict[key]
            try:
                old_tuple = old_dict[key]
            except KeyError:
                # The input dict has a new key (e.g. a new disk or NIC)
                # which didn't exist in the previous call.
                new_dict[key] = input_tuple
                continue

            bits = []
            for i in range(len(input_tuple)):
                input_value = input_tuple[i]
                old_value = old_tuple[i]
                remkey = (key, i)
                if input_value < old_value:
                    # it wrapped!
                    self.reminders[name][remkey] += old_value
                    self.reminder_keys[name][key].add(remkey)
                bits.append(input_value + self.reminders[name][remkey])

            new_dict[key] = tuple(bits)

        self.cache[name] = input_dict
        return new_dict

    def cache_clear(self, name=None):
        """Clear the internal cache, optionally only for function 'name'."""
        with self.lock:
            if name is None:
                self.cache.clear()
                self.reminders.clear()
                self.reminder_keys.clear()
            else:
                self.cache.pop(name, None)
                self.reminders.pop(name, None)
                self.reminder_keys.pop(name, None)

    def cache_info(self):
        """Return internal cache dicts as a tuple of 3 elements."""
        with self.lock:
            return (self.cache, self.reminders, self.reminder_keys)


def wrap_numbers(input_dict, name):
    """Given an `input_dict` and a function `name`, adjust the numbers
    which "wrap" (restart from zero) across different calls by adding
    "old value" to "new value" and return an updated dict.
    """
    with _wn.lock:
        return _wn.run(input_dict, name)


_wn = _WrapNumbers()
wrap_numbers.cache_clear = _wn.cache_clear
wrap_numbers.cache_info = _wn.cache_info
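The wrap-detection idea in _WrapNumbers reduces to a small sketch (`WrapTracker` here is a simplified stand-in, not the psutil class): whenever a counter decreases between samples it must have wrapped to zero, so the previously seen value is folded into a per-key offset that keeps the reported totals monotonic.

```python
class WrapTracker:
    def __init__(self):
        self.last = {}    # key -> last raw value seen
        self.offset = {}  # key -> accumulated wrap correction

    def run(self, sample):
        out = {}
        for key, value in sample.items():
            if value < self.last.get(key, value):
                # counter wrapped: fold the previous value into the offset
                self.offset[key] = self.offset.get(key, 0) + self.last[key]
            self.last[key] = value
            out[key] = value + self.offset.get(key, 0)
        return out

t = WrapTracker()
print(t.run({"eth0": 100}))  # {'eth0': 100}
print(t.run({"eth0": 50}))   # wrapped: 50 + 100 -> {'eth0': 150}
print(t.run({"eth0": 70}))   # {'eth0': 170}
```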


# The read buffer size for open() builtin. This (also) dictates how
# much data we read(2) when iterating over file lines as in:
#   >>> with open(file) as f:
#   ...    for line in f:
#   ...        ...
# The default per-line buffer size is 1K for binary files and 8K for
# text files. We use a bigger buffer (32K) in order to have more consistent
# results when reading /proc pseudo files on Linux, see:
# https://github.com/giampaolo/psutil/issues/2050
# On Python 2 this also speeds up the reading of big files:
# (namely /proc/{pid}/smaps and /proc/net/*):
# https://github.com/giampaolo/psutil/issues/708
FILE_READ_BUFFER_SIZE = 32 * 1024


def open_binary(fname):
    return open(fname, "rb", buffering=FILE_READ_BUFFER_SIZE)


def open_text(fname):
    """On Python 3 opens a file in text mode by using fs encoding and
    a proper en/decoding errors handler.
    On Python 2 this is just an alias for open(name, 'rt').
    """
    if not PY3:
        return open(fname, buffering=FILE_READ_BUFFER_SIZE)

    # See:
    # https://github.com/giampaolo/psutil/issues/675
    # https://github.com/giampaolo/psutil/pull/733
    fobj = open(
        fname,
        buffering=FILE_READ_BUFFER_SIZE,
        encoding=ENCODING,
        errors=ENCODING_ERRS,
    )
    try:
        # Dictates per-line read(2) buffer size. Default is 8k. See:
        # https://github.com/giampaolo/psutil/issues/2050#issuecomment-1013387546
        fobj._CHUNK_SIZE = FILE_READ_BUFFER_SIZE
    except AttributeError:
        pass
    except Exception:
        fobj.close()
        raise

    return fobj


def cat(fname, fallback=_DEFAULT, _open=open_text):
    """Read entire file content and return it as a string. File is
    opened in text mode. If specified, `fallback` is the value
    returned in case of error, either if the file does not exist or
    it can't be read().
    """
    if fallback is _DEFAULT:
        with _open(fname) as f:
            return f.read()
    else:
        try:
            with _open(fname) as f:
                return f.read()
        except (IOError, OSError):
            return fallback


def bcat(fname, fallback=_DEFAULT):
    """Same as above but opens file in binary mode."""
    return cat(fname, fallback=fallback, _open=open_binary)
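The fallback semantics of cat()/bcat() can be restated as a tiny standalone sketch; `cat_sketch` and the probed path are hypothetical, for illustration only:

```python
_SENTINEL = object()

def cat_sketch(fname, fallback=_SENTINEL):
    # Same contract as cat() above: return the whole file content,
    # or `fallback` on any OSError when a fallback was supplied.
    if fallback is _SENTINEL:
        with open(fname) as f:
            return f.read()
    try:
        with open(fname) as f:
            return f.read()
    except OSError:
        return fallback

# A path that does not exist triggers the fallback instead of raising.
missing = cat_sketch("/nonexistent/path/xyz", fallback="")
```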


def bytes2human(n, format="%(value).1f%(symbol)s"):
    """Used by various scripts. See: http://goo.gl/zeJZl.

    >>> bytes2human(10000)
    '9.8K'
    >>> bytes2human(100001221)
    '95.4M'
    """
    symbols = ('B', 'K', 'M', 'G', 'T', 'P', 'E', 'Z', 'Y')
    prefix = {}
    for i, s in enumerate(symbols[1:]):
        prefix[s] = 1 << (i + 1) * 10
    for symbol in reversed(symbols[1:]):
        if abs(n) >= prefix[symbol]:
            value = float(n) / prefix[symbol]
            return format % locals()
    return format % dict(symbol=symbols[0], value=n)
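The binary-prefix logic above can be checked in isolation; `bytes2human_sketch` is a hypothetical restatement of the function, not psutil API:

```python
def bytes2human_sketch(n, fmt="%(value).1f%(symbol)s"):
    # Same binary-prefix table built in bytes2human() above:
    # K = 2**10, M = 2**20, ... up to Y = 2**80.
    symbols = ('B', 'K', 'M', 'G', 'T', 'P', 'E', 'Z', 'Y')
    prefix = {s: 1 << (i + 1) * 10 for i, s in enumerate(symbols[1:])}
    for symbol in reversed(symbols[1:]):
        if abs(n) >= prefix[symbol]:
            value = float(n) / prefix[symbol]
            return fmt % dict(value=value, symbol=symbol)
    return fmt % dict(value=n, symbol=symbols[0])

print(bytes2human_sketch(10000))      # -> '9.8K'
print(bytes2human_sketch(100001221))  # -> '95.4M'
```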


def get_procfs_path():
    """Return updated psutil.PROCFS_PATH constant."""
    return sys.modules['psutil'].PROCFS_PATH


if PY3:

    def decode(s):
        return s.decode(encoding=ENCODING, errors=ENCODING_ERRS)

else:

    def decode(s):
        return s


# =====================================================================
# --- shell utils
# =====================================================================


@memoize
def term_supports_colors(file=sys.stdout):  # pragma: no cover
    if os.name == 'nt':
        return True
    try:
        import curses

        assert file.isatty()
        curses.setupterm()
        assert curses.tigetnum("colors") > 0
    except Exception:  # noqa: BLE001
        return False
    else:
        return True


def hilite(s, color=None, bold=False):  # pragma: no cover
    """Return an highlighted version of 'string'."""
    if not term_supports_colors():
        return s
    attr = []
    colors = dict(
        blue='34',
        brown='33',
        darkgrey='30',
        green='32',
        grey='37',
        lightblue='36',
        red='91',
        violet='35',
        yellow='93',
    )
    colors[None] = '29'
    try:
        color = colors[color]
    except KeyError:
        raise ValueError(
            "invalid color %r; choose between %s" % (list(colors.keys()))
        )
    attr.append(color)
    if bold:
        attr.append('1')
    return '\x1b[%sm%s\x1b[0m' % (';'.join(attr), s)
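The escape-sequence assembly in hilite() boils down to a standard ANSI SGR wrap, sketched here without the terminal-capability check (`ansi_wrap` is illustrative, not psutil API):

```python
# ANSI SGR sequence as hilite() assembles it: ESC [ <attrs> m ... ESC [ 0 m
def ansi_wrap(s, color_code, bold=False):
    attrs = [color_code]
    if bold:
        attrs.append('1')       # '1' is the SGR bold attribute
    return '\x1b[%sm%s\x1b[0m' % (';'.join(attrs), s)

# '34' is the SGR code for blue, matching the colors dict above.
print(repr(ansi_wrap("hi", '34', bold=True)))  # -> '\x1b[34;1mhi\x1b[0m'
```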


def print_color(
    s, color=None, bold=False, file=sys.stdout
):  # pragma: no cover
    """Print a colorized version of string."""
    if not term_supports_colors():
        print(s, file=file)  # NOQA
    elif POSIX:
        print(hilite(s, color, bold), file=file)  # NOQA
    else:
        import ctypes

        DEFAULT_COLOR = 7
        GetStdHandle = ctypes.windll.Kernel32.GetStdHandle
        SetConsoleTextAttribute = (
            ctypes.windll.Kernel32.SetConsoleTextAttribute
        )

        colors = dict(green=2, red=4, brown=6, yellow=6)
        colors[None] = DEFAULT_COLOR
        try:
            color = colors[color]
        except KeyError:
            raise ValueError(
                "invalid color %r; choose between %r"
                % (color, list(colors.keys()))
            )
        if bold and color <= 7:
            color += 8

        handle_id = -12 if file is sys.stderr else -11
        GetStdHandle.restype = ctypes.c_ulong
        handle = GetStdHandle(handle_id)
        SetConsoleTextAttribute(handle, color)
        try:
            print(s, file=file)  # NOQA
        finally:
            SetConsoleTextAttribute(handle, DEFAULT_COLOR)


def debug(msg):
    """If PSUTIL_DEBUG env var is set, print a debug message to stderr."""
    if PSUTIL_DEBUG:
        import inspect

        fname, lineno, _, _lines, _index = inspect.getframeinfo(
            inspect.currentframe().f_back
        )
        if isinstance(msg, Exception):
            if isinstance(msg, (OSError, IOError, EnvironmentError)):
                # ...because str(exc) may contain info about the file name
                msg = "ignoring %s" % msg
            else:
                msg = "ignoring %r" % msg
        print(  # noqa
            "psutil-debug [%s:%s]> %s" % (fname, lineno, msg), file=sys.stderr
        )


# =====================================================================
# psutil/_psaix.py
# =====================================================================

# Copyright (c) 2009, Giampaolo Rodola'
# Copyright (c) 2017, Arnon Yaari
# All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

"""AIX platform implementation."""

import functools
import glob
import os
import re
import subprocess
import sys
from collections import namedtuple

from . import _common
from . import _psposix
from . import _psutil_aix as cext
from . import _psutil_posix as cext_posix
from ._common import NIC_DUPLEX_FULL
from ._common import NIC_DUPLEX_HALF
from ._common import NIC_DUPLEX_UNKNOWN
from ._common import AccessDenied
from ._common import NoSuchProcess
from ._common import ZombieProcess
from ._common import conn_to_ntuple
from ._common import get_procfs_path
from ._common import memoize_when_activated
from ._common import usage_percent
from ._compat import PY3
from ._compat import FileNotFoundError
from ._compat import PermissionError
from ._compat import ProcessLookupError


__extra__all__ = ["PROCFS_PATH"]


# =====================================================================
# --- globals
# =====================================================================


HAS_THREADS = hasattr(cext, "proc_threads")
HAS_NET_IO_COUNTERS = hasattr(cext, "net_io_counters")
HAS_PROC_IO_COUNTERS = hasattr(cext, "proc_io_counters")

PAGE_SIZE = cext_posix.getpagesize()
AF_LINK = cext_posix.AF_LINK

PROC_STATUSES = {
    cext.SIDL: _common.STATUS_IDLE,
    cext.SZOMB: _common.STATUS_ZOMBIE,
    cext.SACTIVE: _common.STATUS_RUNNING,
    cext.SSWAP: _common.STATUS_RUNNING,  # TODO what status is this?
    cext.SSTOP: _common.STATUS_STOPPED,
}

TCP_STATUSES = {
    cext.TCPS_ESTABLISHED: _common.CONN_ESTABLISHED,
    cext.TCPS_SYN_SENT: _common.CONN_SYN_SENT,
    cext.TCPS_SYN_RCVD: _common.CONN_SYN_RECV,
    cext.TCPS_FIN_WAIT_1: _common.CONN_FIN_WAIT1,
    cext.TCPS_FIN_WAIT_2: _common.CONN_FIN_WAIT2,
    cext.TCPS_TIME_WAIT: _common.CONN_TIME_WAIT,
    cext.TCPS_CLOSED: _common.CONN_CLOSE,
    cext.TCPS_CLOSE_WAIT: _common.CONN_CLOSE_WAIT,
    cext.TCPS_LAST_ACK: _common.CONN_LAST_ACK,
    cext.TCPS_LISTEN: _common.CONN_LISTEN,
    cext.TCPS_CLOSING: _common.CONN_CLOSING,
    cext.PSUTIL_CONN_NONE: _common.CONN_NONE,
}

proc_info_map = dict(
    ppid=0,
    rss=1,
    vms=2,
    create_time=3,
    nice=4,
    num_threads=5,
    status=6,
    ttynr=7,
)


# =====================================================================
# --- named tuples
# =====================================================================


# psutil.Process.memory_info()
pmem = namedtuple('pmem', ['rss', 'vms'])
# psutil.Process.memory_full_info()
pfullmem = pmem
# psutil.Process.cpu_times()
scputimes = namedtuple('scputimes', ['user', 'system', 'idle', 'iowait'])
# psutil.virtual_memory()
svmem = namedtuple('svmem', ['total', 'available', 'percent', 'used', 'free'])


# =====================================================================
# --- memory
# =====================================================================


def virtual_memory():
    total, avail, free, _pinned, inuse = cext.virtual_mem()
    percent = usage_percent((total - avail), total, round_=1)
    return svmem(total, avail, percent, inuse, free)


def swap_memory():
    """Swap system memory as a (total, used, free, sin, sout) tuple."""
    total, free, sin, sout = cext.swap_mem()
    used = total - free
    percent = usage_percent(used, total, round_=1)
    return _common.sswap(total, used, free, percent, sin, sout)


# =====================================================================
# --- CPU
# =====================================================================


def cpu_times():
    """Return system-wide CPU times as a named tuple."""
    ret = cext.per_cpu_times()
    return scputimes(*[sum(x) for x in zip(*ret)])


def per_cpu_times():
    """Return system per-CPU times as a list of named tuples."""
    ret = cext.per_cpu_times()
    return [scputimes(*x) for x in ret]


def cpu_count_logical():
    """Return the number of logical CPUs in the system."""
    try:
        return os.sysconf("SC_NPROCESSORS_ONLN")
    except ValueError:
        # mimic os.cpu_count() behavior
        return None


def cpu_count_cores():
    cmd = ["lsdev", "-Cc", "processor"]
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = p.communicate()
    if PY3:
        stdout, stderr = (
            x.decode(sys.stdout.encoding) for x in (stdout, stderr)
        )
    if p.returncode != 0:
        raise RuntimeError("%r command error\n%s" % (cmd, stderr))
    processors = stdout.strip().splitlines()
    return len(processors) or None


def cpu_stats():
    """Return various CPU stats as a named tuple."""
    ctx_switches, interrupts, soft_interrupts, syscalls = cext.cpu_stats()
    return _common.scpustats(
        ctx_switches, interrupts, soft_interrupts, syscalls
    )


# =====================================================================
# --- disks
# =====================================================================


disk_io_counters = cext.disk_io_counters
disk_usage = _psposix.disk_usage


def disk_partitions(all=False):
    """Return system disk partitions."""
    # TODO - the filtering logic should be better checked so that
    # it tries to reflect 'df' as much as possible
    retlist = []
    partitions = cext.disk_partitions()
    for partition in partitions:
        device, mountpoint, fstype, opts = partition
        if device == 'none':
            device = ''
        if not all:
            # Unlike on, say, Linux, we don't have a list of common
            # fs types, so the best we can do, AFAIK, is to filter by
            # filesystems having a total size > 0.
            if not disk_usage(mountpoint).total:
                continue
        ntuple = _common.sdiskpart(device, mountpoint, fstype, opts)
        retlist.append(ntuple)
    return retlist


# =====================================================================
# --- network
# =====================================================================


net_if_addrs = cext_posix.net_if_addrs

if HAS_NET_IO_COUNTERS:
    net_io_counters = cext.net_io_counters


def net_connections(kind, _pid=-1):
    """Return socket connections.  If pid == -1 return system-wide
    connections (as opposed to connections opened by one process only).
    """
    cmap = _common.conn_tmap
    if kind not in cmap:
        raise ValueError(
            "invalid %r kind argument; choose between %s"
            % (kind, ', '.join([repr(x) for x in cmap]))
        )
    families, types = _common.conn_tmap[kind]
    rawlist = cext.net_connections(_pid)
    ret = []
    for item in rawlist:
        fd, fam, type_, laddr, raddr, status, pid = item
        if fam not in families:
            continue
        if type_ not in types:
            continue
        nt = conn_to_ntuple(
            fd,
            fam,
            type_,
            laddr,
            raddr,
            status,
            TCP_STATUSES,
            pid=pid if _pid == -1 else None,
        )
        ret.append(nt)
    return ret


def net_if_stats():
    """Get NIC stats (isup, duplex, speed, mtu)."""
    duplex_map = {"Full": NIC_DUPLEX_FULL, "Half": NIC_DUPLEX_HALF}
    names = set([x[0] for x in net_if_addrs()])
    ret = {}
    for name in names:
        mtu = cext_posix.net_if_mtu(name)
        flags = cext_posix.net_if_flags(name)

        # try to get speed and duplex
        # TODO: rewrite this in C (entstat forks, so use truss -f to follow.
        # looks like it is using an undocumented ioctl?)
        duplex = ""
        speed = 0
        p = subprocess.Popen(
            ["/usr/bin/entstat", "-d", name],
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
        )
        stdout, stderr = p.communicate()
        if PY3:
            stdout, stderr = (
                x.decode(sys.stdout.encoding) for x in (stdout, stderr)
            )
        if p.returncode == 0:
            re_result = re.search(
                r"Running: (\d+) Mbps.*?(\w+) Duplex", stdout
            )
            if re_result is not None:
                speed = int(re_result.group(1))
                duplex = re_result.group(2)

        output_flags = ','.join(flags)
        isup = 'running' in flags
        duplex = duplex_map.get(duplex, NIC_DUPLEX_UNKNOWN)
        ret[name] = _common.snicstats(isup, duplex, speed, mtu, output_flags)
    return ret
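The speed/duplex scrape above hinges on one regex applied to `entstat -d` output. Since entstat is an AIX-only tool, the sample line below is a hypothetical excerpt used just to exercise the same pattern:

```python
import re

# Hypothetical one-line excerpt of `entstat -d <nic>` output, matched
# with the exact regex used in net_if_stats() above.
sample = "Media Speed Running: 1000 Mbps Full Duplex"
m = re.search(r"Running: (\d+) Mbps.*?(\w+) Duplex", sample)
speed = int(m.group(1))   # link speed in Mbps
duplex = m.group(2)       # 'Full' or 'Half', keyed into duplex_map
print(speed, duplex)  # -> 1000 Full
```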


# =====================================================================
# --- other system functions
# =====================================================================


def boot_time():
    """The system boot time expressed in seconds since the epoch."""
    return cext.boot_time()


def users():
    """Return currently connected users as a list of namedtuples."""
    retlist = []
    rawlist = cext.users()
    localhost = (':0.0', ':0')
    for item in rawlist:
        user, tty, hostname, tstamp, user_process, pid = item
        # note: the underlying C function includes entries about
        # system boot, run level and others.  We might want
        # to use them in the future.
        if not user_process:
            continue
        if hostname in localhost:
            hostname = 'localhost'
        nt = _common.suser(user, tty, hostname, tstamp, pid)
        retlist.append(nt)
    return retlist


# =====================================================================
# --- processes
# =====================================================================


def pids():
    """Returns a list of PIDs currently running on the system."""
    return [int(x) for x in os.listdir(get_procfs_path()) if x.isdigit()]


def pid_exists(pid):
    """Check for the existence of a unix pid."""
    return os.path.exists(os.path.join(get_procfs_path(), str(pid), "psinfo"))


def wrap_exceptions(fun):
    """Call callable into a try/except clause and translate ENOENT,
    EACCES and EPERM in NoSuchProcess or AccessDenied exceptions.
    """

    @functools.wraps(fun)
    def wrapper(self, *args, **kwargs):
        try:
            return fun(self, *args, **kwargs)
        except (FileNotFoundError, ProcessLookupError):
            # ENOENT (no such file or directory) gets raised on open().
            # ESRCH (no such process) can get raised on read() if the
            # process is gone in the meantime.
            if not pid_exists(self.pid):
                raise NoSuchProcess(self.pid, self._name)
            else:
                raise ZombieProcess(self.pid, self._name, self._ppid)
        except PermissionError:
            raise AccessDenied(self.pid, self._name)

    return wrapper
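The decorator pattern above (catch low-level OS errors, re-raise domain exceptions) can be sketched generically; `NoSuchThing`, `Thing` and `translate_lookup_errors` are hypothetical names, not psutil API:

```python
import functools

class NoSuchThing(Exception):
    pass

def translate_lookup_errors(fun):
    # Same shape as wrap_exceptions(): run the wrapped method and
    # convert low-level lookup errors into a domain-specific exception.
    @functools.wraps(fun)
    def wrapper(self, *args, **kwargs):
        try:
            return fun(self, *args, **kwargs)
        except (FileNotFoundError, ProcessLookupError):
            raise NoSuchThing(self.pid)
    return wrapper

class Thing:
    def __init__(self, pid):
        self.pid = pid

    @translate_lookup_errors
    def read(self):
        raise FileNotFoundError("gone")

try:
    Thing(42).read()
except NoSuchThing as exc:
    print(exc)  # -> 42
```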


class Process:
    """Wrapper class around underlying C implementation."""

    __slots__ = ["_cache", "_name", "_ppid", "_procfs_path", "pid"]

    def __init__(self, pid):
        self.pid = pid
        self._name = None
        self._ppid = None
        self._procfs_path = get_procfs_path()

    def oneshot_enter(self):
        self._proc_basic_info.cache_activate(self)
        self._proc_cred.cache_activate(self)

    def oneshot_exit(self):
        self._proc_basic_info.cache_deactivate(self)
        self._proc_cred.cache_deactivate(self)

    @wrap_exceptions
    @memoize_when_activated
    def _proc_basic_info(self):
        return cext.proc_basic_info(self.pid, self._procfs_path)

    @wrap_exceptions
    @memoize_when_activated
    def _proc_cred(self):
        return cext.proc_cred(self.pid, self._procfs_path)

    @wrap_exceptions
    def name(self):
        if self.pid == 0:
            return "swapper"
        # note: max 16 characters
        return cext.proc_name(self.pid, self._procfs_path).rstrip("\x00")

    @wrap_exceptions
    def exe(self):
        # there is no way to get executable path in AIX other than to guess,
        # and guessing is more complex than what's in the wrapping class
        cmdline = self.cmdline()
        if not cmdline:
            return ''
        exe = cmdline[0]
        if os.path.sep in exe:
            # relative or absolute path
            if not os.path.isabs(exe):
                # if cwd has changed, we're out of luck - this may be wrong!
                exe = os.path.abspath(os.path.join(self.cwd(), exe))
            if (
                os.path.isabs(exe)
                and os.path.isfile(exe)
                and os.access(exe, os.X_OK)
            ):
                return exe
            # not found, move to search in PATH using basename only
            exe = os.path.basename(exe)
        # search for the exe name in PATH
        for path in os.environ["PATH"].split(":"):
            possible_exe = os.path.abspath(os.path.join(path, exe))
            if os.path.isfile(possible_exe) and os.access(
                possible_exe, os.X_OK
            ):
                return possible_exe
        return ''

    @wrap_exceptions
    def cmdline(self):
        return cext.proc_args(self.pid)

    @wrap_exceptions
    def environ(self):
        return cext.proc_environ(self.pid)

    @wrap_exceptions
    def create_time(self):
        return self._proc_basic_info()[proc_info_map['create_time']]

    @wrap_exceptions
    def num_threads(self):
        return self._proc_basic_info()[proc_info_map['num_threads']]

    if HAS_THREADS:

        @wrap_exceptions
        def threads(self):
            rawlist = cext.proc_threads(self.pid)
            retlist = []
            for thread_id, utime, stime in rawlist:
                ntuple = _common.pthread(thread_id, utime, stime)
                retlist.append(ntuple)
            # The underlying C implementation retrieves all OS threads
            # and filters them by PID.  At this point we can't tell
            # whether an empty list means the process has no threads or
            # the process is no longer active, so we force NSP in case
            # the PID is no longer there.
            if not retlist:
                # will raise NSP if process is gone
                os.stat('%s/%s' % (self._procfs_path, self.pid))
            return retlist

    @wrap_exceptions
    def net_connections(self, kind='inet'):
        ret = net_connections(kind, _pid=self.pid)
        # The underlying C implementation retrieves all OS connections
        # and filters them by PID.  At this point we can't tell whether
        # an empty list means there were no connections for the process
        # or the process is no longer active, so we force NSP in case
        # the PID is no longer there.
        if not ret:
            # will raise NSP if process is gone
            os.stat('%s/%s' % (self._procfs_path, self.pid))
        return ret

    @wrap_exceptions
    def nice_get(self):
        return cext_posix.getpriority(self.pid)

    @wrap_exceptions
    def nice_set(self, value):
        return cext_posix.setpriority(self.pid, value)

    @wrap_exceptions
    def ppid(self):
        self._ppid = self._proc_basic_info()[proc_info_map['ppid']]
        return self._ppid

    @wrap_exceptions
    def uids(self):
        real, effective, saved, _, _, _ = self._proc_cred()
        return _common.puids(real, effective, saved)

    @wrap_exceptions
    def gids(self):
        _, _, _, real, effective, saved = self._proc_cred()
        return _common.puids(real, effective, saved)

    @wrap_exceptions
    def cpu_times(self):
        t = cext.proc_cpu_times(self.pid, self._procfs_path)
        return _common.pcputimes(*t)

    @wrap_exceptions
    def terminal(self):
        ttydev = self._proc_basic_info()[proc_info_map['ttynr']]
        # convert from 64-bit dev_t to 32-bit dev_t and then map the device
        ttydev = ((ttydev & 0x0000FFFF00000000) >> 16) | (ttydev & 0xFFFF)
        # try to match the st_rdev of /dev/**/* files against ttydev
        for dev in glob.glob("/dev/**/*"):
            if os.stat(dev).st_rdev == ttydev:
                return dev
        return None
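The 64-bit to 32-bit dev_t squeeze used in terminal() above is just a mask-and-shift; a worked check with a made-up device number (major 5, minor 3, packed the way the code expects):

```python
def squash_dev_t(dev64):
    # Same expression as in terminal(): keep the low 16 bits (minor)
    # and shift the 16-bit major field down next to it.
    return ((dev64 & 0x0000FFFF00000000) >> 16) | (dev64 & 0xFFFF)

# Hypothetical 64-bit device number: major 5 in bits 32-47, minor 3.
dev64 = (5 << 32) | 3
print(squash_dev_t(dev64))  # -> 327683, i.e. (5 << 16) | 3
```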

    @wrap_exceptions
    def cwd(self):
        procfs_path = self._procfs_path
        try:
            result = os.readlink("%s/%s/cwd" % (procfs_path, self.pid))
            return result.rstrip('/')
        except FileNotFoundError:
            os.stat("%s/%s" % (procfs_path, self.pid))  # raise NSP or AD
            return ""

    @wrap_exceptions
    def memory_info(self):
        ret = self._proc_basic_info()
        rss = ret[proc_info_map['rss']] * 1024
        vms = ret[proc_info_map['vms']] * 1024
        return pmem(rss, vms)

    memory_full_info = memory_info

    @wrap_exceptions
    def status(self):
        code = self._proc_basic_info()[proc_info_map['status']]
        # XXX is '?' legit? (we're not supposed to return it anyway)
        return PROC_STATUSES.get(code, '?')

    def open_files(self):
        # TODO rewrite without using procfiles (stat /proc/pid/fd/* and then
        # find matching name of the inode)
        p = subprocess.Popen(
            ["/usr/bin/procfiles", "-n", str(self.pid)],
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
        )
        stdout, stderr = p.communicate()
        if PY3:
            stdout, stderr = (
                x.decode(sys.stdout.encoding) for x in (stdout, stderr)
            )
        if "no such process" in stderr.lower():
            raise NoSuchProcess(self.pid, self._name)
        procfiles = re.findall(r"(\d+): S_IFREG.*name:(.*)\n", stdout)
        retlist = []
        for fd, path in procfiles:
            path = path.strip()
            if path.startswith("//"):
                path = path[1:]
            if path.lower() == "cannot be retrieved":
                continue
            retlist.append(_common.popenfile(path, int(fd)))
        return retlist

    @wrap_exceptions
    def num_fds(self):
        if self.pid == 0:  # no /proc/0/fd
            return 0
        return len(os.listdir("%s/%s/fd" % (self._procfs_path, self.pid)))

    @wrap_exceptions
    def num_ctx_switches(self):
        return _common.pctxsw(*cext.proc_num_ctx_switches(self.pid))

    @wrap_exceptions
    def wait(self, timeout=None):
        return _psposix.wait_pid(self.pid, timeout, self._name)

    if HAS_PROC_IO_COUNTERS:

        @wrap_exceptions
        def io_counters(self):
            try:
                rc, wc, rb, wb = cext.proc_io_counters(self.pid)
            except OSError:
                # if process is terminated, proc_io_counters returns OSError
                # instead of NSP
                if not pid_exists(self.pid):
                    raise NoSuchProcess(self.pid, self._name)
                raise
            return _common.pio(rc, wc, rb, wb)


# =====================================================================
# psutil/_psosx.py
# =====================================================================

# Copyright (c) 2009, Giampaolo Rodola'. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

"""macOS platform implementation."""

import errno
import functools
import os
from collections import namedtuple

from . import _common
from . import _psposix
from . import _psutil_osx as cext
from . import _psutil_posix as cext_posix
from ._common import AccessDenied
from ._common import NoSuchProcess
from ._common import ZombieProcess
from ._common import conn_tmap
from ._common import conn_to_ntuple
from ._common import isfile_strict
from ._common import memoize_when_activated
from ._common import parse_environ_block
from ._common import usage_percent
from ._compat import PermissionError
from ._compat import ProcessLookupError


__extra__all__ = []


# =====================================================================
# --- globals
# =====================================================================


PAGESIZE = cext_posix.getpagesize()
AF_LINK = cext_posix.AF_LINK

TCP_STATUSES = {
    cext.TCPS_ESTABLISHED: _common.CONN_ESTABLISHED,
    cext.TCPS_SYN_SENT: _common.CONN_SYN_SENT,
    cext.TCPS_SYN_RECEIVED: _common.CONN_SYN_RECV,
    cext.TCPS_FIN_WAIT_1: _common.CONN_FIN_WAIT1,
    cext.TCPS_FIN_WAIT_2: _common.CONN_FIN_WAIT2,
    cext.TCPS_TIME_WAIT: _common.CONN_TIME_WAIT,
    cext.TCPS_CLOSED: _common.CONN_CLOSE,
    cext.TCPS_CLOSE_WAIT: _common.CONN_CLOSE_WAIT,
    cext.TCPS_LAST_ACK: _common.CONN_LAST_ACK,
    cext.TCPS_LISTEN: _common.CONN_LISTEN,
    cext.TCPS_CLOSING: _common.CONN_CLOSING,
    cext.PSUTIL_CONN_NONE: _common.CONN_NONE,
}

PROC_STATUSES = {
    cext.SIDL: _common.STATUS_IDLE,
    cext.SRUN: _common.STATUS_RUNNING,
    cext.SSLEEP: _common.STATUS_SLEEPING,
    cext.SSTOP: _common.STATUS_STOPPED,
    cext.SZOMB: _common.STATUS_ZOMBIE,
}

kinfo_proc_map = dict(
    ppid=0,
    ruid=1,
    euid=2,
    suid=3,
    rgid=4,
    egid=5,
    sgid=6,
    ttynr=7,
    ctime=8,
    status=9,
    name=10,
)

pidtaskinfo_map = dict(
    cpuutime=0,
    cpustime=1,
    rss=2,
    vms=3,
    pfaults=4,
    pageins=5,
    numthreads=6,
    volctxsw=7,
)


# =====================================================================
# --- named tuples
# =====================================================================


# fmt: off
# psutil.cpu_times()
scputimes = namedtuple('scputimes', ['user', 'nice', 'system', 'idle'])
# psutil.virtual_memory()
svmem = namedtuple(
    'svmem', ['total', 'available', 'percent', 'used', 'free',
              'active', 'inactive', 'wired'])
# psutil.Process.memory_info()
pmem = namedtuple('pmem', ['rss', 'vms', 'pfaults', 'pageins'])
# psutil.Process.memory_full_info()
pfullmem = namedtuple('pfullmem', pmem._fields + ('uss', ))
# fmt: on


# =====================================================================
# --- memory
# =====================================================================


def virtual_memory():
    """System virtual memory as a namedtuple."""
    total, active, inactive, wired, free, speculative = cext.virtual_mem()
    # This is how Zabbix calculates avail and used mem:
    # https://github.com/zabbix/zabbix/blob/trunk/src/libs/zbxsysinfo/
    #     osx/memory.c
    # Also see: https://github.com/giampaolo/psutil/issues/1277
    avail = inactive + free
    used = active + wired
    # This is NOT how Zabbix calculates free mem but it matches "free"
    # cmdline utility.
    free -= speculative
    percent = usage_percent((total - avail), total, round_=1)
    return svmem(total, avail, percent, used, free, active, inactive, wired)
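The avail/used arithmetic above, worked through with made-up byte counts standing in for what cext.virtual_mem() returns on macOS:

```python
GiB = 1024 ** 3

# Hypothetical snapshot: 16 GiB total, of which 6 active, 3 inactive,
# 2 wired, 5 free (1 of those speculative).
total = 16 * GiB
active, inactive, wired = 6 * GiB, 3 * GiB, 2 * GiB
free, speculative = 5 * GiB, 1 * GiB

avail = inactive + free          # 8 GiB available
used = active + wired            # 8 GiB in use
free -= speculative              # 4 GiB, matching the "free" CLI convention
percent = round((total - avail) / total * 100, 1)
print(percent)  # -> 50.0
```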


def swap_memory():
    """Swap system memory as a (total, used, free, sin, sout) tuple."""
    total, used, free, sin, sout = cext.swap_mem()
    percent = usage_percent(used, total, round_=1)
    return _common.sswap(total, used, free, percent, sin, sout)


# =====================================================================
# --- CPU
# =====================================================================


def cpu_times():
    """Return system CPU times as a namedtuple."""
    user, nice, system, idle = cext.cpu_times()
    return scputimes(user, nice, system, idle)


def per_cpu_times():
    """Return system CPU times as a named tuple."""
    ret = []
    for cpu_t in cext.per_cpu_times():
        user, nice, system, idle = cpu_t
        item = scputimes(user, nice, system, idle)
        ret.append(item)
    return ret


def cpu_count_logical():
    """Return the number of logical CPUs in the system."""
    return cext.cpu_count_logical()


def cpu_count_cores():
    """Return the number of CPU cores in the system."""
    return cext.cpu_count_cores()


def cpu_stats():
    ctx_switches, interrupts, soft_interrupts, syscalls, _traps = (
        cext.cpu_stats()
    )
    return _common.scpustats(
        ctx_switches, interrupts, soft_interrupts, syscalls
    )


def cpu_freq():
    """Return CPU frequency.
    On macOS per-cpu frequency is not supported.
    Also, the returned frequency never changes, see:
    https://arstechnica.com/civis/viewtopic.php?f=19&t=465002.
    """
    curr, min_, max_ = cext.cpu_freq()
    return [_common.scpufreq(curr, min_, max_)]


# =====================================================================
# --- disks
# =====================================================================


disk_usage = _psposix.disk_usage
disk_io_counters = cext.disk_io_counters


def disk_partitions(all=False):
    """Return mounted disk partitions as a list of namedtuples."""
    retlist = []
    partitions = cext.disk_partitions()
    for partition in partitions:
        device, mountpoint, fstype, opts = partition
        if device == 'none':
            device = ''
        if not all:
            if not os.path.isabs(device) or not os.path.exists(device):
                continue
        ntuple = _common.sdiskpart(device, mountpoint, fstype, opts)
        retlist.append(ntuple)
    return retlist


# =====================================================================
# --- sensors
# =====================================================================


def sensors_battery():
    """Return battery information."""
    try:
        percent, minsleft, power_plugged = cext.sensors_battery()
    except NotImplementedError:
        # no power source - return None according to interface
        return None
    power_plugged = power_plugged == 1
    if power_plugged:
        secsleft = _common.POWER_TIME_UNLIMITED
    elif minsleft == -1:
        secsleft = _common.POWER_TIME_UNKNOWN
    else:
        secsleft = minsleft * 60
    return _common.sbattery(percent, secsleft, power_plugged)


# =====================================================================
# --- network
# =====================================================================


net_io_counters = cext.net_io_counters
net_if_addrs = cext_posix.net_if_addrs


def net_connections(kind='inet'):
    """System-wide network connections."""
    # Note: on macOS this will fail with AccessDenied unless
    # the process is owned by root.
    ret = []
    for pid in pids():
        try:
            cons = Process(pid).net_connections(kind)
        except NoSuchProcess:
            continue
        else:
            if cons:
                for c in cons:
                    c = list(c) + [pid]
                    ret.append(_common.sconn(*c))
    return ret


def net_if_stats():
    """Get NIC stats (isup, duplex, speed, mtu)."""
    names = net_io_counters().keys()
    ret = {}
    for name in names:
        try:
            mtu = cext_posix.net_if_mtu(name)
            flags = cext_posix.net_if_flags(name)
            duplex, speed = cext_posix.net_if_duplex_speed(name)
        except OSError as err:
            # https://github.com/giampaolo/psutil/issues/1279
            if err.errno != errno.ENODEV:
                raise
        else:
            if hasattr(_common, 'NicDuplex'):
                duplex = _common.NicDuplex(duplex)
            output_flags = ','.join(flags)
            isup = 'running' in flags
            ret[name] = _common.snicstats(
                isup, duplex, speed, mtu, output_flags
            )
    return ret


# =====================================================================
# --- other system functions
# =====================================================================


def boot_time():
    """The system boot time expressed in seconds since the epoch."""
    return cext.boot_time()


def users():
    """Return currently connected users as a list of namedtuples."""
    retlist = []
    rawlist = cext.users()
    for item in rawlist:
        user, tty, hostname, tstamp, pid = item
        if tty == '~':
            continue  # reboot or shutdown
        if not tstamp:
            continue
        nt = _common.suser(user, tty or None, hostname or None, tstamp, pid)
        retlist.append(nt)
    return retlist


# =====================================================================
# --- processes
# =====================================================================


def pids():
    ls = cext.pids()
    if 0 not in ls:
        # On certain macOS versions the C pids() function doesn't
        # return PID 0, but "ps" does and the process is queryable
        # via sysctl():
        # https://travis-ci.org/giampaolo/psutil/jobs/309619941
        try:
            Process(0).create_time()
            ls.insert(0, 0)
        except NoSuchProcess:
            pass
        except AccessDenied:
            ls.insert(0, 0)
    return ls


pid_exists = _psposix.pid_exists


def is_zombie(pid):
    try:
        st = cext.proc_kinfo_oneshot(pid)[kinfo_proc_map['status']]
        return st == cext.SZOMB
    except OSError:
        return False


def wrap_exceptions(fun):
    """Decorator which translates bare OSError exceptions into
    NoSuchProcess and AccessDenied.
    """

    @functools.wraps(fun)
    def wrapper(self, *args, **kwargs):
        try:
            return fun(self, *args, **kwargs)
        except ProcessLookupError:
            if is_zombie(self.pid):
                raise ZombieProcess(self.pid, self._name, self._ppid)
            else:
                raise NoSuchProcess(self.pid, self._name)
        except PermissionError:
            raise AccessDenied(self.pid, self._name)

    return wrapper
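The same translate-at-the-boundary pattern can be shown in isolation: catch the low-level `OSError` subclasses once, at the decorator, and re-raise domain-specific exceptions. The toy exception classes below are placeholders for psutil's real `NoSuchProcess`/`AccessDenied`, not the library's own code:

```python
# Minimal sketch of the wrap_exceptions idea: translate OSError
# subclasses raised by the wrapped function into domain errors.
import functools

class NoProcess(Exception):   # stand-in for psutil.NoSuchProcess
    pass

class Denied(Exception):      # stand-in for psutil.AccessDenied
    pass

def translate_errors(fun):
    @functools.wraps(fun)
    def wrapper(*args, **kwargs):
        try:
            return fun(*args, **kwargs)
        except ProcessLookupError:
            raise NoProcess from None
        except PermissionError:
            raise Denied from None
    return wrapper

@translate_errors
def probe(pid):
    raise ProcessLookupError  # simulate a process that vanished

try:
    probe(12345)
except NoProcess:
    caught = True
```

`functools.wraps` keeps the wrapped function's name and docstring, which matters when these wrappers show up in tracebacks and `repr()`s.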


class Process:
    """Wrapper class around underlying C implementation."""

    __slots__ = ["_cache", "_name", "_ppid", "pid"]

    def __init__(self, pid):
        self.pid = pid
        self._name = None
        self._ppid = None

    @wrap_exceptions
    @memoize_when_activated
    def _get_kinfo_proc(self):
        # Note: should work with all PIDs without permission issues.
        ret = cext.proc_kinfo_oneshot(self.pid)
        assert len(ret) == len(kinfo_proc_map)
        return ret

    @wrap_exceptions
    @memoize_when_activated
    def _get_pidtaskinfo(self):
        # Note: should work only for PIDs owned by the current user.
        ret = cext.proc_pidtaskinfo_oneshot(self.pid)
        assert len(ret) == len(pidtaskinfo_map)
        return ret

    def oneshot_enter(self):
        self._get_kinfo_proc.cache_activate(self)
        self._get_pidtaskinfo.cache_activate(self)

    def oneshot_exit(self):
        self._get_kinfo_proc.cache_deactivate(self)
        self._get_pidtaskinfo.cache_deactivate(self)
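The `memoize_when_activated` decorator caches results only between `oneshot_enter()` and `oneshot_exit()`, so a burst of attribute reads costs one syscall instead of many. A simplified sketch of such an on/off cache (not psutil's actual implementation, which keys the cache per instance and function differently):

```python
# Sketch of an activatable memoizer: calls hit the real function until
# the cache is activated, then the stored result is reused.
import functools

def memoize_when_activated(fun):
    @functools.wraps(fun)
    def wrapper(self):
        try:
            return self._cache[fun]
        except (AttributeError, KeyError):
            return fun(self)          # cache off (or not yet populated)

    def cache_activate(inst):
        inst._cache = {fun: fun(inst)}   # compute once, then reuse

    def cache_deactivate(inst):
        del inst._cache

    wrapper.cache_activate = cache_activate
    wrapper.cache_deactivate = cache_deactivate
    return wrapper

class Proc:
    calls = 0

    @memoize_when_activated
    def info(self):
        Proc.calls += 1
        return {'pid': 1}

p = Proc()
p.info(); p.info()            # cache off: two real calls
before = Proc.calls
p.info.cache_activate(p)      # one more real call, result stored
p.info(); p.info()            # both served from the cache
after = Proc.calls
p.info.cache_deactivate(p)
```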

    @wrap_exceptions
    def name(self):
        name = self._get_kinfo_proc()[kinfo_proc_map['name']]
        return name if name is not None else cext.proc_name(self.pid)

    @wrap_exceptions
    def exe(self):
        return cext.proc_exe(self.pid)

    @wrap_exceptions
    def cmdline(self):
        return cext.proc_cmdline(self.pid)

    @wrap_exceptions
    def environ(self):
        return parse_environ_block(cext.proc_environ(self.pid))

    @wrap_exceptions
    def ppid(self):
        self._ppid = self._get_kinfo_proc()[kinfo_proc_map['ppid']]
        return self._ppid

    @wrap_exceptions
    def cwd(self):
        return cext.proc_cwd(self.pid)

    @wrap_exceptions
    def uids(self):
        rawtuple = self._get_kinfo_proc()
        return _common.puids(
            rawtuple[kinfo_proc_map['ruid']],
            rawtuple[kinfo_proc_map['euid']],
            rawtuple[kinfo_proc_map['suid']],
        )

    @wrap_exceptions
    def gids(self):
        rawtuple = self._get_kinfo_proc()
        return _common.pgids(
            rawtuple[kinfo_proc_map['rgid']],
            rawtuple[kinfo_proc_map['egid']],
            rawtuple[kinfo_proc_map['sgid']],
        )

    @wrap_exceptions
    def terminal(self):
        tty_nr = self._get_kinfo_proc()[kinfo_proc_map['ttynr']]
        tmap = _psposix.get_terminal_map()
        try:
            return tmap[tty_nr]
        except KeyError:
            return None

    @wrap_exceptions
    def memory_info(self):
        rawtuple = self._get_pidtaskinfo()
        return pmem(
            rawtuple[pidtaskinfo_map['rss']],
            rawtuple[pidtaskinfo_map['vms']],
            rawtuple[pidtaskinfo_map['pfaults']],
            rawtuple[pidtaskinfo_map['pageins']],
        )

    @wrap_exceptions
    def memory_full_info(self):
        basic_mem = self.memory_info()
        uss = cext.proc_memory_uss(self.pid)
        return pfullmem(*basic_mem + (uss,))

    @wrap_exceptions
    def cpu_times(self):
        rawtuple = self._get_pidtaskinfo()
        return _common.pcputimes(
            rawtuple[pidtaskinfo_map['cpuutime']],
            rawtuple[pidtaskinfo_map['cpustime']],
            # children user / system times are not retrievable (set to 0)
            0.0,
            0.0,
        )

    @wrap_exceptions
    def create_time(self):
        return self._get_kinfo_proc()[kinfo_proc_map['ctime']]

    @wrap_exceptions
    def num_ctx_switches(self):
        # The involuntary value doesn't seem to be available;
        # getrusage() numbers seem to confirm this, so we set it to 0.
        vol = self._get_pidtaskinfo()[pidtaskinfo_map['volctxsw']]
        return _common.pctxsw(vol, 0)

    @wrap_exceptions
    def num_threads(self):
        return self._get_pidtaskinfo()[pidtaskinfo_map['numthreads']]

    @wrap_exceptions
    def open_files(self):
        if self.pid == 0:
            return []
        files = []
        rawlist = cext.proc_open_files(self.pid)
        for path, fd in rawlist:
            if isfile_strict(path):
                ntuple = _common.popenfile(path, fd)
                files.append(ntuple)
        return files

    @wrap_exceptions
    def net_connections(self, kind='inet'):
        if kind not in conn_tmap:
            raise ValueError(
                "invalid %r kind argument; choose between %s"
                % (kind, ', '.join([repr(x) for x in conn_tmap]))
            )
        families, types = conn_tmap[kind]
        rawlist = cext.proc_net_connections(self.pid, families, types)
        ret = []
        for item in rawlist:
            fd, fam, type, laddr, raddr, status = item
            nt = conn_to_ntuple(
                fd, fam, type, laddr, raddr, status, TCP_STATUSES
            )
            ret.append(nt)
        return ret

    @wrap_exceptions
    def num_fds(self):
        if self.pid == 0:
            return 0
        return cext.proc_num_fds(self.pid)

    @wrap_exceptions
    def wait(self, timeout=None):
        return _psposix.wait_pid(self.pid, timeout, self._name)

    @wrap_exceptions
    def nice_get(self):
        return cext_posix.getpriority(self.pid)

    @wrap_exceptions
    def nice_set(self, value):
        return cext_posix.setpriority(self.pid, value)

    @wrap_exceptions
    def status(self):
        code = self._get_kinfo_proc()[kinfo_proc_map['status']]
        # XXX is '?' legit? (we're not supposed to return it anyway)
        return PROC_STATUSES.get(code, '?')

    @wrap_exceptions
    def threads(self):
        rawlist = cext.proc_threads(self.pid)
        retlist = []
        for thread_id, utime, stime in rawlist:
            ntuple = _common.pthread(thread_id, utime, stime)
            retlist.append(ntuple)
        return retlist
SPKok\ػ�هc�cpsutil/_pssunos.pynu�[���# Copyright (c) 2009, Giampaolo Rodola'. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

"""Sun OS Solaris platform implementation."""

import errno
import functools
import os
import socket
import subprocess
import sys
from collections import namedtuple
from socket import AF_INET

from . import _common
from . import _psposix
from . import _psutil_posix as cext_posix
from . import _psutil_sunos as cext
from ._common import AF_INET6
from ._common import AccessDenied
from ._common import NoSuchProcess
from ._common import ZombieProcess
from ._common import debug
from ._common import get_procfs_path
from ._common import isfile_strict
from ._common import memoize_when_activated
from ._common import sockfam_to_enum
from ._common import socktype_to_enum
from ._common import usage_percent
from ._compat import PY3
from ._compat import FileNotFoundError
from ._compat import PermissionError
from ._compat import ProcessLookupError
from ._compat import b


__extra__all__ = ["CONN_IDLE", "CONN_BOUND", "PROCFS_PATH"]


# =====================================================================
# --- globals
# =====================================================================


PAGE_SIZE = cext_posix.getpagesize()
AF_LINK = cext_posix.AF_LINK
IS_64_BIT = sys.maxsize > 2**32

CONN_IDLE = "IDLE"
CONN_BOUND = "BOUND"

PROC_STATUSES = {
    cext.SSLEEP: _common.STATUS_SLEEPING,
    cext.SRUN: _common.STATUS_RUNNING,
    cext.SZOMB: _common.STATUS_ZOMBIE,
    cext.SSTOP: _common.STATUS_STOPPED,
    cext.SIDL: _common.STATUS_IDLE,
    cext.SONPROC: _common.STATUS_RUNNING,  # same as run
    cext.SWAIT: _common.STATUS_WAITING,
}

TCP_STATUSES = {
    cext.TCPS_ESTABLISHED: _common.CONN_ESTABLISHED,
    cext.TCPS_SYN_SENT: _common.CONN_SYN_SENT,
    cext.TCPS_SYN_RCVD: _common.CONN_SYN_RECV,
    cext.TCPS_FIN_WAIT_1: _common.CONN_FIN_WAIT1,
    cext.TCPS_FIN_WAIT_2: _common.CONN_FIN_WAIT2,
    cext.TCPS_TIME_WAIT: _common.CONN_TIME_WAIT,
    cext.TCPS_CLOSED: _common.CONN_CLOSE,
    cext.TCPS_CLOSE_WAIT: _common.CONN_CLOSE_WAIT,
    cext.TCPS_LAST_ACK: _common.CONN_LAST_ACK,
    cext.TCPS_LISTEN: _common.CONN_LISTEN,
    cext.TCPS_CLOSING: _common.CONN_CLOSING,
    cext.PSUTIL_CONN_NONE: _common.CONN_NONE,
    cext.TCPS_IDLE: CONN_IDLE,  # sunos specific
    cext.TCPS_BOUND: CONN_BOUND,  # sunos specific
}

proc_info_map = dict(
    ppid=0,
    rss=1,
    vms=2,
    create_time=3,
    nice=4,
    num_threads=5,
    status=6,
    ttynr=7,
    uid=8,
    euid=9,
    gid=10,
    egid=11,
)
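The `proc_info_map` dict above maps field names to positions in the flat tuple returned by the C extension's `proc_basic_info()`. A minimal sketch of how it is used (the sample tuple values below are made up for illustration, not real psutil output):

```python
# Same name->index map as in the module above.
proc_info_map = dict(
    ppid=0, rss=1, vms=2, create_time=3, nice=4, num_threads=5,
    status=6, ttynr=7, uid=8, euid=9, gid=10, egid=11,
)

# A fake proc_basic_info() result, in the same positional order.
sample = (1, 2048, 8192, 1714000000.0, 20, 4, 0, -1, 1000, 1000, 100, 100)

# Named lookup into the positional tuple, as done throughout Process.
ppid = sample[proc_info_map['ppid']]
num_threads = sample[proc_info_map['num_threads']]
```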


# =====================================================================
# --- named tuples
# =====================================================================


# psutil.cpu_times()
scputimes = namedtuple('scputimes', ['user', 'system', 'idle', 'iowait'])
# psutil.cpu_times(percpu=True)
pcputimes = namedtuple(
    'pcputimes', ['user', 'system', 'children_user', 'children_system']
)
# psutil.virtual_memory()
svmem = namedtuple('svmem', ['total', 'available', 'percent', 'used', 'free'])
# psutil.Process.memory_info()
pmem = namedtuple('pmem', ['rss', 'vms'])
pfullmem = pmem
# psutil.Process.memory_maps(grouped=True)
pmmap_grouped = namedtuple(
    'pmmap_grouped', ['path', 'rss', 'anonymous', 'locked']
)
# psutil.Process.memory_maps(grouped=False)
pmmap_ext = namedtuple(
    'pmmap_ext', 'addr perms ' + ' '.join(pmmap_grouped._fields)
)


# =====================================================================
# --- memory
# =====================================================================


def virtual_memory():
    """Report virtual memory metrics."""
    # we could have done this with kstat, but IMHO this is good enough
    total = os.sysconf('SC_PHYS_PAGES') * PAGE_SIZE
    # note: there's no difference on Solaris
    free = avail = os.sysconf('SC_AVPHYS_PAGES') * PAGE_SIZE
    used = total - free
    percent = usage_percent(used, total, round_=1)
    return svmem(total, avail, percent, used, free)


def swap_memory():
    """Report swap memory metrics."""
    sin, sout = cext.swap_mem()
    # XXX
    # We are supposed to compute total/free swap as done here:
    # http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/
    #     usr/src/cmd/swap/swap.c
    # ...nevertheless I can't manage to obtain the same numbers as the
    # 'swap' cmdline utility, so let's parse its output (sigh!)
    p = subprocess.Popen(
        [
            '/usr/bin/env',
            'PATH=/usr/sbin:/sbin:%s' % os.environ['PATH'],
            'swap',
            '-l',
        ],
        stdout=subprocess.PIPE,
    )
    stdout, _ = p.communicate()
    if PY3:
        stdout = stdout.decode(sys.stdout.encoding)
    if p.returncode != 0:
        raise RuntimeError("'swap -l' failed (retcode=%s)" % p.returncode)

    lines = stdout.strip().split('\n')[1:]
    if not lines:
        msg = 'no swap device(s) configured'
        raise RuntimeError(msg)
    total = free = 0
    for line in lines:
        line = line.split()
        t, f = line[3:5]
        total += int(int(t) * 512)
        free += int(int(f) * 512)
    used = total - free
    percent = usage_percent(used, total, round_=1)
    return _common.sswap(
        total, used, free, percent, sin * PAGE_SIZE, sout * PAGE_SIZE
    )

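The parsing loop in `swap_memory()` above reads columns 4 and 5 of `swap -l` output as 512-byte block counts. A standalone sketch against canned output (the sample text mimics Solaris `swap -l` format; the real function shells out to the `swap` utility):

```python
# Canned 'swap -l' output; the header line is skipped, columns 4 and 5
# are total and free blocks, one block being 512 bytes.
SAMPLE_SWAP_L = """\
swapfile             dev  swaplo blocks   free
/dev/zvol/dsk/rpool/swap 256,1 16 2097136 2097136
"""

def parse_swap_l(text):
    total = free = 0
    for line in text.strip().split('\n')[1:]:
        fields = line.split()
        t, f = fields[3:5]
        total += int(t) * 512
        free += int(f) * 512
    return total, free

total, free = parse_swap_l(SAMPLE_SWAP_L)  # both 2097136 * 512 bytes
```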

# =====================================================================
# --- CPU
# =====================================================================


def cpu_times():
    """Return system-wide CPU times as a named tuple."""
    ret = cext.per_cpu_times()
    return scputimes(*[sum(x) for x in zip(*ret)])

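The one-liner in `cpu_times()` above sums per-CPU tuples element-wise via `zip(*...)`. Seen in isolation, with illustrative values:

```python
# Two fake per-CPU tuples in (user, system, idle, iowait) order.
per_cpu = [
    (10.0, 5.0, 80.0, 1.0),   # cpu0
    (12.0, 4.0, 79.0, 2.0),   # cpu1
]

# zip(*per_cpu) transposes rows into columns; each column is then
# summed, yielding system-wide totals per field.
totals = [sum(col) for col in zip(*per_cpu)]  # [22.0, 9.0, 159.0, 3.0]
```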

def per_cpu_times():
    """Return system per-CPU times as a list of named tuples."""
    ret = cext.per_cpu_times()
    return [scputimes(*x) for x in ret]


def cpu_count_logical():
    """Return the number of logical CPUs in the system."""
    try:
        return os.sysconf("SC_NPROCESSORS_ONLN")
    except ValueError:
        # mimic os.cpu_count() behavior
        return None


def cpu_count_cores():
    """Return the number of CPU cores in the system."""
    return cext.cpu_count_cores()


def cpu_stats():
    """Return various CPU stats as a named tuple."""
    ctx_switches, interrupts, syscalls, _traps = cext.cpu_stats()
    soft_interrupts = 0
    return _common.scpustats(
        ctx_switches, interrupts, soft_interrupts, syscalls
    )


# =====================================================================
# --- disks
# =====================================================================


disk_io_counters = cext.disk_io_counters
disk_usage = _psposix.disk_usage


def disk_partitions(all=False):
    """Return system disk partitions."""
    # TODO - the filtering logic should be better checked so that
    # it tries to reflect 'df' as much as possible
    retlist = []
    partitions = cext.disk_partitions()
    for partition in partitions:
        device, mountpoint, fstype, opts = partition
        if device == 'none':
            device = ''
        if not all:
            # Unlike, say, Linux, we don't have a list of common fs
            # types, so the best we can do, AFAIK, is to filter out
            # filesystems with a total size of 0.
            try:
                if not disk_usage(mountpoint).total:
                    continue
            except OSError as err:
                # https://github.com/giampaolo/psutil/issues/1674
                debug("skipping %r: %s" % (mountpoint, err))
                continue
        ntuple = _common.sdiskpart(device, mountpoint, fstype, opts)
        retlist.append(ntuple)
    return retlist


# =====================================================================
# --- network
# =====================================================================


net_io_counters = cext.net_io_counters
net_if_addrs = cext_posix.net_if_addrs


def net_connections(kind, _pid=-1):
    """Return socket connections.  If pid == -1 return system-wide
    connections (as opposed to connections opened by one process only).
    Only INET sockets are returned (UNIX are not).
    """
    cmap = _common.conn_tmap.copy()
    if _pid == -1:
        cmap.pop('unix', 0)
    if kind not in cmap:
        raise ValueError(
            "invalid %r kind argument; choose between %s"
            % (kind, ', '.join([repr(x) for x in cmap]))
        )
    families, types = _common.conn_tmap[kind]
    rawlist = cext.net_connections(_pid)
    ret = set()
    for item in rawlist:
        fd, fam, type_, laddr, raddr, status, pid = item
        if fam not in families:
            continue
        if type_ not in types:
            continue
        # TODO: refactor and use _common.conn_to_ntuple.
        if fam in (AF_INET, AF_INET6):
            if laddr:
                laddr = _common.addr(*laddr)
            if raddr:
                raddr = _common.addr(*raddr)
        status = TCP_STATUSES[status]
        fam = sockfam_to_enum(fam)
        type_ = socktype_to_enum(type_)
        if _pid == -1:
            nt = _common.sconn(fd, fam, type_, laddr, raddr, status, pid)
        else:
            nt = _common.pconn(fd, fam, type_, laddr, raddr, status)
        ret.add(nt)
    return list(ret)


def net_if_stats():
    """Get NIC stats (isup, duplex, speed, mtu)."""
    ret = cext.net_if_stats()
    for name, items in ret.items():
        isup, duplex, speed, mtu = items
        if hasattr(_common, 'NicDuplex'):
            duplex = _common.NicDuplex(duplex)
        ret[name] = _common.snicstats(isup, duplex, speed, mtu, '')
    return ret


# =====================================================================
# --- other system functions
# =====================================================================


def boot_time():
    """The system boot time expressed in seconds since the epoch."""
    return cext.boot_time()


def users():
    """Return currently connected users as a list of namedtuples."""
    retlist = []
    rawlist = cext.users()
    localhost = (':0.0', ':0')
    for item in rawlist:
        user, tty, hostname, tstamp, user_process, pid = item
        # note: the underlying C function includes entries about
        # system boot, run level and others.  We might want
        # to use them in the future.
        if not user_process:
            continue
        if hostname in localhost:
            hostname = 'localhost'
        nt = _common.suser(user, tty, hostname, tstamp, pid)
        retlist.append(nt)
    return retlist


# =====================================================================
# --- processes
# =====================================================================


def pids():
    """Returns a list of PIDs currently running on the system."""
    return [int(x) for x in os.listdir(b(get_procfs_path())) if x.isdigit()]


def pid_exists(pid):
    """Check for the existence of a unix pid."""
    return _psposix.pid_exists(pid)


def wrap_exceptions(fun):
    """Call callable into a try/except clause and translate ENOENT,
    EACCES and EPERM in NoSuchProcess or AccessDenied exceptions.
    """

    @functools.wraps(fun)
    def wrapper(self, *args, **kwargs):
        try:
            return fun(self, *args, **kwargs)
        except (FileNotFoundError, ProcessLookupError):
            # ENOENT (no such file or directory) gets raised on open().
            # ESRCH (no such process) can get raised on read() if
            # process is gone in meantime.
            if not pid_exists(self.pid):
                raise NoSuchProcess(self.pid, self._name)
            else:
                raise ZombieProcess(self.pid, self._name, self._ppid)
        except PermissionError:
            raise AccessDenied(self.pid, self._name)
        except OSError:
            if self.pid == 0:
                if 0 in pids():
                    raise AccessDenied(self.pid, self._name)
                else:
                    raise
            raise

    return wrapper

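The decorator pattern above (translate low-level OS errors into domain exceptions) can be sketched in standalone form. `NSPError` and `read_proc_file` below are hypothetical names standing in for psutil's `NoSuchProcess` and its procfs readers:

```python
import functools


class NSPError(Exception):
    """Stand-in for psutil's NoSuchProcess."""


def translate_errors(fun):
    # Same shape as wrap_exceptions above: catch the OS-level error
    # and re-raise it as a domain exception.
    @functools.wraps(fun)
    def wrapper(*args, **kwargs):
        try:
            return fun(*args, **kwargs)
        except (FileNotFoundError, ProcessLookupError):
            raise NSPError("process is gone")
    return wrapper


@translate_errors
def read_proc_file(pid):
    # A path guaranteed not to exist, so open() raises FileNotFoundError.
    with open('/nonexistent/psutil-sketch-%s' % pid) as f:
        return f.read()


try:
    read_proc_file(12345)
except NSPError:
    translated = True  # ENOENT became the domain exception
```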

class Process:
    """Wrapper class around underlying C implementation."""

    __slots__ = ["_cache", "_name", "_ppid", "_procfs_path", "pid"]

    def __init__(self, pid):
        self.pid = pid
        self._name = None
        self._ppid = None
        self._procfs_path = get_procfs_path()

    def _assert_alive(self):
        """Raise NSP if the process disappeared on us."""
        # Needed for those C functions which do not raise NSP and may
        # return an incorrect or incomplete result instead.
        os.stat('%s/%s' % (self._procfs_path, self.pid))

    def oneshot_enter(self):
        self._proc_name_and_args.cache_activate(self)
        self._proc_basic_info.cache_activate(self)
        self._proc_cred.cache_activate(self)

    def oneshot_exit(self):
        self._proc_name_and_args.cache_deactivate(self)
        self._proc_basic_info.cache_deactivate(self)
        self._proc_cred.cache_deactivate(self)

    @wrap_exceptions
    @memoize_when_activated
    def _proc_name_and_args(self):
        return cext.proc_name_and_args(self.pid, self._procfs_path)

    @wrap_exceptions
    @memoize_when_activated
    def _proc_basic_info(self):
        if self.pid == 0 and not os.path.exists(
            '%s/%s/psinfo' % (self._procfs_path, self.pid)
        ):
            raise AccessDenied(self.pid)
        ret = cext.proc_basic_info(self.pid, self._procfs_path)
        assert len(ret) == len(proc_info_map)
        return ret

    @wrap_exceptions
    @memoize_when_activated
    def _proc_cred(self):
        return cext.proc_cred(self.pid, self._procfs_path)

    @wrap_exceptions
    def name(self):
        # note: max len == 15
        return self._proc_name_and_args()[0]

    @wrap_exceptions
    def exe(self):
        try:
            return os.readlink(
                "%s/%s/path/a.out" % (self._procfs_path, self.pid)
            )
        except OSError:
            pass  # continue and guess the exe name from the cmdline
        # Will be guessed later from cmdline but we want to explicitly
        # invoke cmdline here in order to get an AccessDenied
        # exception if the user has not enough privileges.
        self.cmdline()
        return ""

    @wrap_exceptions
    def cmdline(self):
        return self._proc_name_and_args()[1].split(' ')

    @wrap_exceptions
    def environ(self):
        return cext.proc_environ(self.pid, self._procfs_path)

    @wrap_exceptions
    def create_time(self):
        return self._proc_basic_info()[proc_info_map['create_time']]

    @wrap_exceptions
    def num_threads(self):
        return self._proc_basic_info()[proc_info_map['num_threads']]

    @wrap_exceptions
    def nice_get(self):
        # Note #1: getpriority(3) doesn't work for realtime processes.
        # Psinfo is what ps uses, see:
        # https://github.com/giampaolo/psutil/issues/1194
        return self._proc_basic_info()[proc_info_map['nice']]

    @wrap_exceptions
    def nice_set(self, value):
        if self.pid in (2, 3):
            # Special case PIDs: internally setpriority(3) returns ESRCH
            # (no such process) for them, no matter what.
            # The processes actually exist though, as they have a name,
            # creation time, etc.
            raise AccessDenied(self.pid, self._name)
        return cext_posix.setpriority(self.pid, value)

    @wrap_exceptions
    def ppid(self):
        self._ppid = self._proc_basic_info()[proc_info_map['ppid']]
        return self._ppid

    @wrap_exceptions
    def uids(self):
        try:
            real, effective, saved, _, _, _ = self._proc_cred()
        except AccessDenied:
            real = self._proc_basic_info()[proc_info_map['uid']]
            effective = self._proc_basic_info()[proc_info_map['euid']]
            saved = None
        return _common.puids(real, effective, saved)

    @wrap_exceptions
    def gids(self):
        try:
            _, _, _, real, effective, saved = self._proc_cred()
        except AccessDenied:
            real = self._proc_basic_info()[proc_info_map['gid']]
            effective = self._proc_basic_info()[proc_info_map['egid']]
            saved = None
        return _common.puids(real, effective, saved)

    @wrap_exceptions
    def cpu_times(self):
        try:
            times = cext.proc_cpu_times(self.pid, self._procfs_path)
        except OSError as err:
            if err.errno == errno.EOVERFLOW and not IS_64_BIT:
                # We may get here if we attempt to query a 64bit process
                # with a 32bit python.
                # Error originates from read() and also tools like "cat"
                # fail in the same way (!).
                # Since there simply is no way to determine CPU times we
                # return 0.0 as a fallback. See:
                # https://github.com/giampaolo/psutil/issues/857
                times = (0.0, 0.0, 0.0, 0.0)
            else:
                raise
        return _common.pcputimes(*times)

    @wrap_exceptions
    def cpu_num(self):
        return cext.proc_cpu_num(self.pid, self._procfs_path)

    @wrap_exceptions
    def terminal(self):
        procfs_path = self._procfs_path
        hit_enoent = False
        tty = self._proc_basic_info()[proc_info_map['ttynr']]
        if tty != cext.PRNODEV:
            for x in (0, 1, 2, 255):
                try:
                    return os.readlink(
                        '%s/%d/path/%d' % (procfs_path, self.pid, x)
                    )
                except FileNotFoundError:
                    hit_enoent = True
                    continue
        if hit_enoent:
            self._assert_alive()

    @wrap_exceptions
    def cwd(self):
        # /proc/PID/path/cwd may not be resolved by readlink() even if
        # it exists (ls shows it). If that's the case and the process
        # is still alive return None (we can return None also on BSD).
        # Reference: http://goo.gl/55XgO
        procfs_path = self._procfs_path
        try:
            return os.readlink("%s/%s/path/cwd" % (procfs_path, self.pid))
        except FileNotFoundError:
            os.stat("%s/%s" % (procfs_path, self.pid))  # raise NSP or AD
            return ""

    @wrap_exceptions
    def memory_info(self):
        ret = self._proc_basic_info()
        rss = ret[proc_info_map['rss']] * 1024
        vms = ret[proc_info_map['vms']] * 1024
        return pmem(rss, vms)

    memory_full_info = memory_info

    @wrap_exceptions
    def status(self):
        code = self._proc_basic_info()[proc_info_map['status']]
        # XXX is '?' legit? (we're not supposed to return it anyway)
        return PROC_STATUSES.get(code, '?')

    @wrap_exceptions
    def threads(self):
        procfs_path = self._procfs_path
        ret = []
        tids = os.listdir('%s/%d/lwp' % (procfs_path, self.pid))
        hit_enoent = False
        for tid in tids:
            tid = int(tid)
            try:
                utime, stime = cext.query_process_thread(
                    self.pid, tid, procfs_path
                )
            except EnvironmentError as err:
                if err.errno == errno.EOVERFLOW and not IS_64_BIT:
                    # We may get here if we attempt to query a 64bit process
                    # with a 32bit python.
                    # Error originates from read() and also tools like "cat"
                    # fail in the same way (!).
                    # Since there is no way to determine this thread's
                    # CPU times we skip it. See:
                    # https://github.com/giampaolo/psutil/issues/857
                    continue
                # ENOENT == thread gone in meantime
                if err.errno == errno.ENOENT:
                    hit_enoent = True
                    continue
                raise
            else:
                nt = _common.pthread(tid, utime, stime)
                ret.append(nt)
        if hit_enoent:
            self._assert_alive()
        return ret

    @wrap_exceptions
    def open_files(self):
        retlist = []
        hit_enoent = False
        procfs_path = self._procfs_path
        pathdir = '%s/%d/path' % (procfs_path, self.pid)
        for fd in os.listdir('%s/%d/fd' % (procfs_path, self.pid)):
            path = os.path.join(pathdir, fd)
            if os.path.islink(path):
                try:
                    file = os.readlink(path)
                except FileNotFoundError:
                    hit_enoent = True
                    continue
                else:
                    if isfile_strict(file):
                        retlist.append(_common.popenfile(file, int(fd)))
        if hit_enoent:
            self._assert_alive()
        return retlist

    def _get_unix_sockets(self, pid):
        """Get UNIX sockets used by process by parsing 'pfiles' output."""
        # TODO: rewrite this in C (...but the damn netstat source code
        # does not include this part! Argh!!)
        cmd = ["pfiles", str(pid)]
        p = subprocess.Popen(
            cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE
        )
        stdout, stderr = p.communicate()
        if PY3:
            stdout, stderr = (
                x.decode(sys.stdout.encoding) for x in (stdout, stderr)
            )
        if p.returncode != 0:
            if 'permission denied' in stderr.lower():
                raise AccessDenied(self.pid, self._name)
            if 'no such process' in stderr.lower():
                raise NoSuchProcess(self.pid, self._name)
            raise RuntimeError("%r command error\n%s" % (cmd, stderr))

        lines = stdout.split('\n')[2:]
        for i, line in enumerate(lines):
            line = line.lstrip()
            if line.startswith('sockname: AF_UNIX'):
                path = line.split(' ', 2)[2]
                type = lines[i - 2].strip()
                if type == 'SOCK_STREAM':
                    type = socket.SOCK_STREAM
                elif type == 'SOCK_DGRAM':
                    type = socket.SOCK_DGRAM
                else:
                    type = -1
                yield (-1, socket.AF_UNIX, type, path, "", _common.CONN_NONE)

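The `_get_unix_sockets()` scan above looks for `sockname: AF_UNIX <path>` lines in `pfiles` output. A sketch of the same extraction against canned text (the sample mimics the relevant fragment of Solaris `pfiles` format; the real method runs the command):

```python
import socket

SAMPLE_PFILES = """\
123:  /usr/lib/example
  Current rlimit: 256 file descriptors
   4: S_IFSOCK mode:0666 dev:559,0 ino:40000 uid:0 gid:0 rdev:0,0
        O_RDWR
        SOCK_STREAM
        SO_REUSEADDR,SO_KEEPALIVE
        sockname: AF_UNIX /var/run/example.sock
"""


def unix_sockets(text):
    # Skip the first two header lines, then pull the path out of each
    # 'sockname: AF_UNIX <path>' line, as the method above does.
    lines = text.split('\n')[2:]
    for line in lines:
        line = line.lstrip()
        if line.startswith('sockname: AF_UNIX'):
            path = line.split(' ', 2)[2]
            yield (socket.AF_UNIX, path)


found = list(unix_sockets(SAMPLE_PFILES))
```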
    @wrap_exceptions
    def net_connections(self, kind='inet'):
        ret = net_connections(kind, _pid=self.pid)
        # The underlying C implementation retrieves all OS connections
        # and filters them by PID.  At this point we can't tell whether
        # an empty list means there were no connections for process or
        # process is no longer active so we force NSP in case the PID
        # is no longer there.
        if not ret:
            # will raise NSP if process is gone
            os.stat('%s/%s' % (self._procfs_path, self.pid))

        # UNIX sockets
        if kind in ('all', 'unix'):
            ret.extend([
                _common.pconn(*conn)
                for conn in self._get_unix_sockets(self.pid)
            ])
        return ret

    nt_mmap_grouped = namedtuple('mmap', 'path rss anon locked')
    nt_mmap_ext = namedtuple('mmap', 'addr perms path rss anon locked')

    @wrap_exceptions
    def memory_maps(self):
        def toaddr(start, end):
            return '%s-%s' % (
                hex(start)[2:].strip('L'),
                hex(end)[2:].strip('L'),
            )

        procfs_path = self._procfs_path
        retlist = []
        try:
            rawlist = cext.proc_memory_maps(self.pid, procfs_path)
        except OSError as err:
            if err.errno == errno.EOVERFLOW and not IS_64_BIT:
                # We may get here if we attempt to query a 64bit process
                # with a 32bit python.
                # Error originates from read() and also tools like "cat"
                # fail in the same way (!).
                # Since there simply is no way to determine the memory
                # maps we return an empty list as a fallback. See:
                # https://github.com/giampaolo/psutil/issues/857
                return []
            else:
                raise
        hit_enoent = False
        for item in rawlist:
            addr, addrsize, perm, name, rss, anon, locked = item
            addr = toaddr(addr, addrsize)
            if not name.startswith('['):
                try:
                    name = os.readlink(
                        '%s/%s/path/%s' % (procfs_path, self.pid, name)
                    )
                except OSError as err:
                    if err.errno == errno.ENOENT:
                        # sometimes the link may not be resolved by
                        # readlink() even if it exists (ls shows it).
                        # If that's the case we just return the
                        # unresolved link path.
                        # This seems an inconsistency with /proc similar
                        # to: http://goo.gl/55XgO
                        name = '%s/%s/path/%s' % (procfs_path, self.pid, name)
                        hit_enoent = True
                    else:
                        raise
            retlist.append((addr, perm, name, rss, anon, locked))
        if hit_enoent:
            self._assert_alive()
        return retlist

    @wrap_exceptions
    def num_fds(self):
        return len(os.listdir("%s/%s/fd" % (self._procfs_path, self.pid)))

    @wrap_exceptions
    def num_ctx_switches(self):
        return _common.pctxsw(
            *cext.proc_num_ctx_switches(self.pid, self._procfs_path)
        )

    @wrap_exceptions
    def wait(self, timeout=None):
        return _psposix.wait_pid(self.pid, timeout, self._name)
PKok\;�`�}�}psutil/_psbsd.pynu�[���# Copyright (c) 2009, Giampaolo Rodola'. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

"""FreeBSD, OpenBSD and NetBSD platforms implementation."""

import contextlib
import errno
import functools
import os
from collections import defaultdict
from collections import namedtuple
from xml.etree import ElementTree  # noqa ICN001

from . import _common
from . import _psposix
from . import _psutil_bsd as cext
from . import _psutil_posix as cext_posix
from ._common import FREEBSD
from ._common import NETBSD
from ._common import OPENBSD
from ._common import AccessDenied
from ._common import NoSuchProcess
from ._common import ZombieProcess
from ._common import conn_tmap
from ._common import conn_to_ntuple
from ._common import debug
from ._common import memoize
from ._common import memoize_when_activated
from ._common import usage_percent
from ._compat import FileNotFoundError
from ._compat import PermissionError
from ._compat import ProcessLookupError
from ._compat import which


__extra__all__ = []


# =====================================================================
# --- globals
# =====================================================================


if FREEBSD:
    PROC_STATUSES = {
        cext.SIDL: _common.STATUS_IDLE,
        cext.SRUN: _common.STATUS_RUNNING,
        cext.SSLEEP: _common.STATUS_SLEEPING,
        cext.SSTOP: _common.STATUS_STOPPED,
        cext.SZOMB: _common.STATUS_ZOMBIE,
        cext.SWAIT: _common.STATUS_WAITING,
        cext.SLOCK: _common.STATUS_LOCKED,
    }
elif OPENBSD:
    PROC_STATUSES = {
        cext.SIDL: _common.STATUS_IDLE,
        cext.SSLEEP: _common.STATUS_SLEEPING,
        cext.SSTOP: _common.STATUS_STOPPED,
        # According to /usr/include/sys/proc.h SZOMB is unused.
        # test_zombie_process() shows that SDEAD is the right
        # equivalent. Also it appears there's no equivalent of
        # psutil.STATUS_DEAD. SDEAD really means STATUS_ZOMBIE.
        # cext.SZOMB: _common.STATUS_ZOMBIE,
        cext.SDEAD: _common.STATUS_ZOMBIE,
        cext.SZOMB: _common.STATUS_ZOMBIE,
        # From http://www.eecs.harvard.edu/~margo/cs161/videos/proc.h.txt
        # OpenBSD has SRUN and SONPROC: SRUN indicates that a process
        # is runnable but *not* yet running, i.e. is on a run queue.
        # SONPROC indicates that the process is actually executing on
        # a CPU, i.e. it is no longer on a run queue.
        # As such we'll map SRUN to STATUS_WAKING and SONPROC to
        # STATUS_RUNNING
        cext.SRUN: _common.STATUS_WAKING,
        cext.SONPROC: _common.STATUS_RUNNING,
    }
elif NETBSD:
    PROC_STATUSES = {
        cext.SIDL: _common.STATUS_IDLE,
        cext.SSLEEP: _common.STATUS_SLEEPING,
        cext.SSTOP: _common.STATUS_STOPPED,
        cext.SZOMB: _common.STATUS_ZOMBIE,
        cext.SRUN: _common.STATUS_WAKING,
        cext.SONPROC: _common.STATUS_RUNNING,
    }

TCP_STATUSES = {
    cext.TCPS_ESTABLISHED: _common.CONN_ESTABLISHED,
    cext.TCPS_SYN_SENT: _common.CONN_SYN_SENT,
    cext.TCPS_SYN_RECEIVED: _common.CONN_SYN_RECV,
    cext.TCPS_FIN_WAIT_1: _common.CONN_FIN_WAIT1,
    cext.TCPS_FIN_WAIT_2: _common.CONN_FIN_WAIT2,
    cext.TCPS_TIME_WAIT: _common.CONN_TIME_WAIT,
    cext.TCPS_CLOSED: _common.CONN_CLOSE,
    cext.TCPS_CLOSE_WAIT: _common.CONN_CLOSE_WAIT,
    cext.TCPS_LAST_ACK: _common.CONN_LAST_ACK,
    cext.TCPS_LISTEN: _common.CONN_LISTEN,
    cext.TCPS_CLOSING: _common.CONN_CLOSING,
    cext.PSUTIL_CONN_NONE: _common.CONN_NONE,
}

PAGESIZE = cext_posix.getpagesize()
AF_LINK = cext_posix.AF_LINK

HAS_PER_CPU_TIMES = hasattr(cext, "per_cpu_times")
HAS_PROC_NUM_THREADS = hasattr(cext, "proc_num_threads")
HAS_PROC_OPEN_FILES = hasattr(cext, 'proc_open_files')
HAS_PROC_NUM_FDS = hasattr(cext, 'proc_num_fds')

kinfo_proc_map = dict(
    ppid=0,
    status=1,
    real_uid=2,
    effective_uid=3,
    saved_uid=4,
    real_gid=5,
    effective_gid=6,
    saved_gid=7,
    ttynr=8,
    create_time=9,
    ctx_switches_vol=10,
    ctx_switches_unvol=11,
    read_io_count=12,
    write_io_count=13,
    user_time=14,
    sys_time=15,
    ch_user_time=16,
    ch_sys_time=17,
    rss=18,
    vms=19,
    memtext=20,
    memdata=21,
    memstack=22,
    cpunum=23,
    name=24,
)


# =====================================================================
# --- named tuples
# =====================================================================


# fmt: off
# psutil.virtual_memory()
svmem = namedtuple(
    'svmem', ['total', 'available', 'percent', 'used', 'free',
              'active', 'inactive', 'buffers', 'cached', 'shared', 'wired'])
# psutil.cpu_times()
scputimes = namedtuple(
    'scputimes', ['user', 'nice', 'system', 'idle', 'irq'])
# psutil.Process.memory_info()
pmem = namedtuple('pmem', ['rss', 'vms', 'text', 'data', 'stack'])
# psutil.Process.memory_full_info()
pfullmem = pmem
# psutil.Process.cpu_times()
pcputimes = namedtuple('pcputimes',
                       ['user', 'system', 'children_user', 'children_system'])
# psutil.Process.memory_maps(grouped=True)
pmmap_grouped = namedtuple(
    'pmmap_grouped', 'path rss private ref_count shadow_count')
# psutil.Process.memory_maps(grouped=False)
pmmap_ext = namedtuple(
    'pmmap_ext', 'addr perms path rss private ref_count shadow_count')
# psutil.disk_io_counters()
if FREEBSD:
    sdiskio = namedtuple('sdiskio', ['read_count', 'write_count',
                                     'read_bytes', 'write_bytes',
                                     'read_time', 'write_time',
                                     'busy_time'])
else:
    sdiskio = namedtuple('sdiskio', ['read_count', 'write_count',
                                     'read_bytes', 'write_bytes'])
# fmt: on


# =====================================================================
# --- memory
# =====================================================================


def virtual_memory():
    mem = cext.virtual_mem()
    if NETBSD:
        total, free, active, inactive, wired, cached = mem
        # On NetBSD buffers and shared mem are determined via /proc.
        # The C ext sets them to 0.
        with open('/proc/meminfo', 'rb') as f:
            for line in f:
                if line.startswith(b'Buffers:'):
                    buffers = int(line.split()[1]) * 1024
                elif line.startswith(b'MemShared:'):
                    shared = int(line.split()[1]) * 1024
        # Previously, avail was calculated as (inactive + cached +
        # free), same as zabbix, but it turned out it could exceed
        # total (see #2233), so zabbix seems to be wrong. htop
        # calculates it differently, and its used value seems more
        # realistic, so let's match htop.
        # https://github.com/htop-dev/htop/blob/e7f447b/netbsd/NetBSDProcessList.c#L162  # noqa
        # https://github.com/zabbix/zabbix/blob/af5e0f8/src/libs/zbxsysinfo/netbsd/memory.c#L135  # noqa
        used = active + wired
        avail = total - used
    else:
        total, free, active, inactive, wired, cached, buffers, shared = mem
        # matches freebsd-memory CLI:
        # * https://people.freebsd.org/~rse/dist/freebsd-memory
        # * https://www.cyberciti.biz/files/scripts/freebsd-memory.pl.txt
        # matches zabbix:
        # * https://github.com/zabbix/zabbix/blob/af5e0f8/src/libs/zbxsysinfo/freebsd/memory.c#L143  # noqa
        avail = inactive + cached + free
        used = active + wired + cached

    percent = usage_percent((total - avail), total, round_=1)
    return svmem(
        total,
        avail,
        percent,
        used,
        free,
        active,
        inactive,
        buffers,
        cached,
        shared,
        wired,
    )
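

# A minimal sketch of the FreeBSD-branch accounting above, with made-up
# byte values (not real sysctl output); the helper name is hypothetical
# and not part of the psutil API.
def _freebsd_mem_math(total, free, active, inactive, wired, cached):
    avail = inactive + cached + free   # same formulas as virtual_memory()
    used = active + wired + cached
    percent = round((total - avail) / total * 100, 1)
    return avail, used, percent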


def swap_memory():
    """System swap memory as (total, used, free, sin, sout) namedtuple."""
    total, used, free, sin, sout = cext.swap_mem()
    percent = usage_percent(used, total, round_=1)
    return _common.sswap(total, used, free, percent, sin, sout)


# =====================================================================
# --- CPU
# =====================================================================


def cpu_times():
    """Return system per-CPU times as a namedtuple."""
    user, nice, system, idle, irq = cext.cpu_times()
    return scputimes(user, nice, system, idle, irq)


if HAS_PER_CPU_TIMES:

    def per_cpu_times():
        """Return system CPU times as a namedtuple."""
        ret = []
        for cpu_t in cext.per_cpu_times():
            user, nice, system, idle, irq = cpu_t
            item = scputimes(user, nice, system, idle, irq)
            ret.append(item)
        return ret

else:
    # XXX
    # Ok, this is very dirty.
    # On FreeBSD < 8 we cannot gather per-cpu information, see:
    # https://github.com/giampaolo/psutil/issues/226
    # If num cpus > 1, on first call we return single cpu times to avoid a
    # crash at psutil import time.
    # Next calls will fail with NotImplementedError
    def per_cpu_times():
        """Return system CPU times as a namedtuple."""
        if cpu_count_logical() == 1:
            return [cpu_times()]
        if per_cpu_times.__called__:
            msg = "supported only starting from FreeBSD 8"
            raise NotImplementedError(msg)
        per_cpu_times.__called__ = True
        return [cpu_times()]

    per_cpu_times.__called__ = False
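

# The fallback above uses a function attribute as a one-shot sentinel.
# A generic sketch of the same pattern (hypothetical helper, not part
# of the psutil API):
def _fail_after_first_call():
    if _fail_after_first_call.__called__:
        raise NotImplementedError("no longer supported")
    _fail_after_first_call.__called__ = True
    return "first call succeeds"

_fail_after_first_call.__called__ = False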


def cpu_count_logical():
    """Return the number of logical CPUs in the system."""
    return cext.cpu_count_logical()


if OPENBSD or NETBSD:

    def cpu_count_cores():
        # OpenBSD and NetBSD do not implement this.
        return 1 if cpu_count_logical() == 1 else None

else:

    def cpu_count_cores():
        """Return the number of CPU cores in the system."""
        # From the C module we'll get an XML string similar to this:
        # http://manpages.ubuntu.com/manpages/precise/man4/smp.4freebsd.html
        # We may get None in case "sysctl kern.sched.topology_spec"
        # is not supported on this BSD version, in which case we'll mimic
        # os.cpu_count() and return None.
        ret = None
        s = cext.cpu_topology()
        if s is not None:
            # get rid of padding chars appended at the end of the string
            index = s.rfind("</groups>")
            if index != -1:
                s = s[: index + 9]
                root = ElementTree.fromstring(s)
                try:
                    ret = len(root.findall('group/children/group/cpu')) or None
                finally:
                    # needed otherwise it will memleak
                    root.clear()
        if not ret:
            # If logical CPUs == 1 it's obvious we have only 1 core.
            if cpu_count_logical() == 1:
                return 1
        return ret
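

# cpu_count_cores() counts <cpu> nodes nested two <group> levels deep.
# A sketch against a hand-written topology string; the real
# "kern.sched.topology_spec" output is richer, so this shape is only an
# assumption based on the XPath used above.
def _cores_from_topology(s):
    index = s.rfind("</groups>")
    if index == -1:
        return None
    # drop any padding chars appended after the closing tag
    root = ElementTree.fromstring(s[: index + 9])
    try:
        return len(root.findall('group/children/group/cpu')) or None
    finally:
        root.clear()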


def cpu_stats():
    """Return various CPU stats as a named tuple."""
    if FREEBSD:
        # Note: the C ext is returning some metrics we are not exposing:
        # traps.
        ctxsw, intrs, soft_intrs, syscalls, _traps = cext.cpu_stats()
    elif NETBSD:
        # XXX
        # Note about intrs: the C extension returns 0. intrs
        # can be determined via /proc/stat; it has the same value as
        # soft_intrs though, so the kernel may be faking it (?).
        #
        # Note about syscalls: the C extension always sets it to 0 (?).
        #
        # Note: the C ext is returning some metrics we are not exposing:
        # traps, faults and forks.
        ctxsw, intrs, soft_intrs, syscalls, _traps, _faults, _forks = (
            cext.cpu_stats()
        )
        with open('/proc/stat', 'rb') as f:
            for line in f:
                if line.startswith(b'intr'):
                    intrs = int(line.split()[1])
    elif OPENBSD:
        # Note: the C ext is returning some metrics we are not exposing:
        # traps, faults and forks.
        ctxsw, intrs, soft_intrs, syscalls, _traps, _faults, _forks = (
            cext.cpu_stats()
        )
    return _common.scpustats(ctxsw, intrs, soft_intrs, syscalls)


if FREEBSD:

    def cpu_freq():
        """Return frequency metrics for CPUs. As of Dec 2018 only
        CPU 0 appears to be supported by FreeBSD and all other cores
        match the frequency of CPU 0.
        """
        ret = []
        num_cpus = cpu_count_logical()
        for cpu in range(num_cpus):
            try:
                current, available_freq = cext.cpu_freq(cpu)
            except NotImplementedError:
                continue
            min_freq = max_freq = None  # in case available_freq is empty
            if available_freq:
                try:
                    min_freq = int(available_freq.split(" ")[-1].split("/")[0])
                except (IndexError, ValueError):
                    min_freq = None
                try:
                    max_freq = int(available_freq.split(" ")[0].split("/")[0])
                except (IndexError, ValueError):
                    max_freq = None
            ret.append(_common.scpufreq(current, min_freq, max_freq))
        return ret

elif OPENBSD:

    def cpu_freq():
        curr = float(cext.cpu_freq())
        return [_common.scpufreq(curr, 0.0, 0.0)]
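

# On FreeBSD "available_freq" above comes back as "freq/power" pairs,
# highest frequency first, e.g. "2400/85000 1200/40000". A sketch of
# the min/max extraction done in cpu_freq() (hypothetical helper; the
# sample string is illustrative, not real sysctl output):
def _minmax_freq(available_freq):
    try:
        # last pair holds the lowest frequency
        min_freq = int(available_freq.split(" ")[-1].split("/")[0])
    except (IndexError, ValueError):
        min_freq = None
    try:
        # first pair holds the highest frequency
        max_freq = int(available_freq.split(" ")[0].split("/")[0])
    except (IndexError, ValueError):
        max_freq = None
    return min_freq, max_freq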


# =====================================================================
# --- disks
# =====================================================================


def disk_partitions(all=False):
    """Return mounted disk partitions as a list of namedtuples.
    'all' argument is ignored, see:
    https://github.com/giampaolo/psutil/issues/906.
    """
    retlist = []
    partitions = cext.disk_partitions()
    for partition in partitions:
        device, mountpoint, fstype, opts = partition
        ntuple = _common.sdiskpart(device, mountpoint, fstype, opts)
        retlist.append(ntuple)
    return retlist


disk_usage = _psposix.disk_usage
disk_io_counters = cext.disk_io_counters


# =====================================================================
# --- network
# =====================================================================


net_io_counters = cext.net_io_counters
net_if_addrs = cext_posix.net_if_addrs


def net_if_stats():
    """Get NIC stats (isup, duplex, speed, mtu)."""
    names = net_io_counters().keys()
    ret = {}
    for name in names:
        try:
            mtu = cext_posix.net_if_mtu(name)
            flags = cext_posix.net_if_flags(name)
            duplex, speed = cext_posix.net_if_duplex_speed(name)
        except OSError as err:
            # https://github.com/giampaolo/psutil/issues/1279
            if err.errno != errno.ENODEV:
                raise
        else:
            if hasattr(_common, 'NicDuplex'):
                duplex = _common.NicDuplex(duplex)
            output_flags = ','.join(flags)
            isup = 'running' in flags
            ret[name] = _common.snicstats(
                isup, duplex, speed, mtu, output_flags
            )
    return ret


def net_connections(kind):
    """System-wide network connections."""
    if kind not in _common.conn_tmap:
        raise ValueError(
            "invalid %r kind argument; choose between %s"
            % (kind, ', '.join([repr(x) for x in conn_tmap]))
        )
    families, types = conn_tmap[kind]
    ret = set()

    if OPENBSD:
        rawlist = cext.net_connections(-1, families, types)
    elif NETBSD:
        rawlist = cext.net_connections(-1, kind)
    else:  # FreeBSD
        rawlist = cext.net_connections(families, types)

    for item in rawlist:
        fd, fam, type, laddr, raddr, status, pid = item
        nt = conn_to_ntuple(
            fd, fam, type, laddr, raddr, status, TCP_STATUSES, pid
        )
        ret.add(nt)
    return list(ret)


# =====================================================================
#  --- sensors
# =====================================================================


if FREEBSD:

    def sensors_battery():
        """Return battery info."""
        try:
            percent, minsleft, power_plugged = cext.sensors_battery()
        except NotImplementedError:
            # See: https://github.com/giampaolo/psutil/issues/1074
            return None
        power_plugged = power_plugged == 1
        if power_plugged:
            secsleft = _common.POWER_TIME_UNLIMITED
        elif minsleft == -1:
            secsleft = _common.POWER_TIME_UNKNOWN
        else:
            secsleft = minsleft * 60
        return _common.sbattery(percent, secsleft, power_plugged)

    def sensors_temperatures():
        """Return CPU cores temperatures if available, else an empty dict."""
        ret = defaultdict(list)
        num_cpus = cpu_count_logical()
        for cpu in range(num_cpus):
            try:
                current, high = cext.sensors_cpu_temperature(cpu)
                if high <= 0:
                    high = None
                name = "Core %s" % cpu
                ret["coretemp"].append(
                    _common.shwtemp(name, current, high, high)
                )
            except NotImplementedError:
                pass

        return ret


# =====================================================================
#  --- other system functions
# =====================================================================


def boot_time():
    """The system boot time expressed in seconds since the epoch."""
    return cext.boot_time()


def users():
    """Return currently connected users as a list of namedtuples."""
    retlist = []
    rawlist = cext.users()
    for item in rawlist:
        user, tty, hostname, tstamp, pid = item
        if pid == -1:
            assert OPENBSD
            pid = None
        if tty == '~':
            continue  # reboot or shutdown
        nt = _common.suser(user, tty or None, hostname, tstamp, pid)
        retlist.append(nt)
    return retlist


# =====================================================================
# --- processes
# =====================================================================


@memoize
def _pid_0_exists():
    try:
        Process(0).name()
    except NoSuchProcess:
        return False
    except AccessDenied:
        return True
    else:
        return True


def pids():
    """Returns a list of PIDs currently running on the system."""
    ret = cext.pids()
    if OPENBSD and (0 not in ret) and _pid_0_exists():
        # On OpenBSD the kernel does not return PID 0 (neither does
        # ps) but it's actually queryable (Process(0) will succeed).
        ret.insert(0, 0)
    return ret


if NETBSD:

    def pid_exists(pid):
        exists = _psposix.pid_exists(pid)
        if not exists:
            # We do this because _psposix.pid_exists() lies in case of
            # zombie processes.
            return pid in pids()
        else:
            return True

elif OPENBSD:

    def pid_exists(pid):
        exists = _psposix.pid_exists(pid)
        if not exists:
            return False
        else:
            # OpenBSD seems to be the only BSD platform where
            # _psposix.pid_exists() returns True for thread IDs (tids),
            # so we can't use it.
            return pid in pids()

else:  # FreeBSD
    pid_exists = _psposix.pid_exists


def is_zombie(pid):
    try:
        st = cext.proc_oneshot_info(pid)[kinfo_proc_map['status']]
        return PROC_STATUSES.get(st) == _common.STATUS_ZOMBIE
    except OSError:
        return False


def wrap_exceptions(fun):
    """Decorator which translates bare OSError exceptions into
    NoSuchProcess and AccessDenied.
    """

    @functools.wraps(fun)
    def wrapper(self, *args, **kwargs):
        try:
            return fun(self, *args, **kwargs)
        except ProcessLookupError:
            if is_zombie(self.pid):
                raise ZombieProcess(self.pid, self._name, self._ppid)
            else:
                raise NoSuchProcess(self.pid, self._name)
        except PermissionError:
            raise AccessDenied(self.pid, self._name)
        except OSError:
            if self.pid == 0:
                if 0 in pids():
                    raise AccessDenied(self.pid, self._name)
                else:
                    raise
            raise

    return wrapper
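

# The decorator above maps low-level OSError subclasses onto psutil's
# richer exceptions. A stripped-down sketch of the same translation
# pattern (hypothetical name and target exceptions, for illustration
# only):
def _translate_oserror(fun):
    @functools.wraps(fun)
    def wrapper(*args, **kwargs):
        try:
            return fun(*args, **kwargs)
        except ProcessLookupError:
            raise LookupError("process is gone")
        except PermissionError:
            raise RuntimeError("access denied")
    return wrapper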


@contextlib.contextmanager
def wrap_exceptions_procfs(inst):
    """Same as above, for routines relying on reading /proc fs."""
    try:
        yield
    except (ProcessLookupError, FileNotFoundError):
        # ENOENT (no such file or directory) gets raised on open().
        # ESRCH (no such process) can get raised on read() if
        # process is gone in meantime.
        if is_zombie(inst.pid):
            raise ZombieProcess(inst.pid, inst._name, inst._ppid)
        else:
            raise NoSuchProcess(inst.pid, inst._name)
    except PermissionError:
        raise AccessDenied(inst.pid, inst._name)


class Process:
    """Wrapper class around underlying C implementation."""

    __slots__ = ["_cache", "_name", "_ppid", "pid"]

    def __init__(self, pid):
        self.pid = pid
        self._name = None
        self._ppid = None

    def _assert_alive(self):
        """Raise NSP if the process disappeared on us."""
        # For those C functions which do not raise NSP, possibly
        # returning an incorrect or incomplete result.
        cext.proc_name(self.pid)

    @wrap_exceptions
    @memoize_when_activated
    def oneshot(self):
        """Retrieves multiple process info in one shot as a raw tuple."""
        ret = cext.proc_oneshot_info(self.pid)
        assert len(ret) == len(kinfo_proc_map)
        return ret

    def oneshot_enter(self):
        self.oneshot.cache_activate(self)

    def oneshot_exit(self):
        self.oneshot.cache_deactivate(self)

    @wrap_exceptions
    def name(self):
        name = self.oneshot()[kinfo_proc_map['name']]
        return name if name is not None else cext.proc_name(self.pid)

    @wrap_exceptions
    def exe(self):
        if FREEBSD:
            if self.pid == 0:
                return ''  # else NSP
            return cext.proc_exe(self.pid)
        elif NETBSD:
            if self.pid == 0:
                # /proc/0 dir exists but /proc/0/exe doesn't
                return ""
            with wrap_exceptions_procfs(self):
                return os.readlink("/proc/%s/exe" % self.pid)
        else:
            # OpenBSD: exe cannot be determined; references:
            # https://chromium.googlesource.com/chromium/src/base/+/
            #     master/base_paths_posix.cc
            # We try our best guess by using which against the first
            # cmdline arg (may return None).
            cmdline = self.cmdline()
            if cmdline:
                return which(cmdline[0]) or ""
            else:
                return ""

    @wrap_exceptions
    def cmdline(self):
        if OPENBSD and self.pid == 0:
            return []  # ...else it crashes
        elif NETBSD:
            # XXX - most of the time the underlying sysctl() call on
            # NetBSD and OpenBSD returns a truncated string. Also
            # /proc/pid/cmdline behaves the same, so it looks like
            # a kernel bug.
            try:
                return cext.proc_cmdline(self.pid)
            except OSError as err:
                if err.errno == errno.EINVAL:
                    if is_zombie(self.pid):
                        raise ZombieProcess(self.pid, self._name, self._ppid)
                    elif not pid_exists(self.pid):
                        raise NoSuchProcess(self.pid, self._name, self._ppid)
                    else:
                        # XXX: this happens with unicode tests. It means the C
                        # routine is unable to decode invalid unicode chars.
                        debug("ignoring %r and returning an empty list" % err)
                        return []
                else:
                    raise
        else:
            return cext.proc_cmdline(self.pid)

    @wrap_exceptions
    def environ(self):
        return cext.proc_environ(self.pid)

    @wrap_exceptions
    def terminal(self):
        tty_nr = self.oneshot()[kinfo_proc_map['ttynr']]
        tmap = _psposix.get_terminal_map()
        try:
            return tmap[tty_nr]
        except KeyError:
            return None

    @wrap_exceptions
    def ppid(self):
        self._ppid = self.oneshot()[kinfo_proc_map['ppid']]
        return self._ppid

    @wrap_exceptions
    def uids(self):
        rawtuple = self.oneshot()
        return _common.puids(
            rawtuple[kinfo_proc_map['real_uid']],
            rawtuple[kinfo_proc_map['effective_uid']],
            rawtuple[kinfo_proc_map['saved_uid']],
        )

    @wrap_exceptions
    def gids(self):
        rawtuple = self.oneshot()
        return _common.pgids(
            rawtuple[kinfo_proc_map['real_gid']],
            rawtuple[kinfo_proc_map['effective_gid']],
            rawtuple[kinfo_proc_map['saved_gid']],
        )

    @wrap_exceptions
    def cpu_times(self):
        rawtuple = self.oneshot()
        return _common.pcputimes(
            rawtuple[kinfo_proc_map['user_time']],
            rawtuple[kinfo_proc_map['sys_time']],
            rawtuple[kinfo_proc_map['ch_user_time']],
            rawtuple[kinfo_proc_map['ch_sys_time']],
        )

    if FREEBSD:

        @wrap_exceptions
        def cpu_num(self):
            return self.oneshot()[kinfo_proc_map['cpunum']]

    @wrap_exceptions
    def memory_info(self):
        rawtuple = self.oneshot()
        return pmem(
            rawtuple[kinfo_proc_map['rss']],
            rawtuple[kinfo_proc_map['vms']],
            rawtuple[kinfo_proc_map['memtext']],
            rawtuple[kinfo_proc_map['memdata']],
            rawtuple[kinfo_proc_map['memstack']],
        )

    memory_full_info = memory_info

    @wrap_exceptions
    def create_time(self):
        return self.oneshot()[kinfo_proc_map['create_time']]

    @wrap_exceptions
    def num_threads(self):
        if HAS_PROC_NUM_THREADS:
            # FreeBSD
            return cext.proc_num_threads(self.pid)
        else:
            return len(self.threads())

    @wrap_exceptions
    def num_ctx_switches(self):
        rawtuple = self.oneshot()
        return _common.pctxsw(
            rawtuple[kinfo_proc_map['ctx_switches_vol']],
            rawtuple[kinfo_proc_map['ctx_switches_unvol']],
        )

    @wrap_exceptions
    def threads(self):
        # Note: on OpenBSD this (/dev/mem) requires root access.
        rawlist = cext.proc_threads(self.pid)
        retlist = []
        for thread_id, utime, stime in rawlist:
            ntuple = _common.pthread(thread_id, utime, stime)
            retlist.append(ntuple)
        if OPENBSD:
            self._assert_alive()
        return retlist

    @wrap_exceptions
    def net_connections(self, kind='inet'):
        if kind not in conn_tmap:
            raise ValueError(
                "invalid %r kind argument; choose between %s"
                % (kind, ', '.join([repr(x) for x in conn_tmap]))
            )
        families, types = conn_tmap[kind]
        ret = []

        if NETBSD:
            rawlist = cext.net_connections(self.pid, kind)
        elif OPENBSD:
            rawlist = cext.net_connections(self.pid, families, types)
        else:
            rawlist = cext.proc_net_connections(self.pid, families, types)

        for item in rawlist:
            fd, fam, type, laddr, raddr, status = item[:6]
            if FREEBSD:
                if (fam not in families) or (type not in types):
                    continue
            nt = conn_to_ntuple(
                fd, fam, type, laddr, raddr, status, TCP_STATUSES
            )
            ret.append(nt)

        self._assert_alive()
        return ret

    @wrap_exceptions
    def wait(self, timeout=None):
        return _psposix.wait_pid(self.pid, timeout, self._name)

    @wrap_exceptions
    def nice_get(self):
        return cext_posix.getpriority(self.pid)

    @wrap_exceptions
    def nice_set(self, value):
        return cext_posix.setpriority(self.pid, value)

    @wrap_exceptions
    def status(self):
        code = self.oneshot()[kinfo_proc_map['status']]
        # XXX is '?' legit? (we're not supposed to return it anyway)
        return PROC_STATUSES.get(code, '?')

    @wrap_exceptions
    def io_counters(self):
        rawtuple = self.oneshot()
        return _common.pio(
            rawtuple[kinfo_proc_map['read_io_count']],
            rawtuple[kinfo_proc_map['write_io_count']],
            -1,
            -1,
        )

    @wrap_exceptions
    def cwd(self):
        """Return process current working directory."""
        # sometimes we get an empty string, in which case we turn
        # it into None
        if OPENBSD and self.pid == 0:
            return ""  # ...else it would raise EINVAL
        elif NETBSD or HAS_PROC_OPEN_FILES:
            # FreeBSD < 8 does not support functions based on
            # kinfo_getfile() and kinfo_getvmmap()
            return cext.proc_cwd(self.pid)
        else:
            raise NotImplementedError(
                "supported only starting from FreeBSD 8" if FREEBSD else ""
            )

    nt_mmap_grouped = namedtuple(
        'mmap', 'path, rss, private, ref_count, shadow_count'
    )
    nt_mmap_ext = namedtuple(
        'mmap', 'addr, perms, path, rss, private, ref_count, shadow_count'
    )

    def _not_implemented(self):
        raise NotImplementedError

    # FreeBSD < 8 does not support functions based on kinfo_getfile()
    # and kinfo_getvmmap()
    if HAS_PROC_OPEN_FILES:

        @wrap_exceptions
        def open_files(self):
            """Return files opened by process as a list of namedtuples."""
            rawlist = cext.proc_open_files(self.pid)
            return [_common.popenfile(path, fd) for path, fd in rawlist]

    else:
        open_files = _not_implemented

    # FreeBSD < 8 does not support functions based on kinfo_getfile()
    # and kinfo_getvmmap()
    if HAS_PROC_NUM_FDS:

        @wrap_exceptions
        def num_fds(self):
            """Return the number of file descriptors opened by this process."""
            ret = cext.proc_num_fds(self.pid)
            if NETBSD:
                self._assert_alive()
            return ret

    else:
        num_fds = _not_implemented

    # --- FreeBSD only APIs

    if FREEBSD:

        @wrap_exceptions
        def cpu_affinity_get(self):
            return cext.proc_cpu_affinity_get(self.pid)

        @wrap_exceptions
        def cpu_affinity_set(self, cpus):
            # Pre-emptively check if CPUs are valid because the C
            # function has a weird behavior in case of invalid CPUs,
            # see: https://github.com/giampaolo/psutil/issues/586
            allcpus = tuple(range(len(per_cpu_times())))
            for cpu in cpus:
                if cpu not in allcpus:
                    raise ValueError(
                        "invalid CPU #%i (choose between %s)" % (cpu, allcpus)
                    )
            try:
                cext.proc_cpu_affinity_set(self.pid, cpus)
            except OSError as err:
                # 'man cpuset_setaffinity' about EDEADLK:
                # <<the call would leave a thread without a valid CPU to run
                # on because the set does not overlap with the thread's
                # anonymous mask>>
                if err.errno in (errno.EINVAL, errno.EDEADLK):
                    for cpu in cpus:
                        if cpu not in allcpus:
                            raise ValueError(
                                "invalid CPU #%i (choose between %s)"
                                % (cpu, allcpus)
                            )
                raise

        @wrap_exceptions
        def memory_maps(self):
            return cext.proc_memory_maps(self.pid)

        @wrap_exceptions
        def rlimit(self, resource, limits=None):
            if limits is None:
                return cext.proc_getrlimit(self.pid, resource)
            else:
                if len(limits) != 2:
                    raise ValueError(
                        "second argument must be a (soft, hard) tuple, got %s"
                        % repr(limits)
                    )
                soft, hard = limits
                return cext.proc_setrlimit(self.pid, resource, soft, hard)

# psutil/_pswindows.py

# Copyright (c) 2009, Giampaolo Rodola'. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

"""Windows platform implementation."""

import contextlib
import errno
import functools
import os
import signal
import sys
import time
from collections import namedtuple

from . import _common
from ._common import ENCODING
from ._common import ENCODING_ERRS
from ._common import AccessDenied
from ._common import NoSuchProcess
from ._common import TimeoutExpired
from ._common import conn_tmap
from ._common import conn_to_ntuple
from ._common import debug
from ._common import isfile_strict
from ._common import memoize
from ._common import memoize_when_activated
from ._common import parse_environ_block
from ._common import usage_percent
from ._compat import PY3
from ._compat import long
from ._compat import lru_cache
from ._compat import range
from ._compat import unicode
from ._psutil_windows import ABOVE_NORMAL_PRIORITY_CLASS
from ._psutil_windows import BELOW_NORMAL_PRIORITY_CLASS
from ._psutil_windows import HIGH_PRIORITY_CLASS
from ._psutil_windows import IDLE_PRIORITY_CLASS
from ._psutil_windows import NORMAL_PRIORITY_CLASS
from ._psutil_windows import REALTIME_PRIORITY_CLASS


try:
    from . import _psutil_windows as cext
except ImportError as err:
    if (
        str(err).lower().startswith("dll load failed")
        and sys.getwindowsversion()[0] < 6
    ):
        # We may get here if:
        # 1) we are on an old Windows version
        # 2) psutil was installed via pip + wheel
        # See: https://github.com/giampaolo/psutil/issues/811
        msg = "this Windows version is too old (< Windows Vista); "
        msg += "psutil 3.4.2 is the latest version which supports Windows "
        msg += "2000, XP and 2003 server"
        raise RuntimeError(msg)
    else:
        raise

if PY3:
    import enum
else:
    enum = None

# process priority constants, import from __init__.py:
# http://msdn.microsoft.com/en-us/library/ms686219(v=vs.85).aspx
# fmt: off
__extra__all__ = [
    "win_service_iter", "win_service_get",
    # Process priority
    "ABOVE_NORMAL_PRIORITY_CLASS", "BELOW_NORMAL_PRIORITY_CLASS",
    "HIGH_PRIORITY_CLASS", "IDLE_PRIORITY_CLASS", "NORMAL_PRIORITY_CLASS",
    "REALTIME_PRIORITY_CLASS",
    # IO priority
    "IOPRIO_VERYLOW", "IOPRIO_LOW", "IOPRIO_NORMAL", "IOPRIO_HIGH",
    # others
    "CONN_DELETE_TCB", "AF_LINK",
]
# fmt: on


# =====================================================================
# --- globals
# =====================================================================

CONN_DELETE_TCB = "DELETE_TCB"
ERROR_PARTIAL_COPY = 299
PYPY = '__pypy__' in sys.builtin_module_names

if enum is None:
    AF_LINK = -1
else:
    AddressFamily = enum.IntEnum('AddressFamily', {'AF_LINK': -1})
    AF_LINK = AddressFamily.AF_LINK

TCP_STATUSES = {
    cext.MIB_TCP_STATE_ESTAB: _common.CONN_ESTABLISHED,
    cext.MIB_TCP_STATE_SYN_SENT: _common.CONN_SYN_SENT,
    cext.MIB_TCP_STATE_SYN_RCVD: _common.CONN_SYN_RECV,
    cext.MIB_TCP_STATE_FIN_WAIT1: _common.CONN_FIN_WAIT1,
    cext.MIB_TCP_STATE_FIN_WAIT2: _common.CONN_FIN_WAIT2,
    cext.MIB_TCP_STATE_TIME_WAIT: _common.CONN_TIME_WAIT,
    cext.MIB_TCP_STATE_CLOSED: _common.CONN_CLOSE,
    cext.MIB_TCP_STATE_CLOSE_WAIT: _common.CONN_CLOSE_WAIT,
    cext.MIB_TCP_STATE_LAST_ACK: _common.CONN_LAST_ACK,
    cext.MIB_TCP_STATE_LISTEN: _common.CONN_LISTEN,
    cext.MIB_TCP_STATE_CLOSING: _common.CONN_CLOSING,
    cext.MIB_TCP_STATE_DELETE_TCB: CONN_DELETE_TCB,
    cext.PSUTIL_CONN_NONE: _common.CONN_NONE,
}

if enum is not None:

    class Priority(enum.IntEnum):
        ABOVE_NORMAL_PRIORITY_CLASS = ABOVE_NORMAL_PRIORITY_CLASS
        BELOW_NORMAL_PRIORITY_CLASS = BELOW_NORMAL_PRIORITY_CLASS
        HIGH_PRIORITY_CLASS = HIGH_PRIORITY_CLASS
        IDLE_PRIORITY_CLASS = IDLE_PRIORITY_CLASS
        NORMAL_PRIORITY_CLASS = NORMAL_PRIORITY_CLASS
        REALTIME_PRIORITY_CLASS = REALTIME_PRIORITY_CLASS

    globals().update(Priority.__members__)

if enum is None:
    IOPRIO_VERYLOW = 0
    IOPRIO_LOW = 1
    IOPRIO_NORMAL = 2
    IOPRIO_HIGH = 3
else:

    class IOPriority(enum.IntEnum):
        IOPRIO_VERYLOW = 0
        IOPRIO_LOW = 1
        IOPRIO_NORMAL = 2
        IOPRIO_HIGH = 3

    globals().update(IOPriority.__members__)

pinfo_map = dict(
    num_handles=0,
    ctx_switches=1,
    user_time=2,
    kernel_time=3,
    create_time=4,
    num_threads=5,
    io_rcount=6,
    io_wcount=7,
    io_rbytes=8,
    io_wbytes=9,
    io_count_others=10,
    io_bytes_others=11,
    num_page_faults=12,
    peak_wset=13,
    wset=14,
    peak_paged_pool=15,
    paged_pool=16,
    peak_non_paged_pool=17,
    non_paged_pool=18,
    pagefile=19,
    peak_pagefile=20,
    mem_private=21,
)


# =====================================================================
# --- named tuples
# =====================================================================


# fmt: off
# psutil.cpu_times()
scputimes = namedtuple('scputimes',
                       ['user', 'system', 'idle', 'interrupt', 'dpc'])
# psutil.virtual_memory()
svmem = namedtuple('svmem', ['total', 'available', 'percent', 'used', 'free'])
# psutil.Process.memory_info()
pmem = namedtuple(
    'pmem', ['rss', 'vms',
             'num_page_faults', 'peak_wset', 'wset', 'peak_paged_pool',
             'paged_pool', 'peak_nonpaged_pool', 'nonpaged_pool',
             'pagefile', 'peak_pagefile', 'private'])
# psutil.Process.memory_full_info()
pfullmem = namedtuple('pfullmem', pmem._fields + ('uss', ))
# psutil.Process.memory_maps(grouped=True)
pmmap_grouped = namedtuple('pmmap_grouped', ['path', 'rss'])
# psutil.Process.memory_maps(grouped=False)
pmmap_ext = namedtuple(
    'pmmap_ext', 'addr perms ' + ' '.join(pmmap_grouped._fields))
# psutil.Process.io_counters()
pio = namedtuple('pio', ['read_count', 'write_count',
                         'read_bytes', 'write_bytes',
                         'other_count', 'other_bytes'])
# fmt: on


# =====================================================================
# --- utils
# =====================================================================


@lru_cache(maxsize=512)
def convert_dos_path(s):
    r"""Convert paths using native DOS format like:
        "\Device\HarddiskVolume1\Windows\systemew\file.txt"
    into:
        "C:\Windows\systemew\file.txt".
    """
    rawdrive = '\\'.join(s.split('\\')[:3])
    driveletter = cext.QueryDosDevice(rawdrive)
    remainder = s[len(rawdrive) :]
    return os.path.join(driveletter, remainder)
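

# The string handling in convert_dos_path() minus the QueryDosDevice()
# lookup, using a hard-coded device -> drive mapping (the real mapping
# comes from the C extension); helper name and mapping are hypothetical.
def _convert_dos_path_sketch(s, device_map):
    rawdrive = '\\'.join(s.split('\\')[:3])  # e.g. "\Device\HarddiskVolume1"
    driveletter = device_map[rawdrive]       # e.g. "C:"
    return driveletter + s[len(rawdrive):]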


def py2_strencode(s):
    """Encode a unicode string to a byte string by using the default fs
    encoding + "replace" error handler.
    """
    if PY3:
        return s
    else:
        if isinstance(s, str):
            return s
        else:
            return s.encode(ENCODING, ENCODING_ERRS)


@memoize
def getpagesize():
    return cext.getpagesize()


# =====================================================================
# --- memory
# =====================================================================


def virtual_memory():
    """System virtual memory as a namedtuple."""
    mem = cext.virtual_mem()
    totphys, availphys, _totsys, _availsys = mem
    total = totphys
    avail = availphys
    free = availphys
    used = total - avail
    percent = usage_percent((total - avail), total, round_=1)
    return svmem(total, avail, percent, used, free)


def swap_memory():
    """Swap system memory as a (total, used, free, sin, sout) tuple."""
    mem = cext.virtual_mem()

    total_phys = mem[0]
    total_system = mem[2]

    # system memory (commit total/limit) is the sum of physical and swap
    # thus physical memory values need to be subtracted to get swap values
    total = total_system - total_phys
    # commit total is incremented immediately (decrementing free_system)
    # while the corresponding free physical value is not decremented until
    # pages are accessed, so we can't use free system memory for swap.
    # instead, we calculate page file usage based on performance counter
    if total > 0:
        percentswap = cext.swap_percent()
        used = int(0.01 * percentswap * total)
    else:
        percentswap = 0.0
        used = 0

    free = total - used
    percent = round(percentswap, 1)
    return _common.sswap(total, used, free, percent, 0, 0)


# =====================================================================
# --- disk
# =====================================================================


disk_io_counters = cext.disk_io_counters


def disk_usage(path):
    """Return disk usage associated with path."""
    if PY3 and isinstance(path, bytes):
        # XXX: do we want to use "strict"? Probably yes, in order
        # to fail immediately. After all we are accepting input here...
        path = path.decode(ENCODING, errors="strict")
    total, free = cext.disk_usage(path)
    used = total - free
    percent = usage_percent(used, total, round_=1)
    return _common.sdiskusage(total, used, free, percent)


def disk_partitions(all):
    """Return disk partitions."""
    rawlist = cext.disk_partitions(all)
    return [_common.sdiskpart(*x) for x in rawlist]


# =====================================================================
# --- CPU
# =====================================================================


def cpu_times():
    """Return system CPU times as a named tuple."""
    user, system, idle = cext.cpu_times()
    # Internally, GetSystemTimes() is used, and it doesn't return
    # interrupt and dpc times. cext.per_cpu_times() does, so we
    # rely on it to get those only.
    percpu_summed = scputimes(*[sum(n) for n in zip(*cext.per_cpu_times())])
    return scputimes(
        user, system, idle, percpu_summed.interrupt, percpu_summed.dpc
    )


def per_cpu_times():
    """Return system per-CPU times as a list of named tuples."""
    ret = []
    for user, system, idle, interrupt, dpc in cext.per_cpu_times():
        item = scputimes(user, system, idle, interrupt, dpc)
        ret.append(item)
    return ret


def cpu_count_logical():
    """Return the number of logical CPUs in the system."""
    return cext.cpu_count_logical()


def cpu_count_cores():
    """Return the number of CPU cores in the system."""
    return cext.cpu_count_cores()


def cpu_stats():
    """Return CPU statistics."""
    ctx_switches, interrupts, _dpcs, syscalls = cext.cpu_stats()
    soft_interrupts = 0
    return _common.scpustats(
        ctx_switches, interrupts, soft_interrupts, syscalls
    )


def cpu_freq():
    """Return CPU frequency.
    On Windows per-cpu frequency is not supported.
    """
    curr, max_ = cext.cpu_freq()
    min_ = 0.0
    return [_common.scpufreq(float(curr), min_, float(max_))]


_loadavg_initialized = False


def getloadavg():
    """Return the number of processes in the system run queue averaged
    over the last 1, 5, and 15 minutes respectively as a tuple.
    """
    global _loadavg_initialized

    if not _loadavg_initialized:
        cext.init_loadavg_counter()
        _loadavg_initialized = True

    # Drop to 2 decimal points which is what Linux does
    raw_loads = cext.getloadavg()
    return tuple([round(load, 2) for load in raw_loads])


# =====================================================================
# --- network
# =====================================================================


def net_connections(kind, _pid=-1):
    """Return socket connections.  If pid == -1 return system-wide
    connections (as opposed to connections opened by one process only).
    """
    if kind not in conn_tmap:
        raise ValueError(
            "invalid %r kind argument; choose between %s"
            % (kind, ', '.join([repr(x) for x in conn_tmap]))
        )
    families, types = conn_tmap[kind]
    rawlist = cext.net_connections(_pid, families, types)
    ret = set()
    for item in rawlist:
        fd, fam, type, laddr, raddr, status, pid = item
        nt = conn_to_ntuple(
            fd,
            fam,
            type,
            laddr,
            raddr,
            status,
            TCP_STATUSES,
            pid=pid if _pid == -1 else None,
        )
        ret.add(nt)
    return list(ret)


def net_if_stats():
    """Get NIC stats (isup, duplex, speed, mtu)."""
    ret = {}
    rawdict = cext.net_if_stats()
    for name, items in rawdict.items():
        if not PY3:
            assert isinstance(name, unicode), type(name)
            name = py2_strencode(name)
        isup, duplex, speed, mtu = items
        if hasattr(_common, 'NicDuplex'):
            duplex = _common.NicDuplex(duplex)
        ret[name] = _common.snicstats(isup, duplex, speed, mtu, '')
    return ret


def net_io_counters():
    """Return network I/O statistics for every network interface
    installed on the system as a dict of raw tuples.
    """
    ret = cext.net_io_counters()
    return dict([(py2_strencode(k), v) for k, v in ret.items()])


def net_if_addrs():
    """Return the addresses associated to each NIC."""
    ret = []
    for items in cext.net_if_addrs():
        items = list(items)
        items[0] = py2_strencode(items[0])
        ret.append(items)
    return ret


# =====================================================================
# --- sensors
# =====================================================================


def sensors_battery():
    """Return battery information."""
    # For constants meaning see:
    # https://msdn.microsoft.com/en-us/library/windows/desktop/
    #     aa373232(v=vs.85).aspx
    acline_status, flags, percent, secsleft = cext.sensors_battery()
    power_plugged = acline_status == 1
    no_battery = bool(flags & 128)
    charging = bool(flags & 8)

    if no_battery:
        return None
    if power_plugged or charging:
        secsleft = _common.POWER_TIME_UNLIMITED
    elif secsleft == -1:
        secsleft = _common.POWER_TIME_UNKNOWN

    return _common.sbattery(percent, secsleft, power_plugged)


# =====================================================================
# --- other system functions
# =====================================================================


_last_btime = 0


def boot_time():
    """The system boot time expressed in seconds since the epoch."""
    # This dirty hack is to adjust the precision of the returned
    # value which may have a 1 second fluctuation, see:
    # https://github.com/giampaolo/psutil/issues/1007
    global _last_btime
    ret = float(cext.boot_time())
    if abs(ret - _last_btime) <= 1:
        return _last_btime
    else:
        _last_btime = ret
        return ret
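The clamp logic above can be exercised in isolation; this hypothetical closure-based sketch reproduces the "stick to the previous value when within 1 second" behavior:

```python
def _make_boot_time_stabilizer():
    last = [0.0]  # mutable cell standing in for the module-level global

    def stabilize(reading):
        # Absorb +/- 1 second jitter by returning the previously seen
        # value; only a larger jump updates the cached boot time.
        if abs(reading - last[0]) <= 1:
            return last[0]
        last[0] = reading
        return reading

    return stabilize

stab = _make_boot_time_stabilizer()
first = stab(1700000000.0)   # first reading is cached and returned
second = stab(1700000000.9)  # within 1s of cache -> previous value
third = stab(1700000005.0)   # real change -> new value
```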


def users():
    """Return currently connected users as a list of namedtuples."""
    retlist = []
    rawlist = cext.users()
    for item in rawlist:
        user, hostname, tstamp = item
        user = py2_strencode(user)
        nt = _common.suser(user, None, hostname, tstamp, None)
        retlist.append(nt)
    return retlist


# =====================================================================
# --- Windows services
# =====================================================================


def win_service_iter():
    """Yields a list of WindowsService instances."""
    for name, display_name in cext.winservice_enumerate():
        yield WindowsService(py2_strencode(name), py2_strencode(display_name))


def win_service_get(name):
    """Open a Windows service and return it as a WindowsService instance."""
    service = WindowsService(name, None)
    service._display_name = service._query_config()['display_name']
    return service


class WindowsService:  # noqa: PLW1641
    """Represents an installed Windows service."""

    def __init__(self, name, display_name):
        self._name = name
        self._display_name = display_name

    def __str__(self):
        details = "(name=%r, display_name=%r)" % (
            self._name,
            self._display_name,
        )
        return "%s%s" % (self.__class__.__name__, details)

    def __repr__(self):
        return "<%s at %s>" % (self.__str__(), id(self))

    def __eq__(self, other):
        # Test for equality with another WindowsService object based
        # on name.
        if not isinstance(other, WindowsService):
            return NotImplemented
        return self._name == other._name

    def __ne__(self, other):
        return not self == other

    def _query_config(self):
        with self._wrap_exceptions():
            display_name, binpath, username, start_type = (
                cext.winservice_query_config(self._name)
            )
        # XXX - update self._display_name?
        return dict(
            display_name=py2_strencode(display_name),
            binpath=py2_strencode(binpath),
            username=py2_strencode(username),
            start_type=py2_strencode(start_type),
        )

    def _query_status(self):
        with self._wrap_exceptions():
            status, pid = cext.winservice_query_status(self._name)
        if pid == 0:
            pid = None
        return dict(status=status, pid=pid)

    @contextlib.contextmanager
    def _wrap_exceptions(self):
        """Ctx manager which translates bare OSError and WindowsError
        exceptions into NoSuchProcess and AccessDenied.
        """
        try:
            yield
        except OSError as err:
            if is_permission_err(err):
                msg = (
                    "service %r is not querable (not enough privileges)"
                    % self._name
                )
                raise AccessDenied(pid=None, name=self._name, msg=msg)
            elif err.winerror in (
                cext.ERROR_INVALID_NAME,
                cext.ERROR_SERVICE_DOES_NOT_EXIST,
            ):
                msg = "service %r does not exist" % self._name
                raise NoSuchProcess(pid=None, name=self._name, msg=msg)
            else:
                raise

    # config query

    def name(self):
        """The service name. This string is how a service is referenced
        and can be passed to win_service_get() to get a new
        WindowsService instance.
        """
        return self._name

    def display_name(self):
        """The service display name. The value is cached when this class
        is instantiated.
        """
        return self._display_name

    def binpath(self):
        """The fully qualified path to the service binary/exe file as
        a string, including command line arguments.
        """
        return self._query_config()['binpath']

    def username(self):
        """The name of the user that owns this service."""
        return self._query_config()['username']

    def start_type(self):
        """A string which can either be "automatic", "manual" or
        "disabled".
        """
        return self._query_config()['start_type']

    # status query

    def pid(self):
        """The process PID, if any, else None. This can be passed
        to Process class to control the service's process.
        """
        return self._query_status()['pid']

    def status(self):
        """Service status as a string."""
        return self._query_status()['status']

    def description(self):
        """Service long description."""
        return py2_strencode(cext.winservice_query_descr(self.name()))

    # utils

    def as_dict(self):
        """Utility method retrieving all the information above as a
        dictionary.
        """
        d = self._query_config()
        d.update(self._query_status())
        d['name'] = self.name()
        d['display_name'] = self.display_name()
        d['description'] = self.description()
        return d

    # actions
    # XXX: the necessary C bindings for start() and stop() are
    # implemented but for now I prefer not to expose them.
    # I may change my mind in the future. Reasons:
    # - they require Administrator privileges
    # - can't implement a timeout for stop() (unless by using a thread,
    #   which sucks)
    # - would require adding ServiceAlreadyStarted and
    #   ServiceAlreadyStopped exceptions, adding two new APIs.
    # - we might also want to have modify(), which would basically mean
    #   rewriting win32serviceutil.ChangeServiceConfig, which involves a
    #   lot of stuff (and API constants which would pollute the API), see:
    #   http://pyxr.sourceforge.net/PyXR/c/python24/lib/site-packages/
    #       win32/lib/win32serviceutil.py.html#0175
    # - psutil is typically about "read only" monitoring stuff;
    #   win_service_* APIs should only be used to retrieve a service and
    #   check whether it's running

    # def start(self, timeout=None):
    #     with self._wrap_exceptions():
    #         cext.winservice_start(self.name())
    #         if timeout:
    #             giveup_at = time.time() + timeout
    #             while True:
    #                 if self.status() == "running":
    #                     return
    #                 else:
    #                     if time.time() > giveup_at:
    #                         raise TimeoutExpired(timeout)
    #                     else:
    #                         time.sleep(.1)

    # def stop(self):
    #     # Note: timeout is not implemented because it's just not
    #     # possible, see:
    #     # http://stackoverflow.com/questions/11973228/
    #     with self._wrap_exceptions():
    #         return cext.winservice_stop(self.name())


# =====================================================================
# --- processes
# =====================================================================


pids = cext.pids
pid_exists = cext.pid_exists
ppid_map = cext.ppid_map  # used internally by Process.children()


def is_permission_err(exc):
    """Return True if this is a permission error."""
    assert isinstance(exc, OSError), exc
    if exc.errno in (errno.EPERM, errno.EACCES):
        return True
    # On Python 2 OSError doesn't always have 'winerror'. Sometimes
    # it does, in which case the original exception was WindowsError
    # (which is a subclass of OSError).
    return getattr(exc, "winerror", -1) in (
        cext.ERROR_ACCESS_DENIED,
        cext.ERROR_PRIVILEGE_NOT_HELD,
    )


def convert_oserror(exc, pid=None, name=None):
    """Convert OSError into NoSuchProcess or AccessDenied."""
    assert isinstance(exc, OSError), exc
    if is_permission_err(exc):
        return AccessDenied(pid=pid, name=name)
    if exc.errno == errno.ESRCH:
        return NoSuchProcess(pid=pid, name=name)
    raise exc


def wrap_exceptions(fun):
    """Decorator which converts OSError into NoSuchProcess or AccessDenied."""

    @functools.wraps(fun)
    def wrapper(self, *args, **kwargs):
        try:
            return fun(self, *args, **kwargs)
        except OSError as err:
            raise convert_oserror(err, pid=self.pid, name=self._name)

    return wrapper


def retry_error_partial_copy(fun):
    """Workaround for https://github.com/giampaolo/psutil/issues/875.
    See: https://stackoverflow.com/questions/4457745#4457745.
    """

    @functools.wraps(fun)
    def wrapper(self, *args, **kwargs):
        delay = 0.0001
        times = 33
        for _ in range(times):  # retries for roughly 1 second
            try:
                return fun(self, *args, **kwargs)
            except WindowsError as _:
                err = _
                if err.winerror == ERROR_PARTIAL_COPY:
                    time.sleep(delay)
                    delay = min(delay * 2, 0.04)
                    continue
                raise
        msg = (
            "{} retried {} times, converted to AccessDenied as it's still"
            "returning {}".format(fun, times, err)
        )
        raise AccessDenied(pid=self.pid, name=self._name, msg=msg)

    return wrapper
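For reference, the delay sequence produced by the loop above (doubling from 0.1 ms, capped at 40 ms, 33 attempts) adds up to roughly one second of total sleep; a hypothetical helper reproducing the schedule:

```python
def _backoff_schedule(start=0.0001, cap=0.04, times=33):
    # Same progression as retry_error_partial_copy(): sleep `delay`,
    # then double it, never exceeding `cap` seconds.
    delays, delay = [], start
    for _ in range(times):
        delays.append(delay)
        delay = min(delay * 2, cap)
    return delays

schedule = _backoff_schedule()
worst_case_sleep = sum(schedule)  # ~1.01 seconds across 33 retries
```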


class Process:
    """Wrapper class around underlying C implementation."""

    __slots__ = ["_cache", "_name", "_ppid", "pid"]

    def __init__(self, pid):
        self.pid = pid
        self._name = None
        self._ppid = None

    # --- oneshot() stuff

    def oneshot_enter(self):
        self._proc_info.cache_activate(self)
        self.exe.cache_activate(self)

    def oneshot_exit(self):
        self._proc_info.cache_deactivate(self)
        self.exe.cache_deactivate(self)

    @memoize_when_activated
    def _proc_info(self):
        """Return multiple information about this process as a
        raw tuple.
        """
        ret = cext.proc_info(self.pid)
        assert len(ret) == len(pinfo_map)
        return ret

    def name(self):
        """Return process name, which on Windows is always the final
        part of the executable.
        """
        # This is how PIDs 0 and 4 are always represented in taskmgr
        # and process-hacker.
        if self.pid == 0:
            return "System Idle Process"
        if self.pid == 4:
            return "System"
        return os.path.basename(self.exe())

    @wrap_exceptions
    @memoize_when_activated
    def exe(self):
        if PYPY:
            try:
                exe = cext.proc_exe(self.pid)
            except WindowsError as err:
                # 24 = ERROR_TOO_MANY_OPEN_FILES. Not sure why this happens
                # (perhaps PyPy's JIT delaying garbage collection of files?).
                if err.errno == 24:
                    debug("%r translated into AccessDenied" % err)
                    raise AccessDenied(self.pid, self._name)
                raise
        else:
            exe = cext.proc_exe(self.pid)
        if not PY3:
            exe = py2_strencode(exe)
        if exe.startswith('\\'):
            return convert_dos_path(exe)
        return exe  # May be "Registry", "MemCompression", ...

    @wrap_exceptions
    @retry_error_partial_copy
    def cmdline(self):
        if cext.WINVER >= cext.WINDOWS_8_1:
            # PEB method detects cmdline changes but requires more
            # privileges: https://github.com/giampaolo/psutil/pull/1398
            try:
                ret = cext.proc_cmdline(self.pid, use_peb=True)
            except OSError as err:
                if is_permission_err(err):
                    ret = cext.proc_cmdline(self.pid, use_peb=False)
                else:
                    raise
        else:
            ret = cext.proc_cmdline(self.pid, use_peb=True)
        if PY3:
            return ret
        else:
            return [py2_strencode(s) for s in ret]

    @wrap_exceptions
    @retry_error_partial_copy
    def environ(self):
        ustr = cext.proc_environ(self.pid)
        if ustr and not PY3:
            assert isinstance(ustr, unicode), type(ustr)
        return parse_environ_block(py2_strencode(ustr))

    def ppid(self):
        try:
            return ppid_map()[self.pid]
        except KeyError:
            raise NoSuchProcess(self.pid, self._name)

    def _get_raw_meminfo(self):
        try:
            return cext.proc_memory_info(self.pid)
        except OSError as err:
            if is_permission_err(err):
                # TODO: the C ext can probably be refactored in order
                # to get this from cext.proc_info()
                debug("attempting memory_info() fallback (slower)")
                info = self._proc_info()
                return (
                    info[pinfo_map['num_page_faults']],
                    info[pinfo_map['peak_wset']],
                    info[pinfo_map['wset']],
                    info[pinfo_map['peak_paged_pool']],
                    info[pinfo_map['paged_pool']],
                    info[pinfo_map['peak_non_paged_pool']],
                    info[pinfo_map['non_paged_pool']],
                    info[pinfo_map['pagefile']],
                    info[pinfo_map['peak_pagefile']],
                    info[pinfo_map['mem_private']],
                )
            raise

    @wrap_exceptions
    def memory_info(self):
        # on Windows RSS == WorkingSetSize and VMS == PagefileUsage.
        # Underlying C function returns fields of PROCESS_MEMORY_COUNTERS
        # struct.
        t = self._get_raw_meminfo()
        rss = t[2]  # wset
        vms = t[7]  # pagefile
        return pmem(*(rss, vms) + t)

    @wrap_exceptions
    def memory_full_info(self):
        basic_mem = self.memory_info()
        uss = cext.proc_memory_uss(self.pid)
        uss *= getpagesize()
        return pfullmem(*basic_mem + (uss,))

    def memory_maps(self):
        try:
            raw = cext.proc_memory_maps(self.pid)
        except OSError as err:
            # XXX - can't use wrap_exceptions decorator as we're
            # returning a generator; probably needs refactoring.
            raise convert_oserror(err, self.pid, self._name)
        else:
            for addr, perm, path, rss in raw:
                path = convert_dos_path(path)
                if not PY3:
                    path = py2_strencode(path)
                addr = hex(addr)
                yield (addr, perm, path, rss)

    @wrap_exceptions
    def kill(self):
        return cext.proc_kill(self.pid)

    @wrap_exceptions
    def send_signal(self, sig):
        if sig == signal.SIGTERM:
            cext.proc_kill(self.pid)
        # py >= 2.7
        elif sig in (
            getattr(signal, "CTRL_C_EVENT", object()),
            getattr(signal, "CTRL_BREAK_EVENT", object()),
        ):
            os.kill(self.pid, sig)
        else:
            msg = (
                "only SIGTERM, CTRL_C_EVENT and CTRL_BREAK_EVENT signals "
                "are supported on Windows"
            )
            raise ValueError(msg)

    @wrap_exceptions
    def wait(self, timeout=None):
        if timeout is None:
            cext_timeout = cext.INFINITE
        else:
            # WaitForSingleObject() expects time in milliseconds.
            cext_timeout = int(timeout * 1000)

        timer = getattr(time, 'monotonic', time.time)
        stop_at = timer() + timeout if timeout is not None else None

        try:
            # Exit code is supposed to come from GetExitCodeProcess().
            # May also be None if OpenProcess() failed with
            # ERROR_INVALID_PARAMETER, meaning PID is already gone.
            exit_code = cext.proc_wait(self.pid, cext_timeout)
        except cext.TimeoutExpired:
            # WaitForSingleObject() returned WAIT_TIMEOUT. Just raise.
            raise TimeoutExpired(timeout, self.pid, self._name)
        except cext.TimeoutAbandoned:
            # WaitForSingleObject() returned WAIT_ABANDONED, see:
            # https://github.com/giampaolo/psutil/issues/1224
            # We'll just rely on the internal polling and return None
            # when the PID disappears. Subprocess module does the same
            # (return None):
            # https://github.com/python/cpython/blob/
            #     be50a7b627d0aa37e08fa8e2d5568891f19903ce/
            #     Lib/subprocess.py#L1193-L1194
            exit_code = None

        # At this point WaitForSingleObject() returned WAIT_OBJECT_0,
        # meaning the process is gone. Stupidly there are cases where
        # its PID may still stick around so we do a further internal
        # polling.
        delay = 0.0001
        while True:
            if not pid_exists(self.pid):
                return exit_code
            if stop_at and timer() >= stop_at:
                raise TimeoutExpired(timeout, pid=self.pid, name=self._name)
            time.sleep(delay)
            delay = min(delay * 2, 0.04)  # incremental delay

    @wrap_exceptions
    def username(self):
        if self.pid in (0, 4):
            return 'NT AUTHORITY\\SYSTEM'
        domain, user = cext.proc_username(self.pid)
        return py2_strencode(domain) + '\\' + py2_strencode(user)

    @wrap_exceptions
    def create_time(self, fast_only=False):
        # Note: proc_times() not put under oneshot() 'cause create_time()
        # is already cached by the main Process class.
        try:
            _user, _system, created = cext.proc_times(self.pid)
            return created
        except OSError as err:
            if is_permission_err(err):
                if fast_only:
                    raise
                debug("attempting create_time() fallback (slower)")
                return self._proc_info()[pinfo_map['create_time']]
            raise

    @wrap_exceptions
    def num_threads(self):
        return self._proc_info()[pinfo_map['num_threads']]

    @wrap_exceptions
    def threads(self):
        rawlist = cext.proc_threads(self.pid)
        retlist = []
        for thread_id, utime, stime in rawlist:
            ntuple = _common.pthread(thread_id, utime, stime)
            retlist.append(ntuple)
        return retlist

    @wrap_exceptions
    def cpu_times(self):
        try:
            user, system, _created = cext.proc_times(self.pid)
        except OSError as err:
            if not is_permission_err(err):
                raise
            debug("attempting cpu_times() fallback (slower)")
            info = self._proc_info()
            user = info[pinfo_map['user_time']]
            system = info[pinfo_map['kernel_time']]
        # Children user/system times are not retrievable (set to 0).
        return _common.pcputimes(user, system, 0.0, 0.0)

    @wrap_exceptions
    def suspend(self):
        cext.proc_suspend_or_resume(self.pid, True)

    @wrap_exceptions
    def resume(self):
        cext.proc_suspend_or_resume(self.pid, False)

    @wrap_exceptions
    @retry_error_partial_copy
    def cwd(self):
        if self.pid in (0, 4):
            raise AccessDenied(self.pid, self._name)
        # return a normalized pathname since the native C function appends
        # "\\" at the end of the path
        path = cext.proc_cwd(self.pid)
        return py2_strencode(os.path.normpath(path))

    @wrap_exceptions
    def open_files(self):
        if self.pid in (0, 4):
            return []
        ret = set()
        # Filenames come in native format like:
        # "\Device\HarddiskVolume1\Windows\system32\file.txt".
        # Convert the first part into the corresponding drive letter
        # (e.g. "C:\") by using Windows' QueryDosDevice().
        raw_file_names = cext.proc_open_files(self.pid)
        for _file in raw_file_names:
            _file = convert_dos_path(_file)
            if isfile_strict(_file):
                if not PY3:
                    _file = py2_strencode(_file)
                ntuple = _common.popenfile(_file, -1)
                ret.add(ntuple)
        return list(ret)

    @wrap_exceptions
    def net_connections(self, kind='inet'):
        return net_connections(kind, _pid=self.pid)

    @wrap_exceptions
    def nice_get(self):
        value = cext.proc_priority_get(self.pid)
        if enum is not None:
            value = Priority(value)
        return value

    @wrap_exceptions
    def nice_set(self, value):
        return cext.proc_priority_set(self.pid, value)

    @wrap_exceptions
    def ionice_get(self):
        ret = cext.proc_io_priority_get(self.pid)
        if enum is not None:
            ret = IOPriority(ret)
        return ret

    @wrap_exceptions
    def ionice_set(self, ioclass, value):
        if value:
            msg = "value argument not accepted on Windows"
            raise TypeError(msg)
        if ioclass not in (
            IOPRIO_VERYLOW,
            IOPRIO_LOW,
            IOPRIO_NORMAL,
            IOPRIO_HIGH,
        ):
            raise ValueError("%s is not a valid priority" % ioclass)
        cext.proc_io_priority_set(self.pid, ioclass)

    @wrap_exceptions
    def io_counters(self):
        try:
            ret = cext.proc_io_counters(self.pid)
        except OSError as err:
            if not is_permission_err(err):
                raise
            debug("attempting io_counters() fallback (slower)")
            info = self._proc_info()
            ret = (
                info[pinfo_map['io_rcount']],
                info[pinfo_map['io_wcount']],
                info[pinfo_map['io_rbytes']],
                info[pinfo_map['io_wbytes']],
                info[pinfo_map['io_count_others']],
                info[pinfo_map['io_bytes_others']],
            )
        return pio(*ret)

    @wrap_exceptions
    def status(self):
        suspended = cext.proc_is_suspended(self.pid)
        if suspended:
            return _common.STATUS_STOPPED
        else:
            return _common.STATUS_RUNNING

    @wrap_exceptions
    def cpu_affinity_get(self):
        def from_bitmask(x):
            return [i for i in range(64) if (1 << i) & x]

        bitmask = cext.proc_cpu_affinity_get(self.pid)
        return from_bitmask(bitmask)

    @wrap_exceptions
    def cpu_affinity_set(self, value):
        def to_bitmask(ls):
            if not ls:
                raise ValueError("invalid argument %r" % ls)
            out = 0
            for b in ls:
                out |= 2**b
            return out

        # SetProcessAffinityMask() states that ERROR_INVALID_PARAMETER
        # is returned for an invalid CPU but this seems not to be true,
        # therefore we check CPU validity beforehand.
        allcpus = list(range(len(per_cpu_times())))
        for cpu in value:
            if cpu not in allcpus:
                if not isinstance(cpu, (int, long)):
                    raise TypeError(
                        "invalid CPU %r; an integer is required" % cpu
                    )
                else:
                    raise ValueError("invalid CPU %r" % cpu)

        bitmask = to_bitmask(value)
        cext.proc_cpu_affinity_set(self.pid, bitmask)

    @wrap_exceptions
    def num_handles(self):
        try:
            return cext.proc_num_handles(self.pid)
        except OSError as err:
            if is_permission_err(err):
                debug("attempting num_handles() fallback (slower)")
                return self._proc_info()[pinfo_map['num_handles']]
            raise

    @wrap_exceptions
    def num_ctx_switches(self):
        ctx_switches = self._proc_info()[pinfo_map['ctx_switches']]
        # only voluntary ctx switches are supported
        return _common.pctxsw(ctx_switches, 0)
PKok\�ɞ1�� netifaces-0.11.0.dist-info/WHEELnu�[���Wheel-Version: 1.0
Generator: bdist_wheel (0.36.2)
Root-Is-Purelib: false
Tag: cp39-cp39-manylinux_2_5_x86_64
Tag: cp39-cp39-manylinux1_x86_64

PKok\���**"netifaces-0.11.0.dist-info/LICENSEnu�[���Copyright (c) 2007-2018 Alastair Houghton

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

File: netifaces-0.11.0.dist-info/RECORD

netifaces-0.11.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
netifaces-0.11.0.dist-info/LICENSE,sha256=sguJUWS3K5zAFw5sDWCxT5qqzGhiwFVu7UHqo0iINwU,1066
netifaces-0.11.0.dist-info/METADATA,sha256=6Bt05nLw45zo4zoDnKuyNyFePxJdOG25S1chH-5aRxg,8951
netifaces-0.11.0.dist-info/RECORD,,
netifaces-0.11.0.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
netifaces-0.11.0.dist-info/WHEEL,sha256=mpPt74xeHFyJyXQeJqOMUfs_5y9wKBGl7ZdT5s47bhE,144
netifaces-0.11.0.dist-info/top_level.txt,sha256=PqMTaIuWtSjkdQHX6lH1Lmpv2aqBUYAGqATB8z3A6TQ,10
netifaces-0.11.0.dist-info/zip-safe,sha256=AbpHGcgLb-kRsJGnwFEktk7uzpZOCcBY74-YBdrKVGs,1
netifaces.cpython-39-x86_64-linux-gnu.so,sha256=9HiqVAHU4w1_WXrDxHO4U-wqaoOZr3Vzjr9rKPgAqgU,67181

File: netifaces-0.11.0.dist-info/METADATA

Metadata-Version: 2.1
Name: netifaces
Version: 0.11.0
Summary: Portable network interface information.
Home-page: https://github.com/al45tair/netifaces
Author: Alastair Houghton
Author-email: alastair@alastairs-place.net
License: MIT License
Platform: UNKNOWN
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Topic :: System :: Networking
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.4
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
License-File: LICENSE

netifaces 0.11.0
================

+-------------+------------------+
| Linux/macOS | |BuildStatus|    |
+-------------+------------------+
| Windows     | |WinBuildStatus| |
+-------------+------------------+

.. |BuildStatus| image:: https://travis-ci.org/al45tair/netifaces.svg?branch=master
   :target: https://travis-ci.org/al45tair/netifaces
   :alt: Build Status (Linux/Mac)

.. |WinBuildStatus| image:: https://ci.appveyor.com/api/projects/status/3ctn1bl0aigpfjoo/branch/master?svg=true
   :target: https://ci.appveyor.com/project/al45tair/netifaces/branch/master
   :alt: Build Status (Windows)

.. warning::

   netifaces needs a new maintainer.  al45tair is no longer able to maintain it
   or make new releases due to work commitments.

1. What is this?
----------------

It's been annoying me for some time that there's no easy way to get the
address(es) of the machine's network interfaces from Python.  There is
a good reason for this difficulty, which is that it is virtually impossible
to do so in a portable manner.  However, it seems to me that there should
be a package you can easy_install that will take care of working out the
details of doing so on the machine you're using, then you can get on with
writing Python code without concerning yourself with the nitty gritty of
system-dependent low-level networking APIs.

This package attempts to solve that problem.

2. How do I use it?
-------------------

First you need to install it, which you can do by typing::

  tar xvzf netifaces-0.11.0.tar.gz
  cd netifaces-0.11.0
  python setup.py install

**Note that you will need the relevant developer tools for your platform**,
as netifaces is written in C and installing this way will compile the extension.

Once that's done, you'll need to start Python and do something like the
following::

>>> import netifaces

Then if you enter

>>> netifaces.interfaces()
['lo0', 'gif0', 'stf0', 'en0', 'en1', 'fw0']

you'll see the list of interface identifiers for your machine.

You can ask for the addresses of a particular interface by doing

>>> netifaces.ifaddresses('lo0')
{18: [{'addr': ''}], 2: [{'peer': '127.0.0.1', 'netmask': '255.0.0.0', 'addr': '127.0.0.1'}], 30: [{'peer': '::1', 'netmask': 'ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff', 'addr': '::1'}, {'peer': '', 'netmask': 'ffff:ffff:ffff:ffff::', 'addr': 'fe80::1%lo0'}]}

Hmmmm.  That result looks a bit cryptic; let's break it apart and explain
what each piece means.  It returned a dictionary, so let's look there first::

  { 18: [...], 2: [...], 30: [...] }

Each of the numbers refers to a particular address family.  In this case, we
have three address families listed; on my system, 18 is ``AF_LINK`` (which means
the link layer interface, e.g. Ethernet), 2 is ``AF_INET`` (normal Internet
addresses), and 30 is ``AF_INET6`` (IPv6).

But wait!  Don't use these numbers in your code.  The numeric values here are
system dependent; fortunately, I thought of that when writing netifaces, so
the module declares a range of values that you might need.  e.g.

>>> netifaces.AF_LINK
18

Again, on your system, the number may be different.
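
Because the family numbers vary across systems, it's safest to index the result only through the ``netifaces.AF_*`` constants and to treat missing families as empty.  A small sketch of that pattern (``addresses_for`` is an illustrative helper, not part of netifaces; ``2`` stands in for ``AF_INET`` on the example system):

```python
def addresses_for(addrs, family):
    """Given the dict from netifaces.ifaddresses(), return the 'addr'
    strings for one address family, or [] if the family is absent."""
    return [entry["addr"] for entry in addrs.get(family, [])]

# Sample shaped like the ifaddresses('lo0') output above
sample = {2: [{"peer": "127.0.0.1", "netmask": "255.0.0.0", "addr": "127.0.0.1"}]}
print(addresses_for(sample, 2))    # ['127.0.0.1']
print(addresses_for(sample, 18))   # [] -- no AF_LINK entry in this sample
```

In real code you would pass ``netifaces.ifaddresses(iface)`` and ``netifaces.AF_INET`` instead of the hard-coded sample and number.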

So, what we've established is that the dictionary that's returned has one
entry for each address family for which this interface has an address.  Let's
take a look at the ``AF_INET`` addresses now:

>>> addrs = netifaces.ifaddresses('lo0')
>>> addrs[netifaces.AF_INET]
[{'peer': '127.0.0.1', 'netmask': '255.0.0.0', 'addr': '127.0.0.1'}]

You might be wondering why this value is a list.  The reason is that it's
possible for an interface to have more than one address, even within the
same family.  I'll say that again: *you can have more than one address of
the same type associated with each interface*.

*Asking for "the" address of a particular interface doesn't make sense.*

Right, so, we can see that this particular interface only has one address,
and, because it's a loopback interface, it's point-to-point and therefore
has a *peer* address rather than a broadcast address.

Let's look at a more interesting interface.

>>> addrs = netifaces.ifaddresses('en0')
>>> addrs[netifaces.AF_INET]
[{'broadcast': '10.15.255.255', 'netmask': '255.240.0.0', 'addr': '10.0.1.4'}, {'broadcast': '192.168.0.255', 'addr': '192.168.0.47'}]

This interface has two addresses (see, I told you...).  Both of them are
regular IPv4 addresses, although in one case the netmask has been changed
from its default.  The netmask *may not* appear on your system if it's set
to the default for the address range.

Because this interface isn't point-to-point, it also has broadcast addresses.

Now, say we want, instead of the IP addresses, to get the MAC address; that
is, the hardware address of the Ethernet adapter running this interface.  We
can do

>>> addrs[netifaces.AF_LINK]
[{'addr': '00:12:34:56:78:9a'}]

Note that this may not be available on platforms without getifaddrs(), unless
they happen to implement ``SIOCGIFHWADDR``.  Note also that you just get the
address; it's unlikely that you'll see anything else with an ``AF_LINK`` address.
Oh, and don't assume that all ``AF_LINK`` addresses are Ethernet; you might, for
instance, be on a Mac, in which case:

>>> addrs = netifaces.ifaddresses('fw0')
>>> addrs[netifaces.AF_LINK]
[{'addr': '00:12:34:56:78:9a:bc:de'}]

No, that isn't an exceptionally long Ethernet MAC address---it's a FireWire
address.

As of version 0.10.0, you can also obtain a list of gateways on your
machine:

>>> netifaces.gateways()
{2: [('10.0.1.1', 'en0', True), ('10.2.1.1', 'en1', False)], 30: [('fe80::1', 'en0', True)], 'default': { 2: ('10.0.1.1', 'en0'), 30: ('fe80::1', 'en0') }}

This dictionary is keyed on address family---in this case, ``AF_INET``---and
each entry is a list of gateways as ``(address, interface, is_default)`` tuples.
Notice that here we have two separate gateways for IPv4 (``AF_INET``); some
operating systems support configurations like this and can either route packets
based on their source, or based on administratively configured routing tables.

For convenience, we also allow you to index the dictionary with the special
value ``'default'``, which returns a dictionary mapping address families to the
default gateway in each case.  Thus you can get the default IPv4 gateway with

>>> gws = netifaces.gateways()
>>> gws['default'][netifaces.AF_INET]
('10.0.1.1', 'en0')

Do note that there may be no default gateway for any given address family;
this is currently very common for IPv6 and much less common for IPv4 but it
can happen even for ``AF_INET``.
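
Given that, code should not assume a default entry exists for every family; ``dict.get`` handles the absent case cleanly.  A minimal sketch (``default_gateway`` is an illustrative helper, not part of netifaces; ``2`` and ``30`` stand in for ``AF_INET`` and ``AF_INET6`` on the example system):

```python
def default_gateway(gws, family):
    """Given the dict from netifaces.gateways(), return the
    (address, interface) default for a family, or None if absent."""
    return gws.get("default", {}).get(family)

# Sample shaped like the gateways() output above
sample = {"default": {2: ("10.0.1.1", "en0")}}
print(default_gateway(sample, 2))    # ('10.0.1.1', 'en0')
print(default_gateway(sample, 30))   # None -- no IPv6 default in this sample
```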

BTW, if you're trying to configure your machine to have multiple gateways for
the same address family, it's a very good idea to check the documentation for
your operating system *very* carefully, as some systems become extremely
confused or route packets in a non-obvious manner.

I'm very interested in hearing from anyone (on any platform) for whom the
``gateways()`` method doesn't produce the expected results.  It's quite
complicated extracting this information from the operating system (whichever
operating system we're talking about), and so I expect there's at least one
system out there where this just won't work.

3. This is great!  What platforms does it work on?
--------------------------------------------------

It gets regular testing on OS X, Linux and Windows.  It has also been used
successfully on Solaris, and it's expected to work properly on other UNIX-like
systems as well.  If you are running something that is not supported, and
wish to contribute a patch, please use Github to send a pull request.

4. What license is this under?
------------------------------

It's an MIT-style license. See `LICENSE <./LICENSE>`_.

5. Why the jump to 0.10.0?
--------------------------

Because someone released a fork of netifaces with the version number 0.9.0.
Hopefully skipping ahead will avoid any confusion.  In addition, starting
with 0.10.0, Python 3 is supported and other features/bugfixes have been
included as well.  See the CHANGELOG for a more complete list of changes.


File: netifaces-0.11.0.dist-info/REQUESTED
(empty file)

File: netifaces-0.11.0.dist-info/top_level.txt

netifaces

File: netifaces-0.11.0.dist-info/zip-safe
(empty file)

File: netifaces-0.11.0.dist-info/INSTALLER

pip

File: psutil-6.1.0.dist-info/WHEEL

Wheel-Version: 1.0
Generator: bdist_wheel (0.37.1)
Root-Is-Purelib: false
Tag: cp36-abi3-manylinux_2_12_x86_64
Tag: cp36-abi3-manylinux2010_x86_64
Tag: cp36-abi3-manylinux_2_17_x86_64
Tag: cp36-abi3-manylinux2014_x86_64

File: psutil-6.1.0.dist-info/LICENSE

BSD 3-Clause License

Copyright (c) 2009, Jay Loden, Dave Daeschler, Giampaolo Rodola
All rights reserved.

Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:

 * Redistributions of source code must retain the above copyright notice, this
   list of conditions and the following disclaimer.

 * Redistributions in binary form must reproduce the above copyright notice,
   this list of conditions and the following disclaimer in the documentation
   and/or other materials provided with the distribution.

 * Neither the name of the psutil authors nor the names of its contributors
   may be used to endorse or promote products derived from this software without
   specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

File: psutil-6.1.0.dist-info/RECORD

psutil-6.1.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
psutil-6.1.0.dist-info/LICENSE,sha256=uJwGOzeG4o4MCjjxkx22H-015p3SopZvvs_-4PRsjRA,1548
psutil-6.1.0.dist-info/METADATA,sha256=9ro1bKVP9BN1l8ocnFXMpsbM_WcqDJdsw3BxHQomUug,22295
psutil-6.1.0.dist-info/RECORD,,
psutil-6.1.0.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
psutil-6.1.0.dist-info/WHEEL,sha256=rgpVBmjjvbINeGKCkWEGd3f40VHMTsDkQj1Lgil82zE,221
psutil-6.1.0.dist-info/top_level.txt,sha256=gCNhn57wzksDjSAISmgMJ0aiXzQulk0GJhb2-BAyYgw,7
psutil/__init__.py,sha256=zyBpkEYkWwlvVnnElydLzmm8umVweXlT4HsBzcTBA9c,89154
psutil/__pycache__/__init__.cpython-39.pyc,,
psutil/__pycache__/_common.cpython-39.pyc,,
psutil/__pycache__/_compat.cpython-39.pyc,,
psutil/__pycache__/_psaix.cpython-39.pyc,,
psutil/__pycache__/_psbsd.cpython-39.pyc,,
psutil/__pycache__/_pslinux.cpython-39.pyc,,
psutil/__pycache__/_psosx.cpython-39.pyc,,
psutil/__pycache__/_psposix.cpython-39.pyc,,
psutil/__pycache__/_pssunos.cpython-39.pyc,,
psutil/__pycache__/_pswindows.cpython-39.pyc,,
psutil/_common.py,sha256=KetpFrG7dgXOVqfG9O1XYPlLrGYPnCHUix5i-Kh213s,29739
psutil/_compat.py,sha256=zrxveFAR5_XqDUStcyXmTBk1rWs0jxLIiOjQnj7SwUk,15253
psutil/_psaix.py,sha256=auBiK5gCD4fOjqrjTwckg7wfOHw6vv3f0hIkGvNcBC4,18663
psutil/_psbsd.py,sha256=Yn4F-8jXZdbJ01R5xWfUESXUuhWUKiPa3JL1MiLA7-E,32205
psutil/_pslinux.py,sha256=PNh1dKVJ9rTXWLHAmeOoc00oI-SQ6QeZx0Lnxeh0JnA,88594
psutil/_psosx.py,sha256=js281YWrza5x0_EeYhjLLypDqzmiehZASGpUkxNhKqw,16136
psutil/_psposix.py,sha256=X9rd7WHKQ6mUAn2ihb03MCnzrBtQsrPRkCouExmuagQ,8235
psutil/_pssunos.py,sha256=Jxefif4mydfeOGKsyN7H7L5QVE4QhlhI8YXOX1HVAKI,25479
psutil/_psutil_linux.abi3.so,sha256=UVS0vmhO15bhmHJ-yUC4brnVLWENGD1dChvfRIv1fzI,115320
psutil/_psutil_posix.abi3.so,sha256=bLoKDfoWp8Pmo_QGNDssv7kIOLk2hnF6HZHYlRWatOI,71640
psutil/_pswindows.py,sha256=EPkkDJi0FZUzpkQWfAjk-oK2kvBI8RAhcPEFh3unLFA,38124
psutil/tests/__init__.py,sha256=2A3mFjGilu2GXXJBJaz2KLMime37ulCtgYWzjvrQLag,66696
psutil/tests/__main__.py,sha256=GYT-hlMnWDtybkJ76DqQcjXPr0jnLeZDTe0lVVeDb7o,309
psutil/tests/__pycache__/__init__.cpython-39.pyc,,
psutil/tests/__pycache__/__main__.cpython-39.pyc,,
psutil/tests/__pycache__/test_aix.cpython-39.pyc,,
psutil/tests/__pycache__/test_bsd.cpython-39.pyc,,
psutil/tests/__pycache__/test_connections.cpython-39.pyc,,
psutil/tests/__pycache__/test_contracts.cpython-39.pyc,,
psutil/tests/__pycache__/test_linux.cpython-39.pyc,,
psutil/tests/__pycache__/test_memleaks.cpython-39.pyc,,
psutil/tests/__pycache__/test_misc.cpython-39.pyc,,
psutil/tests/__pycache__/test_osx.cpython-39.pyc,,
psutil/tests/__pycache__/test_posix.cpython-39.pyc,,
psutil/tests/__pycache__/test_process.cpython-39.pyc,,
psutil/tests/__pycache__/test_process_all.cpython-39.pyc,,
psutil/tests/__pycache__/test_sunos.cpython-39.pyc,,
psutil/tests/__pycache__/test_system.cpython-39.pyc,,
psutil/tests/__pycache__/test_testutils.cpython-39.pyc,,
psutil/tests/__pycache__/test_unicode.cpython-39.pyc,,
psutil/tests/__pycache__/test_windows.cpython-39.pyc,,
psutil/tests/test_aix.py,sha256=x-klXNziKeLIj4eF4kPh-mObIlnuoAJWCA9y96JAXoQ,4035
psutil/tests/test_bsd.py,sha256=94Y34BUn0sBw-87NF0IIeU_NanOEwWDImRhkptQ4VYM,20222
psutil/tests/test_connections.py,sha256=gRqGAkbIjidumXVHVzlfZ0yUINAj9sbDeIbAwiGS6K0,21252
psutil/tests/test_contracts.py,sha256=FSFZpNeJumY9Lgerih3079pg4F80TwR7ftNS5nhFhqg,12577
psutil/tests/test_linux.py,sha256=Hng2l3UMYliIOdc3pF0kiWH6FJ0yGdB_WC-jPAPkbDE,91227
psutil/tests/test_memleaks.py,sha256=q6mV6Mtl_GH6HjQs33hrwaX78oSHZ9oojPyDTYLSBRQ,15411
psutil/tests/test_misc.py,sha256=goOFBTKBYJin5INIRttCEbKWOd64l_XenKK689w8zEY,35975
psutil/tests/test_osx.py,sha256=HeEXXZP3wsWdGZKu870nRtw5ndprn3zuqtvr-o9RpRg,6133
psutil/tests/test_posix.py,sha256=O6zAJEES8CrdSg8s3FWiRTv0xmf3B5_D88XAkxiJXYQ,17408
psutil/tests/test_process.py,sha256=tEirr5yZZbfhMtBZ2Ny5w3Xc1zU0L4DXo4eYTuG08go,63086
psutil/tests/test_process_all.py,sha256=SSbBcdjN51FxV9D_ZDsfULWY1xuV0wAlXVo2-8cw_5M,18616
psutil/tests/test_sunos.py,sha256=tf9OOQyidTFA4WAp2eZoewvwXy95MmTD06JURgnH7ig,1192
psutil/tests/test_system.py,sha256=SogWUMyMK_manymW0Jz_atWDLKNbGVl_krMlwP6eu_k,36431
psutil/tests/test_testutils.py,sha256=MQprQYqROcMc_-5e0GT37uukCgmYGhLQfYTx82nqiYg,18585
psutil/tests/test_unicode.py,sha256=OyGLTxCU8WsSGiNYkvZkWtAzl6lZVBt-yixxSyuqXAk,12603
psutil/tests/test_windows.py,sha256=O3Tt5nCaamkdzEcTSjoopp2NoN6tkWQ6Y66GZwSKEqg,34008

File: psutil-6.1.0.dist-info/METADATA

Metadata-Version: 2.1
Name: psutil
Version: 6.1.0
Summary: Cross-platform lib for process and system monitoring in Python.
Home-page: https://github.com/giampaolo/psutil
Author: Giampaolo Rodola
Author-email: g.rodola@gmail.com
License: BSD-3-Clause
Keywords: ps,top,kill,free,lsof,netstat,nice,tty,ionice,uptime,taskmgr,process,df,iotop,iostat,ifconfig,taskset,who,pidof,pmap,smem,pstree,monitoring,ulimit,prlimit,smem,performance,metrics,agent,observability
Platform: Platform Independent
Classifier: Development Status :: 5 - Production/Stable
Classifier: Environment :: Console
Classifier: Environment :: Win32 (MS Windows)
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Information Technology
Classifier: Intended Audience :: System Administrators
Classifier: License :: OSI Approved :: BSD License
Classifier: Operating System :: MacOS :: MacOS X
Classifier: Operating System :: Microsoft :: Windows :: Windows 10
Classifier: Operating System :: Microsoft :: Windows :: Windows 7
Classifier: Operating System :: Microsoft :: Windows :: Windows 8
Classifier: Operating System :: Microsoft :: Windows :: Windows 8.1
Classifier: Operating System :: Microsoft :: Windows :: Windows Server 2003
Classifier: Operating System :: Microsoft :: Windows :: Windows Server 2008
Classifier: Operating System :: Microsoft :: Windows :: Windows Vista
Classifier: Operating System :: Microsoft
Classifier: Operating System :: OS Independent
Classifier: Operating System :: POSIX :: AIX
Classifier: Operating System :: POSIX :: BSD :: FreeBSD
Classifier: Operating System :: POSIX :: BSD :: NetBSD
Classifier: Operating System :: POSIX :: BSD :: OpenBSD
Classifier: Operating System :: POSIX :: BSD
Classifier: Operating System :: POSIX :: Linux
Classifier: Operating System :: POSIX :: SunOS/Solaris
Classifier: Operating System :: POSIX
Classifier: Programming Language :: C
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: Implementation :: PyPy
Classifier: Programming Language :: Python
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: Software Development :: Libraries
Classifier: Topic :: System :: Benchmark
Classifier: Topic :: System :: Hardware :: Hardware Drivers
Classifier: Topic :: System :: Hardware
Classifier: Topic :: System :: Monitoring
Classifier: Topic :: System :: Networking :: Monitoring :: Hardware Watchdog
Classifier: Topic :: System :: Networking :: Monitoring
Classifier: Topic :: System :: Networking
Classifier: Topic :: System :: Operating System
Classifier: Topic :: System :: Systems Administration
Classifier: Topic :: Utilities
Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*
Description-Content-Type: text/x-rst
License-File: LICENSE
Provides-Extra: dev
Requires-Dist: black ; extra == 'dev'
Requires-Dist: check-manifest ; extra == 'dev'
Requires-Dist: coverage ; extra == 'dev'
Requires-Dist: packaging ; extra == 'dev'
Requires-Dist: pylint ; extra == 'dev'
Requires-Dist: pyperf ; extra == 'dev'
Requires-Dist: pypinfo ; extra == 'dev'
Requires-Dist: pytest-cov ; extra == 'dev'
Requires-Dist: requests ; extra == 'dev'
Requires-Dist: rstcheck ; extra == 'dev'
Requires-Dist: ruff ; extra == 'dev'
Requires-Dist: sphinx ; extra == 'dev'
Requires-Dist: sphinx-rtd-theme ; extra == 'dev'
Requires-Dist: toml-sort ; extra == 'dev'
Requires-Dist: twine ; extra == 'dev'
Requires-Dist: virtualenv ; extra == 'dev'
Requires-Dist: wheel ; extra == 'dev'
Provides-Extra: test
Requires-Dist: pytest ; extra == 'test'
Requires-Dist: pytest-xdist ; extra == 'test'
Requires-Dist: setuptools ; extra == 'test'

|  |downloads| |stars| |forks| |contributors| |coverage|
|  |version| |py-versions| |packages| |license|
|  |github-actions-wheels|  |github-actions-bsd| |appveyor| |doc| |twitter| |tidelift|

.. |downloads| image:: https://img.shields.io/pypi/dm/psutil.svg
    :target: https://pepy.tech/project/psutil
    :alt: Downloads

.. |stars| image:: https://img.shields.io/github/stars/giampaolo/psutil.svg
    :target: https://github.com/giampaolo/psutil/stargazers
    :alt: Github stars

.. |forks| image:: https://img.shields.io/github/forks/giampaolo/psutil.svg
    :target: https://github.com/giampaolo/psutil/network/members
    :alt: Github forks

.. |contributors| image:: https://img.shields.io/github/contributors/giampaolo/psutil.svg
    :target: https://github.com/giampaolo/psutil/graphs/contributors
    :alt: Contributors

.. |github-actions-wheels| image:: https://img.shields.io/github/actions/workflow/status/giampaolo/psutil/.github/workflows/build.yml.svg?label=Linux%2C%20macOS%2C%20Windows
    :target: https://github.com/giampaolo/psutil/actions?query=workflow%3Abuild
    :alt: Linux, macOS, Windows

.. |github-actions-bsd| image:: https://img.shields.io/github/actions/workflow/status/giampaolo/psutil/.github/workflows/bsd.yml.svg?label=FreeBSD,%20NetBSD,%20OpenBSD
    :target: https://github.com/giampaolo/psutil/actions?query=workflow%3Absd-tests
    :alt: FreeBSD, NetBSD, OpenBSD

.. |appveyor| image:: https://img.shields.io/appveyor/build/giampaolo/psutil/master.svg?maxAge=3600&label=Windows%20(py2)
    :target: https://ci.appveyor.com/project/giampaolo/psutil
    :alt: Windows (Appveyor)

.. |coverage| image:: https://coveralls.io/repos/github/giampaolo/psutil/badge.svg?branch=master
    :target: https://coveralls.io/github/giampaolo/psutil?branch=master
    :alt: Test coverage (coveralls.io)

.. |doc| image:: https://readthedocs.org/projects/psutil/badge/?version=latest
    :target: https://psutil.readthedocs.io/en/latest/
    :alt: Documentation Status

.. |version| image:: https://img.shields.io/pypi/v/psutil.svg?label=pypi
    :target: https://pypi.org/project/psutil
    :alt: Latest version

.. |py-versions| image:: https://img.shields.io/pypi/pyversions/psutil.svg
    :alt: Supported Python versions

.. |packages| image:: https://repology.org/badge/tiny-repos/python:psutil.svg
    :target: https://repology.org/metapackage/python:psutil/versions
    :alt: Binary packages

.. |license| image:: https://img.shields.io/pypi/l/psutil.svg
    :target: https://github.com/giampaolo/psutil/blob/master/LICENSE
    :alt: License

.. |twitter| image:: https://img.shields.io/twitter/follow/grodola.svg?label=follow&style=flat&logo=twitter&logoColor=4FADFF
    :target: https://twitter.com/grodola
    :alt: Twitter Follow

.. |tidelift| image:: https://tidelift.com/badges/github/giampaolo/psutil?style=flat
    :target: https://tidelift.com/subscription/pkg/pypi-psutil?utm_source=pypi-psutil&utm_medium=referral&utm_campaign=readme
    :alt: Tidelift

-----

Quick links
===========

- `Home page <https://github.com/giampaolo/psutil>`_
- `Install <https://github.com/giampaolo/psutil/blob/master/INSTALL.rst>`_
- `Documentation <http://psutil.readthedocs.io>`_
- `Download <https://pypi.org/project/psutil/#files>`_
- `Forum <http://groups.google.com/group/psutil/topics>`_
- `StackOverflow <https://stackoverflow.com/questions/tagged/psutil>`_
- `Blog <https://gmpy.dev/tags/psutil>`_
- `What's new <https://github.com/giampaolo/psutil/blob/master/HISTORY.rst>`_


Summary
=======

psutil (process and system utilities) is a cross-platform library for
retrieving information on **running processes** and **system utilization**
(CPU, memory, disks, network, sensors) in Python.
It is useful mainly for **system monitoring**, **profiling and limiting process
resources** and **management of running processes**.
It implements many functionalities offered by classic UNIX command line tools
such as *ps, top, iotop, lsof, netstat, ifconfig, free* and others.
psutil currently supports the following platforms:

- **Linux**
- **Windows**
- **macOS**
- **FreeBSD, OpenBSD**, **NetBSD**
- **Sun Solaris**
- **AIX**

Supported Python versions are **2.7**, **3.6+** and
`PyPy <http://pypy.org/>`__.

Funding
=======

While psutil is free software and will always be, the project would benefit
immensely from some funding.
Keeping up with bug reports and maintenance has become hardly sustainable for
me alone in terms of time.
If you're a company that's making significant use of psutil you can consider
becoming a sponsor via `GitHub Sponsors <https://github.com/sponsors/giampaolo>`__,
`Open Collective <https://opencollective.com/psutil>`__ or
`PayPal <https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=A9ZS7PKKRM3S8>`__
and have your logo displayed in here and psutil `doc <https://psutil.readthedocs.io>`__.

Sponsors
========

.. image:: https://github.com/giampaolo/psutil/raw/master/docs/_static/tidelift-logo.png
  :width: 200
  :alt: Tidelift logo

`Add your logo <https://github.com/sponsors/giampaolo>`__.

Example usages
==============

This represents pretty much the whole psutil API.

CPU
---

.. code-block:: python

    >>> import psutil
    >>>
    >>> psutil.cpu_times()
    scputimes(user=3961.46, nice=169.729, system=2150.659, idle=16900.540, iowait=629.59, irq=0.0, softirq=19.42, steal=0.0, guest=0, guest_nice=0.0)
    >>>
    >>> for x in range(3):
    ...     psutil.cpu_percent(interval=1)
    ...
    4.0
    5.9
    3.8
    >>>
    >>> for x in range(3):
    ...     psutil.cpu_percent(interval=1, percpu=True)
    ...
    [4.0, 6.9, 3.7, 9.2]
    [7.0, 8.5, 2.4, 2.1]
    [1.2, 9.0, 9.9, 7.2]
    >>>
    >>> for x in range(3):
    ...     psutil.cpu_times_percent(interval=1, percpu=False)
    ...
    scputimes(user=1.5, nice=0.0, system=0.5, idle=96.5, iowait=1.5, irq=0.0, softirq=0.0, steal=0.0, guest=0.0, guest_nice=0.0)
    scputimes(user=1.0, nice=0.0, system=0.0, idle=99.0, iowait=0.0, irq=0.0, softirq=0.0, steal=0.0, guest=0.0, guest_nice=0.0)
    scputimes(user=2.0, nice=0.0, system=0.0, idle=98.0, iowait=0.0, irq=0.0, softirq=0.0, steal=0.0, guest=0.0, guest_nice=0.0)
    >>>
    >>> psutil.cpu_count()
    4
    >>> psutil.cpu_count(logical=False)
    2
    >>>
    >>> psutil.cpu_stats()
    scpustats(ctx_switches=20455687, interrupts=6598984, soft_interrupts=2134212, syscalls=0)
    >>>
    >>> psutil.cpu_freq()
    scpufreq(current=931.42925, min=800.0, max=3500.0)
    >>>
    >>> psutil.getloadavg()  # also on Windows (emulated)
    (3.14, 3.89, 4.67)
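
The per-CPU percentage lists shown above are plain Python lists, so post-processing them is ordinary list work.  For instance, a tiny helper (``busiest_cpu`` is an illustrative sketch, not a psutil API) to find the most loaded logical CPU:

```python
def busiest_cpu(percpu):
    """Index of the highest value in a cpu_percent(percpu=True) list."""
    return max(range(len(percpu)), key=percpu.__getitem__)

# Using one of the sample percpu readings above
print(busiest_cpu([4.0, 6.9, 3.7, 9.2]))  # 3
```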

Memory
------

.. code-block:: python

    >>> psutil.virtual_memory()
    svmem(total=10367352832, available=6472179712, percent=37.6, used=8186245120, free=2181107712, active=4748992512, inactive=2758115328, buffers=790724608, cached=3500347392, shared=787554304)
    >>> psutil.swap_memory()
    sswap(total=2097147904, used=296128512, free=1801019392, percent=14.1, sin=304193536, sout=677842944)
    >>>
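
The raw byte counts returned by ``virtual_memory()`` and ``swap_memory()`` are easier to eyeball once converted to a human-readable form.  A small conversion helper in the spirit of psutil's example scripts (this exact ``bytes2human`` is a sketch, not part of the psutil API):

```python
def bytes2human(n):
    """Render a byte count like svmem.total in short human form,
    e.g. 10367352832 -> '9.7G' (binary units: 1K = 1024)."""
    symbols = ('K', 'M', 'G', 'T', 'P')
    for i, s in enumerate(reversed(symbols)):
        factor = 1 << ((len(symbols) - i) * 10)
        if n >= factor:
            return '%.1f%s' % (n / factor, s)
    return '%sB' % n

# The svmem total from the sample output above
print(bytes2human(10367352832))  # '9.7G'
```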

Disks
-----

.. code-block:: python

    >>> psutil.disk_partitions()
    [sdiskpart(device='/dev/sda1', mountpoint='/', fstype='ext4', opts='rw,nosuid'),
     sdiskpart(device='/dev/sda2', mountpoint='/home', fstype='ext', opts='rw')]
    >>>
    >>> psutil.disk_usage('/')
    sdiskusage(total=21378641920, used=4809781248, free=15482871808, percent=22.5)
    >>>
    >>> psutil.disk_io_counters(perdisk=False)
    sdiskio(read_count=719566, write_count=1082197, read_bytes=18626220032, write_bytes=24081764352, read_time=5023392, write_time=63199568, read_merged_count=619166, write_merged_count=812396, busy_time=4523412)
    >>>

Network
-------

.. code-block:: python

    >>> psutil.net_io_counters(pernic=True)
    {'eth0': netio(bytes_sent=485291293, bytes_recv=6004858642, packets_sent=3251564, packets_recv=4787798, errin=0, errout=0, dropin=0, dropout=0),
     'lo': netio(bytes_sent=2838627, bytes_recv=2838627, packets_sent=30567, packets_recv=30567, errin=0, errout=0, dropin=0, dropout=0)}
    >>>
    >>> psutil.net_connections(kind='tcp')
    [sconn(fd=115, family=<AddressFamily.AF_INET: 2>, type=<SocketType.SOCK_STREAM: 1>, laddr=addr(ip='10.0.0.1', port=48776), raddr=addr(ip='93.186.135.91', port=80), status='ESTABLISHED', pid=1254),
     sconn(fd=117, family=<AddressFamily.AF_INET: 2>, type=<SocketType.SOCK_STREAM: 1>, laddr=addr(ip='10.0.0.1', port=43761), raddr=addr(ip='72.14.234.100', port=80), status='CLOSING', pid=2987),
     ...]
    >>>
    >>> psutil.net_if_addrs()
    {'lo': [snicaddr(family=<AddressFamily.AF_INET: 2>, address='127.0.0.1', netmask='255.0.0.0', broadcast='127.0.0.1', ptp=None),
            snicaddr(family=<AddressFamily.AF_INET6: 10>, address='::1', netmask='ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff', broadcast=None, ptp=None),
            snicaddr(family=<AddressFamily.AF_LINK: 17>, address='00:00:00:00:00:00', netmask=None, broadcast='00:00:00:00:00:00', ptp=None)],
     'wlan0': [snicaddr(family=<AddressFamily.AF_INET: 2>, address='192.168.1.3', netmask='255.255.255.0', broadcast='192.168.1.255', ptp=None),
               snicaddr(family=<AddressFamily.AF_INET6: 10>, address='fe80::c685:8ff:fe45:641%wlan0', netmask='ffff:ffff:ffff:ffff::', broadcast=None, ptp=None),
               snicaddr(family=<AddressFamily.AF_LINK: 17>, address='c4:85:08:45:06:41', netmask=None, broadcast='ff:ff:ff:ff:ff:ff', ptp=None)]}
    >>>
    >>> psutil.net_if_stats()
    {'lo': snicstats(isup=True, duplex=<NicDuplex.NIC_DUPLEX_UNKNOWN: 0>, speed=0, mtu=65536, flags='up,loopback,running'),
     'wlan0': snicstats(isup=True, duplex=<NicDuplex.NIC_DUPLEX_FULL: 2>, speed=100, mtu=1500, flags='up,broadcast,running,multicast')}
    >>>
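
Since ``net_io_counters(pernic=True)`` returns a plain dict of namedtuples, aggregating across NICs is straightforward.  A sketch using stand-in tuples shaped like the output above (``total_io`` is an illustrative helper, not a psutil API):

```python
from collections import namedtuple

def total_io(pernic):
    """Sum bytes_sent/bytes_recv over the per-NIC dict returned by
    psutil.net_io_counters(pernic=True)."""
    sent = sum(c.bytes_sent for c in pernic.values())
    recv = sum(c.bytes_recv for c in pernic.values())
    return sent, recv

# Stand-in for the netio namedtuples in the sample output above
netio = namedtuple("netio", "bytes_sent bytes_recv")
sample = {"eth0": netio(485291293, 6004858642),
          "lo": netio(2838627, 2838627)}
print(total_io(sample))  # (488129920, 6007697269)
```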

Sensors
-------

.. code-block:: python

    >>> import psutil
    >>> psutil.sensors_temperatures()
    {'acpitz': [shwtemp(label='', current=47.0, high=103.0, critical=103.0)],
     'asus': [shwtemp(label='', current=47.0, high=None, critical=None)],
     'coretemp': [shwtemp(label='Physical id 0', current=52.0, high=100.0, critical=100.0),
                  shwtemp(label='Core 0', current=45.0, high=100.0, critical=100.0)]}
    >>>
    >>> psutil.sensors_fans()
    {'asus': [sfan(label='cpu_fan', current=3200)]}
    >>>
    >>> psutil.sensors_battery()
    sbattery(percent=93, secsleft=16628, power_plugged=False)
    >>>

Other system info
-----------------

.. code-block:: python

    >>> import psutil
    >>> psutil.users()
    [suser(name='giampaolo', terminal='pts/2', host='localhost', started=1340737536.0, pid=1352),
     suser(name='giampaolo', terminal='pts/3', host='localhost', started=1340737792.0, pid=1788)]
    >>>
    >>> psutil.boot_time()
    1365519115.0
    >>>

Process management
------------------

.. code-block:: python

    >>> import psutil
    >>> psutil.pids()
    [1, 2, 3, 4, 5, 6, 7, 46, 48, 50, 51, 178, 182, 222, 223, 224, 268, 1215,
     1216, 1220, 1221, 1243, 1244, 1301, 1601, 2237, 2355, 2637, 2774, 3932,
     4176, 4177, 4185, 4187, 4189, 4225, 4243, 4245, 4263, 4282, 4306, 4311,
     4312, 4313, 4314, 4337, 4339, 4357, 4358, 4363, 4383, 4395, 4408, 4433,
     4443, 4445, 4446, 5167, 5234, 5235, 5252, 5318, 5424, 5644, 6987, 7054,
     7055, 7071]
    >>>
    >>> p = psutil.Process(7055)
    >>> p
    psutil.Process(pid=7055, name='python3', status='running', started='09:04:44')
    >>> p.pid
    7055
    >>> p.name()
    'python3'
    >>> p.exe()
    '/usr/bin/python3'
    >>> p.cwd()
    '/home/giampaolo'
    >>> p.cmdline()
    ['/usr/bin/python3', 'main.py']
    >>>
    >>> p.ppid()
    7054
    >>> p.parent()
    psutil.Process(pid=7054, name='bash', status='sleeping', started='09:06:44')
    >>> p.parents()
    [psutil.Process(pid=7054, name='bash', status='sleeping', started='09:06:44'),
     psutil.Process(pid=4689, name='gnome-terminal-server', status='sleeping', started='09:06:44'),
     psutil.Process(pid=1, name='systemd', status='sleeping', started='05:56:55')]
    >>> p.children(recursive=True)
    [psutil.Process(pid=29835, name='python3', status='sleeping', started='11:45:38'),
     psutil.Process(pid=29836, name='python3', status='waking', started='11:43:39')]
    >>>
    >>> p.status()
    'running'
    >>> p.create_time()
    1267551141.5019531
    >>> p.terminal()
    '/dev/pts/0'
    >>>
    >>> p.username()
    'giampaolo'
    >>> p.uids()
    puids(real=1000, effective=1000, saved=1000)
    >>> p.gids()
    pgids(real=1000, effective=1000, saved=1000)
    >>>
    >>> p.cpu_times()
    pcputimes(user=1.02, system=0.31, children_user=0.32, children_system=0.1, iowait=0.0)
    >>> p.cpu_percent(interval=1.0)
    12.1
    >>> p.cpu_affinity()
    [0, 1, 2, 3]
    >>> p.cpu_affinity([0, 1])  # set
    >>> p.cpu_num()
    1
    >>>
    >>> p.memory_info()
    pmem(rss=10915840, vms=67608576, shared=3313664, text=2310144, lib=0, data=7262208, dirty=0)
    >>> p.memory_full_info()  # "real" USS memory usage (Linux, macOS, Win only)
    pfullmem(rss=10199040, vms=52133888, shared=3887104, text=2867200, lib=0, data=5967872, dirty=0, uss=6545408, pss=6872064, swap=0)
    >>> p.memory_percent()
    0.7823
    >>> p.memory_maps()
    [pmmap_grouped(path='/lib/x86_64-linux-gnu/libutil-2.15.so', rss=32768, size=2125824, pss=32768, shared_clean=0, shared_dirty=0, private_clean=20480, private_dirty=12288, referenced=32768, anonymous=12288, swap=0),
     pmmap_grouped(path='/lib/x86_64-linux-gnu/libc-2.15.so', rss=3821568, size=3842048, pss=3821568, shared_clean=0, shared_dirty=0, private_clean=0, private_dirty=3821568, referenced=3575808, anonymous=3821568, swap=0),
     pmmap_grouped(path='[heap]',  rss=32768, size=139264, pss=32768, shared_clean=0, shared_dirty=0, private_clean=0, private_dirty=32768, referenced=32768, anonymous=32768, swap=0),
     pmmap_grouped(path='[stack]', rss=2465792, size=2494464, pss=2465792, shared_clean=0, shared_dirty=0, private_clean=0, private_dirty=2465792, referenced=2277376, anonymous=2465792, swap=0),
     ...]
    >>>
    >>> p.io_counters()
    pio(read_count=478001, write_count=59371, read_bytes=700416, write_bytes=69632, read_chars=456232, write_chars=517543)
    >>>
    >>> p.open_files()
    [popenfile(path='/home/giampaolo/monit.py', fd=3, position=0, mode='r', flags=32768),
     popenfile(path='/var/log/monit.log', fd=4, position=235542, mode='a', flags=33793)]
    >>>
    >>> p.net_connections(kind='tcp')
    [pconn(fd=115, family=<AddressFamily.AF_INET: 2>, type=<SocketType.SOCK_STREAM: 1>, laddr=addr(ip='10.0.0.1', port=48776), raddr=addr(ip='93.186.135.91', port=80), status='ESTABLISHED'),
     pconn(fd=117, family=<AddressFamily.AF_INET: 2>, type=<SocketType.SOCK_STREAM: 1>, laddr=addr(ip='10.0.0.1', port=43761), raddr=addr(ip='72.14.234.100', port=80), status='CLOSING')]
    >>>
    >>> p.threads()
    [pthread(id=5234, user_time=22.5, system_time=9.2891),
     pthread(id=5237, user_time=0.0707, system_time=1.1)]
    >>>
    >>> p.num_threads()
    4
    >>> p.num_fds()
    8
    >>> p.num_ctx_switches()
    pctxsw(voluntary=78, involuntary=19)
    >>>
    >>> p.nice()
    0
    >>> p.nice(10)  # set
    >>>
    >>> p.ionice(psutil.IOPRIO_CLASS_IDLE)  # IO priority (Win and Linux only)
    >>> p.ionice()
    pionice(ioclass=<IOPriority.IOPRIO_CLASS_IDLE: 3>, value=0)
    >>>
    >>> p.rlimit(psutil.RLIMIT_NOFILE, (5, 5))  # set resource limits (Linux only)
    >>> p.rlimit(psutil.RLIMIT_NOFILE)
    (5, 5)
    >>>
    >>> p.environ()
    {'LC_PAPER': 'it_IT.UTF-8', 'SHELL': '/bin/bash', 'GREP_OPTIONS': '--color=auto',
     'XDG_CONFIG_DIRS': '/etc/xdg/xdg-ubuntu:/usr/share/upstart/xdg:/etc/xdg',
     ...}
    >>>
    >>> p.as_dict()
    {'status': 'running', 'num_ctx_switches': pctxsw(voluntary=63, involuntary=1), 'pid': 5457, ...}
    >>> p.is_running()
    True
    >>> p.suspend()
    >>> p.resume()
    >>>
    >>> p.terminate()
    >>> p.kill()
    >>> p.wait(timeout=3)
    <Exitcode.EX_OK: 0>
    >>>
    >>> psutil.test()
    USER         PID %CPU %MEM     VSZ     RSS TTY        START    TIME  COMMAND
    root           1  0.0  0.0   24584    2240            Jun17   00:00  init
    root           2  0.0  0.0       0       0            Jun17   00:00  kthreadd
    ...
    giampaolo  31475  0.0  0.0   20760    3024 /dev/pts/0 Jun19   00:00  python2.4
    giampaolo  31721  0.0  2.2  773060  181896            00:04   10:30  chrome
    root       31763  0.0  0.0       0       0            00:05   00:00  kworker/0:1
    >>>
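
The ``rss`` / ``vms`` figures returned by ``memory_info()`` are raw byte counts. A small helper in the spirit of the ``bytes2human`` recipe from psutil's scripts makes them readable (stdlib only):

```python
def bytes2human(n):
    """Format a byte count with binary units, e.g. 10915840 -> '10.4M'."""
    symbols = ("K", "M", "G", "T", "P")
    prefix = {s: 1 << (i + 1) * 10 for i, s in enumerate(symbols)}
    for s in reversed(symbols):
        if n >= prefix[s]:
            return "%.1f%s" % (n / prefix[s], s)
    return "%sB" % n

print(bytes2human(10915840))  # the rss value shown above -> 10.4M
print(bytes2human(512))       # -> 512B
```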

Further process APIs
--------------------

.. code-block:: python

    >>> import psutil
    >>> for proc in psutil.process_iter(['pid', 'name']):
    ...     print(proc.info)
    ...
    {'pid': 1, 'name': 'systemd'}
    {'pid': 2, 'name': 'kthreadd'}
    {'pid': 3, 'name': 'ksoftirqd/0'}
    ...
    >>>
    >>> psutil.pid_exists(3)
    True
    >>>
    >>> def on_terminate(proc):
    ...     print("process {} terminated".format(proc))
    ...
    >>> # waits for multiple processes to terminate
    >>> gone, alive = psutil.wait_procs(procs_list, timeout=3, callback=on_terminate)
    >>>
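
``wait_procs()`` is essentially a poll loop: check each process, move the dead ones to ``gone``, fire the callback for each, sleep, and repeat until the timeout expires. A simplified stdlib-only sketch of that loop; ``FakeProc`` here is a hypothetical stand-in that "exits" after a few checks, not psutil's API:

```python
import time

class FakeProc:
    """Hypothetical stand-in for psutil.Process: 'exits' after ttl checks."""
    def __init__(self, pid, ttl):
        self.pid, self._ttl = pid, ttl
    def is_running(self):
        self._ttl -= 1
        return self._ttl > 0

def wait_procs_sketch(procs, timeout, callback=None, interval=0.01):
    """Poll procs until they all stop or the timeout expires."""
    deadline = time.monotonic() + timeout
    gone, alive = [], list(procs)
    while alive and time.monotonic() < deadline:
        still_alive = []
        for p in alive:
            if p.is_running():
                still_alive.append(p)
            else:
                gone.append(p)
                if callback is not None:
                    callback(p)
        alive = still_alive
        if alive:
            time.sleep(interval)
    return gone, alive

gone, alive = wait_procs_sketch([FakeProc(1, 1), FakeProc(2, 10**6)], timeout=0.1)
print([p.pid for p in gone], [p.pid for p in alive])  # -> [1] [2]
```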

Windows services
----------------

.. code-block:: python

    >>> list(psutil.win_service_iter())
    [<WindowsService(name='AeLookupSvc', display_name='Application Experience') at 38850096>,
     <WindowsService(name='ALG', display_name='Application Layer Gateway Service') at 38850128>,
     <WindowsService(name='APNMCP', display_name='Ask Update Service') at 38850160>,
     <WindowsService(name='AppIDSvc', display_name='Application Identity') at 38850192>,
     ...]
    >>> s = psutil.win_service_get('alg')
    >>> s.as_dict()
    {'binpath': 'C:\\Windows\\System32\\alg.exe',
     'description': 'Provides support for 3rd party protocol plug-ins for Internet Connection Sharing',
     'display_name': 'Application Layer Gateway Service',
     'name': 'alg',
     'pid': None,
     'start_type': 'manual',
     'status': 'stopped',
     'username': 'NT AUTHORITY\\LocalService'}
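
Because ``as_dict()`` flattens each service to a plain dict, filtering the full list reduces to a comprehension. A stdlib sketch over hand-written sample dicts (illustrative data, not live service output):

```python
# Sample dicts shaped like the as_dict() output above (hypothetical data).
services = [
    {"name": "alg", "status": "stopped", "start_type": "manual"},
    {"name": "AppIDSvc", "status": "running", "start_type": "manual"},
    {"name": "APNMCP", "status": "running", "start_type": "automatic"},
]

# Keep only the names of services currently running.
running = [s["name"] for s in services if s["status"] == "running"]
print(running)  # -> ['AppIDSvc', 'APNMCP']
```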

Projects using psutil
=====================

Here are some I find particularly interesting:

- https://github.com/google/grr
- https://github.com/facebook/osquery/
- https://github.com/nicolargo/glances
- https://github.com/aristocratos/bpytop
- https://github.com/Jahaja/psdash
- https://github.com/ajenti/ajenti
- https://github.com/home-assistant/home-assistant/

Ports
=====

- Go: https://github.com/shirou/gopsutil
- C: https://github.com/hamon-in/cpslib
- Rust: https://github.com/rust-psutil/rust-psutil
- Nim: https://github.com/johnscillieri/psutil-nim



# certifi/__init__.py
from .core import contents, where

__all__ = ["contents", "where"]
__version__ = "2024.08.30"
# certifi/__main__.py
import argparse

from certifi import contents, where

parser = argparse.ArgumentParser()
parser.add_argument("-c", "--contents", action="store_true")
args = parser.parse_args()

if args.contents:
    print(contents())
else:
    print(where())
# certifi/core.py
"""
certifi.py
~~~~~~~~~~

This module returns the installation location of cacert.pem or its contents.
"""
import sys
import atexit

def exit_cacert_ctx() -> None:
    _CACERT_CTX.__exit__(None, None, None)  # type: ignore[union-attr]


if sys.version_info >= (3, 11):

    from importlib.resources import as_file, files

    _CACERT_CTX = None
    _CACERT_PATH = None

    def where() -> str:
        # This is slightly terrible, but we want to delay extracting the file
        # in cases where we're inside of a zipimport situation until someone
        # actually calls where(), but we don't want to re-extract the file
        # on every call of where(), so we'll do it once then store it in a
        # global variable.
        global _CACERT_CTX
        global _CACERT_PATH
        if _CACERT_PATH is None:
            # This is slightly janky, the importlib.resources API wants you to
            # manage the cleanup of this file, so it doesn't actually return a
            # path, it returns a context manager that will give you the path
            # when you enter it and will do any cleanup when you leave it. In
            # the common case of not needing a temporary file, it will just
            # return the file system location and the __exit__() is a no-op.
            #
            # We also have to hold onto the actual context manager, because
            # it will do the cleanup whenever it gets garbage collected, so
            # we will also store that at the global level as well.
            _CACERT_CTX = as_file(files("certifi").joinpath("cacert.pem"))
            _CACERT_PATH = str(_CACERT_CTX.__enter__())
            atexit.register(exit_cacert_ctx)

        return _CACERT_PATH

    def contents() -> str:
        return files("certifi").joinpath("cacert.pem").read_text(encoding="ascii")

elif sys.version_info >= (3, 7):

    from importlib.resources import path as get_path, read_text

    _CACERT_CTX = None
    _CACERT_PATH = None

    def where() -> str:
        # This is slightly terrible, but we want to delay extracting the
        # file in cases where we're inside of a zipimport situation until
        # someone actually calls where(), but we don't want to re-extract
        # the file on every call of where(), so we'll do it once then store
        # it in a global variable.
        global _CACERT_CTX
        global _CACERT_PATH
        if _CACERT_PATH is None:
            # This is slightly janky, the importlib.resources API wants you
            # to manage the cleanup of this file, so it doesn't actually
            # return a path, it returns a context manager that will give
            # you the path when you enter it and will do any cleanup when
            # you leave it. In the common case of not needing a temporary
            # file, it will just return the file system location and the
            # __exit__() is a no-op.
            #
            # We also have to hold onto the actual context manager, because
            # it will do the cleanup whenever it gets garbage collected, so
            # we will also store that at the global level as well.
            _CACERT_CTX = get_path("certifi", "cacert.pem")
            _CACERT_PATH = str(_CACERT_CTX.__enter__())
            atexit.register(exit_cacert_ctx)

        return _CACERT_PATH

    def contents() -> str:
        return read_text("certifi", "cacert.pem", encoding="ascii")

else:
    import os
    import types
    from typing import Union

    Package = Union[types.ModuleType, str]
    Resource = Union[str, "os.PathLike"]

    # This fallback will work for Python versions prior to 3.7 that lack the
    # importlib.resources module but relies on the existing `where` function
    # so won't address issues with environments like PyOxidizer that don't set
    # __file__ on modules.
    def read_text(
        package: Package,
        resource: Resource,
        encoding: str = 'utf-8',
        errors: str = 'strict'
    ) -> str:
        with open(where(), encoding=encoding) as data:
            return data.read()

    # If we don't have importlib.resources, then we will just do the old logic
    # of assuming we're on the filesystem and munge the path directly.
    def where() -> str:
        f = os.path.dirname(__file__)

        return os.path.join(f, "cacert.pem")

    def contents() -> str:
        return read_text("certifi", "cacert.pem", encoding="ascii")
# certifi/cacert.pem
# Issuer: CN=GlobalSign Root CA O=GlobalSign nv-sa OU=Root CA
# Subject: CN=GlobalSign Root CA O=GlobalSign nv-sa OU=Root CA
# Label: "GlobalSign Root CA"
# Serial: 4835703278459707669005204
# MD5 Fingerprint: 3e:45:52:15:09:51:92:e1:b7:5d:37:9f:b1:87:29:8a
# SHA1 Fingerprint: b1:bc:96:8b:d4:f4:9d:62:2a:a8:9a:81:f2:15:01:52:a4:1d:82:9c
# SHA256 Fingerprint: eb:d4:10:40:e4:bb:3e:c7:42:c9:e3:81:d3:1e:f2:a4:1a:48:b6:68:5c:96:e7:ce:f3:c1:df:6c:d4:33:1c:99
-----BEGIN CERTIFICATE-----
MIIDdTCCAl2gAwIBAgILBAAAAAABFUtaw5QwDQYJKoZIhvcNAQEFBQAwVzELMAkG
A1UEBhMCQkUxGTAXBgNVBAoTEEdsb2JhbFNpZ24gbnYtc2ExEDAOBgNVBAsTB1Jv
b3QgQ0ExGzAZBgNVBAMTEkdsb2JhbFNpZ24gUm9vdCBDQTAeFw05ODA5MDExMjAw
MDBaFw0yODAxMjgxMjAwMDBaMFcxCzAJBgNVBAYTAkJFMRkwFwYDVQQKExBHbG9i
YWxTaWduIG52LXNhMRAwDgYDVQQLEwdSb290IENBMRswGQYDVQQDExJHbG9iYWxT
aWduIFJvb3QgQ0EwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDaDuaZ
jc6j40+Kfvvxi4Mla+pIH/EqsLmVEQS98GPR4mdmzxzdzxtIK+6NiY6arymAZavp
xy0Sy6scTHAHoT0KMM0VjU/43dSMUBUc71DuxC73/OlS8pF94G3VNTCOXkNz8kHp
1Wrjsok6Vjk4bwY8iGlbKk3Fp1S4bInMm/k8yuX9ifUSPJJ4ltbcdG6TRGHRjcdG
snUOhugZitVtbNV4FpWi6cgKOOvyJBNPc1STE4U6G7weNLWLBYy5d4ux2x8gkasJ
U26Qzns3dLlwR5EiUWMWea6xrkEmCMgZK9FGqkjWZCrXgzT/LCrBbBlDSgeF59N8
9iFo7+ryUp9/k5DPAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8E
BTADAQH/MB0GA1UdDgQWBBRge2YaRQ2XyolQL30EzTSo//z9SzANBgkqhkiG9w0B
AQUFAAOCAQEA1nPnfE920I2/7LqivjTFKDK1fPxsnCwrvQmeU79rXqoRSLblCKOz
yj1hTdNGCbM+w6DjY1Ub8rrvrTnhQ7k4o+YviiY776BQVvnGCv04zcQLcFGUl5gE
38NflNUVyRRBnMRddWQVDf9VMOyGj/8N7yy5Y0b2qvzfvGn9LhJIZJrglfCm7ymP
AbEVtQwdpf5pLGkkeB6zpxxxYu7KyJesF12KwvhHhm4qxFYxldBniYUr+WymXUad
DKqC5JlR3XC321Y9YeRq4VzW9v493kHMB65jUr9TU/Qr6cf9tveCX4XSQRjbgbME
HMUfpIBvFSDJ3gyICh3WZlXi/EjJKSZp4A==
-----END CERTIFICATE-----

# Issuer: CN=Entrust.net Certification Authority (2048) O=Entrust.net OU=www.entrust.net/CPS_2048 incorp. by ref. (limits liab.)/(c) 1999 Entrust.net Limited
# Subject: CN=Entrust.net Certification Authority (2048) O=Entrust.net OU=www.entrust.net/CPS_2048 incorp. by ref. (limits liab.)/(c) 1999 Entrust.net Limited
# Label: "Entrust.net Premium 2048 Secure Server CA"
# Serial: 946069240
# MD5 Fingerprint: ee:29:31:bc:32:7e:9a:e6:e8:b5:f7:51:b4:34:71:90
# SHA1 Fingerprint: 50:30:06:09:1d:97:d4:f5:ae:39:f7:cb:e7:92:7d:7d:65:2d:34:31
# SHA256 Fingerprint: 6d:c4:71:72:e0:1c:bc:b0:bf:62:58:0d:89:5f:e2:b8:ac:9a:d4:f8:73:80:1e:0c:10:b9:c8:37:d2:1e:b1:77
-----BEGIN CERTIFICATE-----
MIIEKjCCAxKgAwIBAgIEOGPe+DANBgkqhkiG9w0BAQUFADCBtDEUMBIGA1UEChML
RW50cnVzdC5uZXQxQDA+BgNVBAsUN3d3dy5lbnRydXN0Lm5ldC9DUFNfMjA0OCBp
bmNvcnAuIGJ5IHJlZi4gKGxpbWl0cyBsaWFiLikxJTAjBgNVBAsTHChjKSAxOTk5
IEVudHJ1c3QubmV0IExpbWl0ZWQxMzAxBgNVBAMTKkVudHJ1c3QubmV0IENlcnRp
ZmljYXRpb24gQXV0aG9yaXR5ICgyMDQ4KTAeFw05OTEyMjQxNzUwNTFaFw0yOTA3
MjQxNDE1MTJaMIG0MRQwEgYDVQQKEwtFbnRydXN0Lm5ldDFAMD4GA1UECxQ3d3d3
LmVudHJ1c3QubmV0L0NQU18yMDQ4IGluY29ycC4gYnkgcmVmLiAobGltaXRzIGxp
YWIuKTElMCMGA1UECxMcKGMpIDE5OTkgRW50cnVzdC5uZXQgTGltaXRlZDEzMDEG
A1UEAxMqRW50cnVzdC5uZXQgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkgKDIwNDgp
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArU1LqRKGsuqjIAcVFmQq
K0vRvwtKTY7tgHalZ7d4QMBzQshowNtTK91euHaYNZOLGp18EzoOH1u3Hs/lJBQe
sYGpjX24zGtLA/ECDNyrpUAkAH90lKGdCCmziAv1h3edVc3kw37XamSrhRSGlVuX
MlBvPci6Zgzj/L24ScF2iUkZ/cCovYmjZy/Gn7xxGWC4LeksyZB2ZnuU4q941mVT
XTzWnLLPKQP5L6RQstRIzgUyVYr9smRMDuSYB3Xbf9+5CFVghTAp+XtIpGmG4zU/
HoZdenoVve8AjhUiVBcAkCaTvA5JaJG/+EfTnZVCwQ5N328mz8MYIWJmQ3DW1cAH
4QIDAQABo0IwQDAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNV
HQ4EFgQUVeSB0RGAvtiJuQijMfmhJAkWuXAwDQYJKoZIhvcNAQEFBQADggEBADub
j1abMOdTmXx6eadNl9cZlZD7Bh/KM3xGY4+WZiT6QBshJ8rmcnPyT/4xmf3IDExo
U8aAghOY+rat2l098c5u9hURlIIM7j+VrxGrD9cv3h8Dj1csHsm7mhpElesYT6Yf
zX1XEC+bBAlahLVu2B064dae0Wx5XnkcFMXj0EyTO2U87d89vqbllRrDtRnDvV5b
u/8j72gZyxKTJ1wDLW8w0B62GqzeWvfRqqgnpv55gcR5mTNXuhKwqeBCbJPKVt7+
bYQLCIt+jerXmCHG8+c8eS9enNFMFY3h7CI3zJpDC5fcgJCNs2ebb0gIFVbPv/Er
fF6adulZkMV8gzURZVE=
-----END CERTIFICATE-----

# Issuer: CN=Baltimore CyberTrust Root O=Baltimore OU=CyberTrust
# Subject: CN=Baltimore CyberTrust Root O=Baltimore OU=CyberTrust
# Label: "Baltimore CyberTrust Root"
# Serial: 33554617
# MD5 Fingerprint: ac:b6:94:a5:9c:17:e0:d7:91:52:9b:b1:97:06:a6:e4
# SHA1 Fingerprint: d4:de:20:d0:5e:66:fc:53:fe:1a:50:88:2c:78:db:28:52:ca:e4:74
# SHA256 Fingerprint: 16:af:57:a9:f6:76:b0:ab:12:60:95:aa:5e:ba:de:f2:2a:b3:11:19:d6:44:ac:95:cd:4b:93:db:f3:f2:6a:eb
-----BEGIN CERTIFICATE-----
MIIDdzCCAl+gAwIBAgIEAgAAuTANBgkqhkiG9w0BAQUFADBaMQswCQYDVQQGEwJJ
RTESMBAGA1UEChMJQmFsdGltb3JlMRMwEQYDVQQLEwpDeWJlclRydXN0MSIwIAYD
VQQDExlCYWx0aW1vcmUgQ3liZXJUcnVzdCBSb290MB4XDTAwMDUxMjE4NDYwMFoX
DTI1MDUxMjIzNTkwMFowWjELMAkGA1UEBhMCSUUxEjAQBgNVBAoTCUJhbHRpbW9y
ZTETMBEGA1UECxMKQ3liZXJUcnVzdDEiMCAGA1UEAxMZQmFsdGltb3JlIEN5YmVy
VHJ1c3QgUm9vdDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAKMEuyKr
mD1X6CZymrV51Cni4eiVgLGw41uOKymaZN+hXe2wCQVt2yguzmKiYv60iNoS6zjr
IZ3AQSsBUnuId9Mcj8e6uYi1agnnc+gRQKfRzMpijS3ljwumUNKoUMMo6vWrJYeK
mpYcqWe4PwzV9/lSEy/CG9VwcPCPwBLKBsua4dnKM3p31vjsufFoREJIE9LAwqSu
XmD+tqYF/LTdB1kC1FkYmGP1pWPgkAx9XbIGevOF6uvUA65ehD5f/xXtabz5OTZy
dc93Uk3zyZAsuT3lySNTPx8kmCFcB5kpvcY67Oduhjprl3RjM71oGDHweI12v/ye
jl0qhqdNkNwnGjkCAwEAAaNFMEMwHQYDVR0OBBYEFOWdWTCCR1jMrPoIVDaGezq1
BE3wMBIGA1UdEwEB/wQIMAYBAf8CAQMwDgYDVR0PAQH/BAQDAgEGMA0GCSqGSIb3
DQEBBQUAA4IBAQCFDF2O5G9RaEIFoN27TyclhAO992T9Ldcw46QQF+vaKSm2eT92
9hkTI7gQCvlYpNRhcL0EYWoSihfVCr3FvDB81ukMJY2GQE/szKN+OMY3EU/t3Wgx
jkzSswF07r51XgdIGn9w/xZchMB5hbgF/X++ZRGjD8ACtPhSNzkE1akxehi/oCr0
Epn3o0WC4zxe9Z2etciefC7IpJ5OCBRLbf1wbWsaY71k5h+3zvDyny67G7fyUIhz
ksLi4xaNmjICq44Y3ekQEe5+NauQrz4wlHrQMz2nZQ/1/I6eYs9HRCwBXbsdtTLS
R9I4LtD+gdwyah617jzV/OeBHRnDJELqYzmp
-----END CERTIFICATE-----

# Issuer: CN=Entrust Root Certification Authority O=Entrust, Inc. OU=www.entrust.net/CPS is incorporated by reference/(c) 2006 Entrust, Inc.
# Subject: CN=Entrust Root Certification Authority O=Entrust, Inc. OU=www.entrust.net/CPS is incorporated by reference/(c) 2006 Entrust, Inc.
# Label: "Entrust Root Certification Authority"
# Serial: 1164660820
# MD5 Fingerprint: d6:a5:c3:ed:5d:dd:3e:00:c1:3d:87:92:1f:1d:3f:e4
# SHA1 Fingerprint: b3:1e:b1:b7:40:e3:6c:84:02:da:dc:37:d4:4d:f5:d4:67:49:52:f9
# SHA256 Fingerprint: 73:c1:76:43:4f:1b:c6:d5:ad:f4:5b:0e:76:e7:27:28:7c:8d:e5:76:16:c1:e6:e6:14:1a:2b:2c:bc:7d:8e:4c
-----BEGIN CERTIFICATE-----
MIIEkTCCA3mgAwIBAgIERWtQVDANBgkqhkiG9w0BAQUFADCBsDELMAkGA1UEBhMC
VVMxFjAUBgNVBAoTDUVudHJ1c3QsIEluYy4xOTA3BgNVBAsTMHd3dy5lbnRydXN0
Lm5ldC9DUFMgaXMgaW5jb3Jwb3JhdGVkIGJ5IHJlZmVyZW5jZTEfMB0GA1UECxMW
KGMpIDIwMDYgRW50cnVzdCwgSW5jLjEtMCsGA1UEAxMkRW50cnVzdCBSb290IENl
cnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTA2MTEyNzIwMjM0MloXDTI2MTEyNzIw
NTM0MlowgbAxCzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1FbnRydXN0LCBJbmMuMTkw
NwYDVQQLEzB3d3cuZW50cnVzdC5uZXQvQ1BTIGlzIGluY29ycG9yYXRlZCBieSBy
ZWZlcmVuY2UxHzAdBgNVBAsTFihjKSAyMDA2IEVudHJ1c3QsIEluYy4xLTArBgNV
BAMTJEVudHJ1c3QgUm9vdCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTCCASIwDQYJ
KoZIhvcNAQEBBQADggEPADCCAQoCggEBALaVtkNC+sZtKm9I35RMOVcF7sN5EUFo
Nu3s/poBj6E4KPz3EEZmLk0eGrEaTsbRwJWIsMn/MYszA9u3g3s+IIRe7bJWKKf4
4LlAcTfFy0cOlypowCKVYhXbR9n10Cv/gkvJrT7eTNuQgFA/CYqEAOwwCj0Yzfv9
KlmaI5UXLEWeH25DeW0MXJj+SKfFI0dcXv1u5x609mhF0YaDW6KKjbHjKYD+JXGI
rb68j6xSlkuqUY3kEzEZ6E5Nn9uss2rVvDlUccp6en+Q3X0dgNmBu1kmwhH+5pPi
94DkZfs0Nw4pgHBNrziGLp5/V6+eF67rHMsoIV+2HNjnogQi+dPa2MsCAwEAAaOB
sDCBrTAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zArBgNVHRAEJDAi
gA8yMDA2MTEyNzIwMjM0MlqBDzIwMjYxMTI3MjA1MzQyWjAfBgNVHSMEGDAWgBRo
kORnpKZTgMeGZqTx90tD+4S9bTAdBgNVHQ4EFgQUaJDkZ6SmU4DHhmak8fdLQ/uE
vW0wHQYJKoZIhvZ9B0EABBAwDhsIVjcuMTo0LjADAgSQMA0GCSqGSIb3DQEBBQUA
A4IBAQCT1DCw1wMgKtD5Y+iRDAUgqV8ZyntyTtSx29CW+1RaGSwMCPeyvIWonX9t
O1KzKtvn1ISMY/YPyyYBkVBs9F8U4pN0wBOeMDpQ47RgxRzwIkSNcUesyBrJ6Zua
AGAT/3B+XxFNSRuzFVJ7yVTav52Vr2ua2J7p8eRDjeIRRDq/r72DQnNSi6q7pynP
9WQcCk3RvKqsnyrQ/39/2n3qse0wJcGE2jTSW3iDVuycNsMm4hH2Z0kdkquM++v/
eu6FSqdQgPCnXEqULl8FmTxSQeDNtGPPAUO6nIPcj2A781q0tHuu2guQOHXvgR1m
0vdXcDazv/wor3ElhVsT/h5/WrQ8
-----END CERTIFICATE-----

# Issuer: CN=AAA Certificate Services O=Comodo CA Limited
# Subject: CN=AAA Certificate Services O=Comodo CA Limited
# Label: "Comodo AAA Services root"
# Serial: 1
# MD5 Fingerprint: 49:79:04:b0:eb:87:19:ac:47:b0:bc:11:51:9b:74:d0
# SHA1 Fingerprint: d1:eb:23:a4:6d:17:d6:8f:d9:25:64:c2:f1:f1:60:17:64:d8:e3:49
# SHA256 Fingerprint: d7:a7:a0:fb:5d:7e:27:31:d7:71:e9:48:4e:bc:de:f7:1d:5f:0c:3e:0a:29:48:78:2b:c8:3e:e0:ea:69:9e:f4
-----BEGIN CERTIFICATE-----
MIIEMjCCAxqgAwIBAgIBATANBgkqhkiG9w0BAQUFADB7MQswCQYDVQQGEwJHQjEb
MBkGA1UECAwSR3JlYXRlciBNYW5jaGVzdGVyMRAwDgYDVQQHDAdTYWxmb3JkMRow
GAYDVQQKDBFDb21vZG8gQ0EgTGltaXRlZDEhMB8GA1UEAwwYQUFBIENlcnRpZmlj
YXRlIFNlcnZpY2VzMB4XDTA0MDEwMTAwMDAwMFoXDTI4MTIzMTIzNTk1OVowezEL
MAkGA1UEBhMCR0IxGzAZBgNVBAgMEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4GA1UE
BwwHU2FsZm9yZDEaMBgGA1UECgwRQ29tb2RvIENBIExpbWl0ZWQxITAfBgNVBAMM
GEFBQSBDZXJ0aWZpY2F0ZSBTZXJ2aWNlczCCASIwDQYJKoZIhvcNAQEBBQADggEP
ADCCAQoCggEBAL5AnfRu4ep2hxxNRUSOvkbIgwadwSr+GB+O5AL686tdUIoWMQua
BtDFcCLNSS1UY8y2bmhGC1Pqy0wkwLxyTurxFa70VJoSCsN6sjNg4tqJVfMiWPPe
3M/vg4aijJRPn2jymJBGhCfHdr/jzDUsi14HZGWCwEiwqJH5YZ92IFCokcdmtet4
YgNW8IoaE+oxox6gmf049vYnMlhvB/VruPsUK6+3qszWY19zjNoFmag4qMsXeDZR
rOme9Hg6jc8P2ULimAyrL58OAd7vn5lJ8S3frHRNG5i1R8XlKdH5kBjHYpy+g8cm
ez6KJcfA3Z3mNWgQIJ2P2N7Sw4ScDV7oL8kCAwEAAaOBwDCBvTAdBgNVHQ4EFgQU
oBEKIz6W8Qfs4q8p74Klf9AwpLQwDgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQF
MAMBAf8wewYDVR0fBHQwcjA4oDagNIYyaHR0cDovL2NybC5jb21vZG9jYS5jb20v
QUFBQ2VydGlmaWNhdGVTZXJ2aWNlcy5jcmwwNqA0oDKGMGh0dHA6Ly9jcmwuY29t
b2RvLm5ldC9BQUFDZXJ0aWZpY2F0ZVNlcnZpY2VzLmNybDANBgkqhkiG9w0BAQUF
AAOCAQEACFb8AvCb6P+k+tZ7xkSAzk/ExfYAWMymtrwUSWgEdujm7l3sAg9g1o1Q
GE8mTgHj5rCl7r+8dFRBv/38ErjHT1r0iWAFf2C3BUrz9vHCv8S5dIa2LX1rzNLz
Rt0vxuBqw8M0Ayx9lt1awg6nCpnBBYurDC/zXDrPbDdVCYfeU0BsWO/8tqtlbgT2
G9w84FoVxp7Z8VlIMCFlA2zs6SFz7JsDoeA3raAVGI/6ugLOpyypEBMs1OUIJqsi
l2D4kF501KKaU73yqWjgom7C12yxow+ev+to51byrvLjKzg6CYG1a4XXvi3tPxq3
smPi9WIsgtRqAEFQ8TmDn5XpNpaYbg==
-----END CERTIFICATE-----

# Issuer: CN=QuoVadis Root CA 2 O=QuoVadis Limited
# Subject: CN=QuoVadis Root CA 2 O=QuoVadis Limited
# Label: "QuoVadis Root CA 2"
# Serial: 1289
# MD5 Fingerprint: 5e:39:7b:dd:f8:ba:ec:82:e9:ac:62:ba:0c:54:00:2b
# SHA1 Fingerprint: ca:3a:fb:cf:12:40:36:4b:44:b2:16:20:88:80:48:39:19:93:7c:f7
# SHA256 Fingerprint: 85:a0:dd:7d:d7:20:ad:b7:ff:05:f8:3d:54:2b:20:9d:c7:ff:45:28:f7:d6:77:b1:83:89:fe:a5:e5:c4:9e:86
-----BEGIN CERTIFICATE-----
MIIFtzCCA5+gAwIBAgICBQkwDQYJKoZIhvcNAQEFBQAwRTELMAkGA1UEBhMCQk0x
GTAXBgNVBAoTEFF1b1ZhZGlzIExpbWl0ZWQxGzAZBgNVBAMTElF1b1ZhZGlzIFJv
b3QgQ0EgMjAeFw0wNjExMjQxODI3MDBaFw0zMTExMjQxODIzMzNaMEUxCzAJBgNV
BAYTAkJNMRkwFwYDVQQKExBRdW9WYWRpcyBMaW1pdGVkMRswGQYDVQQDExJRdW9W
YWRpcyBSb290IENBIDIwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCa
GMpLlA0ALa8DKYrwD4HIrkwZhR0In6spRIXzL4GtMh6QRr+jhiYaHv5+HBg6XJxg
Fyo6dIMzMH1hVBHL7avg5tKifvVrbxi3Cgst/ek+7wrGsxDp3MJGF/hd/aTa/55J
WpzmM+Yklvc/ulsrHHo1wtZn/qtmUIttKGAr79dgw8eTvI02kfN/+NsRE8Scd3bB
rrcCaoF6qUWD4gXmuVbBlDePSHFjIuwXZQeVikvfj8ZaCuWw419eaxGrDPmF60Tp
+ARz8un+XJiM9XOva7R+zdRcAitMOeGylZUtQofX1bOQQ7dsE/He3fbE+Ik/0XX1
ksOR1YqI0JDs3G3eicJlcZaLDQP9nL9bFqyS2+r+eXyt66/3FsvbzSUr5R/7mp/i
Ucw6UwxI5g69ybR2BlLmEROFcmMDBOAENisgGQLodKcftslWZvB1JdxnwQ5hYIiz
PtGo/KPaHbDRsSNU30R2be1B2MGyIrZTHN81Hdyhdyox5C315eXbyOD/5YDXC2Og
/zOhD7osFRXql7PSorW+8oyWHhqPHWykYTe5hnMz15eWniN9gqRMgeKh0bpnX5UH
oycR7hYQe7xFSkyyBNKr79X9DFHOUGoIMfmR2gyPZFwDwzqLID9ujWc9Otb+fVuI
yV77zGHcizN300QyNQliBJIWENieJ0f7OyHj+OsdWwIDAQABo4GwMIGtMA8GA1Ud
EwEB/wQFMAMBAf8wCwYDVR0PBAQDAgEGMB0GA1UdDgQWBBQahGK8SEwzJQTU7tD2
A8QZRtGUazBuBgNVHSMEZzBlgBQahGK8SEwzJQTU7tD2A8QZRtGUa6FJpEcwRTEL
MAkGA1UEBhMCQk0xGTAXBgNVBAoTEFF1b1ZhZGlzIExpbWl0ZWQxGzAZBgNVBAMT
ElF1b1ZhZGlzIFJvb3QgQ0EgMoICBQkwDQYJKoZIhvcNAQEFBQADggIBAD4KFk2f
BluornFdLwUvZ+YTRYPENvbzwCYMDbVHZF34tHLJRqUDGCdViXh9duqWNIAXINzn
g/iN/Ae42l9NLmeyhP3ZRPx3UIHmfLTJDQtyU/h2BwdBR5YM++CCJpNVjP4iH2Bl
fF/nJrP3MpCYUNQ3cVX2kiF495V5+vgtJodmVjB3pjd4M1IQWK4/YY7yarHvGH5K
WWPKjaJW1acvvFYfzznB4vsKqBUsfU16Y8Zsl0Q80m/DShcK+JDSV6IZUaUtl0Ha
B0+pUNqQjZRG4T7wlP0QADj1O+hA4bRuVhogzG9Yje0uRY/W6ZM/57Es3zrWIozc
hLsib9D45MY56QSIPMO661V6bYCZJPVsAfv4l7CUW+v90m/xd2gNNWQjrLhVoQPR
TUIZ3Ph1WVaj+ahJefivDrkRoHy3au000LYmYjgahwz46P0u05B/B5EqHdZ+XIWD
mbA4CD/pXvk1B+TJYm5Xf6dQlfe6yJvmjqIBxdZmv3lh8zwc4bmCXF2gw+nYSL0Z
ohEUGW6yhhtoPkg3Goi3XZZenMfvJ2II4pEZXNLxId26F0KCl3GBUzGpn/Z9Yr9y
4aOTHcyKJloJONDO1w2AFrR4pTqHTI2KpdVGl/IsELm8VCLAAVBpQ570su9t+Oza
8eOx79+Rj1QqCyXBJhnEUhAFZdWCEOrCMc0u
-----END CERTIFICATE-----

# Issuer: CN=QuoVadis Root CA 3 O=QuoVadis Limited
# Subject: CN=QuoVadis Root CA 3 O=QuoVadis Limited
# Label: "QuoVadis Root CA 3"
# Serial: 1478
# MD5 Fingerprint: 31:85:3c:62:94:97:63:b9:aa:fd:89:4e:af:6f:e0:cf
# SHA1 Fingerprint: 1f:49:14:f7:d8:74:95:1d:dd:ae:02:c0:be:fd:3a:2d:82:75:51:85
# SHA256 Fingerprint: 18:f1:fc:7f:20:5d:f8:ad:dd:eb:7f:e0:07:dd:57:e3:af:37:5a:9c:4d:8d:73:54:6b:f4:f1:fe:d1:e1:8d:35
-----BEGIN CERTIFICATE-----
MIIGnTCCBIWgAwIBAgICBcYwDQYJKoZIhvcNAQEFBQAwRTELMAkGA1UEBhMCQk0x
GTAXBgNVBAoTEFF1b1ZhZGlzIExpbWl0ZWQxGzAZBgNVBAMTElF1b1ZhZGlzIFJv
b3QgQ0EgMzAeFw0wNjExMjQxOTExMjNaFw0zMTExMjQxOTA2NDRaMEUxCzAJBgNV
BAYTAkJNMRkwFwYDVQQKExBRdW9WYWRpcyBMaW1pdGVkMRswGQYDVQQDExJRdW9W
YWRpcyBSb290IENBIDMwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQDM
V0IWVJzmmNPTTe7+7cefQzlKZbPoFog02w1ZkXTPkrgEQK0CSzGrvI2RaNggDhoB
4hp7Thdd4oq3P5kazethq8Jlph+3t723j/z9cI8LoGe+AaJZz3HmDyl2/7FWeUUr
H556VOijKTVopAFPD6QuN+8bv+OPEKhyq1hX51SGyMnzW9os2l2ObjyjPtr7guXd
8lyyBTNvijbO0BNO/79KDDRMpsMhvVAEVeuxu537RR5kFd5VAYwCdrXLoT9Cabwv
vWhDFlaJKjdhkf2mrk7AyxRllDdLkgbvBNDInIjbC3uBr7E9KsRlOni27tyAsdLT
mZw67mtaa7ONt9XOnMK+pUsvFrGeaDsGb659n/je7Mwpp5ijJUMv7/FfJuGITfhe
btfZFG4ZM2mnO4SJk8RTVROhUXhA+LjJou57ulJCg54U7QVSWllWp5f8nT8KKdjc
T5EOE7zelaTfi5m+rJsziO+1ga8bxiJTyPbH7pcUsMV8eFLI8M5ud2CEpukqdiDt
WAEXMJPpGovgc2PZapKUSU60rUqFxKMiMPwJ7Wgic6aIDFUhWMXhOp8q3crhkODZ
c6tsgLjoC2SToJyMGf+z0gzskSaHirOi4XCPLArlzW1oUevaPwV/izLmE1xr/l9A
4iLItLRkT9a6fUg+qGkM17uGcclzuD87nSVL2v9A6wIDAQABo4IBlTCCAZEwDwYD
VR0TAQH/BAUwAwEB/zCB4QYDVR0gBIHZMIHWMIHTBgkrBgEEAb5YAAMwgcUwgZMG
CCsGAQUFBwICMIGGGoGDQW55IHVzZSBvZiB0aGlzIENlcnRpZmljYXRlIGNvbnN0
aXR1dGVzIGFjY2VwdGFuY2Ugb2YgdGhlIFF1b1ZhZGlzIFJvb3QgQ0EgMyBDZXJ0
aWZpY2F0ZSBQb2xpY3kgLyBDZXJ0aWZpY2F0aW9uIFByYWN0aWNlIFN0YXRlbWVu
dC4wLQYIKwYBBQUHAgEWIWh0dHA6Ly93d3cucXVvdmFkaXNnbG9iYWwuY29tL2Nw
czALBgNVHQ8EBAMCAQYwHQYDVR0OBBYEFPLAE+CCQz777i9nMpY1XNu4ywLQMG4G
A1UdIwRnMGWAFPLAE+CCQz777i9nMpY1XNu4ywLQoUmkRzBFMQswCQYDVQQGEwJC
TTEZMBcGA1UEChMQUXVvVmFkaXMgTGltaXRlZDEbMBkGA1UEAxMSUXVvVmFkaXMg
Um9vdCBDQSAzggIFxjANBgkqhkiG9w0BAQUFAAOCAgEAT62gLEz6wPJv92ZVqyM0
7ucp2sNbtrCD2dDQ4iH782CnO11gUyeim/YIIirnv6By5ZwkajGxkHon24QRiSem
d1o417+shvzuXYO8BsbRd2sPbSQvS3pspweWyuOEn62Iix2rFo1bZhfZFvSLgNLd
+LJ2w/w4E6oM3kJpK27zPOuAJ9v1pkQNn1pVWQvVDVJIxa6f8i+AxeoyUDUSly7B
4f/xI4hROJ/yZlZ25w9Rl6VSDE1JUZU2Pb+iSwwQHYaZTKrzchGT5Or2m9qoXadN
t54CrnMAyNojA+j56hl0YgCUyyIgvpSnWbWCar6ZeXqp8kokUvd0/bpO5qgdAm6x
DYBEwa7TIzdfu4V8K5Iu6H6li92Z4b8nby1dqnuH/grdS/yO9SbkbnBCbjPsMZ57
k8HkyWkaPcBrTiJt7qtYTcbQQcEr6k8Sh17rRdhs9ZgC06DYVYoGmRmioHfRMJ6s
zHXug/WwYjnPbFfiTNKRCw51KBuav/0aQ/HKd/s7j2G4aSgWQgRecCocIdiP4b0j
Wy10QJLZYxkNc91pvGJHvOB0K7Lrfb5BG7XARsWhIstfTsEokt4YutUqKLsRixeT
mJlglFwjz1onl14LBQaTNx47aTbrqZ5hHY8y2o4M1nQ+ewkk2gF3R8Q7zTSMmfXK
4SVhM7JZG+Ju1zdXtg2pEto=
-----END CERTIFICATE-----

# Issuer: CN=XRamp Global Certification Authority O=XRamp Security Services Inc OU=www.xrampsecurity.com
# Subject: CN=XRamp Global Certification Authority O=XRamp Security Services Inc OU=www.xrampsecurity.com
# Label: "XRamp Global CA Root"
# Serial: 107108908803651509692980124233745014957
# MD5 Fingerprint: a1:0b:44:b3:ca:10:d8:00:6e:9d:0f:d8:0f:92:0a:d1
# SHA1 Fingerprint: b8:01:86:d1:eb:9c:86:a5:41:04:cf:30:54:f3:4c:52:b7:e5:58:c6
# SHA256 Fingerprint: ce:cd:dc:90:50:99:d8:da:df:c5:b1:d2:09:b7:37:cb:e2:c1:8c:fb:2c:10:c0:ff:0b:cf:0d:32:86:fc:1a:a2
-----BEGIN CERTIFICATE-----
MIIEMDCCAxigAwIBAgIQUJRs7Bjq1ZxN1ZfvdY+grTANBgkqhkiG9w0BAQUFADCB
gjELMAkGA1UEBhMCVVMxHjAcBgNVBAsTFXd3dy54cmFtcHNlY3VyaXR5LmNvbTEk
MCIGA1UEChMbWFJhbXAgU2VjdXJpdHkgU2VydmljZXMgSW5jMS0wKwYDVQQDEyRY
UmFtcCBHbG9iYWwgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMDQxMTAxMTcx
NDA0WhcNMzUwMTAxMDUzNzE5WjCBgjELMAkGA1UEBhMCVVMxHjAcBgNVBAsTFXd3
dy54cmFtcHNlY3VyaXR5LmNvbTEkMCIGA1UEChMbWFJhbXAgU2VjdXJpdHkgU2Vy
dmljZXMgSW5jMS0wKwYDVQQDEyRYUmFtcCBHbG9iYWwgQ2VydGlmaWNhdGlvbiBB
dXRob3JpdHkwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCYJB69FbS6
38eMpSe2OAtp87ZOqCwuIR1cRN8hXX4jdP5efrRKt6atH67gBhbim1vZZ3RrXYCP
KZ2GG9mcDZhtdhAoWORlsH9KmHmf4MMxfoArtYzAQDsRhtDLooY2YKTVMIJt2W7Q
DxIEM5dfT2Fa8OT5kavnHTu86M/0ay00fOJIYRyO82FEzG+gSqmUsE3a56k0enI4
qEHMPJQRfevIpoy3hsvKMzvZPTeL+3o+hiznc9cKV6xkmxnr9A8ECIqsAxcZZPRa
JSKNNCyy9mgdEm3Tih4U2sSPpuIjhdV6Db1q4Ons7Be7QhtnqiXtRYMh/MHJfNVi
PvryxS3T/dRlAgMBAAGjgZ8wgZwwEwYJKwYBBAGCNxQCBAYeBABDAEEwCwYDVR0P
BAQDAgGGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFMZPoj0GY4QJnM5i5ASs
jVy16bYbMDYGA1UdHwQvMC0wK6ApoCeGJWh0dHA6Ly9jcmwueHJhbXBzZWN1cml0
eS5jb20vWEdDQS5jcmwwEAYJKwYBBAGCNxUBBAMCAQEwDQYJKoZIhvcNAQEFBQAD
ggEBAJEVOQMBG2f7Shz5CmBbodpNl2L5JFMn14JkTpAuw0kbK5rc/Kh4ZzXxHfAR
vbdI4xD2Dd8/0sm2qlWkSLoC295ZLhVbO50WfUfXN+pfTXYSNrsf16GBBEYgoyxt
qZ4Bfj8pzgCT3/3JknOJiWSe5yvkHJEs0rnOfc5vMZnT5r7SHpDwCRR5XCOrTdLa
IR9NmXmd4c8nnxCbHIgNsIpkQTG4DmyQJKSbXHGPurt+HBvbaoAPIbzp26a3QPSy
i6mx5O+aGtA9aZnuqCij4Tyz8LIRnM98QObd50N9otg6tamN8jSZxNQQ4Qb9CYQQ
O+7ETPTsJ3xCwnR8gooJybQDJbw=
-----END CERTIFICATE-----

# Issuer: O=The Go Daddy Group, Inc. OU=Go Daddy Class 2 Certification Authority
# Subject: O=The Go Daddy Group, Inc. OU=Go Daddy Class 2 Certification Authority
# Label: "Go Daddy Class 2 CA"
# Serial: 0
# MD5 Fingerprint: 91:de:06:25:ab:da:fd:32:17:0c:bb:25:17:2a:84:67
# SHA1 Fingerprint: 27:96:ba:e6:3f:18:01:e2:77:26:1b:a0:d7:77:70:02:8f:20:ee:e4
# SHA256 Fingerprint: c3:84:6b:f2:4b:9e:93:ca:64:27:4c:0e:c6:7c:1e:cc:5e:02:4f:fc:ac:d2:d7:40:19:35:0e:81:fe:54:6a:e4
-----BEGIN CERTIFICATE-----
MIIEADCCAuigAwIBAgIBADANBgkqhkiG9w0BAQUFADBjMQswCQYDVQQGEwJVUzEh
MB8GA1UEChMYVGhlIEdvIERhZGR5IEdyb3VwLCBJbmMuMTEwLwYDVQQLEyhHbyBE
YWRkeSBDbGFzcyAyIENlcnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTA0MDYyOTE3
MDYyMFoXDTM0MDYyOTE3MDYyMFowYzELMAkGA1UEBhMCVVMxITAfBgNVBAoTGFRo
ZSBHbyBEYWRkeSBHcm91cCwgSW5jLjExMC8GA1UECxMoR28gRGFkZHkgQ2xhc3Mg
MiBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTCCASAwDQYJKoZIhvcNAQEBBQADggEN
ADCCAQgCggEBAN6d1+pXGEmhW+vXX0iG6r7d/+TvZxz0ZWizV3GgXne77ZtJ6XCA
PVYYYwhv2vLM0D9/AlQiVBDYsoHUwHU9S3/Hd8M+eKsaA7Ugay9qK7HFiH7Eux6w
wdhFJ2+qN1j3hybX2C32qRe3H3I2TqYXP2WYktsqbl2i/ojgC95/5Y0V4evLOtXi
EqITLdiOr18SPaAIBQi2XKVlOARFmR6jYGB0xUGlcmIbYsUfb18aQr4CUWWoriMY
avx4A6lNf4DD+qta/KFApMoZFv6yyO9ecw3ud72a9nmYvLEHZ6IVDd2gWMZEewo+
YihfukEHU1jPEX44dMX4/7VpkI+EdOqXG68CAQOjgcAwgb0wHQYDVR0OBBYEFNLE
sNKR1EwRcbNhyz2h/t2oatTjMIGNBgNVHSMEgYUwgYKAFNLEsNKR1EwRcbNhyz2h
/t2oatTjoWekZTBjMQswCQYDVQQGEwJVUzEhMB8GA1UEChMYVGhlIEdvIERhZGR5
IEdyb3VwLCBJbmMuMTEwLwYDVQQLEyhHbyBEYWRkeSBDbGFzcyAyIENlcnRpZmlj
YXRpb24gQXV0aG9yaXR5ggEAMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQAD
ggEBADJL87LKPpH8EsahB4yOd6AzBhRckB4Y9wimPQoZ+YeAEW5p5JYXMP80kWNy
OO7MHAGjHZQopDH2esRU1/blMVgDoszOYtuURXO1v0XJJLXVggKtI3lpjbi2Tc7P
TMozI+gciKqdi0FuFskg5YmezTvacPd+mSYgFFQlq25zheabIZ0KbIIOqPjCDPoQ
HmyW74cNxA9hi63ugyuV+I6ShHI56yDqg+2DzZduCLzrTia2cyvk0/ZM/iZx4mER
dEr/VxqHD3VILs9RaRegAhJhldXRQLIQTO7ErBBDpqWeCtWVYpoNz4iCxTIM5Cuf
ReYNnyicsbkqWletNw+vHX/bvZ8=
-----END CERTIFICATE-----

# Issuer: O=Starfield Technologies, Inc. OU=Starfield Class 2 Certification Authority
# Subject: O=Starfield Technologies, Inc. OU=Starfield Class 2 Certification Authority
# Label: "Starfield Class 2 CA"
# Serial: 0
# MD5 Fingerprint: 32:4a:4b:bb:c8:63:69:9b:be:74:9a:c6:dd:1d:46:24
# SHA1 Fingerprint: ad:7e:1c:28:b0:64:ef:8f:60:03:40:20:14:c3:d0:e3:37:0e:b5:8a
# SHA256 Fingerprint: 14:65:fa:20:53:97:b8:76:fa:a6:f0:a9:95:8e:55:90:e4:0f:cc:7f:aa:4f:b7:c2:c8:67:75:21:fb:5f:b6:58
-----BEGIN CERTIFICATE-----
MIIEDzCCAvegAwIBAgIBADANBgkqhkiG9w0BAQUFADBoMQswCQYDVQQGEwJVUzEl
MCMGA1UEChMcU3RhcmZpZWxkIFRlY2hub2xvZ2llcywgSW5jLjEyMDAGA1UECxMp
U3RhcmZpZWxkIENsYXNzIDIgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMDQw
NjI5MTczOTE2WhcNMzQwNjI5MTczOTE2WjBoMQswCQYDVQQGEwJVUzElMCMGA1UE
ChMcU3RhcmZpZWxkIFRlY2hub2xvZ2llcywgSW5jLjEyMDAGA1UECxMpU3RhcmZp
ZWxkIENsYXNzIDIgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwggEgMA0GCSqGSIb3
DQEBAQUAA4IBDQAwggEIAoIBAQC3Msj+6XGmBIWtDBFk385N78gDGIc/oav7PKaf
8MOh2tTYbitTkPskpD6E8J7oX+zlJ0T1KKY/e97gKvDIr1MvnsoFAZMej2YcOadN
+lq2cwQlZut3f+dZxkqZJRRU6ybH838Z1TBwj6+wRir/resp7defqgSHo9T5iaU0
X9tDkYI22WY8sbi5gv2cOj4QyDvvBmVmepsZGD3/cVE8MC5fvj13c7JdBmzDI1aa
K4UmkhynArPkPw2vCHmCuDY96pzTNbO8acr1zJ3o/WSNF4Azbl5KXZnJHoe0nRrA
1W4TNSNe35tfPe/W93bC6j67eA0cQmdrBNj41tpvi/JEoAGrAgEDo4HFMIHCMB0G
A1UdDgQWBBS/X7fRzt0fhvRbVazc1xDCDqmI5zCBkgYDVR0jBIGKMIGHgBS/X7fR
zt0fhvRbVazc1xDCDqmI56FspGowaDELMAkGA1UEBhMCVVMxJTAjBgNVBAoTHFN0
YXJmaWVsZCBUZWNobm9sb2dpZXMsIEluYy4xMjAwBgNVBAsTKVN0YXJmaWVsZCBD
bGFzcyAyIENlcnRpZmljYXRpb24gQXV0aG9yaXR5ggEAMAwGA1UdEwQFMAMBAf8w
DQYJKoZIhvcNAQEFBQADggEBAAWdP4id0ckaVaGsafPzWdqbAYcaT1epoXkJKtv3
L7IezMdeatiDh6GX70k1PncGQVhiv45YuApnP+yz3SFmH8lU+nLMPUxA2IGvd56D
eruix/U0F47ZEUD0/CwqTRV/p2JdLiXTAAsgGh1o+Re49L2L7ShZ3U0WixeDyLJl
xy16paq8U4Zt3VekyvggQQto8PT7dL5WXXp59fkdheMtlb71cZBDzI0fmgAKhynp
VSJYACPq4xJDKVtHCN2MQWplBqjlIapBtJUhlbl90TSrE9atvNziPTnNvT51cKEY
WQPJIrSPnNVeKtelttQKbfi3QBFGmh95DmK/D5fs4C8fF5Q=
-----END CERTIFICATE-----

# Issuer: CN=DigiCert Assured ID Root CA O=DigiCert Inc OU=www.digicert.com
# Subject: CN=DigiCert Assured ID Root CA O=DigiCert Inc OU=www.digicert.com
# Label: "DigiCert Assured ID Root CA"
# Serial: 17154717934120587862167794914071425081
# MD5 Fingerprint: 87:ce:0b:7b:2a:0e:49:00:e1:58:71:9b:37:a8:93:72
# SHA1 Fingerprint: 05:63:b8:63:0d:62:d7:5a:bb:c8:ab:1e:4b:df:b5:a8:99:b2:4d:43
# SHA256 Fingerprint: 3e:90:99:b5:01:5e:8f:48:6c:00:bc:ea:9d:11:1e:e7:21:fa:ba:35:5a:89:bc:f1:df:69:56:1e:3d:c6:32:5c
-----BEGIN CERTIFICATE-----
MIIDtzCCAp+gAwIBAgIQDOfg5RfYRv6P5WD8G/AwOTANBgkqhkiG9w0BAQUFADBl
MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3
d3cuZGlnaWNlcnQuY29tMSQwIgYDVQQDExtEaWdpQ2VydCBBc3N1cmVkIElEIFJv
b3QgQ0EwHhcNMDYxMTEwMDAwMDAwWhcNMzExMTEwMDAwMDAwWjBlMQswCQYDVQQG
EwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3d3cuZGlnaWNl
cnQuY29tMSQwIgYDVQQDExtEaWdpQ2VydCBBc3N1cmVkIElEIFJvb3QgQ0EwggEi
MA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCtDhXO5EOAXLGH87dg+XESpa7c
JpSIqvTO9SA5KFhgDPiA2qkVlTJhPLWxKISKityfCgyDF3qPkKyK53lTXDGEKvYP
mDI2dsze3Tyoou9q+yHyUmHfnyDXH+Kx2f4YZNISW1/5WBg1vEfNoTb5a3/UsDg+
wRvDjDPZ2C8Y/igPs6eD1sNuRMBhNZYW/lmci3Zt1/GiSw0r/wty2p5g0I6QNcZ4
VYcgoc/lbQrISXwxmDNsIumH0DJaoroTghHtORedmTpyoeb6pNnVFzF1roV9Iq4/
AUaG9ih5yLHa5FcXxH4cDrC0kqZWs72yl+2qp/C3xag/lRbQ/6GW6whfGHdPAgMB
AAGjYzBhMA4GA1UdDwEB/wQEAwIBhjAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW
BBRF66Kv9JLLgjEtUYunpyGd823IDzAfBgNVHSMEGDAWgBRF66Kv9JLLgjEtUYun
pyGd823IDzANBgkqhkiG9w0BAQUFAAOCAQEAog683+Lt8ONyc3pklL/3cmbYMuRC
dWKuh+vy1dneVrOfzM4UKLkNl2BcEkxY5NM9g0lFWJc1aRqoR+pWxnmrEthngYTf
fwk8lOa4JiwgvT2zKIn3X/8i4peEH+ll74fg38FnSbNd67IJKusm7Xi+fT8r87cm
NW1fiQG2SVufAQWbqz0lwcy2f8Lxb4bG+mRo64EtlOtCt/qMHt1i8b5QZ7dsvfPx
H2sMNgcWfzd8qVttevESRmCD1ycEvkvOl77DZypoEd+A5wwzZr8TDRRu838fYxAe
+o0bJW1sj6W3YQGx0qMmoRBxna3iw/nDmVG3KwcIzi7mULKn+gpFL6Lw8g==
-----END CERTIFICATE-----

# Issuer: CN=DigiCert Global Root CA O=DigiCert Inc OU=www.digicert.com
# Subject: CN=DigiCert Global Root CA O=DigiCert Inc OU=www.digicert.com
# Label: "DigiCert Global Root CA"
# Serial: 10944719598952040374951832963794454346
# MD5 Fingerprint: 79:e4:a9:84:0d:7d:3a:96:d7:c0:4f:e2:43:4c:89:2e
# SHA1 Fingerprint: a8:98:5d:3a:65:e5:e5:c4:b2:d7:d6:6d:40:c6:dd:2f:b1:9c:54:36
# SHA256 Fingerprint: 43:48:a0:e9:44:4c:78:cb:26:5e:05:8d:5e:89:44:b4:d8:4f:96:62:bd:26:db:25:7f:89:34:a4:43:c7:01:61
-----BEGIN CERTIFICATE-----
MIIDrzCCApegAwIBAgIQCDvgVpBCRrGhdWrJWZHHSjANBgkqhkiG9w0BAQUFADBh
MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3
d3cuZGlnaWNlcnQuY29tMSAwHgYDVQQDExdEaWdpQ2VydCBHbG9iYWwgUm9vdCBD
QTAeFw0wNjExMTAwMDAwMDBaFw0zMTExMTAwMDAwMDBaMGExCzAJBgNVBAYTAlVT
MRUwEwYDVQQKEwxEaWdpQ2VydCBJbmMxGTAXBgNVBAsTEHd3dy5kaWdpY2VydC5j
b20xIDAeBgNVBAMTF0RpZ2lDZXJ0IEdsb2JhbCBSb290IENBMIIBIjANBgkqhkiG
9w0BAQEFAAOCAQ8AMIIBCgKCAQEA4jvhEXLeqKTTo1eqUKKPC3eQyaKl7hLOllsB
CSDMAZOnTjC3U/dDxGkAV53ijSLdhwZAAIEJzs4bg7/fzTtxRuLWZscFs3YnFo97
nh6Vfe63SKMI2tavegw5BmV/Sl0fvBf4q77uKNd0f3p4mVmFaG5cIzJLv07A6Fpt
43C/dxC//AH2hdmoRBBYMql1GNXRor5H4idq9Joz+EkIYIvUX7Q6hL+hqkpMfT7P
T19sdl6gSzeRntwi5m3OFBqOasv+zbMUZBfHWymeMr/y7vrTC0LUq7dBMtoM1O/4
gdW7jVg/tRvoSSiicNoxBN33shbyTApOB6jtSj1etX+jkMOvJwIDAQABo2MwYTAO
BgNVHQ8BAf8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUA95QNVbR
TLtm8KPiGxvDl7I90VUwHwYDVR0jBBgwFoAUA95QNVbRTLtm8KPiGxvDl7I90VUw
DQYJKoZIhvcNAQEFBQADggEBAMucN6pIExIK+t1EnE9SsPTfrgT1eXkIoyQY/Esr
hMAtudXH/vTBH1jLuG2cenTnmCmrEbXjcKChzUyImZOMkXDiqw8cvpOp/2PV5Adg
06O/nVsJ8dWO41P0jmP6P6fbtGbfYmbW0W5BjfIttep3Sp+dWOIrWcBAI+0tKIJF
PnlUkiaY4IBIqDfv8NZ5YBberOgOzW6sRBc4L0na4UU+Krk2U886UAb3LujEV0ls
YSEY1QSteDwsOoBrp+uvFRTp2InBuThs4pFsiv9kuXclVzDAGySj4dzp30d8tbQk
CAUw7C29C79Fv1C5qfPrmAESrciIxpg0X40KPMbp1ZWVbd4=
-----END CERTIFICATE-----

# Issuer: CN=DigiCert High Assurance EV Root CA O=DigiCert Inc OU=www.digicert.com
# Subject: CN=DigiCert High Assurance EV Root CA O=DigiCert Inc OU=www.digicert.com
# Label: "DigiCert High Assurance EV Root CA"
# Serial: 3553400076410547919724730734378100087
# MD5 Fingerprint: d4:74:de:57:5c:39:b2:d3:9c:85:83:c5:c0:65:49:8a
# SHA1 Fingerprint: 5f:b7:ee:06:33:e2:59:db:ad:0c:4c:9a:e6:d3:8f:1a:61:c7:dc:25
# SHA256 Fingerprint: 74:31:e5:f4:c3:c1:ce:46:90:77:4f:0b:61:e0:54:40:88:3b:a9:a0:1e:d0:0b:a6:ab:d7:80:6e:d3:b1:18:cf
-----BEGIN CERTIFICATE-----
MIIDxTCCAq2gAwIBAgIQAqxcJmoLQJuPC3nyrkYldzANBgkqhkiG9w0BAQUFADBs
MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3
d3cuZGlnaWNlcnQuY29tMSswKQYDVQQDEyJEaWdpQ2VydCBIaWdoIEFzc3VyYW5j
ZSBFViBSb290IENBMB4XDTA2MTExMDAwMDAwMFoXDTMxMTExMDAwMDAwMFowbDEL
MAkGA1UEBhMCVVMxFTATBgNVBAoTDERpZ2lDZXJ0IEluYzEZMBcGA1UECxMQd3d3
LmRpZ2ljZXJ0LmNvbTErMCkGA1UEAxMiRGlnaUNlcnQgSGlnaCBBc3N1cmFuY2Ug
RVYgUm9vdCBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMbM5XPm
+9S75S0tMqbf5YE/yc0lSbZxKsPVlDRnogocsF9ppkCxxLeyj9CYpKlBWTrT3JTW
PNt0OKRKzE0lgvdKpVMSOO7zSW1xkX5jtqumX8OkhPhPYlG++MXs2ziS4wblCJEM
xChBVfvLWokVfnHoNb9Ncgk9vjo4UFt3MRuNs8ckRZqnrG0AFFoEt7oT61EKmEFB
Ik5lYYeBQVCmeVyJ3hlKV9Uu5l0cUyx+mM0aBhakaHPQNAQTXKFx01p8VdteZOE3
hzBWBOURtCmAEvF5OYiiAhF8J2a3iLd48soKqDirCmTCv2ZdlYTBoSUeh10aUAsg
EsxBu24LUTi4S8sCAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgGGMA8GA1UdEwEB/wQF
MAMBAf8wHQYDVR0OBBYEFLE+w2kD+L9HAdSYJhoIAu9jZCvDMB8GA1UdIwQYMBaA
FLE+w2kD+L9HAdSYJhoIAu9jZCvDMA0GCSqGSIb3DQEBBQUAA4IBAQAcGgaX3Nec
nzyIZgYIVyHbIUf4KmeqvxgydkAQV8GK83rZEWWONfqe/EW1ntlMMUu4kehDLI6z
eM7b41N5cdblIZQB2lWHmiRk9opmzN6cN82oNLFpmyPInngiK3BD41VHMWEZ71jF
hS9OMPagMRYjyOfiZRYzy78aG6A9+MpeizGLYAiJLQwGXFK3xPkKmNEVX58Svnw2
Yzi9RKR/5CYrCsSXaQ3pjOLAEFe4yHYSkVXySGnYvCoCWw9E1CAx2/S6cCZdkGCe
vEsXCS+0yx5DaMkHJ8HSXPfqIbloEpw8nL+e/IBcm2PN7EeqJSdnoDfzAIJ9VNep
+OkuE6N36B9K
-----END CERTIFICATE-----

# Issuer: CN=SwissSign Gold CA - G2 O=SwissSign AG
# Subject: CN=SwissSign Gold CA - G2 O=SwissSign AG
# Label: "SwissSign Gold CA - G2"
# Serial: 13492815561806991280
# MD5 Fingerprint: 24:77:d9:a8:91:d1:3b:fa:88:2d:c2:ff:f8:cd:33:93
# SHA1 Fingerprint: d8:c5:38:8a:b7:30:1b:1b:6e:d4:7a:e6:45:25:3a:6f:9f:1a:27:61
# SHA256 Fingerprint: 62:dd:0b:e9:b9:f5:0a:16:3e:a0:f8:e7:5c:05:3b:1e:ca:57:ea:55:c8:68:8f:64:7c:68:81:f2:c8:35:7b:95
-----BEGIN CERTIFICATE-----
MIIFujCCA6KgAwIBAgIJALtAHEP1Xk+wMA0GCSqGSIb3DQEBBQUAMEUxCzAJBgNV
BAYTAkNIMRUwEwYDVQQKEwxTd2lzc1NpZ24gQUcxHzAdBgNVBAMTFlN3aXNzU2ln
biBHb2xkIENBIC0gRzIwHhcNMDYxMDI1MDgzMDM1WhcNMzYxMDI1MDgzMDM1WjBF
MQswCQYDVQQGEwJDSDEVMBMGA1UEChMMU3dpc3NTaWduIEFHMR8wHQYDVQQDExZT
d2lzc1NpZ24gR29sZCBDQSAtIEcyMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIIC
CgKCAgEAr+TufoskDhJuqVAtFkQ7kpJcyrhdhJJCEyq8ZVeCQD5XJM1QiyUqt2/8
76LQwB8CJEoTlo8jE+YoWACjR8cGp4QjK7u9lit/VcyLwVcfDmJlD909Vopz2q5+
bbqBHH5CjCA12UNNhPqE21Is8w4ndwtrvxEvcnifLtg+5hg3Wipy+dpikJKVyh+c
6bM8K8vzARO/Ws/BtQpgvd21mWRTuKCWs2/iJneRjOBiEAKfNA+k1ZIzUd6+jbqE
emA8atufK+ze3gE/bk3lUIbLtK/tREDFylqM2tIrfKjuvqblCqoOpd8FUrdVxyJd
MmqXl2MT28nbeTZ7hTpKxVKJ+STnnXepgv9VHKVxaSvRAiTysybUa9oEVeXBCsdt
MDeQKuSeFDNeFhdVxVu1yzSJkvGdJo+hB9TGsnhQ2wwMC3wLjEHXuendjIj3o02y
MszYF9rNt85mndT9Xv+9lz4pded+p2JYryU0pUHHPbwNUMoDAw8IWh+Vc3hiv69y
FGkOpeUDDniOJihC8AcLYiAQZzlG+qkDzAQ4embvIIO1jEpWjpEA/I5cgt6IoMPi
aG59je883WX0XaxR7ySArqpWl2/5rX3aYT+YdzylkbYcjCbaZaIJbcHiVOO5ykxM
gI93e2CaHt+28kgeDrpOVG2Y4OGiGqJ3UM/EY5LsRxmd6+ZrzsECAwEAAaOBrDCB
qTAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUWyV7
lqRlUX64OfPAeGZe6Drn8O4wHwYDVR0jBBgwFoAUWyV7lqRlUX64OfPAeGZe6Drn
8O4wRgYDVR0gBD8wPTA7BglghXQBWQECAQEwLjAsBggrBgEFBQcCARYgaHR0cDov
L3JlcG9zaXRvcnkuc3dpc3NzaWduLmNvbS8wDQYJKoZIhvcNAQEFBQADggIBACe6
45R88a7A3hfm5djV9VSwg/S7zV4Fe0+fdWavPOhWfvxyeDgD2StiGwC5+OlgzczO
UYrHUDFu4Up+GC9pWbY9ZIEr44OE5iKHjn3g7gKZYbge9LgriBIWhMIxkziWMaa5
O1M/wySTVltpkuzFwbs4AOPsF6m43Md8AYOfMke6UiI0HTJ6CVanfCU2qT1L2sCC
bwq7EsiHSycR+R4tx5M/nttfJmtS2S6K8RTGRI0Vqbe/vd6mGu6uLftIdxf+u+yv
GPUqUfA5hJeVbG4bwyvEdGB5JbAKJ9/fXtI5z0V9QkvfsywexcZdylU6oJxpmo/a
77KwPJ+HbBIrZXAVUjEaJM9vMSNQH4xPjyPDdEFjHFWoFN0+4FFQz/EbMFYOkrCC
hdiDyyJkvC24JdVUorgG6q2SpCSgwYa1ShNqR88uC1aVVMvOmttqtKay20EIhid3
92qgQmwLOM7XdVAyksLfKzAiSNDVQTglXaTpXZ/GlHXQRf0wl0OPkKsKx4ZzYEpp
Ld6leNcG2mqeSz53OiATIgHQv2ieY2BrNU0LbbqhPcCT4H8js1WtciVORvnSFu+w
ZMEBnunKoGqYDs/YYPIvSbjkQuE4NRb0yG5P94FW6LqjviOvrv1vA+ACOzB2+htt
Qc8Bsem4yWb02ybzOqR08kkkW8mw0FfB+j564ZfJ
-----END CERTIFICATE-----

# Issuer: CN=SwissSign Silver CA - G2 O=SwissSign AG
# Subject: CN=SwissSign Silver CA - G2 O=SwissSign AG
# Label: "SwissSign Silver CA - G2"
# Serial: 5700383053117599563
# MD5 Fingerprint: e0:06:a1:c9:7d:cf:c9:fc:0d:c0:56:75:96:d8:62:13
# SHA1 Fingerprint: 9b:aa:e5:9f:56:ee:21:cb:43:5a:be:25:93:df:a7:f0:40:d1:1d:cb
# SHA256 Fingerprint: be:6c:4d:a2:bb:b9:ba:59:b6:f3:93:97:68:37:42:46:c3:c0:05:99:3f:a9:8f:02:0d:1d:ed:be:d4:8a:81:d5
-----BEGIN CERTIFICATE-----
MIIFvTCCA6WgAwIBAgIITxvUL1S7L0swDQYJKoZIhvcNAQEFBQAwRzELMAkGA1UE
BhMCQ0gxFTATBgNVBAoTDFN3aXNzU2lnbiBBRzEhMB8GA1UEAxMYU3dpc3NTaWdu
IFNpbHZlciBDQSAtIEcyMB4XDTA2MTAyNTA4MzI0NloXDTM2MTAyNTA4MzI0Nlow
RzELMAkGA1UEBhMCQ0gxFTATBgNVBAoTDFN3aXNzU2lnbiBBRzEhMB8GA1UEAxMY
U3dpc3NTaWduIFNpbHZlciBDQSAtIEcyMIICIjANBgkqhkiG9w0BAQEFAAOCAg8A
MIICCgKCAgEAxPGHf9N4Mfc4yfjDmUO8x/e8N+dOcbpLj6VzHVxumK4DV644N0Mv
Fz0fyM5oEMF4rhkDKxD6LHmD9ui5aLlV8gREpzn5/ASLHvGiTSf5YXu6t+WiE7br
YT7QbNHm+/pe7R20nqA1W6GSy/BJkv6FCgU+5tkL4k+73JU3/JHpMjUi0R86TieF
nbAVlDLaYQ1HTWBCrpJH6INaUFjpiou5XaHc3ZlKHzZnu0jkg7Y360g6rw9njxcH
6ATK72oxh9TAtvmUcXtnZLi2kUpCe2UuMGoM9ZDulebyzYLs2aFK7PayS+VFheZt
eJMELpyCbTapxDFkH4aDCyr0NQp4yVXPQbBH6TCfmb5hqAaEuSh6XzjZG6k4sIN/
c8HDO0gqgg8hm7jMqDXDhBuDsz6+pJVpATqJAHgE2cn0mRmrVn5bi4Y5FZGkECwJ
MoBgs5PAKrYYC51+jUnyEEp/+dVGLxmSo5mnJqy7jDzmDrxHB9xzUfFwZC8I+bRH
HTBsROopN4WSaGa8gzj+ezku01DwH/teYLappvonQfGbGHLy9YR0SslnxFSuSGTf
jNFusB3hB48IHpmccelM2KX3RxIfdNFRnobzwqIjQAtz20um53MGjMGg6cFZrEb6
5i/4z3GcRm25xBWNOHkDRUjvxF3XCO6HOSKGsg0PWEP3calILv3q1h8CAwEAAaOB
rDCBqTAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU
F6DNweRBtjpbO8tFnb0cwpj6hlgwHwYDVR0jBBgwFoAUF6DNweRBtjpbO8tFnb0c
wpj6hlgwRgYDVR0gBD8wPTA7BglghXQBWQEDAQEwLjAsBggrBgEFBQcCARYgaHR0
cDovL3JlcG9zaXRvcnkuc3dpc3NzaWduLmNvbS8wDQYJKoZIhvcNAQEFBQADggIB
AHPGgeAn0i0P4JUw4ppBf1AsX19iYamGamkYDHRJ1l2E6kFSGG9YrVBWIGrGvShp
WJHckRE1qTodvBqlYJ7YH39FkWnZfrt4csEGDyrOj4VwYaygzQu4OSlWhDJOhrs9
xCrZ1x9y7v5RoSJBsXECYxqCsGKrXlcSH9/L3XWgwF15kIwb4FDm3jH+mHtwX6WQ
2K34ArZv02DdQEsixT2tOnqfGhpHkXkzuoLcMmkDlm4fS/Bx/uNncqCxv1yL5PqZ
IseEuRuNI5c/7SXgz2W79WEE790eslpBIlqhn10s6FvJbakMDHiqYMZWjwFaDGi8
aRl5xB9+lwW/xekkUV7U1UtT7dkjWjYDZaPBA61BMPNGG4WQr2W11bHkFlt4dR2X
em1ZqSqPe97Dh4kQmUlzeMg9vVE1dCrV8X5pGyq7O70luJpaPXJhkGaH7gzWTdQR
dAtq/gsD/KNVV4n+SsuuWxcFyPKNIzFTONItaj+CuY0IavdeQXRuwxF+B6wpYJE/
OMpXEA29MC/HpeZBoNquBYeaoKRlbEwJDIm6uNO5wJOKMPqN5ZprFQFOZ6raYlY+
hAhm0sQ2fac+EPyI4NSA5QC9qvNOBqN6avlicuMJT+ubDgEj8Z+7fNzcbBGXJbLy
tGMU0gYqZ4yD9c7qB9iaah7s5Aq7KkzrCWA5zspi2C5u
-----END CERTIFICATE-----

# Issuer: CN=SecureTrust CA O=SecureTrust Corporation
# Subject: CN=SecureTrust CA O=SecureTrust Corporation
# Label: "SecureTrust CA"
# Serial: 17199774589125277788362757014266862032
# MD5 Fingerprint: dc:32:c3:a7:6d:25:57:c7:68:09:9d:ea:2d:a9:a2:d1
# SHA1 Fingerprint: 87:82:c6:c3:04:35:3b:cf:d2:96:92:d2:59:3e:7d:44:d9:34:ff:11
# SHA256 Fingerprint: f1:c1:b5:0a:e5:a2:0d:d8:03:0e:c9:f6:bc:24:82:3d:d3:67:b5:25:57:59:b4:e7:1b:61:fc:e9:f7:37:5d:73
-----BEGIN CERTIFICATE-----
MIIDuDCCAqCgAwIBAgIQDPCOXAgWpa1Cf/DrJxhZ0DANBgkqhkiG9w0BAQUFADBI
MQswCQYDVQQGEwJVUzEgMB4GA1UEChMXU2VjdXJlVHJ1c3QgQ29ycG9yYXRpb24x
FzAVBgNVBAMTDlNlY3VyZVRydXN0IENBMB4XDTA2MTEwNzE5MzExOFoXDTI5MTIz
MTE5NDA1NVowSDELMAkGA1UEBhMCVVMxIDAeBgNVBAoTF1NlY3VyZVRydXN0IENv
cnBvcmF0aW9uMRcwFQYDVQQDEw5TZWN1cmVUcnVzdCBDQTCCASIwDQYJKoZIhvcN
AQEBBQADggEPADCCAQoCggEBAKukgeWVzfX2FI7CT8rU4niVWJxB4Q2ZQCQXOZEz
Zum+4YOvYlyJ0fwkW2Gz4BERQRwdbvC4u/jep4G6pkjGnx29vo6pQT64lO0pGtSO
0gMdA+9tDWccV9cGrcrI9f4Or2YlSASWC12juhbDCE/RRvgUXPLIXgGZbf2IzIao
wW8xQmxSPmjL8xk037uHGFaAJsTQ3MBv396gwpEWoGQRS0S8Hvbn+mPeZqx2pHGj
7DaUaHp3pLHnDi+BeuK1cobvomuL8A/b01k/unK8RCSc43Oz969XL0Imnal0ugBS
8kvNU3xHCzaFDmapCJcWNFfBZveA4+1wVMeT4C4oFVmHursCAwEAAaOBnTCBmjAT
BgkrBgEEAYI3FAIEBh4EAEMAQTALBgNVHQ8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB
/zAdBgNVHQ4EFgQUQjK2FvoE/f5dS3rD/fdMQB1aQ68wNAYDVR0fBC0wKzApoCeg
JYYjaHR0cDovL2NybC5zZWN1cmV0cnVzdC5jb20vU1RDQS5jcmwwEAYJKwYBBAGC
NxUBBAMCAQAwDQYJKoZIhvcNAQEFBQADggEBADDtT0rhWDpSclu1pqNlGKa7UTt3
6Z3q059c4EVlew3KW+JwULKUBRSuSceNQQcSc5R+DCMh/bwQf2AQWnL1mA6s7Ll/
3XpvXdMc9P+IBWlCqQVxyLesJugutIxq/3HcuLHfmbx8IVQr5Fiiu1cprp6poxkm
D5kuCLDv/WnPmRoJjeOnnyvJNjR7JLN4TJUXpAYmHrZkUjZfYGfZnMUFdAvnZyPS
CPyI6a6Lf+Ew9Dd+/cYy2i2eRDAwbO4H3tI0/NL/QPZL9GZGBlSm8jIKYyYwa5vR
3ItHuuG51WLQoqD0ZwV4KWMabwTW+MZMo5qxN7SN5ShLHZ4swrhovO0C7jE=
-----END CERTIFICATE-----

# Issuer: CN=Secure Global CA O=SecureTrust Corporation
# Subject: CN=Secure Global CA O=SecureTrust Corporation
# Label: "Secure Global CA"
# Serial: 9751836167731051554232119481456978597
# MD5 Fingerprint: cf:f4:27:0d:d4:ed:dc:65:16:49:6d:3d:da:bf:6e:de
# SHA1 Fingerprint: 3a:44:73:5a:e5:81:90:1f:24:86:61:46:1e:3b:9c:c4:5f:f5:3a:1b
# SHA256 Fingerprint: 42:00:f5:04:3a:c8:59:0e:bb:52:7d:20:9e:d1:50:30:29:fb:cb:d4:1c:a1:b5:06:ec:27:f1:5a:de:7d:ac:69
-----BEGIN CERTIFICATE-----
MIIDvDCCAqSgAwIBAgIQB1YipOjUiolN9BPI8PjqpTANBgkqhkiG9w0BAQUFADBK
MQswCQYDVQQGEwJVUzEgMB4GA1UEChMXU2VjdXJlVHJ1c3QgQ29ycG9yYXRpb24x
GTAXBgNVBAMTEFNlY3VyZSBHbG9iYWwgQ0EwHhcNMDYxMTA3MTk0MjI4WhcNMjkx
MjMxMTk1MjA2WjBKMQswCQYDVQQGEwJVUzEgMB4GA1UEChMXU2VjdXJlVHJ1c3Qg
Q29ycG9yYXRpb24xGTAXBgNVBAMTEFNlY3VyZSBHbG9iYWwgQ0EwggEiMA0GCSqG
SIb3DQEBAQUAA4IBDwAwggEKAoIBAQCvNS7YrGxVaQZx5RNoJLNP2MwhR/jxYDiJ
iQPpvepeRlMJ3Fz1Wuj3RSoC6zFh1ykzTM7HfAo3fg+6MpjhHZevj8fcyTiW89sa
/FHtaMbQbqR8JNGuQsiWUGMu4P51/pinX0kuleM5M2SOHqRfkNJnPLLZ/kG5VacJ
jnIFHovdRIWCQtBJwB1g8NEXLJXr9qXBkqPFwqcIYA1gBBCWeZ4WNOaptvolRTnI
HmX5k/Wq8VLcmZg9pYYaDDUz+kulBAYVHDGA76oYa8J719rO+TMg1fW9ajMtgQT7
sFzUnKPiXB3jqUJ1XnvUd+85VLrJChgbEplJL4hL/VBi0XPnj3pDAgMBAAGjgZ0w
gZowEwYJKwYBBAGCNxQCBAYeBABDAEEwCwYDVR0PBAQDAgGGMA8GA1UdEwEB/wQF
MAMBAf8wHQYDVR0OBBYEFK9EBMJBfkiD2045AuzshHrmzsmkMDQGA1UdHwQtMCsw
KaAnoCWGI2h0dHA6Ly9jcmwuc2VjdXJldHJ1c3QuY29tL1NHQ0EuY3JsMBAGCSsG
AQQBgjcVAQQDAgEAMA0GCSqGSIb3DQEBBQUAA4IBAQBjGghAfaReUw132HquHw0L
URYD7xh8yOOvaliTFGCRsoTciE6+OYo68+aCiV0BN7OrJKQVDpI1WkpEXk5X+nXO
H0jOZvQ8QCaSmGwb7iRGDBezUqXbpZGRzzfTb+cnCDpOGR86p1hcF895P4vkp9Mm
I50mD1hp/Ed+stCNi5O/KU9DaXR2Z0vPB4zmAve14bRDtUstFJ/53CYNv6ZHdAbY
iNE6KTCEztI5gGIbqMdXSbxqVVFnFUq+NQfk1XWYN3kwFNspnWzFacxHVaIw98xc
f8LDmBxrThaA63p4ZUWiABqvDA1VZDRIuJK58bRQKfJPIx/abKwfROHdI3hRW8cW
-----END CERTIFICATE-----

# Issuer: CN=COMODO Certification Authority O=COMODO CA Limited
# Subject: CN=COMODO Certification Authority O=COMODO CA Limited
# Label: "COMODO Certification Authority"
# Serial: 104350513648249232941998508985834464573
# MD5 Fingerprint: 5c:48:dc:f7:42:72:ec:56:94:6d:1c:cc:71:35:80:75
# SHA1 Fingerprint: 66:31:bf:9e:f7:4f:9e:b6:c9:d5:a6:0c:ba:6a:be:d1:f7:bd:ef:7b
# SHA256 Fingerprint: 0c:2c:d6:3d:f7:80:6f:a3:99:ed:e8:09:11:6b:57:5b:f8:79:89:f0:65:18:f9:80:8c:86:05:03:17:8b:af:66
-----BEGIN CERTIFICATE-----
MIIEHTCCAwWgAwIBAgIQToEtioJl4AsC7j41AkblPTANBgkqhkiG9w0BAQUFADCB
gTELMAkGA1UEBhMCR0IxGzAZBgNVBAgTEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4G
A1UEBxMHU2FsZm9yZDEaMBgGA1UEChMRQ09NT0RPIENBIExpbWl0ZWQxJzAlBgNV
BAMTHkNPTU9ETyBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTAeFw0wNjEyMDEwMDAw
MDBaFw0yOTEyMzEyMzU5NTlaMIGBMQswCQYDVQQGEwJHQjEbMBkGA1UECBMSR3Jl
YXRlciBNYW5jaGVzdGVyMRAwDgYDVQQHEwdTYWxmb3JkMRowGAYDVQQKExFDT01P
RE8gQ0EgTGltaXRlZDEnMCUGA1UEAxMeQ09NT0RPIENlcnRpZmljYXRpb24gQXV0
aG9yaXR5MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA0ECLi3LjkRv3
UcEbVASY06m/weaKXTuH+7uIzg3jLz8GlvCiKVCZrts7oVewdFFxze1CkU1B/qnI
2GqGd0S7WWaXUF601CxwRM/aN5VCaTwwxHGzUvAhTaHYujl8HJ6jJJ3ygxaYqhZ8
Q5sVW7euNJH+1GImGEaaP+vB+fGQV+useg2L23IwambV4EajcNxo2f8ESIl33rXp
+2dtQem8Ob0y2WIC8bGoPW43nOIv4tOiJovGuFVDiOEjPqXSJDlqR6sA1KGzqSX+
DT+nHbrTUcELpNqsOO9VUCQFZUaTNE8tja3G1CEZ0o7KBWFxB3NH5YoZEr0ETc5O
nKVIrLsm9wIDAQABo4GOMIGLMB0GA1UdDgQWBBQLWOWLxkwVN6RAqTCpIb5HNlpW
/zAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zBJBgNVHR8EQjBAMD6g
PKA6hjhodHRwOi8vY3JsLmNvbW9kb2NhLmNvbS9DT01PRE9DZXJ0aWZpY2F0aW9u
QXV0aG9yaXR5LmNybDANBgkqhkiG9w0BAQUFAAOCAQEAPpiem/Yb6dc5t3iuHXIY
SdOH5EOC6z/JqvWote9VfCFSZfnVDeFs9D6Mk3ORLgLETgdxb8CPOGEIqB6BCsAv
IC9Bi5HcSEW88cbeunZrM8gALTFGTO3nnc+IlP8zwFboJIYmuNg4ON8qa90SzMc/
RxdMosIGlgnW2/4/PEZB31jiVg88O8EckzXZOFKs7sjsLjBOlDW0JB9LeGna8gI4
zJVSk/BwJVmcIGfE7vmLV2H0knZ9P4SNVbfo5azV8fUZVqZa+5Acr5Pr5RzUZ5dd
BA6+C4OmF4O5MBKgxTMVBbkN+8cFduPYSo38NBejxiEovjBFMR7HeL5YYTisO+IB
ZQ==
-----END CERTIFICATE-----

# Issuer: CN=COMODO ECC Certification Authority O=COMODO CA Limited
# Subject: CN=COMODO ECC Certification Authority O=COMODO CA Limited
# Label: "COMODO ECC Certification Authority"
# Serial: 41578283867086692638256921589707938090
# MD5 Fingerprint: 7c:62:ff:74:9d:31:53:5e:68:4a:d5:78:aa:1e:bf:23
# SHA1 Fingerprint: 9f:74:4e:9f:2b:4d:ba:ec:0f:31:2c:50:b6:56:3b:8e:2d:93:c3:11
# SHA256 Fingerprint: 17:93:92:7a:06:14:54:97:89:ad:ce:2f:8f:34:f7:f0:b6:6d:0f:3a:e3:a3:b8:4d:21:ec:15:db:ba:4f:ad:c7
-----BEGIN CERTIFICATE-----
MIICiTCCAg+gAwIBAgIQH0evqmIAcFBUTAGem2OZKjAKBggqhkjOPQQDAzCBhTEL
MAkGA1UEBhMCR0IxGzAZBgNVBAgTEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4GA1UE
BxMHU2FsZm9yZDEaMBgGA1UEChMRQ09NT0RPIENBIExpbWl0ZWQxKzApBgNVBAMT
IkNPTU9ETyBFQ0MgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMDgwMzA2MDAw
MDAwWhcNMzgwMTE4MjM1OTU5WjCBhTELMAkGA1UEBhMCR0IxGzAZBgNVBAgTEkdy
ZWF0ZXIgTWFuY2hlc3RlcjEQMA4GA1UEBxMHU2FsZm9yZDEaMBgGA1UEChMRQ09N
T0RPIENBIExpbWl0ZWQxKzApBgNVBAMTIkNPTU9ETyBFQ0MgQ2VydGlmaWNhdGlv
biBBdXRob3JpdHkwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAAQDR3svdcmCFYX7deSR
FtSrYpn1PlILBs5BAH+X4QokPB0BBO490o0JlwzgdeT6+3eKKvUDYEs2ixYjFq0J
cfRK9ChQtP6IHG4/bC8vCVlbpVsLM5niwz2J+Wos77LTBumjQjBAMB0GA1UdDgQW
BBR1cacZSBm8nZ3qQUfflMRId5nTeTAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/
BAUwAwEB/zAKBggqhkjOPQQDAwNoADBlAjEA7wNbeqy3eApyt4jf/7VGFAkK+qDm
fQjGGoe9GKhzvSbKYAydzpmfz1wPMOG+FDHqAjAU9JM8SaczepBGR7NjfRObTrdv
GDeAU/7dIOA1mjbRxwG55tzd8/8dLDoWV9mSOdY=
-----END CERTIFICATE-----

# Issuer: CN=Certigna O=Dhimyotis
# Subject: CN=Certigna O=Dhimyotis
# Label: "Certigna"
# Serial: 18364802974209362175
# MD5 Fingerprint: ab:57:a6:5b:7d:42:82:19:b5:d8:58:26:28:5e:fd:ff
# SHA1 Fingerprint: b1:2e:13:63:45:86:a4:6f:1a:b2:60:68:37:58:2d:c4:ac:fd:94:97
# SHA256 Fingerprint: e3:b6:a2:db:2e:d7:ce:48:84:2f:7a:c5:32:41:c7:b7:1d:54:14:4b:fb:40:c1:1f:3f:1d:0b:42:f5:ee:a1:2d
-----BEGIN CERTIFICATE-----
MIIDqDCCApCgAwIBAgIJAP7c4wEPyUj/MA0GCSqGSIb3DQEBBQUAMDQxCzAJBgNV
BAYTAkZSMRIwEAYDVQQKDAlEaGlteW90aXMxETAPBgNVBAMMCENlcnRpZ25hMB4X
DTA3MDYyOTE1MTMwNVoXDTI3MDYyOTE1MTMwNVowNDELMAkGA1UEBhMCRlIxEjAQ
BgNVBAoMCURoaW15b3RpczERMA8GA1UEAwwIQ2VydGlnbmEwggEiMA0GCSqGSIb3
DQEBAQUAA4IBDwAwggEKAoIBAQDIaPHJ1tazNHUmgh7stL7qXOEm7RFHYeGifBZ4
QCHkYJ5ayGPhxLGWkv8YbWkj4Sti993iNi+RB7lIzw7sebYs5zRLcAglozyHGxny
gQcPOJAZ0xH+hrTy0V4eHpbNgGzOOzGTtvKg0KmVEn2lmsxryIRWijOp5yIVUxbw
zBfsV1/pogqYCd7jX5xv3EjjhQsVWqa6n6xI4wmy9/Qy3l40vhx4XUJbzg4ij02Q
130yGLMLLGq/jj8UEYkgDncUtT2UCIf3JR7VsmAA7G8qKCVuKj4YYxclPz5EIBb2
JsglrgVKtOdjLPOMFlN+XPsRGgjBRmKfIrjxwo1p3Po6WAbfAgMBAAGjgbwwgbkw
DwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUGu3+QTmQtCRZvgHyUtVF9lo53BEw
ZAYDVR0jBF0wW4AUGu3+QTmQtCRZvgHyUtVF9lo53BGhOKQ2MDQxCzAJBgNVBAYT
AkZSMRIwEAYDVQQKDAlEaGlteW90aXMxETAPBgNVBAMMCENlcnRpZ25hggkA/tzj
AQ/JSP8wDgYDVR0PAQH/BAQDAgEGMBEGCWCGSAGG+EIBAQQEAwIABzANBgkqhkiG
9w0BAQUFAAOCAQEAhQMeknH2Qq/ho2Ge6/PAD/Kl1NqV5ta+aDY9fm4fTIrv0Q8h
bV6lUmPOEvjvKtpv6zf+EwLHyzs+ImvaYS5/1HI93TDhHkxAGYwP15zRgzB7mFnc
fca5DClMoTOi62c6ZYTTluLtdkVwj7Ur3vkj1kluPBS1xp81HlDQwY9qcEQCYsuu
HWhBp6pX6FOqB9IG9tUUBguRA3UsbHK1YZWaDYu5Def131TN3ubY1gkIl2PlwS6w
t0QmwCbAr1UwnjvVNioZBPRcHv/PLLf/0P2HQBHVESO7SMAhqaQoLf0V+LBOK/Qw
WyH8EZE0vkHve52Xdf+XlcCWWC/qu0bXu+TZLg==
-----END CERTIFICATE-----

# Issuer: O=Chunghwa Telecom Co., Ltd. OU=ePKI Root Certification Authority
# Subject: O=Chunghwa Telecom Co., Ltd. OU=ePKI Root Certification Authority
# Label: "ePKI Root Certification Authority"
# Serial: 28956088682735189655030529057352760477
# MD5 Fingerprint: 1b:2e:00:ca:26:06:90:3d:ad:fe:6f:15:68:d3:6b:b3
# SHA1 Fingerprint: 67:65:0d:f1:7e:8e:7e:5b:82:40:a4:f4:56:4b:cf:e2:3d:69:c6:f0
# SHA256 Fingerprint: c0:a6:f4:dc:63:a2:4b:fd:cf:54:ef:2a:6a:08:2a:0a:72:de:35:80:3e:2f:f5:ff:52:7a:e5:d8:72:06:df:d5
-----BEGIN CERTIFICATE-----
MIIFsDCCA5igAwIBAgIQFci9ZUdcr7iXAF7kBtK8nTANBgkqhkiG9w0BAQUFADBe
MQswCQYDVQQGEwJUVzEjMCEGA1UECgwaQ2h1bmdod2EgVGVsZWNvbSBDby4sIEx0
ZC4xKjAoBgNVBAsMIWVQS0kgUm9vdCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTAe
Fw0wNDEyMjAwMjMxMjdaFw0zNDEyMjAwMjMxMjdaMF4xCzAJBgNVBAYTAlRXMSMw
IQYDVQQKDBpDaHVuZ2h3YSBUZWxlY29tIENvLiwgTHRkLjEqMCgGA1UECwwhZVBL
SSBSb290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MIICIjANBgkqhkiG9w0BAQEF
AAOCAg8AMIICCgKCAgEA4SUP7o3biDN1Z82tH306Tm2d0y8U82N0ywEhajfqhFAH
SyZbCUNsIZ5qyNUD9WBpj8zwIuQf5/dqIjG3LBXy4P4AakP/h2XGtRrBp0xtInAh
ijHyl3SJCRImHJ7K2RKilTza6We/CKBk49ZCt0Xvl/T29de1ShUCWH2YWEtgvM3X
DZoTM1PRYfl61dd4s5oz9wCGzh1NlDivqOx4UXCKXBCDUSH3ET00hl7lSM2XgYI1
TBnsZfZrxQWh7kcT1rMhJ5QQCtkkO7q+RBNGMD+XPNjX12ruOzjjK9SXDrkb5wdJ
fzcq+Xd4z1TtW0ado4AOkUPB1ltfFLqfpo0kR0BZv3I4sjZsN/+Z0V0OWQqraffA
sgRFelQArr5T9rXn4fg8ozHSqf4hUmTFpmfwdQcGlBSBVcYn5AGPF8Fqcde+S/uU
WH1+ETOxQvdibBjWzwloPn9s9h6PYq2lY9sJpx8iQkEeb5mKPtf5P0B6ebClAZLS
nT0IFaUQAS2zMnaolQ2zepr7BxB4EW/hj8e6DyUadCrlHJhBmd8hh+iVBmoKs2pH
dmX2Os+PYhcZewoozRrSgx4hxyy/vv9haLdnG7t4TY3OZ+XkwY63I2binZB1NJip
NiuKmpS5nezMirH4JYlcWrYvjB9teSSnUmjDhDXiZo1jDiVN1Rmy5nk3pyKdVDEC
AwEAAaNqMGgwHQYDVR0OBBYEFB4M97Zn8uGSJglFwFU5Lnc/QkqiMAwGA1UdEwQF
MAMBAf8wOQYEZyoHAAQxMC8wLQIBADAJBgUrDgMCGgUAMAcGBWcqAwAABBRFsMLH
ClZ87lt4DJX5GFPBphzYEDANBgkqhkiG9w0BAQUFAAOCAgEACbODU1kBPpVJufGB
uvl2ICO1J2B01GqZNF5sAFPZn/KmsSQHRGoqxqWOeBLoR9lYGxMqXnmbnwoqZ6Yl
PwZpVnPDimZI+ymBV3QGypzqKOg4ZyYr8dW1P2WT+DZdjo2NQCCHGervJ8A9tDkP
JXtoUHRVnAxZfVo9QZQlUgjgRywVMRnVvwdVxrsStZf0X4OFunHB2WyBEXYKCrC/
gpf36j36+uwtqSiUO1bd0lEursC9CBWMd1I0ltabrNMdjmEPNXubrjlpC2JgQCA2
j6/7Nu4tCEoduL+bXPjqpRugc6bY+G7gMwRfaKonh+3ZwZCc7b3jajWvY9+rGNm6
5ulK6lCKD2GTHuItGeIwlDWSXQ62B68ZgI9HkFFLLk3dheLSClIKF5r8GrBQAuUB
o2M3IUxExJtRmREOc5wGj1QupyheRDmHVi03vYVElOEMSyycw5KFNGHLD7ibSkNS
/jQ6fbjpKdx2qcgw+BRxgMYeNkh0IkFch4LoGHGLQYlE535YW6i4jRPpp2zDR+2z
Gp1iro2C6pSe3VkQw63d4k3jMdXH7OjysP6SHhYKGvzZ8/gntsm+HbRsZJB/9OTE
W9c3rkIO3aQab3yIVMUWbuF6aC74Or8NpDyJO3inTmODBCEIZ43ygknQW/2xzQ+D
hNQ+IIX3Sj0rnP0qCglN6oH4EZw=
-----END CERTIFICATE-----

# Issuer: O=certSIGN OU=certSIGN ROOT CA
# Subject: O=certSIGN OU=certSIGN ROOT CA
# Label: "certSIGN ROOT CA"
# Serial: 35210227249154
# MD5 Fingerprint: 18:98:c0:d6:e9:3a:fc:f9:b0:f5:0c:f7:4b:01:44:17
# SHA1 Fingerprint: fa:b7:ee:36:97:26:62:fb:2d:b0:2a:f6:bf:03:fd:e8:7c:4b:2f:9b
# SHA256 Fingerprint: ea:a9:62:c4:fa:4a:6b:af:eb:e4:15:19:6d:35:1c:cd:88:8d:4f:53:f3:fa:8a:e6:d7:c4:66:a9:4e:60:42:bb
-----BEGIN CERTIFICATE-----
MIIDODCCAiCgAwIBAgIGIAYFFnACMA0GCSqGSIb3DQEBBQUAMDsxCzAJBgNVBAYT
AlJPMREwDwYDVQQKEwhjZXJ0U0lHTjEZMBcGA1UECxMQY2VydFNJR04gUk9PVCBD
QTAeFw0wNjA3MDQxNzIwMDRaFw0zMTA3MDQxNzIwMDRaMDsxCzAJBgNVBAYTAlJP
MREwDwYDVQQKEwhjZXJ0U0lHTjEZMBcGA1UECxMQY2VydFNJR04gUk9PVCBDQTCC
ASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALczuX7IJUqOtdu0KBuqV5Do
0SLTZLrTk+jUrIZhQGpgV2hUhE28alQCBf/fm5oqrl0Hj0rDKH/v+yv6efHHrfAQ
UySQi2bJqIirr1qjAOm+ukbuW3N7LBeCgV5iLKECZbO9xSsAfsT8AzNXDe3i+s5d
RdY4zTW2ssHQnIFKquSyAVwdj1+ZxLGt24gh65AIgoDzMKND5pCCrlUoSe1b16kQ
OA7+j0xbm0bqQfWwCHTD0IgztnzXdN/chNFDDnU5oSVAKOp4yw4sLjmdjItuFhwv
JoIQ4uNllAoEwF73XVv4EOLQunpL+943AAAaWyjj0pxzPjKHmKHJUS/X3qwzs08C
AwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAcYwHQYDVR0O
BBYEFOCMm9slSbPxfIbWskKHC9BroNnkMA0GCSqGSIb3DQEBBQUAA4IBAQA+0hyJ
LjX8+HXd5n9liPRyTMks1zJO890ZeUe9jjtbkw9QSSQTaxQGcu8J06Gh40CEyecY
MnQ8SG4Pn0vU9x7Tk4ZkVJdjclDVVc/6IJMCopvDI5NOFlV2oHB5bc0hH88vLbwZ
44gx+FkagQnIl6Z0x2DEW8xXjrJ1/RsCCdtZb3KTafcxQdaIOL+Hsr0Wefmq5L6I
Jd1hJyMctTEHBDa0GpC9oHRxUIltvBTjD4au8as+x6AJzKNI0eDbZOeStc+vckNw
i/nDhDwTqn6Sm1dTk/pwwpEOMfmbZ13pljheX7NzTogVZ96edhBiIL5VaZVDADlN
9u6wWk5JRFRYX0KD
-----END CERTIFICATE-----

# Issuer: CN=NetLock Arany (Class Gold) F\u0151tan\xfas\xedtv\xe1ny O=NetLock Kft. OU=Tan\xfas\xedtv\xe1nykiad\xf3k (Certification Services)
# Subject: CN=NetLock Arany (Class Gold) F\u0151tan\xfas\xedtv\xe1ny O=NetLock Kft. OU=Tan\xfas\xedtv\xe1nykiad\xf3k (Certification Services)
# Label: "NetLock Arany (Class Gold) F\u0151tan\xfas\xedtv\xe1ny"
# Serial: 80544274841616
# MD5 Fingerprint: c5:a1:b7:ff:73:dd:d6:d7:34:32:18:df:fc:3c:ad:88
# SHA1 Fingerprint: 06:08:3f:59:3f:15:a1:04:a0:69:a4:6b:a9:03:d0:06:b7:97:09:91
# SHA256 Fingerprint: 6c:61:da:c3:a2:de:f0:31:50:6b:e0:36:d2:a6:fe:40:19:94:fb:d1:3d:f9:c8:d4:66:59:92:74:c4:46:ec:98
-----BEGIN CERTIFICATE-----
MIIEFTCCAv2gAwIBAgIGSUEs5AAQMA0GCSqGSIb3DQEBCwUAMIGnMQswCQYDVQQG
EwJIVTERMA8GA1UEBwwIQnVkYXBlc3QxFTATBgNVBAoMDE5ldExvY2sgS2Z0LjE3
MDUGA1UECwwuVGFuw7pzw610dsOhbnlraWFkw7NrIChDZXJ0aWZpY2F0aW9uIFNl
cnZpY2VzKTE1MDMGA1UEAwwsTmV0TG9jayBBcmFueSAoQ2xhc3MgR29sZCkgRsWR
dGFuw7pzw610dsOhbnkwHhcNMDgxMjExMTUwODIxWhcNMjgxMjA2MTUwODIxWjCB
pzELMAkGA1UEBhMCSFUxETAPBgNVBAcMCEJ1ZGFwZXN0MRUwEwYDVQQKDAxOZXRM
b2NrIEtmdC4xNzA1BgNVBAsMLlRhbsO6c8OtdHbDoW55a2lhZMOzayAoQ2VydGlm
aWNhdGlvbiBTZXJ2aWNlcykxNTAzBgNVBAMMLE5ldExvY2sgQXJhbnkgKENsYXNz
IEdvbGQpIEbFkXRhbsO6c8OtdHbDoW55MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEAxCRec75LbRTDofTjl5Bu0jBFHjzuZ9lk4BqKf8owyoPjIMHj9DrT
lF8afFttvzBPhCf2nx9JvMaZCpDyD/V/Q4Q3Y1GLeqVw/HpYzY6b7cNGbIRwXdrz
AZAj/E4wqX7hJ2Pn7WQ8oLjJM2P+FpD/sLj916jAwJRDC7bVWaaeVtAkH3B5r9s5
VA1lddkVQZQBr17s9o3x/61k/iCa11zr/qYfCGSji3ZVrR47KGAuhyXoqq8fxmRG
ILdwfzzeSNuWU7c5d+Qa4scWhHaXWy+7GRWF+GmF9ZmnqfI0p6m2pgP8b4Y9VHx2
BJtr+UBdADTHLpl1neWIA6pN+APSQnbAGwIDAKiLo0UwQzASBgNVHRMBAf8ECDAG
AQH/AgEEMA4GA1UdDwEB/wQEAwIBBjAdBgNVHQ4EFgQUzPpnk/C2uNClwB7zU/2M
U9+D15YwDQYJKoZIhvcNAQELBQADggEBAKt/7hwWqZw8UQCgwBEIBaeZ5m8BiFRh
bvG5GK1Krf6BQCOUL/t1fC8oS2IkgYIL9WHxHG64YTjrgfpioTtaYtOUZcTh5m2C
+C8lcLIhJsFyUR+MLMOEkMNaj7rP9KdlpeuY0fsFskZ1FSNqb4VjMIDw1Z4fKRzC
bLBQWV2QWzuoDTDPv31/zvGdg73JRm4gpvlhUbohL3u+pRVjodSVh/GeufOJ8z2F
uLjbvrW5KfnaNwUASZQDhETnv0Mxz3WLJdH0pmT1kvarBes96aULNmLazAZfNou2
XjG4Kvte9nHfRCaexOYNkbQudZWAUWpLMKawYqGT8ZvYzsRjdT9ZR7E=
-----END CERTIFICATE-----

# Issuer: CN=SecureSign RootCA11 O=Japan Certification Services, Inc.
# Subject: CN=SecureSign RootCA11 O=Japan Certification Services, Inc.
# Label: "SecureSign RootCA11"
# Serial: 1
# MD5 Fingerprint: b7:52:74:e2:92:b4:80:93:f2:75:e4:cc:d7:f2:ea:26
# SHA1 Fingerprint: 3b:c4:9f:48:f8:f3:73:a0:9c:1e:bd:f8:5b:b1:c3:65:c7:d8:11:b3
# SHA256 Fingerprint: bf:0f:ee:fb:9e:3a:58:1a:d5:f9:e9:db:75:89:98:57:43:d2:61:08:5c:4d:31:4f:6f:5d:72:59:aa:42:16:12
-----BEGIN CERTIFICATE-----
MIIDbTCCAlWgAwIBAgIBATANBgkqhkiG9w0BAQUFADBYMQswCQYDVQQGEwJKUDEr
MCkGA1UEChMiSmFwYW4gQ2VydGlmaWNhdGlvbiBTZXJ2aWNlcywgSW5jLjEcMBoG
A1UEAxMTU2VjdXJlU2lnbiBSb290Q0ExMTAeFw0wOTA0MDgwNDU2NDdaFw0yOTA0
MDgwNDU2NDdaMFgxCzAJBgNVBAYTAkpQMSswKQYDVQQKEyJKYXBhbiBDZXJ0aWZp
Y2F0aW9uIFNlcnZpY2VzLCBJbmMuMRwwGgYDVQQDExNTZWN1cmVTaWduIFJvb3RD
QTExMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/XeqpRyQBTvLTJsz
i1oURaTnkBbR31fSIRCkF/3frNYfp+TbfPfs37gD2pRY/V1yfIw/XwFndBWW4wI8
h9uuywGOwvNmxoVF9ALGOrVisq/6nL+k5tSAMJjzDbaTj6nU2DbysPyKyiyhFTOV
MdrAG/LuYpmGYz+/3ZMqg6h2uRMft85OQoWPIucuGvKVCbIFtUROd6EgvanyTgp9
UK31BQ1FT0Zx/Sg+U/sE2C3XZR1KG/rPO7AxmjVuyIsG0wCR8pQIZUyxNAYAeoni
8McDWc/V1uinMrPmmECGxc0nEovMe863ETxiYAcjPitAbpSACW22s293bzUIUPsC
h8U+iQIDAQABo0IwQDAdBgNVHQ4EFgQUW/hNT7KlhtQ60vFjmqC+CfZXt94wDgYD
VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQEFBQADggEB
AKChOBZmLqdWHyGcBvod7bkixTgm2E5P7KN/ed5GIaGHd48HCJqypMWvDzKYC3xm
KbabfSVSSUOrTC4rbnpwrxYO4wJs+0LmGJ1F2FXI6Dvd5+H0LgscNFxsWEr7jIhQ
X5Ucv+2rIrVls4W6ng+4reV6G4pQOh29Dbx7VFALuUKvVaAYga1lme++5Jy/xIWr
QbJUb9wlze144o4MjQlJ3WN7WmmWAiGovVJZ6X01y8hSyn+B/tlr0/cR7SXf+Of5
pPpyl4RTDaXQMhhRdlkUbA/r7F+AjHVDg8OFmP9Mni0N5HeDk061lgeLKBObjBmN
QSdJQO7e5iNEOdyhIta6A/I=
-----END CERTIFICATE-----

# Issuer: CN=Microsec e-Szigno Root CA 2009 O=Microsec Ltd.
# Subject: CN=Microsec e-Szigno Root CA 2009 O=Microsec Ltd.
# Label: "Microsec e-Szigno Root CA 2009"
# Serial: 14014712776195784473
# MD5 Fingerprint: f8:49:f4:03:bc:44:2d:83:be:48:69:7d:29:64:fc:b1
# SHA1 Fingerprint: 89:df:74:fe:5c:f4:0f:4a:80:f9:e3:37:7d:54:da:91:e1:01:31:8e
# SHA256 Fingerprint: 3c:5f:81:fe:a5:fa:b8:2c:64:bf:a2:ea:ec:af:cd:e8:e0:77:fc:86:20:a7:ca:e5:37:16:3d:f3:6e:db:f3:78
-----BEGIN CERTIFICATE-----
MIIECjCCAvKgAwIBAgIJAMJ+QwRORz8ZMA0GCSqGSIb3DQEBCwUAMIGCMQswCQYD
VQQGEwJIVTERMA8GA1UEBwwIQnVkYXBlc3QxFjAUBgNVBAoMDU1pY3Jvc2VjIEx0
ZC4xJzAlBgNVBAMMHk1pY3Jvc2VjIGUtU3ppZ25vIFJvb3QgQ0EgMjAwOTEfMB0G
CSqGSIb3DQEJARYQaW5mb0BlLXN6aWduby5odTAeFw0wOTA2MTYxMTMwMThaFw0y
OTEyMzAxMTMwMThaMIGCMQswCQYDVQQGEwJIVTERMA8GA1UEBwwIQnVkYXBlc3Qx
FjAUBgNVBAoMDU1pY3Jvc2VjIEx0ZC4xJzAlBgNVBAMMHk1pY3Jvc2VjIGUtU3pp
Z25vIFJvb3QgQ0EgMjAwOTEfMB0GCSqGSIb3DQEJARYQaW5mb0BlLXN6aWduby5o
dTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOn4j/NjrdqG2KfgQvvP
kd6mJviZpWNwrZuuyjNAfW2WbqEORO7hE52UQlKavXWFdCyoDh2Tthi3jCyoz/tc
cbna7P7ofo/kLx2yqHWH2Leh5TvPmUpG0IMZfcChEhyVbUr02MelTTMuhTlAdX4U
fIASmFDHQWe4oIBhVKZsTh/gnQ4H6cm6M+f+wFUoLAKApxn1ntxVUwOXewdI/5n7
N4okxFnMUBBjjqqpGrCEGob5X7uxUG6k0QrM1XF+H6cbfPVTbiJfyyvm1HxdrtbC
xkzlBQHZ7Vf8wSN5/PrIJIOV87VqUQHQd9bpEqH5GoP7ghu5sJf0dgYzQ0mg/wu1
+rUCAwEAAaOBgDB+MA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0G
A1UdDgQWBBTLD8bfQkPMPcu1SCOhGnqmKrs0aDAfBgNVHSMEGDAWgBTLD8bfQkPM
Pcu1SCOhGnqmKrs0aDAbBgNVHREEFDASgRBpbmZvQGUtc3ppZ25vLmh1MA0GCSqG
SIb3DQEBCwUAA4IBAQDJ0Q5eLtXMs3w+y/w9/w0olZMEyL/azXm4Q5DwpL7v8u8h
mLzU1F0G9u5C7DBsoKqpyvGvivo/C3NqPuouQH4frlRheesuCDfXI/OMn74dseGk
ddug4lQUsbocKaQY9hK6ohQU4zE1yED/t+AFdlfBHFny+L/k7SViXITwfn4fs775
tyERzAMBVnCnEJIeGzSBHq2cGsMEPO0CYdYeBvNfOofyK/FFh+U9rNHHV4S9a67c
2Pm2G2JwCz02yULyMtd6YebS2z3PyKnJm9zbWETXbzivf3jTo60adbocwTZ8jx5t
HMN1Rq41Bab2XD0h7lbwyYIiLXpUq3DDfSJlgnCW
-----END CERTIFICATE-----

# Issuer: CN=GlobalSign O=GlobalSign OU=GlobalSign Root CA - R3
# Subject: CN=GlobalSign O=GlobalSign OU=GlobalSign Root CA - R3
# Label: "GlobalSign Root CA - R3"
# Serial: 4835703278459759426209954
# MD5 Fingerprint: c5:df:b8:49:ca:05:13:55:ee:2d:ba:1a:c3:3e:b0:28
# SHA1 Fingerprint: d6:9b:56:11:48:f0:1c:77:c5:45:78:c1:09:26:df:5b:85:69:76:ad
# SHA256 Fingerprint: cb:b5:22:d7:b7:f1:27:ad:6a:01:13:86:5b:df:1c:d4:10:2e:7d:07:59:af:63:5a:7c:f4:72:0d:c9:63:c5:3b
-----BEGIN CERTIFICATE-----
MIIDXzCCAkegAwIBAgILBAAAAAABIVhTCKIwDQYJKoZIhvcNAQELBQAwTDEgMB4G
A1UECxMXR2xvYmFsU2lnbiBSb290IENBIC0gUjMxEzARBgNVBAoTCkdsb2JhbFNp
Z24xEzARBgNVBAMTCkdsb2JhbFNpZ24wHhcNMDkwMzE4MTAwMDAwWhcNMjkwMzE4
MTAwMDAwWjBMMSAwHgYDVQQLExdHbG9iYWxTaWduIFJvb3QgQ0EgLSBSMzETMBEG
A1UEChMKR2xvYmFsU2lnbjETMBEGA1UEAxMKR2xvYmFsU2lnbjCCASIwDQYJKoZI
hvcNAQEBBQADggEPADCCAQoCggEBAMwldpB5BngiFvXAg7aEyiie/QV2EcWtiHL8
RgJDx7KKnQRfJMsuS+FggkbhUqsMgUdwbN1k0ev1LKMPgj0MK66X17YUhhB5uzsT
gHeMCOFJ0mpiLx9e+pZo34knlTifBtc+ycsmWQ1z3rDI6SYOgxXG71uL0gRgykmm
KPZpO/bLyCiR5Z2KYVc3rHQU3HTgOu5yLy6c+9C7v/U9AOEGM+iCK65TpjoWc4zd
QQ4gOsC0p6Hpsk+QLjJg6VfLuQSSaGjlOCZgdbKfd/+RFO+uIEn8rUAVSNECMWEZ
XriX7613t2Saer9fwRPvm2L7DWzgVGkWqQPabumDk3F2xmmFghcCAwEAAaNCMEAw
DgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFI/wS3+o
LkUkrk1Q+mOai97i3Ru8MA0GCSqGSIb3DQEBCwUAA4IBAQBLQNvAUKr+yAzv95ZU
RUm7lgAJQayzE4aGKAczymvmdLm6AC2upArT9fHxD4q/c2dKg8dEe3jgr25sbwMp
jjM5RcOO5LlXbKr8EpbsU8Yt5CRsuZRj+9xTaGdWPoO4zzUhw8lo/s7awlOqzJCK
6fBdRoyV3XpYKBovHd7NADdBj+1EbddTKJd+82cEHhXXipa0095MJ6RMG3NzdvQX
mcIfeg7jLQitChws/zyrVQ4PkX4268NXSb7hLi18YIvDQVETI53O9zJrlAGomecs
Mx86OyXShkDOOyyGeMlhLxS67ttVb9+E7gUJTb0o2HLO02JQZR7rkpeDMdmztcpH
WD9f
-----END CERTIFICATE-----

# Issuer: CN=Izenpe.com O=IZENPE S.A.
# Subject: CN=Izenpe.com O=IZENPE S.A.
# Label: "Izenpe.com"
# Serial: 917563065490389241595536686991402621
# MD5 Fingerprint: a6:b0:cd:85:80:da:5c:50:34:a3:39:90:2f:55:67:73
# SHA1 Fingerprint: 2f:78:3d:25:52:18:a7:4a:65:39:71:b5:2c:a2:9c:45:15:6f:e9:19
# SHA256 Fingerprint: 25:30:cc:8e:98:32:15:02:ba:d9:6f:9b:1f:ba:1b:09:9e:2d:29:9e:0f:45:48:bb:91:4f:36:3b:c0:d4:53:1f
-----BEGIN CERTIFICATE-----
MIIF8TCCA9mgAwIBAgIQALC3WhZIX7/hy/WL1xnmfTANBgkqhkiG9w0BAQsFADA4
MQswCQYDVQQGEwJFUzEUMBIGA1UECgwLSVpFTlBFIFMuQS4xEzARBgNVBAMMCkl6
ZW5wZS5jb20wHhcNMDcxMjEzMTMwODI4WhcNMzcxMjEzMDgyNzI1WjA4MQswCQYD
VQQGEwJFUzEUMBIGA1UECgwLSVpFTlBFIFMuQS4xEzARBgNVBAMMCkl6ZW5wZS5j
b20wggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQDJ03rKDx6sp4boFmVq
scIbRTJxldn+EFvMr+eleQGPicPK8lVx93e+d5TzcqQsRNiekpsUOqHnJJAKClaO
xdgmlOHZSOEtPtoKct2jmRXagaKH9HtuJneJWK3W6wyyQXpzbm3benhB6QiIEn6H
LmYRY2xU+zydcsC8Lv/Ct90NduM61/e0aL6i9eOBbsFGb12N4E3GVFWJGjMxCrFX
uaOKmMPsOzTFlUFpfnXCPCDFYbpRR6AgkJOhkEvzTnyFRVSa0QUmQbC1TR0zvsQD
yCV8wXDbO/QJLVQnSKwv4cSsPsjLkkxTOTcj7NMB+eAJRE1NZMDhDVqHIrytG6P+
JrUV86f8hBnp7KGItERphIPzidF0BqnMC9bC3ieFUCbKF7jJeodWLBoBHmy+E60Q
rLUk9TiRodZL2vG70t5HtfG8gfZZa88ZU+mNFctKy6lvROUbQc/hhqfK0GqfvEyN
BjNaooXlkDWgYlwWTvDjovoDGrQscbNYLN57C9saD+veIR8GdwYDsMnvmfzAuU8L
hij+0rnq49qlw0dpEuDb8PYZi+17cNcC1u2HGCgsBCRMd+RIihrGO5rUD8r6ddIB
QFqNeb+Lz0vPqhbBleStTIo+F5HUsWLlguWABKQDfo2/2n+iD5dPDNMN+9fR5XJ+
HMh3/1uaD7euBUbl8agW7EekFwIDAQABo4H2MIHzMIGwBgNVHREEgagwgaWBD2lu
Zm9AaXplbnBlLmNvbaSBkTCBjjFHMEUGA1UECgw+SVpFTlBFIFMuQS4gLSBDSUYg
QTAxMzM3MjYwLVJNZXJjLlZpdG9yaWEtR2FzdGVpeiBUMTA1NSBGNjIgUzgxQzBB
BgNVBAkMOkF2ZGEgZGVsIE1lZGl0ZXJyYW5lbyBFdG9yYmlkZWEgMTQgLSAwMTAx
MCBWaXRvcmlhLUdhc3RlaXowDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMC
AQYwHQYDVR0OBBYEFB0cZQ6o8iV7tJHP5LGx5r1VdGwFMA0GCSqGSIb3DQEBCwUA
A4ICAQB4pgwWSp9MiDrAyw6lFn2fuUhfGI8NYjb2zRlrrKvV9pF9rnHzP7MOeIWb
laQnIUdCSnxIOvVFfLMMjlF4rJUT3sb9fbgakEyrkgPH7UIBzg/YsfqikuFgba56
awmqxinuaElnMIAkejEWOVt+8Rwu3WwJrfIxwYJOubv5vr8qhT/AQKM6WfxZSzwo
JNu0FXWuDYi6LnPAvViH5ULy617uHjAimcs30cQhbIHsvm0m5hzkQiCeR7Csg1lw
LDXWrzY0tM07+DKo7+N4ifuNRSzanLh+QBxh5z6ikixL8s36mLYp//Pye6kfLqCT
VyvehQP5aTfLnnhqBbTFMXiJ7HqnheG5ezzevh55hM6fcA5ZwjUukCox2eRFekGk
LhObNA5me0mrZJfQRsN5nXJQY6aYWwa9SG3YOYNw6DXwBdGqvOPbyALqfP2C2sJb
UjWumDqtujWTI6cfSN01RpiyEGjkpTHCClguGYEQyVB1/OpaFs4R1+7vUIgtYf8/
QnMFlEPVjjxOAToZpR9GTnfQXeWBIiGH/pR9hNiTrdZoQ0iy2+tzJOeRf1SktoA+
naM8THLCV8Sg1Mw4J87VBp6iSNnpn86CcDaTmjvfliHjWbcM2pE38P1ZWrOZyGls
QyYBNWNgVYkDOnXYukrZVP/u3oDYLdE41V4tC5h9Pmzb/CaIxw==
-----END CERTIFICATE-----

# Issuer: CN=Go Daddy Root Certificate Authority - G2 O=GoDaddy.com, Inc.
# Subject: CN=Go Daddy Root Certificate Authority - G2 O=GoDaddy.com, Inc.
# Label: "Go Daddy Root Certificate Authority - G2"
# Serial: 0
# MD5 Fingerprint: 80:3a:bc:22:c1:e6:fb:8d:9b:3b:27:4a:32:1b:9a:01
# SHA1 Fingerprint: 47:be:ab:c9:22:ea:e8:0e:78:78:34:62:a7:9f:45:c2:54:fd:e6:8b
# SHA256 Fingerprint: 45:14:0b:32:47:eb:9c:c8:c5:b4:f0:d7:b5:30:91:f7:32:92:08:9e:6e:5a:63:e2:74:9d:d3:ac:a9:19:8e:da
-----BEGIN CERTIFICATE-----
MIIDxTCCAq2gAwIBAgIBADANBgkqhkiG9w0BAQsFADCBgzELMAkGA1UEBhMCVVMx
EDAOBgNVBAgTB0FyaXpvbmExEzARBgNVBAcTClNjb3R0c2RhbGUxGjAYBgNVBAoT
EUdvRGFkZHkuY29tLCBJbmMuMTEwLwYDVQQDEyhHbyBEYWRkeSBSb290IENlcnRp
ZmljYXRlIEF1dGhvcml0eSAtIEcyMB4XDTA5MDkwMTAwMDAwMFoXDTM3MTIzMTIz
NTk1OVowgYMxCzAJBgNVBAYTAlVTMRAwDgYDVQQIEwdBcml6b25hMRMwEQYDVQQH
EwpTY290dHNkYWxlMRowGAYDVQQKExFHb0RhZGR5LmNvbSwgSW5jLjExMC8GA1UE
AxMoR28gRGFkZHkgUm9vdCBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkgLSBHMjCCASIw
DQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAL9xYgjx+lk09xvJGKP3gElY6SKD
E6bFIEMBO4Tx5oVJnyfq9oQbTqC023CYxzIBsQU+B07u9PpPL1kwIuerGVZr4oAH
/PMWdYA5UXvl+TW2dE6pjYIT5LY/qQOD+qK+ihVqf94Lw7YZFAXK6sOoBJQ7Rnwy
DfMAZiLIjWltNowRGLfTshxgtDj6AozO091GB94KPutdfMh8+7ArU6SSYmlRJQVh
GkSBjCypQ5Yj36w6gZoOKcUcqeldHraenjAKOc7xiID7S13MMuyFYkMlNAJWJwGR
tDtwKj9useiciAF9n9T521NtYJ2/LOdYq7hfRvzOxBsDPAnrSTFcaUaz4EcCAwEA
AaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYE
FDqahQcQZyi27/a9BUFuIMGU2g/eMA0GCSqGSIb3DQEBCwUAA4IBAQCZ21151fmX
WWcDYfF+OwYxdS2hII5PZYe096acvNjpL9DbWu7PdIxztDhC2gV7+AJ1uP2lsdeu
9tfeE8tTEH6KRtGX+rcuKxGrkLAngPnon1rpN5+r5N9ss4UXnT3ZJE95kTXWXwTr
gIOrmgIttRD02JDHBHNA7XIloKmf7J6raBKZV8aPEjoJpL1E/QYVN8Gb5DKj7Tjo
2GTzLH4U/ALqn83/B2gX2yKQOC16jdFU8WnjXzPKej17CuPKf1855eJ1usV2GDPO
LPAvTK33sefOT6jEm0pUBsV/fdUID+Ic/n4XuKxe9tQWskMJDE32p2u0mYRlynqI
4uJEvlz36hz1
-----END CERTIFICATE-----

# Issuer: CN=Starfield Root Certificate Authority - G2 O=Starfield Technologies, Inc.
# Subject: CN=Starfield Root Certificate Authority - G2 O=Starfield Technologies, Inc.
# Label: "Starfield Root Certificate Authority - G2"
# Serial: 0
# MD5 Fingerprint: d6:39:81:c6:52:7e:96:69:fc:fc:ca:66:ed:05:f2:96
# SHA1 Fingerprint: b5:1c:06:7c:ee:2b:0c:3d:f8:55:ab:2d:92:f4:fe:39:d4:e7:0f:0e
# SHA256 Fingerprint: 2c:e1:cb:0b:f9:d2:f9:e1:02:99:3f:be:21:51:52:c3:b2:dd:0c:ab:de:1c:68:e5:31:9b:83:91:54:db:b7:f5
-----BEGIN CERTIFICATE-----
MIID3TCCAsWgAwIBAgIBADANBgkqhkiG9w0BAQsFADCBjzELMAkGA1UEBhMCVVMx
EDAOBgNVBAgTB0FyaXpvbmExEzARBgNVBAcTClNjb3R0c2RhbGUxJTAjBgNVBAoT
HFN0YXJmaWVsZCBUZWNobm9sb2dpZXMsIEluYy4xMjAwBgNVBAMTKVN0YXJmaWVs
ZCBSb290IENlcnRpZmljYXRlIEF1dGhvcml0eSAtIEcyMB4XDTA5MDkwMTAwMDAw
MFoXDTM3MTIzMTIzNTk1OVowgY8xCzAJBgNVBAYTAlVTMRAwDgYDVQQIEwdBcml6
b25hMRMwEQYDVQQHEwpTY290dHNkYWxlMSUwIwYDVQQKExxTdGFyZmllbGQgVGVj
aG5vbG9naWVzLCBJbmMuMTIwMAYDVQQDEylTdGFyZmllbGQgUm9vdCBDZXJ0aWZp
Y2F0ZSBBdXRob3JpdHkgLSBHMjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoC
ggEBAL3twQP89o/8ArFvW59I2Z154qK3A2FWGMNHttfKPTUuiUP3oWmb3ooa/RMg
nLRJdzIpVv257IzdIvpy3Cdhl+72WoTsbhm5iSzchFvVdPtrX8WJpRBSiUZV9Lh1
HOZ/5FSuS/hVclcCGfgXcVnrHigHdMWdSL5stPSksPNkN3mSwOxGXn/hbVNMYq/N
Hwtjuzqd+/x5AJhhdM8mgkBj87JyahkNmcrUDnXMN/uLicFZ8WJ/X7NfZTD4p7dN
dloedl40wOiWVpmKs/B/pM293DIxfJHP4F8R+GuqSVzRmZTRouNjWwl2tVZi4Ut0
HZbUJtQIBFnQmA4O5t78w+wfkPECAwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAO
BgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFHwMMh+n2TB/xH1oo2Kooc6rB1snMA0G
CSqGSIb3DQEBCwUAA4IBAQARWfolTwNvlJk7mh+ChTnUdgWUXuEok21iXQnCoKjU
sHU48TRqneSfioYmUeYs0cYtbpUgSpIB7LiKZ3sx4mcujJUDJi5DnUox9g61DLu3
4jd/IroAow57UvtruzvE03lRTs2Q9GcHGcg8RnoNAX3FWOdt5oUwF5okxBDgBPfg
8n/Uqgr/Qh037ZTlZFkSIHc40zI+OIF1lnP6aI+xy84fxez6nH7PfrHxBy22/L/K
pL/QlwVKvOoYKAKQvVR4CSFx09F9HdkWsKlhPdAKACL8x3vLCWRFCztAgfd9fDL1
mMpYjn0q7pBZc2T5NnReJaH1ZgUufzkVqSr7UIuOhWn0
-----END CERTIFICATE-----

# Issuer: CN=Starfield Services Root Certificate Authority - G2 O=Starfield Technologies, Inc.
# Subject: CN=Starfield Services Root Certificate Authority - G2 O=Starfield Technologies, Inc.
# Label: "Starfield Services Root Certificate Authority - G2"
# Serial: 0
# MD5 Fingerprint: 17:35:74:af:7b:61:1c:eb:f4:f9:3c:e2:ee:40:f9:a2
# SHA1 Fingerprint: 92:5a:8f:8d:2c:6d:04:e0:66:5f:59:6a:ff:22:d8:63:e8:25:6f:3f
# SHA256 Fingerprint: 56:8d:69:05:a2:c8:87:08:a4:b3:02:51:90:ed:cf:ed:b1:97:4a:60:6a:13:c6:e5:29:0f:cb:2a:e6:3e:da:b5
-----BEGIN CERTIFICATE-----
MIID7zCCAtegAwIBAgIBADANBgkqhkiG9w0BAQsFADCBmDELMAkGA1UEBhMCVVMx
EDAOBgNVBAgTB0FyaXpvbmExEzARBgNVBAcTClNjb3R0c2RhbGUxJTAjBgNVBAoT
HFN0YXJmaWVsZCBUZWNobm9sb2dpZXMsIEluYy4xOzA5BgNVBAMTMlN0YXJmaWVs
ZCBTZXJ2aWNlcyBSb290IENlcnRpZmljYXRlIEF1dGhvcml0eSAtIEcyMB4XDTA5
MDkwMTAwMDAwMFoXDTM3MTIzMTIzNTk1OVowgZgxCzAJBgNVBAYTAlVTMRAwDgYD
VQQIEwdBcml6b25hMRMwEQYDVQQHEwpTY290dHNkYWxlMSUwIwYDVQQKExxTdGFy
ZmllbGQgVGVjaG5vbG9naWVzLCBJbmMuMTswOQYDVQQDEzJTdGFyZmllbGQgU2Vy
dmljZXMgUm9vdCBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkgLSBHMjCCASIwDQYJKoZI
hvcNAQEBBQADggEPADCCAQoCggEBANUMOsQq+U7i9b4Zl1+OiFOxHz/Lz58gE20p
OsgPfTz3a3Y4Y9k2YKibXlwAgLIvWX/2h/klQ4bnaRtSmpDhcePYLQ1Ob/bISdm2
8xpWriu2dBTrz/sm4xq6HZYuajtYlIlHVv8loJNwU4PahHQUw2eeBGg6345AWh1K
Ts9DkTvnVtYAcMtS7nt9rjrnvDH5RfbCYM8TWQIrgMw0R9+53pBlbQLPLJGmpufe
hRhJfGZOozptqbXuNC66DQO4M99H67FrjSXZm86B0UVGMpZwh94CDklDhbZsc7tk
6mFBrMnUVN+HL8cisibMn1lUaJ/8viovxFUcdUBgF4UCVTmLfwUCAwEAAaNCMEAw
DwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFJxfAN+q
AdcwKziIorhtSpzyEZGDMA0GCSqGSIb3DQEBCwUAA4IBAQBLNqaEd2ndOxmfZyMI
bw5hyf2E3F/YNoHN2BtBLZ9g3ccaaNnRbobhiCPPE95Dz+I0swSdHynVv/heyNXB
ve6SbzJ08pGCL72CQnqtKrcgfU28elUSwhXqvfdqlS5sdJ/PHLTyxQGjhdByPq1z
qwubdQxtRbeOlKyWN7Wg0I8VRw7j6IPdj/3vQQF3zCepYoUz8jcI73HPdwbeyBkd
iEDPfUYd/x7H4c7/I9vG+o1VTqkC50cRRj70/b17KSa7qWFiNyi2LSr2EIZkyXCn
0q23KXB56jzaYyWf/Wi3MOxw+3WKt21gZ7IeyLnp2KhvAotnDU0mV3HaIPzBSlCN
sSi6
-----END CERTIFICATE-----

# Issuer: CN=AffirmTrust Commercial O=AffirmTrust
# Subject: CN=AffirmTrust Commercial O=AffirmTrust
# Label: "AffirmTrust Commercial"
# Serial: 8608355977964138876
# MD5 Fingerprint: 82:92:ba:5b:ef:cd:8a:6f:a6:3d:55:f9:84:f6:d6:b7
# SHA1 Fingerprint: f9:b5:b6:32:45:5f:9c:be:ec:57:5f:80:dc:e9:6e:2c:c7:b2:78:b7
# SHA256 Fingerprint: 03:76:ab:1d:54:c5:f9:80:3c:e4:b2:e2:01:a0:ee:7e:ef:7b:57:b6:36:e8:a9:3c:9b:8d:48:60:c9:6f:5f:a7
-----BEGIN CERTIFICATE-----
MIIDTDCCAjSgAwIBAgIId3cGJyapsXwwDQYJKoZIhvcNAQELBQAwRDELMAkGA1UE
BhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MR8wHQYDVQQDDBZBZmZpcm1UcnVz
dCBDb21tZXJjaWFsMB4XDTEwMDEyOTE0MDYwNloXDTMwMTIzMTE0MDYwNlowRDEL
MAkGA1UEBhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MR8wHQYDVQQDDBZBZmZp
cm1UcnVzdCBDb21tZXJjaWFsMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKC
AQEA9htPZwcroRX1BiLLHwGy43NFBkRJLLtJJRTWzsO3qyxPxkEylFf6EqdbDuKP
Hx6GGaeqtS25Xw2Kwq+FNXkyLbscYjfysVtKPcrNcV/pQr6U6Mje+SJIZMblq8Yr
ba0F8PrVC8+a5fBQpIs7R6UjW3p6+DM/uO+Zl+MgwdYoic+U+7lF7eNAFxHUdPAL
MeIrJmqbTFeurCA+ukV6BfO9m2kVrn1OIGPENXY6BwLJN/3HR+7o8XYdcxXyl6S1
yHp52UKqK39c/s4mT6NmgTWvRLpUHhwwMmWd5jyTXlBOeuM61G7MGvv50jeuJCqr
VwMiKA1JdX+3KNp1v47j3A55MQIDAQABo0IwQDAdBgNVHQ4EFgQUnZPGU4teyq8/
nx4P5ZmVvCT2lI8wDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwDQYJ
KoZIhvcNAQELBQADggEBAFis9AQOzcAN/wr91LoWXym9e2iZWEnStB03TX8nfUYG
XUPGhi4+c7ImfU+TqbbEKpqrIZcUsd6M06uJFdhrJNTxFq7YpFzUf1GO7RgBsZNj
vbz4YYCanrHOQnDiqX0GJX0nof5v7LMeJNrjS1UaADs1tDvZ110w/YETifLCBivt
Z8SOyUOyXGsViQK8YvxO8rUzqrJv0wqiUOP2O+guRMLbZjipM1ZI8W0bM40NjD9g
N53Tym1+NH4Nn3J2ixufcv1SNUFFApYvHLKac0khsUlHRUe072o0EclNmsxZt9YC
nlpOZbWUrhvfKbAW8b8Angc6F2S1BLUjIZkKlTuXfO8=
-----END CERTIFICATE-----

# Issuer: CN=AffirmTrust Networking O=AffirmTrust
# Subject: CN=AffirmTrust Networking O=AffirmTrust
# Label: "AffirmTrust Networking"
# Serial: 8957382827206547757
# MD5 Fingerprint: 42:65:ca:be:01:9a:9a:4c:a9:8c:41:49:cd:c0:d5:7f
# SHA1 Fingerprint: 29:36:21:02:8b:20:ed:02:f5:66:c5:32:d1:d6:ed:90:9f:45:00:2f
# SHA256 Fingerprint: 0a:81:ec:5a:92:97:77:f1:45:90:4a:f3:8d:5d:50:9f:66:b5:e2:c5:8f:cd:b5:31:05:8b:0e:17:f3:f0:b4:1b
-----BEGIN CERTIFICATE-----
MIIDTDCCAjSgAwIBAgIIfE8EORzUmS0wDQYJKoZIhvcNAQEFBQAwRDELMAkGA1UE
BhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MR8wHQYDVQQDDBZBZmZpcm1UcnVz
dCBOZXR3b3JraW5nMB4XDTEwMDEyOTE0MDgyNFoXDTMwMTIzMTE0MDgyNFowRDEL
MAkGA1UEBhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MR8wHQYDVQQDDBZBZmZp
cm1UcnVzdCBOZXR3b3JraW5nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKC
AQEAtITMMxcua5Rsa2FSoOujz3mUTOWUgJnLVWREZY9nZOIG41w3SfYvm4SEHi3y
YJ0wTsyEheIszx6e/jarM3c1RNg1lho9Nuh6DtjVR6FqaYvZ/Ls6rnla1fTWcbua
kCNrmreIdIcMHl+5ni36q1Mr3Lt2PpNMCAiMHqIjHNRqrSK6mQEubWXLviRmVSRL
QESxG9fhwoXA3hA/Pe24/PHxI1Pcv2WXb9n5QHGNfb2V1M6+oF4nI979ptAmDgAp
6zxG8D1gvz9Q0twmQVGeFDdCBKNwV6gbh+0t+nvujArjqWaJGctB+d1ENmHP4ndG
yH329JKBNv3bNPFyfvMMFr20FQIDAQABo0IwQDAdBgNVHQ4EFgQUBx/S55zawm6i
QLSwelAQUHTEyL0wDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwDQYJ
KoZIhvcNAQEFBQADggEBAIlXshZ6qML91tmbmzTCnLQyFE2npN/svqe++EPbkTfO
tDIuUFUaNU52Q3Eg75N3ThVwLofDwR1t3Mu1J9QsVtFSUzpE0nPIxBsFZVpikpzu
QY0x2+c06lkh1QF612S4ZDnNye2v7UsDSKegmQGA3GWjNq5lWUhPgkvIZfFXHeVZ
Lgo/bNjR9eUJtGxUAArgFU2HdW23WJZa3W3SAKD0m0i+wzekujbgfIeFlxoVot4u
olu9rxj5kFDNcFn4J2dHy8egBzp90SxdbBk6ZrV9/ZFvgrG+CJPbFEfxojfHRZ48
x3evZKiT3/Zpg4Jg8klCNO1aAFSFHBY2kgxc+qatv9s=
-----END CERTIFICATE-----

# Issuer: CN=AffirmTrust Premium O=AffirmTrust
# Subject: CN=AffirmTrust Premium O=AffirmTrust
# Label: "AffirmTrust Premium"
# Serial: 7893706540734352110
# MD5 Fingerprint: c4:5d:0e:48:b6:ac:28:30:4e:0a:bc:f9:38:16:87:57
# SHA1 Fingerprint: d8:a6:33:2c:e0:03:6f:b1:85:f6:63:4f:7d:6a:06:65:26:32:28:27
# SHA256 Fingerprint: 70:a7:3f:7f:37:6b:60:07:42:48:90:45:34:b1:14:82:d5:bf:0e:69:8e:cc:49:8d:f5:25:77:eb:f2:e9:3b:9a
-----BEGIN CERTIFICATE-----
MIIFRjCCAy6gAwIBAgIIbYwURrGmCu4wDQYJKoZIhvcNAQEMBQAwQTELMAkGA1UE
BhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MRwwGgYDVQQDDBNBZmZpcm1UcnVz
dCBQcmVtaXVtMB4XDTEwMDEyOTE0MTAzNloXDTQwMTIzMTE0MTAzNlowQTELMAkG
A1UEBhMCVVMxFDASBgNVBAoMC0FmZmlybVRydXN0MRwwGgYDVQQDDBNBZmZpcm1U
cnVzdCBQcmVtaXVtMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAxBLf
qV/+Qd3d9Z+K4/as4Tx4mrzY8H96oDMq3I0gW64tb+eT2TZwamjPjlGjhVtnBKAQ
JG9dKILBl1fYSCkTtuG+kU3fhQxTGJoeJKJPj/CihQvL9Cl/0qRY7iZNyaqoe5rZ
+jjeRFcV5fiMyNlI4g0WJx0eyIOFJbe6qlVBzAMiSy2RjYvmia9mx+n/K+k8rNrS
s8PhaJyJ+HoAVt70VZVs+7pk3WKL3wt3MutizCaam7uqYoNMtAZ6MMgpv+0GTZe5
HMQxK9VfvFMSF5yZVylmd2EhMQcuJUmdGPLu8ytxjLW6OQdJd/zvLpKQBY0tL3d7
70O/Nbua2Plzpyzy0FfuKE4mX4+QaAkvuPjcBukumj5Rp9EixAqnOEhss/n/fauG
V+O61oV4d7pD6kh/9ti+I20ev9E2bFhc8e6kGVQa9QPSdubhjL08s9NIS+LI+H+S
qHZGnEJlPqQewQcDWkYtuJfzt9WyVSHvutxMAJf7FJUnM7/oQ0dG0giZFmA7mn7S
5u046uwBHjxIVkkJx0w3AJ6IDsBz4W9m6XJHMD4Q5QsDyZpCAGzFlH5hxIrff4Ia
C1nEWTJ3s7xgaVY5/bQGeyzWZDbZvUjthB9+pSKPKrhC9IK31FOQeE4tGv2Bb0TX
OwF0lkLgAOIua+rF7nKsu7/+6qqo+Nz2snmKtmcCAwEAAaNCMEAwHQYDVR0OBBYE
FJ3AZ6YMItkm9UWrpmVSESfYRaxjMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/
BAQDAgEGMA0GCSqGSIb3DQEBDAUAA4ICAQCzV00QYk465KzquByvMiPIs0laUZx2
KI15qldGF9X1Uva3ROgIRL8YhNILgM3FEv0AVQVhh0HctSSePMTYyPtwni94loMg
Nt58D2kTiKV1NpgIpsbfrM7jWNa3Pt668+s0QNiigfV4Py/VpfzZotReBA4Xrf5B
8OWycvpEgjNC6C1Y91aMYj+6QrCcDFx+LmUmXFNPALJ4fqENmS2NuB2OosSw/WDQ
MKSOyARiqcTtNd56l+0OOF6SL5Nwpamcb6d9Ex1+xghIsV5n61EIJenmJWtSKZGc
0jlzCFfemQa0W50QBuHCAKi4HEoCChTQwUHK+4w1IX2COPKpVJEZNZOUbWo6xbLQ
u4mGk+ibyQ86p3q4ofB4Rvr8Ny/lioTz3/4E2aFooC8k4gmVBtWVyuEklut89pMF
u+1z6S3RdTnX5yTb2E5fQ4+e0BQ5v1VwSJlXMbSc7kqYA5YwH2AG7hsj/oFgIxpH
YoWlzBk0gG+zrBrjn/B7SK3VAdlntqlyk+otZrWyuOQ9PLLvTIzq6we/qzWaVYa8
GKa1qF60g2xraUDTn9zxw2lrueFtCfTxqlB2Cnp9ehehVZZCmTEJ3WARjQUwfuaO
RtGdFNrHF+QFlozEJLUbzxQHskD4o55BhrwE0GuWyCqANP2/7waj3VjFhT0+j/6e
KeC2uAloGRwYQw==
-----END CERTIFICATE-----

# Issuer: CN=AffirmTrust Premium ECC O=AffirmTrust
# Subject: CN=AffirmTrust Premium ECC O=AffirmTrust
# Label: "AffirmTrust Premium ECC"
# Serial: 8401224907861490260
# MD5 Fingerprint: 64:b0:09:55:cf:b1:d5:99:e2:be:13:ab:a6:5d:ea:4d
# SHA1 Fingerprint: b8:23:6b:00:2f:1d:16:86:53:01:55:6c:11:a4:37:ca:eb:ff:c3:bb
# SHA256 Fingerprint: bd:71:fd:f6:da:97:e4:cf:62:d1:64:7a:dd:25:81:b0:7d:79:ad:f8:39:7e:b4:ec:ba:9c:5e:84:88:82:14:23
-----BEGIN CERTIFICATE-----
MIIB/jCCAYWgAwIBAgIIdJclisc/elQwCgYIKoZIzj0EAwMwRTELMAkGA1UEBhMC
VVMxFDASBgNVBAoMC0FmZmlybVRydXN0MSAwHgYDVQQDDBdBZmZpcm1UcnVzdCBQ
cmVtaXVtIEVDQzAeFw0xMDAxMjkxNDIwMjRaFw00MDEyMzExNDIwMjRaMEUxCzAJ
BgNVBAYTAlVTMRQwEgYDVQQKDAtBZmZpcm1UcnVzdDEgMB4GA1UEAwwXQWZmaXJt
VHJ1c3QgUHJlbWl1bSBFQ0MwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAAQNMF4bFZ0D
0KF5Nbc6PJJ6yhUczWLznCZcBz3lVPqj1swS6vQUX+iOGasvLkjmrBhDeKzQN8O9
ss0s5kfiGuZjuD0uL3jET9v0D6RoTFVya5UdThhClXjMNzyR4ptlKymjQjBAMB0G
A1UdDgQWBBSaryl6wBE1NSZRMADDav5A1a7WPDAPBgNVHRMBAf8EBTADAQH/MA4G
A1UdDwEB/wQEAwIBBjAKBggqhkjOPQQDAwNnADBkAjAXCfOHiFBar8jAQr9HX/Vs
aobgxCd05DhT1wV/GzTjxi+zygk8N53X57hG8f2h4nECMEJZh0PUUd+60wkyWs6I
flc9nF9Ca/UHLbXwgpP5WW+uZPpY5Yse42O+tYHNbwKMeQ==
-----END CERTIFICATE-----

# Issuer: CN=Certum Trusted Network CA O=Unizeto Technologies S.A. OU=Certum Certification Authority
# Subject: CN=Certum Trusted Network CA O=Unizeto Technologies S.A. OU=Certum Certification Authority
# Label: "Certum Trusted Network CA"
# Serial: 279744
# MD5 Fingerprint: d5:e9:81:40:c5:18:69:fc:46:2c:89:75:62:0f:aa:78
# SHA1 Fingerprint: 07:e0:32:e0:20:b7:2c:3f:19:2f:06:28:a2:59:3a:19:a7:0f:06:9e
# SHA256 Fingerprint: 5c:58:46:8d:55:f5:8e:49:7e:74:39:82:d2:b5:00:10:b6:d1:65:37:4a:cf:83:a7:d4:a3:2d:b7:68:c4:40:8e
-----BEGIN CERTIFICATE-----
MIIDuzCCAqOgAwIBAgIDBETAMA0GCSqGSIb3DQEBBQUAMH4xCzAJBgNVBAYTAlBM
MSIwIAYDVQQKExlVbml6ZXRvIFRlY2hub2xvZ2llcyBTLkEuMScwJQYDVQQLEx5D
ZXJ0dW0gQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkxIjAgBgNVBAMTGUNlcnR1bSBU
cnVzdGVkIE5ldHdvcmsgQ0EwHhcNMDgxMDIyMTIwNzM3WhcNMjkxMjMxMTIwNzM3
WjB+MQswCQYDVQQGEwJQTDEiMCAGA1UEChMZVW5pemV0byBUZWNobm9sb2dpZXMg
Uy5BLjEnMCUGA1UECxMeQ2VydHVtIENlcnRpZmljYXRpb24gQXV0aG9yaXR5MSIw
IAYDVQQDExlDZXJ0dW0gVHJ1c3RlZCBOZXR3b3JrIENBMIIBIjANBgkqhkiG9w0B
AQEFAAOCAQ8AMIIBCgKCAQEA4/t9o3K6wvDJFIf1awFO4W5AB7ptJ11/91sts1rH
UV+rpDKmYYe2bg+G0jACl/jXaVehGDldamR5xgFZrDwxSjh80gTSSyjoIF87B6LM
TXPb865Px1bVWqeWifrzq2jUI4ZZJ88JJ7ysbnKDHDBy3+Ci6dLhdHUZvSqeexVU
BBvXQzmtVSjF4hq79MDkrjhJM8x2hZ85RdKknvISjFH4fOQtf/WsX+sWn7Et0brM
kUJ3TCXJkDhv2/DM+44el1k+1WBO5gUo7Ul5E0u6SNsv+XLTOcr+H9g0cvW0QM8x
AcPs3hEtF10fuFDRXhmnad4HMyjKUJX5p1TLVIZQRan5SQIDAQABo0IwQDAPBgNV
HRMBAf8EBTADAQH/MB0GA1UdDgQWBBQIds3LB/8k9sXN7buQvOKEN0Z19zAOBgNV
HQ8BAf8EBAMCAQYwDQYJKoZIhvcNAQEFBQADggEBAKaorSLOAT2mo/9i0Eidi15y
sHhE49wcrwn9I0j6vSrEuVUEtRCjjSfeC4Jj0O7eDDd5QVsisrCaQVymcODU0HfL
I9MA4GxWL+FpDQ3Zqr8hgVDZBqWo/5U30Kr+4rP1mS1FhIrlQgnXdAIv94nYmem8
J9RHjboNRhx3zxSkHLmkMcScKHQDNP8zGSal6Q10tz6XxnboJ5ajZt3hrvJBW8qY
VoNzcOSGGtIxQbovvi0TWnZvTuhOgQ4/WwMioBK+ZlgRSssDxLQqKi2WF+A5VLxI
03YnnZotBqbJ7DnSq9ufmgsnAjUpsUCV5/nonFWIGUbWtzT1fs45mtk48VH3Tyw=
-----END CERTIFICATE-----

# Issuer: CN=TWCA Root Certification Authority O=TAIWAN-CA OU=Root CA
# Subject: CN=TWCA Root Certification Authority O=TAIWAN-CA OU=Root CA
# Label: "TWCA Root Certification Authority"
# Serial: 1
# MD5 Fingerprint: aa:08:8f:f6:f9:7b:b7:f2:b1:a7:1e:9b:ea:ea:bd:79
# SHA1 Fingerprint: cf:9e:87:6d:d3:eb:fc:42:26:97:a3:b5:a3:7a:a0:76:a9:06:23:48
# SHA256 Fingerprint: bf:d8:8f:e1:10:1c:41:ae:3e:80:1b:f8:be:56:35:0e:e9:ba:d1:a6:b9:bd:51:5e:dc:5c:6d:5b:87:11:ac:44
-----BEGIN CERTIFICATE-----
MIIDezCCAmOgAwIBAgIBATANBgkqhkiG9w0BAQUFADBfMQswCQYDVQQGEwJUVzES
MBAGA1UECgwJVEFJV0FOLUNBMRAwDgYDVQQLDAdSb290IENBMSowKAYDVQQDDCFU
V0NBIFJvb3QgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMDgwODI4MDcyNDMz
WhcNMzAxMjMxMTU1OTU5WjBfMQswCQYDVQQGEwJUVzESMBAGA1UECgwJVEFJV0FO
LUNBMRAwDgYDVQQLDAdSb290IENBMSowKAYDVQQDDCFUV0NBIFJvb3QgQ2VydGlm
aWNhdGlvbiBBdXRob3JpdHkwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIB
AQCwfnK4pAOU5qfeCTiRShFAh6d8WWQUe7UREN3+v9XAu1bihSX0NXIP+FPQQeFE
AcK0HMMxQhZHhTMidrIKbw/lJVBPhYa+v5guEGcevhEFhgWQxFnQfHgQsIBct+HH
K3XLfJ+utdGdIzdjp9xCoi2SBBtQwXu4PhvJVgSLL1KbralW6cH/ralYhzC2gfeX
RfwZVzsrb+RH9JlF/h3x+JejiB03HFyP4HYlmlD4oFT/RJB2I9IyxsOrBr/8+7/z
rX2SYgJbKdM1o5OaQ2RgXbL6Mv87BK9NQGr5x+PvI/1ry+UPizgN7gr8/g+YnzAx
3WxSZfmLgb4i4RxYA7qRG4kHAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNV
HRMBAf8EBTADAQH/MB0GA1UdDgQWBBRqOFsmjd6LWvJPelSDGRjjCDWmujANBgkq
hkiG9w0BAQUFAAOCAQEAPNV3PdrfibqHDAhUaiBQkr6wQT25JmSDCi/oQMCXKCeC
MErJk/9q56YAf4lCmtYR5VPOL8zy2gXE/uJQxDqGfczafhAJO5I1KlOy/usrBdls
XebQ79NqZp4VKIV66IIArB6nCWlWQtNoURi+VJq/REG6Sb4gumlc7rh3zc5sH62D
lhh9DrUUOYTxKOkto557HnpyWoOzeW/vtPzQCqVYT0bf+215WfKEIlKuD8z7fDvn
aspHYcN6+NOSBB+4IIThNlQWx0DeO4pz3N/GCUzf7Nr/1FNCocnyYh0igzyXxfkZ
YiesZSLX0zzG5Y6yU8xJzrww/nsOM5D77dIUkR8Hrw==
-----END CERTIFICATE-----

# Issuer: O=SECOM Trust Systems CO.,LTD. OU=Security Communication RootCA2
# Subject: O=SECOM Trust Systems CO.,LTD. OU=Security Communication RootCA2
# Label: "Security Communication RootCA2"
# Serial: 0
# MD5 Fingerprint: 6c:39:7d:a4:0e:55:59:b2:3f:d6:41:b1:12:50:de:43
# SHA1 Fingerprint: 5f:3b:8c:f2:f8:10:b3:7d:78:b4:ce:ec:19:19:c3:73:34:b9:c7:74
# SHA256 Fingerprint: 51:3b:2c:ec:b8:10:d4:cd:e5:dd:85:39:1a:df:c6:c2:dd:60:d8:7b:b7:36:d2:b5:21:48:4a:a4:7a:0e:be:f6
-----BEGIN CERTIFICATE-----
MIIDdzCCAl+gAwIBAgIBADANBgkqhkiG9w0BAQsFADBdMQswCQYDVQQGEwJKUDEl
MCMGA1UEChMcU0VDT00gVHJ1c3QgU3lzdGVtcyBDTy4sTFRELjEnMCUGA1UECxMe
U2VjdXJpdHkgQ29tbXVuaWNhdGlvbiBSb290Q0EyMB4XDTA5MDUyOTA1MDAzOVoX
DTI5MDUyOTA1MDAzOVowXTELMAkGA1UEBhMCSlAxJTAjBgNVBAoTHFNFQ09NIFRy
dXN0IFN5c3RlbXMgQ08uLExURC4xJzAlBgNVBAsTHlNlY3VyaXR5IENvbW11bmlj
YXRpb24gUm9vdENBMjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBANAV
OVKxUrO6xVmCxF1SrjpDZYBLx/KWvNs2l9amZIyoXvDjChz335c9S672XewhtUGr
zbl+dp+++T42NKA7wfYxEUV0kz1XgMX5iZnK5atq1LXaQZAQwdbWQonCv/Q4EpVM
VAX3NuRFg3sUZdbcDE3R3n4MqzvEFb46VqZab3ZpUql6ucjrappdUtAtCms1FgkQ
hNBqyjoGADdH5H5XTz+L62e4iKrFvlNVspHEfbmwhRkGeC7bYRr6hfVKkaHnFtWO
ojnflLhwHyg/i/xAXmODPIMqGplrz95Zajv8bxbXH/1KEOtOghY6rCcMU/Gt1SSw
awNQwS08Ft1ENCcadfsCAwEAAaNCMEAwHQYDVR0OBBYEFAqFqXdlBZh8QIH4D5cs
OPEK7DzPMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3
DQEBCwUAA4IBAQBMOqNErLlFsceTfsgLCkLfZOoc7llsCLqJX2rKSpWeeo8HxdpF
coJxDjrSzG+ntKEju/Ykn8sX/oymzsLS28yN/HH8AynBbF0zX2S2ZTuJbxh2ePXc
okgfGT+Ok+vx+hfuzU7jBBJV1uXk3fs+BXziHV7Gp7yXT2g69ekuCkO2r1dcYmh8
t/2jioSgrGK+KwmHNPBqAbubKVY8/gA3zyNs8U6qtnRGEmyR7jTV7JqR50S+kDFy
1UkC9gLl9B/rfNmWVan/7Ir5mUf/NVoCqgTLiluHcSmRvaS0eg29mvVXIwAHIRc/
SjnRBUkLp7Y3gaVdjKozXoEofKd9J+sAro03
-----END CERTIFICATE-----

# Issuer: CN=Actalis Authentication Root CA O=Actalis S.p.A./03358520967
# Subject: CN=Actalis Authentication Root CA O=Actalis S.p.A./03358520967
# Label: "Actalis Authentication Root CA"
# Serial: 6271844772424770508
# MD5 Fingerprint: 69:c1:0d:4f:07:a3:1b:c3:fe:56:3d:04:bc:11:f6:a6
# SHA1 Fingerprint: f3:73:b3:87:06:5a:28:84:8a:f2:f3:4a:ce:19:2b:dd:c7:8e:9c:ac
# SHA256 Fingerprint: 55:92:60:84:ec:96:3a:64:b9:6e:2a:be:01:ce:0b:a8:6a:64:fb:fe:bc:c7:aa:b5:af:c1:55:b3:7f:d7:60:66
-----BEGIN CERTIFICATE-----
MIIFuzCCA6OgAwIBAgIIVwoRl0LE48wwDQYJKoZIhvcNAQELBQAwazELMAkGA1UE
BhMCSVQxDjAMBgNVBAcMBU1pbGFuMSMwIQYDVQQKDBpBY3RhbGlzIFMucC5BLi8w
MzM1ODUyMDk2NzEnMCUGA1UEAwweQWN0YWxpcyBBdXRoZW50aWNhdGlvbiBSb290
IENBMB4XDTExMDkyMjExMjIwMloXDTMwMDkyMjExMjIwMlowazELMAkGA1UEBhMC
SVQxDjAMBgNVBAcMBU1pbGFuMSMwIQYDVQQKDBpBY3RhbGlzIFMucC5BLi8wMzM1
ODUyMDk2NzEnMCUGA1UEAwweQWN0YWxpcyBBdXRoZW50aWNhdGlvbiBSb290IENB
MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAp8bEpSmkLO/lGMWwUKNv
UTufClrJwkg4CsIcoBh/kbWHuUA/3R1oHwiD1S0eiKD4j1aPbZkCkpAW1V8IbInX
4ay8IMKx4INRimlNAJZaby/ARH6jDuSRzVju3PvHHkVH3Se5CAGfpiEd9UEtL0z9
KK3giq0itFZljoZUj5NDKd45RnijMCO6zfB9E1fAXdKDa0hMxKufgFpbOr3JpyI/
gCczWw63igxdBzcIy2zSekciRDXFzMwujt0q7bd9Zg1fYVEiVRvjRuPjPdA1Yprb
rxTIW6HMiRvhMCb8oJsfgadHHwTrozmSBp+Z07/T6k9QnBn+locePGX2oxgkg4YQ
51Q+qDp2JE+BIcXjDwL4k5RHILv+1A7TaLndxHqEguNTVHnd25zS8gebLra8Pu2F
be8lEfKXGkJh90qX6IuxEAf6ZYGyojnP9zz/GPvG8VqLWeICrHuS0E4UT1lF9gxe
KF+w6D9Fz8+vm2/7hNN3WpVvrJSEnu68wEqPSpP4RCHiMUVhUE4Q2OM1fEwZtN4F
v6MGn8i1zeQf1xcGDXqVdFUNaBr8EBtiZJ1t4JWgw5QHVw0U5r0F+7if5t+L4sbn
fpb2U8WANFAoWPASUHEXMLrmeGO89LKtmyuy/uE5jF66CyCU3nuDuP/jVo23Eek7
jPKxwV2dpAtMK9myGPW1n0sCAwEAAaNjMGEwHQYDVR0OBBYEFFLYiDrIn3hm7Ynz
ezhwlMkCAjbQMA8GA1UdEwEB/wQFMAMBAf8wHwYDVR0jBBgwFoAUUtiIOsifeGbt
ifN7OHCUyQICNtAwDgYDVR0PAQH/BAQDAgEGMA0GCSqGSIb3DQEBCwUAA4ICAQAL
e3KHwGCmSUyIWOYdiPcUZEim2FgKDk8TNd81HdTtBjHIgT5q1d07GjLukD0R0i70
jsNjLiNmsGe+b7bAEzlgqqI0JZN1Ut6nna0Oh4lScWoWPBkdg/iaKWW+9D+a2fDz
WochcYBNy+A4mz+7+uAwTc+G02UQGRjRlwKxK3JCaKygvU5a2hi/a5iB0P2avl4V
SM0RFbnAKVy06Ij3Pjaut2L9HmLecHgQHEhb2rykOLpn7VU+Xlff1ANATIGk0k9j
pwlCCRT8AKnCgHNPLsBA2RF7SOp6AsDT6ygBJlh0wcBzIm2Tlf05fbsq4/aC4yyX
X04fkZT6/iyj2HYauE2yOE+b+h1IYHkm4vP9qdCa6HCPSXrW5b0KDtst842/6+Ok
fcvHlXHo2qN8xcL4dJIEG4aspCJTQLas/kx2z/uUMsA1n3Y/buWQbqCmJqK4LL7R
K4X9p2jIugErsWx0Hbhzlefut8cl8ABMALJ+tguLHPPAUJ4lueAI3jZm/zel0btU
ZCzJJ7VLkn5l/9Mt4blOvH+kQSGQQXemOR/qnuOf0GZvBeyqdn6/axag67XH/JJU
LysRJyU3eExRarDzzFhdFPFqSBX/wge2sY0PjlxQRrM9vwGYT7JZVEc+NHt4bVaT
LnPqZih4zR0Uv6CPLy64Lo7yFIrM6bV8+2ydDKXhlg==
-----END CERTIFICATE-----

# Issuer: CN=Buypass Class 2 Root CA O=Buypass AS-983163327
# Subject: CN=Buypass Class 2 Root CA O=Buypass AS-983163327
# Label: "Buypass Class 2 Root CA"
# Serial: 2
# MD5 Fingerprint: 46:a7:d2:fe:45:fb:64:5a:a8:59:90:9b:78:44:9b:29
# SHA1 Fingerprint: 49:0a:75:74:de:87:0a:47:fe:58:ee:f6:c7:6b:eb:c6:0b:12:40:99
# SHA256 Fingerprint: 9a:11:40:25:19:7c:5b:b9:5d:94:e6:3d:55:cd:43:79:08:47:b6:46:b2:3c:df:11:ad:a4:a0:0e:ff:15:fb:48
-----BEGIN CERTIFICATE-----
MIIFWTCCA0GgAwIBAgIBAjANBgkqhkiG9w0BAQsFADBOMQswCQYDVQQGEwJOTzEd
MBsGA1UECgwUQnV5cGFzcyBBUy05ODMxNjMzMjcxIDAeBgNVBAMMF0J1eXBhc3Mg
Q2xhc3MgMiBSb290IENBMB4XDTEwMTAyNjA4MzgwM1oXDTQwMTAyNjA4MzgwM1ow
TjELMAkGA1UEBhMCTk8xHTAbBgNVBAoMFEJ1eXBhc3MgQVMtOTgzMTYzMzI3MSAw
HgYDVQQDDBdCdXlwYXNzIENsYXNzIDIgUm9vdCBDQTCCAiIwDQYJKoZIhvcNAQEB
BQADggIPADCCAgoCggIBANfHXvfBB9R3+0Mh9PT1aeTuMgHbo4Yf5FkNuud1g1Lr
6hxhFUi7HQfKjK6w3Jad6sNgkoaCKHOcVgb/S2TwDCo3SbXlzwx87vFKu3MwZfPV
L4O2fuPn9Z6rYPnT8Z2SdIrkHJasW4DptfQxh6NR/Md+oW+OU3fUl8FVM5I+GC91
1K2GScuVr1QGbNgGE41b/+EmGVnAJLqBcXmQRFBoJJRfuLMR8SlBYaNByyM21cHx
MlAQTn/0hpPshNOOvEu/XAFOBz3cFIqUCqTqc/sLUegTBxj6DvEr0VQVfTzh97QZ
QmdiXnfgolXsttlpF9U6r0TtSsWe5HonfOV116rLJeffawrbD02TTqigzXsu8lkB
arcNuAeBfos4GzjmCleZPe4h6KP1DBbdi+w0jpwqHAAVF41og9JwnxgIzRFo1clr
Us3ERo/ctfPYV3Me6ZQ5BL/T3jjetFPsaRyifsSP5BtwrfKi+fv3FmRmaZ9JUaLi
FRhnBkp/1Wy1TbMz4GHrXb7pmA8y1x1LPC5aAVKRCfLf6o3YBkBjqhHk/sM3nhRS
P/TizPJhk9H9Z2vXUq6/aKtAQ6BXNVN48FP4YUIHZMbXb5tMOA1jrGKvNouicwoN
9SG9dKpN6nIDSdvHXx1iY8f93ZHsM+71bbRuMGjeyNYmsHVee7QHIJihdjK4TWxP
AgMBAAGjQjBAMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFMmAd+BikoL1Rpzz
uvdMw964o605MA4GA1UdDwEB/wQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAgEAU18h
9bqwOlI5LJKwbADJ784g7wbylp7ppHR/ehb8t/W2+xUbP6umwHJdELFx7rxP462s
A20ucS6vxOOto70MEae0/0qyexAQH6dXQbLArvQsWdZHEIjzIVEpMMpghq9Gqx3t
OluwlN5E40EIosHsHdb9T7bWR9AUC8rmyrV7d35BH16Dx7aMOZawP5aBQW9gkOLo
+fsicdl9sz1Gv7SEr5AcD48Saq/v7h56rgJKihcrdv6sVIkkLE8/trKnToyokZf7
KcZ7XC25y2a2t6hbElGFtQl+Ynhw/qlqYLYdDnkM/crqJIByw5c/8nerQyIKx+u2
DISCLIBrQYoIwOula9+ZEsuK1V6ADJHgJgg2SMX6OBE1/yWDLfJ6v9r9jv6ly0Us
H8SIU653DtmadsWOLB2jutXsMq7Aqqz30XpN69QH4kj3Io6wpJ9qzo6ysmD0oyLQ
I+uUWnpp3Q+/QFesa1lQ2aOZ4W7+jQF5JyMV3pKdewlNWudLSDBaGOYKbeaP4NK7
5t98biGCwWg5TbSYWGZizEqQXsP6JwSxeRV0mcy+rSDeJmAc61ZRpqPq5KM/p/9h
3PFaTWwyI0PurKju7koSCTxdccK+efrCh2gdC/1cacwG0Jp9VJkqyTkaGa9LKkPz
Y11aWOIv4x3kqdbQCtCev9eBCfHJxyYNrJgWVqA=
-----END CERTIFICATE-----

# Issuer: CN=Buypass Class 3 Root CA O=Buypass AS-983163327
# Subject: CN=Buypass Class 3 Root CA O=Buypass AS-983163327
# Label: "Buypass Class 3 Root CA"
# Serial: 2
# MD5 Fingerprint: 3d:3b:18:9e:2c:64:5a:e8:d5:88:ce:0e:f9:37:c2:ec
# SHA1 Fingerprint: da:fa:f7:fa:66:84:ec:06:8f:14:50:bd:c7:c2:81:a5:bc:a9:64:57
# SHA256 Fingerprint: ed:f7:eb:bc:a2:7a:2a:38:4d:38:7b:7d:40:10:c6:66:e2:ed:b4:84:3e:4c:29:b4:ae:1d:5b:93:32:e6:b2:4d
-----BEGIN CERTIFICATE-----
MIIFWTCCA0GgAwIBAgIBAjANBgkqhkiG9w0BAQsFADBOMQswCQYDVQQGEwJOTzEd
MBsGA1UECgwUQnV5cGFzcyBBUy05ODMxNjMzMjcxIDAeBgNVBAMMF0J1eXBhc3Mg
Q2xhc3MgMyBSb290IENBMB4XDTEwMTAyNjA4Mjg1OFoXDTQwMTAyNjA4Mjg1OFow
TjELMAkGA1UEBhMCTk8xHTAbBgNVBAoMFEJ1eXBhc3MgQVMtOTgzMTYzMzI3MSAw
HgYDVQQDDBdCdXlwYXNzIENsYXNzIDMgUm9vdCBDQTCCAiIwDQYJKoZIhvcNAQEB
BQADggIPADCCAgoCggIBAKXaCpUWUOOV8l6ddjEGMnqb8RB2uACatVI2zSRHsJ8Y
ZLya9vrVediQYkwiL944PdbgqOkcLNt4EemOaFEVcsfzM4fkoF0LXOBXByow9c3E
N3coTRiR5r/VUv1xLXA+58bEiuPwKAv0dpihi4dVsjoT/Lc+JzeOIuOoTyrvYLs9
tznDDgFHmV0ST9tD+leh7fmdvhFHJlsTmKtdFoqwNxxXnUX/iJY2v7vKB3tvh2PX
0DJq1l1sDPGzbjniazEuOQAnFN44wOwZZoYS6J1yFhNkUsepNxz9gjDthBgd9K5c
/3ATAOux9TN6S9ZV+AWNS2mw9bMoNlwUxFFzTWsL8TQH2xc519woe2v1n/MuwU8X
KhDzzMro6/1rqy6any2CbgTUUgGTLT2G/H783+9CHaZr77kgxve9oKeV/afmiSTY
zIw0bOIjL9kSGiG5VZFvC5F5GQytQIgLcOJ60g7YaEi7ghM5EFjp2CoHxhLbWNvS
O1UQRwUVZ2J+GGOmRj8JDlQyXr8NYnon74Do29lLBlo3WiXQCBJ31G8JUJc9yB3D
34xFMFbG02SrZvPAXpacw8Tvw3xrizp5f7NJzz3iiZ+gMEuFuZyUJHmPfWupRWgP
K9Dx2hzLabjKSWJtyNBjYt1gD1iqj6G8BaVmos8bdrKEZLFMOVLAMLrwjEsCsLa3
AgMBAAGjQjBAMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFEe4zf/lb+74suwv
Tg75JbCOPGvDMA4GA1UdDwEB/wQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAgEAACAj
QTUEkMJAYmDv4jVM1z+s4jSQuKFvdvoWFqRINyzpkMLyPPgKn9iB5btb2iUspKdV
cSQy9sgL8rxq+JOssgfCX5/bzMiKqr5qb+FJEMwx14C7u8jYog5kV+qi9cKpMRXS
IGrs/CIBKM+GuIAeqcwRpTzyFrNHnfzSgCHEy9BHcEGhyoMZCCxt8l13nIoUE9Q2
HJLw5QY33KbmkJs4j1xrG0aGQ0JfPgEHU1RdZX33inOhmlRaHylDFCfChQ+1iHsa
O5S3HWCntZznKWlXWpuTekMwGwPXYshApqr8ZORK15FTAaggiG6cX0S5y2CBNOxv
033aSF/rtJC8LakcC6wc1aJoIIAE1vyxjy+7SjENSoYc6+I2KSb12tjE8nVhz36u
dmNKekBlk4f4HoCMhuWG1o8O/FMsYOgWYRqiPkN7zTlgVGr18okmAWiDSKIz6MkE
kbIRNBE+6tBDGR8Dk5AM/1E9V/RBbuHLoL7ryWPNbczk+DaqaJ3tvV2XcEQNtg41
3OEMXbugUZTLfhbrES+jkkXITHHZvMmZUldGL1DPvTVp9D0VzgalLA8+9oG6lLvD
u79leNKGef9JOxqDDPDeeOzI8k1MGt6CKfjBWtrt7uYnXuhF0J0cUahoq0Tj0Itq
4/g7u9xN12TyUb7mqqta6THuBrxzvxNiCp/HuZc=
-----END CERTIFICATE-----

# Issuer: CN=T-TeleSec GlobalRoot Class 3 O=T-Systems Enterprise Services GmbH OU=T-Systems Trust Center
# Subject: CN=T-TeleSec GlobalRoot Class 3 O=T-Systems Enterprise Services GmbH OU=T-Systems Trust Center
# Label: "T-TeleSec GlobalRoot Class 3"
# Serial: 1
# MD5 Fingerprint: ca:fb:40:a8:4e:39:92:8a:1d:fe:8e:2f:c4:27:ea:ef
# SHA1 Fingerprint: 55:a6:72:3e:cb:f2:ec:cd:c3:23:74:70:19:9d:2a:be:11:e3:81:d1
# SHA256 Fingerprint: fd:73:da:d3:1c:64:4f:f1:b4:3b:ef:0c:cd:da:96:71:0b:9c:d9:87:5e:ca:7e:31:70:7a:f3:e9:6d:52:2b:bd
-----BEGIN CERTIFICATE-----
MIIDwzCCAqugAwIBAgIBATANBgkqhkiG9w0BAQsFADCBgjELMAkGA1UEBhMCREUx
KzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnByaXNlIFNlcnZpY2VzIEdtYkgxHzAd
BgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50ZXIxJTAjBgNVBAMMHFQtVGVsZVNl
YyBHbG9iYWxSb290IENsYXNzIDMwHhcNMDgxMDAxMTAyOTU2WhcNMzMxMDAxMjM1
OTU5WjCBgjELMAkGA1UEBhMCREUxKzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnBy
aXNlIFNlcnZpY2VzIEdtYkgxHzAdBgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50
ZXIxJTAjBgNVBAMMHFQtVGVsZVNlYyBHbG9iYWxSb290IENsYXNzIDMwggEiMA0G
CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC9dZPwYiJvJK7genasfb3ZJNW4t/zN
8ELg63iIVl6bmlQdTQyK9tPPcPRStdiTBONGhnFBSivwKixVA9ZIw+A5OO3yXDw/
RLyTPWGrTs0NvvAgJ1gORH8EGoel15YUNpDQSXuhdfsaa3Ox+M6pCSzyU9XDFES4
hqX2iys52qMzVNn6chr3IhUciJFrf2blw2qAsCTz34ZFiP0Zf3WHHx+xGwpzJFu5
ZeAsVMhg02YXP+HMVDNzkQI6pn97djmiH5a2OK61yJN0HZ65tOVgnS9W0eDrXltM
EnAMbEQgqxHY9Bn20pxSN+f6tsIxO0rUFJmtxxr1XV/6B7h8DR/Wgx6zAgMBAAGj
QjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBS1
A/d2O2GCahKqGFPrAyGUv/7OyjANBgkqhkiG9w0BAQsFAAOCAQEAVj3vlNW92nOy
WL6ukK2YJ5f+AbGwUgC4TeQbIXQbfsDuXmkqJa9c1h3a0nnJ85cp4IaH3gRZD/FZ
1GSFS5mvJQQeyUapl96Cshtwn5z2r3Ex3XsFpSzTucpH9sry9uetuUg/vBa3wW30
6gmv7PO15wWeph6KU1HWk4HMdJP2udqmJQV0eVp+QD6CSyYRMG7hP0HHRwA11fXT
91Q+gT3aSWqas+8QPebrb9HIIkfLzM8BMZLZGOMivgkeGj5asuRrDFR6fUNOuIml
e9eiPZaGzPImNC1qkp2aGtAw4l1OBLBfiyB+d8E9lYLRRpo7PHi4b6HQDWSieB4p
TpPDpFQUWw==
-----END CERTIFICATE-----

# Issuer: CN=D-TRUST Root Class 3 CA 2 2009 O=D-Trust GmbH
# Subject: CN=D-TRUST Root Class 3 CA 2 2009 O=D-Trust GmbH
# Label: "D-TRUST Root Class 3 CA 2 2009"
# Serial: 623603
# MD5 Fingerprint: cd:e0:25:69:8d:47:ac:9c:89:35:90:f7:fd:51:3d:2f
# SHA1 Fingerprint: 58:e8:ab:b0:36:15:33:fb:80:f7:9b:1b:6d:29:d3:ff:8d:5f:00:f0
# SHA256 Fingerprint: 49:e7:a4:42:ac:f0:ea:62:87:05:00:54:b5:25:64:b6:50:e4:f4:9e:42:e3:48:d6:aa:38:e0:39:e9:57:b1:c1
-----BEGIN CERTIFICATE-----
MIIEMzCCAxugAwIBAgIDCYPzMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNVBAYTAkRF
MRUwEwYDVQQKDAxELVRydXN0IEdtYkgxJzAlBgNVBAMMHkQtVFJVU1QgUm9vdCBD
bGFzcyAzIENBIDIgMjAwOTAeFw0wOTExMDUwODM1NThaFw0yOTExMDUwODM1NTha
ME0xCzAJBgNVBAYTAkRFMRUwEwYDVQQKDAxELVRydXN0IEdtYkgxJzAlBgNVBAMM
HkQtVFJVU1QgUm9vdCBDbGFzcyAzIENBIDIgMjAwOTCCASIwDQYJKoZIhvcNAQEB
BQADggEPADCCAQoCggEBANOySs96R+91myP6Oi/WUEWJNTrGa9v+2wBoqOADER03
UAifTUpolDWzU9GUY6cgVq/eUXjsKj3zSEhQPgrfRlWLJ23DEE0NkVJD2IfgXU42
tSHKXzlABF9bfsyjxiupQB7ZNoTWSPOSHjRGICTBpFGOShrvUD9pXRl/RcPHAY9R
ySPocq60vFYJfxLLHLGvKZAKyVXMD9O0Gu1HNVpK7ZxzBCHQqr0ME7UAyiZsxGsM
lFqVlNpQmvH/pStmMaTJOKDfHR+4CS7zp+hnUquVH+BGPtikw8paxTGA6Eian5Rp
/hnd2HN8gcqW3o7tszIFZYQ05ub9VxC1X3a/L7AQDcUCAwEAAaOCARowggEWMA8G
A1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFP3aFMSfMN4hvR5COfyrYyNJ4PGEMA4G
A1UdDwEB/wQEAwIBBjCB0wYDVR0fBIHLMIHIMIGAoH6gfIZ6bGRhcDovL2RpcmVj
dG9yeS5kLXRydXN0Lm5ldC9DTj1ELVRSVVNUJTIwUm9vdCUyMENsYXNzJTIwMyUy
MENBJTIwMiUyMDIwMDksTz1ELVRydXN0JTIwR21iSCxDPURFP2NlcnRpZmljYXRl
cmV2b2NhdGlvbmxpc3QwQ6BBoD+GPWh0dHA6Ly93d3cuZC10cnVzdC5uZXQvY3Js
L2QtdHJ1c3Rfcm9vdF9jbGFzc18zX2NhXzJfMjAwOS5jcmwwDQYJKoZIhvcNAQEL
BQADggEBAH+X2zDI36ScfSF6gHDOFBJpiBSVYEQBrLLpME+bUMJm2H6NMLVwMeni
acfzcNsgFYbQDfC+rAF1hM5+n02/t2A7nPPKHeJeaNijnZflQGDSNiH+0LS4F9p0
o3/U37CYAqxva2ssJSRyoWXuJVrl5jLn8t+rSfrzkGkj2wTZ51xY/GXUl77M/C4K
zCUqNQT4YJEVdT1B/yMfGchs64JTBKbkTCJNjYy6zltz7GRUUG3RnFX7acM2w4y8
PIWmawomDeCTmGCufsYkl4phX5GOZpIJhzbNi5stPvZR1FDUWSi9g/LMKHtThm3Y
Johw1+qRzT65ysCQblrGXnRl11z+o+I=
-----END CERTIFICATE-----

# Issuer: CN=D-TRUST Root Class 3 CA 2 EV 2009 O=D-Trust GmbH
# Subject: CN=D-TRUST Root Class 3 CA 2 EV 2009 O=D-Trust GmbH
# Label: "D-TRUST Root Class 3 CA 2 EV 2009"
# Serial: 623604
# MD5 Fingerprint: aa:c6:43:2c:5e:2d:cd:c4:34:c0:50:4f:11:02:4f:b6
# SHA1 Fingerprint: 96:c9:1b:0b:95:b4:10:98:42:fa:d0:d8:22:79:fe:60:fa:b9:16:83
# SHA256 Fingerprint: ee:c5:49:6b:98:8c:e9:86:25:b9:34:09:2e:ec:29:08:be:d0:b0:f3:16:c2:d4:73:0c:84:ea:f1:f3:d3:48:81
-----BEGIN CERTIFICATE-----
MIIEQzCCAyugAwIBAgIDCYP0MA0GCSqGSIb3DQEBCwUAMFAxCzAJBgNVBAYTAkRF
MRUwEwYDVQQKDAxELVRydXN0IEdtYkgxKjAoBgNVBAMMIUQtVFJVU1QgUm9vdCBD
bGFzcyAzIENBIDIgRVYgMjAwOTAeFw0wOTExMDUwODUwNDZaFw0yOTExMDUwODUw
NDZaMFAxCzAJBgNVBAYTAkRFMRUwEwYDVQQKDAxELVRydXN0IEdtYkgxKjAoBgNV
BAMMIUQtVFJVU1QgUm9vdCBDbGFzcyAzIENBIDIgRVYgMjAwOTCCASIwDQYJKoZI
hvcNAQEBBQADggEPADCCAQoCggEBAJnxhDRwui+3MKCOvXwEz75ivJn9gpfSegpn
ljgJ9hBOlSJzmY3aFS3nBfwZcyK3jpgAvDw9rKFs+9Z5JUut8Mxk2og+KbgPCdM0
3TP1YtHhzRnp7hhPTFiu4h7WDFsVWtg6uMQYZB7jM7K1iXdODL/ZlGsTl28So/6Z
qQTMFexgaDbtCHu39b+T7WYxg4zGcTSHThfqr4uRjRxWQa4iN1438h3Z0S0NL2lR
p75mpoo6Kr3HGrHhFPC+Oh25z1uxav60sUYgovseO3Dvk5h9jHOW8sXvhXCtKSb8
HgQ+HKDYD8tSg2J87otTlZCpV6LqYQXY+U3EJ/pure3511H3a6UCAwEAAaOCASQw
ggEgMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFNOUikxiEyoZLsyvcop9Ntea
HNxnMA4GA1UdDwEB/wQEAwIBBjCB3QYDVR0fBIHVMIHSMIGHoIGEoIGBhn9sZGFw
Oi8vZGlyZWN0b3J5LmQtdHJ1c3QubmV0L0NOPUQtVFJVU1QlMjBSb290JTIwQ2xh
c3MlMjAzJTIwQ0ElMjAyJTIwRVYlMjAyMDA5LE89RC1UcnVzdCUyMEdtYkgsQz1E
RT9jZXJ0aWZpY2F0ZXJldm9jYXRpb25saXN0MEagRKBChkBodHRwOi8vd3d3LmQt
dHJ1c3QubmV0L2NybC9kLXRydXN0X3Jvb3RfY2xhc3NfM19jYV8yX2V2XzIwMDku
Y3JsMA0GCSqGSIb3DQEBCwUAA4IBAQA07XtaPKSUiO8aEXUHL7P+PPoeUSbrh/Yp
3uDx1MYkCenBz1UbtDDZzhr+BlGmFaQt77JLvyAoJUnRpjZ3NOhk31KxEcdzes05
nsKtjHEh8lprr988TlWvsoRlFIm5d8sqMb7Po23Pb0iUMkZv53GMoKaEGTcH8gNF
CSuGdXzfX2lXANtu2KZyIktQ1HWYVt+3GP9DQ1CuekR78HlR10M9p9OB0/DJT7na
xpeG0ILD5EJt/rDiZE4OJudANCa1CInXCGNjOCd1HjPqbqjdn5lPdE2BiYBL3ZqX
KVwvvoFBuYz/6n1gBp7N1z3TLqMVvKjmJuVvw9y4AyHqnxbxLFS1
-----END CERTIFICATE-----

# Issuer: CN=CA Disig Root R2 O=Disig a.s.
# Subject: CN=CA Disig Root R2 O=Disig a.s.
# Label: "CA Disig Root R2"
# Serial: 10572350602393338211
# MD5 Fingerprint: 26:01:fb:d8:27:a7:17:9a:45:54:38:1a:43:01:3b:03
# SHA1 Fingerprint: b5:61:eb:ea:a4:de:e4:25:4b:69:1a:98:a5:57:47:c2:34:c7:d9:71
# SHA256 Fingerprint: e2:3d:4a:03:6d:7b:70:e9:f5:95:b1:42:20:79:d2:b9:1e:df:bb:1f:b6:51:a0:63:3e:aa:8a:9d:c5:f8:07:03
-----BEGIN CERTIFICATE-----
MIIFaTCCA1GgAwIBAgIJAJK4iNuwisFjMA0GCSqGSIb3DQEBCwUAMFIxCzAJBgNV
BAYTAlNLMRMwEQYDVQQHEwpCcmF0aXNsYXZhMRMwEQYDVQQKEwpEaXNpZyBhLnMu
MRkwFwYDVQQDExBDQSBEaXNpZyBSb290IFIyMB4XDTEyMDcxOTA5MTUzMFoXDTQy
MDcxOTA5MTUzMFowUjELMAkGA1UEBhMCU0sxEzARBgNVBAcTCkJyYXRpc2xhdmEx
EzARBgNVBAoTCkRpc2lnIGEucy4xGTAXBgNVBAMTEENBIERpc2lnIFJvb3QgUjIw
ggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCio8QACdaFXS1tFPbCw3Oe
NcJxVX6B+6tGUODBfEl45qt5WDza/3wcn9iXAng+a0EE6UG9vgMsRfYvZNSrXaNH
PWSb6WiaxswbP7q+sos0Ai6YVRn8jG+qX9pMzk0DIaPY0jSTVpbLTAwAFjxfGs3I
x2ymrdMxp7zo5eFm1tL7A7RBZckQrg4FY8aAamkw/dLukO8NJ9+flXP04SXabBbe
QTg06ov80egEFGEtQX6sx3dOy1FU+16SGBsEWmjGycT6txOgmLcRK7fWV8x8nhfR
yyX+hk4kLlYMeE2eARKmK6cBZW58Yh2EhN/qwGu1pSqVg8NTEQxzHQuyRpDRQjrO
QG6Vrf/GlK1ul4SOfW+eioANSW1z4nuSHsPzwfPrLgVv2RvPN3YEyLRa5Beny912
H9AZdugsBbPWnDTYltxhh5EF5EQIM8HauQhl1K6yNg3ruji6DOWbnuuNZt2Zz9aJ
QfYEkoopKW1rOhzndX0CcQ7zwOe9yxndnWCywmZgtrEE7snmhrmaZkCo5xHtgUUD
i/ZnWejBBhG93c+AAk9lQHhcR1DIm+YfgXvkRKhbhZri3lrVx/k6RGZL5DJUfORs
nLMOPReisjQS1n6yqEm70XooQL6iFh/f5DcfEXP7kAplQ6INfPgGAVUzfbANuPT1
rqVCV3w2EYx7XsQDnYx5nQIDAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MA4GA1Ud
DwEB/wQEAwIBBjAdBgNVHQ4EFgQUtZn4r7CU9eMg1gqtzk5WpC5uQu0wDQYJKoZI
hvcNAQELBQADggIBACYGXnDnZTPIgm7ZnBc6G3pmsgH2eDtpXi/q/075KMOYKmFM
tCQSin1tERT3nLXK5ryeJ45MGcipvXrA1zYObYVybqjGom32+nNjf7xueQgcnYqf
GopTpti72TVVsRHFqQOzVju5hJMiXn7B9hJSi+osZ7z+Nkz1uM/Rs0mSO9MpDpkb
lvdhuDvEK7Z4bLQjb/D907JedR+Zlais9trhxTF7+9FGs9K8Z7RiVLoJ92Owk6Ka
+elSLotgEqv89WBW7xBci8QaQtyDW2QOy7W81k/BfDxujRNt+3vrMNDcTa/F1bal
TFtxyegxvug4BkihGuLq0t4SOVga/4AOgnXmt8kHbA7v/zjxmHHEt38OFdAlab0i
nSvtBfZGR6ztwPDUO+Ls7pZbkBNOHlY667DvlruWIxG68kOGdGSVyCh13x01utI3
gzhTODY7z2zp+WsO0PsE6E9312UBeIYMej4hYvF/Y3EMyZ9E26gnonW+boE+18Dr
G5gPcFw0sorMwIUY6256s/daoQe/qUKS82Ail+QUoQebTnbAjn39pCXHR+3/H3Os
zMOl6W8KjptlwlCFtaOgUxLMVYdh84GuEEZhvUQhuMI9dM9+JDX6HAcOmz0iyu8x
L4ysEr3vQCj8KWefshNPZiTEUxnpHikV7+ZtsH8tZ/3zbBt1RqPlShfppNcL
-----END CERTIFICATE-----

# Issuer: CN=ACCVRAIZ1 O=ACCV OU=PKIACCV
# Subject: CN=ACCVRAIZ1 O=ACCV OU=PKIACCV
# Label: "ACCVRAIZ1"
# Serial: 6828503384748696800
# MD5 Fingerprint: d0:a0:5a:ee:05:b6:09:94:21:a1:7d:f1:b2:29:82:02
# SHA1 Fingerprint: 93:05:7a:88:15:c6:4f:ce:88:2f:fa:91:16:52:28:78:bc:53:64:17
# SHA256 Fingerprint: 9a:6e:c0:12:e1:a7:da:9d:be:34:19:4d:47:8a:d7:c0:db:18:22:fb:07:1d:f1:29:81:49:6e:d1:04:38:41:13
-----BEGIN CERTIFICATE-----
MIIH0zCCBbugAwIBAgIIXsO3pkN/pOAwDQYJKoZIhvcNAQEFBQAwQjESMBAGA1UE
AwwJQUNDVlJBSVoxMRAwDgYDVQQLDAdQS0lBQ0NWMQ0wCwYDVQQKDARBQ0NWMQsw
CQYDVQQGEwJFUzAeFw0xMTA1MDUwOTM3MzdaFw0zMDEyMzEwOTM3MzdaMEIxEjAQ
BgNVBAMMCUFDQ1ZSQUlaMTEQMA4GA1UECwwHUEtJQUNDVjENMAsGA1UECgwEQUND
VjELMAkGA1UEBhMCRVMwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCb
qau/YUqXry+XZpp0X9DZlv3P4uRm7x8fRzPCRKPfmt4ftVTdFXxpNRFvu8gMjmoY
HtiP2Ra8EEg2XPBjs5BaXCQ316PWywlxufEBcoSwfdtNgM3802/J+Nq2DoLSRYWo
G2ioPej0RGy9ocLLA76MPhMAhN9KSMDjIgro6TenGEyxCQ0jVn8ETdkXhBilyNpA
lHPrzg5XPAOBOp0KoVdDaaxXbXmQeOW1tDvYvEyNKKGno6e6Ak4l0Squ7a4DIrhr
IA8wKFSVf+DuzgpmndFALW4ir50awQUZ0m/A8p/4e7MCQvtQqR0tkw8jq8bBD5L/
0KIV9VMJcRz/RROE5iZe+OCIHAr8Fraocwa48GOEAqDGWuzndN9wrqODJerWx5eH
k6fGioozl2A3ED6XPm4pFdahD9GILBKfb6qkxkLrQaLjlUPTAYVtjrs78yM2x/47
4KElB0iryYl0/wiPgL/AlmXz7uxLaL2diMMxs0Dx6M/2OLuc5NF/1OVYm3z61PMO
m3WR5LpSLhl+0fXNWhn8ugb2+1KoS5kE3fj5tItQo05iifCHJPqDQsGH+tUtKSpa
cXpkatcnYGMN285J9Y0fkIkyF/hzQ7jSWpOGYdbhdQrqeWZ2iE9x6wQl1gpaepPl
uUsXQA+xtrn13k/c4LOsOxFwYIRKQ26ZIMApcQrAZQIDAQABo4ICyzCCAscwfQYI
KwYBBQUHAQEEcTBvMEwGCCsGAQUFBzAChkBodHRwOi8vd3d3LmFjY3YuZXMvZmls
ZWFkbWluL0FyY2hpdm9zL2NlcnRpZmljYWRvcy9yYWl6YWNjdjEuY3J0MB8GCCsG
AQUFBzABhhNodHRwOi8vb2NzcC5hY2N2LmVzMB0GA1UdDgQWBBTSh7Tj3zcnk1X2
VuqB5TbMjB4/vTAPBgNVHRMBAf8EBTADAQH/MB8GA1UdIwQYMBaAFNKHtOPfNyeT
VfZW6oHlNsyMHj+9MIIBcwYDVR0gBIIBajCCAWYwggFiBgRVHSAAMIIBWDCCASIG
CCsGAQUFBwICMIIBFB6CARAAQQB1AHQAbwByAGkAZABhAGQAIABkAGUAIABDAGUA
cgB0AGkAZgBpAGMAYQBjAGkA8wBuACAAUgBhAO0AegAgAGQAZQAgAGwAYQAgAEEA
QwBDAFYAIAAoAEEAZwBlAG4AYwBpAGEAIABkAGUAIABUAGUAYwBuAG8AbABvAGcA
7QBhACAAeQAgAEMAZQByAHQAaQBmAGkAYwBhAGMAaQDzAG4AIABFAGwAZQBjAHQA
cgDzAG4AaQBjAGEALAAgAEMASQBGACAAUQA0ADYAMAAxADEANQA2AEUAKQAuACAA
QwBQAFMAIABlAG4AIABoAHQAdABwADoALwAvAHcAdwB3AC4AYQBjAGMAdgAuAGUA
czAwBggrBgEFBQcCARYkaHR0cDovL3d3dy5hY2N2LmVzL2xlZ2lzbGFjaW9uX2Mu
aHRtMFUGA1UdHwROMEwwSqBIoEaGRGh0dHA6Ly93d3cuYWNjdi5lcy9maWxlYWRt
aW4vQXJjaGl2b3MvY2VydGlmaWNhZG9zL3JhaXphY2N2MV9kZXIuY3JsMA4GA1Ud
DwEB/wQEAwIBBjAXBgNVHREEEDAOgQxhY2N2QGFjY3YuZXMwDQYJKoZIhvcNAQEF
BQADggIBAJcxAp/n/UNnSEQU5CmH7UwoZtCPNdpNYbdKl02125DgBS4OxnnQ8pdp
D70ER9m+27Up2pvZrqmZ1dM8MJP1jaGo/AaNRPTKFpV8M9xii6g3+CfYCS0b78gU
JyCpZET/LtZ1qmxNYEAZSUNUY9rizLpm5U9EelvZaoErQNV/+QEnWCzI7UiRfD+m
AM/EKXMRNt6GGT6d7hmKG9Ww7Y49nCrADdg9ZuM8Db3VlFzi4qc1GwQA9j9ajepD
vV+JHanBsMyZ4k0ACtrJJ1vnE5Bc5PUzolVt3OAJTS+xJlsndQAJxGJ3KQhfnlms
tn6tn1QwIgPBHnFk/vk4CpYY3QIUrCPLBhwepH2NDd4nQeit2hW3sCPdK6jT2iWH
7ehVRE2I9DZ+hJp4rPcOVkkO1jMl1oRQQmwgEh0q1b688nCBpHBgvgW1m54ERL5h
I6zppSSMEYCUWqKiuUnSwdzRp+0xESyeGabu4VXhwOrPDYTkF7eifKXeVSUG7szA
h1xA2syVP1XgNce4hL60Xc16gwFy7ofmXx2utYXGJt/mwZrpHgJHnyqobalbz+xF
d3+YJ5oyXSrjhO7FmGYvliAd3djDJ9ew+f7Zfc3Qn48LFFhRny+Lwzgt3uiP1o2H
pPVWQxaZLPSkVrQ0uGE3ycJYgBugl6H8WY3pEfbRD0tVNEYqi4Y7
-----END CERTIFICATE-----

# Issuer: CN=TWCA Global Root CA O=TAIWAN-CA OU=Root CA
# Subject: CN=TWCA Global Root CA O=TAIWAN-CA OU=Root CA
# Label: "TWCA Global Root CA"
# Serial: 3262
# MD5 Fingerprint: f9:03:7e:cf:e6:9e:3c:73:7a:2a:90:07:69:ff:2b:96
# SHA1 Fingerprint: 9c:bb:48:53:f6:a4:f6:d3:52:a4:e8:32:52:55:60:13:f5:ad:af:65
# SHA256 Fingerprint: 59:76:90:07:f7:68:5d:0f:cd:50:87:2f:9f:95:d5:75:5a:5b:2b:45:7d:81:f3:69:2b:61:0a:98:67:2f:0e:1b
-----BEGIN CERTIFICATE-----
MIIFQTCCAymgAwIBAgICDL4wDQYJKoZIhvcNAQELBQAwUTELMAkGA1UEBhMCVFcx
EjAQBgNVBAoTCVRBSVdBTi1DQTEQMA4GA1UECxMHUm9vdCBDQTEcMBoGA1UEAxMT
VFdDQSBHbG9iYWwgUm9vdCBDQTAeFw0xMjA2MjcwNjI4MzNaFw0zMDEyMzExNTU5
NTlaMFExCzAJBgNVBAYTAlRXMRIwEAYDVQQKEwlUQUlXQU4tQ0ExEDAOBgNVBAsT
B1Jvb3QgQ0ExHDAaBgNVBAMTE1RXQ0EgR2xvYmFsIFJvb3QgQ0EwggIiMA0GCSqG
SIb3DQEBAQUAA4ICDwAwggIKAoICAQCwBdvI64zEbooh745NnHEKH1Jw7W2CnJfF
10xORUnLQEK1EjRsGcJ0pDFfhQKX7EMzClPSnIyOt7h52yvVavKOZsTuKwEHktSz
0ALfUPZVr2YOy+BHYC8rMjk1Ujoog/h7FsYYuGLWRyWRzvAZEk2tY/XTP3VfKfCh
MBwqoJimFb3u/Rk28OKRQ4/6ytYQJ0lM793B8YVwm8rqqFpD/G2Gb3PpN0Wp8DbH
zIh1HrtsBv+baz4X7GGqcXzGHaL3SekVtTzWoWH1EfcFbx39Eb7QMAfCKbAJTibc
46KokWofwpFFiFzlmLhxpRUZyXx1EcxwdE8tmx2RRP1WKKD+u4ZqyPpcC1jcxkt2
yKsi2XMPpfRaAok/T54igu6idFMqPVMnaR1sjjIsZAAmY2E2TqNGtz99sy2sbZCi
laLOz9qC5wc0GZbpuCGqKX6mOL6OKUohZnkfs8O1CWfe1tQHRvMq2uYiN2DLgbYP
oA/pyJV/v1WRBXrPPRXAb94JlAGD1zQbzECl8LibZ9WYkTunhHiVJqRaCPgrdLQA
BDzfuBSO6N+pjWxnkjMdwLfS7JLIvgm/LCkFbwJrnu+8vyq8W8BQj0FwcYeyTbcE
qYSjMq+u7msXi7Kx/mzhkIyIqJdIzshNy/MGz19qCkKxHh53L46g5pIOBvwFItIm
4TFRfTLcDwIDAQABoyMwITAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB
/zANBgkqhkiG9w0BAQsFAAOCAgEAXzSBdu+WHdXltdkCY4QWwa6gcFGn90xHNcgL
1yg9iXHZqjNB6hQbbCEAwGxCGX6faVsgQt+i0trEfJdLjbDorMjupWkEmQqSpqsn
LhpNgb+E1HAerUf+/UqdM+DyucRFCCEK2mlpc3INvjT+lIutwx4116KD7+U4x6WF
H6vPNOw/KP4M8VeGTslV9xzU2KV9Bnpv1d8Q34FOIWWxtuEXeZVFBs5fzNxGiWNo
RI2T9GRwoD2dKAXDOXC4Ynsg/eTb6QihuJ49CcdP+yz4k3ZB3lLg4VfSnQO8d57+
nile98FRYB/e2guyLXW3Q0iT5/Z5xoRdgFlglPx4mI88k1HtQJAH32RjJMtOcQWh
15QaiDLxInQirqWm2BJpTGCjAu4r7NRjkgtevi92a6O2JryPA9gK8kxkRr05YuWW
6zRjESjMlfGt7+/cgFhI6Uu46mWs6fyAtbXIRfmswZ/ZuepiiI7E8UuDEq3mi4TW
nsLrgxifarsbJGAzcMzs9zLzXNl5fe+epP7JI8Mk7hWSsT2RTyaGvWZzJBPqpK5j
wa19hAM8EHiGG3njxPPyBJUgriOCxLM6AGK/5jYk4Ve6xx6QddVfP5VhK8E7zeWz
aGHQRiapIVJpLesux+t3zqY6tQMzT3bR51xUAV3LePTJDL/PEo4XLSNolOer/qmy
KwbQBM0=
-----END CERTIFICATE-----

# Issuer: CN=TeliaSonera Root CA v1 O=TeliaSonera
# Subject: CN=TeliaSonera Root CA v1 O=TeliaSonera
# Label: "TeliaSonera Root CA v1"
# Serial: 199041966741090107964904287217786801558
# MD5 Fingerprint: 37:41:49:1b:18:56:9a:26:f5:ad:c2:66:fb:40:a5:4c
# SHA1 Fingerprint: 43:13:bb:96:f1:d5:86:9b:c1:4e:6a:92:f6:cf:f6:34:69:87:82:37
# SHA256 Fingerprint: dd:69:36:fe:21:f8:f0:77:c1:23:a1:a5:21:c1:22:24:f7:22:55:b7:3e:03:a7:26:06:93:e8:a2:4b:0f:a3:89
-----BEGIN CERTIFICATE-----
MIIFODCCAyCgAwIBAgIRAJW+FqD3LkbxezmCcvqLzZYwDQYJKoZIhvcNAQEFBQAw
NzEUMBIGA1UECgwLVGVsaWFTb25lcmExHzAdBgNVBAMMFlRlbGlhU29uZXJhIFJv
b3QgQ0EgdjEwHhcNMDcxMDE4MTIwMDUwWhcNMzIxMDE4MTIwMDUwWjA3MRQwEgYD
VQQKDAtUZWxpYVNvbmVyYTEfMB0GA1UEAwwWVGVsaWFTb25lcmEgUm9vdCBDQSB2
MTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAMK+6yfwIaPzaSZVfp3F
VRaRXP3vIb9TgHot0pGMYzHw7CTww6XScnwQbfQ3t+XmfHnqjLWCi65ItqwA3GV1
7CpNX8GH9SBlK4GoRz6JI5UwFpB/6FcHSOcZrr9FZ7E3GwYq/t75rH2D+1665I+X
Z75Ljo1kB1c4VWk0Nj0TSO9P4tNmHqTPGrdeNjPUtAa9GAH9d4RQAEX1jF3oI7x+
/jXh7VB7qTCNGdMJjmhnXb88lxhTuylixcpecsHHltTbLaC0H2kD7OriUPEMPPCs
81Mt8Bz17Ww5OXOAFshSsCPN4D7c3TxHoLs1iuKYaIu+5b9y7tL6pe0S7fyYGKkm
dtwoSxAgHNN/Fnct7W+A90m7UwW7XWjH1Mh1Fj+JWov3F0fUTPHSiXk+TT2YqGHe
Oh7S+F4D4MHJHIzTjU3TlTazN19jY5szFPAtJmtTfImMMsJu7D0hADnJoWjiUIMu
sDor8zagrC/kb2HCUQk5PotTubtn2txTuXZZNp1D5SDgPTJghSJRt8czu90VL6R4
pgd7gUY2BIbdeTXHlSw7sKMXNeVzH7RcWe/a6hBle3rQf5+ztCo3O3CLm1u5K7fs
slESl1MpWtTwEhDcTwK7EpIvYtQ/aUN8Ddb8WHUBiJ1YFkveupD/RwGJBmr2X7KQ
arMCpgKIv7NHfirZ1fpoeDVNAgMBAAGjPzA9MA8GA1UdEwEB/wQFMAMBAf8wCwYD
VR0PBAQDAgEGMB0GA1UdDgQWBBTwj1k4ALP1j5qWDNXr+nuqF+gTEjANBgkqhkiG
9w0BAQUFAAOCAgEAvuRcYk4k9AwI//DTDGjkk0kiP0Qnb7tt3oNmzqjMDfz1mgbl
dxSR651Be5kqhOX//CHBXfDkH1e3damhXwIm/9fH907eT/j3HEbAek9ALCI18Bmx
0GtnLLCo4MBANzX2hFxc469CeP6nyQ1Q6g2EdvZR74NTxnr/DlZJLo961gzmJ1Tj
TQpgcmLNkQfWpb/ImWvtxBnmq0wROMVvMeJuScg/doAmAyYp4Db29iBT4xdwNBed
Y2gea+zDTYa4EzAvXUYNR0PVG6pZDrlcjQZIrXSHX8f8MVRBE+LHIQ6e4B4N4cB7
Q4WQxYpYxmUKeFfyxiMPAdkgS94P+5KFdSpcc41teyWRyu5FrgZLAMzTsVlQ2jqI
OylDRl6XK1TOU2+NSueW+r9xDkKLfP0ooNBIytrEgUy7onOTJsjrDNYmiLbAJM+7
vVvrdX3pCI6GMyx5dwlppYn8s3CQh3aP0yK7Qs69cwsgJirQmz1wHiRszYd2qReW
t88NkvuOGKmYSdGe/mBEciG5Ge3C9THxOUiIkCR1VBatzvT4aRRkOfujuLpwQMcn
HL/EVlP6Y2XQ8xwOFvVrhlhNGNTkDY6lnVuR3HYkUD/GKvvZt5y11ubQ2egZixVx
SK236thZiNSQvxaz2emsWWFUyBy6ysHK4bkgTI86k4mloMy/0/Z1pHWWbVY=
-----END CERTIFICATE-----

# Issuer: CN=T-TeleSec GlobalRoot Class 2 O=T-Systems Enterprise Services GmbH OU=T-Systems Trust Center
# Subject: CN=T-TeleSec GlobalRoot Class 2 O=T-Systems Enterprise Services GmbH OU=T-Systems Trust Center
# Label: "T-TeleSec GlobalRoot Class 2"
# Serial: 1
# MD5 Fingerprint: 2b:9b:9e:e4:7b:6c:1f:00:72:1a:cc:c1:77:79:df:6a
# SHA1 Fingerprint: 59:0d:2d:7d:88:4f:40:2e:61:7e:a5:62:32:17:65:cf:17:d8:94:e9
# SHA256 Fingerprint: 91:e2:f5:78:8d:58:10:eb:a7:ba:58:73:7d:e1:54:8a:8e:ca:cd:01:45:98:bc:0b:14:3e:04:1b:17:05:25:52
-----BEGIN CERTIFICATE-----
MIIDwzCCAqugAwIBAgIBATANBgkqhkiG9w0BAQsFADCBgjELMAkGA1UEBhMCREUx
KzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnByaXNlIFNlcnZpY2VzIEdtYkgxHzAd
BgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50ZXIxJTAjBgNVBAMMHFQtVGVsZVNl
YyBHbG9iYWxSb290IENsYXNzIDIwHhcNMDgxMDAxMTA0MDE0WhcNMzMxMDAxMjM1
OTU5WjCBgjELMAkGA1UEBhMCREUxKzApBgNVBAoMIlQtU3lzdGVtcyBFbnRlcnBy
aXNlIFNlcnZpY2VzIEdtYkgxHzAdBgNVBAsMFlQtU3lzdGVtcyBUcnVzdCBDZW50
ZXIxJTAjBgNVBAMMHFQtVGVsZVNlYyBHbG9iYWxSb290IENsYXNzIDIwggEiMA0G
CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCqX9obX+hzkeXaXPSi5kfl82hVYAUd
AqSzm1nzHoqvNK38DcLZSBnuaY/JIPwhqgcZ7bBcrGXHX+0CfHt8LRvWurmAwhiC
FoT6ZrAIxlQjgeTNuUk/9k9uN0goOA/FvudocP05l03Sx5iRUKrERLMjfTlH6VJi
1hKTXrcxlkIF+3anHqP1wvzpesVsqXFP6st4vGCvx9702cu+fjOlbpSD8DT6Iavq
jnKgP6TeMFvvhk1qlVtDRKgQFRzlAVfFmPHmBiiRqiDFt1MmUUOyCxGVWOHAD3bZ
wI18gfNycJ5v/hqO2V81xrJvNHy+SE/iWjnX2J14np+GPgNeGYtEotXHAgMBAAGj
QjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBS/
WSA2AHmgoCJrjNXyYdK4LMuCSjANBgkqhkiG9w0BAQsFAAOCAQEAMQOiYQsfdOhy
NsZt+U2e+iKo4YFWz827n+qrkRk4r6p8FU3ztqONpfSO9kSpp+ghla0+AGIWiPAC
uvxhI+YzmzB6azZie60EI4RYZeLbK4rnJVM3YlNfvNoBYimipidx5joifsFvHZVw
IEoHNN/q/xWA5brXethbdXwFeilHfkCoMRN3zUA7tFFHei4R40cR3p1m0IvVVGb6
g1XqfMIpiRvpb7PO4gWEyS8+eIVibslfwXhjdFjASBgMmTnrpMwatXlajRWc2BQN
9noHV8cigwUtPJslJj0Ys6lDfMjIq2SPDqO/nBudMNva0Bkuqjzx+zOAduTNrRlP
BSeOE6Fuwg==
-----END CERTIFICATE-----

# Issuer: CN=Atos TrustedRoot 2011 O=Atos
# Subject: CN=Atos TrustedRoot 2011 O=Atos
# Label: "Atos TrustedRoot 2011"
# Serial: 6643877497813316402
# MD5 Fingerprint: ae:b9:c4:32:4b:ac:7f:5d:66:cc:77:94:bb:2a:77:56
# SHA1 Fingerprint: 2b:b1:f5:3e:55:0c:1d:c5:f1:d4:e6:b7:6a:46:4b:55:06:02:ac:21
# SHA256 Fingerprint: f3:56:be:a2:44:b7:a9:1e:b3:5d:53:ca:9a:d7:86:4a:ce:01:8e:2d:35:d5:f8:f9:6d:df:68:a6:f4:1a:a4:74
-----BEGIN CERTIFICATE-----
MIIDdzCCAl+gAwIBAgIIXDPLYixfszIwDQYJKoZIhvcNAQELBQAwPDEeMBwGA1UE
AwwVQXRvcyBUcnVzdGVkUm9vdCAyMDExMQ0wCwYDVQQKDARBdG9zMQswCQYDVQQG
EwJERTAeFw0xMTA3MDcxNDU4MzBaFw0zMDEyMzEyMzU5NTlaMDwxHjAcBgNVBAMM
FUF0b3MgVHJ1c3RlZFJvb3QgMjAxMTENMAsGA1UECgwEQXRvczELMAkGA1UEBhMC
REUwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCVhTuXbyo7LjvPpvMp
Nb7PGKw+qtn4TaA+Gke5vJrf8v7MPkfoepbCJI419KkM/IL9bcFyYie96mvr54rM
VD6QUM+A1JX76LWC1BTFtqlVJVfbsVD2sGBkWXppzwO3bw2+yj5vdHLqqjAqc2K+
SZFhyBH+DgMq92og3AIVDV4VavzjgsG1xZ1kCWyjWZgHJ8cblithdHFsQ/H3NYkQ
4J7sVaE3IqKHBAUsR320HLliKWYoyrfhk/WklAOZuXCFteZI6o1Q/NnezG8HDt0L
cp2AMBYHlT8oDv3FdU9T1nSatCQujgKRz3bFmx5VdJx4IbHwLfELn8LVlhgf8FQi
eowHAgMBAAGjfTB7MB0GA1UdDgQWBBSnpQaxLKYJYO7Rl+lwrrw7GWzbITAPBgNV
HRMBAf8EBTADAQH/MB8GA1UdIwQYMBaAFKelBrEspglg7tGX6XCuvDsZbNshMBgG
A1UdIAQRMA8wDQYLKwYBBAGwLQMEAQEwDgYDVR0PAQH/BAQDAgGGMA0GCSqGSIb3
DQEBCwUAA4IBAQAmdzTblEiGKkGdLD4GkGDEjKwLVLgfuXvTBznk+j57sj1O7Z8j
vZfza1zv7v1Apt+hk6EKhqzvINB5Ab149xnYJDE0BAGmuhWawyfc2E8PzBhj/5kP
DpFrdRbhIfzYJsdHt6bPWHJxfrrhTZVHO8mvbaG0weyJ9rQPOLXiZNwlz6bb65pc
maHFCN795trV1lpFDMS3wrUU77QR/w4VtfX128a961qn8FYiqTxlVMYVqL2Gns2D
lmh6cYGJ4Qvh6hEbaAjMaZ7snkGeRDImeuKHCnE96+RapNLbxc3G3mB/ufNPRJLv
KrcYPqcZ2Qt9sTdBQrC6YB3y/gkRsPCHe6ed
-----END CERTIFICATE-----

# Issuer: CN=QuoVadis Root CA 1 G3 O=QuoVadis Limited
# Subject: CN=QuoVadis Root CA 1 G3 O=QuoVadis Limited
# Label: "QuoVadis Root CA 1 G3"
# Serial: 687049649626669250736271037606554624078720034195
# MD5 Fingerprint: a4:bc:5b:3f:fe:37:9a:fa:64:f0:e2:fa:05:3d:0b:ab
# SHA1 Fingerprint: 1b:8e:ea:57:96:29:1a:c9:39:ea:b8:0a:81:1a:73:73:c0:93:79:67
# SHA256 Fingerprint: 8a:86:6f:d1:b2:76:b5:7e:57:8e:92:1c:65:82:8a:2b:ed:58:e9:f2:f2:88:05:41:34:b7:f1:f4:bf:c9:cc:74
-----BEGIN CERTIFICATE-----
MIIFYDCCA0igAwIBAgIUeFhfLq0sGUvjNwc1NBMotZbUZZMwDQYJKoZIhvcNAQEL
BQAwSDELMAkGA1UEBhMCQk0xGTAXBgNVBAoTEFF1b1ZhZGlzIExpbWl0ZWQxHjAc
BgNVBAMTFVF1b1ZhZGlzIFJvb3QgQ0EgMSBHMzAeFw0xMjAxMTIxNzI3NDRaFw00
MjAxMTIxNzI3NDRaMEgxCzAJBgNVBAYTAkJNMRkwFwYDVQQKExBRdW9WYWRpcyBM
aW1pdGVkMR4wHAYDVQQDExVRdW9WYWRpcyBSb290IENBIDEgRzMwggIiMA0GCSqG
SIb3DQEBAQUAA4ICDwAwggIKAoICAQCgvlAQjunybEC0BJyFuTHK3C3kEakEPBtV
wedYMB0ktMPvhd6MLOHBPd+C5k+tR4ds7FtJwUrVu4/sh6x/gpqG7D0DmVIB0jWe
rNrwU8lmPNSsAgHaJNM7qAJGr6Qc4/hzWHa39g6QDbXwz8z6+cZM5cOGMAqNF341
68Xfuw6cwI2H44g4hWf6Pser4BOcBRiYz5P1sZK0/CPTz9XEJ0ngnjybCKOLXSoh
4Pw5qlPafX7PGglTvF0FBM+hSo+LdoINofjSxxR3W5A2B4GbPgb6Ul5jxaYA/qXp
UhtStZI5cgMJYr2wYBZupt0lwgNm3fME0UDiTouG9G/lg6AnhF4EwfWQvTA9xO+o
abw4m6SkltFi2mnAAZauy8RRNOoMqv8hjlmPSlzkYZqn0ukqeI1RPToV7qJZjqlc
3sX5kCLliEVx3ZGZbHqfPT2YfF72vhZooF6uCyP8Wg+qInYtyaEQHeTTRCOQiJ/G
KubX9ZqzWB4vMIkIG1SitZgj7Ah3HJVdYdHLiZxfokqRmu8hqkkWCKi9YSgxyXSt
hfbZxbGL0eUQMk1fiyA6PEkfM4VZDdvLCXVDaXP7a3F98N/ETH3Goy7IlXnLc6KO
Tk0k+17kBL5yG6YnLUlamXrXXAkgt3+UuU/xDRxeiEIbEbfnkduebPRq34wGmAOt
zCjvpUfzUwIDAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIB
BjAdBgNVHQ4EFgQUo5fW816iEOGrRZ88F2Q87gFwnMwwDQYJKoZIhvcNAQELBQAD
ggIBABj6W3X8PnrHX3fHyt/PX8MSxEBd1DKquGrX1RUVRpgjpeaQWxiZTOOtQqOC
MTaIzen7xASWSIsBx40Bz1szBpZGZnQdT+3Btrm0DWHMY37XLneMlhwqI2hrhVd2
cDMT/uFPpiN3GPoajOi9ZcnPP/TJF9zrx7zABC4tRi9pZsMbj/7sPtPKlL92CiUN
qXsCHKnQO18LwIE6PWThv6ctTr1NxNgpxiIY0MWscgKCP6o6ojoilzHdCGPDdRS5
YCgtW2jgFqlmgiNR9etT2DGbe+m3nUvriBbP+V04ikkwj+3x6xn0dxoxGE1nVGwv
b2X52z3sIexe9PSLymBlVNFxZPT5pqOBMzYzcfCkeF9OrYMh3jRJjehZrJ3ydlo2
8hP0r+AJx2EqbPfgna67hkooby7utHnNkDPDs3b69fBsnQGQ+p6Q9pxyz0fawx/k
NSBT8lTR32GDpgLiJTjehTItXnOQUl1CxM49S+H5GYQd1aJQzEH7QRTDvdbJWqNj
ZgKAvQU6O0ec7AAmTPWIUb+oI38YB7AL7YsmoWTTYUrrXJ/es69nA7Mf3W1daWhp
q1467HxpvMc7hU6eFbm0FU/DlXpY18ls6Wy58yljXrQs8C097Vpl4KlbQMJImYFt
nh8GKjwStIsPm6Ik8KaN1nrgS7ZklmOVhMJKzRwuJIczYOXD
-----END CERTIFICATE-----

# Issuer: CN=QuoVadis Root CA 2 G3 O=QuoVadis Limited
# Subject: CN=QuoVadis Root CA 2 G3 O=QuoVadis Limited
# Label: "QuoVadis Root CA 2 G3"
# Serial: 390156079458959257446133169266079962026824725800
# MD5 Fingerprint: af:0c:86:6e:bf:40:2d:7f:0b:3e:12:50:ba:12:3d:06
# SHA1 Fingerprint: 09:3c:61:f3:8b:8b:dc:7d:55:df:75:38:02:05:00:e1:25:f5:c8:36
# SHA256 Fingerprint: 8f:e4:fb:0a:f9:3a:4d:0d:67:db:0b:eb:b2:3e:37:c7:1b:f3:25:dc:bc:dd:24:0e:a0:4d:af:58:b4:7e:18:40
-----BEGIN CERTIFICATE-----
MIIFYDCCA0igAwIBAgIURFc0JFuBiZs18s64KztbpybwdSgwDQYJKoZIhvcNAQEL
BQAwSDELMAkGA1UEBhMCQk0xGTAXBgNVBAoTEFF1b1ZhZGlzIExpbWl0ZWQxHjAc
BgNVBAMTFVF1b1ZhZGlzIFJvb3QgQ0EgMiBHMzAeFw0xMjAxMTIxODU5MzJaFw00
MjAxMTIxODU5MzJaMEgxCzAJBgNVBAYTAkJNMRkwFwYDVQQKExBRdW9WYWRpcyBM
aW1pdGVkMR4wHAYDVQQDExVRdW9WYWRpcyBSb290IENBIDIgRzMwggIiMA0GCSqG
SIb3DQEBAQUAA4ICDwAwggIKAoICAQChriWyARjcV4g/Ruv5r+LrI3HimtFhZiFf
qq8nUeVuGxbULX1QsFN3vXg6YOJkApt8hpvWGo6t/x8Vf9WVHhLL5hSEBMHfNrMW
n4rjyduYNM7YMxcoRvynyfDStNVNCXJJ+fKH46nafaF9a7I6JaltUkSs+L5u+9ym
c5GQYaYDFCDy54ejiK2toIz/pgslUiXnFgHVy7g1gQyjO/Dh4fxaXc6AcW34Sas+
O7q414AB+6XrW7PFXmAqMaCvN+ggOp+oMiwMzAkd056OXbxMmO7FGmh77FOm6RQ1
o9/NgJ8MSPsc9PG/Srj61YxxSscfrf5BmrODXfKEVu+lV0POKa2Mq1W/xPtbAd0j
IaFYAI7D0GoT7RPjEiuA3GfmlbLNHiJuKvhB1PLKFAeNilUSxmn1uIZoL1NesNKq
IcGY5jDjZ1XHm26sGahVpkUG0CM62+tlXSoREfA7T8pt9DTEceT/AFr2XK4jYIVz
8eQQsSWu1ZK7E8EM4DnatDlXtas1qnIhO4M15zHfeiFuuDIIfR0ykRVKYnLP43eh
vNURG3YBZwjgQQvD6xVu+KQZ2aKrr+InUlYrAoosFCT5v0ICvybIxo/gbjh9Uy3l
7ZizlWNof/k19N+IxWA1ksB8aRxhlRbQ694Lrz4EEEVlWFA4r0jyWbYW8jwNkALG
cC4BrTwV1wIDAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIB
BjAdBgNVHQ4EFgQU7edvdlq/YOxJW8ald7tyFnGbxD0wDQYJKoZIhvcNAQELBQAD
ggIBAJHfgD9DCX5xwvfrs4iP4VGyvD11+ShdyLyZm3tdquXK4Qr36LLTn91nMX66
AarHakE7kNQIXLJgapDwyM4DYvmL7ftuKtwGTTwpD4kWilhMSA/ohGHqPHKmd+RC
roijQ1h5fq7KpVMNqT1wvSAZYaRsOPxDMuHBR//47PERIjKWnML2W2mWeyAMQ0Ga
W/ZZGYjeVYg3UQt4XAoeo0L9x52ID8DyeAIkVJOviYeIyUqAHerQbj5hLja7NQ4n
lv1mNDthcnPxFlxHBlRJAHpYErAK74X9sbgzdWqTHBLmYF5vHX/JHyPLhGGfHoJE
+V+tYlUkmlKY7VHnoX6XOuYvHxHaU4AshZ6rNRDbIl9qxV6XU/IyAgkwo1jwDQHV
csaxfGl7w/U2Rcxhbl5MlMVerugOXou/983g7aEOGzPuVBj+D77vfoRrQ+NwmNtd
dbINWQeFFSM51vHfqSYP1kjHs6Yi9TM3WpVHn3u6GBVv/9YUZINJ0gpnIdsPNWNg
KCLjsZWDzYWm3S8P52dSbrsvhXz1SnPnxT7AvSESBT/8twNJAlvIJebiVDj1eYeM
HVOyToV7BjjHLPj4sHKNJeV3UvQDHEimUF+IIDBu8oJDqz2XhOdT+yHBTw8imoa4
WSr2Rz0ZiC3oheGe7IUIarFsNMkd7EgrO3jtZsSOeWmD3n+M
-----END CERTIFICATE-----

# Issuer: CN=QuoVadis Root CA 3 G3 O=QuoVadis Limited
# Subject: CN=QuoVadis Root CA 3 G3 O=QuoVadis Limited
# Label: "QuoVadis Root CA 3 G3"
# Serial: 268090761170461462463995952157327242137089239581
# MD5 Fingerprint: df:7d:b9:ad:54:6f:68:a1:df:89:57:03:97:43:b0:d7
# SHA1 Fingerprint: 48:12:bd:92:3c:a8:c4:39:06:e7:30:6d:27:96:e6:a4:cf:22:2e:7d
# SHA256 Fingerprint: 88:ef:81:de:20:2e:b0:18:45:2e:43:f8:64:72:5c:ea:5f:bd:1f:c2:d9:d2:05:73:07:09:c5:d8:b8:69:0f:46
-----BEGIN CERTIFICATE-----
MIIFYDCCA0igAwIBAgIULvWbAiin23r/1aOp7r0DoM8Sah0wDQYJKoZIhvcNAQEL
BQAwSDELMAkGA1UEBhMCQk0xGTAXBgNVBAoTEFF1b1ZhZGlzIExpbWl0ZWQxHjAc
BgNVBAMTFVF1b1ZhZGlzIFJvb3QgQ0EgMyBHMzAeFw0xMjAxMTIyMDI2MzJaFw00
MjAxMTIyMDI2MzJaMEgxCzAJBgNVBAYTAkJNMRkwFwYDVQQKExBRdW9WYWRpcyBM
aW1pdGVkMR4wHAYDVQQDExVRdW9WYWRpcyBSb290IENBIDMgRzMwggIiMA0GCSqG
SIb3DQEBAQUAA4ICDwAwggIKAoICAQCzyw4QZ47qFJenMioKVjZ/aEzHs286IxSR
/xl/pcqs7rN2nXrpixurazHb+gtTTK/FpRp5PIpM/6zfJd5O2YIyC0TeytuMrKNu
FoM7pmRLMon7FhY4futD4tN0SsJiCnMK3UmzV9KwCoWdcTzeo8vAMvMBOSBDGzXR
U7Ox7sWTaYI+FrUoRqHe6okJ7UO4BUaKhvVZR74bbwEhELn9qdIoyhA5CcoTNs+c
ra1AdHkrAj80//ogaX3T7mH1urPnMNA3I4ZyYUUpSFlob3emLoG+B01vr87ERROR
FHAGjx+f+IdpsQ7vw4kZ6+ocYfx6bIrc1gMLnia6Et3UVDmrJqMz6nWB2i3ND0/k
A9HvFZcba5DFApCTZgIhsUfei5pKgLlVj7WiL8DWM2fafsSntARE60f75li59wzw
eyuxwHApw0BiLTtIadwjPEjrewl5qW3aqDCYz4ByA4imW0aucnl8CAMhZa634Ryl
sSqiMd5mBPfAdOhx3v89WcyWJhKLhZVXGqtrdQtEPREoPHtht+KPZ0/l7DxMYIBp
VzgeAVuNVejH38DMdyM0SXV89pgR6y3e7UEuFAUCf+D+IOs15xGsIs5XPd7JMG0Q
A4XN8f+MFrXBsj6IbGB/kE+V9/YtrQE5BwT6dYB9v0lQ7e/JxHwc64B+27bQ3RP+
ydOc17KXqQIDAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIB
BjAdBgNVHQ4EFgQUxhfQvKjqAkPyGwaZXSuQILnXnOQwDQYJKoZIhvcNAQELBQAD
ggIBADRh2Va1EodVTd2jNTFGu6QHcrxfYWLopfsLN7E8trP6KZ1/AvWkyaiTt3px
KGmPc+FSkNrVvjrlt3ZqVoAh313m6Tqe5T72omnHKgqwGEfcIHB9UqM+WXzBusnI
FUBhynLWcKzSt/Ac5IYp8M7vaGPQtSCKFWGafoaYtMnCdvvMujAWzKNhxnQT5Wvv
oxXqA/4Ti2Tk08HS6IT7SdEQTXlm66r99I0xHnAUrdzeZxNMgRVhvLfZkXdxGYFg
u/BYpbWcC/ePIlUnwEsBbTuZDdQdm2NnL9DuDcpmvJRPpq3t/O5jrFc/ZSXPsoaP
0Aj/uHYUbt7lJ+yreLVTubY/6CD50qi+YUbKh4yE8/nxoGibIh6BJpsQBJFxwAYf
3KDTuVan45gtf4Od34wrnDKOMpTwATwiKp9Dwi7DmDkHOHv8XgBCH/MyJnmDhPbl
8MFREsALHgQjDFSlTC9JxUrRtm5gDWv8a4uFJGS3iQ6rJUdbPM9+Sb3H6QrG2vd+
DhcI00iX0HGS8A85PjRqHH3Y8iKuu2n0M7SmSFXRDw4m6Oy2Cy2nhTXN/VnIn9HN
PlopNLk9hM6xZdRZkZFWdSHBd575euFgndOtBBj0fOtek49TSiIp+EgrPk2GrFt/
ywaZWWDYWGWVjUTR939+J399roD1B0y2PpxxVJkES/1Y+Zj0
-----END CERTIFICATE-----

# Issuer: CN=DigiCert Assured ID Root G2 O=DigiCert Inc OU=www.digicert.com
# Subject: CN=DigiCert Assured ID Root G2 O=DigiCert Inc OU=www.digicert.com
# Label: "DigiCert Assured ID Root G2"
# Serial: 15385348160840213938643033620894905419
# MD5 Fingerprint: 92:38:b9:f8:63:24:82:65:2c:57:33:e6:fe:81:8f:9d
# SHA1 Fingerprint: a1:4b:48:d9:43:ee:0a:0e:40:90:4f:3c:e0:a4:c0:91:93:51:5d:3f
# SHA256 Fingerprint: 7d:05:eb:b6:82:33:9f:8c:94:51:ee:09:4e:eb:fe:fa:79:53:a1:14:ed:b2:f4:49:49:45:2f:ab:7d:2f:c1:85
-----BEGIN CERTIFICATE-----
MIIDljCCAn6gAwIBAgIQC5McOtY5Z+pnI7/Dr5r0SzANBgkqhkiG9w0BAQsFADBl
MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3
d3cuZGlnaWNlcnQuY29tMSQwIgYDVQQDExtEaWdpQ2VydCBBc3N1cmVkIElEIFJv
b3QgRzIwHhcNMTMwODAxMTIwMDAwWhcNMzgwMTE1MTIwMDAwWjBlMQswCQYDVQQG
EwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3d3cuZGlnaWNl
cnQuY29tMSQwIgYDVQQDExtEaWdpQ2VydCBBc3N1cmVkIElEIFJvb3QgRzIwggEi
MA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDZ5ygvUj82ckmIkzTz+GoeMVSA
n61UQbVH35ao1K+ALbkKz3X9iaV9JPrjIgwrvJUXCzO/GU1BBpAAvQxNEP4Htecc
biJVMWWXvdMX0h5i89vqbFCMP4QMls+3ywPgym2hFEwbid3tALBSfK+RbLE4E9Hp
EgjAALAcKxHad3A2m67OeYfcgnDmCXRwVWmvo2ifv922ebPynXApVfSr/5Vh88lA
bx3RvpO704gqu52/clpWcTs/1PPRCv4o76Pu2ZmvA9OPYLfykqGxvYmJHzDNw6Yu
YjOuFgJ3RFrngQo8p0Quebg/BLxcoIfhG69Rjs3sLPr4/m3wOnyqi+RnlTGNAgMB
AAGjQjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgGGMB0GA1UdDgQW
BBTOw0q5mVXyuNtgv6l+vVa1lzan1jANBgkqhkiG9w0BAQsFAAOCAQEAyqVVjOPI
QW5pJ6d1Ee88hjZv0p3GeDgdaZaikmkuOGybfQTUiaWxMTeKySHMq2zNixya1r9I
0jJmwYrA8y8678Dj1JGG0VDjA9tzd29KOVPt3ibHtX2vK0LRdWLjSisCx1BL4Gni
lmwORGYQRI+tBev4eaymG+g3NJ1TyWGqolKvSnAWhsI6yLETcDbYz+70CjTVW0z9
B5yiutkBclzzTcHdDrEcDcRjvq30FPuJ7KJBDkzMyFdA0G4Dqs0MjomZmWzwPDCv
ON9vvKO+KSAnq3T/EyJ43pdSVR6DtVQgA+6uwE9W3jfMw3+qBCe703e4YtsXfJwo
IhNzbM8m9Yop5w==
-----END CERTIFICATE-----

# Issuer: CN=DigiCert Assured ID Root G3 O=DigiCert Inc OU=www.digicert.com
# Subject: CN=DigiCert Assured ID Root G3 O=DigiCert Inc OU=www.digicert.com
# Label: "DigiCert Assured ID Root G3"
# Serial: 15459312981008553731928384953135426796
# MD5 Fingerprint: 7c:7f:65:31:0c:81:df:8d:ba:3e:99:e2:5c:ad:6e:fb
# SHA1 Fingerprint: f5:17:a2:4f:9a:48:c6:c9:f8:a2:00:26:9f:dc:0f:48:2c:ab:30:89
# SHA256 Fingerprint: 7e:37:cb:8b:4c:47:09:0c:ab:36:55:1b:a6:f4:5d:b8:40:68:0f:ba:16:6a:95:2d:b1:00:71:7f:43:05:3f:c2
-----BEGIN CERTIFICATE-----
MIICRjCCAc2gAwIBAgIQC6Fa+h3foLVJRK/NJKBs7DAKBggqhkjOPQQDAzBlMQsw
CQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3d3cu
ZGlnaWNlcnQuY29tMSQwIgYDVQQDExtEaWdpQ2VydCBBc3N1cmVkIElEIFJvb3Qg
RzMwHhcNMTMwODAxMTIwMDAwWhcNMzgwMTE1MTIwMDAwWjBlMQswCQYDVQQGEwJV
UzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3d3cuZGlnaWNlcnQu
Y29tMSQwIgYDVQQDExtEaWdpQ2VydCBBc3N1cmVkIElEIFJvb3QgRzMwdjAQBgcq
hkjOPQIBBgUrgQQAIgNiAAQZ57ysRGXtzbg/WPuNsVepRC0FFfLvC/8QdJ+1YlJf
Zn4f5dwbRXkLzMZTCp2NXQLZqVneAlr2lSoOjThKiknGvMYDOAdfVdp+CW7if17Q
RSAPWXYQ1qAk8C3eNvJsKTmjQjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/
BAQDAgGGMB0GA1UdDgQWBBTL0L2p4ZgFUaFNN6KDec6NHSrkhDAKBggqhkjOPQQD
AwNnADBkAjAlpIFFAmsSS3V0T8gj43DydXLefInwz5FyYZ5eEJJZVrmDxxDnOOlY
JjZ91eQ0hjkCMHw2U/Aw5WJjOpnitqM7mzT6HtoQknFekROn3aRukswy1vUhZscv
6pZjamVFkpUBtA==
-----END CERTIFICATE-----

# Issuer: CN=DigiCert Global Root G2 O=DigiCert Inc OU=www.digicert.com
# Subject: CN=DigiCert Global Root G2 O=DigiCert Inc OU=www.digicert.com
# Label: "DigiCert Global Root G2"
# Serial: 4293743540046975378534879503202253541
# MD5 Fingerprint: e4:a6:8a:c8:54:ac:52:42:46:0a:fd:72:48:1b:2a:44
# SHA1 Fingerprint: df:3c:24:f9:bf:d6:66:76:1b:26:80:73:fe:06:d1:cc:8d:4f:82:a4
# SHA256 Fingerprint: cb:3c:cb:b7:60:31:e5:e0:13:8f:8d:d3:9a:23:f9:de:47:ff:c3:5e:43:c1:14:4c:ea:27:d4:6a:5a:b1:cb:5f
-----BEGIN CERTIFICATE-----
MIIDjjCCAnagAwIBAgIQAzrx5qcRqaC7KGSxHQn65TANBgkqhkiG9w0BAQsFADBh
MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3
d3cuZGlnaWNlcnQuY29tMSAwHgYDVQQDExdEaWdpQ2VydCBHbG9iYWwgUm9vdCBH
MjAeFw0xMzA4MDExMjAwMDBaFw0zODAxMTUxMjAwMDBaMGExCzAJBgNVBAYTAlVT
MRUwEwYDVQQKEwxEaWdpQ2VydCBJbmMxGTAXBgNVBAsTEHd3dy5kaWdpY2VydC5j
b20xIDAeBgNVBAMTF0RpZ2lDZXJ0IEdsb2JhbCBSb290IEcyMIIBIjANBgkqhkiG
9w0BAQEFAAOCAQ8AMIIBCgKCAQEAuzfNNNx7a8myaJCtSnX/RrohCgiN9RlUyfuI
2/Ou8jqJkTx65qsGGmvPrC3oXgkkRLpimn7Wo6h+4FR1IAWsULecYxpsMNzaHxmx
1x7e/dfgy5SDN67sH0NO3Xss0r0upS/kqbitOtSZpLYl6ZtrAGCSYP9PIUkY92eQ
q2EGnI/yuum06ZIya7XzV+hdG82MHauVBJVJ8zUtluNJbd134/tJS7SsVQepj5Wz
tCO7TG1F8PapspUwtP1MVYwnSlcUfIKdzXOS0xZKBgyMUNGPHgm+F6HmIcr9g+UQ
vIOlCsRnKPZzFBQ9RnbDhxSJITRNrw9FDKZJobq7nMWxM4MphQIDAQABo0IwQDAP
BgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIBhjAdBgNVHQ4EFgQUTiJUIBiV
5uNu5g/6+rkS7QYXjzkwDQYJKoZIhvcNAQELBQADggEBAGBnKJRvDkhj6zHd6mcY
1Yl9PMWLSn/pvtsrF9+wX3N3KjITOYFnQoQj8kVnNeyIv/iPsGEMNKSuIEyExtv4
NeF22d+mQrvHRAiGfzZ0JFrabA0UWTW98kndth/Jsw1HKj2ZL7tcu7XUIOGZX1NG
Fdtom/DzMNU+MeKNhJ7jitralj41E6Vf8PlwUHBHQRFXGU7Aj64GxJUTFy8bJZ91
8rGOmaFvE7FBcf6IKshPECBV1/MUReXgRPTqh5Uykw7+U0b6LJ3/iyK5S9kJRaTe
pLiaWN0bfVKfjllDiIGknibVb63dDcY3fe0Dkhvld1927jyNxF1WW6LZZm6zNTfl
MrY=
-----END CERTIFICATE-----

# Issuer: CN=DigiCert Global Root G3 O=DigiCert Inc OU=www.digicert.com
# Subject: CN=DigiCert Global Root G3 O=DigiCert Inc OU=www.digicert.com
# Label: "DigiCert Global Root G3"
# Serial: 7089244469030293291760083333884364146
# MD5 Fingerprint: f5:5d:a4:50:a5:fb:28:7e:1e:0f:0d:cc:96:57:56:ca
# SHA1 Fingerprint: 7e:04:de:89:6a:3e:66:6d:00:e6:87:d3:3f:fa:d9:3b:e8:3d:34:9e
# SHA256 Fingerprint: 31:ad:66:48:f8:10:41:38:c7:38:f3:9e:a4:32:01:33:39:3e:3a:18:cc:02:29:6e:f9:7c:2a:c9:ef:67:31:d0
-----BEGIN CERTIFICATE-----
MIICPzCCAcWgAwIBAgIQBVVWvPJepDU1w6QP1atFcjAKBggqhkjOPQQDAzBhMQsw
CQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3d3cu
ZGlnaWNlcnQuY29tMSAwHgYDVQQDExdEaWdpQ2VydCBHbG9iYWwgUm9vdCBHMzAe
Fw0xMzA4MDExMjAwMDBaFw0zODAxMTUxMjAwMDBaMGExCzAJBgNVBAYTAlVTMRUw
EwYDVQQKEwxEaWdpQ2VydCBJbmMxGTAXBgNVBAsTEHd3dy5kaWdpY2VydC5jb20x
IDAeBgNVBAMTF0RpZ2lDZXJ0IEdsb2JhbCBSb290IEczMHYwEAYHKoZIzj0CAQYF
K4EEACIDYgAE3afZu4q4C/sLfyHS8L6+c/MzXRq8NOrexpu80JX28MzQC7phW1FG
fp4tn+6OYwwX7Adw9c+ELkCDnOg/QW07rdOkFFk2eJ0DQ+4QE2xy3q6Ip6FrtUPO
Z9wj/wMco+I+o0IwQDAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIBhjAd
BgNVHQ4EFgQUs9tIpPmhxdiuNkHMEWNpYim8S8YwCgYIKoZIzj0EAwMDaAAwZQIx
AK288mw/EkrRLTnDCgmXc/SINoyIJ7vmiI1Qhadj+Z4y3maTD/HMsQmP3Wyr+mt/
oAIwOWZbwmSNuJ5Q3KjVSaLtx9zRSX8XAbjIho9OjIgrqJqpisXRAL34VOKa5Vt8
sycX
-----END CERTIFICATE-----

# Issuer: CN=DigiCert Trusted Root G4 O=DigiCert Inc OU=www.digicert.com
# Subject: CN=DigiCert Trusted Root G4 O=DigiCert Inc OU=www.digicert.com
# Label: "DigiCert Trusted Root G4"
# Serial: 7451500558977370777930084869016614236
# MD5 Fingerprint: 78:f2:fc:aa:60:1f:2f:b4:eb:c9:37:ba:53:2e:75:49
# SHA1 Fingerprint: dd:fb:16:cd:49:31:c9:73:a2:03:7d:3f:c8:3a:4d:7d:77:5d:05:e4
# SHA256 Fingerprint: 55:2f:7b:dc:f1:a7:af:9e:6c:e6:72:01:7f:4f:12:ab:f7:72:40:c7:8e:76:1a:c2:03:d1:d9:d2:0a:c8:99:88
-----BEGIN CERTIFICATE-----
MIIFkDCCA3igAwIBAgIQBZsbV56OITLiOQe9p3d1XDANBgkqhkiG9w0BAQwFADBi
MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3
d3cuZGlnaWNlcnQuY29tMSEwHwYDVQQDExhEaWdpQ2VydCBUcnVzdGVkIFJvb3Qg
RzQwHhcNMTMwODAxMTIwMDAwWhcNMzgwMTE1MTIwMDAwWjBiMQswCQYDVQQGEwJV
UzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3d3cuZGlnaWNlcnQu
Y29tMSEwHwYDVQQDExhEaWdpQ2VydCBUcnVzdGVkIFJvb3QgRzQwggIiMA0GCSqG
SIb3DQEBAQUAA4ICDwAwggIKAoICAQC/5pBzaN675F1KPDAiMGkz7MKnJS7JIT3y
ithZwuEppz1Yq3aaza57G4QNxDAf8xukOBbrVsaXbR2rsnnyyhHS5F/WBTxSD1If
xp4VpX6+n6lXFllVcq9ok3DCsrp1mWpzMpTREEQQLt+C8weE5nQ7bXHiLQwb7iDV
ySAdYyktzuxeTsiT+CFhmzTrBcZe7FsavOvJz82sNEBfsXpm7nfISKhmV1efVFiO
DCu3T6cw2Vbuyntd463JT17lNecxy9qTXtyOj4DatpGYQJB5w3jHtrHEtWoYOAMQ
jdjUN6QuBX2I9YI+EJFwq1WCQTLX2wRzKm6RAXwhTNS8rhsDdV14Ztk6MUSaM0C/
CNdaSaTC5qmgZ92kJ7yhTzm1EVgX9yRcRo9k98FpiHaYdj1ZXUJ2h4mXaXpI8OCi
EhtmmnTK3kse5w5jrubU75KSOp493ADkRSWJtppEGSt+wJS00mFt6zPZxd9LBADM
fRyVw4/3IbKyEbe7f/LVjHAsQWCqsWMYRJUadmJ+9oCw++hkpjPRiQfhvbfmQ6QY
uKZ3AeEPlAwhHbJUKSWJbOUOUlFHdL4mrLZBdd56rF+NP8m800ERElvlEFDrMcXK
chYiCd98THU/Y+whX8QgUWtvsauGi0/C1kVfnSD8oR7FwI+isX4KJpn15GkvmB0t
9dmpsh3lGwIDAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIB
hjAdBgNVHQ4EFgQU7NfjgtJxXWRM3y5nP+e6mK4cD08wDQYJKoZIhvcNAQEMBQAD
ggIBALth2X2pbL4XxJEbw6GiAI3jZGgPVs93rnD5/ZpKmbnJeFwMDF/k5hQpVgs2
SV1EY+CtnJYYZhsjDT156W1r1lT40jzBQ0CuHVD1UvyQO7uYmWlrx8GnqGikJ9yd
+SeuMIW59mdNOj6PWTkiU0TryF0Dyu1Qen1iIQqAyHNm0aAFYF/opbSnr6j3bTWc
fFqK1qI4mfN4i/RN0iAL3gTujJtHgXINwBQy7zBZLq7gcfJW5GqXb5JQbZaNaHqa
sjYUegbyJLkJEVDXCLG4iXqEI2FCKeWjzaIgQdfRnGTZ6iahixTXTBmyUEFxPT9N
cCOGDErcgdLMMpSEDQgJlxxPwO5rIHQw0uA5NBCFIRUBCOhVMt5xSdkoF1BN5r5N
0XWs0Mr7QbhDparTwwVETyw2m+L64kW4I1NsBm9nVX9GtUw/bihaeSbSpKhil9Ie
4u1Ki7wb/UdKDd9nZn6yW0HQO+T0O/QEY+nvwlQAUaCKKsnOeMzV6ocEGLPOr0mI
r/OSmbaz5mEP0oUA51Aa5BuVnRmhuZyxm7EAHu/QD09CbMkKvO5D+jpxpchNJqU1
/YldvIViHTLSoCtU7ZpXwdv6EM8Zt4tKG48BtieVU+i2iW1bvGjUI+iLUaJW+fCm
gKDWHrO8Dw9TdSmq6hN35N6MgSGtBxBHEa2HPQfRdbzP82Z+
-----END CERTIFICATE-----

# Issuer: CN=COMODO RSA Certification Authority O=COMODO CA Limited
# Subject: CN=COMODO RSA Certification Authority O=COMODO CA Limited
# Label: "COMODO RSA Certification Authority"
# Serial: 101909084537582093308941363524873193117
# MD5 Fingerprint: 1b:31:b0:71:40:36:cc:14:36:91:ad:c4:3e:fd:ec:18
# SHA1 Fingerprint: af:e5:d2:44:a8:d1:19:42:30:ff:47:9f:e2:f8:97:bb:cd:7a:8c:b4
# SHA256 Fingerprint: 52:f0:e1:c4:e5:8e:c6:29:29:1b:60:31:7f:07:46:71:b8:5d:7e:a8:0d:5b:07:27:34:63:53:4b:32:b4:02:34
-----BEGIN CERTIFICATE-----
MIIF2DCCA8CgAwIBAgIQTKr5yttjb+Af907YWwOGnTANBgkqhkiG9w0BAQwFADCB
hTELMAkGA1UEBhMCR0IxGzAZBgNVBAgTEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4G
A1UEBxMHU2FsZm9yZDEaMBgGA1UEChMRQ09NT0RPIENBIExpbWl0ZWQxKzApBgNV
BAMTIkNPTU9ETyBSU0EgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMTAwMTE5
MDAwMDAwWhcNMzgwMTE4MjM1OTU5WjCBhTELMAkGA1UEBhMCR0IxGzAZBgNVBAgT
EkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4GA1UEBxMHU2FsZm9yZDEaMBgGA1UEChMR
Q09NT0RPIENBIExpbWl0ZWQxKzApBgNVBAMTIkNPTU9ETyBSU0EgQ2VydGlmaWNh
dGlvbiBBdXRob3JpdHkwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCR
6FSS0gpWsawNJN3Fz0RndJkrN6N9I3AAcbxT38T6KhKPS38QVr2fcHK3YX/JSw8X
pz3jsARh7v8Rl8f0hj4K+j5c+ZPmNHrZFGvnnLOFoIJ6dq9xkNfs/Q36nGz637CC
9BR++b7Epi9Pf5l/tfxnQ3K9DADWietrLNPtj5gcFKt+5eNu/Nio5JIk2kNrYrhV
/erBvGy2i/MOjZrkm2xpmfh4SDBF1a3hDTxFYPwyllEnvGfDyi62a+pGx8cgoLEf
Zd5ICLqkTqnyg0Y3hOvozIFIQ2dOciqbXL1MGyiKXCJ7tKuY2e7gUYPDCUZObT6Z
+pUX2nwzV0E8jVHtC7ZcryxjGt9XyD+86V3Em69FmeKjWiS0uqlWPc9vqv9JWL7w
qP/0uK3pN/u6uPQLOvnoQ0IeidiEyxPx2bvhiWC4jChWrBQdnArncevPDt09qZah
SL0896+1DSJMwBGB7FY79tOi4lu3sgQiUpWAk2nojkxl8ZEDLXB0AuqLZxUpaVIC
u9ffUGpVRr+goyhhf3DQw6KqLCGqR84onAZFdr+CGCe01a60y1Dma/RMhnEw6abf
Fobg2P9A3fvQQoh/ozM6LlweQRGBY84YcWsr7KaKtzFcOmpH4MN5WdYgGq/yapiq
crxXStJLnbsQ/LBMQeXtHT1eKJ2czL+zUdqnR+WEUwIDAQABo0IwQDAdBgNVHQ4E
FgQUu69+Aj36pvE8hI6t7jiY7NkyMtQwDgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB
/wQFMAMBAf8wDQYJKoZIhvcNAQEMBQADggIBAArx1UaEt65Ru2yyTUEUAJNMnMvl
wFTPoCWOAvn9sKIN9SCYPBMtrFaisNZ+EZLpLrqeLppysb0ZRGxhNaKatBYSaVqM
4dc+pBroLwP0rmEdEBsqpIt6xf4FpuHA1sj+nq6PK7o9mfjYcwlYRm6mnPTXJ9OV
2jeDchzTc+CiR5kDOF3VSXkAKRzH7JsgHAckaVd4sjn8OoSgtZx8jb8uk2Intzna
FxiuvTwJaP+EmzzV1gsD41eeFPfR60/IvYcjt7ZJQ3mFXLrrkguhxuhoqEwWsRqZ
CuhTLJK7oQkYdQxlqHvLI7cawiiFwxv/0Cti76R7CZGYZ4wUAc1oBmpjIXUDgIiK
boHGhfKppC3n9KUkEEeDys30jXlYsQab5xoq2Z0B15R97QNKyvDb6KkBPvVWmcke
jkk9u+UJueBPSZI9FoJAzMxZxuY67RIuaTxslbH9qh17f4a+Hg4yRvv7E491f0yL
S0Zj/gA0QHDBw7mh3aZw4gSzQbzpgJHqZJx64SIDqZxubw5lT2yHh17zbqD5daWb
QOhTsiedSrnAdyGN/4fy3ryM7xfft0kL0fJuMAsaDk527RH89elWsn2/x20Kk4yl
0MC2Hb46TpSi125sC8KKfPog88Tk5c0NqMuRkrF8hey1FGlmDoLnzc7ILaZRfyHB
NVOFBkpdn627G190
-----END CERTIFICATE-----

# Issuer: CN=USERTrust RSA Certification Authority O=The USERTRUST Network
# Subject: CN=USERTrust RSA Certification Authority O=The USERTRUST Network
# Label: "USERTrust RSA Certification Authority"
# Serial: 2645093764781058787591871645665788717
# MD5 Fingerprint: 1b:fe:69:d1:91:b7:19:33:a3:72:a8:0f:e1:55:e5:b5
# SHA1 Fingerprint: 2b:8f:1b:57:33:0d:bb:a2:d0:7a:6c:51:f7:0e:e9:0d:da:b9:ad:8e
# SHA256 Fingerprint: e7:93:c9:b0:2f:d8:aa:13:e2:1c:31:22:8a:cc:b0:81:19:64:3b:74:9c:89:89:64:b1:74:6d:46:c3:d4:cb:d2
-----BEGIN CERTIFICATE-----
MIIF3jCCA8agAwIBAgIQAf1tMPyjylGoG7xkDjUDLTANBgkqhkiG9w0BAQwFADCB
iDELMAkGA1UEBhMCVVMxEzARBgNVBAgTCk5ldyBKZXJzZXkxFDASBgNVBAcTC0pl
cnNleSBDaXR5MR4wHAYDVQQKExVUaGUgVVNFUlRSVVNUIE5ldHdvcmsxLjAsBgNV
BAMTJVVTRVJUcnVzdCBSU0EgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMTAw
MjAxMDAwMDAwWhcNMzgwMTE4MjM1OTU5WjCBiDELMAkGA1UEBhMCVVMxEzARBgNV
BAgTCk5ldyBKZXJzZXkxFDASBgNVBAcTC0plcnNleSBDaXR5MR4wHAYDVQQKExVU
aGUgVVNFUlRSVVNUIE5ldHdvcmsxLjAsBgNVBAMTJVVTRVJUcnVzdCBSU0EgQ2Vy
dGlmaWNhdGlvbiBBdXRob3JpdHkwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIK
AoICAQCAEmUXNg7D2wiz0KxXDXbtzSfTTK1Qg2HiqiBNCS1kCdzOiZ/MPans9s/B
3PHTsdZ7NygRK0faOca8Ohm0X6a9fZ2jY0K2dvKpOyuR+OJv0OwWIJAJPuLodMkY
tJHUYmTbf6MG8YgYapAiPLz+E/CHFHv25B+O1ORRxhFnRghRy4YUVD+8M/5+bJz/
Fp0YvVGONaanZshyZ9shZrHUm3gDwFA66Mzw3LyeTP6vBZY1H1dat//O+T23LLb2
VN3I5xI6Ta5MirdcmrS3ID3KfyI0rn47aGYBROcBTkZTmzNg95S+UzeQc0PzMsNT
79uq/nROacdrjGCT3sTHDN/hMq7MkztReJVni+49Vv4M0GkPGw/zJSZrM233bkf6
c0Plfg6lZrEpfDKEY1WJxA3Bk1QwGROs0303p+tdOmw1XNtB1xLaqUkL39iAigmT
Yo61Zs8liM2EuLE/pDkP2QKe6xJMlXzzawWpXhaDzLhn4ugTncxbgtNMs+1b/97l
c6wjOy0AvzVVdAlJ2ElYGn+SNuZRkg7zJn0cTRe8yexDJtC/QV9AqURE9JnnV4ee
UB9XVKg+/XRjL7FQZQnmWEIuQxpMtPAlR1n6BB6T1CZGSlCBst6+eLf8ZxXhyVeE
Hg9j1uliutZfVS7qXMYoCAQlObgOK6nyTJccBz8NUvXt7y+CDwIDAQABo0IwQDAd
BgNVHQ4EFgQUU3m/WqorSs9UgOHYm8Cd8rIDZsswDgYDVR0PAQH/BAQDAgEGMA8G
A1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQEMBQADggIBAFzUfA3P9wF9QZllDHPF
Up/L+M+ZBn8b2kMVn54CVVeWFPFSPCeHlCjtHzoBN6J2/FNQwISbxmtOuowhT6KO
VWKR82kV2LyI48SqC/3vqOlLVSoGIG1VeCkZ7l8wXEskEVX/JJpuXior7gtNn3/3
ATiUFJVDBwn7YKnuHKsSjKCaXqeYalltiz8I+8jRRa8YFWSQEg9zKC7F4iRO/Fjs
8PRF/iKz6y+O0tlFYQXBl2+odnKPi4w2r78NBc5xjeambx9spnFixdjQg3IM8WcR
iQycE0xyNN+81XHfqnHd4blsjDwSXWXavVcStkNr/+XeTWYRUc+ZruwXtuhxkYze
Sf7dNXGiFSeUHM9h4ya7b6NnJSFd5t0dCy5oGzuCr+yDZ4XUmFF0sbmZgIn/f3gZ
XHlKYC6SQK5MNyosycdiyA5d9zZbyuAlJQG03RoHnHcAP9Dc1ew91Pq7P8yF1m9/
qS3fuQL39ZeatTXaw2ewh0qpKJ4jjv9cJ2vhsE/zB+4ALtRZh8tSQZXq9EfX7mRB
VXyNWQKV3WKdwrnuWih0hKWbt5DHDAff9Yk2dDLWKMGwsAvgnEzDHNb842m1R0aB
L6KCq9NjRHDEjf8tM7qtj3u1cIiuPhnPQCjY/MiQu12ZIvVS5ljFH4gxQ+6IHdfG
jjxDah2nGN59PRbxYvnKkKj9
-----END CERTIFICATE-----

# Issuer: CN=USERTrust ECC Certification Authority O=The USERTRUST Network
# Subject: CN=USERTrust ECC Certification Authority O=The USERTRUST Network
# Label: "USERTrust ECC Certification Authority"
# Serial: 123013823720199481456569720443997572134
# MD5 Fingerprint: fa:68:bc:d9:b5:7f:ad:fd:c9:1d:06:83:28:cc:24:c1
# SHA1 Fingerprint: d1:cb:ca:5d:b2:d5:2a:7f:69:3b:67:4d:e5:f0:5a:1d:0c:95:7d:f0
# SHA256 Fingerprint: 4f:f4:60:d5:4b:9c:86:da:bf:bc:fc:57:12:e0:40:0d:2b:ed:3f:bc:4d:4f:bd:aa:86:e0:6a:dc:d2:a9:ad:7a
-----BEGIN CERTIFICATE-----
MIICjzCCAhWgAwIBAgIQXIuZxVqUxdJxVt7NiYDMJjAKBggqhkjOPQQDAzCBiDEL
MAkGA1UEBhMCVVMxEzARBgNVBAgTCk5ldyBKZXJzZXkxFDASBgNVBAcTC0plcnNl
eSBDaXR5MR4wHAYDVQQKExVUaGUgVVNFUlRSVVNUIE5ldHdvcmsxLjAsBgNVBAMT
JVVTRVJUcnVzdCBFQ0MgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkwHhcNMTAwMjAx
MDAwMDAwWhcNMzgwMTE4MjM1OTU5WjCBiDELMAkGA1UEBhMCVVMxEzARBgNVBAgT
Ck5ldyBKZXJzZXkxFDASBgNVBAcTC0plcnNleSBDaXR5MR4wHAYDVQQKExVUaGUg
VVNFUlRSVVNUIE5ldHdvcmsxLjAsBgNVBAMTJVVTRVJUcnVzdCBFQ0MgQ2VydGlm
aWNhdGlvbiBBdXRob3JpdHkwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAAQarFRaqflo
I+d61SRvU8Za2EurxtW20eZzca7dnNYMYf3boIkDuAUU7FfO7l0/4iGzzvfUinng
o4N+LZfQYcTxmdwlkWOrfzCjtHDix6EznPO/LlxTsV+zfTJ/ijTjeXmjQjBAMB0G
A1UdDgQWBBQ64QmG1M8ZwpZ2dEl23OA1xmNjmjAOBgNVHQ8BAf8EBAMCAQYwDwYD
VR0TAQH/BAUwAwEB/zAKBggqhkjOPQQDAwNoADBlAjA2Z6EWCNzklwBBHU6+4WMB
zzuqQhFkoJ2UOQIReVx7Hfpkue4WQrO/isIJxOzksU0CMQDpKmFHjFJKS04YcPbW
RNZu9YO6bVi9JNlWSOrvxKJGgYhqOkbRqZtNyWHa0V1Xahg=
-----END CERTIFICATE-----

# Issuer: CN=GlobalSign O=GlobalSign OU=GlobalSign ECC Root CA - R5
# Subject: CN=GlobalSign O=GlobalSign OU=GlobalSign ECC Root CA - R5
# Label: "GlobalSign ECC Root CA - R5"
# Serial: 32785792099990507226680698011560947931244
# MD5 Fingerprint: 9f:ad:3b:1c:02:1e:8a:ba:17:74:38:81:0c:a2:bc:08
# SHA1 Fingerprint: 1f:24:c6:30:cd:a4:18:ef:20:69:ff:ad:4f:dd:5f:46:3a:1b:69:aa
# SHA256 Fingerprint: 17:9f:bc:14:8a:3d:d0:0f:d2:4e:a1:34:58:cc:43:bf:a7:f5:9c:81:82:d7:83:a5:13:f6:eb:ec:10:0c:89:24
-----BEGIN CERTIFICATE-----
MIICHjCCAaSgAwIBAgIRYFlJ4CYuu1X5CneKcflK2GwwCgYIKoZIzj0EAwMwUDEk
MCIGA1UECxMbR2xvYmFsU2lnbiBFQ0MgUm9vdCBDQSAtIFI1MRMwEQYDVQQKEwpH
bG9iYWxTaWduMRMwEQYDVQQDEwpHbG9iYWxTaWduMB4XDTEyMTExMzAwMDAwMFoX
DTM4MDExOTAzMTQwN1owUDEkMCIGA1UECxMbR2xvYmFsU2lnbiBFQ0MgUm9vdCBD
QSAtIFI1MRMwEQYDVQQKEwpHbG9iYWxTaWduMRMwEQYDVQQDEwpHbG9iYWxTaWdu
MHYwEAYHKoZIzj0CAQYFK4EEACIDYgAER0UOlvt9Xb/pOdEh+J8LttV7HpI6SFkc
8GIxLcB6KP4ap1yztsyX50XUWPrRd21DosCHZTQKH3rd6zwzocWdTaRvQZU4f8ke
hOvRnkmSh5SHDDqFSmafnVmTTZdhBoZKo0IwQDAOBgNVHQ8BAf8EBAMCAQYwDwYD
VR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUPeYpSJvqB8ohREom3m7e0oPQn1kwCgYI
KoZIzj0EAwMDaAAwZQIxAOVpEslu28YxuglB4Zf4+/2a4n0Sye18ZNPLBSWLVtmg
515dTguDnFt2KaAJJiFqYgIwcdK1j1zqO+F4CYWodZI7yFz9SO8NdCKoCOJuxUnO
xwy8p2Fp8fc74SrL+SvzZpA3
-----END CERTIFICATE-----

# Issuer: CN=IdenTrust Commercial Root CA 1 O=IdenTrust
# Subject: CN=IdenTrust Commercial Root CA 1 O=IdenTrust
# Label: "IdenTrust Commercial Root CA 1"
# Serial: 13298821034946342390520003877796839426
# MD5 Fingerprint: b3:3e:77:73:75:ee:a0:d3:e3:7e:49:63:49:59:bb:c7
# SHA1 Fingerprint: df:71:7e:aa:4a:d9:4e:c9:55:84:99:60:2d:48:de:5f:bc:f0:3a:25
# SHA256 Fingerprint: 5d:56:49:9b:e4:d2:e0:8b:cf:ca:d0:8a:3e:38:72:3d:50:50:3b:de:70:69:48:e4:2f:55:60:30:19:e5:28:ae
-----BEGIN CERTIFICATE-----
MIIFYDCCA0igAwIBAgIQCgFCgAAAAUUjyES1AAAAAjANBgkqhkiG9w0BAQsFADBK
MQswCQYDVQQGEwJVUzESMBAGA1UEChMJSWRlblRydXN0MScwJQYDVQQDEx5JZGVu
VHJ1c3QgQ29tbWVyY2lhbCBSb290IENBIDEwHhcNMTQwMTE2MTgxMjIzWhcNMzQw
MTE2MTgxMjIzWjBKMQswCQYDVQQGEwJVUzESMBAGA1UEChMJSWRlblRydXN0MScw
JQYDVQQDEx5JZGVuVHJ1c3QgQ29tbWVyY2lhbCBSb290IENBIDEwggIiMA0GCSqG
SIb3DQEBAQUAA4ICDwAwggIKAoICAQCnUBneP5k91DNG8W9RYYKyqU+PZ4ldhNlT
3Qwo2dfw/66VQ3KZ+bVdfIrBQuExUHTRgQ18zZshq0PirK1ehm7zCYofWjK9ouuU
+ehcCuz/mNKvcbO0U59Oh++SvL3sTzIwiEsXXlfEU8L2ApeN2WIrvyQfYo3fw7gp
S0l4PJNgiCL8mdo2yMKi1CxUAGc1bnO/AljwpN3lsKImesrgNqUZFvX9t++uP0D1
bVoE/c40yiTcdCMbXTMTEl3EASX2MN0CXZ/g1Ue9tOsbobtJSdifWwLziuQkkORi
T0/Br4sOdBeo0XKIanoBScy0RnnGF7HamB4HWfp1IYVl3ZBWzvurpWCdxJ35UrCL
vYf5jysjCiN2O/cz4ckA82n5S6LgTrx+kzmEB/dEcH7+B1rlsazRGMzyNeVJSQjK
Vsk9+w8YfYs7wRPCTY/JTw436R+hDmrfYi7LNQZReSzIJTj0+kuniVyc0uMNOYZK
dHzVWYfCP04MXFL0PfdSgvHqo6z9STQaKPNBiDoT7uje/5kdX7rL6B7yuVBgwDHT
c+XvvqDtMwt0viAgxGds8AgDelWAf0ZOlqf0Hj7h9tgJ4TNkK2PXMl6f+cB7D3hv
l7yTmvmcEpB4eoCHFddydJxVdHixuuFucAS6T6C6aMN7/zHwcz09lCqxC0EOoP5N
iGVreTO01wIDAQABo0IwQDAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB
/zAdBgNVHQ4EFgQU7UQZwNPwBovupHu+QucmVMiONnYwDQYJKoZIhvcNAQELBQAD
ggIBAA2ukDL2pkt8RHYZYR4nKM1eVO8lvOMIkPkp165oCOGUAFjvLi5+U1KMtlwH
6oi6mYtQlNeCgN9hCQCTrQ0U5s7B8jeUeLBfnLOic7iPBZM4zY0+sLj7wM+x8uwt
LRvM7Kqas6pgghstO8OEPVeKlh6cdbjTMM1gCIOQ045U8U1mwF10A0Cj7oV+wh93
nAbowacYXVKV7cndJZ5t+qntozo00Fl72u1Q8zW/7esUTTHHYPTa8Yec4kjixsU3
+wYQ+nVZZjFHKdp2mhzpgq7vmrlR94gjmmmVYjzlVYA211QC//G5Xc7UI2/YRYRK
W2XviQzdFKcgyxilJbQN+QHwotL0AMh0jqEqSI5l2xPE4iUXfeu+h1sXIFRRk0pT
AwvsXcoz7WL9RccvW9xYoIA55vrX/hMUpu09lEpCdNTDd1lzzY9GvlU47/rokTLq
l1gEIt44w8y8bckzOmoKaT+gyOpyj4xjhiO9bTyWnpXgSUyqorkqG5w2gXjtw+hG
4iZZRHUe2XWJUc0QhJ1hYMtd+ZciTY6Y5uN/9lu7rs3KSoFrXgvzUeF0K+l+J6fZ
mUlO+KWA2yUPHGNiiskzZ2s8EIPGrd6ozRaOjfAHN3Gf8qv8QfXBi+wAN10J5U6A
7/qxXDgGpRtK4dw4LTzcqx+QGtVKnO7RcGzM7vRX+Bi6hG6H
-----END CERTIFICATE-----

# Issuer: CN=IdenTrust Public Sector Root CA 1 O=IdenTrust
# Subject: CN=IdenTrust Public Sector Root CA 1 O=IdenTrust
# Label: "IdenTrust Public Sector Root CA 1"
# Serial: 13298821034946342390521976156843933698
# MD5 Fingerprint: 37:06:a5:b0:fc:89:9d:ba:f4:6b:8c:1a:64:cd:d5:ba
# SHA1 Fingerprint: ba:29:41:60:77:98:3f:f4:f3:ef:f2:31:05:3b:2e:ea:6d:4d:45:fd
# SHA256 Fingerprint: 30:d0:89:5a:9a:44:8a:26:20:91:63:55:22:d1:f5:20:10:b5:86:7a:ca:e1:2c:78:ef:95:8f:d4:f4:38:9f:2f
-----BEGIN CERTIFICATE-----
MIIFZjCCA06gAwIBAgIQCgFCgAAAAUUjz0Z8AAAAAjANBgkqhkiG9w0BAQsFADBN
MQswCQYDVQQGEwJVUzESMBAGA1UEChMJSWRlblRydXN0MSowKAYDVQQDEyFJZGVu
VHJ1c3QgUHVibGljIFNlY3RvciBSb290IENBIDEwHhcNMTQwMTE2MTc1MzMyWhcN
MzQwMTE2MTc1MzMyWjBNMQswCQYDVQQGEwJVUzESMBAGA1UEChMJSWRlblRydXN0
MSowKAYDVQQDEyFJZGVuVHJ1c3QgUHVibGljIFNlY3RvciBSb290IENBIDEwggIi
MA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQC2IpT8pEiv6EdrCvsnduTyP4o7
ekosMSqMjbCpwzFrqHd2hCa2rIFCDQjrVVi7evi8ZX3yoG2LqEfpYnYeEe4IFNGy
RBb06tD6Hi9e28tzQa68ALBKK0CyrOE7S8ItneShm+waOh7wCLPQ5CQ1B5+ctMlS
bdsHyo+1W/CD80/HLaXIrcuVIKQxKFdYWuSNG5qrng0M8gozOSI5Cpcu81N3uURF
/YTLNiCBWS2ab21ISGHKTN9T0a9SvESfqy9rg3LvdYDaBjMbXcjaY8ZNzaxmMc3R
3j6HEDbhuaR672BQssvKplbgN6+rNBM5Jeg5ZuSYeqoSmJxZZoY+rfGwyj4GD3vw
EUs3oERte8uojHH01bWRNszwFcYr3lEXsZdMUD2xlVl8BX0tIdUAvwFnol57plzy
9yLxkA2T26pEUWbMfXYD62qoKjgZl3YNa4ph+bz27nb9cCvdKTz4Ch5bQhyLVi9V
GxyhLrXHFub4qjySjmm2AcG1hp2JDws4lFTo6tyePSW8Uybt1as5qsVATFSrsrTZ
2fjXctscvG29ZV/viDUqZi/u9rNl8DONfJhBaUYPQxxp+pu10GFqzcpL2UyQRqsV
WaFHVCkugyhfHMKiq3IXAAaOReyL4jM9f9oZRORicsPfIsbyVtTdX5Vy7W1f90gD
W/3FKqD2cyOEEBsB5wIDAQABo0IwQDAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/
BAUwAwEB/zAdBgNVHQ4EFgQU43HgntinQtnbcZFrlJPrw6PRFKMwDQYJKoZIhvcN
AQELBQADggIBAEf63QqwEZE4rU1d9+UOl1QZgkiHVIyqZJnYWv6IAcVYpZmxI1Qj
t2odIFflAWJBF9MJ23XLblSQdf4an4EKwt3X9wnQW3IV5B4Jaj0z8yGa5hV+rVHV
DRDtfULAj+7AmgjVQdZcDiFpboBhDhXAuM/FSRJSzL46zNQuOAXeNf0fb7iAaJg9
TaDKQGXSc3z1i9kKlT/YPyNtGtEqJBnZhbMX73huqVjRI9PHE+1yJX9dsXNw0H8G
lwmEKYBhHfpe/3OsoOOJuBxxFcbeMX8S3OFtm6/n6J91eEyrRjuazr8FGF1NFTwW
mhlQBJqymm9li1JfPFgEKCXAZmExfrngdbkaqIHWchezxQMxNRF4eKLg6TCMf4Df
WN88uieW4oA0beOY02QnrEh+KHdcxiVhJfiFDGX6xDIvpZgF5PgLZxYWxoK4Mhn5
+bl53B/N66+rDt0b20XkeucC4pVd/GnwU2lhlXV5C15V5jgclKlZM57IcXR5f1GJ
tshquDDIajjDbp7hNxbqBWJMWxJH7ae0s1hWx0nzfxJoCTFx8G34Tkf71oXuxVhA
GaQdp/lLQzfcaFpPz+vCZHTetBXZ9FRUGi8c15dxVJCO2SCdUyt/q4/i6jC8UDfv
8Ue1fXwsBOxonbRJRBD0ckscZOf85muQ3Wl9af0AVqW3rLatt8o+Ae+c
-----END CERTIFICATE-----

# Issuer: CN=Entrust Root Certification Authority - G2 O=Entrust, Inc. OU=See www.entrust.net/legal-terms/(c) 2009 Entrust, Inc. - for authorized use only
# Subject: CN=Entrust Root Certification Authority - G2 O=Entrust, Inc. OU=See www.entrust.net/legal-terms/(c) 2009 Entrust, Inc. - for authorized use only
# Label: "Entrust Root Certification Authority - G2"
# Serial: 1246989352
# MD5 Fingerprint: 4b:e2:c9:91:96:65:0c:f4:0e:5a:93:92:a0:0a:fe:b2
# SHA1 Fingerprint: 8c:f4:27:fd:79:0c:3a:d1:66:06:8d:e8:1e:57:ef:bb:93:22:72:d4
# SHA256 Fingerprint: 43:df:57:74:b0:3e:7f:ef:5f:e4:0d:93:1a:7b:ed:f1:bb:2e:6b:42:73:8c:4e:6d:38:41:10:3d:3a:a7:f3:39
-----BEGIN CERTIFICATE-----
MIIEPjCCAyagAwIBAgIESlOMKDANBgkqhkiG9w0BAQsFADCBvjELMAkGA1UEBhMC
VVMxFjAUBgNVBAoTDUVudHJ1c3QsIEluYy4xKDAmBgNVBAsTH1NlZSB3d3cuZW50
cnVzdC5uZXQvbGVnYWwtdGVybXMxOTA3BgNVBAsTMChjKSAyMDA5IEVudHJ1c3Qs
IEluYy4gLSBmb3IgYXV0aG9yaXplZCB1c2Ugb25seTEyMDAGA1UEAxMpRW50cnVz
dCBSb290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5IC0gRzIwHhcNMDkwNzA3MTcy
NTU0WhcNMzAxMjA3MTc1NTU0WjCBvjELMAkGA1UEBhMCVVMxFjAUBgNVBAoTDUVu
dHJ1c3QsIEluYy4xKDAmBgNVBAsTH1NlZSB3d3cuZW50cnVzdC5uZXQvbGVnYWwt
dGVybXMxOTA3BgNVBAsTMChjKSAyMDA5IEVudHJ1c3QsIEluYy4gLSBmb3IgYXV0
aG9yaXplZCB1c2Ugb25seTEyMDAGA1UEAxMpRW50cnVzdCBSb290IENlcnRpZmlj
YXRpb24gQXV0aG9yaXR5IC0gRzIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEK
AoIBAQC6hLZy254Ma+KZ6TABp3bqMriVQRrJ2mFOWHLP/vaCeb9zYQYKpSfYs1/T
RU4cctZOMvJyig/3gxnQaoCAAEUesMfnmr8SVycco2gvCoe9amsOXmXzHHfV1IWN
cCG0szLni6LVhjkCsbjSR87kyUnEO6fe+1R9V77w6G7CebI6C1XiUJgWMhNcL3hW
wcKUs/Ja5CeanyTXxuzQmyWC48zCxEXFjJd6BmsqEZ+pCm5IO2/b1BEZQvePB7/1
U1+cPvQXLOZprE4yTGJ36rfo5bs0vBmLrpxR57d+tVOxMyLlbc9wPBr64ptntoP0
jaWvYkxN4FisZDQSA/i2jZRjJKRxAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwIBBjAP
BgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBRqciZ60B7vfec7aVHUbI2fkBJmqzAN
BgkqhkiG9w0BAQsFAAOCAQEAeZ8dlsa2eT8ijYfThwMEYGprmi5ZiXMRrEPR9RP/
jTkrwPK9T3CMqS/qF8QLVJ7UG5aYMzyorWKiAHarWWluBh1+xLlEjZivEtRh2woZ
Rkfz6/djwUAFQKXSt/S1mja/qYh2iARVBCuch38aNzx+LaUa2NSJXsq9rD1s2G2v
1fN2D807iDginWyTmsQ9v4IbZT+mD12q/OWyFcq1rca8PdCE6OoGcrBNOTJ4vz4R
nAuknZoh8/CbCzB428Hch0P+vGOaysXCHMnHjf87ElgI5rY97HosTvuDls4MPGmH
VHOkc8KT/1EQrBVUAdj8BbGJoX90g5pJ19xOe4pIb4tF9g==
-----END CERTIFICATE-----

# Issuer: CN=Entrust Root Certification Authority - EC1 O=Entrust, Inc. OU=See www.entrust.net/legal-terms/(c) 2012 Entrust, Inc. - for authorized use only
# Subject: CN=Entrust Root Certification Authority - EC1 O=Entrust, Inc. OU=See www.entrust.net/legal-terms/(c) 2012 Entrust, Inc. - for authorized use only
# Label: "Entrust Root Certification Authority - EC1"
# Serial: 51543124481930649114116133369
# MD5 Fingerprint: b6:7e:1d:f0:58:c5:49:6c:24:3b:3d:ed:98:18:ed:bc
# SHA1 Fingerprint: 20:d8:06:40:df:9b:25:f5:12:25:3a:11:ea:f7:59:8a:eb:14:b5:47
# SHA256 Fingerprint: 02:ed:0e:b2:8c:14:da:45:16:5c:56:67:91:70:0d:64:51:d7:fb:56:f0:b2:ab:1d:3b:8e:b0:70:e5:6e:df:f5
-----BEGIN CERTIFICATE-----
MIIC+TCCAoCgAwIBAgINAKaLeSkAAAAAUNCR+TAKBggqhkjOPQQDAzCBvzELMAkG
A1UEBhMCVVMxFjAUBgNVBAoTDUVudHJ1c3QsIEluYy4xKDAmBgNVBAsTH1NlZSB3
d3cuZW50cnVzdC5uZXQvbGVnYWwtdGVybXMxOTA3BgNVBAsTMChjKSAyMDEyIEVu
dHJ1c3QsIEluYy4gLSBmb3IgYXV0aG9yaXplZCB1c2Ugb25seTEzMDEGA1UEAxMq
RW50cnVzdCBSb290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5IC0gRUMxMB4XDTEy
MTIxODE1MjUzNloXDTM3MTIxODE1NTUzNlowgb8xCzAJBgNVBAYTAlVTMRYwFAYD
VQQKEw1FbnRydXN0LCBJbmMuMSgwJgYDVQQLEx9TZWUgd3d3LmVudHJ1c3QubmV0
L2xlZ2FsLXRlcm1zMTkwNwYDVQQLEzAoYykgMjAxMiBFbnRydXN0LCBJbmMuIC0g
Zm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxMzAxBgNVBAMTKkVudHJ1c3QgUm9vdCBD
ZXJ0aWZpY2F0aW9uIEF1dGhvcml0eSAtIEVDMTB2MBAGByqGSM49AgEGBSuBBAAi
A2IABIQTydC6bUF74mzQ61VfZgIaJPRbiWlH47jCffHyAsWfoPZb1YsGGYZPUxBt
ByQnoaD41UcZYUx9ypMn6nQM72+WCf5j7HBdNq1nd67JnXxVRDqiY1Ef9eNi1KlH
Bz7MIKNCMEAwDgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0O
BBYEFLdj5xrdjekIplWDpOBqUEFlEUJJMAoGCCqGSM49BAMDA2cAMGQCMGF52OVC
R98crlOZF7ZvHH3hvxGU0QOIdeSNiaSKd0bebWHvAvX7td/M/k7//qnmpwIwW5nX
hTcGtXsI/esni0qU+eH6p44mCOh8kmhtc9hvJqwhAriZtyZBWyVgrtBIGu4G
-----END CERTIFICATE-----

# Issuer: CN=CFCA EV ROOT O=China Financial Certification Authority
# Subject: CN=CFCA EV ROOT O=China Financial Certification Authority
# Label: "CFCA EV ROOT"
# Serial: 407555286
# MD5 Fingerprint: 74:e1:b6:ed:26:7a:7a:44:30:33:94:ab:7b:27:81:30
# SHA1 Fingerprint: e2:b8:29:4b:55:84:ab:6b:58:c2:90:46:6c:ac:3f:b8:39:8f:84:83
# SHA256 Fingerprint: 5c:c3:d7:8e:4e:1d:5e:45:54:7a:04:e6:87:3e:64:f9:0c:f9:53:6d:1c:cc:2e:f8:00:f3:55:c4:c5:fd:70:fd
-----BEGIN CERTIFICATE-----
MIIFjTCCA3WgAwIBAgIEGErM1jANBgkqhkiG9w0BAQsFADBWMQswCQYDVQQGEwJD
TjEwMC4GA1UECgwnQ2hpbmEgRmluYW5jaWFsIENlcnRpZmljYXRpb24gQXV0aG9y
aXR5MRUwEwYDVQQDDAxDRkNBIEVWIFJPT1QwHhcNMTIwODA4MDMwNzAxWhcNMjkx
MjMxMDMwNzAxWjBWMQswCQYDVQQGEwJDTjEwMC4GA1UECgwnQ2hpbmEgRmluYW5j
aWFsIENlcnRpZmljYXRpb24gQXV0aG9yaXR5MRUwEwYDVQQDDAxDRkNBIEVWIFJP
T1QwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQDXXWvNED8fBVnVBU03
sQ7smCuOFR36k0sXgiFxEFLXUWRwFsJVaU2OFW2fvwwbwuCjZ9YMrM8irq93VCpL
TIpTUnrD7i7es3ElweldPe6hL6P3KjzJIx1qqx2hp/Hz7KDVRM8Vz3IvHWOX6Jn5
/ZOkVIBMUtRSqy5J35DNuF++P96hyk0g1CXohClTt7GIH//62pCfCqktQT+x8Rgp
7hZZLDRJGqgG16iI0gNyejLi6mhNbiyWZXvKWfry4t3uMCz7zEasxGPrb382KzRz
EpR/38wmnvFyXVBlWY9ps4deMm/DGIq1lY+wejfeWkU7xzbh72fROdOXW3NiGUgt
hxwG+3SYIElz8AXSG7Ggo7cbcNOIabla1jj0Ytwli3i/+Oh+uFzJlU9fpy25IGvP
a931DfSCt/SyZi4QKPaXWnuWFo8BGS1sbn85WAZkgwGDg8NNkt0yxoekN+kWzqot
aK8KgWU6cMGbrU1tVMoqLUuFG7OA5nBFDWteNfB/O7ic5ARwiRIlk9oKmSJgamNg
TnYGmE69g60dWIolhdLHZR4tjsbftsbhf4oEIRUpdPA+nJCdDC7xij5aqgwJHsfV
PKPtl8MeNPo4+QgO48BdK4PRVmrJtqhUUy54Mmc9gn900PvhtgVguXDbjgv5E1hv
cWAQUhC5wUEJ73IfZzF4/5YFjQIDAQABo2MwYTAfBgNVHSMEGDAWgBTj/i39KNAL
tbq2osS/BqoFjJP7LzAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIBBjAd
BgNVHQ4EFgQU4/4t/SjQC7W6tqLEvwaqBYyT+y8wDQYJKoZIhvcNAQELBQADggIB
ACXGumvrh8vegjmWPfBEp2uEcwPenStPuiB/vHiyz5ewG5zz13ku9Ui20vsXiObT
ej/tUxPQ4i9qecsAIyjmHjdXNYmEwnZPNDatZ8POQQaIxffu2Bq41gt/UP+TqhdL
jOztUmCypAbqTuv0axn96/Ua4CUqmtzHQTb3yHQFhDmVOdYLO6Qn+gjYXB74BGBS
ESgoA//vU2YApUo0FmZ8/Qmkrp5nGm9BC2sGE5uPhnEFtC+NiWYzKXZUmhH4J/qy
P5Hgzg0b8zAarb8iXRvTvyUFTeGSGn+ZnzxEk8rUQElsgIfXBDrDMlI1Dlb4pd19
xIsNER9Tyx6yF7Zod1rg1MvIB671Oi6ON7fQAUtDKXeMOZePglr4UeWJoBjnaH9d
Ci77o0cOPaYjesYBx4/IXr9tgFa+iiS6M+qf4TIRnvHST4D2G0CvOJ4RUHlzEhLN
5mydLIhyPDCBBpEi6lmt2hkuIsKNuYyH4Ga8cyNfIWRjgEj1oDwYPZTISEEdQLpe
/v5WOaHIz16eGWRGENoXkbcFgKyLmZJ956LYBws2J+dIeWCKw9cTXPhyQN9Ky8+Z
AAoACxGV2lZFA4gKn2fQ1XmxqI1AbQ3CekD6819kR5LLU7m7Wc5P/dAVUwHY3+vZ
5nbv0CO7O6l5s9UCKc2Jo5YPSjXnTkLAdc0Hz+Ys63su
-----END CERTIFICATE-----

# Issuer: CN=OISTE WISeKey Global Root GB CA O=WISeKey OU=OISTE Foundation Endorsed
# Subject: CN=OISTE WISeKey Global Root GB CA O=WISeKey OU=OISTE Foundation Endorsed
# Label: "OISTE WISeKey Global Root GB CA"
# Serial: 157768595616588414422159278966750757568
# MD5 Fingerprint: a4:eb:b9:61:28:2e:b7:2f:98:b0:35:26:90:99:51:1d
# SHA1 Fingerprint: 0f:f9:40:76:18:d3:d7:6a:4b:98:f0:a8:35:9e:0c:fd:27:ac:cc:ed
# SHA256 Fingerprint: 6b:9c:08:e8:6e:b0:f7:67:cf:ad:65:cd:98:b6:21:49:e5:49:4a:67:f5:84:5e:7b:d1:ed:01:9f:27:b8:6b:d6
-----BEGIN CERTIFICATE-----
MIIDtTCCAp2gAwIBAgIQdrEgUnTwhYdGs/gjGvbCwDANBgkqhkiG9w0BAQsFADBt
MQswCQYDVQQGEwJDSDEQMA4GA1UEChMHV0lTZUtleTEiMCAGA1UECxMZT0lTVEUg
Rm91bmRhdGlvbiBFbmRvcnNlZDEoMCYGA1UEAxMfT0lTVEUgV0lTZUtleSBHbG9i
YWwgUm9vdCBHQiBDQTAeFw0xNDEyMDExNTAwMzJaFw0zOTEyMDExNTEwMzFaMG0x
CzAJBgNVBAYTAkNIMRAwDgYDVQQKEwdXSVNlS2V5MSIwIAYDVQQLExlPSVNURSBG
b3VuZGF0aW9uIEVuZG9yc2VkMSgwJgYDVQQDEx9PSVNURSBXSVNlS2V5IEdsb2Jh
bCBSb290IEdCIENBMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA2Be3
HEokKtaXscriHvt9OO+Y9bI5mE4nuBFde9IllIiCFSZqGzG7qFshISvYD06fWvGx
WuR51jIjK+FTzJlFXHtPrby/h0oLS5daqPZI7H17Dc0hBt+eFf1Biki3IPShehtX
1F1Q/7pn2COZH8g/497/b1t3sWtuuMlk9+HKQUYOKXHQuSP8yYFfTvdv37+ErXNk
u7dCjmn21HYdfp2nuFeKUWdy19SouJVUQHMD9ur06/4oQnc/nSMbsrY9gBQHTC5P
99UKFg29ZkM3fiNDecNAhvVMKdqOmq0NpQSHiB6F4+lT1ZvIiwNjeOvgGUpuuy9r
M2RYk61pv48b74JIxwIDAQABo1EwTzALBgNVHQ8EBAMCAYYwDwYDVR0TAQH/BAUw
AwEB/zAdBgNVHQ4EFgQUNQ/INmNe4qPs+TtmFc5RUuORmj0wEAYJKwYBBAGCNxUB
BAMCAQAwDQYJKoZIhvcNAQELBQADggEBAEBM+4eymYGQfp3FsLAmzYh7KzKNbrgh
cViXfa43FK8+5/ea4n32cZiZBKpDdHij40lhPnOMTZTg+XHEthYOU3gf1qKHLwI5
gSk8rxWYITD+KJAAjNHhy/peyP34EEY7onhCkRd0VQreUGdNZtGn//3ZwLWoo4rO
ZvUPQ82nK1d7Y0Zqqi5S2PTt4W2tKZB4SLrhI6qjiey1q5bAtEuiHZeeevJuQHHf
aPFlTc58Bd9TZaml8LGXBHAVRgOY1NK/VLSgWH1Sb9pWJmLU2NuJMW8c8CLC02Ic
Nc1MaRVUGpCY3useX8p3x8uOPUNpnJpY0CQ73xtAln41rYHHTnG6iBM=
-----END CERTIFICATE-----

# Issuer: CN=SZAFIR ROOT CA2 O=Krajowa Izba Rozliczeniowa S.A.
# Subject: CN=SZAFIR ROOT CA2 O=Krajowa Izba Rozliczeniowa S.A.
# Label: "SZAFIR ROOT CA2"
# Serial: 357043034767186914217277344587386743377558296292
# MD5 Fingerprint: 11:64:c1:89:b0:24:b1:8c:b1:07:7e:89:9e:51:9e:99
# SHA1 Fingerprint: e2:52:fa:95:3f:ed:db:24:60:bd:6e:28:f3:9c:cc:cf:5e:b3:3f:de
# SHA256 Fingerprint: a1:33:9d:33:28:1a:0b:56:e5:57:d3:d3:2b:1c:e7:f9:36:7e:b0:94:bd:5f:a7:2a:7e:50:04:c8:de:d7:ca:fe
-----BEGIN CERTIFICATE-----
MIIDcjCCAlqgAwIBAgIUPopdB+xV0jLVt+O2XwHrLdzk1uQwDQYJKoZIhvcNAQEL
BQAwUTELMAkGA1UEBhMCUEwxKDAmBgNVBAoMH0tyYWpvd2EgSXpiYSBSb3psaWN6
ZW5pb3dhIFMuQS4xGDAWBgNVBAMMD1NaQUZJUiBST09UIENBMjAeFw0xNTEwMTkw
NzQzMzBaFw0zNTEwMTkwNzQzMzBaMFExCzAJBgNVBAYTAlBMMSgwJgYDVQQKDB9L
cmFqb3dhIEl6YmEgUm96bGljemVuaW93YSBTLkEuMRgwFgYDVQQDDA9TWkFGSVIg
Uk9PVCBDQTIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC3vD5QqEvN
QLXOYeeWyrSh2gwisPq1e3YAd4wLz32ohswmUeQgPYUM1ljj5/QqGJ3a0a4m7utT
3PSQ1hNKDJA8w/Ta0o4NkjrcsbH/ON7Dui1fgLkCvUqdGw+0w8LBZwPd3BucPbOw
3gAeqDRHu5rr/gsUvTaE2g0gv/pby6kWIK05YO4vdbbnl5z5Pv1+TW9NL++IDWr6
3fE9biCloBK0TXC5ztdyO4mTp4CEHCdJckm1/zuVnsHMyAHs6A6KCpbns6aH5db5
BSsNl0BwPLqsdVqc1U2dAgrSS5tmS0YHF2Wtn2yIANwiieDhZNRnvDF5YTy7ykHN
XGoAyDw4jlivAgMBAAGjQjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQD
AgEGMB0GA1UdDgQWBBQuFqlKGLXLzPVvUPMjX/hd56zwyDANBgkqhkiG9w0BAQsF
AAOCAQEAtXP4A9xZWx126aMqe5Aosk3AM0+qmrHUuOQn/6mWmc5G4G18TKI4pAZw
8PRBEew/R40/cof5O/2kbytTAOD/OblqBw7rHRz2onKQy4I9EYKL0rufKq8h5mOG
nXkZ7/e7DDWQw4rtTw/1zBLZpD67oPwglV9PJi8RI4NOdQcPv5vRtB3pEAT+ymCP
oky4rc/hkA/NrgrHXXu3UNLUYfrVFdvXn4dRVOul4+vJhaAlIDf7js4MNIThPIGy
d05DpYhfhmehPea0XGG2Ptv+tyjFogeutcrKjSoS75ftwjCkySp6+/NNIxuZMzSg
LvWpCz/UXeHPhJ/iGcJfitYgHuNztw==
-----END CERTIFICATE-----

# Issuer: CN=Certum Trusted Network CA 2 O=Unizeto Technologies S.A. OU=Certum Certification Authority
# Subject: CN=Certum Trusted Network CA 2 O=Unizeto Technologies S.A. OU=Certum Certification Authority
# Label: "Certum Trusted Network CA 2"
# Serial: 44979900017204383099463764357512596969
# MD5 Fingerprint: 6d:46:9e:d9:25:6d:08:23:5b:5e:74:7d:1e:27:db:f2
# SHA1 Fingerprint: d3:dd:48:3e:2b:bf:4c:05:e8:af:10:f5:fa:76:26:cf:d3:dc:30:92
# SHA256 Fingerprint: b6:76:f2:ed:da:e8:77:5c:d3:6c:b0:f6:3c:d1:d4:60:39:61:f4:9e:62:65:ba:01:3a:2f:03:07:b6:d0:b8:04
-----BEGIN CERTIFICATE-----
MIIF0jCCA7qgAwIBAgIQIdbQSk8lD8kyN/yqXhKN6TANBgkqhkiG9w0BAQ0FADCB
gDELMAkGA1UEBhMCUEwxIjAgBgNVBAoTGVVuaXpldG8gVGVjaG5vbG9naWVzIFMu
QS4xJzAlBgNVBAsTHkNlcnR1bSBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTEkMCIG
A1UEAxMbQ2VydHVtIFRydXN0ZWQgTmV0d29yayBDQSAyMCIYDzIwMTExMDA2MDgz
OTU2WhgPMjA0NjEwMDYwODM5NTZaMIGAMQswCQYDVQQGEwJQTDEiMCAGA1UEChMZ
VW5pemV0byBUZWNobm9sb2dpZXMgUy5BLjEnMCUGA1UECxMeQ2VydHVtIENlcnRp
ZmljYXRpb24gQXV0aG9yaXR5MSQwIgYDVQQDExtDZXJ0dW0gVHJ1c3RlZCBOZXR3
b3JrIENBIDIwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQC9+Xj45tWA
DGSdhhuWZGc/IjoedQF97/tcZ4zJzFxrqZHmuULlIEub2pt7uZld2ZuAS9eEQCsn
0+i6MLs+CRqnSZXvK0AkwpfHp+6bJe+oCgCXhVqqndwpyeI1B+twTUrWwbNWuKFB
OJvR+zF/j+Bf4bE/D44WSWDXBo0Y+aomEKsq09DRZ40bRr5HMNUuctHFY9rnY3lE
fktjJImGLjQ/KUxSiyqnwOKRKIm5wFv5HdnnJ63/mgKXwcZQkpsCLL2puTRZCr+E
Sv/f/rOf69me4Jgj7KZrdxYq28ytOxykh9xGc14ZYmhFV+SQgkK7QtbwYeDBoz1m
o130GO6IyY0XRSmZMnUCMe4pJshrAua1YkV/NxVaI2iJ1D7eTiew8EAMvE0Xy02i
sx7QBlrd9pPPV3WZ9fqGGmd4s7+W/jTcvedSVuWz5XV710GRBdxdaeOVDUO5/IOW
OZV7bIBaTxNyxtd9KXpEulKkKtVBRgkg/iKgtlswjbyJDNXXcPiHUv3a76xRLgez
Tv7QCdpw75j6VuZt27VXS9zlLCUVyJ4ueE742pyehizKV/Ma5ciSixqClnrDvFAS
adgOWkaLOusm+iPJtrCBvkIApPjW/jAux9JG9uWOdf3yzLnQh1vMBhBgu4M1t15n
3kfsmUjxpKEV/q2MYo45VU85FrmxY53/twIDAQABo0IwQDAPBgNVHRMBAf8EBTAD
AQH/MB0GA1UdDgQWBBS2oVQ5AsOgP46KvPrU+Bym0ToO/TAOBgNVHQ8BAf8EBAMC
AQYwDQYJKoZIhvcNAQENBQADggIBAHGlDs7k6b8/ONWJWsQCYftMxRQXLYtPU2sQ
F/xlhMcQSZDe28cmk4gmb3DWAl45oPePq5a1pRNcgRRtDoGCERuKTsZPpd1iHkTf
CVn0W3cLN+mLIMb4Ck4uWBzrM9DPhmDJ2vuAL55MYIR4PSFk1vtBHxgP58l1cb29
XN40hz5BsA72udY/CROWFC/emh1auVbONTqwX3BNXuMp8SMoclm2q8KMZiYcdywm
djWLKKdpoPk79SPdhRB0yZADVpHnr7pH1BKXESLjokmUbOe3lEu6LaTaM4tMpkT/
WjzGHWTYtTHkpjx6qFcL2+1hGsvxznN3Y6SHb0xRONbkX8eftoEq5IVIeVheO/jb
AoJnwTnbw3RLPTYe+SmTiGhbqEQZIfCn6IENLOiTNrQ3ssqwGyZ6miUfmpqAnksq
P/ujmv5zMnHCnsZy4YpoJ/HkD7TETKVhk/iXEAcqMCWpuchxuO9ozC1+9eB+D4Ko
b7a6bINDd82Kkhehnlt4Fj1F4jNy3eFmypnTycUm/Q1oBEauttmbjL4ZvrHG8hnj
XALKLNhvSgfZyTXaQHXyxKcZb55CEJh15pWLYLztxRLXis7VmFxWlgPF7ncGNf/P
5O4/E2Hu29othfDNrp2yGAlFw5Khchf8R7agCyzxxN5DaAhqXzvwdmP7zAYspsbi
DrW5viSP
-----END CERTIFICATE-----

# Issuer: CN=Hellenic Academic and Research Institutions RootCA 2015 O=Hellenic Academic and Research Institutions Cert. Authority
# Subject: CN=Hellenic Academic and Research Institutions RootCA 2015 O=Hellenic Academic and Research Institutions Cert. Authority
# Label: "Hellenic Academic and Research Institutions RootCA 2015"
# Serial: 0
# MD5 Fingerprint: ca:ff:e2:db:03:d9:cb:4b:e9:0f:ad:84:fd:7b:18:ce
# SHA1 Fingerprint: 01:0c:06:95:a6:98:19:14:ff:bf:5f:c6:b0:b6:95:ea:29:e9:12:a6
# SHA256 Fingerprint: a0:40:92:9a:02:ce:53:b4:ac:f4:f2:ff:c6:98:1c:e4:49:6f:75:5e:6d:45:fe:0b:2a:69:2b:cd:52:52:3f:36
-----BEGIN CERTIFICATE-----
MIIGCzCCA/OgAwIBAgIBADANBgkqhkiG9w0BAQsFADCBpjELMAkGA1UEBhMCR1Ix
DzANBgNVBAcTBkF0aGVuczFEMEIGA1UEChM7SGVsbGVuaWMgQWNhZGVtaWMgYW5k
IFJlc2VhcmNoIEluc3RpdHV0aW9ucyBDZXJ0LiBBdXRob3JpdHkxQDA+BgNVBAMT
N0hlbGxlbmljIEFjYWRlbWljIGFuZCBSZXNlYXJjaCBJbnN0aXR1dGlvbnMgUm9v
dENBIDIwMTUwHhcNMTUwNzA3MTAxMTIxWhcNNDAwNjMwMTAxMTIxWjCBpjELMAkG
A1UEBhMCR1IxDzANBgNVBAcTBkF0aGVuczFEMEIGA1UEChM7SGVsbGVuaWMgQWNh
ZGVtaWMgYW5kIFJlc2VhcmNoIEluc3RpdHV0aW9ucyBDZXJ0LiBBdXRob3JpdHkx
QDA+BgNVBAMTN0hlbGxlbmljIEFjYWRlbWljIGFuZCBSZXNlYXJjaCBJbnN0aXR1
dGlvbnMgUm9vdENBIDIwMTUwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoIC
AQDC+Kk/G4n8PDwEXT2QNrCROnk8ZlrvbTkBSRq0t89/TSNTt5AA4xMqKKYx8ZEA
4yjsriFBzh/a/X0SWwGDD7mwX5nh8hKDgE0GPt+sr+ehiGsxr/CL0BgzuNtFajT0
AoAkKAoCFZVedioNmToUW/bLy1O8E00BiDeUJRtCvCLYjqOWXjrZMts+6PAQZe10
4S+nfK8nNLspfZu2zwnI5dMK/IhlZXQK3HMcXM1AsRzUtoSMTFDPaI6oWa7CJ06C
ojXdFPQf/7J31Ycvqm59JCfnxssm5uX+Zwdj2EUN3TpZZTlYepKZcj2chF6IIbjV
9Cz82XBST3i4vTwri5WY9bPRaM8gFH5MXF/ni+X1NYEZN9cRCLdmvtNKzoNXADrD
gfgXy5I2XdGj2HUb4Ysn6npIQf1FGQatJ5lOwXBH3bWfgVMS5bGMSF0xQxfjjMZ6
Y5ZLKTBOhE5iGV48zpeQpX8B653g+IuJ3SWYPZK2fu/Z8VFRfS0myGlZYeCsargq
NhEEelC9MoS+L9xy1dcdFkfkR2YgP/SWxa+OAXqlD3pk9Q0Yh9muiNX6hME6wGko
LfINaFGq46V3xqSQDqE3izEjR8EJCOtu93ib14L8hCCZSRm2Ekax+0VVFqmjZayc
Bw/qa9wfLgZy7IaIEuQt218FL+TwA9MmM+eAws1CoRc0CwIDAQABo0IwQDAPBgNV
HRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIBBjAdBgNVHQ4EFgQUcRVnyMjJvXVd
ctA4GGqd83EkVAswDQYJKoZIhvcNAQELBQADggIBAHW7bVRLqhBYRjTyYtcWNl0I
XtVsyIe9tC5G8jH4fOpCtZMWVdyhDBKg2mF+D1hYc2Ryx+hFjtyp8iY/xnmMsVMI
M4GwVhO+5lFc2JsKT0ucVlMC6U/2DWDqTUJV6HwbISHTGzrMd/K4kPFox/la/vot
9L/J9UUbzjgQKjeKeaO04wlshYaT/4mWJ3iBj2fjRnRUjtkNaeJK9E10A/+yd+2V
Z5fkscWrv2oj6NSU4kQoYsRL4vDY4ilrGnB+JGGTe08DMiUNRSQrlrRGar9KC/ea
j8GsGsVn82800vpzY4zvFrCopEYq+OsS7HK07/grfoxSwIuEVPkvPuNVqNxmsdnh
X9izjFk0WaSrT2y7HxjbdavYy5LNlDhhDgcGH0tGEPEVvo2FXDtKK4F5D7Rpn0lQ
l033DlZdwJVqwjbDG2jJ9SrcR5q+ss7FJej6A7na+RZukYT1HCjI/CbM1xyQVqdf
bzoEvM14iQuODy+jqk+iGxI9FghAD/FGTNeqewjBCvVtJ94Cj8rDtSvK6evIIVM4
pcw72Hc3MKJP2W/R8kCtQXoXxdZKNYm3QdV8hn9VTYNKpXMgwDqvkPGaJI7ZjnHK
e7iG2rKPmT4dEw0SEe7Uq/DpFXYC5ODfqiAeW2GFZECpkJcNrVPSWh2HagCXZWK0
vm9qp/UsQu0yrbYhnr68
-----END CERTIFICATE-----

# Issuer: CN=Hellenic Academic and Research Institutions ECC RootCA 2015 O=Hellenic Academic and Research Institutions Cert. Authority
# Subject: CN=Hellenic Academic and Research Institutions ECC RootCA 2015 O=Hellenic Academic and Research Institutions Cert. Authority
# Label: "Hellenic Academic and Research Institutions ECC RootCA 2015"
# Serial: 0
# MD5 Fingerprint: 81:e5:b4:17:eb:c2:f5:e1:4b:0d:41:7b:49:92:fe:ef
# SHA1 Fingerprint: 9f:f1:71:8d:92:d5:9a:f3:7d:74:97:b4:bc:6f:84:68:0b:ba:b6:66
# SHA256 Fingerprint: 44:b5:45:aa:8a:25:e6:5a:73:ca:15:dc:27:fc:36:d2:4c:1c:b9:95:3a:06:65:39:b1:15:82:dc:48:7b:48:33
-----BEGIN CERTIFICATE-----
MIICwzCCAkqgAwIBAgIBADAKBggqhkjOPQQDAjCBqjELMAkGA1UEBhMCR1IxDzAN
BgNVBAcTBkF0aGVuczFEMEIGA1UEChM7SGVsbGVuaWMgQWNhZGVtaWMgYW5kIFJl
c2VhcmNoIEluc3RpdHV0aW9ucyBDZXJ0LiBBdXRob3JpdHkxRDBCBgNVBAMTO0hl
bGxlbmljIEFjYWRlbWljIGFuZCBSZXNlYXJjaCBJbnN0aXR1dGlvbnMgRUNDIFJv
b3RDQSAyMDE1MB4XDTE1MDcwNzEwMzcxMloXDTQwMDYzMDEwMzcxMlowgaoxCzAJ
BgNVBAYTAkdSMQ8wDQYDVQQHEwZBdGhlbnMxRDBCBgNVBAoTO0hlbGxlbmljIEFj
YWRlbWljIGFuZCBSZXNlYXJjaCBJbnN0aXR1dGlvbnMgQ2VydC4gQXV0aG9yaXR5
MUQwQgYDVQQDEztIZWxsZW5pYyBBY2FkZW1pYyBhbmQgUmVzZWFyY2ggSW5zdGl0
dXRpb25zIEVDQyBSb290Q0EgMjAxNTB2MBAGByqGSM49AgEGBSuBBAAiA2IABJKg
QehLgoRc4vgxEZmGZE4JJS+dQS8KrjVPdJWyUWRrjWvmP3CV8AVER6ZyOFB2lQJa
jq4onvktTpnvLEhvTCUp6NFxW98dwXU3tNf6e3pCnGoKVlp8aQuqgAkkbH7BRqNC
MEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFLQi
C4KZJAEOnLvkDv2/+5cgk5kqMAoGCCqGSM49BAMCA2cAMGQCMGfOFmI4oqxiRaep
lSTAGiecMjvAwNW6qef4BENThe5SId6d9SWDPp5YSy/XZxMOIQIwBeF1Ad5o7Sof
TUwJCA3sS61kFyjndc5FZXIhF8siQQ6ME5g4mlRtm8rifOoCWCKR
-----END CERTIFICATE-----

# Issuer: CN=ISRG Root X1 O=Internet Security Research Group
# Subject: CN=ISRG Root X1 O=Internet Security Research Group
# Label: "ISRG Root X1"
# Serial: 172886928669790476064670243504169061120
# MD5 Fingerprint: 0c:d2:f9:e0:da:17:73:e9:ed:86:4d:a5:e3:70:e7:4e
# SHA1 Fingerprint: ca:bd:2a:79:a1:07:6a:31:f2:1d:25:36:35:cb:03:9d:43:29:a5:e8
# SHA256 Fingerprint: 96:bc:ec:06:26:49:76:f3:74:60:77:9a:cf:28:c5:a7:cf:e8:a3:c0:aa:e1:1a:8f:fc:ee:05:c0:bd:df:08:c6
-----BEGIN CERTIFICATE-----
MIIFazCCA1OgAwIBAgIRAIIQz7DSQONZRGPgu2OCiwAwDQYJKoZIhvcNAQELBQAw
TzELMAkGA1UEBhMCVVMxKTAnBgNVBAoTIEludGVybmV0IFNlY3VyaXR5IFJlc2Vh
cmNoIEdyb3VwMRUwEwYDVQQDEwxJU1JHIFJvb3QgWDEwHhcNMTUwNjA0MTEwNDM4
WhcNMzUwNjA0MTEwNDM4WjBPMQswCQYDVQQGEwJVUzEpMCcGA1UEChMgSW50ZXJu
ZXQgU2VjdXJpdHkgUmVzZWFyY2ggR3JvdXAxFTATBgNVBAMTDElTUkcgUm9vdCBY
MTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAK3oJHP0FDfzm54rVygc
h77ct984kIxuPOZXoHj3dcKi/vVqbvYATyjb3miGbESTtrFj/RQSa78f0uoxmyF+
0TM8ukj13Xnfs7j/EvEhmkvBioZxaUpmZmyPfjxwv60pIgbz5MDmgK7iS4+3mX6U
A5/TR5d8mUgjU+g4rk8Kb4Mu0UlXjIB0ttov0DiNewNwIRt18jA8+o+u3dpjq+sW
T8KOEUt+zwvo/7V3LvSye0rgTBIlDHCNAymg4VMk7BPZ7hm/ELNKjD+Jo2FR3qyH
B5T0Y3HsLuJvW5iB4YlcNHlsdu87kGJ55tukmi8mxdAQ4Q7e2RCOFvu396j3x+UC
B5iPNgiV5+I3lg02dZ77DnKxHZu8A/lJBdiB3QW0KtZB6awBdpUKD9jf1b0SHzUv
KBds0pjBqAlkd25HN7rOrFleaJ1/ctaJxQZBKT5ZPt0m9STJEadao0xAH0ahmbWn
OlFuhjuefXKnEgV4We0+UXgVCwOPjdAvBbI+e0ocS3MFEvzG6uBQE3xDk3SzynTn
jh8BCNAw1FtxNrQHusEwMFxIt4I7mKZ9YIqioymCzLq9gwQbooMDQaHWBfEbwrbw
qHyGO0aoSCqI3Haadr8faqU9GY/rOPNk3sgrDQoo//fb4hVC1CLQJ13hef4Y53CI
rU7m2Ys6xt0nUW7/vGT1M0NPAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNV
HRMBAf8EBTADAQH/MB0GA1UdDgQWBBR5tFnme7bl5AFzgAiIyBpY9umbbjANBgkq
hkiG9w0BAQsFAAOCAgEAVR9YqbyyqFDQDLHYGmkgJykIrGF1XIpu+ILlaS/V9lZL
ubhzEFnTIZd+50xx+7LSYK05qAvqFyFWhfFQDlnrzuBZ6brJFe+GnY+EgPbk6ZGQ
3BebYhtF8GaV0nxvwuo77x/Py9auJ/GpsMiu/X1+mvoiBOv/2X/qkSsisRcOj/KK
NFtY2PwByVS5uCbMiogziUwthDyC3+6WVwW6LLv3xLfHTjuCvjHIInNzktHCgKQ5
ORAzI4JMPJ+GslWYHb4phowim57iaztXOoJwTdwJx4nLCgdNbOhdjsnvzqvHu7Ur
TkXWStAmzOVyyghqpZXjFaH3pO3JLF+l+/+sKAIuvtd7u+Nxe5AW0wdeRlN8NwdC
jNPElpzVmbUq4JUagEiuTDkHzsxHpFKVK7q4+63SM1N95R1NbdWhscdCb+ZAJzVc
oyi3B43njTOQ5yOf+1CceWxG1bQVs5ZufpsMljq4Ui0/1lvh+wjChP4kqKOJ2qxq
4RgqsahDYVvTH9w7jXbyLeiNdd8XM2w9U/t7y0Ff/9yi0GE44Za4rF2LN9d11TPA
mRGunUHBcnWEvgJBQl9nJEiU0Zsnvgc/ubhPgXRR4Xq37Z0j4r7g1SgEEzwxA57d
emyPxgcYxn/eR44/KJ4EBs+lVDR3veyJm+kXQ99b21/+jh5Xos1AnX5iItreGCc=
-----END CERTIFICATE-----

# Issuer: O=FNMT-RCM OU=AC RAIZ FNMT-RCM
# Subject: O=FNMT-RCM OU=AC RAIZ FNMT-RCM
# Label: "AC RAIZ FNMT-RCM"
# Serial: 485876308206448804701554682760554759
# MD5 Fingerprint: e2:09:04:b4:d3:bd:d1:a0:14:fd:1a:d2:47:c4:57:1d
# SHA1 Fingerprint: ec:50:35:07:b2:15:c4:95:62:19:e2:a8:9a:5b:42:99:2c:4c:2c:20
# SHA256 Fingerprint: eb:c5:57:0c:29:01:8c:4d:67:b1:aa:12:7b:af:12:f7:03:b4:61:1e:bc:17:b7:da:b5:57:38:94:17:9b:93:fa
-----BEGIN CERTIFICATE-----
MIIFgzCCA2ugAwIBAgIPXZONMGc2yAYdGsdUhGkHMA0GCSqGSIb3DQEBCwUAMDsx
CzAJBgNVBAYTAkVTMREwDwYDVQQKDAhGTk1ULVJDTTEZMBcGA1UECwwQQUMgUkFJ
WiBGTk1ULVJDTTAeFw0wODEwMjkxNTU5NTZaFw0zMDAxMDEwMDAwMDBaMDsxCzAJ
BgNVBAYTAkVTMREwDwYDVQQKDAhGTk1ULVJDTTEZMBcGA1UECwwQQUMgUkFJWiBG
Tk1ULVJDTTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBALpxgHpMhm5/
yBNtwMZ9HACXjywMI7sQmkCpGreHiPibVmr75nuOi5KOpyVdWRHbNi63URcfqQgf
BBckWKo3Shjf5TnUV/3XwSyRAZHiItQDwFj8d0fsjz50Q7qsNI1NOHZnjrDIbzAz
WHFctPVrbtQBULgTfmxKo0nRIBnuvMApGGWn3v7v3QqQIecaZ5JCEJhfTzC8PhxF
tBDXaEAUwED653cXeuYLj2VbPNmaUtu1vZ5Gzz3rkQUCwJaydkxNEJY7kvqcfw+Z
374jNUUeAlz+taibmSXaXvMiwzn15Cou08YfxGyqxRxqAQVKL9LFwag0Jl1mpdIC
IfkYtwb1TplvqKtMUejPUBjFd8g5CSxJkjKZqLsXF3mwWsXmo8RZZUc1g16p6DUL
mbvkzSDGm0oGObVo/CK67lWMK07q87Hj/LaZmtVC+nFNCM+HHmpxffnTtOmlcYF7
wk5HlqX2doWjKI/pgG6BU6VtX7hI+cL5NqYuSf+4lsKMB7ObiFj86xsc3i1w4peS
MKGJ47xVqCfWS+2QrYv6YyVZLag13cqXM7zlzced0ezvXg5KkAYmY6252TUtB7p2
ZSysV4999AeU14ECll2jB0nVetBX+RvnU0Z1qrB5QstocQjpYL05ac70r8NWQMet
UqIJ5G+GR4of6ygnXYMgrwTJbFaai0b1AgMBAAGjgYMwgYAwDwYDVR0TAQH/BAUw
AwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFPd9xf3E6Jobd2Sn9R2gzL+H
YJptMD4GA1UdIAQ3MDUwMwYEVR0gADArMCkGCCsGAQUFBwIBFh1odHRwOi8vd3d3
LmNlcnQuZm5tdC5lcy9kcGNzLzANBgkqhkiG9w0BAQsFAAOCAgEAB5BK3/MjTvDD
nFFlm5wioooMhfNzKWtN/gHiqQxjAb8EZ6WdmF/9ARP67Jpi6Yb+tmLSbkyU+8B1
RXxlDPiyN8+sD8+Nb/kZ94/sHvJwnvDKuO+3/3Y3dlv2bojzr2IyIpMNOmqOFGYM
LVN0V2Ue1bLdI4E7pWYjJ2cJj+F3qkPNZVEI7VFY/uY5+ctHhKQV8Xa7pO6kO8Rf
77IzlhEYt8llvhjho6Tc+hj507wTmzl6NLrTQfv6MooqtyuGC2mDOL7Nii4LcK2N
JpLuHvUBKwrZ1pebbuCoGRw6IYsMHkCtA+fdZn71uSANA+iW+YJF1DngoABd15jm
fZ5nc8OaKveri6E6FO80vFIOiZiaBECEHX5FaZNXzuvO+FB8TxxuBEOb+dY7Ixjp
6o7RTUaN8Tvkasq6+yO3m/qZASlaWFot4/nUbQ4mrcFuNLwy+AwF+mWj2zs3gyLp
1txyM/1d8iC9djwj2ij3+RvrWWTV3F9yfiD8zYm1kGdNYno/Tq0dwzn+evQoFt9B
9kiABdcPUXmsEKvU7ANm5mqwujGSQkBqvjrTcuFqN1W8rB2Vt2lh8kORdOag0wok
RqEIr9baRRmW1FMdW4R58MD3R++Lj8UGrp1MYp3/RgT408m2ECVAdf4WqslKYIYv
uu8wd+RU4riEmViAqhOLUTpPSPaLtrM=
-----END CERTIFICATE-----

# Issuer: CN=Amazon Root CA 1 O=Amazon
# Subject: CN=Amazon Root CA 1 O=Amazon
# Label: "Amazon Root CA 1"
# Serial: 143266978916655856878034712317230054538369994
# MD5 Fingerprint: 43:c6:bf:ae:ec:fe:ad:2f:18:c6:88:68:30:fc:c8:e6
# SHA1 Fingerprint: 8d:a7:f9:65:ec:5e:fc:37:91:0f:1c:6e:59:fd:c1:cc:6a:6e:de:16
# SHA256 Fingerprint: 8e:cd:e6:88:4f:3d:87:b1:12:5b:a3:1a:c3:fc:b1:3d:70:16:de:7f:57:cc:90:4f:e1:cb:97:c6:ae:98:19:6e
-----BEGIN CERTIFICATE-----
MIIDQTCCAimgAwIBAgITBmyfz5m/jAo54vB4ikPmljZbyjANBgkqhkiG9w0BAQsF
ADA5MQswCQYDVQQGEwJVUzEPMA0GA1UEChMGQW1hem9uMRkwFwYDVQQDExBBbWF6
b24gUm9vdCBDQSAxMB4XDTE1MDUyNjAwMDAwMFoXDTM4MDExNzAwMDAwMFowOTEL
MAkGA1UEBhMCVVMxDzANBgNVBAoTBkFtYXpvbjEZMBcGA1UEAxMQQW1hem9uIFJv
b3QgQ0EgMTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALJ4gHHKeNXj
ca9HgFB0fW7Y14h29Jlo91ghYPl0hAEvrAIthtOgQ3pOsqTQNroBvo3bSMgHFzZM
9O6II8c+6zf1tRn4SWiw3te5djgdYZ6k/oI2peVKVuRF4fn9tBb6dNqcmzU5L/qw
IFAGbHrQgLKm+a/sRxmPUDgH3KKHOVj4utWp+UhnMJbulHheb4mjUcAwhmahRWa6
VOujw5H5SNz/0egwLX0tdHA114gk957EWW67c4cX8jJGKLhD+rcdqsq08p8kDi1L
93FcXmn/6pUCyziKrlA4b9v7LWIbxcceVOF34GfID5yHI9Y/QCB/IIDEgEw+OyQm
jgSubJrIqg0CAwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMC
AYYwHQYDVR0OBBYEFIQYzIU07LwMlJQuCFmcx7IQTgoIMA0GCSqGSIb3DQEBCwUA
A4IBAQCY8jdaQZChGsV2USggNiMOruYou6r4lK5IpDB/G/wkjUu0yKGX9rbxenDI
U5PMCCjjmCXPI6T53iHTfIUJrU6adTrCC2qJeHZERxhlbI1Bjjt/msv0tadQ1wUs
N+gDS63pYaACbvXy8MWy7Vu33PqUXHeeE6V/Uq2V8viTO96LXFvKWlJbYK8U90vv
o/ufQJVtMVT8QtPHRh8jrdkPSHCa2XV4cdFyQzR1bldZwgJcJmApzyMZFo6IQ6XU
5MsI+yMRQ+hDKXJioaldXgjUkK642M4UwtBV8ob2xJNDd2ZhwLnoQdeXeGADbkpy
rqXRfboQnoZsG4q5WTP468SQvvG5
-----END CERTIFICATE-----

# Issuer: CN=Amazon Root CA 2 O=Amazon
# Subject: CN=Amazon Root CA 2 O=Amazon
# Label: "Amazon Root CA 2"
# Serial: 143266982885963551818349160658925006970653239
# MD5 Fingerprint: c8:e5:8d:ce:a8:42:e2:7a:c0:2a:5c:7c:9e:26:bf:66
# SHA1 Fingerprint: 5a:8c:ef:45:d7:a6:98:59:76:7a:8c:8b:44:96:b5:78:cf:47:4b:1a
# SHA256 Fingerprint: 1b:a5:b2:aa:8c:65:40:1a:82:96:01:18:f8:0b:ec:4f:62:30:4d:83:ce:c4:71:3a:19:c3:9c:01:1e:a4:6d:b4
-----BEGIN CERTIFICATE-----
MIIFQTCCAymgAwIBAgITBmyf0pY1hp8KD+WGePhbJruKNzANBgkqhkiG9w0BAQwF
ADA5MQswCQYDVQQGEwJVUzEPMA0GA1UEChMGQW1hem9uMRkwFwYDVQQDExBBbWF6
b24gUm9vdCBDQSAyMB4XDTE1MDUyNjAwMDAwMFoXDTQwMDUyNjAwMDAwMFowOTEL
MAkGA1UEBhMCVVMxDzANBgNVBAoTBkFtYXpvbjEZMBcGA1UEAxMQQW1hem9uIFJv
b3QgQ0EgMjCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAK2Wny2cSkxK
gXlRmeyKy2tgURO8TW0G/LAIjd0ZEGrHJgw12MBvIITplLGbhQPDW9tK6Mj4kHbZ
W0/jTOgGNk3Mmqw9DJArktQGGWCsN0R5hYGCrVo34A3MnaZMUnbqQ523BNFQ9lXg
1dKmSYXpN+nKfq5clU1Imj+uIFptiJXZNLhSGkOQsL9sBbm2eLfq0OQ6PBJTYv9K
8nu+NQWpEjTj82R0Yiw9AElaKP4yRLuH3WUnAnE72kr3H9rN9yFVkE8P7K6C4Z9r
2UXTu/Bfh+08LDmG2j/e7HJV63mjrdvdfLC6HM783k81ds8P+HgfajZRRidhW+me
z/CiVX18JYpvL7TFz4QuK/0NURBs+18bvBt+xa47mAExkv8LV/SasrlX6avvDXbR
8O70zoan4G7ptGmh32n2M8ZpLpcTnqWHsFcQgTfJU7O7f/aS0ZzQGPSSbtqDT6Zj
mUyl+17vIWR6IF9sZIUVyzfpYgwLKhbcAS4y2j5L9Z469hdAlO+ekQiG+r5jqFoz
7Mt0Q5X5bGlSNscpb/xVA1wf+5+9R+vnSUeVC06JIglJ4PVhHvG/LopyboBZ/1c6
+XUyo05f7O0oYtlNc/LMgRdg7c3r3NunysV+Ar3yVAhU/bQtCSwXVEqY0VThUWcI
0u1ufm8/0i2BWSlmy5A5lREedCf+3euvAgMBAAGjQjBAMA8GA1UdEwEB/wQFMAMB
Af8wDgYDVR0PAQH/BAQDAgGGMB0GA1UdDgQWBBSwDPBMMPQFWAJI/TPlUq9LhONm
UjANBgkqhkiG9w0BAQwFAAOCAgEAqqiAjw54o+Ci1M3m9Zh6O+oAA7CXDpO8Wqj2
LIxyh6mx/H9z/WNxeKWHWc8w4Q0QshNabYL1auaAn6AFC2jkR2vHat+2/XcycuUY
+gn0oJMsXdKMdYV2ZZAMA3m3MSNjrXiDCYZohMr/+c8mmpJ5581LxedhpxfL86kS
k5Nrp+gvU5LEYFiwzAJRGFuFjWJZY7attN6a+yb3ACfAXVU3dJnJUH/jWS5E4ywl
7uxMMne0nxrpS10gxdr9HIcWxkPo1LsmmkVwXqkLN1PiRnsn/eBG8om3zEK2yygm
btmlyTrIQRNg91CMFa6ybRoVGld45pIq2WWQgj9sAq+uEjonljYE1x2igGOpm/Hl
urR8FLBOybEfdF849lHqm/osohHUqS0nGkWxr7JOcQ3AWEbWaQbLU8uz/mtBzUF+
fUwPfHJ5elnNXkoOrJupmHN5fLT0zLm4BwyydFy4x2+IoZCn9Kr5v2c69BoVYh63
n749sSmvZ6ES8lgQGVMDMBu4Gon2nL2XA46jCfMdiyHxtN/kHNGfZQIG6lzWE7OE
76KlXIx3KadowGuuQNKotOrN8I1LOJwZmhsoVLiJkO/KdYE+HvJkJMcYr07/R54H
9jVlpNMKVv/1F2Rs76giJUmTtt8AF9pYfl3uxRuw0dFfIRDH+fO6AgonB8Xx1sfT
4PsJYGw=
-----END CERTIFICATE-----

# Issuer: CN=Amazon Root CA 3 O=Amazon
# Subject: CN=Amazon Root CA 3 O=Amazon
# Label: "Amazon Root CA 3"
# Serial: 143266986699090766294700635381230934788665930
# MD5 Fingerprint: a0:d4:ef:0b:f7:b5:d8:49:95:2a:ec:f5:c4:fc:81:87
# SHA1 Fingerprint: 0d:44:dd:8c:3c:8c:1a:1a:58:75:64:81:e9:0f:2e:2a:ff:b3:d2:6e
# SHA256 Fingerprint: 18:ce:6c:fe:7b:f1:4e:60:b2:e3:47:b8:df:e8:68:cb:31:d0:2e:bb:3a:da:27:15:69:f5:03:43:b4:6d:b3:a4
-----BEGIN CERTIFICATE-----
MIIBtjCCAVugAwIBAgITBmyf1XSXNmY/Owua2eiedgPySjAKBggqhkjOPQQDAjA5
MQswCQYDVQQGEwJVUzEPMA0GA1UEChMGQW1hem9uMRkwFwYDVQQDExBBbWF6b24g
Um9vdCBDQSAzMB4XDTE1MDUyNjAwMDAwMFoXDTQwMDUyNjAwMDAwMFowOTELMAkG
A1UEBhMCVVMxDzANBgNVBAoTBkFtYXpvbjEZMBcGA1UEAxMQQW1hem9uIFJvb3Qg
Q0EgMzBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABCmXp8ZBf8ANm+gBG1bG8lKl
ui2yEujSLtf6ycXYqm0fc4E7O5hrOXwzpcVOho6AF2hiRVd9RFgdszflZwjrZt6j
QjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgGGMB0GA1UdDgQWBBSr
ttvXBp43rDCGB5Fwx5zEGbF4wDAKBggqhkjOPQQDAgNJADBGAiEA4IWSoxe3jfkr
BqWTrBqYaGFy+uGh0PsceGCmQ5nFuMQCIQCcAu/xlJyzlvnrxir4tiz+OpAUFteM
YyRIHN8wfdVoOw==
-----END CERTIFICATE-----

# Issuer: CN=Amazon Root CA 4 O=Amazon
# Subject: CN=Amazon Root CA 4 O=Amazon
# Label: "Amazon Root CA 4"
# Serial: 143266989758080763974105200630763877849284878
# MD5 Fingerprint: 89:bc:27:d5:eb:17:8d:06:6a:69:d5:fd:89:47:b4:cd
# SHA1 Fingerprint: f6:10:84:07:d6:f8:bb:67:98:0c:c2:e2:44:c2:eb:ae:1c:ef:63:be
# SHA256 Fingerprint: e3:5d:28:41:9e:d0:20:25:cf:a6:90:38:cd:62:39:62:45:8d:a5:c6:95:fb:de:a3:c2:2b:0b:fb:25:89:70:92
-----BEGIN CERTIFICATE-----
MIIB8jCCAXigAwIBAgITBmyf18G7EEwpQ+Vxe3ssyBrBDjAKBggqhkjOPQQDAzA5
MQswCQYDVQQGEwJVUzEPMA0GA1UEChMGQW1hem9uMRkwFwYDVQQDExBBbWF6b24g
Um9vdCBDQSA0MB4XDTE1MDUyNjAwMDAwMFoXDTQwMDUyNjAwMDAwMFowOTELMAkG
A1UEBhMCVVMxDzANBgNVBAoTBkFtYXpvbjEZMBcGA1UEAxMQQW1hem9uIFJvb3Qg
Q0EgNDB2MBAGByqGSM49AgEGBSuBBAAiA2IABNKrijdPo1MN/sGKe0uoe0ZLY7Bi
9i0b2whxIdIA6GO9mif78DluXeo9pcmBqqNbIJhFXRbb/egQbeOc4OO9X4Ri83Bk
M6DLJC9wuoihKqB1+IGuYgbEgds5bimwHvouXKNCMEAwDwYDVR0TAQH/BAUwAwEB
/zAOBgNVHQ8BAf8EBAMCAYYwHQYDVR0OBBYEFNPsxzplbszh2naaVvuc84ZtV+WB
MAoGCCqGSM49BAMDA2gAMGUCMDqLIfG9fhGt0O9Yli/W651+kI0rz2ZVwyzjKKlw
CkcO8DdZEv8tmZQoTipPNU0zWgIxAOp1AE47xDqUEpHJWEadIRNyp4iciuRMStuW
1KyLa2tJElMzrdfkviT8tQp21KW8EA==
-----END CERTIFICATE-----

# Issuer: CN=TUBITAK Kamu SM SSL Kok Sertifikasi - Surum 1 O=Turkiye Bilimsel ve Teknolojik Arastirma Kurumu - TUBITAK OU=Kamu Sertifikasyon Merkezi - Kamu SM
# Subject: CN=TUBITAK Kamu SM SSL Kok Sertifikasi - Surum 1 O=Turkiye Bilimsel ve Teknolojik Arastirma Kurumu - TUBITAK OU=Kamu Sertifikasyon Merkezi - Kamu SM
# Label: "TUBITAK Kamu SM SSL Kok Sertifikasi - Surum 1"
# Serial: 1
# MD5 Fingerprint: dc:00:81:dc:69:2f:3e:2f:b0:3b:f6:3d:5a:91:8e:49
# SHA1 Fingerprint: 31:43:64:9b:ec:ce:27:ec:ed:3a:3f:0b:8f:0d:e4:e8:91:dd:ee:ca
# SHA256 Fingerprint: 46:ed:c3:68:90:46:d5:3a:45:3f:b3:10:4a:b8:0d:ca:ec:65:8b:26:60:ea:16:29:dd:7e:86:79:90:64:87:16
-----BEGIN CERTIFICATE-----
MIIEYzCCA0ugAwIBAgIBATANBgkqhkiG9w0BAQsFADCB0jELMAkGA1UEBhMCVFIx
GDAWBgNVBAcTD0dlYnplIC0gS29jYWVsaTFCMEAGA1UEChM5VHVya2l5ZSBCaWxp
bXNlbCB2ZSBUZWtub2xvamlrIEFyYXN0aXJtYSBLdXJ1bXUgLSBUVUJJVEFLMS0w
KwYDVQQLEyRLYW11IFNlcnRpZmlrYXN5b24gTWVya2V6aSAtIEthbXUgU00xNjA0
BgNVBAMTLVRVQklUQUsgS2FtdSBTTSBTU0wgS29rIFNlcnRpZmlrYXNpIC0gU3Vy
dW0gMTAeFw0xMzExMjUwODI1NTVaFw00MzEwMjUwODI1NTVaMIHSMQswCQYDVQQG
EwJUUjEYMBYGA1UEBxMPR2ViemUgLSBLb2NhZWxpMUIwQAYDVQQKEzlUdXJraXll
IEJpbGltc2VsIHZlIFRla25vbG9qaWsgQXJhc3Rpcm1hIEt1cnVtdSAtIFRVQklU
QUsxLTArBgNVBAsTJEthbXUgU2VydGlmaWthc3lvbiBNZXJrZXppIC0gS2FtdSBT
TTE2MDQGA1UEAxMtVFVCSVRBSyBLYW11IFNNIFNTTCBLb2sgU2VydGlmaWthc2kg
LSBTdXJ1bSAxMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAr3UwM6q7
a9OZLBI3hNmNe5eA027n/5tQlT6QlVZC1xl8JoSNkvoBHToP4mQ4t4y86Ij5iySr
LqP1N+RAjhgleYN1Hzv/bKjFxlb4tO2KRKOrbEz8HdDc72i9z+SqzvBV96I01INr
N3wcwv61A+xXzry0tcXtAA9TNypN9E8Mg/uGz8v+jE69h/mniyFXnHrfA2eJLJ2X
YacQuFWQfw4tJzh03+f92k4S400VIgLI4OD8D62K18lUUMw7D8oWgITQUVbDjlZ/
iSIzL+aFCr2lqBs23tPcLG07xxO9WSMs5uWk99gL7eqQQESolbuT1dCANLZGeA4f
AJNG4e7p+exPFwIDAQABo0IwQDAdBgNVHQ4EFgQUZT/HiobGPN08VFw1+DrtUgxH
V8gwDgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQEL
BQADggEBACo/4fEyjq7hmFxLXs9rHmoJ0iKpEsdeV31zVmSAhHqT5Am5EM2fKifh
AHe+SMg1qIGf5LgsyX8OsNJLN13qudULXjS99HMpw+0mFZx+CFOKWI3QSyjfwbPf
IPP54+M638yclNhOT8NrF7f3cuitZjO1JVOr4PhMqZ398g26rrnZqsZr+ZO7rqu4
lzwDGrpDxpa5RXI4s6ehlj2Re37AIVNMh+3yC1SVUZPVIqUNivGTDj5UDrDYyU7c
8jEyVupk+eq1nRZmQnLzf9OxMUP8pI4X8W0jq5Rm+K37DwhuJi1/FwcJsoz7UMCf
lo3Ptv0AnVoUmr8CRPXBwp8iXqIPoeM=
-----END CERTIFICATE-----

# Issuer: CN=GDCA TrustAUTH R5 ROOT O=GUANG DONG CERTIFICATE AUTHORITY CO.,LTD.
# Subject: CN=GDCA TrustAUTH R5 ROOT O=GUANG DONG CERTIFICATE AUTHORITY CO.,LTD.
# Label: "GDCA TrustAUTH R5 ROOT"
# Serial: 9009899650740120186
# MD5 Fingerprint: 63:cc:d9:3d:34:35:5c:6f:53:a3:e2:08:70:48:1f:b4
# SHA1 Fingerprint: 0f:36:38:5b:81:1a:25:c3:9b:31:4e:83:ca:e9:34:66:70:cc:74:b4
# SHA256 Fingerprint: bf:ff:8f:d0:44:33:48:7d:6a:8a:a6:0c:1a:29:76:7a:9f:c2:bb:b0:5e:42:0f:71:3a:13:b9:92:89:1d:38:93
-----BEGIN CERTIFICATE-----
MIIFiDCCA3CgAwIBAgIIfQmX/vBH6nowDQYJKoZIhvcNAQELBQAwYjELMAkGA1UE
BhMCQ04xMjAwBgNVBAoMKUdVQU5HIERPTkcgQ0VSVElGSUNBVEUgQVVUSE9SSVRZ
IENPLixMVEQuMR8wHQYDVQQDDBZHRENBIFRydXN0QVVUSCBSNSBST09UMB4XDTE0
MTEyNjA1MTMxNVoXDTQwMTIzMTE1NTk1OVowYjELMAkGA1UEBhMCQ04xMjAwBgNV
BAoMKUdVQU5HIERPTkcgQ0VSVElGSUNBVEUgQVVUSE9SSVRZIENPLixMVEQuMR8w
HQYDVQQDDBZHRENBIFRydXN0QVVUSCBSNSBST09UMIICIjANBgkqhkiG9w0BAQEF
AAOCAg8AMIICCgKCAgEA2aMW8Mh0dHeb7zMNOwZ+Vfy1YI92hhJCfVZmPoiC7XJj
Dp6L3TQsAlFRwxn9WVSEyfFrs0yw6ehGXTjGoqcuEVe6ghWinI9tsJlKCvLriXBj
TnnEt1u9ol2x8kECK62pOqPseQrsXzrj/e+APK00mxqriCZ7VqKChh/rNYmDf1+u
KU49tm7srsHwJ5uu4/Ts765/94Y9cnrrpftZTqfrlYwiOXnhLQiPzLyRuEH3FMEj
qcOtmkVEs7LXLM3GKeJQEK5cy4KOFxg2fZfmiJqwTTQJ9Cy5WmYqsBebnh52nUpm
MUHfP/vFBu8btn4aRjb3ZGM74zkYI+dndRTVdVeSN72+ahsmUPI2JgaQxXABZG12
ZuGR224HwGGALrIuL4xwp9E7PLOR5G62xDtw8mySlwnNR30YwPO7ng/Wi64HtloP
zgsMR6flPri9fcebNaBhlzpBdRfMK5Z3KpIhHtmVdiBnaM8Nvd/WHwlqmuLMc3Gk
L30SgLdTMEZeS1SZD2fJpcjyIMGC7J0R38IC+xo70e0gmu9lZJIQDSri3nDxGGeC
jGHeuLzRL5z7D9Ar7Rt2ueQ5Vfj4oR24qoAATILnsn8JuLwwoC8N9VKejveSswoA
HQBUlwbgsQfZxw9cZX08bVlX5O2ljelAU58VS6Bx9hoh49pwBiFYFIeFd3mqgnkC
AwEAAaNCMEAwHQYDVR0OBBYEFOLJQJ9NzuiaoXzPDj9lxSmIahlRMA8GA1UdEwEB
/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgGGMA0GCSqGSIb3DQEBCwUAA4ICAQDRSVfg
p8xoWLoBDysZzY2wYUWsEe1jUGn4H3++Fo/9nesLqjJHdtJnJO29fDMylyrHBYZm
DRd9FBUb1Ov9H5r2XpdptxolpAqzkT9fNqyL7FeoPueBihhXOYV0GkLH6VsTX4/5
COmSdI31R9KrO9b7eGZONn356ZLpBN79SWP8bfsUcZNnL0dKt7n/HipzcEYwv1ry
L3ml4Y0M2fmyYzeMN2WFcGpcWwlyua1jPLHd+PwyvzeG5LuOmCd+uh8W4XAR8gPf
JWIyJyYYMoSf/wA6E7qaTfRPuBRwIrHKK5DOKcFw9C+df/KQHtZa37dG/OaG+svg
IHZ6uqbL9XzeYqWxi+7egmaKTjowHz+Ay60nugxe19CxVsp3cbK1daFQqUBDF8Io
2c9Si1vIY9RCPqAzekYu9wogRlR+ak8x8YF+QnQ4ZXMn7sZ8uI7XpTrXmKGcjBBV
09tL7ECQ8s1uV9JiDnxXk7Gnbc2dg7sq5+W2O3FYrf3RRbxake5TFW/TRQl1brqQ
XR4EzzffHqhmsYzmIGrv/EhOdJhCrylvLmrH+33RZjEizIYAfmaDDEL0vTSSwxrq
T8p+ck0LcIymSLumoRT2+1hEmRSuqguTaaApJUqlyyvdimYHFngVV3Eb7PVHhPOe
MTd61X8kreS8/f3MboPoDKi3QWwH3b08hpcv0g==
-----END CERTIFICATE-----

# Issuer: CN=SSL.com Root Certification Authority RSA O=SSL Corporation
# Subject: CN=SSL.com Root Certification Authority RSA O=SSL Corporation
# Label: "SSL.com Root Certification Authority RSA"
# Serial: 8875640296558310041
# MD5 Fingerprint: 86:69:12:c0:70:f1:ec:ac:ac:c2:d5:bc:a5:5b:a1:29
# SHA1 Fingerprint: b7:ab:33:08:d1:ea:44:77:ba:14:80:12:5a:6f:bd:a9:36:49:0c:bb
# SHA256 Fingerprint: 85:66:6a:56:2e:e0:be:5c:e9:25:c1:d8:89:0a:6f:76:a8:7e:c1:6d:4d:7d:5f:29:ea:74:19:cf:20:12:3b:69
-----BEGIN CERTIFICATE-----
MIIF3TCCA8WgAwIBAgIIeyyb0xaAMpkwDQYJKoZIhvcNAQELBQAwfDELMAkGA1UE
BhMCVVMxDjAMBgNVBAgMBVRleGFzMRAwDgYDVQQHDAdIb3VzdG9uMRgwFgYDVQQK
DA9TU0wgQ29ycG9yYXRpb24xMTAvBgNVBAMMKFNTTC5jb20gUm9vdCBDZXJ0aWZp
Y2F0aW9uIEF1dGhvcml0eSBSU0EwHhcNMTYwMjEyMTczOTM5WhcNNDEwMjEyMTcz
OTM5WjB8MQswCQYDVQQGEwJVUzEOMAwGA1UECAwFVGV4YXMxEDAOBgNVBAcMB0hv
dXN0b24xGDAWBgNVBAoMD1NTTCBDb3Jwb3JhdGlvbjExMC8GA1UEAwwoU1NMLmNv
bSBSb290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5IFJTQTCCAiIwDQYJKoZIhvcN
AQEBBQADggIPADCCAgoCggIBAPkP3aMrfcvQKv7sZ4Wm5y4bunfh4/WvpOz6Sl2R
xFdHaxh3a3by/ZPkPQ/CFp4LZsNWlJ4Xg4XOVu/yFv0AYvUiCVToZRdOQbngT0aX
qhvIuG5iXmmxX9sqAn78bMrzQdjt0Oj8P2FI7bADFB0QDksZ4LtO7IZl/zbzXmcC
C52GVWH9ejjt/uIZALdvoVBidXQ8oPrIJZK0bnoix/geoeOy3ZExqysdBP+lSgQ3
6YWkMyv94tZVNHwZpEpox7Ko07fKoZOI68GXvIz5HdkihCR0xwQ9aqkpk8zruFvh
/l8lqjRYyMEjVJ0bmBHDOJx+PYZspQ9AhnwC9FwCTyjLrnGfDzrIM/4RJTXq/LrF
YD3ZfBjVsqnTdXgDciLKOsMf7yzlLqn6niy2UUb9rwPW6mBo6oUWNmuF6R7As93E
JNyAKoFBbZQ+yODJgUEAnl6/f8UImKIYLEJAs/lvOCdLToD0PYFH4Ih86hzOtXVc
US4cK38acijnALXRdMbX5J+tB5O2UzU1/Dfkw/ZdFr4hc96SCvigY2q8lpJqPvi8
ZVWb3vUNiSYE/CUapiVpy8JtynziWV+XrOvvLsi81xtZPCvM8hnIk2snYxnP/Okm
+Mpxm3+T/jRnhE6Z6/yzeAkzcLpmpnbtG3PrGqUNxCITIJRWCk4sbE6x/c+cCbqi
M+2HAgMBAAGjYzBhMB0GA1UdDgQWBBTdBAkHovV6fVJTEpKV7jiAJQ2mWTAPBgNV
HRMBAf8EBTADAQH/MB8GA1UdIwQYMBaAFN0ECQei9Xp9UlMSkpXuOIAlDaZZMA4G
A1UdDwEB/wQEAwIBhjANBgkqhkiG9w0BAQsFAAOCAgEAIBgRlCn7Jp0cHh5wYfGV
cpNxJK1ok1iOMq8bs3AD/CUrdIWQPXhq9LmLpZc7tRiRux6n+UBbkflVma8eEdBc
Hadm47GUBwwyOabqG7B52B2ccETjit3E+ZUfijhDPwGFpUenPUayvOUiaPd7nNgs
PgohyC0zrL/FgZkxdMF1ccW+sfAjRfSda/wZY52jvATGGAslu1OJD7OAUN5F7kR/
q5R4ZJjT9ijdh9hwZXT7DrkT66cPYakylszeu+1jTBi7qUD3oFRuIIhxdRjqerQ0
cuAjJ3dctpDqhiVAq+8zD8ufgr6iIPv2tS0a5sKFsXQP+8hlAqRSAUfdSSLBv9jr
a6x+3uxjMxW3IwiPxg+NQVrdjsW5j+VFP3jbutIbQLH+cU0/4IGiul607BXgk90I
H37hVZkLId6Tngr75qNJvTYw/ud3sqB1l7UtgYgXZSD32pAAn8lSzDLKNXz1PQ/Y
K9f1JmzJBjSWFupwWRoyeXkLtoh/D1JIPb9s2KJELtFOt3JY04kTlf5Eq/jXixtu
nLwsoFvVagCvXzfh1foQC5ichucmj87w7G6KVwuA406ywKBjYZC6VWg3dGq2ktuf
oYYitmUnDuy2n0Jg5GfCtdpBC8TTi2EbvPofkSvXRAdeuims2cXp71NIWuuA8ShY
Ic2wBlX7Jz9TkHCpBB5XJ7k=
-----END CERTIFICATE-----

# Issuer: CN=SSL.com Root Certification Authority ECC O=SSL Corporation
# Subject: CN=SSL.com Root Certification Authority ECC O=SSL Corporation
# Label: "SSL.com Root Certification Authority ECC"
# Serial: 8495723813297216424
# MD5 Fingerprint: 2e:da:e4:39:7f:9c:8f:37:d1:70:9f:26:17:51:3a:8e
# SHA1 Fingerprint: c3:19:7c:39:24:e6:54:af:1b:c4:ab:20:95:7a:e2:c3:0e:13:02:6a
# SHA256 Fingerprint: 34:17:bb:06:cc:60:07:da:1b:96:1c:92:0b:8a:b4:ce:3f:ad:82:0e:4a:a3:0b:9a:cb:c4:a7:4e:bd:ce:bc:65
-----BEGIN CERTIFICATE-----
MIICjTCCAhSgAwIBAgIIdebfy8FoW6gwCgYIKoZIzj0EAwIwfDELMAkGA1UEBhMC
VVMxDjAMBgNVBAgMBVRleGFzMRAwDgYDVQQHDAdIb3VzdG9uMRgwFgYDVQQKDA9T
U0wgQ29ycG9yYXRpb24xMTAvBgNVBAMMKFNTTC5jb20gUm9vdCBDZXJ0aWZpY2F0
aW9uIEF1dGhvcml0eSBFQ0MwHhcNMTYwMjEyMTgxNDAzWhcNNDEwMjEyMTgxNDAz
WjB8MQswCQYDVQQGEwJVUzEOMAwGA1UECAwFVGV4YXMxEDAOBgNVBAcMB0hvdXN0
b24xGDAWBgNVBAoMD1NTTCBDb3Jwb3JhdGlvbjExMC8GA1UEAwwoU1NMLmNvbSBS
b290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5IEVDQzB2MBAGByqGSM49AgEGBSuB
BAAiA2IABEVuqVDEpiM2nl8ojRfLliJkP9x6jh3MCLOicSS6jkm5BBtHllirLZXI
7Z4INcgn64mMU1jrYor+8FsPazFSY0E7ic3s7LaNGdM0B9y7xgZ/wkWV7Mt/qCPg
CemB+vNH06NjMGEwHQYDVR0OBBYEFILRhXMw5zUE044CkvvlpNHEIejNMA8GA1Ud
EwEB/wQFMAMBAf8wHwYDVR0jBBgwFoAUgtGFczDnNQTTjgKS++Wk0cQh6M0wDgYD
VR0PAQH/BAQDAgGGMAoGCCqGSM49BAMCA2cAMGQCMG/n61kRpGDPYbCWe+0F+S8T
kdzt5fxQaxFGRrMcIQBiu77D5+jNB5n5DQtdcj7EqgIwH7y6C+IwJPt8bYBVCpk+
gA0z5Wajs6O7pdWLjwkspl1+4vAHCGht0nxpbl/f5Wpl
-----END CERTIFICATE-----

# Issuer: CN=SSL.com EV Root Certification Authority RSA R2 O=SSL Corporation
# Subject: CN=SSL.com EV Root Certification Authority RSA R2 O=SSL Corporation
# Label: "SSL.com EV Root Certification Authority RSA R2"
# Serial: 6248227494352943350
# MD5 Fingerprint: e1:1e:31:58:1a:ae:54:53:02:f6:17:6a:11:7b:4d:95
# SHA1 Fingerprint: 74:3a:f0:52:9b:d0:32:a0:f4:4a:83:cd:d4:ba:a9:7b:7c:2e:c4:9a
# SHA256 Fingerprint: 2e:7b:f1:6c:c2:24:85:a7:bb:e2:aa:86:96:75:07:61:b0:ae:39:be:3b:2f:e9:d0:cc:6d:4e:f7:34:91:42:5c
-----BEGIN CERTIFICATE-----
MIIF6zCCA9OgAwIBAgIIVrYpzTS8ePYwDQYJKoZIhvcNAQELBQAwgYIxCzAJBgNV
BAYTAlVTMQ4wDAYDVQQIDAVUZXhhczEQMA4GA1UEBwwHSG91c3RvbjEYMBYGA1UE
CgwPU1NMIENvcnBvcmF0aW9uMTcwNQYDVQQDDC5TU0wuY29tIEVWIFJvb3QgQ2Vy
dGlmaWNhdGlvbiBBdXRob3JpdHkgUlNBIFIyMB4XDTE3MDUzMTE4MTQzN1oXDTQy
MDUzMDE4MTQzN1owgYIxCzAJBgNVBAYTAlVTMQ4wDAYDVQQIDAVUZXhhczEQMA4G
A1UEBwwHSG91c3RvbjEYMBYGA1UECgwPU1NMIENvcnBvcmF0aW9uMTcwNQYDVQQD
DC5TU0wuY29tIEVWIFJvb3QgQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkgUlNBIFIy
MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAjzZlQOHWTcDXtOlG2mvq
M0fNTPl9fb69LT3w23jhhqXZuglXaO1XPqDQCEGD5yhBJB/jchXQARr7XnAjssuf
OePPxU7Gkm0mxnu7s9onnQqG6YE3Bf7wcXHswxzpY6IXFJ3vG2fThVUCAtZJycxa
4bH3bzKfydQ7iEGonL3Lq9ttewkfokxykNorCPzPPFTOZw+oz12WGQvE43LrrdF9
HSfvkusQv1vrO6/PgN3B0pYEW3p+pKk8OHakYo6gOV7qd89dAFmPZiw+B6KjBSYR
aZfqhbcPlgtLyEDhULouisv3D5oi53+aNxPN8k0TayHRwMwi8qFG9kRpnMphNQcA
b9ZhCBHqurj26bNg5U257J8UZslXWNvNh2n4ioYSA0e/ZhN2rHd9NCSFg83XqpyQ
Gp8hLH94t2S42Oim9HizVcuE0jLEeK6jj2HdzghTreyI/BXkmg3mnxp3zkyPuBQV
PWKchjgGAGYS5Fl2WlPAApiiECtoRHuOec4zSnaqW4EWG7WK2NAAe15itAnWhmMO
pgWVSbooi4iTsjQc2KRVbrcc0N6ZVTsj9CLg+SlmJuwgUHfbSguPvuUCYHBBXtSu
UDkiFCbLsjtzdFVHB3mBOagwE0TlBIqulhMlQg+5U8Sb/M3kHN48+qvWBkofZ6aY
MBzdLNvcGJVXZsb/XItW9XcCAwEAAaNjMGEwDwYDVR0TAQH/BAUwAwEB/zAfBgNV
HSMEGDAWgBT5YLvU49U09rj1BoAlp3PbRmmonjAdBgNVHQ4EFgQU+WC71OPVNPa4
9QaAJadz20ZpqJ4wDgYDVR0PAQH/BAQDAgGGMA0GCSqGSIb3DQEBCwUAA4ICAQBW
s47LCp1Jjr+kxJG7ZhcFUZh1++VQLHqe8RT6q9OKPv+RKY9ji9i0qVQBDb6Thi/5
Sm3HXvVX+cpVHBK+Rw82xd9qt9t1wkclf7nxY/hoLVUE0fKNsKTPvDxeH3jnpaAg
cLAExbf3cqfeIg29MyVGjGSSJuM+LmOW2puMPfgYCdcDzH2GguDKBAdRUNf/ktUM
79qGn5nX67evaOI5JpS6aLe/g9Pqemc9YmeuJeVy6OLk7K4S9ksrPJ/psEDzOFSz
/bdoyNrGj1E8svuR3Bznm53htw1yj+KkxKl4+esUrMZDBcJlOSgYAsOCsp0FvmXt
ll9ldDz7CTUue5wT/RsPXcdtgTpWD8w74a8CLyKsRspGPKAcTNZEtF4uXBVmCeEm
Kf7GUmG6sXP/wwyc5WxqlD8UykAWlYTzWamsX0xhk23RO8yilQwipmdnRC652dKK
QbNmC1r7fSOl8hqw/96bg5Qu0T/fkreRrwU7ZcegbLHNYhLDkBvjJc40vG93drEQ
w/cFGsDWr3RiSBd3kmmQYRzelYB0VI8YHMPzA9C/pEN1hlMYegouCRw2n5H9gooi
S9EOUCXdywMMF8mDAAhONU2Ki+3wApRmLER/y5UnlhetCTCstnEXbosX9hwJ1C07
mKVx01QT2WDz9UtmT/rx7iASjbSsV7FFY6GsdqnC+w==
-----END CERTIFICATE-----

# Issuer: CN=SSL.com EV Root Certification Authority ECC O=SSL Corporation
# Subject: CN=SSL.com EV Root Certification Authority ECC O=SSL Corporation
# Label: "SSL.com EV Root Certification Authority ECC"
# Serial: 3182246526754555285
# MD5 Fingerprint: 59:53:22:65:83:42:01:54:c0:ce:42:b9:5a:7c:f2:90
# SHA1 Fingerprint: 4c:dd:51:a3:d1:f5:20:32:14:b0:c6:c5:32:23:03:91:c7:46:42:6d
# SHA256 Fingerprint: 22:a2:c1:f7:bd:ed:70:4c:c1:e7:01:b5:f4:08:c3:10:88:0f:e9:56:b5:de:2a:4a:44:f9:9c:87:3a:25:a7:c8
-----BEGIN CERTIFICATE-----
MIIClDCCAhqgAwIBAgIILCmcWxbtBZUwCgYIKoZIzj0EAwIwfzELMAkGA1UEBhMC
VVMxDjAMBgNVBAgMBVRleGFzMRAwDgYDVQQHDAdIb3VzdG9uMRgwFgYDVQQKDA9T
U0wgQ29ycG9yYXRpb24xNDAyBgNVBAMMK1NTTC5jb20gRVYgUm9vdCBDZXJ0aWZp
Y2F0aW9uIEF1dGhvcml0eSBFQ0MwHhcNMTYwMjEyMTgxNTIzWhcNNDEwMjEyMTgx
NTIzWjB/MQswCQYDVQQGEwJVUzEOMAwGA1UECAwFVGV4YXMxEDAOBgNVBAcMB0hv
dXN0b24xGDAWBgNVBAoMD1NTTCBDb3Jwb3JhdGlvbjE0MDIGA1UEAwwrU1NMLmNv
bSBFViBSb290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5IEVDQzB2MBAGByqGSM49
AgEGBSuBBAAiA2IABKoSR5CYG/vvw0AHgyBO8TCCogbR8pKGYfL2IWjKAMTH6kMA
VIbc/R/fALhBYlzccBYy3h+Z1MzFB8gIH2EWB1E9fVwHU+M1OIzfzZ/ZLg1Kthku
WnBaBu2+8KGwytAJKaNjMGEwHQYDVR0OBBYEFFvKXuXe0oGqzagtZFG22XKbl+ZP
MA8GA1UdEwEB/wQFMAMBAf8wHwYDVR0jBBgwFoAUW8pe5d7SgarNqC1kUbbZcpuX
5k8wDgYDVR0PAQH/BAQDAgGGMAoGCCqGSM49BAMCA2gAMGUCMQCK5kCJN+vp1RPZ
ytRrJPOwPYdGWBrssd9v+1a6cGvHOMzosYxPD/fxZ3YOg9AeUY8CMD32IygmTMZg
h5Mmm7I1HrrW9zzRHM76JTymGoEVW/MSD2zuZYrJh6j5B+BimoxcSg==
-----END CERTIFICATE-----

# Issuer: CN=GlobalSign O=GlobalSign OU=GlobalSign Root CA - R6
# Subject: CN=GlobalSign O=GlobalSign OU=GlobalSign Root CA - R6
# Label: "GlobalSign Root CA - R6"
# Serial: 1417766617973444989252670301619537
# MD5 Fingerprint: 4f:dd:07:e4:d4:22:64:39:1e:0c:37:42:ea:d1:c6:ae
# SHA1 Fingerprint: 80:94:64:0e:b5:a7:a1:ca:11:9c:1f:dd:d5:9f:81:02:63:a7:fb:d1
# SHA256 Fingerprint: 2c:ab:ea:fe:37:d0:6c:a2:2a:ba:73:91:c0:03:3d:25:98:29:52:c4:53:64:73:49:76:3a:3a:b5:ad:6c:cf:69
-----BEGIN CERTIFICATE-----
MIIFgzCCA2ugAwIBAgIORea7A4Mzw4VlSOb/RVEwDQYJKoZIhvcNAQEMBQAwTDEg
MB4GA1UECxMXR2xvYmFsU2lnbiBSb290IENBIC0gUjYxEzARBgNVBAoTCkdsb2Jh
bFNpZ24xEzARBgNVBAMTCkdsb2JhbFNpZ24wHhcNMTQxMjEwMDAwMDAwWhcNMzQx
MjEwMDAwMDAwWjBMMSAwHgYDVQQLExdHbG9iYWxTaWduIFJvb3QgQ0EgLSBSNjET
MBEGA1UEChMKR2xvYmFsU2lnbjETMBEGA1UEAxMKR2xvYmFsU2lnbjCCAiIwDQYJ
KoZIhvcNAQEBBQADggIPADCCAgoCggIBAJUH6HPKZvnsFMp7PPcNCPG0RQssgrRI
xutbPK6DuEGSMxSkb3/pKszGsIhrxbaJ0cay/xTOURQh7ErdG1rG1ofuTToVBu1k
ZguSgMpE3nOUTvOniX9PeGMIyBJQbUJmL025eShNUhqKGoC3GYEOfsSKvGRMIRxD
aNc9PIrFsmbVkJq3MQbFvuJtMgamHvm566qjuL++gmNQ0PAYid/kD3n16qIfKtJw
LnvnvJO7bVPiSHyMEAc4/2ayd2F+4OqMPKq0pPbzlUoSB239jLKJz9CgYXfIWHSw
1CM69106yqLbnQneXUQtkPGBzVeS+n68UARjNN9rkxi+azayOeSsJDa38O+2HBNX
k7besvjihbdzorg1qkXy4J02oW9UivFyVm4uiMVRQkQVlO6jxTiWm05OWgtH8wY2
SXcwvHE35absIQh1/OZhFj931dmRl4QKbNQCTXTAFO39OfuD8l4UoQSwC+n+7o/h
bguyCLNhZglqsQY6ZZZZwPA1/cnaKI0aEYdwgQqomnUdnjqGBQCe24DWJfncBZ4n
WUx2OVvq+aWh2IMP0f/fMBH5hc8zSPXKbWQULHpYT9NLCEnFlWQaYw55PfWzjMpY
rZxCRXluDocZXFSxZba/jJvcE+kNb7gu3GduyYsRtYQUigAZcIN5kZeR1Bonvzce
MgfYFGM8KEyvAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8EBTAD
AQH/MB0GA1UdDgQWBBSubAWjkxPioufi1xzWx/B/yGdToDAfBgNVHSMEGDAWgBSu
bAWjkxPioufi1xzWx/B/yGdToDANBgkqhkiG9w0BAQwFAAOCAgEAgyXt6NH9lVLN
nsAEoJFp5lzQhN7craJP6Ed41mWYqVuoPId8AorRbrcWc+ZfwFSY1XS+wc3iEZGt
Ixg93eFyRJa0lV7Ae46ZeBZDE1ZXs6KzO7V33EByrKPrmzU+sQghoefEQzd5Mr61
55wsTLxDKZmOMNOsIeDjHfrYBzN2VAAiKrlNIC5waNrlU/yDXNOd8v9EDERm8tLj
vUYAGm0CuiVdjaExUd1URhxN25mW7xocBFymFe944Hn+Xds+qkxV/ZoVqW/hpvvf
cDDpw+5CRu3CkwWJ+n1jez/QcYF8AOiYrg54NMMl+68KnyBr3TsTjxKM4kEaSHpz
oHdpx7Zcf4LIHv5YGygrqGytXm3ABdJ7t+uA/iU3/gKbaKxCXcPu9czc8FB10jZp
nOZ7BN9uBmm23goJSFmH63sUYHpkqmlD75HHTOwY3WzvUy2MmeFe8nI+z1TIvWfs
pA9MRf/TuTAjB0yPEL+GltmZWrSZVxykzLsViVO6LAUP5MSeGbEYNNVMnbrt9x+v
JJUEeKgDu+6B5dpffItKoZB0JaezPkvILFa9x8jvOOJckvB595yEunQtYQEgfn7R
8k8HWV+LLUNS60YMlOH1Zkd5d9VUWx+tJDfLRVpOoERIyNiwmcUVhAn21klJwGW4
5hpxbqCo8YLoRT5s1gLXCmeDBVrJpBA=
-----END CERTIFICATE-----

# Issuer: CN=OISTE WISeKey Global Root GC CA O=WISeKey OU=OISTE Foundation Endorsed
# Subject: CN=OISTE WISeKey Global Root GC CA O=WISeKey OU=OISTE Foundation Endorsed
# Label: "OISTE WISeKey Global Root GC CA"
# Serial: 44084345621038548146064804565436152554
# MD5 Fingerprint: a9:d6:b9:2d:2f:93:64:f8:a5:69:ca:91:e9:68:07:23
# SHA1 Fingerprint: e0:11:84:5e:34:de:be:88:81:b9:9c:f6:16:26:d1:96:1f:c3:b9:31
# SHA256 Fingerprint: 85:60:f9:1c:36:24:da:ba:95:70:b5:fe:a0:db:e3:6f:f1:1a:83:23:be:94:86:85:4f:b3:f3:4a:55:71:19:8d
-----BEGIN CERTIFICATE-----
MIICaTCCAe+gAwIBAgIQISpWDK7aDKtARb8roi066jAKBggqhkjOPQQDAzBtMQsw
CQYDVQQGEwJDSDEQMA4GA1UEChMHV0lTZUtleTEiMCAGA1UECxMZT0lTVEUgRm91
bmRhdGlvbiBFbmRvcnNlZDEoMCYGA1UEAxMfT0lTVEUgV0lTZUtleSBHbG9iYWwg
Um9vdCBHQyBDQTAeFw0xNzA1MDkwOTQ4MzRaFw00MjA1MDkwOTU4MzNaMG0xCzAJ
BgNVBAYTAkNIMRAwDgYDVQQKEwdXSVNlS2V5MSIwIAYDVQQLExlPSVNURSBGb3Vu
ZGF0aW9uIEVuZG9yc2VkMSgwJgYDVQQDEx9PSVNURSBXSVNlS2V5IEdsb2JhbCBS
b290IEdDIENBMHYwEAYHKoZIzj0CAQYFK4EEACIDYgAETOlQwMYPchi82PG6s4ni
eUqjFqdrVCTbUf/q9Akkwwsin8tqJ4KBDdLArzHkdIJuyiXZjHWd8dvQmqJLIX4W
p2OQ0jnUsYd4XxiWD1AbNTcPasbc2RNNpI6QN+a9WzGRo1QwUjAOBgNVHQ8BAf8E
BAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUSIcUrOPDnpBgOtfKie7T
rYy0UGYwEAYJKwYBBAGCNxUBBAMCAQAwCgYIKoZIzj0EAwMDaAAwZQIwJsdpW9zV
57LnyAyMjMPdeYwbY9XJUpROTYJKcx6ygISpJcBMWm1JKWB4E+J+SOtkAjEA2zQg
Mgj/mkkCtojeFK9dbJlxjRo/i9fgojaGHAeCOnZT/cKi7e97sIBPWA9LUzm9
-----END CERTIFICATE-----

# Issuer: CN=UCA Global G2 Root O=UniTrust
# Subject: CN=UCA Global G2 Root O=UniTrust
# Label: "UCA Global G2 Root"
# Serial: 124779693093741543919145257850076631279
# MD5 Fingerprint: 80:fe:f0:c4:4a:f0:5c:62:32:9f:1c:ba:78:a9:50:f8
# SHA1 Fingerprint: 28:f9:78:16:19:7a:ff:18:25:18:aa:44:fe:c1:a0:ce:5c:b6:4c:8a
# SHA256 Fingerprint: 9b:ea:11:c9:76:fe:01:47:64:c1:be:56:a6:f9:14:b5:a5:60:31:7a:bd:99:88:39:33:82:e5:16:1a:a0:49:3c
-----BEGIN CERTIFICATE-----
MIIFRjCCAy6gAwIBAgIQXd+x2lqj7V2+WmUgZQOQ7zANBgkqhkiG9w0BAQsFADA9
MQswCQYDVQQGEwJDTjERMA8GA1UECgwIVW5pVHJ1c3QxGzAZBgNVBAMMElVDQSBH
bG9iYWwgRzIgUm9vdDAeFw0xNjAzMTEwMDAwMDBaFw00MDEyMzEwMDAwMDBaMD0x
CzAJBgNVBAYTAkNOMREwDwYDVQQKDAhVbmlUcnVzdDEbMBkGA1UEAwwSVUNBIEds
b2JhbCBHMiBSb290MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAxeYr
b3zvJgUno4Ek2m/LAfmZmqkywiKHYUGRO8vDaBsGxUypK8FnFyIdK+35KYmToni9
kmugow2ifsqTs6bRjDXVdfkX9s9FxeV67HeToI8jrg4aA3++1NDtLnurRiNb/yzm
VHqUwCoV8MmNsHo7JOHXaOIxPAYzRrZUEaalLyJUKlgNAQLx+hVRZ2zA+te2G3/R
VogvGjqNO7uCEeBHANBSh6v7hn4PJGtAnTRnvI3HLYZveT6OqTwXS3+wmeOwcWDc
C/Vkw85DvG1xudLeJ1uK6NjGruFZfc8oLTW4lVYa8bJYS7cSN8h8s+1LgOGN+jIj
tm+3SJUIsUROhYw6AlQgL9+/V087OpAh18EmNVQg7Mc/R+zvWr9LesGtOxdQXGLY
D0tK3Cv6brxzks3sx1DoQZbXqX5t2Okdj4q1uViSukqSKwxW/YDrCPBeKW4bHAyv
j5OJrdu9o54hyokZ7N+1wxrrFv54NkzWbtA+FxyQF2smuvt6L78RHBgOLXMDj6Dl
NaBa4kx1HXHhOThTeEDMg5PXCp6dW4+K5OXgSORIskfNTip1KnvyIvbJvgmRlld6
iIis7nCs+dwp4wwcOxJORNanTrAmyPPZGpeRaOrvjUYG0lZFWJo8DA+DuAUlwznP
O6Q0ibd5Ei9Hxeepl2n8pndntd978XplFeRhVmUCAwEAAaNCMEAwDgYDVR0PAQH/
BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFIHEjMz15DD/pQwIX4wV
ZyF0Ad/fMA0GCSqGSIb3DQEBCwUAA4ICAQATZSL1jiutROTL/7lo5sOASD0Ee/oj
L3rtNtqyzm325p7lX1iPyzcyochltq44PTUbPrw7tgTQvPlJ9Zv3hcU2tsu8+Mg5
1eRfB70VVJd0ysrtT7q6ZHafgbiERUlMjW+i67HM0cOU2kTC5uLqGOiiHycFutfl
1qnN3e92mI0ADs0b+gO3joBYDic/UvuUospeZcnWhNq5NXHzJsBPd+aBJ9J3O5oU
b3n09tDh05S60FdRvScFDcH9yBIw7m+NESsIndTUv4BFFJqIRNow6rSn4+7vW4LV
PtateJLbXDzz2K36uGt/xDYotgIVilQsnLAXc47QN6MUPJiVAAwpBVueSUmxX8fj
y88nZY41F7dXyDDZQVu5FLbowg+UMaeUmMxq67XhJ/UQqAHojhJi6IjMtX9Gl8Cb
EGY4GjZGXyJoPd/JxhMnq1MGrKI8hgZlb7F+sSlEmqO6SWkoaY/X5V+tBIZkbxqg
DMUIYs6Ao9Dz7GjevjPHF1t/gMRMTLGmhIrDO7gJzRSBuhjjVFc2/tsvfEehOjPI
+Vg7RE+xygKJBJYoaMVLuCaJu9YzL1DV/pqJuhgyklTGW+Cd+V7lDSKb9triyCGy
YiGqhkCyLmTTX8jjfhFnRR8F/uOi77Oos/N9j/gMHyIfLXC0uAE0djAA5SN4p1bX
UB+K+wb1whnw0A==
-----END CERTIFICATE-----

# Issuer: CN=UCA Extended Validation Root O=UniTrust
# Subject: CN=UCA Extended Validation Root O=UniTrust
# Label: "UCA Extended Validation Root"
# Serial: 106100277556486529736699587978573607008
# MD5 Fingerprint: a1:f3:5f:43:c6:34:9b:da:bf:8c:7e:05:53:ad:96:e2
# SHA1 Fingerprint: a3:a1:b0:6f:24:61:23:4a:e3:36:a5:c2:37:fc:a6:ff:dd:f0:d7:3a
# SHA256 Fingerprint: d4:3a:f9:b3:54:73:75:5c:96:84:fc:06:d7:d8:cb:70:ee:5c:28:e7:73:fb:29:4e:b4:1e:e7:17:22:92:4d:24
-----BEGIN CERTIFICATE-----
MIIFWjCCA0KgAwIBAgIQT9Irj/VkyDOeTzRYZiNwYDANBgkqhkiG9w0BAQsFADBH
MQswCQYDVQQGEwJDTjERMA8GA1UECgwIVW5pVHJ1c3QxJTAjBgNVBAMMHFVDQSBF
eHRlbmRlZCBWYWxpZGF0aW9uIFJvb3QwHhcNMTUwMzEzMDAwMDAwWhcNMzgxMjMx
MDAwMDAwWjBHMQswCQYDVQQGEwJDTjERMA8GA1UECgwIVW5pVHJ1c3QxJTAjBgNV
BAMMHFVDQSBFeHRlbmRlZCBWYWxpZGF0aW9uIFJvb3QwggIiMA0GCSqGSIb3DQEB
AQUAA4ICDwAwggIKAoICAQCpCQcoEwKwmeBkqh5DFnpzsZGgdT6o+uM4AHrsiWog
D4vFsJszA1qGxliG1cGFu0/GnEBNyr7uaZa4rYEwmnySBesFK5pI0Lh2PpbIILvS
sPGP2KxFRv+qZ2C0d35qHzwaUnoEPQc8hQ2E0B92CvdqFN9y4zR8V05WAT558aop
O2z6+I9tTcg1367r3CTueUWnhbYFiN6IXSV8l2RnCdm/WhUFhvMJHuxYMjMR83dk
sHYf5BA1FxvyDrFspCqjc/wJHx4yGVMR59mzLC52LqGj3n5qiAno8geK+LLNEOfi
c0CTuwjRP+H8C5SzJe98ptfRr5//lpr1kXuYC3fUfugH0mK1lTnj8/FtDw5lhIpj
VMWAtuCeS31HJqcBCF3RiJ7XwzJE+oJKCmhUfzhTA8ykADNkUVkLo4KRel7sFsLz
KuZi2irbWWIQJUoqgQtHB0MGcIfS+pMRKXpITeuUx3BNr2fVUbGAIAEBtHoIppB/
TuDvB0GHr2qlXov7z1CymlSvw4m6WC31MJixNnI5fkkE/SmnTHnkBVfblLkWU41G
sx2VYVdWf6/wFlthWG82UBEL2KwrlRYaDh8IzTY0ZRBiZtWAXxQgXy0MoHgKaNYs
1+lvK9JKBZP8nm9rZ/+I8U6laUpSNwXqxhaN0sSZ0YIrO7o1dfdRUVjzyAfd5LQD
fwIDAQABo0IwQDAdBgNVHQ4EFgQU2XQ65DA9DfcS3H5aBZ8eNJr34RQwDwYDVR0T
AQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAYYwDQYJKoZIhvcNAQELBQADggIBADaN
l8xCFWQpN5smLNb7rhVpLGsaGvdftvkHTFnq88nIua7Mui563MD1sC3AO6+fcAUR
ap8lTwEpcOPlDOHqWnzcSbvBHiqB9RZLcpHIojG5qtr8nR/zXUACE/xOHAbKsxSQ
VBcZEhrxH9cMaVr2cXj0lH2RC47skFSOvG+hTKv8dGT9cZr4QQehzZHkPJrgmzI5
c6sq1WnIeJEmMX3ixzDx/BR4dxIOE/TdFpS/S2d7cFOFyrC78zhNLJA5wA3CXWvp
4uXViI3WLL+rG761KIcSF3Ru/H38j9CHJrAb+7lsq+KePRXBOy5nAliRn+/4Qh8s
t2j1da3Ptfb/EX3C8CSlrdP6oDyp+l3cpaDvRKS+1ujl5BOWF3sGPjLtx7dCvHaj
2GU4Kzg1USEODm8uNBNA4StnDG1KQTAYI1oyVZnJF+A83vbsea0rWBmirSwiGpWO
vpaQXUJXxPkUAzUrHC1RVwinOt4/5Mi0A3PCwSaAuwtCH60NryZy2sy+s6ODWA2C
xR9GUeOcGMyNm43sSet1UNWMKFnKdDTajAshqx7qG+XH/RU+wBeq+yNuJkbL+vmx
cmtpzyKEC2IPrNkZAJSidjzULZrtBJ4tBmIQN1IchXIbJ+XMxjHsN+xjWZsLHXbM
fjKaiJUINlK73nZfdklJrX+9ZSCyycErdhh2n1ax
-----END CERTIFICATE-----

# Issuer: CN=Certigna Root CA O=Dhimyotis OU=0002 48146308100036
# Subject: CN=Certigna Root CA O=Dhimyotis OU=0002 48146308100036
# Label: "Certigna Root CA"
# Serial: 269714418870597844693661054334862075617
# MD5 Fingerprint: 0e:5c:30:62:27:eb:5b:bc:d7:ae:62:ba:e9:d5:df:77
# SHA1 Fingerprint: 2d:0d:52:14:ff:9e:ad:99:24:01:74:20:47:6e:6c:85:27:27:f5:43
# SHA256 Fingerprint: d4:8d:3d:23:ee:db:50:a4:59:e5:51:97:60:1c:27:77:4b:9d:7b:18:c9:4d:5a:05:95:11:a1:02:50:b9:31:68
-----BEGIN CERTIFICATE-----
MIIGWzCCBEOgAwIBAgIRAMrpG4nxVQMNo+ZBbcTjpuEwDQYJKoZIhvcNAQELBQAw
WjELMAkGA1UEBhMCRlIxEjAQBgNVBAoMCURoaW15b3RpczEcMBoGA1UECwwTMDAw
MiA0ODE0NjMwODEwMDAzNjEZMBcGA1UEAwwQQ2VydGlnbmEgUm9vdCBDQTAeFw0x
MzEwMDEwODMyMjdaFw0zMzEwMDEwODMyMjdaMFoxCzAJBgNVBAYTAkZSMRIwEAYD
VQQKDAlEaGlteW90aXMxHDAaBgNVBAsMEzAwMDIgNDgxNDYzMDgxMDAwMzYxGTAX
BgNVBAMMEENlcnRpZ25hIFJvb3QgQ0EwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAw
ggIKAoICAQDNGDllGlmx6mQWDoyUJJV8g9PFOSbcDO8WV43X2KyjQn+Cyu3NW9sO
ty3tRQgXstmzy9YXUnIo245Onoq2C/mehJpNdt4iKVzSs9IGPjA5qXSjklYcoW9M
CiBtnyN6tMbaLOQdLNyzKNAT8kxOAkmhVECe5uUFoC2EyP+YbNDrihqECB63aCPu
I9Vwzm1RaRDuoXrC0SIxwoKF0vJVdlB8JXrJhFwLrN1CTivngqIkicuQstDuI7pm
TLtipPlTWmR7fJj6o0ieD5Wupxj0auwuA0Wv8HT4Ks16XdG+RCYyKfHx9WzMfgIh
C59vpD++nVPiz32pLHxYGpfhPTc3GGYo0kDFUYqMwy3OU4gkWGQwFsWq4NYKpkDf
ePb1BHxpE4S80dGnBs8B92jAqFe7OmGtBIyT46388NtEbVncSVmurJqZNjBBe3Yz
IoejwpKGbvlw7q6Hh5UbxHq9MfPU0uWZ/75I7HX1eBYdpnDBfzwboZL7z8g81sWT
Co/1VTp2lc5ZmIoJlXcymoO6LAQ6l73UL77XbJuiyn1tJslV1c/DeVIICZkHJC1k
JWumIWmbat10TWuXekG9qxf5kBdIjzb5LdXF2+6qhUVB+s06RbFo5jZMm5BX7CO5
hwjCxAnxl4YqKE3idMDaxIzb3+KhF1nOJFl0Mdp//TBt2dzhauH8XwIDAQABo4IB
GjCCARYwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYE
FBiHVuBud+4kNTxOc5of1uHieX4rMB8GA1UdIwQYMBaAFBiHVuBud+4kNTxOc5of
1uHieX4rMEQGA1UdIAQ9MDswOQYEVR0gADAxMC8GCCsGAQUFBwIBFiNodHRwczov
L3d3d3cuY2VydGlnbmEuZnIvYXV0b3JpdGVzLzBtBgNVHR8EZjBkMC+gLaArhilo
dHRwOi8vY3JsLmNlcnRpZ25hLmZyL2NlcnRpZ25hcm9vdGNhLmNybDAxoC+gLYYr
aHR0cDovL2NybC5kaGlteW90aXMuY29tL2NlcnRpZ25hcm9vdGNhLmNybDANBgkq
hkiG9w0BAQsFAAOCAgEAlLieT/DjlQgi581oQfccVdV8AOItOoldaDgvUSILSo3L
6btdPrtcPbEo/uRTVRPPoZAbAh1fZkYJMyjhDSSXcNMQH+pkV5a7XdrnxIxPTGRG
HVyH41neQtGbqH6mid2PHMkwgu07nM3A6RngatgCdTer9zQoKJHyBApPNeNgJgH6
0BGM+RFq7q89w1DTj18zeTyGqHNFkIwgtnJzFyO+B2XleJINugHA64wcZr+shncB
lA2c5uk5jR+mUYyZDDl34bSb+hxnV29qao6pK0xXeXpXIs/NX2NGjVxZOob4Mkdi
o2cNGJHc+6Zr9UhhcyNZjgKnvETq9Emd8VRY+WCv2hikLyhF3HqgiIZd8zvn/yk1
gPxkQ5Tm4xxvvq0OKmOZK8l+hfZx6AYDlf7ej0gcWtSS6Cvu5zHbugRqh5jnxV/v
faci9wHYTfmJ0A6aBVmknpjZbyvKcL5kwlWj9Omvw5Ip3IgWJJk8jSaYtlu3zM63
Nwf9JtmYhST/WSMDmu2dnajkXjjO11INb9I/bbEFa0nOipFGc/T2L/Coc3cOZayh
jWZSaX5LaAzHHjcng6WMxwLkFM1JAbBzs/3GkDpv0mztO+7skb6iQ12LAEpmJURw
3kAP+HwV96LOPNdeE4yBFxgX0b3xdxA61GU5wSesVywlVP+i2k+KYTlerj1KjL0=
-----END CERTIFICATE-----

# Issuer: CN=emSign Root CA - G1 O=eMudhra Technologies Limited OU=emSign PKI
# Subject: CN=emSign Root CA - G1 O=eMudhra Technologies Limited OU=emSign PKI
# Label: "emSign Root CA - G1"
# Serial: 235931866688319308814040
# MD5 Fingerprint: 9c:42:84:57:dd:cb:0b:a7:2e:95:ad:b6:f3:da:bc:ac
# SHA1 Fingerprint: 8a:c7:ad:8f:73:ac:4e:c1:b5:75:4d:a5:40:f4:fc:cf:7c:b5:8e:8c
# SHA256 Fingerprint: 40:f6:af:03:46:a9:9a:a1:cd:1d:55:5a:4e:9c:ce:62:c7:f9:63:46:03:ee:40:66:15:83:3d:c8:c8:d0:03:67
-----BEGIN CERTIFICATE-----
MIIDlDCCAnygAwIBAgIKMfXkYgxsWO3W2DANBgkqhkiG9w0BAQsFADBnMQswCQYD
VQQGEwJJTjETMBEGA1UECxMKZW1TaWduIFBLSTElMCMGA1UEChMcZU11ZGhyYSBU
ZWNobm9sb2dpZXMgTGltaXRlZDEcMBoGA1UEAxMTZW1TaWduIFJvb3QgQ0EgLSBH
MTAeFw0xODAyMTgxODMwMDBaFw00MzAyMTgxODMwMDBaMGcxCzAJBgNVBAYTAklO
MRMwEQYDVQQLEwplbVNpZ24gUEtJMSUwIwYDVQQKExxlTXVkaHJhIFRlY2hub2xv
Z2llcyBMaW1pdGVkMRwwGgYDVQQDExNlbVNpZ24gUm9vdCBDQSAtIEcxMIIBIjAN
BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAk0u76WaK7p1b1TST0Bsew+eeuGQz
f2N4aLTNLnF115sgxk0pvLZoYIr3IZpWNVrzdr3YzZr/k1ZLpVkGoZM0Kd0WNHVO
8oG0x5ZOrRkVUkr+PHB1cM2vK6sVmjM8qrOLqs1D/fXqcP/tzxE7lM5OMhbTI0Aq
d7OvPAEsbO2ZLIvZTmmYsvePQbAyeGHWDV/D+qJAkh1cF+ZwPjXnorfCYuKrpDhM
tTk1b+oDafo6VGiFbdbyL0NVHpENDtjVaqSW0RM8LHhQ6DqS0hdW5TUaQBw+jSzt
Od9C4INBdN+jzcKGYEho42kLVACL5HZpIQ15TjQIXhTCzLG3rdd8cIrHhQIDAQAB
o0IwQDAdBgNVHQ4EFgQU++8Nhp6w492pufEhF38+/PB3KxowDgYDVR0PAQH/BAQD
AgEGMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAFn/8oz1h31x
PaOfG1vR2vjTnGs2vZupYeveFix0PZ7mddrXuqe8QhfnPZHr5X3dPpzxz5KsbEjM
wiI/aTvFthUvozXGaCocV685743QNcMYDHsAVhzNixl03r4PEuDQqqE/AjSxcM6d
GNYIAwlG7mDgfrbESQRRfXBgvKqy/3lyeqYdPV8q+Mri/Tm3R7nrft8EI6/6nAYH
6ftjk4BAtcZsCjEozgyfz7MjNYBBjWzEN3uBL4ChQEKF6dk4jeihU80Bv2noWgby
RQuQ+q7hv53yrlc8pa6yVvSLZUDp/TGBLPQ5Cdjua6e0ph0VpZj3AYHYhX3zUVxx
iN66zB+Afko=
-----END CERTIFICATE-----

# Issuer: CN=emSign ECC Root CA - G3 O=eMudhra Technologies Limited OU=emSign PKI
# Subject: CN=emSign ECC Root CA - G3 O=eMudhra Technologies Limited OU=emSign PKI
# Label: "emSign ECC Root CA - G3"
# Serial: 287880440101571086945156
# MD5 Fingerprint: ce:0b:72:d1:9f:88:8e:d0:50:03:e8:e3:b8:8b:67:40
# SHA1 Fingerprint: 30:43:fa:4f:f2:57:dc:a0:c3:80:ee:2e:58:ea:78:b2:3f:e6:bb:c1
# SHA256 Fingerprint: 86:a1:ec:ba:08:9c:4a:8d:3b:be:27:34:c6:12:ba:34:1d:81:3e:04:3c:f9:e8:a8:62:cd:5c:57:a3:6b:be:6b
-----BEGIN CERTIFICATE-----
MIICTjCCAdOgAwIBAgIKPPYHqWhwDtqLhDAKBggqhkjOPQQDAzBrMQswCQYDVQQG
EwJJTjETMBEGA1UECxMKZW1TaWduIFBLSTElMCMGA1UEChMcZU11ZGhyYSBUZWNo
bm9sb2dpZXMgTGltaXRlZDEgMB4GA1UEAxMXZW1TaWduIEVDQyBSb290IENBIC0g
RzMwHhcNMTgwMjE4MTgzMDAwWhcNNDMwMjE4MTgzMDAwWjBrMQswCQYDVQQGEwJJ
TjETMBEGA1UECxMKZW1TaWduIFBLSTElMCMGA1UEChMcZU11ZGhyYSBUZWNobm9s
b2dpZXMgTGltaXRlZDEgMB4GA1UEAxMXZW1TaWduIEVDQyBSb290IENBIC0gRzMw
djAQBgcqhkjOPQIBBgUrgQQAIgNiAAQjpQy4LRL1KPOxst3iAhKAnjlfSU2fySU0
WXTsuwYc58Byr+iuL+FBVIcUqEqy6HyC5ltqtdyzdc6LBtCGI79G1Y4PPwT01xyS
fvalY8L1X44uT6EYGQIrMgqCZH0Wk9GjQjBAMB0GA1UdDgQWBBR8XQKEE9TMipuB
zhccLikenEhjQjAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAKBggq
hkjOPQQDAwNpADBmAjEAvvNhzwIQHWSVB7gYboiFBS+DCBeQyh+KTOgNG3qxrdWB
CUfvO6wIBHxcmbHtRwfSAjEAnbpV/KlK6O3t5nYBQnvI+GDZjVGLVTv7jHvrZQnD
+JbNR6iC8hZVdyR+EhCVBCyj
-----END CERTIFICATE-----

# Issuer: CN=emSign Root CA - C1 O=eMudhra Inc OU=emSign PKI
# Subject: CN=emSign Root CA - C1 O=eMudhra Inc OU=emSign PKI
# Label: "emSign Root CA - C1"
# Serial: 825510296613316004955058
# MD5 Fingerprint: d8:e3:5d:01:21:fa:78:5a:b0:df:ba:d2:ee:2a:5f:68
# SHA1 Fingerprint: e7:2e:f1:df:fc:b2:09:28:cf:5d:d4:d5:67:37:b1:51:cb:86:4f:01
# SHA256 Fingerprint: 12:56:09:aa:30:1d:a0:a2:49:b9:7a:82:39:cb:6a:34:21:6f:44:dc:ac:9f:39:54:b1:42:92:f2:e8:c8:60:8f
-----BEGIN CERTIFICATE-----
MIIDczCCAlugAwIBAgILAK7PALrEzzL4Q7IwDQYJKoZIhvcNAQELBQAwVjELMAkG
A1UEBhMCVVMxEzARBgNVBAsTCmVtU2lnbiBQS0kxFDASBgNVBAoTC2VNdWRocmEg
SW5jMRwwGgYDVQQDExNlbVNpZ24gUm9vdCBDQSAtIEMxMB4XDTE4MDIxODE4MzAw
MFoXDTQzMDIxODE4MzAwMFowVjELMAkGA1UEBhMCVVMxEzARBgNVBAsTCmVtU2ln
biBQS0kxFDASBgNVBAoTC2VNdWRocmEgSW5jMRwwGgYDVQQDExNlbVNpZ24gUm9v
dCBDQSAtIEMxMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAz+upufGZ
BczYKCFK83M0UYRWEPWgTywS4/oTmifQz/l5GnRfHXk5/Fv4cI7gklL35CX5VIPZ
HdPIWoU/Xse2B+4+wM6ar6xWQio5JXDWv7V7Nq2s9nPczdcdioOl+yuQFTdrHCZH
3DspVpNqs8FqOp099cGXOFgFixwR4+S0uF2FHYP+eF8LRWgYSKVGczQ7/g/IdrvH
GPMF0Ybzhe3nudkyrVWIzqa2kbBPrH4VI5b2P/AgNBbeCsbEBEV5f6f9vtKppa+c
xSMq9zwhbL2vj07FOrLzNBL834AaSaTUqZX3noleoomslMuoaJuvimUnzYnu3Yy1
aylwQ6BpC+S5DwIDAQABo0IwQDAdBgNVHQ4EFgQU/qHgcB4qAzlSWkK+XJGFehiq
TbUwDgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQEL
BQADggEBAMJKVvoVIXsoounlHfv4LcQ5lkFMOycsxGwYFYDGrK9HWS8mC+M2sO87
/kOXSTKZEhVb3xEp/6tT+LvBeA+snFOvV71ojD1pM/CjoCNjO2RnIkSt1XHLVip4
kqNPEjE2NuLe/gDEo2APJ62gsIq1NnpSob0n9CAnYuhNlCQT5AoE6TyrLshDCUrG
YQTlSTR+08TI9Q/Aqum6VF7zYytPT1DU/rl7mYw9wC68AivTxEDkigcxHpvOJpkT
+xHqmiIMERnHXhuBUDDIlhJu58tBf5E7oke3VIAb3ADMmpDqw8NQBmIMMMAVSKeo
WXzhriKi4gp6D/piq1JM4fHfyr6DDUI=
-----END CERTIFICATE-----

# Issuer: CN=emSign ECC Root CA - C3 O=eMudhra Inc OU=emSign PKI
# Subject: CN=emSign ECC Root CA - C3 O=eMudhra Inc OU=emSign PKI
# Label: "emSign ECC Root CA - C3"
# Serial: 582948710642506000014504
# MD5 Fingerprint: 3e:53:b3:a3:81:ee:d7:10:f8:d3:b0:1d:17:92:f5:d5
# SHA1 Fingerprint: b6:af:43:c2:9b:81:53:7d:f6:ef:6b:c3:1f:1f:60:15:0c:ee:48:66
# SHA256 Fingerprint: bc:4d:80:9b:15:18:9d:78:db:3e:1d:8c:f4:f9:72:6a:79:5d:a1:64:3c:a5:f1:35:8e:1d:db:0e:dc:0d:7e:b3
-----BEGIN CERTIFICATE-----
MIICKzCCAbGgAwIBAgIKe3G2gla4EnycqDAKBggqhkjOPQQDAzBaMQswCQYDVQQG
EwJVUzETMBEGA1UECxMKZW1TaWduIFBLSTEUMBIGA1UEChMLZU11ZGhyYSBJbmMx
IDAeBgNVBAMTF2VtU2lnbiBFQ0MgUm9vdCBDQSAtIEMzMB4XDTE4MDIxODE4MzAw
MFoXDTQzMDIxODE4MzAwMFowWjELMAkGA1UEBhMCVVMxEzARBgNVBAsTCmVtU2ln
biBQS0kxFDASBgNVBAoTC2VNdWRocmEgSW5jMSAwHgYDVQQDExdlbVNpZ24gRUND
IFJvb3QgQ0EgLSBDMzB2MBAGByqGSM49AgEGBSuBBAAiA2IABP2lYa57JhAd6bci
MK4G9IGzsUJxlTm801Ljr6/58pc1kjZGDoeVjbk5Wum739D+yAdBPLtVb4Ojavti
sIGJAnB9SMVK4+kiVCJNk7tCDK93nCOmfddhEc5lx/h//vXyqaNCMEAwHQYDVR0O
BBYEFPtaSNCAIEDyqOkAB2kZd6fmw/TPMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB
Af8EBTADAQH/MAoGCCqGSM49BAMDA2gAMGUCMQC02C8Cif22TGK6Q04ThHK1rt0c
3ta13FaPWEBaLd4gTCKDypOofu4SQMfWh0/434UCMBwUZOR8loMRnLDRWmFLpg9J
0wD8ofzkpf9/rdcw0Md3f76BB1UwUCAU9Vc4CqgxUQ==
-----END CERTIFICATE-----

# Issuer: CN=Hongkong Post Root CA 3 O=Hongkong Post
# Subject: CN=Hongkong Post Root CA 3 O=Hongkong Post
# Label: "Hongkong Post Root CA 3"
# Serial: 46170865288971385588281144162979347873371282084
# MD5 Fingerprint: 11:fc:9f:bd:73:30:02:8a:fd:3f:f3:58:b9:cb:20:f0
# SHA1 Fingerprint: 58:a2:d0:ec:20:52:81:5b:c1:f3:f8:64:02:24:4e:c2:8e:02:4b:02
# SHA256 Fingerprint: 5a:2f:c0:3f:0c:83:b0:90:bb:fa:40:60:4b:09:88:44:6c:76:36:18:3d:f9:84:6e:17:10:1a:44:7f:b8:ef:d6
-----BEGIN CERTIFICATE-----
MIIFzzCCA7egAwIBAgIUCBZfikyl7ADJk0DfxMauI7gcWqQwDQYJKoZIhvcNAQEL
BQAwbzELMAkGA1UEBhMCSEsxEjAQBgNVBAgTCUhvbmcgS29uZzESMBAGA1UEBxMJ
SG9uZyBLb25nMRYwFAYDVQQKEw1Ib25na29uZyBQb3N0MSAwHgYDVQQDExdIb25n
a29uZyBQb3N0IFJvb3QgQ0EgMzAeFw0xNzA2MDMwMjI5NDZaFw00MjA2MDMwMjI5
NDZaMG8xCzAJBgNVBAYTAkhLMRIwEAYDVQQIEwlIb25nIEtvbmcxEjAQBgNVBAcT
CUhvbmcgS29uZzEWMBQGA1UEChMNSG9uZ2tvbmcgUG9zdDEgMB4GA1UEAxMXSG9u
Z2tvbmcgUG9zdCBSb290IENBIDMwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIK
AoICAQCziNfqzg8gTr7m1gNt7ln8wlffKWihgw4+aMdoWJwcYEuJQwy51BWy7sFO
dem1p+/l6TWZ5Mwc50tfjTMwIDNT2aa71T4Tjukfh0mtUC1Qyhi+AViiE3CWu4mI
VoBc+L0sPOFMV4i707mV78vH9toxdCim5lSJ9UExyuUmGs2C4HDaOym71QP1mbpV
9WTRYA6ziUm4ii8F0oRFKHyPaFASePwLtVPLwpgchKOesL4jpNrcyCse2m5FHomY
2vkALgbpDDtw1VAliJnLzXNg99X/NWfFobxeq81KuEXryGgeDQ0URhLj0mRiikKY
vLTGCAj4/ahMZJx2Ab0vqWwzD9g/KLg8aQFChn5pwckGyuV6RmXpwtZQQS4/t+Tt
bNe/JgERohYpSms0BpDsE9K2+2p20jzt8NYt3eEV7KObLyzJPivkaTv/ciWxNoZb
x39ri1UbSsUgYT2uy1DhCDq+sI9jQVMwCFk8mB13umOResoQUGC/8Ne8lYePl8X+
l2oBlKN8W4UdKjk60FSh0Tlxnf0h+bV78OLgAo9uliQlLKAeLKjEiafv7ZkGL7YK
TE/bosw3Gq9HhS2KX8Q0NEwA/RiTZxPRN+ZItIsGxVd7GYYKecsAyVKvQv83j+Gj
Hno9UKtjBucVtT+2RTeUN7F+8kjDf8V1/peNRY8apxpyKBpADwIDAQABo2MwYTAP
BgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIBBjAfBgNVHSMEGDAWgBQXnc0e
i9Y5K3DTXNSguB+wAPzFYTAdBgNVHQ4EFgQUF53NHovWOStw01zUoLgfsAD8xWEw
DQYJKoZIhvcNAQELBQADggIBAFbVe27mIgHSQpsY1Q7XZiNc4/6gx5LS6ZStS6LG
7BJ8dNVI0lkUmcDrudHr9EgwW62nV3OZqdPlt9EuWSRY3GguLmLYauRwCy0gUCCk
MpXRAJi70/33MvJJrsZ64Ee+bs7Lo3I6LWldy8joRTnU+kLBEUx3XZL7av9YROXr
gZ6voJmtvqkBZss4HTzfQx/0TW60uhdG/H39h4F5ag0zD/ov+BS5gLNdTaqX4fnk
GMX41TiMJjz98iji7lpJiCzfeT2OnpA8vUFKOt1b9pq0zj8lMH8yfaIDlNDceqFS
3m6TjRgm/VWsvY+b0s+v54Ysyx8Jb6NvqYTUc79NoXQbTiNg8swOqn+knEwlqLJm
Ozj/2ZQw9nKEvmhVEA/GcywWaZMH/rFF7buiVWqw2rVKAiUnhde3t4ZEFolsgCs+
l6mc1X5VTMbeRRAc6uk7nwNT7u56AQIWeNTowr5GdogTPyK7SBIdUgC0An4hGh6c
JfTzPV4e0hz5sy229zdcxsshTrD3mUcYhcErulWuBurQB7Lcq9CClnXO0lD+mefP
L5/ndtFhKvshuzHQqp9HpLIiyhY6UFfEW0NnxWViA0kB60PZ2Pierc+xYw5F9KBa
LJstxabArahH9CdMOA0uG0k7UvToiIMrVCjU8jVStDKDYmlkDJGcn5fqdBb9HxEG
mpv0
-----END CERTIFICATE-----

# Issuer: CN=Entrust Root Certification Authority - G4 O=Entrust, Inc. OU=See www.entrust.net/legal-terms/(c) 2015 Entrust, Inc. - for authorized use only
# Subject: CN=Entrust Root Certification Authority - G4 O=Entrust, Inc. OU=See www.entrust.net/legal-terms/(c) 2015 Entrust, Inc. - for authorized use only
# Label: "Entrust Root Certification Authority - G4"
# Serial: 289383649854506086828220374796556676440
# MD5 Fingerprint: 89:53:f1:83:23:b7:7c:8e:05:f1:8c:71:38:4e:1f:88
# SHA1 Fingerprint: 14:88:4e:86:26:37:b0:26:af:59:62:5c:40:77:ec:35:29:ba:96:01
# SHA256 Fingerprint: db:35:17:d1:f6:73:2a:2d:5a:b9:7c:53:3e:c7:07:79:ee:32:70:a6:2f:b4:ac:42:38:37:24:60:e6:f0:1e:88
-----BEGIN CERTIFICATE-----
MIIGSzCCBDOgAwIBAgIRANm1Q3+vqTkPAAAAAFVlrVgwDQYJKoZIhvcNAQELBQAw
gb4xCzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1FbnRydXN0LCBJbmMuMSgwJgYDVQQL
Ex9TZWUgd3d3LmVudHJ1c3QubmV0L2xlZ2FsLXRlcm1zMTkwNwYDVQQLEzAoYykg
MjAxNSBFbnRydXN0LCBJbmMuIC0gZm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxMjAw
BgNVBAMTKUVudHJ1c3QgUm9vdCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eSAtIEc0
MB4XDTE1MDUyNzExMTExNloXDTM3MTIyNzExNDExNlowgb4xCzAJBgNVBAYTAlVT
MRYwFAYDVQQKEw1FbnRydXN0LCBJbmMuMSgwJgYDVQQLEx9TZWUgd3d3LmVudHJ1
c3QubmV0L2xlZ2FsLXRlcm1zMTkwNwYDVQQLEzAoYykgMjAxNSBFbnRydXN0LCBJ
bmMuIC0gZm9yIGF1dGhvcml6ZWQgdXNlIG9ubHkxMjAwBgNVBAMTKUVudHJ1c3Qg
Um9vdCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eSAtIEc0MIICIjANBgkqhkiG9w0B
AQEFAAOCAg8AMIICCgKCAgEAsewsQu7i0TD/pZJH4i3DumSXbcr3DbVZwbPLqGgZ
2K+EbTBwXX7zLtJTmeH+H17ZSK9dE43b/2MzTdMAArzE+NEGCJR5WIoV3imz/f3E
T+iq4qA7ec2/a0My3dl0ELn39GjUu9CH1apLiipvKgS1sqbHoHrmSKvS0VnM1n4j
5pds8ELl3FFLFUHtSUrJ3hCX1nbB76W1NhSXNdh4IjVS70O92yfbYVaCNNzLiGAM
C1rlLAHGVK/XqsEQe9IFWrhAnoanw5CGAlZSCXqc0ieCU0plUmr1POeo8pyvi73T
DtTUXm6Hnmo9RR3RXRv06QqsYJn7ibT/mCzPfB3pAqoEmh643IhuJbNsZvc8kPNX
wbMv9W3y+8qh+CmdRouzavbmZwe+LGcKKh9asj5XxNMhIWNlUpEbsZmOeX7m640A
2Vqq6nPopIICR5b+W45UYaPrL0swsIsjdXJ8ITzI9vF01Bx7owVV7rtNOzK+mndm
nqxpkCIHH2E6lr7lmk/MBTwoWdPBDFSoWWG9yHJM6Nyfh3+9nEg2XpWjDrk4JFX8
dWbrAuMINClKxuMrLzOg2qOGpRKX/YAr2hRC45K9PvJdXmd0LhyIRyk0X+IyqJwl
N4y6mACXi0mWHv0liqzc2thddG5msP9E36EYxr5ILzeUePiVSj9/E15dWf10hkNj
c0kCAwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYD
VR0OBBYEFJ84xFYjwznooHFs6FRM5Og6sb9nMA0GCSqGSIb3DQEBCwUAA4ICAQAS
5UKme4sPDORGpbZgQIeMJX6tuGguW8ZAdjwD+MlZ9POrYs4QjbRaZIxowLByQzTS
Gwv2LFPSypBLhmb8qoMi9IsabyZIrHZ3CL/FmFz0Jomee8O5ZDIBf9PD3Vht7LGr
hFV0d4QEJ1JrhkzO3bll/9bGXp+aEJlLdWr+aumXIOTkdnrG0CSqkM0gkLpHZPt/
B7NTeLUKYvJzQ85BK4FqLoUWlFPUa19yIqtRLULVAJyZv967lDtX/Zr1hstWO1uI
AeV8KEsD+UmDfLJ/fOPtjqF/YFOOVZ1QNBIPt5d7bIdKROf1beyAN/BYGW5KaHbw
H5Lk6rWS02FREAutp9lfx1/cH6NcjKF+m7ee01ZvZl4HliDtC3T7Zk6LERXpgUl+
b7DUUH8i119lAg2m9IUe2K4GS0qn0jFmwvjO5QimpAKWRGhXxNUzzxkvFMSUHHuk
2fCfDrGA4tGeEWSpiBE6doLlYsKA2KSD7ZPvfC+QsDJMlhVoSFLUmQjAJOgc47Ol
IQ6SwJAfzyBfyjs4x7dtOvPmRLgOMWuIjnDrnBdSqEGULoe256YSxXXfW8AKbnuk
5F6G+TaU33fD6Q3AOfF5u0aOq0NZJ7cguyPpVkAh7DE9ZapD8j3fcEThuk0mEDuY
n/PIjhs4ViFqUZPTkcpG2om3PVODLAgfi49T3f+sHw==
-----END CERTIFICATE-----

# Issuer: CN=Microsoft ECC Root Certificate Authority 2017 O=Microsoft Corporation
# Subject: CN=Microsoft ECC Root Certificate Authority 2017 O=Microsoft Corporation
# Label: "Microsoft ECC Root Certificate Authority 2017"
# Serial: 136839042543790627607696632466672567020
# MD5 Fingerprint: dd:a1:03:e6:4a:93:10:d1:bf:f0:19:42:cb:fe:ed:67
# SHA1 Fingerprint: 99:9a:64:c3:7f:f4:7d:9f:ab:95:f1:47:69:89:14:60:ee:c4:c3:c5
# SHA256 Fingerprint: 35:8d:f3:9d:76:4a:f9:e1:b7:66:e9:c9:72:df:35:2e:e1:5c:fa:c2:27:af:6a:d1:d7:0e:8e:4a:6e:dc:ba:02
-----BEGIN CERTIFICATE-----
MIICWTCCAd+gAwIBAgIQZvI9r4fei7FK6gxXMQHC7DAKBggqhkjOPQQDAzBlMQsw
CQYDVQQGEwJVUzEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMTYwNAYD
VQQDEy1NaWNyb3NvZnQgRUNDIFJvb3QgQ2VydGlmaWNhdGUgQXV0aG9yaXR5IDIw
MTcwHhcNMTkxMjE4MjMwNjQ1WhcNNDIwNzE4MjMxNjA0WjBlMQswCQYDVQQGEwJV
UzEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMTYwNAYDVQQDEy1NaWNy
b3NvZnQgRUNDIFJvb3QgQ2VydGlmaWNhdGUgQXV0aG9yaXR5IDIwMTcwdjAQBgcq
hkjOPQIBBgUrgQQAIgNiAATUvD0CQnVBEyPNgASGAlEvaqiBYgtlzPbKnR5vSmZR
ogPZnZH6thaxjG7efM3beaYvzrvOcS/lpaso7GMEZpn4+vKTEAXhgShC48Zo9OYb
hGBKia/teQ87zvH2RPUBeMCjVDBSMA4GA1UdDwEB/wQEAwIBhjAPBgNVHRMBAf8E
BTADAQH/MB0GA1UdDgQWBBTIy5lycFIM+Oa+sgRXKSrPQhDtNTAQBgkrBgEEAYI3
FQEEAwIBADAKBggqhkjOPQQDAwNoADBlAjBY8k3qDPlfXu5gKcs68tvWMoQZP3zV
L8KxzJOuULsJMsbG7X7JNpQS5GiFBqIb0C8CMQCZ6Ra0DvpWSNSkMBaReNtUjGUB
iudQZsIxtzm6uBoiB078a1QWIP8rtedMDE2mT3M=
-----END CERTIFICATE-----

# Issuer: CN=Microsoft RSA Root Certificate Authority 2017 O=Microsoft Corporation
# Subject: CN=Microsoft RSA Root Certificate Authority 2017 O=Microsoft Corporation
# Label: "Microsoft RSA Root Certificate Authority 2017"
# Serial: 40975477897264996090493496164228220339
# MD5 Fingerprint: 10:ff:00:ff:cf:c9:f8:c7:7a:c0:ee:35:8e:c9:0f:47
# SHA1 Fingerprint: 73:a5:e6:4a:3b:ff:83:16:ff:0e:dc:cc:61:8a:90:6e:4e:ae:4d:74
# SHA256 Fingerprint: c7:41:f7:0f:4b:2a:8d:88:bf:2e:71:c1:41:22:ef:53:ef:10:eb:a0:cf:a5:e6:4c:fa:20:f4:18:85:30:73:e0
-----BEGIN CERTIFICATE-----
MIIFqDCCA5CgAwIBAgIQHtOXCV/YtLNHcB6qvn9FszANBgkqhkiG9w0BAQwFADBl
MQswCQYDVQQGEwJVUzEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMTYw
NAYDVQQDEy1NaWNyb3NvZnQgUlNBIFJvb3QgQ2VydGlmaWNhdGUgQXV0aG9yaXR5
IDIwMTcwHhcNMTkxMjE4MjI1MTIyWhcNNDIwNzE4MjMwMDIzWjBlMQswCQYDVQQG
EwJVUzEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMTYwNAYDVQQDEy1N
aWNyb3NvZnQgUlNBIFJvb3QgQ2VydGlmaWNhdGUgQXV0aG9yaXR5IDIwMTcwggIi
MA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQDKW76UM4wplZEWCpW9R2LBifOZ
Nt9GkMml7Xhqb0eRaPgnZ1AzHaGm++DlQ6OEAlcBXZxIQIJTELy/xztokLaCLeX0
ZdDMbRnMlfl7rEqUrQ7eS0MdhweSE5CAg2Q1OQT85elss7YfUJQ4ZVBcF0a5toW1
HLUX6NZFndiyJrDKxHBKrmCk3bPZ7Pw71VdyvD/IybLeS2v4I2wDwAW9lcfNcztm
gGTjGqwu+UcF8ga2m3P1eDNbx6H7JyqhtJqRjJHTOoI+dkC0zVJhUXAoP8XFWvLJ
jEm7FFtNyP9nTUwSlq31/niol4fX/V4ggNyhSyL71Imtus5Hl0dVe49FyGcohJUc
aDDv70ngNXtk55iwlNpNhTs+VcQor1fznhPbRiefHqJeRIOkpcrVE7NLP8TjwuaG
YaRSMLl6IE9vDzhTyzMMEyuP1pq9KsgtsRx9S1HKR9FIJ3Jdh+vVReZIZZ2vUpC6
W6IYZVcSn2i51BVrlMRpIpj0M+Dt+VGOQVDJNE92kKz8OMHY4Xu54+OU4UZpyw4K
UGsTuqwPN1q3ErWQgR5WrlcihtnJ0tHXUeOrO8ZV/R4O03QK0dqq6mm4lyiPSMQH
+FJDOvTKVTUssKZqwJz58oHhEmrARdlns87/I6KJClTUFLkqqNfs+avNJVgyeY+Q
W5g5xAgGwax/Dj0ApQIDAQABo1QwUjAOBgNVHQ8BAf8EBAMCAYYwDwYDVR0TAQH/
BAUwAwEB/zAdBgNVHQ4EFgQUCctZf4aycI8awznjwNnpv7tNsiMwEAYJKwYBBAGC
NxUBBAMCAQAwDQYJKoZIhvcNAQEMBQADggIBAKyvPl3CEZaJjqPnktaXFbgToqZC
LgLNFgVZJ8og6Lq46BrsTaiXVq5lQ7GPAJtSzVXNUzltYkyLDVt8LkS/gxCP81OC
gMNPOsduET/m4xaRhPtthH80dK2Jp86519efhGSSvpWhrQlTM93uCupKUY5vVau6
tZRGrox/2KJQJWVggEbbMwSubLWYdFQl3JPk+ONVFT24bcMKpBLBaYVu32TxU5nh
SnUgnZUP5NbcA/FZGOhHibJXWpS2qdgXKxdJ5XbLwVaZOjex/2kskZGT4d9Mozd2
TaGf+G0eHdP67Pv0RR0Tbc/3WeUiJ3IrhvNXuzDtJE3cfVa7o7P4NHmJweDyAmH3
pvwPuxwXC65B2Xy9J6P9LjrRk5Sxcx0ki69bIImtt2dmefU6xqaWM/5TkshGsRGR
xpl/j8nWZjEgQRCHLQzWwa80mMpkg/sTV9HB8Dx6jKXB/ZUhoHHBk2dxEuqPiApp
GWSZI1b7rCoucL5mxAyE7+WL85MB+GqQk2dLsmijtWKP6T+MejteD+eMuMZ87zf9
dOLITzNy4ZQ5bb0Sr74MTnB8G2+NszKTc0QWbej09+CVgI+WXTik9KveCjCHk9hN
AHFiRSdLOkKEW39lt2c0Ui2cFmuqqNh7o0JMcccMyj6D5KbvtwEwXlGjefVwaaZB
RA+GsCyRxj3qrg+E
-----END CERTIFICATE-----

# Issuer: CN=e-Szigno Root CA 2017 O=Microsec Ltd.
# Subject: CN=e-Szigno Root CA 2017 O=Microsec Ltd.
# Label: "e-Szigno Root CA 2017"
# Serial: 411379200276854331539784714
# MD5 Fingerprint: de:1f:f6:9e:84:ae:a7:b4:21:ce:1e:58:7d:d1:84:98
# SHA1 Fingerprint: 89:d4:83:03:4f:9e:9a:48:80:5f:72:37:d4:a9:a6:ef:cb:7c:1f:d1
# SHA256 Fingerprint: be:b0:0b:30:83:9b:9b:c3:2c:32:e4:44:79:05:95:06:41:f2:64:21:b1:5e:d0:89:19:8b:51:8a:e2:ea:1b:99
-----BEGIN CERTIFICATE-----
MIICQDCCAeWgAwIBAgIMAVRI7yH9l1kN9QQKMAoGCCqGSM49BAMCMHExCzAJBgNV
BAYTAkhVMREwDwYDVQQHDAhCdWRhcGVzdDEWMBQGA1UECgwNTWljcm9zZWMgTHRk
LjEXMBUGA1UEYQwOVkFUSFUtMjM1ODQ0OTcxHjAcBgNVBAMMFWUtU3ppZ25vIFJv
b3QgQ0EgMjAxNzAeFw0xNzA4MjIxMjA3MDZaFw00MjA4MjIxMjA3MDZaMHExCzAJ
BgNVBAYTAkhVMREwDwYDVQQHDAhCdWRhcGVzdDEWMBQGA1UECgwNTWljcm9zZWMg
THRkLjEXMBUGA1UEYQwOVkFUSFUtMjM1ODQ0OTcxHjAcBgNVBAMMFWUtU3ppZ25v
IFJvb3QgQ0EgMjAxNzBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABJbcPYrYsHtv
xie+RJCxs1YVe45DJH0ahFnuY2iyxl6H0BVIHqiQrb1TotreOpCmYF9oMrWGQd+H
Wyx7xf58etqjYzBhMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0G
A1UdDgQWBBSHERUI0arBeAyxr87GyZDvvzAEwDAfBgNVHSMEGDAWgBSHERUI0arB
eAyxr87GyZDvvzAEwDAKBggqhkjOPQQDAgNJADBGAiEAtVfd14pVCzbhhkT61Nlo
jbjcI4qKDdQvfepz7L9NbKgCIQDLpbQS+ue16M9+k/zzNY9vTlp8tLxOsvxyqltZ
+efcMQ==
-----END CERTIFICATE-----

# Issuer: O=CERTSIGN SA OU=certSIGN ROOT CA G2
# Subject: O=CERTSIGN SA OU=certSIGN ROOT CA G2
# Label: "certSIGN Root CA G2"
# Serial: 313609486401300475190
# MD5 Fingerprint: 8c:f1:75:8a:c6:19:cf:94:b7:f7:65:20:87:c3:97:c7
# SHA1 Fingerprint: 26:f9:93:b4:ed:3d:28:27:b0:b9:4b:a7:e9:15:1d:a3:8d:92:e5:32
# SHA256 Fingerprint: 65:7c:fe:2f:a7:3f:aa:38:46:25:71:f3:32:a2:36:3a:46:fc:e7:02:09:51:71:07:02:cd:fb:b6:ee:da:33:05
-----BEGIN CERTIFICATE-----
MIIFRzCCAy+gAwIBAgIJEQA0tk7GNi02MA0GCSqGSIb3DQEBCwUAMEExCzAJBgNV
BAYTAlJPMRQwEgYDVQQKEwtDRVJUU0lHTiBTQTEcMBoGA1UECxMTY2VydFNJR04g
Uk9PVCBDQSBHMjAeFw0xNzAyMDYwOTI3MzVaFw00MjAyMDYwOTI3MzVaMEExCzAJ
BgNVBAYTAlJPMRQwEgYDVQQKEwtDRVJUU0lHTiBTQTEcMBoGA1UECxMTY2VydFNJ
R04gUk9PVCBDQSBHMjCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAMDF
dRmRfUR0dIf+DjuW3NgBFszuY5HnC2/OOwppGnzC46+CjobXXo9X69MhWf05N0Iw
vlDqtg+piNguLWkh59E3GE59kdUWX2tbAMI5Qw02hVK5U2UPHULlj88F0+7cDBrZ
uIt4ImfkabBoxTzkbFpG583H+u/E7Eu9aqSs/cwoUe+StCmrqzWaTOTECMYmzPhp
n+Sc8CnTXPnGFiWeI8MgwT0PPzhAsP6CRDiqWhqKa2NYOLQV07YRaXseVO6MGiKs
cpc/I1mbySKEwQdPzH/iV8oScLumZfNpdWO9lfsbl83kqK/20U6o2YpxJM02PbyW
xPFsqa7lzw1uKA2wDrXKUXt4FMMgL3/7FFXhEZn91QqhngLjYl/rNUssuHLoPj1P
rCy7Lobio3aP5ZMqz6WryFyNSwb/EkaseMsUBzXgqd+L6a8VTxaJW732jcZZroiF
DsGJ6x9nxUWO/203Nit4ZoORUSs9/1F3dmKh7Gc+PoGD4FapUB8fepmrY7+EF3fx
DTvf95xhszWYijqy7DwaNz9+j5LP2RIUZNoQAhVB/0/E6xyjyfqZ90bp4RjZsbgy
LcsUDFDYg2WD7rlcz8sFWkz6GZdr1l0T08JcVLwyc6B49fFtHsufpaafItzRUZ6C
eWRgKRM+o/1Pcmqr4tTluCRVLERLiohEnMqE0yo7AgMBAAGjQjBAMA8GA1UdEwEB
/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBSCIS1mxteg4BXrzkwJ
d8RgnlRuAzANBgkqhkiG9w0BAQsFAAOCAgEAYN4auOfyYILVAzOBywaK8SJJ6ejq
kX/GM15oGQOGO0MBzwdw5AgeZYWR5hEit/UCI46uuR59H35s5r0l1ZUa8gWmr4UC
b6741jH/JclKyMeKqdmfS0mbEVeZkkMR3rYzpMzXjWR91M08KCy0mpbqTfXERMQl
qiCA2ClV9+BB/AYm/7k29UMUA2Z44RGx2iBfRgB4ACGlHgAoYXhvqAEBj500mv/0
OJD7uNGzcgbJceaBxXntC6Z58hMLnPddDnskk7RI24Zf3lCGeOdA5jGokHZwYa+c
NywRtYK3qq4kNFtyDGkNzVmf9nGvnAvRCjj5BiKDUyUM/FHE5r7iOZULJK2v0ZXk
ltd0ZGtxTgI8qoXzIKNDOXZbbFD+mpwUHmUUihW9o4JFWklWatKcsWMy5WHgUyIO
pwpJ6st+H6jiYoD2EEVSmAYY3qXNL3+q1Ok+CHLsIwMCPKaq2LxndD0UF/tUSxfj
03k9bWtJySgOLnRQvwzZRjoQhsmnP+mg7H/rpXdYaXHmgwo38oZJar55CJD2AhZk
PuXaTH4MNMn5X7azKFGnpyuqSfqNZSlO42sTp5SjLVFteAxEy9/eCG/Oo2Sr05WE
1LlSVHJ7liXMvGnjSG4N0MedJ5qq+BOS3R7fY581qRY27Iy4g/Q9iY/NtBde17MX
QRBdJ3NghVdJIgc=
-----END CERTIFICATE-----

# Issuer: CN=Trustwave Global Certification Authority O=Trustwave Holdings, Inc.
# Subject: CN=Trustwave Global Certification Authority O=Trustwave Holdings, Inc.
# Label: "Trustwave Global Certification Authority"
# Serial: 1846098327275375458322922162
# MD5 Fingerprint: f8:1c:18:2d:2f:ba:5f:6d:a1:6c:bc:c7:ab:91:c7:0e
# SHA1 Fingerprint: 2f:8f:36:4f:e1:58:97:44:21:59:87:a5:2a:9a:d0:69:95:26:7f:b5
# SHA256 Fingerprint: 97:55:20:15:f5:dd:fc:3c:87:88:c0:06:94:45:55:40:88:94:45:00:84:f1:00:86:70:86:bc:1a:2b:b5:8d:c8
-----BEGIN CERTIFICATE-----
MIIF2jCCA8KgAwIBAgIMBfcOhtpJ80Y1LrqyMA0GCSqGSIb3DQEBCwUAMIGIMQsw
CQYDVQQGEwJVUzERMA8GA1UECAwISWxsaW5vaXMxEDAOBgNVBAcMB0NoaWNhZ28x
ITAfBgNVBAoMGFRydXN0d2F2ZSBIb2xkaW5ncywgSW5jLjExMC8GA1UEAwwoVHJ1
c3R3YXZlIEdsb2JhbCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTAeFw0xNzA4MjMx
OTM0MTJaFw00MjA4MjMxOTM0MTJaMIGIMQswCQYDVQQGEwJVUzERMA8GA1UECAwI
SWxsaW5vaXMxEDAOBgNVBAcMB0NoaWNhZ28xITAfBgNVBAoMGFRydXN0d2F2ZSBI
b2xkaW5ncywgSW5jLjExMC8GA1UEAwwoVHJ1c3R3YXZlIEdsb2JhbCBDZXJ0aWZp
Y2F0aW9uIEF1dGhvcml0eTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIB
ALldUShLPDeS0YLOvR29zd24q88KPuFd5dyqCblXAj7mY2Hf8g+CY66j96xz0Xzn
swuvCAAJWX/NKSqIk4cXGIDtiLK0thAfLdZfVaITXdHG6wZWiYj+rDKd/VzDBcdu
7oaJuogDnXIhhpCujwOl3J+IKMujkkkP7NAP4m1ET4BqstTnoApTAbqOl5F2brz8
1Ws25kCI1nsvXwXoLG0R8+eyvpJETNKXpP7ScoFDB5zpET71ixpZfR9oWN0EACyW
80OzfpgZdNmcc9kYvkHHNHnZ9GLCQ7mzJ7Aiy/k9UscwR7PJPrhq4ufogXBeQotP
JqX+OsIgbrv4Fo7NDKm0G2x2EOFYeUY+VM6AqFcJNykbmROPDMjWLBz7BegIlT1l
RtzuzWniTY+HKE40Cz7PFNm73bZQmq131BnW2hqIyE4bJ3XYsgjxroMwuREOzYfw
hI0Vcnyh78zyiGG69Gm7DIwLdVcEuE4qFC49DxweMqZiNu5m4iK4BUBjECLzMx10
coos9TkpoNPnG4CELcU9402x/RpvumUHO1jsQkUm+9jaJXLE9gCxInm943xZYkqc
BW89zubWR2OZxiRvchLIrH+QtAuRcOi35hYQcRfO3gZPSEF9NUqjifLJS3tBEW1n
twiYTOURGa5CgNz7kAXU+FDKvuStx8KU1xad5hePrzb7AgMBAAGjQjBAMA8GA1Ud
EwEB/wQFMAMBAf8wHQYDVR0OBBYEFJngGWcNYtt2s9o9uFvo/ULSMQ6HMA4GA1Ud
DwEB/wQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAgEAmHNw4rDT7TnsTGDZqRKGFx6W
0OhUKDtkLSGm+J1WE2pIPU/HPinbbViDVD2HfSMF1OQc3Og4ZYbFdada2zUFvXfe
uyk3QAUHw5RSn8pk3fEbK9xGChACMf1KaA0HZJDmHvUqoai7PF35owgLEQzxPy0Q
lG/+4jSHg9bP5Rs1bdID4bANqKCqRieCNqcVtgimQlRXtpla4gt5kNdXElE1GYhB
aCXUNxeEFfsBctyV3lImIJgm4nb1J2/6ADtKYdkNy1GTKv0WBpanI5ojSP5RvbbE
sLFUzt5sQa0WZ37b/TjNuThOssFgy50X31ieemKyJo90lZvkWx3SD92YHJtZuSPT
MaCm/zjdzyBP6VhWOmfD0faZmZ26NraAL4hHT4a/RDqA5Dccprrql5gR0IRiR2Qe
qu5AvzSxnI9O4fKSTx+O856X3vOmeWqJcU9LJxdI/uz0UA9PSX3MReO9ekDFQdxh
VicGaeVyQYHTtgGJoC86cnn+OjC/QezHYj6RS8fZMXZC+fc8Y+wmjHMMfRod6qh8
h6jCJ3zhM0EPz8/8AKAigJ5Kp28AsEFFtyLKaEjFQqKu3R3y4G5OBVixwJAWKqQ9
EEC+j2Jjg6mcgn0tAumDMHzLJ8n9HmYAsC7TIS+OMxZsmO0QqAfWzJPP29FpHOTK
yeC2nOnOcXHebD8WpHk=
-----END CERTIFICATE-----

# Issuer: CN=Trustwave Global ECC P256 Certification Authority O=Trustwave Holdings, Inc.
# Subject: CN=Trustwave Global ECC P256 Certification Authority O=Trustwave Holdings, Inc.
# Label: "Trustwave Global ECC P256 Certification Authority"
# Serial: 4151900041497450638097112925
# MD5 Fingerprint: 5b:44:e3:8d:5d:36:86:26:e8:0d:05:d2:59:a7:83:54
# SHA1 Fingerprint: b4:90:82:dd:45:0c:be:8b:5b:b1:66:d3:e2:a4:08:26:cd:ed:42:cf
# SHA256 Fingerprint: 94:5b:bc:82:5e:a5:54:f4:89:d1:fd:51:a7:3d:df:2e:a6:24:ac:70:19:a0:52:05:22:5c:22:a7:8c:cf:a8:b4
-----BEGIN CERTIFICATE-----
MIICYDCCAgegAwIBAgIMDWpfCD8oXD5Rld9dMAoGCCqGSM49BAMCMIGRMQswCQYD
VQQGEwJVUzERMA8GA1UECBMISWxsaW5vaXMxEDAOBgNVBAcTB0NoaWNhZ28xITAf
BgNVBAoTGFRydXN0d2F2ZSBIb2xkaW5ncywgSW5jLjE6MDgGA1UEAxMxVHJ1c3R3
YXZlIEdsb2JhbCBFQ0MgUDI1NiBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTAeFw0x
NzA4MjMxOTM1MTBaFw00MjA4MjMxOTM1MTBaMIGRMQswCQYDVQQGEwJVUzERMA8G
A1UECBMISWxsaW5vaXMxEDAOBgNVBAcTB0NoaWNhZ28xITAfBgNVBAoTGFRydXN0
d2F2ZSBIb2xkaW5ncywgSW5jLjE6MDgGA1UEAxMxVHJ1c3R3YXZlIEdsb2JhbCBF
Q0MgUDI1NiBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTBZMBMGByqGSM49AgEGCCqG
SM49AwEHA0IABH77bOYj43MyCMpg5lOcunSNGLB4kFKA3TjASh3RqMyTpJcGOMoN
FWLGjgEqZZ2q3zSRLoHB5DOSMcT9CTqmP62jQzBBMA8GA1UdEwEB/wQFMAMBAf8w
DwYDVR0PAQH/BAUDAwcGADAdBgNVHQ4EFgQUo0EGrJBt0UrrdaVKEJmzsaGLSvcw
CgYIKoZIzj0EAwIDRwAwRAIgB+ZU2g6gWrKuEZ+Hxbb/ad4lvvigtwjzRM4q3wgh
DDcCIC0mA6AFvWvR9lz4ZcyGbbOcNEhjhAnFjXca4syc4XR7
-----END CERTIFICATE-----

# Issuer: CN=Trustwave Global ECC P384 Certification Authority O=Trustwave Holdings, Inc.
# Subject: CN=Trustwave Global ECC P384 Certification Authority O=Trustwave Holdings, Inc.
# Label: "Trustwave Global ECC P384 Certification Authority"
# Serial: 2704997926503831671788816187
# MD5 Fingerprint: ea:cf:60:c4:3b:b9:15:29:40:a1:97:ed:78:27:93:d6
# SHA1 Fingerprint: e7:f3:a3:c8:cf:6f:c3:04:2e:6d:0e:67:32:c5:9e:68:95:0d:5e:d2
# SHA256 Fingerprint: 55:90:38:59:c8:c0:c3:eb:b8:75:9e:ce:4e:25:57:22:5f:f5:75:8b:bd:38:eb:d4:82:76:60:1e:1b:d5:80:97
-----BEGIN CERTIFICATE-----
MIICnTCCAiSgAwIBAgIMCL2Fl2yZJ6SAaEc7MAoGCCqGSM49BAMDMIGRMQswCQYD
VQQGEwJVUzERMA8GA1UECBMISWxsaW5vaXMxEDAOBgNVBAcTB0NoaWNhZ28xITAf
BgNVBAoTGFRydXN0d2F2ZSBIb2xkaW5ncywgSW5jLjE6MDgGA1UEAxMxVHJ1c3R3
YXZlIEdsb2JhbCBFQ0MgUDM4NCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTAeFw0x
NzA4MjMxOTM2NDNaFw00MjA4MjMxOTM2NDNaMIGRMQswCQYDVQQGEwJVUzERMA8G
A1UECBMISWxsaW5vaXMxEDAOBgNVBAcTB0NoaWNhZ28xITAfBgNVBAoTGFRydXN0
d2F2ZSBIb2xkaW5ncywgSW5jLjE6MDgGA1UEAxMxVHJ1c3R3YXZlIEdsb2JhbCBF
Q0MgUDM4NCBDZXJ0aWZpY2F0aW9uIEF1dGhvcml0eTB2MBAGByqGSM49AgEGBSuB
BAAiA2IABGvaDXU1CDFHBa5FmVXxERMuSvgQMSOjfoPTfygIOiYaOs+Xgh+AtycJ
j9GOMMQKmw6sWASr9zZ9lCOkmwqKi6vr/TklZvFe/oyujUF5nQlgziip04pt89ZF
1PKYhDhloKNDMEEwDwYDVR0TAQH/BAUwAwEB/zAPBgNVHQ8BAf8EBQMDBwYAMB0G
A1UdDgQWBBRVqYSJ0sEyvRjLbKYHTsjnnb6CkDAKBggqhkjOPQQDAwNnADBkAjA3
AZKXRRJ+oPM+rRk6ct30UJMDEr5E0k9BpIycnR+j9sKS50gU/k6bpZFXrsY3crsC
MGclCrEMXu6pY5Jv5ZAL/mYiykf9ijH3g/56vxC+GCsej/YpHpRZ744hN8tRmKVu
Sw==
-----END CERTIFICATE-----

# Issuer: CN=NAVER Global Root Certification Authority O=NAVER BUSINESS PLATFORM Corp.
# Subject: CN=NAVER Global Root Certification Authority O=NAVER BUSINESS PLATFORM Corp.
# Label: "NAVER Global Root Certification Authority"
# Serial: 9013692873798656336226253319739695165984492813
# MD5 Fingerprint: c8:7e:41:f6:25:3b:f5:09:b3:17:e8:46:3d:bf:d0:9b
# SHA1 Fingerprint: 8f:6b:f2:a9:27:4a:da:14:a0:c4:f4:8e:61:27:f9:c0:1e:78:5d:d1
# SHA256 Fingerprint: 88:f4:38:dc:f8:ff:d1:fa:8f:42:91:15:ff:e5:f8:2a:e1:e0:6e:0c:70:c3:75:fa:ad:71:7b:34:a4:9e:72:65
-----BEGIN CERTIFICATE-----
MIIFojCCA4qgAwIBAgIUAZQwHqIL3fXFMyqxQ0Rx+NZQTQ0wDQYJKoZIhvcNAQEM
BQAwaTELMAkGA1UEBhMCS1IxJjAkBgNVBAoMHU5BVkVSIEJVU0lORVNTIFBMQVRG
T1JNIENvcnAuMTIwMAYDVQQDDClOQVZFUiBHbG9iYWwgUm9vdCBDZXJ0aWZpY2F0
aW9uIEF1dGhvcml0eTAeFw0xNzA4MTgwODU4NDJaFw0zNzA4MTgyMzU5NTlaMGkx
CzAJBgNVBAYTAktSMSYwJAYDVQQKDB1OQVZFUiBCVVNJTkVTUyBQTEFURk9STSBD
b3JwLjEyMDAGA1UEAwwpTkFWRVIgR2xvYmFsIFJvb3QgQ2VydGlmaWNhdGlvbiBB
dXRob3JpdHkwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQC21PGTXLVA
iQqrDZBbUGOukJR0F0Vy1ntlWilLp1agS7gvQnXp2XskWjFlqxcX0TM62RHcQDaH
38dq6SZeWYp34+hInDEW+j6RscrJo+KfziFTowI2MMtSAuXaMl3Dxeb57hHHi8lE
HoSTGEq0n+USZGnQJoViAbbJAh2+g1G7XNr4rRVqmfeSVPc0W+m/6imBEtRTkZaz
kVrd/pBzKPswRrXKCAfHcXLJZtM0l/aM9BhK4dA9WkW2aacp+yPOiNgSnABIqKYP
szuSjXEOdMWLyEz59JuOuDxp7W87UC9Y7cSw0BwbagzivESq2M0UXZR4Yb8Obtoq
vC8MC3GmsxY/nOb5zJ9TNeIDoKAYv7vxvvTWjIcNQvcGufFt7QSUqP620wbGQGHf
nZ3zVHbOUzoBppJB7ASjjw2i1QnK1sua8e9DXcCrpUHPXFNwcMmIpi3Ua2FzUCaG
YQ5fG8Ir4ozVu53BA0K6lNpfqbDKzE0K70dpAy8i+/Eozr9dUGWokG2zdLAIx6yo
0es+nPxdGoMuK8u180SdOqcXYZaicdNwlhVNt0xz7hlcxVs+Qf6sdWA7G2POAN3a
CJBitOUt7kinaxeZVL6HSuOpXgRM6xBtVNbv8ejyYhbLgGvtPe31HzClrkvJE+2K
AQHJuFFYwGY6sWZLxNUxAmLpdIQM201GLQIDAQABo0IwQDAdBgNVHQ4EFgQU0p+I
36HNLL3s9TsBAZMzJ7LrYEswDgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMB
Af8wDQYJKoZIhvcNAQEMBQADggIBADLKgLOdPVQG3dLSLvCkASELZ0jKbY7gyKoN
qo0hV4/GPnrK21HUUrPUloSlWGB/5QuOH/XcChWB5Tu2tyIvCZwTFrFsDDUIbatj
cu3cvuzHV+YwIHHW1xDBE1UBjCpD5EHxzzp6U5LOogMFDTjfArsQLtk70pt6wKGm
+LUx5vR1yblTmXVHIloUFcd4G7ad6Qz4G3bxhYTeodoS76TiEJd6eN4MUZeoIUCL
hr0N8F5OSza7OyAfikJW4Qsav3vQIkMsRIz75Sq0bBwcupTgE34h5prCy8VCZLQe
lHsIJchxzIdFV4XTnyliIoNRlwAYl3dqmJLJfGBs32x9SuRwTMKeuB330DTHD8z7
p/8Dvq1wkNoL3chtl1+afwkyQf3NosxabUzyqkn+Zvjp2DXrDige7kgvOtB5CTh8
piKCk5XQA76+AqAF3SAi428diDRgxuYKuQl1C/AH6GmWNcf7I4GOODm4RStDeKLR
LBT/DShycpWbXgnbiUSYqqFJu3FS8r/2/yehNq+4tneI3TqkbZs0kNwUXTC/t+sX
5Ie3cdCh13cV1ELX8vMxmV2b3RZtP+oGI/hGoiLtk/bdmuYqh7GYVPEi92tF4+KO
dh2ajcQGjTa3FPOdVGm3jjzVpG2Tgbet9r1ke8LJaDmgkpzNNIaRkPpkUZ3+/uul
9XXeifdy
-----END CERTIFICATE-----

# Issuer: CN=AC RAIZ FNMT-RCM SERVIDORES SEGUROS O=FNMT-RCM OU=Ceres
# Subject: CN=AC RAIZ FNMT-RCM SERVIDORES SEGUROS O=FNMT-RCM OU=Ceres
# Label: "AC RAIZ FNMT-RCM SERVIDORES SEGUROS"
# Serial: 131542671362353147877283741781055151509
# MD5 Fingerprint: 19:36:9c:52:03:2f:d2:d1:bb:23:cc:dd:1e:12:55:bb
# SHA1 Fingerprint: 62:ff:d9:9e:c0:65:0d:03:ce:75:93:d2:ed:3f:2d:32:c9:e3:e5:4a
# SHA256 Fingerprint: 55:41:53:b1:3d:2c:f9:dd:b7:53:bf:be:1a:4e:0a:e0:8d:0a:a4:18:70:58:fe:60:a2:b8:62:b2:e4:b8:7b:cb
-----BEGIN CERTIFICATE-----
MIICbjCCAfOgAwIBAgIQYvYybOXE42hcG2LdnC6dlTAKBggqhkjOPQQDAzB4MQsw
CQYDVQQGEwJFUzERMA8GA1UECgwIRk5NVC1SQ00xDjAMBgNVBAsMBUNlcmVzMRgw
FgYDVQRhDA9WQVRFUy1RMjgyNjAwNEoxLDAqBgNVBAMMI0FDIFJBSVogRk5NVC1S
Q00gU0VSVklET1JFUyBTRUdVUk9TMB4XDTE4MTIyMDA5MzczM1oXDTQzMTIyMDA5
MzczM1oweDELMAkGA1UEBhMCRVMxETAPBgNVBAoMCEZOTVQtUkNNMQ4wDAYDVQQL
DAVDZXJlczEYMBYGA1UEYQwPVkFURVMtUTI4MjYwMDRKMSwwKgYDVQQDDCNBQyBS
QUlaIEZOTVQtUkNNIFNFUlZJRE9SRVMgU0VHVVJPUzB2MBAGByqGSM49AgEGBSuB
BAAiA2IABPa6V1PIyqvfNkpSIeSX0oNnnvBlUdBeh8dHsVnyV0ebAAKTRBdp20LH
sbI6GA60XYyzZl2hNPk2LEnb80b8s0RpRBNm/dfF/a82Tc4DTQdxz69qBdKiQ1oK
Um8BA06Oi6NCMEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYD
VR0OBBYEFAG5L++/EYZg8k/QQW6rcx/n0m5JMAoGCCqGSM49BAMDA2kAMGYCMQCu
SuMrQMN0EfKVrRYj3k4MGuZdpSRea0R7/DjiT8ucRRcRTBQnJlU5dUoDzBOQn5IC
MQD6SmxgiHPz7riYYqnOK8LZiqZwMR2vsJRM60/G49HzYqc8/5MuB1xJAWdpEgJy
v+c=
-----END CERTIFICATE-----

# Issuer: CN=GlobalSign Root R46 O=GlobalSign nv-sa
# Subject: CN=GlobalSign Root R46 O=GlobalSign nv-sa
# Label: "GlobalSign Root R46"
# Serial: 1552617688466950547958867513931858518042577
# MD5 Fingerprint: c4:14:30:e4:fa:66:43:94:2a:6a:1b:24:5f:19:d0:ef
# SHA1 Fingerprint: 53:a2:b0:4b:ca:6b:d6:45:e6:39:8a:8e:c4:0d:d2:bf:77:c3:a2:90
# SHA256 Fingerprint: 4f:a3:12:6d:8d:3a:11:d1:c4:85:5a:4f:80:7c:ba:d6:cf:91:9d:3a:5a:88:b0:3b:ea:2c:63:72:d9:3c:40:c9
-----BEGIN CERTIFICATE-----
MIIFWjCCA0KgAwIBAgISEdK7udcjGJ5AXwqdLdDfJWfRMA0GCSqGSIb3DQEBDAUA
MEYxCzAJBgNVBAYTAkJFMRkwFwYDVQQKExBHbG9iYWxTaWduIG52LXNhMRwwGgYD
VQQDExNHbG9iYWxTaWduIFJvb3QgUjQ2MB4XDTE5MDMyMDAwMDAwMFoXDTQ2MDMy
MDAwMDAwMFowRjELMAkGA1UEBhMCQkUxGTAXBgNVBAoTEEdsb2JhbFNpZ24gbnYt
c2ExHDAaBgNVBAMTE0dsb2JhbFNpZ24gUm9vdCBSNDYwggIiMA0GCSqGSIb3DQEB
AQUAA4ICDwAwggIKAoICAQCsrHQy6LNl5brtQyYdpokNRbopiLKkHWPd08EsCVeJ
OaFV6Wc0dwxu5FUdUiXSE2te4R2pt32JMl8Nnp8semNgQB+msLZ4j5lUlghYruQG
vGIFAha/r6gjA7aUD7xubMLL1aa7DOn2wQL7Id5m3RerdELv8HQvJfTqa1VbkNud
316HCkD7rRlr+/fKYIje2sGP1q7Vf9Q8g+7XFkyDRTNrJ9CG0Bwta/OrffGFqfUo
0q3v84RLHIf8E6M6cqJaESvWJ3En7YEtbWaBkoe0G1h6zD8K+kZPTXhc+CtI4wSE
y132tGqzZfxCnlEmIyDLPRT5ge1lFgBPGmSXZgjPjHvjK8Cd+RTyG/FWaha/LIWF
zXg4mutCagI0GIMXTpRW+LaCtfOW3T3zvn8gdz57GSNrLNRyc0NXfeD412lPFzYE
+cCQYDdF3uYM2HSNrpyibXRdQr4G9dlkbgIQrImwTDsHTUB+JMWKmIJ5jqSngiCN
I/onccnfxkF0oE32kRbcRoxfKWMxWXEM2G/CtjJ9++ZdU6Z+Ffy7dXxd7Pj2Fxzs
x2sZy/N78CsHpdlseVR2bJ0cpm4O6XkMqCNqo98bMDGfsVR7/mrLZqrcZdCinkqa
ByFrgY/bxFn63iLABJzjqls2k+g9vXqhnQt2sQvHnf3PmKgGwvgqo6GDoLclcqUC
4wIDAQABo0IwQDAOBgNVHQ8BAf8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zAdBgNV
HQ4EFgQUA1yrc4GHqMywptWU4jaWSf8FmSwwDQYJKoZIhvcNAQEMBQADggIBAHx4
7PYCLLtbfpIrXTncvtgdokIzTfnvpCo7RGkerNlFo048p9gkUbJUHJNOxO97k4Vg
JuoJSOD1u8fpaNK7ajFxzHmuEajwmf3lH7wvqMxX63bEIaZHU1VNaL8FpO7XJqti
2kM3S+LGteWygxk6x9PbTZ4IevPuzz5i+6zoYMzRx6Fcg0XERczzF2sUyQQCPtIk
pnnpHs6i58FZFZ8d4kuaPp92CC1r2LpXFNqD6v6MVenQTqnMdzGxRBF6XLE+0xRF
FRhiJBPSy03OXIPBNvIQtQ6IbbjhVp+J3pZmOUdkLG5NrmJ7v2B0GbhWrJKsFjLt
rWhV/pi60zTe9Mlhww6G9kuEYO4Ne7UyWHmRVSyBQ7N0H3qqJZ4d16GLuc1CLgSk
ZoNNiTW2bKg2SnkheCLQQrzRQDGQob4Ez8pn7fXwgNNgyYMqIgXQBztSvwyeqiv5
u+YfjyW6hY0XHgL+XVAEV8/+LbzvXMAaq7afJMbfc2hIkCwU9D9SGuTSyxTDYWnP
4vkYxboznxSjBF25cfe1lNj2M8FawTSLfJvdkzrnE6JwYZ+vj+vYxXX4M2bUdGc6
N3ec592kD3ZDZopD8p/7DEJ4Y9HiD2971KE9dJeFt0g5QdYg/NA6s/rob8SKunE3
vouXsXgxT7PntgMTzlSdriVZzH81Xwj3QEUxeCp6
-----END CERTIFICATE-----

# Issuer: CN=GlobalSign Root E46 O=GlobalSign nv-sa
# Subject: CN=GlobalSign Root E46 O=GlobalSign nv-sa
# Label: "GlobalSign Root E46"
# Serial: 1552617690338932563915843282459653771421763
# MD5 Fingerprint: b5:b8:66:ed:de:08:83:e3:c9:e2:01:34:06:ac:51:6f
# SHA1 Fingerprint: 39:b4:6c:d5:fe:80:06:eb:e2:2f:4a:bb:08:33:a0:af:db:b9:dd:84
# SHA256 Fingerprint: cb:b9:c4:4d:84:b8:04:3e:10:50:ea:31:a6:9f:51:49:55:d7:bf:d2:e2:c6:b4:93:01:01:9a:d6:1d:9f:50:58
-----BEGIN CERTIFICATE-----
MIICCzCCAZGgAwIBAgISEdK7ujNu1LzmJGjFDYQdmOhDMAoGCCqGSM49BAMDMEYx
CzAJBgNVBAYTAkJFMRkwFwYDVQQKExBHbG9iYWxTaWduIG52LXNhMRwwGgYDVQQD
ExNHbG9iYWxTaWduIFJvb3QgRTQ2MB4XDTE5MDMyMDAwMDAwMFoXDTQ2MDMyMDAw
MDAwMFowRjELMAkGA1UEBhMCQkUxGTAXBgNVBAoTEEdsb2JhbFNpZ24gbnYtc2Ex
HDAaBgNVBAMTE0dsb2JhbFNpZ24gUm9vdCBFNDYwdjAQBgcqhkjOPQIBBgUrgQQA
IgNiAAScDrHPt+ieUnd1NPqlRqetMhkytAepJ8qUuwzSChDH2omwlwxwEwkBjtjq
R+q+soArzfwoDdusvKSGN+1wCAB16pMLey5SnCNoIwZD7JIvU4Tb+0cUB+hflGdd
yXqBPCCjQjBAMA4GA1UdDwEB/wQEAwIBhjAPBgNVHRMBAf8EBTADAQH/MB0GA1Ud
DgQWBBQxCpCPtsad0kRLgLWi5h+xEk8blTAKBggqhkjOPQQDAwNoADBlAjEA31SQ
7Zvvi5QCkxeCmb6zniz2C5GMn0oUsfZkvLtoURMMA/cVi4RguYv/Uo7njLwcAjA8
+RHUjE7AwWHCFUyqqx0LMV87HOIAl0Qx5v5zli/altP+CAezNIm8BZ/3Hobui3A=
-----END CERTIFICATE-----

# Issuer: CN=ANF Secure Server Root CA O=ANF Autoridad de Certificacion OU=ANF CA Raiz
# Subject: CN=ANF Secure Server Root CA O=ANF Autoridad de Certificacion OU=ANF CA Raiz
# Label: "ANF Secure Server Root CA"
# Serial: 996390341000653745
# MD5 Fingerprint: 26:a6:44:5a:d9:af:4e:2f:b2:1d:b6:65:b0:4e:e8:96
# SHA1 Fingerprint: 5b:6e:68:d0:cc:15:b6:a0:5f:1e:c1:5f:ae:02:fc:6b:2f:5d:6f:74
# SHA256 Fingerprint: fb:8f:ec:75:91:69:b9:10:6b:1e:51:16:44:c6:18:c5:13:04:37:3f:6c:06:43:08:8d:8b:ef:fd:1b:99:75:99
-----BEGIN CERTIFICATE-----
MIIF7zCCA9egAwIBAgIIDdPjvGz5a7EwDQYJKoZIhvcNAQELBQAwgYQxEjAQBgNV
BAUTCUc2MzI4NzUxMDELMAkGA1UEBhMCRVMxJzAlBgNVBAoTHkFORiBBdXRvcmlk
YWQgZGUgQ2VydGlmaWNhY2lvbjEUMBIGA1UECxMLQU5GIENBIFJhaXoxIjAgBgNV
BAMTGUFORiBTZWN1cmUgU2VydmVyIFJvb3QgQ0EwHhcNMTkwOTA0MTAwMDM4WhcN
MzkwODMwMTAwMDM4WjCBhDESMBAGA1UEBRMJRzYzMjg3NTEwMQswCQYDVQQGEwJF
UzEnMCUGA1UEChMeQU5GIEF1dG9yaWRhZCBkZSBDZXJ0aWZpY2FjaW9uMRQwEgYD
VQQLEwtBTkYgQ0EgUmFpejEiMCAGA1UEAxMZQU5GIFNlY3VyZSBTZXJ2ZXIgUm9v
dCBDQTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBANvrayvmZFSVgpCj
cqQZAZ2cC4Ffc0m6p6zzBE57lgvsEeBbphzOG9INgxwruJ4dfkUyYA8H6XdYfp9q
yGFOtibBTI3/TO80sh9l2Ll49a2pcbnvT1gdpd50IJeh7WhM3pIXS7yr/2WanvtH
2Vdy8wmhrnZEE26cLUQ5vPnHO6RYPUG9tMJJo8gN0pcvB2VSAKduyK9o7PQUlrZX
H1bDOZ8rbeTzPvY1ZNoMHKGESy9LS+IsJJ1tk0DrtSOOMspvRdOoiXsezx76W0OL
zc2oD2rKDF65nkeP8Nm2CgtYZRczuSPkdxl9y0oukntPLxB3sY0vaJxizOBQ+OyR
p1RMVwnVdmPF6GUe7m1qzwmd+nxPrWAI/VaZDxUse6mAq4xhj0oHdkLePfTdsiQz
W7i1o0TJrH93PB0j7IKppuLIBkwC/qxcmZkLLxCKpvR/1Yd0DVlJRfbwcVw5Kda/
SiOL9V8BY9KHcyi1Swr1+KuCLH5zJTIdC2MKF4EA/7Z2Xue0sUDKIbvVgFHlSFJn
LNJhiQcND85Cd8BEc5xEUKDbEAotlRyBr+Qc5RQe8TZBAQIvfXOn3kLMTOmJDVb3
n5HUA8ZsyY/b2BzgQJhdZpmYgG4t/wHFzstGH6wCxkPmrqKEPMVOHj1tyRRM4y5B
u8o5vzY8KhmqQYdOpc5LMnndkEl/AgMBAAGjYzBhMB8GA1UdIwQYMBaAFJxf0Gxj
o1+TypOYCK2Mh6UsXME3MB0GA1UdDgQWBBScX9BsY6Nfk8qTmAitjIelLFzBNzAO
BgNVHQ8BAf8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOC
AgEATh65isagmD9uw2nAalxJUqzLK114OMHVVISfk/CHGT0sZonrDUL8zPB1hT+L
9IBdeeUXZ701guLyPI59WzbLWoAAKfLOKyzxj6ptBZNscsdW699QIyjlRRA96Gej
rw5VD5AJYu9LWaL2U/HANeQvwSS9eS9OICI7/RogsKQOLHDtdD+4E5UGUcjohybK
pFtqFiGS3XNgnhAY3jyB6ugYw3yJ8otQPr0R4hUDqDZ9MwFsSBXXiJCZBMXM5gf0
vPSQ7RPi6ovDj6MzD8EpTBNO2hVWcXNyglD2mjN8orGoGjR0ZVzO0eurU+AagNjq
OknkJjCb5RyKqKkVMoaZkgoQI1YS4PbOTOK7vtuNknMBZi9iPrJyJ0U27U1W45eZ
/zo1PqVUSlJZS2Db7v54EX9K3BR5YLZrZAPbFYPhor72I5dQ8AkzNqdxliXzuUJ9
2zg/LFis6ELhDtjTO0wugumDLmsx2d1Hhk9tl5EuT+IocTUW0fJz/iUrB0ckYyfI
+PbZa/wSMVYIwFNCr5zQM378BvAxRAMU8Vjq8moNqRGyg77FGr8H6lnco4g175x2
MjxNBiLOFeXdntiP2t7SxDnlF4HPOEfrf4htWRvfn0IUrn7PqLBmZdo3r5+qPeoo
tt7VMVgWglvquxl1AnMaykgaIZOQCo6ThKd9OyMYkomgjaw=
-----END CERTIFICATE-----

# Issuer: CN=Certum EC-384 CA O=Asseco Data Systems S.A. OU=Certum Certification Authority
# Subject: CN=Certum EC-384 CA O=Asseco Data Systems S.A. OU=Certum Certification Authority
# Label: "Certum EC-384 CA"
# Serial: 160250656287871593594747141429395092468
# MD5 Fingerprint: b6:65:b3:96:60:97:12:a1:ec:4e:e1:3d:a3:c6:c9:f1
# SHA1 Fingerprint: f3:3e:78:3c:ac:df:f4:a2:cc:ac:67:55:69:56:d7:e5:16:3c:e1:ed
# SHA256 Fingerprint: 6b:32:80:85:62:53:18:aa:50:d1:73:c9:8d:8b:da:09:d5:7e:27:41:3d:11:4c:f7:87:a0:f5:d0:6c:03:0c:f6
-----BEGIN CERTIFICATE-----
MIICZTCCAeugAwIBAgIQeI8nXIESUiClBNAt3bpz9DAKBggqhkjOPQQDAzB0MQsw
CQYDVQQGEwJQTDEhMB8GA1UEChMYQXNzZWNvIERhdGEgU3lzdGVtcyBTLkEuMScw
JQYDVQQLEx5DZXJ0dW0gQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkxGTAXBgNVBAMT
EENlcnR1bSBFQy0zODQgQ0EwHhcNMTgwMzI2MDcyNDU0WhcNNDMwMzI2MDcyNDU0
WjB0MQswCQYDVQQGEwJQTDEhMB8GA1UEChMYQXNzZWNvIERhdGEgU3lzdGVtcyBT
LkEuMScwJQYDVQQLEx5DZXJ0dW0gQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkxGTAX
BgNVBAMTEENlcnR1bSBFQy0zODQgQ0EwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAATE
KI6rGFtqvm5kN2PkzeyrOvfMobgOgknXhimfoZTy42B4mIF4Bk3y7JoOV2CDn7Tm
Fy8as10CW4kjPMIRBSqniBMY81CE1700LCeJVf/OTOffph8oxPBUw7l8t1Ot68Kj
QjBAMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFI0GZnQkdjrzife81r1HfS+8
EF9LMA4GA1UdDwEB/wQEAwIBBjAKBggqhkjOPQQDAwNoADBlAjADVS2m5hjEfO/J
UG7BJw+ch69u1RsIGL2SKcHvlJF40jocVYli5RsJHrpka/F2tNQCMQC0QoSZ/6vn
nvuRlydd3LBbMHHOXjgaatkl5+r3YZJW+OraNsKHZZYuciUvf9/DE8k=
-----END CERTIFICATE-----

# Issuer: CN=Certum Trusted Root CA O=Asseco Data Systems S.A. OU=Certum Certification Authority
# Subject: CN=Certum Trusted Root CA O=Asseco Data Systems S.A. OU=Certum Certification Authority
# Label: "Certum Trusted Root CA"
# Serial: 40870380103424195783807378461123655149
# MD5 Fingerprint: 51:e1:c2:e7:fe:4c:84:af:59:0e:2f:f4:54:6f:ea:29
# SHA1 Fingerprint: c8:83:44:c0:18:ae:9f:cc:f1:87:b7:8f:22:d1:c5:d7:45:84:ba:e5
# SHA256 Fingerprint: fe:76:96:57:38:55:77:3e:37:a9:5e:7a:d4:d9:cc:96:c3:01:57:c1:5d:31:76:5b:a9:b1:57:04:e1:ae:78:fd
-----BEGIN CERTIFICATE-----
MIIFwDCCA6igAwIBAgIQHr9ZULjJgDdMBvfrVU+17TANBgkqhkiG9w0BAQ0FADB6
MQswCQYDVQQGEwJQTDEhMB8GA1UEChMYQXNzZWNvIERhdGEgU3lzdGVtcyBTLkEu
MScwJQYDVQQLEx5DZXJ0dW0gQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkxHzAdBgNV
BAMTFkNlcnR1bSBUcnVzdGVkIFJvb3QgQ0EwHhcNMTgwMzE2MTIxMDEzWhcNNDMw
MzE2MTIxMDEzWjB6MQswCQYDVQQGEwJQTDEhMB8GA1UEChMYQXNzZWNvIERhdGEg
U3lzdGVtcyBTLkEuMScwJQYDVQQLEx5DZXJ0dW0gQ2VydGlmaWNhdGlvbiBBdXRo
b3JpdHkxHzAdBgNVBAMTFkNlcnR1bSBUcnVzdGVkIFJvb3QgQ0EwggIiMA0GCSqG
SIb3DQEBAQUAA4ICDwAwggIKAoICAQDRLY67tzbqbTeRn06TpwXkKQMlzhyC93yZ
n0EGze2jusDbCSzBfN8pfktlL5On1AFrAygYo9idBcEq2EXxkd7fO9CAAozPOA/q
p1x4EaTByIVcJdPTsuclzxFUl6s1wB52HO8AU5853BSlLCIls3Jy/I2z5T4IHhQq
NwuIPMqw9MjCoa68wb4pZ1Xi/K1ZXP69VyywkI3C7Te2fJmItdUDmj0VDT06qKhF
8JVOJVkdzZhpu9PMMsmN74H+rX2Ju7pgE8pllWeg8xn2A1bUatMn4qGtg/BKEiJ3
HAVz4hlxQsDsdUaakFjgao4rpUYwBI4Zshfjvqm6f1bxJAPXsiEodg42MEx51UGa
mqi4NboMOvJEGyCI98Ul1z3G4z5D3Yf+xOr1Uz5MZf87Sst4WmsXXw3Hw09Omiqi
7VdNIuJGmj8PkTQkfVXjjJU30xrwCSss0smNtA0Aq2cpKNgB9RkEth2+dv5yXMSF
ytKAQd8FqKPVhJBPC/PgP5sZ0jeJP/J7UhyM9uH3PAeXjA6iWYEMspA90+NZRu0P
qafegGtaqge2Gcu8V/OXIXoMsSt0Puvap2ctTMSYnjYJdmZm/Bo/6khUHL4wvYBQ
v3y1zgD2DGHZ5yQD4OMBgQ692IU0iL2yNqh7XAjlRICMb/gv1SHKHRzQ+8S1h9E6
Tsd2tTVItQIDAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSM+xx1
vALTn04uSNn5YFSqxLNP+jAOBgNVHQ8BAf8EBAMCAQYwDQYJKoZIhvcNAQENBQAD
ggIBAEii1QALLtA/vBzVtVRJHlpr9OTy4EA34MwUe7nJ+jW1dReTagVphZzNTxl4
WxmB82M+w85bj/UvXgF2Ez8sALnNllI5SW0ETsXpD4YN4fqzX4IS8TrOZgYkNCvo
zMrnadyHncI013nR03e4qllY/p0m+jiGPp2Kh2RX5Rc64vmNueMzeMGQ2Ljdt4NR
5MTMI9UGfOZR0800McD2RrsLrfw9EAUqO0qRJe6M1ISHgCq8CYyqOhNf6DR5UMEQ
GfnTKB7U0VEwKbOukGfWHwpjscWpxkIxYxeU72nLL/qMFH3EQxiJ2fAyQOaA4kZf
5ePBAFmo+eggvIksDkc0C+pXwlM2/KfUrzHN/gLldfq5Jwn58/U7yn2fqSLLiMmq
0Uc9NneoWWRrJ8/vJ8HjJLWG965+Mk2weWjROeiQWMODvA8s1pfrzgzhIMfatz7D
P78v3DSk+yshzWePS/Tj6tQ/50+6uaWTRRxmHyH6ZF5v4HaUMst19W7l9o/HuKTM
qJZ9ZPskWkoDbGs4xugDQ5r3V7mzKWmTOPQD8rv7gmsHINFSH5pkAnuYZttcTVoP
0ISVoDwUQwbKytu4QTbaakRnh6+v40URFWkIsr4WOZckbxJF0WddCajJFdr60qZf
E2Efv4WstK2tBZQIgx51F9NxO5NQI1mg7TyRVJ12AMXDuDjb
-----END CERTIFICATE-----

# Issuer: CN=TunTrust Root CA O=Agence Nationale de Certification Electronique
# Subject: CN=TunTrust Root CA O=Agence Nationale de Certification Electronique
# Label: "TunTrust Root CA"
# Serial: 108534058042236574382096126452369648152337120275
# MD5 Fingerprint: 85:13:b9:90:5b:36:5c:b6:5e:b8:5a:f8:e0:31:57:b4
# SHA1 Fingerprint: cf:e9:70:84:0f:e0:73:0f:9d:f6:0c:7f:2c:4b:ee:20:46:34:9c:bb
# SHA256 Fingerprint: 2e:44:10:2a:b5:8c:b8:54:19:45:1c:8e:19:d9:ac:f3:66:2c:af:bc:61:4b:6a:53:96:0a:30:f7:d0:e2:eb:41
-----BEGIN CERTIFICATE-----
MIIFszCCA5ugAwIBAgIUEwLV4kBMkkaGFmddtLu7sms+/BMwDQYJKoZIhvcNAQEL
BQAwYTELMAkGA1UEBhMCVE4xNzA1BgNVBAoMLkFnZW5jZSBOYXRpb25hbGUgZGUg
Q2VydGlmaWNhdGlvbiBFbGVjdHJvbmlxdWUxGTAXBgNVBAMMEFR1blRydXN0IFJv
b3QgQ0EwHhcNMTkwNDI2MDg1NzU2WhcNNDQwNDI2MDg1NzU2WjBhMQswCQYDVQQG
EwJUTjE3MDUGA1UECgwuQWdlbmNlIE5hdGlvbmFsZSBkZSBDZXJ0aWZpY2F0aW9u
IEVsZWN0cm9uaXF1ZTEZMBcGA1UEAwwQVHVuVHJ1c3QgUm9vdCBDQTCCAiIwDQYJ
KoZIhvcNAQEBBQADggIPADCCAgoCggIBAMPN0/y9BFPdDCA61YguBUtB9YOCfvdZ
n56eY+hz2vYGqU8ftPkLHzmMmiDQfgbU7DTZhrx1W4eI8NLZ1KMKsmwb60ksPqxd
2JQDoOw05TDENX37Jk0bbjBU2PWARZw5rZzJJQRNmpA+TkBuimvNKWfGzC3gdOgF
VwpIUPp6Q9p+7FuaDmJ2/uqdHYVy7BG7NegfJ7/Boce7SBbdVtfMTqDhuazb1YMZ
GoXRlJfXyqNlC/M4+QKu3fZnz8k/9YosRxqZbwUN/dAdgjH8KcwAWJeRTIAAHDOF
li/LQcKLEITDCSSJH7UP2dl3RxiSlGBcx5kDPP73lad9UKGAwqmDrViWVSHbhlnU
r8a83YFuB9tgYv7sEG7aaAH0gxupPqJbI9dkxt/con3YS7qC0lH4Zr8GRuR5KiY2
eY8fTpkdso8MDhz/yV3A/ZAQprE38806JG60hZC/gLkMjNWb1sjxVj8agIl6qeIb
MlEsPvLfe/ZdeikZjuXIvTZxi11Mwh0/rViizz1wTaZQmCXcI/m4WEEIcb9PuISg
jwBUFfyRbVinljvrS5YnzWuioYasDXxU5mZMZl+QviGaAkYt5IPCgLnPSz7ofzwB
7I9ezX/SKEIBlYrilz0QIX32nRzFNKHsLA4KUiwSVXAkPcvCFDVDXSdOvsC9qnyW
5/yeYa1E0wCXAgMBAAGjYzBhMB0GA1UdDgQWBBQGmpsfU33x9aTI04Y+oXNZtPdE
ITAPBgNVHRMBAf8EBTADAQH/MB8GA1UdIwQYMBaAFAaamx9TffH1pMjThj6hc1m0
90QhMA4GA1UdDwEB/wQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAgEAqgVutt0Vyb+z
xiD2BkewhpMl0425yAA/l/VSJ4hxyXT968pk21vvHl26v9Hr7lxpuhbI87mP0zYu
QEkHDVneixCwSQXi/5E/S7fdAo74gShczNxtr18UnH1YeA32gAm56Q6XKRm4t+v4
FstVEuTGfbvE7Pi1HE4+Z7/FXxttbUcoqgRYYdZ2vyJ/0Adqp2RT8JeNnYA/u8EH
22Wv5psymsNUk8QcCMNE+3tjEUPRahphanltkE8pjkcFwRJpadbGNjHh/PqAulxP
xOu3Mqz4dWEX1xAZufHSCe96Qp1bWgvUxpVOKs7/B9dPfhgGiPEZtdmYu65xxBzn
dFlY7wyJz4sfdZMaBBSSSFCp61cpABbjNhzI+L/wM9VBD8TMPN3pM0MBkRArHtG5
Xc0yGYuPjCB31yLEQtyEFpslbei0VXF/sHyz03FJuc9SpAQ/3D2gu68zngowYI7b
nV2UqL1g52KAdoGDDIzMMEZJ4gzSqK/rYXHv5yJiqfdcZGyfFoxnNidF9Ql7v/YQ
CvGwjVRDjAS6oz/v4jXH+XTgbzRB0L9zZVcg+ZtnemZoJE6AZb0QmQZZ8mWvuMZH
u/2QeItBcy6vVR/cO5JyboTT0GFMDcx2V+IthSIVNg3rAZ3r2OvEhJn7wAzMMujj
d9qDRIueVSjAi1jTkD5OGwDxFa2DK5o=
-----END CERTIFICATE-----

# Issuer: CN=HARICA TLS RSA Root CA 2021 O=Hellenic Academic and Research Institutions CA
# Subject: CN=HARICA TLS RSA Root CA 2021 O=Hellenic Academic and Research Institutions CA
# Label: "HARICA TLS RSA Root CA 2021"
# Serial: 76817823531813593706434026085292783742
# MD5 Fingerprint: 65:47:9b:58:86:dd:2c:f0:fc:a2:84:1f:1e:96:c4:91
# SHA1 Fingerprint: 02:2d:05:82:fa:88:ce:14:0c:06:79:de:7f:14:10:e9:45:d7:a5:6d
# SHA256 Fingerprint: d9:5d:0e:8e:da:79:52:5b:f9:be:b1:1b:14:d2:10:0d:32:94:98:5f:0c:62:d9:fa:bd:9c:d9:99:ec:cb:7b:1d
-----BEGIN CERTIFICATE-----
MIIFpDCCA4ygAwIBAgIQOcqTHO9D88aOk8f0ZIk4fjANBgkqhkiG9w0BAQsFADBs
MQswCQYDVQQGEwJHUjE3MDUGA1UECgwuSGVsbGVuaWMgQWNhZGVtaWMgYW5kIFJl
c2VhcmNoIEluc3RpdHV0aW9ucyBDQTEkMCIGA1UEAwwbSEFSSUNBIFRMUyBSU0Eg
Um9vdCBDQSAyMDIxMB4XDTIxMDIxOTEwNTUzOFoXDTQ1MDIxMzEwNTUzN1owbDEL
MAkGA1UEBhMCR1IxNzA1BgNVBAoMLkhlbGxlbmljIEFjYWRlbWljIGFuZCBSZXNl
YXJjaCBJbnN0aXR1dGlvbnMgQ0ExJDAiBgNVBAMMG0hBUklDQSBUTFMgUlNBIFJv
b3QgQ0EgMjAyMTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAIvC569l
mwVnlskNJLnQDmT8zuIkGCyEf3dRywQRNrhe7Wlxp57kJQmXZ8FHws+RFjZiPTgE
4VGC/6zStGndLuwRo0Xua2s7TL+MjaQenRG56Tj5eg4MmOIjHdFOY9TnuEFE+2uv
a9of08WRiFukiZLRgeaMOVig1mlDqa2YUlhu2wr7a89o+uOkXjpFc5gH6l8Cct4M
pbOfrqkdtx2z/IpZ525yZa31MJQjB/OCFks1mJxTuy/K5FrZx40d/JiZ+yykgmvw
Kh+OC19xXFyuQnspiYHLA6OZyoieC0AJQTPb5lh6/a6ZcMBaD9YThnEvdmn8kN3b
LW7R8pv1GmuebxWMevBLKKAiOIAkbDakO/IwkfN4E8/BPzWr8R0RI7VDIp4BkrcY
AuUR0YLbFQDMYTfBKnya4dC6s1BG7oKsnTH4+yPiAwBIcKMJJnkVU2DzOFytOOqB
AGMUuTNe3QvboEUHGjMJ+E20pwKmafTCWQWIZYVWrkvL4N48fS0ayOn7H6NhStYq
E613TBoYm5EPWNgGVMWX+Ko/IIqmhaZ39qb8HOLubpQzKoNQhArlT4b4UEV4AIHr
W2jjJo3Me1xR9BQsQL4aYB16cmEdH2MtiKrOokWQCPxrvrNQKlr9qEgYRtaQQJKQ
CoReaDH46+0N0x3GfZkYVVYnZS6NRcUk7M7jAgMBAAGjQjBAMA8GA1UdEwEB/wQF
MAMBAf8wHQYDVR0OBBYEFApII6ZgpJIKM+qTW8VX6iVNvRLuMA4GA1UdDwEB/wQE
AwIBhjANBgkqhkiG9w0BAQsFAAOCAgEAPpBIqm5iFSVmewzVjIuJndftTgfvnNAU
X15QvWiWkKQUEapobQk1OUAJ2vQJLDSle1mESSmXdMgHHkdt8s4cUCbjnj1AUz/3
f5Z2EMVGpdAgS1D0NTsY9FVqQRtHBmg8uwkIYtlfVUKqrFOFrJVWNlar5AWMxaja
H6NpvVMPxP/cyuN+8kyIhkdGGvMA9YCRotxDQpSbIPDRzbLrLFPCU3hKTwSUQZqP
JzLB5UkZv/HywouoCjkxKLR9YjYsTewfM7Z+d21+UPCfDtcRj88YxeMn/ibvBZ3P
zzfF0HvaO7AWhAw6k9a+F9sPPg4ZeAnHqQJyIkv3N3a6dcSFA1pj1bF1BcK5vZSt
jBWZp5N99sXzqnTPBIWUmAD04vnKJGW/4GKvyMX6ssmeVkjaef2WdhW+o45WxLM0
/L5H9MG0qPzVMIho7suuyWPEdr6sOBjhXlzPrjoiUevRi7PzKzMHVIf6tLITe7pT
BGIBnfHAT+7hOtSLIBD6Alfm78ELt5BGnBkpjNxvoEppaZS3JGWg/6w/zgH7IS79
aPib8qXPMThcFarmlwDB31qlpzmq6YR/PFGoOtmUW4y/Twhx5duoXNTSpv4Ao8YW
xw/ogM4cKGR0GQjTQuPOAF1/sdwTsOEFy9EgqoZ0njnnkf3/W9b3raYvAwtt41dU
63ZTGI0RmLo=
-----END CERTIFICATE-----

# Issuer: CN=HARICA TLS ECC Root CA 2021 O=Hellenic Academic and Research Institutions CA
# Subject: CN=HARICA TLS ECC Root CA 2021 O=Hellenic Academic and Research Institutions CA
# Label: "HARICA TLS ECC Root CA 2021"
# Serial: 137515985548005187474074462014555733966
# MD5 Fingerprint: ae:f7:4c:e5:66:35:d1:b7:9b:8c:22:93:74:d3:4b:b0
# SHA1 Fingerprint: bc:b0:c1:9d:e9:98:92:70:19:38:57:e9:8d:a7:b4:5d:6e:ee:01:48
# SHA256 Fingerprint: 3f:99:cc:47:4a:cf:ce:4d:fe:d5:87:94:66:5e:47:8d:15:47:73:9f:2e:78:0f:1b:b4:ca:9b:13:30:97:d4:01
-----BEGIN CERTIFICATE-----
MIICVDCCAdugAwIBAgIQZ3SdjXfYO2rbIvT/WeK/zjAKBggqhkjOPQQDAzBsMQsw
CQYDVQQGEwJHUjE3MDUGA1UECgwuSGVsbGVuaWMgQWNhZGVtaWMgYW5kIFJlc2Vh
cmNoIEluc3RpdHV0aW9ucyBDQTEkMCIGA1UEAwwbSEFSSUNBIFRMUyBFQ0MgUm9v
dCBDQSAyMDIxMB4XDTIxMDIxOTExMDExMFoXDTQ1MDIxMzExMDEwOVowbDELMAkG
A1UEBhMCR1IxNzA1BgNVBAoMLkhlbGxlbmljIEFjYWRlbWljIGFuZCBSZXNlYXJj
aCBJbnN0aXR1dGlvbnMgQ0ExJDAiBgNVBAMMG0hBUklDQSBUTFMgRUNDIFJvb3Qg
Q0EgMjAyMTB2MBAGByqGSM49AgEGBSuBBAAiA2IABDgI/rGgltJ6rK9JOtDA4MM7
KKrxcm1lAEeIhPyaJmuqS7psBAqIXhfyVYf8MLA04jRYVxqEU+kw2anylnTDUR9Y
STHMmE5gEYd103KUkE+bECUqqHgtvpBBWJAVcqeht6NCMEAwDwYDVR0TAQH/BAUw
AwEB/zAdBgNVHQ4EFgQUyRtTgRL+BNUW0aq8mm+3oJUZbsowDgYDVR0PAQH/BAQD
AgGGMAoGCCqGSM49BAMDA2cAMGQCMBHervjcToiwqfAircJRQO9gcS3ujwLEXQNw
SaSS6sUUiHCm0w2wqsosQJz76YJumgIwK0eaB8bRwoF8yguWGEEbo/QwCZ61IygN
nxS2PFOiTAZpffpskcYqSUXm7LcT4Tps
-----END CERTIFICATE-----

# Issuer: CN=Autoridad de Certificacion Firmaprofesional CIF A62634068
# Subject: CN=Autoridad de Certificacion Firmaprofesional CIF A62634068
# Label: "Autoridad de Certificacion Firmaprofesional CIF A62634068"
# Serial: 1977337328857672817
# MD5 Fingerprint: 4e:6e:9b:54:4c:ca:b7:fa:48:e4:90:b1:15:4b:1c:a3
# SHA1 Fingerprint: 0b:be:c2:27:22:49:cb:39:aa:db:35:5c:53:e3:8c:ae:78:ff:b6:fe
# SHA256 Fingerprint: 57:de:05:83:ef:d2:b2:6e:03:61:da:99:da:9d:f4:64:8d:ef:7e:e8:44:1c:3b:72:8a:fa:9b:cd:e0:f9:b2:6a
-----BEGIN CERTIFICATE-----
MIIGFDCCA/ygAwIBAgIIG3Dp0v+ubHEwDQYJKoZIhvcNAQELBQAwUTELMAkGA1UE
BhMCRVMxQjBABgNVBAMMOUF1dG9yaWRhZCBkZSBDZXJ0aWZpY2FjaW9uIEZpcm1h
cHJvZmVzaW9uYWwgQ0lGIEE2MjYzNDA2ODAeFw0xNDA5MjMxNTIyMDdaFw0zNjA1
MDUxNTIyMDdaMFExCzAJBgNVBAYTAkVTMUIwQAYDVQQDDDlBdXRvcmlkYWQgZGUg
Q2VydGlmaWNhY2lvbiBGaXJtYXByb2Zlc2lvbmFsIENJRiBBNjI2MzQwNjgwggIi
MA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQDKlmuO6vj78aI14H9M2uDDUtd9
thDIAl6zQyrET2qyyhxdKJp4ERppWVevtSBC5IsP5t9bpgOSL/UR5GLXMnE42QQM
cas9UX4PB99jBVzpv5RvwSmCwLTaUbDBPLutN0pcyvFLNg4kq7/DhHf9qFD0sefG
L9ItWY16Ck6WaVICqjaY7Pz6FIMMNx/Jkjd/14Et5cS54D40/mf0PmbR0/RAz15i
NA9wBj4gGFrO93IbJWyTdBSTo3OxDqqHECNZXyAFGUftaI6SEspd/NYrspI8IM/h
X68gvqB2f3bl7BqGYTM+53u0P6APjqK5am+5hyZvQWyIplD9amML9ZMWGxmPsu2b
m8mQ9QEM3xk9Dz44I8kvjwzRAv4bVdZO0I08r0+k8/6vKtMFnXkIoctXMbScyJCy
Z/QYFpM6/EfY0XiWMR+6KwxfXZmtY4laJCB22N/9q06mIqqdXuYnin1oKaPnirja
EbsXLZmdEyRG98Xi2J+Of8ePdG1asuhy9azuJBCtLxTa/y2aRnFHvkLfuwHb9H/T
KI8xWVvTyQKmtFLKbpf7Q8UIJm+K9Lv9nyiqDdVF8xM6HdjAeI9BZzwelGSuewvF
6NkBiDkal4ZkQdU7hwxu+g/GvUgUvzlN1J5Bto+WHWOWk9mVBngxaJ43BjuAiUVh
OSPHG0SjFeUc+JIwuwIDAQABo4HvMIHsMB0GA1UdDgQWBBRlzeurNR4APn7VdMAc
tHNHDhpkLzASBgNVHRMBAf8ECDAGAQH/AgEBMIGmBgNVHSAEgZ4wgZswgZgGBFUd
IAAwgY8wLwYIKwYBBQUHAgEWI2h0dHA6Ly93d3cuZmlybWFwcm9mZXNpb25hbC5j
b20vY3BzMFwGCCsGAQUFBwICMFAeTgBQAGEAcwBlAG8AIABkAGUAIABsAGEAIABC
AG8AbgBhAG4AbwB2AGEAIAA0ADcAIABCAGEAcgBjAGUAbABvAG4AYQAgADAAOAAw
ADEANzAOBgNVHQ8BAf8EBAMCAQYwDQYJKoZIhvcNAQELBQADggIBAHSHKAIrdx9m
iWTtj3QuRhy7qPj4Cx2Dtjqn6EWKB7fgPiDL4QjbEwj4KKE1soCzC1HA01aajTNF
Sa9J8OA9B3pFE1r/yJfY0xgsfZb43aJlQ3CTkBW6kN/oGbDbLIpgD7dvlAceHabJ
hfa9NPhAeGIQcDq+fUs5gakQ1JZBu/hfHAsdCPKxsIl68veg4MSPi3i1O1ilI45P
Vf42O+AMt8oqMEEgtIDNrvx2ZnOorm7hfNoD6JQg5iKj0B+QXSBTFCZX2lSX3xZE
EAEeiGaPcjiT3SC3NL7X8e5jjkd5KAb881lFJWAiMxujX6i6KtoaPc1A6ozuBRWV
1aUsIC+nmCjuRfzxuIgALI9C2lHVnOUTaHFFQ4ueCyE8S1wF3BqfmI7avSKecs2t
CsvMo2ebKHTEm9caPARYpoKdrcd7b/+Alun4jWq9GJAd/0kakFI3ky88Al2CdgtR
5xbHV/g4+afNmyJU72OwFW1TZQNKXkqgsqeOSQBZONXH9IBk9W6VULgRfhVwOEqw
f9DEMnDAGf/JOC0ULGb0QkTmVXYbgBVX/8Cnp6o5qtjTcNAuuuuUavpfNIbnYrX9
ivAwhZTJryQCL2/W3Wf+47BVTwSYT6RBVuKT0Gro1vP7ZeDOdcQxWQzugsgMYDNK
GbqEZycPvEJdvSRUDewdcAZfpLz6IHxV
-----END CERTIFICATE-----

# Issuer: CN=vTrus ECC Root CA O=iTrusChina Co.,Ltd.
# Subject: CN=vTrus ECC Root CA O=iTrusChina Co.,Ltd.
# Label: "vTrus ECC Root CA"
# Serial: 630369271402956006249506845124680065938238527194
# MD5 Fingerprint: de:4b:c1:f5:52:8c:9b:43:e1:3e:8f:55:54:17:8d:85
# SHA1 Fingerprint: f6:9c:db:b0:fc:f6:02:13:b6:52:32:a6:a3:91:3f:16:70:da:c3:e1
# SHA256 Fingerprint: 30:fb:ba:2c:32:23:8e:2a:98:54:7a:f9:79:31:e5:50:42:8b:9b:3f:1c:8e:eb:66:33:dc:fa:86:c5:b2:7d:d3
-----BEGIN CERTIFICATE-----
MIICDzCCAZWgAwIBAgIUbmq8WapTvpg5Z6LSa6Q75m0c1towCgYIKoZIzj0EAwMw
RzELMAkGA1UEBhMCQ04xHDAaBgNVBAoTE2lUcnVzQ2hpbmEgQ28uLEx0ZC4xGjAY
BgNVBAMTEXZUcnVzIEVDQyBSb290IENBMB4XDTE4MDczMTA3MjY0NFoXDTQzMDcz
MTA3MjY0NFowRzELMAkGA1UEBhMCQ04xHDAaBgNVBAoTE2lUcnVzQ2hpbmEgQ28u
LEx0ZC4xGjAYBgNVBAMTEXZUcnVzIEVDQyBSb290IENBMHYwEAYHKoZIzj0CAQYF
K4EEACIDYgAEZVBKrox5lkqqHAjDo6LN/llWQXf9JpRCux3NCNtzslt188+cToL0
v/hhJoVs1oVbcnDS/dtitN9Ti72xRFhiQgnH+n9bEOf+QP3A2MMrMudwpremIFUd
e4BdS49nTPEQo0IwQDAdBgNVHQ4EFgQUmDnNvtiyjPeyq+GtJK97fKHbH88wDwYD
VR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwCgYIKoZIzj0EAwMDaAAwZQIw
V53dVvHH4+m4SVBrm2nDb+zDfSXkV5UTQJtS0zvzQBm8JsctBp61ezaf9SXUY2sA
AjEA6dPGnlaaKsyh2j/IZivTWJwghfqrkYpwcBE4YGQLYgmRWAD5Tfs0aNoJrSEG
GJTO
-----END CERTIFICATE-----

# Issuer: CN=vTrus Root CA O=iTrusChina Co.,Ltd.
# Subject: CN=vTrus Root CA O=iTrusChina Co.,Ltd.
# Label: "vTrus Root CA"
# Serial: 387574501246983434957692974888460947164905180485
# MD5 Fingerprint: b8:c9:37:df:fa:6b:31:84:64:c5:ea:11:6a:1b:75:fc
# SHA1 Fingerprint: 84:1a:69:fb:f5:cd:1a:25:34:13:3d:e3:f8:fc:b8:99:d0:c9:14:b7
# SHA256 Fingerprint: 8a:71:de:65:59:33:6f:42:6c:26:e5:38:80:d0:0d:88:a1:8d:a4:c6:a9:1f:0d:cb:61:94:e2:06:c5:c9:63:87
-----BEGIN CERTIFICATE-----
MIIFVjCCAz6gAwIBAgIUQ+NxE9izWRRdt86M/TX9b7wFjUUwDQYJKoZIhvcNAQEL
BQAwQzELMAkGA1UEBhMCQ04xHDAaBgNVBAoTE2lUcnVzQ2hpbmEgQ28uLEx0ZC4x
FjAUBgNVBAMTDXZUcnVzIFJvb3QgQ0EwHhcNMTgwNzMxMDcyNDA1WhcNNDMwNzMx
MDcyNDA1WjBDMQswCQYDVQQGEwJDTjEcMBoGA1UEChMTaVRydXNDaGluYSBDby4s
THRkLjEWMBQGA1UEAxMNdlRydXMgUm9vdCBDQTCCAiIwDQYJKoZIhvcNAQEBBQAD
ggIPADCCAgoCggIBAL1VfGHTuB0EYgWgrmy3cLRB6ksDXhA/kFocizuwZotsSKYc
IrrVQJLuM7IjWcmOvFjai57QGfIvWcaMY1q6n6MLsLOaXLoRuBLpDLvPbmyAhykU
AyyNJJrIZIO1aqwTLDPxn9wsYTwaP3BVm60AUn/PBLn+NvqcwBauYv6WTEN+VRS+
GrPSbcKvdmaVayqwlHeFXgQPYh1jdfdr58tbmnDsPmcF8P4HCIDPKNsFxhQnL4Z9
8Cfe/+Z+M0jnCx5Y0ScrUw5XSmXX+6KAYPxMvDVTAWqXcoKv8R1w6Jz1717CbMdH
flqUhSZNO7rrTOiwCcJlwp2dCZtOtZcFrPUGoPc2BX70kLJrxLT5ZOrpGgrIDajt
J8nU57O5q4IikCc9Kuh8kO+8T/3iCiSn3mUkpF3qwHYw03dQ+A0Em5Q2AXPKBlim
0zvc+gRGE1WKyURHuFE5Gi7oNOJ5y1lKCn+8pu8fA2dqWSslYpPZUxlmPCdiKYZN
pGvu/9ROutW04o5IWgAZCfEF2c6Rsffr6TlP9m8EQ5pV9T4FFL2/s1m02I4zhKOQ
UqqzApVg+QxMaPnu1RcN+HFXtSXkKe5lXa/R7jwXC1pDxaWG6iSe4gUH3DRCEpHW
OXSuTEGC2/KmSNGzm/MzqvOmwMVO9fSddmPmAsYiS8GVP1BkLFTltvA8Kc9XAgMB
AAGjQjBAMB0GA1UdDgQWBBRUYnBj8XWEQ1iO0RYgscasGrz2iTAPBgNVHRMBAf8E
BTADAQH/MA4GA1UdDwEB/wQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAgEAKbqSSaet
8PFww+SX8J+pJdVrnjT+5hpk9jprUrIQeBqfTNqK2uwcN1LgQkv7bHbKJAs5EhWd
nxEt/Hlk3ODg9d3gV8mlsnZwUKT+twpw1aA08XXXTUm6EdGz2OyC/+sOxL9kLX1j
bhd47F18iMjrjld22VkE+rxSH0Ws8HqA7Oxvdq6R2xCOBNyS36D25q5J08FsEhvM
Kar5CKXiNxTKsbhm7xqC5PD48acWabfbqWE8n/Uxy+QARsIvdLGx14HuqCaVvIiv
TDUHKgLKeBRtRytAVunLKmChZwOgzoy8sHJnxDHO2zTlJQNgJXtxmOTAGytfdELS
S8VZCAeHvsXDf+eW2eHcKJfWjwXj9ZtOyh1QRwVTsMo554WgicEFOwE30z9J4nfr
I8iIZjs9OXYhRvHsXyO466JmdXTBQPfYaJqT4i2pLr0cox7IdMakLXogqzu4sEb9
b91fUlV1YvCXoHzXOP0l382gmxDPi7g4Xl7FtKYCNqEeXxzP4padKar9mK5S4fNB
UvupLnKWnyfjqnN9+BojZns7q2WwMgFLFT49ok8MKzWixtlnEjUwzXYuFrOZnk1P
Ti07NEPhmg4NpGaXutIcSkwsKouLgU9xGqndXHt7CMUADTdA43x7VF8vhV929ven
sBxXVsFy6K2ir40zSbofitzmdHxghm+Hl3s=
-----END CERTIFICATE-----

# Issuer: CN=ISRG Root X2 O=Internet Security Research Group
# Subject: CN=ISRG Root X2 O=Internet Security Research Group
# Label: "ISRG Root X2"
# Serial: 87493402998870891108772069816698636114
# MD5 Fingerprint: d3:9e:c4:1e:23:3c:a6:df:cf:a3:7e:6d:e0:14:e6:e5
# SHA1 Fingerprint: bd:b1:b9:3c:d5:97:8d:45:c6:26:14:55:f8:db:95:c7:5a:d1:53:af
# SHA256 Fingerprint: 69:72:9b:8e:15:a8:6e:fc:17:7a:57:af:b7:17:1d:fc:64:ad:d2:8c:2f:ca:8c:f1:50:7e:34:45:3c:cb:14:70
-----BEGIN CERTIFICATE-----
MIICGzCCAaGgAwIBAgIQQdKd0XLq7qeAwSxs6S+HUjAKBggqhkjOPQQDAzBPMQsw
CQYDVQQGEwJVUzEpMCcGA1UEChMgSW50ZXJuZXQgU2VjdXJpdHkgUmVzZWFyY2gg
R3JvdXAxFTATBgNVBAMTDElTUkcgUm9vdCBYMjAeFw0yMDA5MDQwMDAwMDBaFw00
MDA5MTcxNjAwMDBaME8xCzAJBgNVBAYTAlVTMSkwJwYDVQQKEyBJbnRlcm5ldCBT
ZWN1cml0eSBSZXNlYXJjaCBHcm91cDEVMBMGA1UEAxMMSVNSRyBSb290IFgyMHYw
EAYHKoZIzj0CAQYFK4EEACIDYgAEzZvVn4CDCuwJSvMWSj5cz3es3mcFDR0HttwW
+1qLFNvicWDEukWVEYmO6gbf9yoWHKS5xcUy4APgHoIYOIvXRdgKam7mAHf7AlF9
ItgKbppbd9/w+kHsOdx1ymgHDB/qo0IwQDAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0T
AQH/BAUwAwEB/zAdBgNVHQ4EFgQUfEKWrt5LSDv6kviejM9ti6lyN5UwCgYIKoZI
zj0EAwMDaAAwZQIwe3lORlCEwkSHRhtFcP9Ymd70/aTSVaYgLXTWNLxBo1BfASdW
tL4ndQavEi51mI38AjEAi/V3bNTIZargCyzuFJ0nN6T5U6VR5CmD1/iQMVtCnwr1
/q4AaOeMSQ+2b1tbFfLn
-----END CERTIFICATE-----

# Issuer: CN=HiPKI Root CA - G1 O=Chunghwa Telecom Co., Ltd.
# Subject: CN=HiPKI Root CA - G1 O=Chunghwa Telecom Co., Ltd.
# Label: "HiPKI Root CA - G1"
# Serial: 60966262342023497858655262305426234976
# MD5 Fingerprint: 69:45:df:16:65:4b:e8:68:9a:8f:76:5f:ff:80:9e:d3
# SHA1 Fingerprint: 6a:92:e4:a8:ee:1b:ec:96:45:37:e3:29:57:49:cd:96:e3:e5:d2:60
# SHA256 Fingerprint: f0:15:ce:3c:c2:39:bf:ef:06:4b:e9:f1:d2:c4:17:e1:a0:26:4a:0a:94:be:1f:0c:8d:12:18:64:eb:69:49:cc
-----BEGIN CERTIFICATE-----
MIIFajCCA1KgAwIBAgIQLd2szmKXlKFD6LDNdmpeYDANBgkqhkiG9w0BAQsFADBP
MQswCQYDVQQGEwJUVzEjMCEGA1UECgwaQ2h1bmdod2EgVGVsZWNvbSBDby4sIEx0
ZC4xGzAZBgNVBAMMEkhpUEtJIFJvb3QgQ0EgLSBHMTAeFw0xOTAyMjIwOTQ2MDRa
Fw0zNzEyMzExNTU5NTlaME8xCzAJBgNVBAYTAlRXMSMwIQYDVQQKDBpDaHVuZ2h3
YSBUZWxlY29tIENvLiwgTHRkLjEbMBkGA1UEAwwSSGlQS0kgUm9vdCBDQSAtIEcx
MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEA9B5/UnMyDHPkvRN0o9Qw
qNCuS9i233VHZvR85zkEHmpwINJaR3JnVfSl6J3VHiGh8Ge6zCFovkRTv4354twv
Vcg3Px+kwJyz5HdcoEb+d/oaoDjq7Zpy3iu9lFc6uux55199QmQ5eiY29yTw1S+6
lZgRZq2XNdZ1AYDgr/SEYYwNHl98h5ZeQa/rh+r4XfEuiAU+TCK72h8q3VJGZDnz
Qs7ZngyzsHeXZJzA9KMuH5UHsBffMNsAGJZMoYFL3QRtU6M9/Aes1MU3guvklQgZ
KILSQjqj2FPseYlgSGDIcpJQ3AOPgz+yQlda22rpEZfdhSi8MEyr48KxRURHH+CK
FgeW0iEPU8DtqX7UTuybCeyvQqww1r/REEXgphaypcXTT3OUM3ECoWqj1jOXTyFj
HluP2cFeRXF3D4FdXyGarYPM+l7WjSNfGz1BryB1ZlpK9p/7qxj3ccC2HTHsOyDr
y+K49a6SsvfhhEvyovKTmiKe0xRvNlS9H15ZFblzqMF8b3ti6RZsR1pl8w4Rm0bZ
/W3c1pzAtH2lsN0/Vm+h+fbkEkj9Bn8SV7apI09bA8PgcSojt/ewsTu8mL3WmKgM
a/aOEmem8rJY5AIJEzypuxC00jBF8ez3ABHfZfjcK0NVvxaXxA/VLGGEqnKG/uY6
fsI/fe78LxQ+5oXdUG+3Se0CAwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAdBgNV
HQ4EFgQU8ncX+l6o/vY9cdVouslGDDjYr7AwDgYDVR0PAQH/BAQDAgGGMA0GCSqG
SIb3DQEBCwUAA4ICAQBQUfB13HAE4/+qddRxosuej6ip0691x1TPOhwEmSKsxBHi
7zNKpiMdDg1H2DfHb680f0+BazVP6XKlMeJ45/dOlBhbQH3PayFUhuaVevvGyuqc
SE5XCV0vrPSltJczWNWseanMX/mF+lLFjfiRFOs6DRfQUsJ748JzjkZ4Bjgs6Fza
ZsT0pPBWGTMpWmWSBUdGSquEwx4noR8RkpkndZMPvDY7l1ePJlsMu5wP1G4wB9Tc
XzZoZjmDlicmisjEOf6aIW/Vcobpf2Lll07QJNBAsNB1CI69aO4I1258EHBGG3zg
iLKecoaZAeO/n0kZtCW+VmWuF2PlHt/o/0elv+EmBYTksMCv5wiZqAxeJoBF1Pho
L5aPruJKHJwWDBNvOIf2u8g0X5IDUXlwpt/L9ZlNec1OvFefQ05rLisY+GpzjLrF
Ne85akEez3GoorKGB1s6yeHvP2UEgEcyRHCVTjFnanRbEEV16rCf0OY1/k6fi8wr
kkVbbiVghUbN0aqwdmaTd5a+g744tiROJgvM7XpWGuDpWsZkrUx6AEhEL7lAuxM+
vhV4nYWBSipX3tUZQ9rbyltHhoMLP7YNdnhzeSJesYAfz77RP1YQmCuVh6EfnWQU
YDksswBVLuT1sw5XxJFBAJw/6KXf6vb/yPCtbVKoF6ubYfwSUTXkJf2vqmqGOQ==
-----END CERTIFICATE-----

# Issuer: CN=GlobalSign O=GlobalSign OU=GlobalSign ECC Root CA - R4
# Subject: CN=GlobalSign O=GlobalSign OU=GlobalSign ECC Root CA - R4
# Label: "GlobalSign ECC Root CA - R4"
# Serial: 159662223612894884239637590694
# MD5 Fingerprint: 26:29:f8:6d:e1:88:bf:a2:65:7f:aa:c4:cd:0f:7f:fc
# SHA1 Fingerprint: 6b:a0:b0:98:e1:71:ef:5a:ad:fe:48:15:80:77:10:f4:bd:6f:0b:28
# SHA256 Fingerprint: b0:85:d7:0b:96:4f:19:1a:73:e4:af:0d:54:ae:7a:0e:07:aa:fd:af:9b:71:dd:08:62:13:8a:b7:32:5a:24:a2
-----BEGIN CERTIFICATE-----
MIIB3DCCAYOgAwIBAgINAgPlfvU/k/2lCSGypjAKBggqhkjOPQQDAjBQMSQwIgYD
VQQLExtHbG9iYWxTaWduIEVDQyBSb290IENBIC0gUjQxEzARBgNVBAoTCkdsb2Jh
bFNpZ24xEzARBgNVBAMTCkdsb2JhbFNpZ24wHhcNMTIxMTEzMDAwMDAwWhcNMzgw
MTE5MDMxNDA3WjBQMSQwIgYDVQQLExtHbG9iYWxTaWduIEVDQyBSb290IENBIC0g
UjQxEzARBgNVBAoTCkdsb2JhbFNpZ24xEzARBgNVBAMTCkdsb2JhbFNpZ24wWTAT
BgcqhkjOPQIBBggqhkjOPQMBBwNCAAS4xnnTj2wlDp8uORkcA6SumuU5BwkWymOx
uYb4ilfBV85C+nOh92VC/x7BALJucw7/xyHlGKSq2XE/qNS5zowdo0IwQDAOBgNV
HQ8BAf8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUVLB7rUW44kB/
+wpu+74zyTyjhNUwCgYIKoZIzj0EAwIDRwAwRAIgIk90crlgr/HmnKAWBVBfw147
bmF0774BxL4YSFlhgjICICadVGNA3jdgUM/I2O2dgq43mLyjj0xMqTQrbO/7lZsm
-----END CERTIFICATE-----

# Issuer: CN=GTS Root R1 O=Google Trust Services LLC
# Subject: CN=GTS Root R1 O=Google Trust Services LLC
# Label: "GTS Root R1"
# Serial: 159662320309726417404178440727
# MD5 Fingerprint: 05:fe:d0:bf:71:a8:a3:76:63:da:01:e0:d8:52:dc:40
# SHA1 Fingerprint: e5:8c:1c:c4:91:3b:38:63:4b:e9:10:6e:e3:ad:8e:6b:9d:d9:81:4a
# SHA256 Fingerprint: d9:47:43:2a:bd:e7:b7:fa:90:fc:2e:6b:59:10:1b:12:80:e0:e1:c7:e4:e4:0f:a3:c6:88:7f:ff:57:a7:f4:cf
-----BEGIN CERTIFICATE-----
MIIFVzCCAz+gAwIBAgINAgPlk28xsBNJiGuiFzANBgkqhkiG9w0BAQwFADBHMQsw
CQYDVQQGEwJVUzEiMCAGA1UEChMZR29vZ2xlIFRydXN0IFNlcnZpY2VzIExMQzEU
MBIGA1UEAxMLR1RTIFJvb3QgUjEwHhcNMTYwNjIyMDAwMDAwWhcNMzYwNjIyMDAw
MDAwWjBHMQswCQYDVQQGEwJVUzEiMCAGA1UEChMZR29vZ2xlIFRydXN0IFNlcnZp
Y2VzIExMQzEUMBIGA1UEAxMLR1RTIFJvb3QgUjEwggIiMA0GCSqGSIb3DQEBAQUA
A4ICDwAwggIKAoICAQC2EQKLHuOhd5s73L+UPreVp0A8of2C+X0yBoJx9vaMf/vo
27xqLpeXo4xL+Sv2sfnOhB2x+cWX3u+58qPpvBKJXqeqUqv4IyfLpLGcY9vXmX7w
Cl7raKb0xlpHDU0QM+NOsROjyBhsS+z8CZDfnWQpJSMHobTSPS5g4M/SCYe7zUjw
TcLCeoiKu7rPWRnWr4+wB7CeMfGCwcDfLqZtbBkOtdh+JhpFAz2weaSUKK0Pfybl
qAj+lug8aJRT7oM6iCsVlgmy4HqMLnXWnOunVmSPlk9orj2XwoSPwLxAwAtcvfaH
szVsrBhQf4TgTM2S0yDpM7xSma8ytSmzJSq0SPly4cpk9+aCEI3oncKKiPo4Zor8
Y/kB+Xj9e1x3+naH+uzfsQ55lVe0vSbv1gHR6xYKu44LtcXFilWr06zqkUspzBmk
MiVOKvFlRNACzqrOSbTqn3yDsEB750Orp2yjj32JgfpMpf/VjsPOS+C12LOORc92
wO1AK/1TD7Cn1TsNsYqiA94xrcx36m97PtbfkSIS5r762DL8EGMUUXLeXdYWk70p
aDPvOmbsB4om3xPXV2V4J95eSRQAogB/mqghtqmxlbCluQ0WEdrHbEg8QOB+DVrN
VjzRlwW5y0vtOUucxD/SVRNuJLDWcfr0wbrM7Rv1/oFB2ACYPTrIrnqYNxgFlQID
AQABo0IwQDAOBgNVHQ8BAf8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4E
FgQU5K8rJnEaK0gnhS9SZizv8IkTcT4wDQYJKoZIhvcNAQEMBQADggIBAJ+qQibb
C5u+/x6Wki4+omVKapi6Ist9wTrYggoGxval3sBOh2Z5ofmmWJyq+bXmYOfg6LEe
QkEzCzc9zolwFcq1JKjPa7XSQCGYzyI0zzvFIoTgxQ6KfF2I5DUkzps+GlQebtuy
h6f88/qBVRRiClmpIgUxPoLW7ttXNLwzldMXG+gnoot7TiYaelpkttGsN/H9oPM4
7HLwEXWdyzRSjeZ2axfG34arJ45JK3VmgRAhpuo+9K4l/3wV3s6MJT/KYnAK9y8J
ZgfIPxz88NtFMN9iiMG1D53Dn0reWVlHxYciNuaCp+0KueIHoI17eko8cdLiA6Ef
MgfdG+RCzgwARWGAtQsgWSl4vflVy2PFPEz0tv/bal8xa5meLMFrUKTX5hgUvYU/
Z6tGn6D/Qqc6f1zLXbBwHSs09dR2CQzreExZBfMzQsNhFRAbd03OIozUhfJFfbdT
6u9AWpQKXCBfTkBdYiJ23//OYb2MI3jSNwLgjt7RETeJ9r/tSQdirpLsQBqvFAnZ
0E6yove+7u7Y/9waLd64NnHi/Hm3lCXRSHNboTXns5lndcEZOitHTtNCjv0xyBZm
2tIMPNuzjsmhDYAPexZ3FL//2wmUspO8IFgV6dtxQ/PeEMMA3KgqlbbC1j+Qa3bb
bP6MvPJwNQzcmRk13NfIRmPVNnGuV/u3gm3c
-----END CERTIFICATE-----

# Issuer: CN=GTS Root R2 O=Google Trust Services LLC
# Subject: CN=GTS Root R2 O=Google Trust Services LLC
# Label: "GTS Root R2"
# Serial: 159662449406622349769042896298
# MD5 Fingerprint: 1e:39:c0:53:e6:1e:29:82:0b:ca:52:55:36:5d:57:dc
# SHA1 Fingerprint: 9a:44:49:76:32:db:de:fa:d0:bc:fb:5a:7b:17:bd:9e:56:09:24:94
# SHA256 Fingerprint: 8d:25:cd:97:22:9d:bf:70:35:6b:da:4e:b3:cc:73:40:31:e2:4c:f0:0f:af:cf:d3:2d:c7:6e:b5:84:1c:7e:a8
-----BEGIN CERTIFICATE-----
MIIFVzCCAz+gAwIBAgINAgPlrsWNBCUaqxElqjANBgkqhkiG9w0BAQwFADBHMQsw
CQYDVQQGEwJVUzEiMCAGA1UEChMZR29vZ2xlIFRydXN0IFNlcnZpY2VzIExMQzEU
MBIGA1UEAxMLR1RTIFJvb3QgUjIwHhcNMTYwNjIyMDAwMDAwWhcNMzYwNjIyMDAw
MDAwWjBHMQswCQYDVQQGEwJVUzEiMCAGA1UEChMZR29vZ2xlIFRydXN0IFNlcnZp
Y2VzIExMQzEUMBIGA1UEAxMLR1RTIFJvb3QgUjIwggIiMA0GCSqGSIb3DQEBAQUA
A4ICDwAwggIKAoICAQDO3v2m++zsFDQ8BwZabFn3GTXd98GdVarTzTukk3LvCvpt
nfbwhYBboUhSnznFt+4orO/LdmgUud+tAWyZH8QiHZ/+cnfgLFuv5AS/T3KgGjSY
6Dlo7JUle3ah5mm5hRm9iYz+re026nO8/4Piy33B0s5Ks40FnotJk9/BW9BuXvAu
MC6C/Pq8tBcKSOWIm8Wba96wyrQD8Nr0kLhlZPdcTK3ofmZemde4wj7I0BOdre7k
RXuJVfeKH2JShBKzwkCX44ofR5GmdFrS+LFjKBC4swm4VndAoiaYecb+3yXuPuWg
f9RhD1FLPD+M2uFwdNjCaKH5wQzpoeJ/u1U8dgbuak7MkogwTZq9TwtImoS1mKPV
+3PBV2HdKFZ1E66HjucMUQkQdYhMvI35ezzUIkgfKtzra7tEscszcTJGr61K8Yzo
dDqs5xoic4DSMPclQsciOzsSrZYuxsN2B6ogtzVJV+mSSeh2FnIxZyuWfoqjx5RW
Ir9qS34BIbIjMt/kmkRtWVtd9QCgHJvGeJeNkP+byKq0rxFROV7Z+2et1VsRnTKa
G73VululycslaVNVJ1zgyjbLiGH7HrfQy+4W+9OmTN6SpdTi3/UGVN4unUu0kzCq
gc7dGtxRcw1PcOnlthYhGXmy5okLdWTK1au8CcEYof/UVKGFPP0UJAOyh9OktwID
AQABo0IwQDAOBgNVHQ8BAf8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4E
FgQUu//KjiOfT5nK2+JopqUVJxce2Q4wDQYJKoZIhvcNAQEMBQADggIBAB/Kzt3H
vqGf2SdMC9wXmBFqiN495nFWcrKeGk6c1SuYJF2ba3uwM4IJvd8lRuqYnrYb/oM8
0mJhwQTtzuDFycgTE1XnqGOtjHsB/ncw4c5omwX4Eu55MaBBRTUoCnGkJE+M3DyC
B19m3H0Q/gxhswWV7uGugQ+o+MePTagjAiZrHYNSVc61LwDKgEDg4XSsYPWHgJ2u
NmSRXbBoGOqKYcl3qJfEycel/FVL8/B/uWU9J2jQzGv6U53hkRrJXRqWbTKH7QMg
yALOWr7Z6v2yTcQvG99fevX4i8buMTolUVVnjWQye+mew4K6Ki3pHrTgSAai/Gev
HyICc/sgCq+dVEuhzf9gR7A/Xe8bVr2XIZYtCtFenTgCR2y59PYjJbigapordwj6
xLEokCZYCDzifqrXPW+6MYgKBesntaFJ7qBFVHvmJ2WZICGoo7z7GJa7Um8M7YNR
TOlZ4iBgxcJlkoKM8xAfDoqXvneCbT+PHV28SSe9zE8P4c52hgQjxcCMElv924Sg
JPFI/2R80L5cFtHvma3AH/vLrrw4IgYmZNralw4/KBVEqE8AyvCazM90arQ+POuV
7LXTWtiBmelDGDfrs7vRWGJB82bSj6p4lVQgw1oudCvV0b4YacCs1aTPObpRhANl
6WLAYv7YTVWW4tAR+kg0Eeye7QUd5MjWHYbL
-----END CERTIFICATE-----

# Issuer: CN=GTS Root R3 O=Google Trust Services LLC
# Subject: CN=GTS Root R3 O=Google Trust Services LLC
# Label: "GTS Root R3"
# Serial: 159662495401136852707857743206
# MD5 Fingerprint: 3e:e7:9d:58:02:94:46:51:94:e5:e0:22:4a:8b:e7:73
# SHA1 Fingerprint: ed:e5:71:80:2b:c8:92:b9:5b:83:3c:d2:32:68:3f:09:cd:a0:1e:46
# SHA256 Fingerprint: 34:d8:a7:3e:e2:08:d9:bc:db:0d:95:65:20:93:4b:4e:40:e6:94:82:59:6e:8b:6f:73:c8:42:6b:01:0a:6f:48
-----BEGIN CERTIFICATE-----
MIICCTCCAY6gAwIBAgINAgPluILrIPglJ209ZjAKBggqhkjOPQQDAzBHMQswCQYD
VQQGEwJVUzEiMCAGA1UEChMZR29vZ2xlIFRydXN0IFNlcnZpY2VzIExMQzEUMBIG
A1UEAxMLR1RTIFJvb3QgUjMwHhcNMTYwNjIyMDAwMDAwWhcNMzYwNjIyMDAwMDAw
WjBHMQswCQYDVQQGEwJVUzEiMCAGA1UEChMZR29vZ2xlIFRydXN0IFNlcnZpY2Vz
IExMQzEUMBIGA1UEAxMLR1RTIFJvb3QgUjMwdjAQBgcqhkjOPQIBBgUrgQQAIgNi
AAQfTzOHMymKoYTey8chWEGJ6ladK0uFxh1MJ7x/JlFyb+Kf1qPKzEUURout736G
jOyxfi//qXGdGIRFBEFVbivqJn+7kAHjSxm65FSWRQmx1WyRRK2EE46ajA2ADDL2
4CejQjBAMA4GA1UdDwEB/wQEAwIBhjAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW
BBTB8Sa6oC2uhYHP0/EqEr24Cmf9vDAKBggqhkjOPQQDAwNpADBmAjEA9uEglRR7
VKOQFhG/hMjqb2sXnh5GmCCbn9MN2azTL818+FsuVbu/3ZL3pAzcMeGiAjEA/Jdm
ZuVDFhOD3cffL74UOO0BzrEXGhF16b0DjyZ+hOXJYKaV11RZt+cRLInUue4X
-----END CERTIFICATE-----

# Issuer: CN=GTS Root R4 O=Google Trust Services LLC
# Subject: CN=GTS Root R4 O=Google Trust Services LLC
# Label: "GTS Root R4"
# Serial: 159662532700760215368942768210
# MD5 Fingerprint: 43:96:83:77:19:4d:76:b3:9d:65:52:e4:1d:22:a5:e8
# SHA1 Fingerprint: 77:d3:03:67:b5:e0:0c:15:f6:0c:38:61:df:7c:e1:3b:92:46:4d:47
# SHA256 Fingerprint: 34:9d:fa:40:58:c5:e2:63:12:3b:39:8a:e7:95:57:3c:4e:13:13:c8:3f:e6:8f:93:55:6c:d5:e8:03:1b:3c:7d
-----BEGIN CERTIFICATE-----
MIICCTCCAY6gAwIBAgINAgPlwGjvYxqccpBQUjAKBggqhkjOPQQDAzBHMQswCQYD
VQQGEwJVUzEiMCAGA1UEChMZR29vZ2xlIFRydXN0IFNlcnZpY2VzIExMQzEUMBIG
A1UEAxMLR1RTIFJvb3QgUjQwHhcNMTYwNjIyMDAwMDAwWhcNMzYwNjIyMDAwMDAw
WjBHMQswCQYDVQQGEwJVUzEiMCAGA1UEChMZR29vZ2xlIFRydXN0IFNlcnZpY2Vz
IExMQzEUMBIGA1UEAxMLR1RTIFJvb3QgUjQwdjAQBgcqhkjOPQIBBgUrgQQAIgNi
AATzdHOnaItgrkO4NcWBMHtLSZ37wWHO5t5GvWvVYRg1rkDdc/eJkTBa6zzuhXyi
QHY7qca4R9gq55KRanPpsXI5nymfopjTX15YhmUPoYRlBtHci8nHc8iMai/lxKvR
HYqjQjBAMA4GA1UdDwEB/wQEAwIBhjAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW
BBSATNbrdP9JNqPV2Py1PsVq8JQdjDAKBggqhkjOPQQDAwNpADBmAjEA6ED/g94D
9J+uHXqnLrmvT/aDHQ4thQEd0dlq7A/Cr8deVl5c1RxYIigL9zC2L7F8AjEA8GE8
p/SgguMh1YQdc4acLa/KNJvxn7kjNuK8YAOdgLOaVsjh4rsUecrNIdSUtUlD
-----END CERTIFICATE-----

# Issuer: CN=Telia Root CA v2 O=Telia Finland Oyj
# Subject: CN=Telia Root CA v2 O=Telia Finland Oyj
# Label: "Telia Root CA v2"
# Serial: 7288924052977061235122729490515358
# MD5 Fingerprint: 0e:8f:ac:aa:82:df:85:b1:f4:dc:10:1c:fc:99:d9:48
# SHA1 Fingerprint: b9:99:cd:d1:73:50:8a:c4:47:05:08:9c:8c:88:fb:be:a0:2b:40:cd
# SHA256 Fingerprint: 24:2b:69:74:2f:cb:1e:5b:2a:bf:98:89:8b:94:57:21:87:54:4e:5b:4d:99:11:78:65:73:62:1f:6a:74:b8:2c
-----BEGIN CERTIFICATE-----
MIIFdDCCA1ygAwIBAgIPAWdfJ9b+euPkrL4JWwWeMA0GCSqGSIb3DQEBCwUAMEQx
CzAJBgNVBAYTAkZJMRowGAYDVQQKDBFUZWxpYSBGaW5sYW5kIE95ajEZMBcGA1UE
AwwQVGVsaWEgUm9vdCBDQSB2MjAeFw0xODExMjkxMTU1NTRaFw00MzExMjkxMTU1
NTRaMEQxCzAJBgNVBAYTAkZJMRowGAYDVQQKDBFUZWxpYSBGaW5sYW5kIE95ajEZ
MBcGA1UEAwwQVGVsaWEgUm9vdCBDQSB2MjCCAiIwDQYJKoZIhvcNAQEBBQADggIP
ADCCAgoCggIBALLQPwe84nvQa5n44ndp586dpAO8gm2h/oFlH0wnrI4AuhZ76zBq
AMCzdGh+sq/H1WKzej9Qyow2RCRj0jbpDIX2Q3bVTKFgcmfiKDOlyzG4OiIjNLh9
vVYiQJ3q9HsDrWj8soFPmNB06o3lfc1jw6P23pLCWBnglrvFxKk9pXSW/q/5iaq9
lRdU2HhE8Qx3FZLgmEKnpNaqIJLNwaCzlrI6hEKNfdWV5Nbb6WLEWLN5xYzTNTOD
n3WhUidhOPFZPY5Q4L15POdslv5e2QJltI5c0BE0312/UqeBAMN/mUWZFdUXyApT
7GPzmX3MaRKGwhfwAZ6/hLzRUssbkmbOpFPlob/E2wnW5olWK8jjfN7j/4nlNW4o
6GwLI1GpJQXrSPjdscr6bAhR77cYbETKJuFzxokGgeWKrLDiKca5JLNrRBH0pUPC
TEPlcDaMtjNXepUugqD0XBCzYYP2AgWGLnwtbNwDRm41k9V6lS/eINhbfpSQBGq6
WT0EBXWdN6IOLj3rwaRSg/7Qa9RmjtzG6RJOHSpXqhC8fF6CfaamyfItufUXJ63R
DolUK5X6wK0dmBR4M0KGCqlztft0DbcbMBnEWg4cJ7faGND/isgFuvGqHKI3t+ZI
pEYslOqodmJHixBTB0hXbOKSTbauBcvcwUpej6w9GU7C7WB1K9vBykLVAgMBAAGj
YzBhMB8GA1UdIwQYMBaAFHKs5DN5qkWH9v2sHZ7Wxy+G2CQ5MB0GA1UdDgQWBBRy
rOQzeapFh/b9rB2e1scvhtgkOTAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUw
AwEB/zANBgkqhkiG9w0BAQsFAAOCAgEAoDtZpwmUPjaE0n4vOaWWl/oRrfxn83EJ
8rKJhGdEr7nv7ZbsnGTbMjBvZ5qsfl+yqwE2foH65IRe0qw24GtixX1LDoJt0nZi
0f6X+J8wfBj5tFJ3gh1229MdqfDBmgC9bXXYfef6xzijnHDoRnkDry5023X4blMM
A8iZGok1GTzTyVR8qPAs5m4HeW9q4ebqkYJpCh3DflminmtGFZhb069GHWLIzoBS
SRE/yQQSwxN8PzuKlts8oB4KtItUsiRnDe+Cy748fdHif64W1lZYudogsYMVoe+K
TTJvQS8TUoKU1xrBeKJR3Stwbbca+few4GeXVtt8YVMJAygCQMez2P2ccGrGKMOF
6eLtGpOg3kuYooQ+BXcBlj37tCAPnHICehIv1aO6UXivKitEZU61/Qrowc15h2Er
3oBXRb9n8ZuRXqWk7FlIEA04x7D6w0RtBPV4UBySllva9bguulvP5fBqnUsvWHMt
Ty3EHD70sz+rFQ47GUGKpMFXEmZxTPpT41frYpUJnlTd0cI8Vzy9OK2YZLe4A5pT
VmBds9hCG1xLEooc6+t9xnppxyd/pPiL8uSUZodL6ZQHCRJ5irLrdATczvREWeAW
ysUsWNc8e89ihmpQfTU2Zqf7N+cox9jQraVplI/owd8k+BsHMYeB2F326CjYSlKA
rBPuUBQemMc=
-----END CERTIFICATE-----

# Issuer: CN=D-TRUST BR Root CA 1 2020 O=D-Trust GmbH
# Subject: CN=D-TRUST BR Root CA 1 2020 O=D-Trust GmbH
# Label: "D-TRUST BR Root CA 1 2020"
# Serial: 165870826978392376648679885835942448534
# MD5 Fingerprint: b5:aa:4b:d5:ed:f7:e3:55:2e:8f:72:0a:f3:75:b8:ed
# SHA1 Fingerprint: 1f:5b:98:f0:e3:b5:f7:74:3c:ed:e6:b0:36:7d:32:cd:f4:09:41:67
# SHA256 Fingerprint: e5:9a:aa:81:60:09:c2:2b:ff:5b:25:ba:d3:7d:f3:06:f0:49:79:7c:1f:81:d8:5a:b0:89:e6:57:bd:8f:00:44
-----BEGIN CERTIFICATE-----
MIIC2zCCAmCgAwIBAgIQfMmPK4TX3+oPyWWa00tNljAKBggqhkjOPQQDAzBIMQsw
CQYDVQQGEwJERTEVMBMGA1UEChMMRC1UcnVzdCBHbWJIMSIwIAYDVQQDExlELVRS
VVNUIEJSIFJvb3QgQ0EgMSAyMDIwMB4XDTIwMDIxMTA5NDUwMFoXDTM1MDIxMTA5
NDQ1OVowSDELMAkGA1UEBhMCREUxFTATBgNVBAoTDEQtVHJ1c3QgR21iSDEiMCAG
A1UEAxMZRC1UUlVTVCBCUiBSb290IENBIDEgMjAyMDB2MBAGByqGSM49AgEGBSuB
BAAiA2IABMbLxyjR+4T1mu9CFCDhQ2tuda38KwOE1HaTJddZO0Flax7mNCq7dPYS
zuht56vkPE4/RAiLzRZxy7+SmfSk1zxQVFKQhYN4lGdnoxwJGT11NIXe7WB9xwy0
QVK5buXuQqOCAQ0wggEJMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFHOREKv/
VbNafAkl1bK6CKBrqx9tMA4GA1UdDwEB/wQEAwIBBjCBxgYDVR0fBIG+MIG7MD6g
PKA6hjhodHRwOi8vY3JsLmQtdHJ1c3QubmV0L2NybC9kLXRydXN0X2JyX3Jvb3Rf
Y2FfMV8yMDIwLmNybDB5oHegdYZzbGRhcDovL2RpcmVjdG9yeS5kLXRydXN0Lm5l
dC9DTj1ELVRSVVNUJTIwQlIlMjBSb290JTIwQ0ElMjAxJTIwMjAyMCxPPUQtVHJ1
c3QlMjBHbWJILEM9REU/Y2VydGlmaWNhdGVyZXZvY2F0aW9ubGlzdDAKBggqhkjO
PQQDAwNpADBmAjEAlJAtE/rhY/hhY+ithXhUkZy4kzg+GkHaQBZTQgjKL47xPoFW
wKrY7RjEsK70PvomAjEA8yjixtsrmfu3Ubgko6SUeho/5jbiA1czijDLgsfWFBHV
dWNbFJWcHwHP2NVypw87
-----END CERTIFICATE-----

# Issuer: CN=D-TRUST EV Root CA 1 2020 O=D-Trust GmbH
# Subject: CN=D-TRUST EV Root CA 1 2020 O=D-Trust GmbH
# Label: "D-TRUST EV Root CA 1 2020"
# Serial: 126288379621884218666039612629459926992
# MD5 Fingerprint: 8c:2d:9d:70:9f:48:99:11:06:11:fb:e9:cb:30:c0:6e
# SHA1 Fingerprint: 61:db:8c:21:59:69:03:90:d8:7c:9c:12:86:54:cf:9d:3d:f4:dd:07
# SHA256 Fingerprint: 08:17:0d:1a:a3:64:53:90:1a:2f:95:92:45:e3:47:db:0c:8d:37:ab:aa:bc:56:b8:1a:a1:00:dc:95:89:70:db
-----BEGIN CERTIFICATE-----
MIIC2zCCAmCgAwIBAgIQXwJB13qHfEwDo6yWjfv/0DAKBggqhkjOPQQDAzBIMQsw
CQYDVQQGEwJERTEVMBMGA1UEChMMRC1UcnVzdCBHbWJIMSIwIAYDVQQDExlELVRS
VVNUIEVWIFJvb3QgQ0EgMSAyMDIwMB4XDTIwMDIxMTEwMDAwMFoXDTM1MDIxMTA5
NTk1OVowSDELMAkGA1UEBhMCREUxFTATBgNVBAoTDEQtVHJ1c3QgR21iSDEiMCAG
A1UEAxMZRC1UUlVTVCBFViBSb290IENBIDEgMjAyMDB2MBAGByqGSM49AgEGBSuB
BAAiA2IABPEL3YZDIBnfl4XoIkqbz52Yv7QFJsnL46bSj8WeeHsxiamJrSc8ZRCC
/N/DnU7wMyPE0jL1HLDfMxddxfCxivnvubcUyilKwg+pf3VlSSowZ/Rk99Yad9rD
wpdhQntJraOCAQ0wggEJMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFH8QARY3
OqQo5FD4pPfsazK2/umLMA4GA1UdDwEB/wQEAwIBBjCBxgYDVR0fBIG+MIG7MD6g
PKA6hjhodHRwOi8vY3JsLmQtdHJ1c3QubmV0L2NybC9kLXRydXN0X2V2X3Jvb3Rf
Y2FfMV8yMDIwLmNybDB5oHegdYZzbGRhcDovL2RpcmVjdG9yeS5kLXRydXN0Lm5l
dC9DTj1ELVRSVVNUJTIwRVYlMjBSb290JTIwQ0ElMjAxJTIwMjAyMCxPPUQtVHJ1
c3QlMjBHbWJILEM9REU/Y2VydGlmaWNhdGVyZXZvY2F0aW9ubGlzdDAKBggqhkjO
PQQDAwNpADBmAjEAyjzGKnXCXnViOTYAYFqLwZOZzNnbQTs7h5kXO9XMT8oi96CA
y/m0sRtW9XLS/BnRAjEAkfcwkz8QRitxpNA7RJvAKQIFskF3UfN5Wp6OFKBOQtJb
gfM0agPnIjhQW+0ZT0MW
-----END CERTIFICATE-----

# Issuer: CN=DigiCert TLS ECC P384 Root G5 O=DigiCert, Inc.
# Subject: CN=DigiCert TLS ECC P384 Root G5 O=DigiCert, Inc.
# Label: "DigiCert TLS ECC P384 Root G5"
# Serial: 13129116028163249804115411775095713523
# MD5 Fingerprint: d3:71:04:6a:43:1c:db:a6:59:e1:a8:a3:aa:c5:71:ed
# SHA1 Fingerprint: 17:f3:de:5e:9f:0f:19:e9:8e:f6:1f:32:26:6e:20:c4:07:ae:30:ee
# SHA256 Fingerprint: 01:8e:13:f0:77:25:32:cf:80:9b:d1:b1:72:81:86:72:83:fc:48:c6:e1:3b:e9:c6:98:12:85:4a:49:0c:1b:05
-----BEGIN CERTIFICATE-----
MIICGTCCAZ+gAwIBAgIQCeCTZaz32ci5PhwLBCou8zAKBggqhkjOPQQDAzBOMQsw
CQYDVQQGEwJVUzEXMBUGA1UEChMORGlnaUNlcnQsIEluYy4xJjAkBgNVBAMTHURp
Z2lDZXJ0IFRMUyBFQ0MgUDM4NCBSb290IEc1MB4XDTIxMDExNTAwMDAwMFoXDTQ2
MDExNDIzNTk1OVowTjELMAkGA1UEBhMCVVMxFzAVBgNVBAoTDkRpZ2lDZXJ0LCBJ
bmMuMSYwJAYDVQQDEx1EaWdpQ2VydCBUTFMgRUNDIFAzODQgUm9vdCBHNTB2MBAG
ByqGSM49AgEGBSuBBAAiA2IABMFEoc8Rl1Ca3iOCNQfN0MsYndLxf3c1TzvdlHJS
7cI7+Oz6e2tYIOyZrsn8aLN1udsJ7MgT9U7GCh1mMEy7H0cKPGEQQil8pQgO4CLp
0zVozptjn4S1mU1YoI71VOeVyaNCMEAwHQYDVR0OBBYEFMFRRVBZqz7nLFr6ICIS
B4CIfBFqMA4GA1UdDwEB/wQEAwIBhjAPBgNVHRMBAf8EBTADAQH/MAoGCCqGSM49
BAMDA2gAMGUCMQCJao1H5+z8blUD2WdsJk6Dxv3J+ysTvLd6jLRl0mlpYxNjOyZQ
LgGheQaRnUi/wr4CMEfDFXuxoJGZSZOoPHzoRgaLLPIxAJSdYsiJvRmEFOml+wG4
DXZDjC5Ty3zfDBeWUA==
-----END CERTIFICATE-----

# Issuer: CN=DigiCert TLS RSA4096 Root G5 O=DigiCert, Inc.
# Subject: CN=DigiCert TLS RSA4096 Root G5 O=DigiCert, Inc.
# Label: "DigiCert TLS RSA4096 Root G5"
# Serial: 11930366277458970227240571539258396554
# MD5 Fingerprint: ac:fe:f7:34:96:a9:f2:b3:b4:12:4b:e4:27:41:6f:e1
# SHA1 Fingerprint: a7:88:49:dc:5d:7c:75:8c:8c:de:39:98:56:b3:aa:d0:b2:a5:71:35
# SHA256 Fingerprint: 37:1a:00:dc:05:33:b3:72:1a:7e:eb:40:e8:41:9e:70:79:9d:2b:0a:0f:2c:1d:80:69:31:65:f7:ce:c4:ad:75
-----BEGIN CERTIFICATE-----
MIIFZjCCA06gAwIBAgIQCPm0eKj6ftpqMzeJ3nzPijANBgkqhkiG9w0BAQwFADBN
MQswCQYDVQQGEwJVUzEXMBUGA1UEChMORGlnaUNlcnQsIEluYy4xJTAjBgNVBAMT
HERpZ2lDZXJ0IFRMUyBSU0E0MDk2IFJvb3QgRzUwHhcNMjEwMTE1MDAwMDAwWhcN
NDYwMTE0MjM1OTU5WjBNMQswCQYDVQQGEwJVUzEXMBUGA1UEChMORGlnaUNlcnQs
IEluYy4xJTAjBgNVBAMTHERpZ2lDZXJ0IFRMUyBSU0E0MDk2IFJvb3QgRzUwggIi
MA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCz0PTJeRGd/fxmgefM1eS87IE+
ajWOLrfn3q/5B03PMJ3qCQuZvWxX2hhKuHisOjmopkisLnLlvevxGs3npAOpPxG0
2C+JFvuUAT27L/gTBaF4HI4o4EXgg/RZG5Wzrn4DReW+wkL+7vI8toUTmDKdFqgp
wgscONyfMXdcvyej/Cestyu9dJsXLfKB2l2w4SMXPohKEiPQ6s+d3gMXsUJKoBZM
pG2T6T867jp8nVid9E6P/DsjyG244gXazOvswzH016cpVIDPRFtMbzCe88zdH5RD
nU1/cHAN1DrRN/BsnZvAFJNY781BOHW8EwOVfH/jXOnVDdXifBBiqmvwPXbzP6Po
sMH976pXTayGpxi0KcEsDr9kvimM2AItzVwv8n/vFfQMFawKsPHTDU9qTXeXAaDx
Zre3zu/O7Oyldcqs4+Fj97ihBMi8ez9dLRYiVu1ISf6nL3kwJZu6ay0/nTvEF+cd
Lvvyz6b84xQslpghjLSR6Rlgg/IwKwZzUNWYOwbpx4oMYIwo+FKbbuH2TbsGJJvX
KyY//SovcfXWJL5/MZ4PbeiPT02jP/816t9JXkGPhvnxd3lLG7SjXi/7RgLQZhNe
XoVPzthwiHvOAbWWl9fNff2C+MIkwcoBOU+NosEUQB+cZtUMCUbW8tDRSHZWOkPL
tgoRObqME2wGtZ7P6wIDAQABo0IwQDAdBgNVHQ4EFgQUUTMc7TZArxfTJc1paPKv
TiM+s0EwDgYDVR0PAQH/BAQDAgGGMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcN
AQEMBQADggIBAGCmr1tfV9qJ20tQqcQjNSH/0GEwhJG3PxDPJY7Jv0Y02cEhJhxw
GXIeo8mH/qlDZJY6yFMECrZBu8RHANmfGBg7sg7zNOok992vIGCukihfNudd5N7H
PNtQOa27PShNlnx2xlv0wdsUpasZYgcYQF+Xkdycx6u1UQ3maVNVzDl92sURVXLF
O4uJ+DQtpBflF+aZfTCIITfNMBc9uPK8qHWgQ9w+iUuQrm0D4ByjoJYJu32jtyoQ
REtGBzRj7TG5BO6jm5qu5jF49OokYTurWGT/u4cnYiWB39yhL/btp/96j1EuMPik
AdKFOV8BmZZvWltwGUb+hmA+rYAQCd05JS9Yf7vSdPD3Rh9GOUrYU9DzLjtxpdRv
/PNn5AeP3SYZ4Y1b+qOTEZvpyDrDVWiakuFSdjjo4bq9+0/V77PnSIMx8IIh47a+
p6tv75/fTM8BuGJqIz3nCU2AG3swpMPdB380vqQmsvZB6Akd4yCYqjdP//fx4ilw
MUc/dNAUFvohigLVigmUdy7yWSiLfFCSCmZ4OIN1xLVaqBHG5cGdZlXPU8Sv13WF
qUITVuwhd4GTWgzqltlJyqEI8pc7bZsEGCREjnwB8twl2F6GmrE52/WRMmrRpnCK
ovfepEWFJqgejF0pW8hL2JpqA15w8oVPbEtoL8pU9ozaMv7Da4M/OMZ+
-----END CERTIFICATE-----

# Issuer: CN=Certainly Root R1 O=Certainly
# Subject: CN=Certainly Root R1 O=Certainly
# Label: "Certainly Root R1"
# Serial: 188833316161142517227353805653483829216
# MD5 Fingerprint: 07:70:d4:3e:82:87:a0:fa:33:36:13:f4:fa:33:e7:12
# SHA1 Fingerprint: a0:50:ee:0f:28:71:f4:27:b2:12:6d:6f:50:96:25:ba:cc:86:42:af
# SHA256 Fingerprint: 77:b8:2c:d8:64:4c:43:05:f7:ac:c5:cb:15:6b:45:67:50:04:03:3d:51:c6:0c:62:02:a8:e0:c3:34:67:d3:a0
-----BEGIN CERTIFICATE-----
MIIFRzCCAy+gAwIBAgIRAI4P+UuQcWhlM1T01EQ5t+AwDQYJKoZIhvcNAQELBQAw
PTELMAkGA1UEBhMCVVMxEjAQBgNVBAoTCUNlcnRhaW5seTEaMBgGA1UEAxMRQ2Vy
dGFpbmx5IFJvb3QgUjEwHhcNMjEwNDAxMDAwMDAwWhcNNDYwNDAxMDAwMDAwWjA9
MQswCQYDVQQGEwJVUzESMBAGA1UEChMJQ2VydGFpbmx5MRowGAYDVQQDExFDZXJ0
YWlubHkgUm9vdCBSMTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBANA2
1B/q3avk0bbm+yLA3RMNansiExyXPGhjZjKcA7WNpIGD2ngwEc/csiu+kr+O5MQT
vqRoTNoCaBZ0vrLdBORrKt03H2As2/X3oXyVtwxwhi7xOu9S98zTm/mLvg7fMbed
aFySpvXl8wo0tf97ouSHocavFwDvA5HtqRxOcT3Si2yJ9HiG5mpJoM610rCrm/b0
1C7jcvk2xusVtyWMOvwlDbMicyF0yEqWYZL1LwsYpfSt4u5BvQF5+paMjRcCMLT5
r3gajLQ2EBAHBXDQ9DGQilHFhiZ5shGIXsXwClTNSaa/ApzSRKft43jvRl5tcdF5
cBxGX1HpyTfcX35pe0HfNEXgO4T0oYoKNp43zGJS4YkNKPl6I7ENPT2a/Z2B7yyQ
wHtETrtJ4A5KVpK8y7XdeReJkd5hiXSSqOMyhb5OhaRLWcsrxXiOcVTQAjeZjOVJ
6uBUcqQRBi8LjMFbvrWhsFNunLhgkR9Za/kt9JQKl7XsxXYDVBtlUrpMklZRNaBA
2CnbrlJ2Oy0wQJuK0EJWtLeIAaSHO1OWzaMWj/Nmqhexx2DgwUMFDO6bW2BvBlyH
Wyf5QBGenDPBt+U1VwV/J84XIIwc/PH72jEpSe31C4SnT8H2TsIonPru4K8H+zMR
eiFPCyEQtkA6qyI6BJyLm4SGcprSp6XEtHWRqSsjAgMBAAGjQjBAMA4GA1UdDwEB
/wQEAwIBBjAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBTgqj8ljZ9EXME66C6u
d0yEPmcM9DANBgkqhkiG9w0BAQsFAAOCAgEAuVevuBLaV4OPaAszHQNTVfSVcOQr
PbA56/qJYv331hgELyE03fFo8NWWWt7CgKPBjcZq91l3rhVkz1t5BXdm6ozTaw3d
8VkswTOlMIAVRQdFGjEitpIAq5lNOo93r6kiyi9jyhXWx8bwPWz8HA2YEGGeEaIi
1wrykXprOQ4vMMM2SZ/g6Q8CRFA3lFV96p/2O7qUpUzpvD5RtOjKkjZUbVwlKNrd
rRT90+7iIgXr0PK3aBLXWopBGsaSpVo7Y0VPv+E6dyIvXL9G+VoDhRNCX8reU9di
taY1BMJH/5n9hN9czulegChB8n3nHpDYT3Y+gjwN/KUD+nsa2UUeYNrEjvn8K8l7
lcUq/6qJ34IxD3L/DCfXCh5WAFAeDJDBlrXYFIW7pw0WwfgHJBu6haEaBQmAupVj
yTrsJZ9/nbqkRxWbRHDxakvWOF5D8xh+UG7pWijmZeZ3Gzr9Hb4DJqPb1OG7fpYn
Kx3upPvaJVQTA945xsMfTZDsjxtK0hzthZU4UHlG1sGQUDGpXJpuHfUzVounmdLy
yCwzk5Iwx06MZTMQZBf9JBeW0Y3COmor6xOLRPIh80oat3df1+2IpHLlOR+Vnb5n
wXARPbv0+Em34yaXOp/SX3z7wJl8OSngex2/DaeP0ik0biQVy96QXr8axGbqwua6
OV+KmalBWQewLK8=
-----END CERTIFICATE-----

# Issuer: CN=Certainly Root E1 O=Certainly
# Subject: CN=Certainly Root E1 O=Certainly
# Label: "Certainly Root E1"
# Serial: 8168531406727139161245376702891150584
# MD5 Fingerprint: 0a:9e:ca:cd:3e:52:50:c6:36:f3:4b:a3:ed:a7:53:e9
# SHA1 Fingerprint: f9:e1:6d:dc:01:89:cf:d5:82:45:63:3e:c5:37:7d:c2:eb:93:6f:2b
# SHA256 Fingerprint: b4:58:5f:22:e4:ac:75:6a:4e:86:12:a1:36:1c:5d:9d:03:1a:93:fd:84:fe:bb:77:8f:a3:06:8b:0f:c4:2d:c2
-----BEGIN CERTIFICATE-----
MIIB9zCCAX2gAwIBAgIQBiUzsUcDMydc+Y2aub/M+DAKBggqhkjOPQQDAzA9MQsw
CQYDVQQGEwJVUzESMBAGA1UEChMJQ2VydGFpbmx5MRowGAYDVQQDExFDZXJ0YWlu
bHkgUm9vdCBFMTAeFw0yMTA0MDEwMDAwMDBaFw00NjA0MDEwMDAwMDBaMD0xCzAJ
BgNVBAYTAlVTMRIwEAYDVQQKEwlDZXJ0YWlubHkxGjAYBgNVBAMTEUNlcnRhaW5s
eSBSb290IEUxMHYwEAYHKoZIzj0CAQYFK4EEACIDYgAE3m/4fxzf7flHh4axpMCK
+IKXgOqPyEpeKn2IaKcBYhSRJHpcnqMXfYqGITQYUBsQ3tA3SybHGWCA6TS9YBk2
QNYphwk8kXr2vBMj3VlOBF7PyAIcGFPBMdjaIOlEjeR2o0IwQDAOBgNVHQ8BAf8E
BAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU8ygYy2R17ikq6+2uI1g4
hevIIgcwCgYIKoZIzj0EAwMDaAAwZQIxALGOWiDDshliTd6wT99u0nCK8Z9+aozm
ut6Dacpps6kFtZaSF4fC0urQe87YQVt8rgIwRt7qy12a7DLCZRawTDBcMPPaTnOG
BtjOiQRINzf43TNRnXCve1XYAS59BWQOhriR
-----END CERTIFICATE-----

# Issuer: CN=Security Communication RootCA3 O=SECOM Trust Systems CO.,LTD.
# Subject: CN=Security Communication RootCA3 O=SECOM Trust Systems CO.,LTD.
# Label: "Security Communication RootCA3"
# Serial: 16247922307909811815
# MD5 Fingerprint: 1c:9a:16:ff:9e:5c:e0:4d:8a:14:01:f4:35:5d:29:26
# SHA1 Fingerprint: c3:03:c8:22:74:92:e5:61:a2:9c:5f:79:91:2b:1e:44:13:91:30:3a
# SHA256 Fingerprint: 24:a5:5c:2a:b0:51:44:2d:06:17:76:65:41:23:9a:4a:d0:32:d7:c5:51:75:aa:34:ff:de:2f:bc:4f:5c:52:94
-----BEGIN CERTIFICATE-----
MIIFfzCCA2egAwIBAgIJAOF8N0D9G/5nMA0GCSqGSIb3DQEBDAUAMF0xCzAJBgNV
BAYTAkpQMSUwIwYDVQQKExxTRUNPTSBUcnVzdCBTeXN0ZW1zIENPLixMVEQuMScw
JQYDVQQDEx5TZWN1cml0eSBDb21tdW5pY2F0aW9uIFJvb3RDQTMwHhcNMTYwNjE2
MDYxNzE2WhcNMzgwMTE4MDYxNzE2WjBdMQswCQYDVQQGEwJKUDElMCMGA1UEChMc
U0VDT00gVHJ1c3QgU3lzdGVtcyBDTy4sTFRELjEnMCUGA1UEAxMeU2VjdXJpdHkg
Q29tbXVuaWNhdGlvbiBSb290Q0EzMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIIC
CgKCAgEA48lySfcw3gl8qUCBWNO0Ot26YQ+TUG5pPDXC7ltzkBtnTCHsXzW7OT4r
CmDvu20rhvtxosis5FaU+cmvsXLUIKx00rgVrVH+hXShuRD+BYD5UpOzQD11EKzA
lrenfna84xtSGc4RHwsENPXY9Wk8d/Nk9A2qhd7gCVAEF5aEt8iKvE1y/By7z/MG
TfmfZPd+pmaGNXHIEYBMwXFAWB6+oHP2/D5Q4eAvJj1+XCO1eXDe+uDRpdYMQXF7
9+qMHIjH7Iv10S9VlkZ8WjtYO/u62C21Jdp6Ts9EriGmnpjKIG58u4iFW/vAEGK7
8vknR+/RiTlDxN/e4UG/VHMgly1s2vPUB6PmudhvrvyMGS7TZ2crldtYXLVqAvO4
g160a75BflcJdURQVc1aEWEhCmHCqYj9E7wtiS/NYeCVvsq1e+F7NGcLH7YMx3we
GVPKp7FKFSBWFHA9K4IsD50VHUeAR/94mQ4xr28+j+2GaR57GIgUssL8gjMunEst
+3A7caoreyYn8xrC3PsXuKHqy6C0rtOUfnrQq8PsOC0RLoi/1D+tEjtCrI8Cbn3M
0V9hvqG8OmpI6iZVIhZdXw3/JzOfGAN0iltSIEdrRU0id4xVJ/CvHozJgyJUt5rQ
T9nO/NkuHJYosQLTA70lUhw0Zk8jq/R3gpYd0VcwCBEF/VfR2ccCAwEAAaNCMEAw
HQYDVR0OBBYEFGQUfPxYchamCik0FW8qy7z8r6irMA4GA1UdDwEB/wQEAwIBBjAP
BgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBDAUAA4ICAQDcAiMI4u8hOscNtybS
YpOnpSNyByCCYN8Y11StaSWSntkUz5m5UoHPrmyKO1o5yGwBQ8IibQLwYs1OY0PA
FNr0Y/Dq9HHuTofjcan0yVflLl8cebsjqodEV+m9NU1Bu0soo5iyG9kLFwfl9+qd
9XbXv8S2gVj/yP9kaWJ5rW4OH3/uHWnlt3Jxs/6lATWUVCvAUm2PVcTJ0rjLyjQI
UYWg9by0F1jqClx6vWPGOi//lkkZhOpn2ASxYfQAW0q3nHE3GYV5v4GwxxMOdnE+
OoAGrgYWp421wsTL/0ClXI2lyTrtcoHKXJg80jQDdwj98ClZXSEIx2C/pHF7uNke
gr4Jr2VvKKu/S7XuPghHJ6APbw+LP6yVGPO5DtxnVW5inkYO0QR4ynKudtml+LLf
iAlhi+8kTtFZP1rUPcmTPCtk9YENFpb3ksP+MW/oKjJ0DvRMmEoYDjBU1cXrvMUV
nuiZIesnKwkK2/HmcBhWuwzkvvnoEKQTkrgc4NtnHVMDpCKn3F2SEDzq//wbEBrD
2NCcnWXL0CsnMQMeNuE9dnUM/0Umud1RvCPHX9jYhxBAEg09ODfnRDwYwFMJZI//
1ZqmfHAuc1Uh6N//g7kdPjIe1qZ9LPFm6Vwdp6POXiUyK+OVrCoHzrQoeIY8Laad
TdJ0MN1kURXbg4NR16/9M51NZg==
-----END CERTIFICATE-----

# Issuer: CN=Security Communication ECC RootCA1 O=SECOM Trust Systems CO.,LTD.
# Subject: CN=Security Communication ECC RootCA1 O=SECOM Trust Systems CO.,LTD.
# Label: "Security Communication ECC RootCA1"
# Serial: 15446673492073852651
# MD5 Fingerprint: 7e:43:b0:92:68:ec:05:43:4c:98:ab:5d:35:2e:7e:86
# SHA1 Fingerprint: b8:0e:26:a9:bf:d2:b2:3b:c0:ef:46:c9:ba:c7:bb:f6:1d:0d:41:41
# SHA256 Fingerprint: e7:4f:bd:a5:5b:d5:64:c4:73:a3:6b:44:1a:a7:99:c8:a6:8e:07:74:40:e8:28:8b:9f:a1:e5:0e:4b:ba:ca:11
-----BEGIN CERTIFICATE-----
MIICODCCAb6gAwIBAgIJANZdm7N4gS7rMAoGCCqGSM49BAMDMGExCzAJBgNVBAYT
AkpQMSUwIwYDVQQKExxTRUNPTSBUcnVzdCBTeXN0ZW1zIENPLixMVEQuMSswKQYD
VQQDEyJTZWN1cml0eSBDb21tdW5pY2F0aW9uIEVDQyBSb290Q0ExMB4XDTE2MDYx
NjA1MTUyOFoXDTM4MDExODA1MTUyOFowYTELMAkGA1UEBhMCSlAxJTAjBgNVBAoT
HFNFQ09NIFRydXN0IFN5c3RlbXMgQ08uLExURC4xKzApBgNVBAMTIlNlY3VyaXR5
IENvbW11bmljYXRpb24gRUNDIFJvb3RDQTEwdjAQBgcqhkjOPQIBBgUrgQQAIgNi
AASkpW9gAwPDvTH00xecK4R1rOX9PVdu12O/5gSJko6BnOPpR27KkBLIE+Cnnfdl
dB9sELLo5OnvbYUymUSxXv3MdhDYW72ixvnWQuRXdtyQwjWpS4g8EkdtXP9JTxpK
ULGjQjBAMB0GA1UdDgQWBBSGHOf+LaVKiwj+KBH6vqNm+GBZLzAOBgNVHQ8BAf8E
BAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAKBggqhkjOPQQDAwNoADBlAjAVXUI9/Lbu
9zuxNuie9sRGKEkz0FhDKmMpzE2xtHqiuQ04pV1IKv3LsnNdo4gIxwwCMQDAqy0O
be0YottT6SXbVQjgUMzfRGEWgqtJsLKB7HOHeLRMsmIbEvoWTSVLY70eN9k=
-----END CERTIFICATE-----

# Issuer: CN=BJCA Global Root CA1 O=BEIJING CERTIFICATE AUTHORITY
# Subject: CN=BJCA Global Root CA1 O=BEIJING CERTIFICATE AUTHORITY
# Label: "BJCA Global Root CA1"
# Serial: 113562791157148395269083148143378328608
# MD5 Fingerprint: 42:32:99:76:43:33:36:24:35:07:82:9b:28:f9:d0:90
# SHA1 Fingerprint: d5:ec:8d:7b:4c:ba:79:f4:e7:e8:cb:9d:6b:ae:77:83:10:03:21:6a
# SHA256 Fingerprint: f3:89:6f:88:fe:7c:0a:88:27:66:a7:fa:6a:d2:74:9f:b5:7a:7f:3e:98:fb:76:9c:1f:a7:b0:9c:2c:44:d5:ae
-----BEGIN CERTIFICATE-----
MIIFdDCCA1ygAwIBAgIQVW9l47TZkGobCdFsPsBsIDANBgkqhkiG9w0BAQsFADBU
MQswCQYDVQQGEwJDTjEmMCQGA1UECgwdQkVJSklORyBDRVJUSUZJQ0FURSBBVVRI
T1JJVFkxHTAbBgNVBAMMFEJKQ0EgR2xvYmFsIFJvb3QgQ0ExMB4XDTE5MTIxOTAz
MTYxN1oXDTQ0MTIxMjAzMTYxN1owVDELMAkGA1UEBhMCQ04xJjAkBgNVBAoMHUJF
SUpJTkcgQ0VSVElGSUNBVEUgQVVUSE9SSVRZMR0wGwYDVQQDDBRCSkNBIEdsb2Jh
bCBSb290IENBMTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAPFmCL3Z
xRVhy4QEQaVpN3cdwbB7+sN3SJATcmTRuHyQNZ0YeYjjlwE8R4HyDqKYDZ4/N+AZ
spDyRhySsTphzvq3Rp4Dhtczbu33RYx2N95ulpH3134rhxfVizXuhJFyV9xgw8O5
58dnJCNPYwpj9mZ9S1WnP3hkSWkSl+BMDdMJoDIwOvqfwPKcxRIqLhy1BDPapDgR
at7GGPZHOiJBhyL8xIkoVNiMpTAK+BcWyqw3/XmnkRd4OJmtWO2y3syJfQOcs4ll
5+M7sSKGjwZteAf9kRJ/sGsciQ35uMt0WwfCyPQ10WRjeulumijWML3mG90Vr4Tq
nMfK9Q7q8l0ph49pczm+LiRvRSGsxdRpJQaDrXpIhRMsDQa4bHlW/KNnMoH1V6XK
V0Jp6VwkYe/iMBhORJhVb3rCk9gZtt58R4oRTklH2yiUAguUSiz5EtBP6DF+bHq/
pj+bOT0CFqMYs2esWz8sgytnOYFcuX6U1WTdno9uruh8W7TXakdI136z1C2OVnZO
z2nxbkRs1CTqjSShGL+9V/6pmTW12xB3uD1IutbB5/EjPtffhZ0nPNRAvQoMvfXn
jSXWgXSHRtQpdaJCbPdzied9v3pKH9MiyRVVz99vfFXQpIsHETdfg6YmV6YBW37+
WGgHqel62bno/1Afq8K0wM7o6v0PvY1NuLxxAgMBAAGjQjBAMB0GA1UdDgQWBBTF
7+3M2I0hxkjk49cULqcWk+WYATAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQE
AwIBBjANBgkqhkiG9w0BAQsFAAOCAgEAUoKsITQfI/Ki2Pm4rzc2IInRNwPWaZ+4
YRC6ojGYWUfo0Q0lHhVBDOAqVdVXUsv45Mdpox1NcQJeXyFFYEhcCY5JEMEE3Kli
awLwQ8hOnThJdMkycFRtwUf8jrQ2ntScvd0g1lPJGKm1Vrl2i5VnZu69mP6u775u
+2D2/VnGKhs/I0qUJDAnyIm860Qkmss9vk/Ves6OF8tiwdneHg56/0OGNFK8YT88
X7vZdrRTvJez/opMEi4r89fO4aL/3Xtw+zuhTaRjAv04l5U/BXCga99igUOLtFkN
SoxUnMW7gZ/NfaXvCyUeOiDbHPwfmGcCCtRzRBPbUYQaVQNW4AB+dAb/OMRyHdOo
P2gxXdMJxy6MW2Pg6Nwe0uxhHvLe5e/2mXZgLR6UcnHGCyoyx5JO1UbXHfmpGQrI
+pXObSOYqgs4rZpWDW+N8TEAiMEXnM0ZNjX+VVOg4DwzX5Ze4jLp3zO7Bkqp2IRz
znfSxqxx4VyjHQy7Ct9f4qNx2No3WqB4K/TUfet27fJhcKVlmtOJNBir+3I+17Q9
eVzYH6Eze9mCUAyTF6ps3MKCuwJXNq+YJyo5UOGwifUll35HaBC07HPKs5fRJNz2
YqAo07WjuGS3iGJCz51TzZm+ZGiPTx4SSPfSKcOYKMryMguTjClPPGAyzQWWYezy
r/6zcCwupvI=
-----END CERTIFICATE-----

# Issuer: CN=BJCA Global Root CA2 O=BEIJING CERTIFICATE AUTHORITY
# Subject: CN=BJCA Global Root CA2 O=BEIJING CERTIFICATE AUTHORITY
# Label: "BJCA Global Root CA2"
# Serial: 58605626836079930195615843123109055211
# MD5 Fingerprint: 5e:0a:f6:47:5f:a6:14:e8:11:01:95:3f:4d:01:eb:3c
# SHA1 Fingerprint: f4:27:86:eb:6e:b8:6d:88:31:67:02:fb:ba:66:a4:53:00:aa:7a:a6
# SHA256 Fingerprint: 57:4d:f6:93:1e:27:80:39:66:7b:72:0a:fd:c1:60:0f:c2:7e:b6:6d:d3:09:29:79:fb:73:85:64:87:21:28:82
-----BEGIN CERTIFICATE-----
MIICJTCCAaugAwIBAgIQLBcIfWQqwP6FGFkGz7RK6zAKBggqhkjOPQQDAzBUMQsw
CQYDVQQGEwJDTjEmMCQGA1UECgwdQkVJSklORyBDRVJUSUZJQ0FURSBBVVRIT1JJ
VFkxHTAbBgNVBAMMFEJKQ0EgR2xvYmFsIFJvb3QgQ0EyMB4XDTE5MTIxOTAzMTgy
MVoXDTQ0MTIxMjAzMTgyMVowVDELMAkGA1UEBhMCQ04xJjAkBgNVBAoMHUJFSUpJ
TkcgQ0VSVElGSUNBVEUgQVVUSE9SSVRZMR0wGwYDVQQDDBRCSkNBIEdsb2JhbCBS
b290IENBMjB2MBAGByqGSM49AgEGBSuBBAAiA2IABJ3LgJGNU2e1uVCxA/jlSR9B
IgmwUVJY1is0j8USRhTFiy8shP8sbqjV8QnjAyEUxEM9fMEsxEtqSs3ph+B99iK+
+kpRuDCK/eHeGBIK9ke35xe/J4rUQUyWPGCWwf0VHKNCMEAwHQYDVR0OBBYEFNJK
sVF/BvDRgh9Obl+rg/xI1LCRMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQD
AgEGMAoGCCqGSM49BAMDA2gAMGUCMBq8W9f+qdJUDkpd0m2xQNz0Q9XSSpkZElaA
94M04TVOSG0ED1cxMDAtsaqdAzjbBgIxAMvMh1PLet8gUXOQwKhbYdDFUDn9hf7B
43j4ptZLvZuHjw/l1lOWqzzIQNph91Oj9w==
-----END CERTIFICATE-----

# Issuer: CN=Sectigo Public Server Authentication Root E46 O=Sectigo Limited
# Subject: CN=Sectigo Public Server Authentication Root E46 O=Sectigo Limited
# Label: "Sectigo Public Server Authentication Root E46"
# Serial: 88989738453351742415770396670917916916
# MD5 Fingerprint: 28:23:f8:b2:98:5c:37:16:3b:3e:46:13:4e:b0:b3:01
# SHA1 Fingerprint: ec:8a:39:6c:40:f0:2e:bc:42:75:d4:9f:ab:1c:1a:5b:67:be:d2:9a
# SHA256 Fingerprint: c9:0f:26:f0:fb:1b:40:18:b2:22:27:51:9b:5c:a2:b5:3e:2c:a5:b3:be:5c:f1:8e:fe:1b:ef:47:38:0c:53:83
-----BEGIN CERTIFICATE-----
MIICOjCCAcGgAwIBAgIQQvLM2htpN0RfFf51KBC49DAKBggqhkjOPQQDAzBfMQsw
CQYDVQQGEwJHQjEYMBYGA1UEChMPU2VjdGlnbyBMaW1pdGVkMTYwNAYDVQQDEy1T
ZWN0aWdvIFB1YmxpYyBTZXJ2ZXIgQXV0aGVudGljYXRpb24gUm9vdCBFNDYwHhcN
MjEwMzIyMDAwMDAwWhcNNDYwMzIxMjM1OTU5WjBfMQswCQYDVQQGEwJHQjEYMBYG
A1UEChMPU2VjdGlnbyBMaW1pdGVkMTYwNAYDVQQDEy1TZWN0aWdvIFB1YmxpYyBT
ZXJ2ZXIgQXV0aGVudGljYXRpb24gUm9vdCBFNDYwdjAQBgcqhkjOPQIBBgUrgQQA
IgNiAAR2+pmpbiDt+dd34wc7qNs9Xzjoq1WmVk/WSOrsfy2qw7LFeeyZYX8QeccC
WvkEN/U0NSt3zn8gj1KjAIns1aeibVvjS5KToID1AZTc8GgHHs3u/iVStSBDHBv+
6xnOQ6OjQjBAMB0GA1UdDgQWBBTRItpMWfFLXyY4qp3W7usNw/upYTAOBgNVHQ8B
Af8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zAKBggqhkjOPQQDAwNnADBkAjAn7qRa
qCG76UeXlImldCBteU/IvZNeWBj7LRoAasm4PdCkT0RHlAFWovgzJQxC36oCMB3q
4S6ILuH5px0CMk7yn2xVdOOurvulGu7t0vzCAxHrRVxgED1cf5kDW21USAGKcw==
-----END CERTIFICATE-----

# Issuer: CN=Sectigo Public Server Authentication Root R46 O=Sectigo Limited
# Subject: CN=Sectigo Public Server Authentication Root R46 O=Sectigo Limited
# Label: "Sectigo Public Server Authentication Root R46"
# Serial: 156256931880233212765902055439220583700
# MD5 Fingerprint: 32:10:09:52:00:d5:7e:6c:43:df:15:c0:b1:16:93:e5
# SHA1 Fingerprint: ad:98:f9:f3:e4:7d:75:3b:65:d4:82:b3:a4:52:17:bb:6e:f5:e4:38
# SHA256 Fingerprint: 7b:b6:47:a6:2a:ee:ac:88:bf:25:7a:a5:22:d0:1f:fe:a3:95:e0:ab:45:c7:3f:93:f6:56:54:ec:38:f2:5a:06
-----BEGIN CERTIFICATE-----
MIIFijCCA3KgAwIBAgIQdY39i658BwD6qSWn4cetFDANBgkqhkiG9w0BAQwFADBf
MQswCQYDVQQGEwJHQjEYMBYGA1UEChMPU2VjdGlnbyBMaW1pdGVkMTYwNAYDVQQD
Ey1TZWN0aWdvIFB1YmxpYyBTZXJ2ZXIgQXV0aGVudGljYXRpb24gUm9vdCBSNDYw
HhcNMjEwMzIyMDAwMDAwWhcNNDYwMzIxMjM1OTU5WjBfMQswCQYDVQQGEwJHQjEY
MBYGA1UEChMPU2VjdGlnbyBMaW1pdGVkMTYwNAYDVQQDEy1TZWN0aWdvIFB1Ymxp
YyBTZXJ2ZXIgQXV0aGVudGljYXRpb24gUm9vdCBSNDYwggIiMA0GCSqGSIb3DQEB
AQUAA4ICDwAwggIKAoICAQCTvtU2UnXYASOgHEdCSe5jtrch/cSV1UgrJnwUUxDa
ef0rty2k1Cz66jLdScK5vQ9IPXtamFSvnl0xdE8H/FAh3aTPaE8bEmNtJZlMKpnz
SDBh+oF8HqcIStw+KxwfGExxqjWMrfhu6DtK2eWUAtaJhBOqbchPM8xQljeSM9xf
iOefVNlI8JhD1mb9nxc4Q8UBUQvX4yMPFF1bFOdLvt30yNoDN9HWOaEhUTCDsG3X
ME6WW5HwcCSrv0WBZEMNvSE6Lzzpng3LILVCJ8zab5vuZDCQOc2TZYEhMbUjUDM3
IuM47fgxMMxF/mL50V0yeUKH32rMVhlATc6qu/m1dkmU8Sf4kaWD5QazYw6A3OAS
VYCmO2a0OYctyPDQ0RTp5A1NDvZdV3LFOxxHVp3i1fuBYYzMTYCQNFu31xR13NgE
SJ/AwSiItOkcyqex8Va3e0lMWeUgFaiEAin6OJRpmkkGj80feRQXEgyDet4fsZfu
+Zd4KKTIRJLpfSYFplhym3kT2BFfrsU4YjRosoYwjviQYZ4ybPUHNs2iTG7sijbt
8uaZFURww3y8nDnAtOFr94MlI1fZEoDlSfB1D++N6xybVCi0ITz8fAr/73trdf+L
HaAZBav6+CuBQug4urv7qv094PPK306Xlynt8xhW6aWWrL3DkJiy4Pmi1KZHQ3xt
zwIDAQABo0IwQDAdBgNVHQ4EFgQUVnNYZJX5khqwEioEYnmhQBWIIUkwDgYDVR0P
AQH/BAQDAgGGMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQEMBQADggIBAC9c
mTz8Bl6MlC5w6tIyMY208FHVvArzZJ8HXtXBc2hkeqK5Duj5XYUtqDdFqij0lgVQ
YKlJfp/imTYpE0RHap1VIDzYm/EDMrraQKFz6oOht0SmDpkBm+S8f74TlH7Kph52
gDY9hAaLMyZlbcp+nv4fjFg4exqDsQ+8FxG75gbMY/qB8oFM2gsQa6H61SilzwZA
Fv97fRheORKkU55+MkIQpiGRqRxOF3yEvJ+M0ejf5lG5Nkc/kLnHvALcWxxPDkjB
JYOcCj+esQMzEhonrPcibCTRAUH4WAP+JWgiH5paPHxsnnVI84HxZmduTILA7rpX
DhjvLpr3Etiga+kFpaHpaPi8TD8SHkXoUsCjvxInebnMMTzD9joiFgOgyY9mpFui
TdaBJQbpdqQACj7LzTWb4OE4y2BThihCQRxEV+ioratF4yUQvNs+ZUH7G6aXD+u5
dHn5HrwdVw1Hr8Mvn4dGp+smWg9WY7ViYG4A++MnESLn/pmPNPW56MORcr3Ywx65
LvKRRFHQV80MNNVIIb/bE/FmJUNS0nAiNs2fxBx1IK1jcmMGDw4nztJqDby1ORrp
0XZ60Vzk50lJLVU3aPAaOpg+VBeHVOmmJ1CJeyAvP/+/oYtKR5j/K3tJPsMpRmAY
QqszKbrAKbkTidOIijlBO8n9pu0f9GBj39ItVQGL
-----END CERTIFICATE-----

# Issuer: CN=SSL.com TLS RSA Root CA 2022 O=SSL Corporation
# Subject: CN=SSL.com TLS RSA Root CA 2022 O=SSL Corporation
# Label: "SSL.com TLS RSA Root CA 2022"
# Serial: 148535279242832292258835760425842727825
# MD5 Fingerprint: d8:4e:c6:59:30:d8:fe:a0:d6:7a:5a:2c:2c:69:78:da
# SHA1 Fingerprint: ec:2c:83:40:72:af:26:95:10:ff:0e:f2:03:ee:31:70:f6:78:9d:ca
# SHA256 Fingerprint: 8f:af:7d:2e:2c:b4:70:9b:b8:e0:b3:36:66:bf:75:a5:dd:45:b5:de:48:0f:8e:a8:d4:bf:e6:be:bc:17:f2:ed
-----BEGIN CERTIFICATE-----
MIIFiTCCA3GgAwIBAgIQb77arXO9CEDii02+1PdbkTANBgkqhkiG9w0BAQsFADBO
MQswCQYDVQQGEwJVUzEYMBYGA1UECgwPU1NMIENvcnBvcmF0aW9uMSUwIwYDVQQD
DBxTU0wuY29tIFRMUyBSU0EgUm9vdCBDQSAyMDIyMB4XDTIyMDgyNTE2MzQyMloX
DTQ2MDgxOTE2MzQyMVowTjELMAkGA1UEBhMCVVMxGDAWBgNVBAoMD1NTTCBDb3Jw
b3JhdGlvbjElMCMGA1UEAwwcU1NMLmNvbSBUTFMgUlNBIFJvb3QgQ0EgMjAyMjCC
AiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBANCkCXJPQIgSYT41I57u9nTP
L3tYPc48DRAokC+X94xI2KDYJbFMsBFMF3NQ0CJKY7uB0ylu1bUJPiYYf7ISf5OY
t6/wNr/y7hienDtSxUcZXXTzZGbVXcdotL8bHAajvI9AI7YexoS9UcQbOcGV0ins
S657Lb85/bRi3pZ7QcacoOAGcvvwB5cJOYF0r/c0WRFXCsJbwST0MXMwgsadugL3
PnxEX4MN8/HdIGkWCVDi1FW24IBydm5MR7d1VVm0U3TZlMZBrViKMWYPHqIbKUBO
L9975hYsLfy/7PO0+r4Y9ptJ1O4Fbtk085zx7AGL0SDGD6C1vBdOSHtRwvzpXGk3
R2azaPgVKPC506QVzFpPulJwoxJF3ca6TvvC0PeoUidtbnm1jPx7jMEWTO6Af77w
dr5BUxIzrlo4QqvXDz5BjXYHMtWrifZOZ9mxQnUjbvPNQrL8VfVThxc7wDNY8VLS
+YCk8OjwO4s4zKTGkH8PnP2L0aPP2oOnaclQNtVcBdIKQXTbYxE3waWglksejBYS
d66UNHsef8JmAOSqg+qKkK3ONkRN0VHpvB/zagX9wHQfJRlAUW7qglFA35u5CCoG
AtUjHBPW6dvbxrB6y3snm/vg1UYk7RBLY0ulBY+6uB0rpvqR4pJSvezrZ5dtmi2f
gTIFZzL7SAg/2SW4BCUvAgMBAAGjYzBhMA8GA1UdEwEB/wQFMAMBAf8wHwYDVR0j
BBgwFoAU+y437uOEeicuzRk1sTN8/9REQrkwHQYDVR0OBBYEFPsuN+7jhHonLs0Z
NbEzfP/UREK5MA4GA1UdDwEB/wQEAwIBhjANBgkqhkiG9w0BAQsFAAOCAgEAjYlt
hEUY8U+zoO9opMAdrDC8Z2awms22qyIZZtM7QbUQnRC6cm4pJCAcAZli05bg4vsM
QtfhWsSWTVTNj8pDU/0quOr4ZcoBwq1gaAafORpR2eCNJvkLTqVTJXojpBzOCBvf
R4iyrT7gJ4eLSYwfqUdYe5byiB0YrrPRpgqU+tvT5TgKa3kSM/tKWTcWQA673vWJ
DPFs0/dRa1419dvAJuoSc06pkZCmF8NsLzjUo3KUQyxi4U5cMj29TH0ZR6LDSeeW
P4+a0zvkEdiLA9z2tmBVGKaBUfPhqBVq6+AL8BQx1rmMRTqoENjwuSfr98t67wVy
lrXEj5ZzxOhWc5y8aVFjvO9nHEMaX3cZHxj4HCUp+UmZKbaSPaKDN7EgkaibMOlq
bLQjk2UEqxHzDh1TJElTHaE/nUiSEeJ9DU/1172iWD54nR4fK/4huxoTtrEoZP2w
AgDHbICivRZQIA9ygV/MlP+7mea6kMvq+cYMwq7FGc4zoWtcu358NFcXrfA/rs3q
r5nsLFR+jM4uElZI7xc7P0peYNLcdDa8pUNjyw9bowJWCZ4kLOGGgYz+qxcs+sji
Mho6/4UIyYOf8kpIEFR3N+2ivEC+5BB09+Rbu7nzifmPQdjH5FCQNYA+HLhNkNPU
98OwoX6EyneSMSy4kLGCenROmxMmtNVQZlR4rmA=
-----END CERTIFICATE-----

# Issuer: CN=SSL.com TLS ECC Root CA 2022 O=SSL Corporation
# Subject: CN=SSL.com TLS ECC Root CA 2022 O=SSL Corporation
# Label: "SSL.com TLS ECC Root CA 2022"
# Serial: 26605119622390491762507526719404364228
# MD5 Fingerprint: 99:d7:5c:f1:51:36:cc:e9:ce:d9:19:2e:77:71:56:c5
# SHA1 Fingerprint: 9f:5f:d9:1a:54:6d:f5:0c:71:f0:ee:7a:bd:17:49:98:84:73:e2:39
# SHA256 Fingerprint: c3:2f:fd:9f:46:f9:36:d1:6c:36:73:99:09:59:43:4b:9a:d6:0a:af:bb:9e:7c:f3:36:54:f1:44:cc:1b:a1:43
-----BEGIN CERTIFICATE-----
MIICOjCCAcCgAwIBAgIQFAP1q/s3ixdAW+JDsqXRxDAKBggqhkjOPQQDAzBOMQsw
CQYDVQQGEwJVUzEYMBYGA1UECgwPU1NMIENvcnBvcmF0aW9uMSUwIwYDVQQDDBxT
U0wuY29tIFRMUyBFQ0MgUm9vdCBDQSAyMDIyMB4XDTIyMDgyNTE2MzM0OFoXDTQ2
MDgxOTE2MzM0N1owTjELMAkGA1UEBhMCVVMxGDAWBgNVBAoMD1NTTCBDb3Jwb3Jh
dGlvbjElMCMGA1UEAwwcU1NMLmNvbSBUTFMgRUNDIFJvb3QgQ0EgMjAyMjB2MBAG
ByqGSM49AgEGBSuBBAAiA2IABEUpNXP6wrgjzhR9qLFNoFs27iosU8NgCTWyJGYm
acCzldZdkkAZDsalE3D07xJRKF3nzL35PIXBz5SQySvOkkJYWWf9lCcQZIxPBLFN
SeR7T5v15wj4A4j3p8OSSxlUgaNjMGEwDwYDVR0TAQH/BAUwAwEB/zAfBgNVHSME
GDAWgBSJjy+j6CugFFR781a4Jl9nOAuc0DAdBgNVHQ4EFgQUiY8vo+groBRUe/NW
uCZfZzgLnNAwDgYDVR0PAQH/BAQDAgGGMAoGCCqGSM49BAMDA2gAMGUCMFXjIlbp
15IkWE8elDIPDAI2wv2sdDJO4fscgIijzPvX6yv/N33w7deedWo1dlJF4AIxAMeN
b0Igj762TVntd00pxCAgRWSGOlDGxK0tk/UYfXLtqc/ErFc2KAhl3zx5Zn6g6g==
-----END CERTIFICATE-----

# Issuer: CN=Atos TrustedRoot Root CA ECC TLS 2021 O=Atos
# Subject: CN=Atos TrustedRoot Root CA ECC TLS 2021 O=Atos
# Label: "Atos TrustedRoot Root CA ECC TLS 2021"
# Serial: 81873346711060652204712539181482831616
# MD5 Fingerprint: 16:9f:ad:f1:70:ad:79:d6:ed:29:b4:d1:c5:79:70:a8
# SHA1 Fingerprint: 9e:bc:75:10:42:b3:02:f3:81:f4:f7:30:62:d4:8f:c3:a7:51:b2:dd
# SHA256 Fingerprint: b2:fa:e5:3e:14:cc:d7:ab:92:12:06:47:01:ae:27:9c:1d:89:88:fa:cb:77:5f:a8:a0:08:91:4e:66:39:88:a8
-----BEGIN CERTIFICATE-----
MIICFTCCAZugAwIBAgIQPZg7pmY9kGP3fiZXOATvADAKBggqhkjOPQQDAzBMMS4w
LAYDVQQDDCVBdG9zIFRydXN0ZWRSb290IFJvb3QgQ0EgRUNDIFRMUyAyMDIxMQ0w
CwYDVQQKDARBdG9zMQswCQYDVQQGEwJERTAeFw0yMTA0MjIwOTI2MjNaFw00MTA0
MTcwOTI2MjJaMEwxLjAsBgNVBAMMJUF0b3MgVHJ1c3RlZFJvb3QgUm9vdCBDQSBF
Q0MgVExTIDIwMjExDTALBgNVBAoMBEF0b3MxCzAJBgNVBAYTAkRFMHYwEAYHKoZI
zj0CAQYFK4EEACIDYgAEloZYKDcKZ9Cg3iQZGeHkBQcfl+3oZIK59sRxUM6KDP/X
tXa7oWyTbIOiaG6l2b4siJVBzV3dscqDY4PMwL502eCdpO5KTlbgmClBk1IQ1SQ4
AjJn8ZQSb+/Xxd4u/RmAo0IwQDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBR2
KCXWfeBmmnoJsmo7jjPXNtNPojAOBgNVHQ8BAf8EBAMCAYYwCgYIKoZIzj0EAwMD
aAAwZQIwW5kp85wxtolrbNa9d+F851F+uDrNozZffPc8dz7kUK2o59JZDCaOMDtu
CCrCp1rIAjEAmeMM56PDr9NJLkaCI2ZdyQAUEv049OGYa3cpetskz2VAv9LcjBHo
9H1/IISpQuQo
-----END CERTIFICATE-----

# Issuer: CN=Atos TrustedRoot Root CA RSA TLS 2021 O=Atos
# Subject: CN=Atos TrustedRoot Root CA RSA TLS 2021 O=Atos
# Label: "Atos TrustedRoot Root CA RSA TLS 2021"
# Serial: 111436099570196163832749341232207667876
# MD5 Fingerprint: d4:d3:46:b8:9a:c0:9c:76:5d:9e:3a:c3:b9:99:31:d2
# SHA1 Fingerprint: 18:52:3b:0d:06:37:e4:d6:3a:df:23:e4:98:fb:5b:16:fb:86:74:48
# SHA256 Fingerprint: 81:a9:08:8e:a5:9f:b3:64:c5:48:a6:f8:55:59:09:9b:6f:04:05:ef:bf:18:e5:32:4e:c9:f4:57:ba:00:11:2f
-----BEGIN CERTIFICATE-----
MIIFZDCCA0ygAwIBAgIQU9XP5hmTC/srBRLYwiqipDANBgkqhkiG9w0BAQwFADBM
MS4wLAYDVQQDDCVBdG9zIFRydXN0ZWRSb290IFJvb3QgQ0EgUlNBIFRMUyAyMDIx
MQ0wCwYDVQQKDARBdG9zMQswCQYDVQQGEwJERTAeFw0yMTA0MjIwOTIxMTBaFw00
MTA0MTcwOTIxMDlaMEwxLjAsBgNVBAMMJUF0b3MgVHJ1c3RlZFJvb3QgUm9vdCBD
QSBSU0EgVExTIDIwMjExDTALBgNVBAoMBEF0b3MxCzAJBgNVBAYTAkRFMIICIjAN
BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAtoAOxHm9BYx9sKOdTSJNy/BBl01Z
4NH+VoyX8te9j2y3I49f1cTYQcvyAh5x5en2XssIKl4w8i1mx4QbZFc4nXUtVsYv
Ye+W/CBGvevUez8/fEc4BKkbqlLfEzfTFRVOvV98r61jx3ncCHvVoOX3W3WsgFWZ
kmGbzSoXfduP9LVq6hdKZChmFSlsAvFr1bqjM9xaZ6cF4r9lthawEO3NUDPJcFDs
GY6wx/J0W2tExn2WuZgIWWbeKQGb9Cpt0xU6kGpn8bRrZtkh68rZYnxGEFzedUln
nkL5/nWpo63/dgpnQOPF943HhZpZnmKaau1Fh5hnstVKPNe0OwANwI8f4UDErmwh
3El+fsqyjW22v5MvoVw+j8rtgI5Y4dtXz4U2OLJxpAmMkokIiEjxQGMYsluMWuPD
0xeqqxmjLBvk1cbiZnrXghmmOxYsL3GHX0WelXOTwkKBIROW1527k2gV+p2kHYzy
geBYBr3JtuP2iV2J+axEoctr+hbxx1A9JNr3w+SH1VbxT5Aw+kUJWdo0zuATHAR8
ANSbhqRAvNncTFd+rrcztl524WWLZt+NyteYr842mIycg5kDcPOvdO3GDjbnvezB
c6eUWsuSZIKmAMFwoW4sKeFYV+xafJlrJaSQOoD0IJ2azsct+bJLKZWD6TWNp0lI
pw9MGZHQ9b8Q4HECAwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU
dEmZ0f+0emhFdcN+tNzMzjkz2ggwDgYDVR0PAQH/BAQDAgGGMA0GCSqGSIb3DQEB
DAUAA4ICAQAjQ1MkYlxt/T7Cz1UAbMVWiLkO3TriJQ2VSpfKgInuKs1l+NsW4AmS
4BjHeJi78+xCUvuppILXTdiK/ORO/auQxDh1MoSf/7OwKwIzNsAQkG8dnK/haZPs
o0UvFJ/1TCplQ3IM98P4lYsU84UgYt1UU90s3BiVaU+DR3BAM1h3Egyi61IxHkzJ
qM7F78PRreBrAwA0JrRUITWXAdxfG/F851X6LWh3e9NpzNMOa7pNdkTWwhWaJuyw
xfW70Xp0wmzNxbVe9kzmWy2B27O3Opee7c9GslA9hGCZcbUztVdF5kJHdWoOsAgM
rr3e97sPWD2PAzHoPYJQyi9eDF20l74gNAf0xBLh7tew2VktafcxBPTy+av5EzH4
AXcOPUIjJsyacmdRIXrMPIWo6iFqO9taPKU0nprALN+AnCng33eU0aKAQv9qTFsR
0PXNor6uzFFcw9VUewyu1rkGd4Di7wcaaMxZUa1+XGdrudviB0JbuAEFWDlN5LuY
o7Ey7Nmj1m+UI/87tyll5gfp77YZ6ufCOB0yiJA8EytuzO+rdwY0d4RPcuSBhPm5
dDTedk+SKlOxJTnbPP/lPqYO5Wue/9vsL3SD3460s6neFE3/MaNFcyT6lSnMEpcE
oji2jbDwN/zIIX8/syQbPYtuzE2wFg2WHYMfRsCbvUOZ58SWLs5fyQ==
-----END CERTIFICATE-----

# Issuer: CN=TrustAsia Global Root CA G3 O=TrustAsia Technologies, Inc.
# Subject: CN=TrustAsia Global Root CA G3 O=TrustAsia Technologies, Inc.
# Label: "TrustAsia Global Root CA G3"
# Serial: 576386314500428537169965010905813481816650257167
# MD5 Fingerprint: 30:42:1b:b7:bb:81:75:35:e4:16:4f:53:d2:94:de:04
# SHA1 Fingerprint: 63:cf:b6:c1:27:2b:56:e4:88:8e:1c:23:9a:b6:2e:81:47:24:c3:c7
# SHA256 Fingerprint: e0:d3:22:6a:eb:11:63:c2:e4:8f:f9:be:3b:50:b4:c6:43:1b:e7:bb:1e:ac:c5:c3:6b:5d:5e:c5:09:03:9a:08
-----BEGIN CERTIFICATE-----
MIIFpTCCA42gAwIBAgIUZPYOZXdhaqs7tOqFhLuxibhxkw8wDQYJKoZIhvcNAQEM
BQAwWjELMAkGA1UEBhMCQ04xJTAjBgNVBAoMHFRydXN0QXNpYSBUZWNobm9sb2dp
ZXMsIEluYy4xJDAiBgNVBAMMG1RydXN0QXNpYSBHbG9iYWwgUm9vdCBDQSBHMzAe
Fw0yMTA1MjAwMjEwMTlaFw00NjA1MTkwMjEwMTlaMFoxCzAJBgNVBAYTAkNOMSUw
IwYDVQQKDBxUcnVzdEFzaWEgVGVjaG5vbG9naWVzLCBJbmMuMSQwIgYDVQQDDBtU
cnVzdEFzaWEgR2xvYmFsIFJvb3QgQ0EgRzMwggIiMA0GCSqGSIb3DQEBAQUAA4IC
DwAwggIKAoICAQDAMYJhkuSUGwoqZdC+BqmHO1ES6nBBruL7dOoKjbmzTNyPtxNS
T1QY4SxzlZHFZjtqz6xjbYdT8PfxObegQ2OwxANdV6nnRM7EoYNl9lA+sX4WuDqK
AtCWHwDNBSHvBm3dIZwZQ0WhxeiAysKtQGIXBsaqvPPW5vxQfmZCHzyLpnl5hkA1
nyDvP+uLRx+PjsXUjrYsyUQE49RDdT/VP68czH5GX6zfZBCK70bwkPAPLfSIC7Ep
qq+FqklYqL9joDiR5rPmd2jE+SoZhLsO4fWvieylL1AgdB4SQXMeJNnKziyhWTXA
yB1GJ2Faj/lN03J5Zh6fFZAhLf3ti1ZwA0pJPn9pMRJpxx5cynoTi+jm9WAPzJMs
hH/x/Gr8m0ed262IPfN2dTPXS6TIi/n1Q1hPy8gDVI+lhXgEGvNz8teHHUGf59gX
zhqcD0r83ERoVGjiQTz+LISGNzzNPy+i2+f3VANfWdP3kXjHi3dqFuVJhZBFcnAv
kV34PmVACxmZySYgWmjBNb9Pp1Hx2BErW+Canig7CjoKH8GB5S7wprlppYiU5msT
f9FkPz2ccEblooV7WIQn3MSAPmeamseaMQ4w7OYXQJXZRe0Blqq/DPNL0WP3E1jA
uPP6Z92bfW1K/zJMtSU7/xxnD4UiWQWRkUF3gdCFTIcQcf+eQxuulXUtgQIDAQAB
o2MwYTAPBgNVHRMBAf8EBTADAQH/MB8GA1UdIwQYMBaAFEDk5PIj7zjKsK5Xf/Ih
MBY027ySMB0GA1UdDgQWBBRA5OTyI+84yrCuV3/yITAWNNu8kjAOBgNVHQ8BAf8E
BAMCAQYwDQYJKoZIhvcNAQEMBQADggIBACY7UeFNOPMyGLS0XuFlXsSUT9SnYaP4
wM8zAQLpw6o1D/GUE3d3NZ4tVlFEbuHGLige/9rsR82XRBf34EzC4Xx8MnpmyFq2
XFNFV1pF1AWZLy4jVe5jaN/TG3inEpQGAHUNcoTpLrxaatXeL1nHo+zSh2bbt1S1
JKv0Q3jbSwTEb93mPmY+KfJLaHEih6D4sTNjduMNhXJEIlU/HHzp/LgV6FL6qj6j
ITk1dImmasI5+njPtqzn59ZW/yOSLlALqbUHM/Q4X6RJpstlcHboCoWASzY9M/eV
VHUl2qzEc4Jl6VL1XP04lQJqaTDFHApXB64ipCz5xUG3uOyfT0gA+QEEVcys+TIx
xHWVBqB/0Y0n3bOppHKH/lmLmnp0Ft0WpWIp6zqW3IunaFnT63eROfjXy9mPX1on
AX1daBli2MjN9LdyR75bl87yraKZk62Uy5P2EgmVtqvXO9A/EcswFi55gORngS1d
7XB4tmBZrOFdRWOPyN9yaFvqHbgB8X7754qz41SgOAngPN5C8sLtLpvzHzW2Ntjj
gKGLzZlkD8Kqq7HK9W+eQ42EVJmzbsASZthwEPEGNTNDqJwuuhQxzhB/HIbjj9LV
+Hfsm6vxL2PZQl/gZ4FkkfGXL/xuJvYz+NO1+MRiqzFRJQJ6+N1rZdVtTTDIZbpo
FGWsJwt0ivKH
-----END CERTIFICATE-----

# Issuer: CN=TrustAsia Global Root CA G4 O=TrustAsia Technologies, Inc.
# Subject: CN=TrustAsia Global Root CA G4 O=TrustAsia Technologies, Inc.
# Label: "TrustAsia Global Root CA G4"
# Serial: 451799571007117016466790293371524403291602933463
# MD5 Fingerprint: 54:dd:b2:d7:5f:d8:3e:ed:7c:e0:0b:2e:cc:ed:eb:eb
# SHA1 Fingerprint: 57:73:a5:61:5d:80:b2:e6:ac:38:82:fc:68:07:31:ac:9f:b5:92:5a
# SHA256 Fingerprint: be:4b:56:cb:50:56:c0:13:6a:52:6d:f4:44:50:8d:aa:36:a0:b5:4f:42:e4:ac:38:f7:2a:f4:70:e4:79:65:4c
-----BEGIN CERTIFICATE-----
MIICVTCCAdygAwIBAgIUTyNkuI6XY57GU4HBdk7LKnQV1tcwCgYIKoZIzj0EAwMw
WjELMAkGA1UEBhMCQ04xJTAjBgNVBAoMHFRydXN0QXNpYSBUZWNobm9sb2dpZXMs
IEluYy4xJDAiBgNVBAMMG1RydXN0QXNpYSBHbG9iYWwgUm9vdCBDQSBHNDAeFw0y
MTA1MjAwMjEwMjJaFw00NjA1MTkwMjEwMjJaMFoxCzAJBgNVBAYTAkNOMSUwIwYD
VQQKDBxUcnVzdEFzaWEgVGVjaG5vbG9naWVzLCBJbmMuMSQwIgYDVQQDDBtUcnVz
dEFzaWEgR2xvYmFsIFJvb3QgQ0EgRzQwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAATx
s8045CVD5d4ZCbuBeaIVXxVjAd7Cq92zphtnS4CDr5nLrBfbK5bKfFJV4hrhPVbw
LxYI+hW8m7tH5j/uqOFMjPXTNvk4XatwmkcN4oFBButJ+bAp3TPsUKV/eSm4IJij
YzBhMA8GA1UdEwEB/wQFMAMBAf8wHwYDVR0jBBgwFoAUpbtKl86zK3+kMd6Xg1mD
pm9xy94wHQYDVR0OBBYEFKW7SpfOsyt/pDHel4NZg6ZvccveMA4GA1UdDwEB/wQE
AwIBBjAKBggqhkjOPQQDAwNnADBkAjBe8usGzEkxn0AAbbd+NvBNEU/zy4k6LHiR
UKNbwMp1JvK/kF0LgoxgKJ/GcJpo5PECMFxYDlZ2z1jD1xCMuo6u47xkdUfFVZDj
/bpV6wfEU6s3qe4hsiFbYI89MvHVI5TWWA==
-----END CERTIFICATE-----

# Issuer: CN=CommScope Public Trust ECC Root-01 O=CommScope
# Subject: CN=CommScope Public Trust ECC Root-01 O=CommScope
# Label: "CommScope Public Trust ECC Root-01"
# Serial: 385011430473757362783587124273108818652468453534
# MD5 Fingerprint: 3a:40:a7:fc:03:8c:9c:38:79:2f:3a:a2:6c:b6:0a:16
# SHA1 Fingerprint: 07:86:c0:d8:dd:8e:c0:80:98:06:98:d0:58:7a:ef:de:a6:cc:a2:5d
# SHA256 Fingerprint: 11:43:7c:da:7b:b4:5e:41:36:5f:45:b3:9a:38:98:6b:0d:e0:0d:ef:34:8e:0c:7b:b0:87:36:33:80:0b:c3:8b
-----BEGIN CERTIFICATE-----
MIICHTCCAaOgAwIBAgIUQ3CCd89NXTTxyq4yLzf39H91oJ4wCgYIKoZIzj0EAwMw
TjELMAkGA1UEBhMCVVMxEjAQBgNVBAoMCUNvbW1TY29wZTErMCkGA1UEAwwiQ29t
bVNjb3BlIFB1YmxpYyBUcnVzdCBFQ0MgUm9vdC0wMTAeFw0yMTA0MjgxNzM1NDNa
Fw00NjA0MjgxNzM1NDJaME4xCzAJBgNVBAYTAlVTMRIwEAYDVQQKDAlDb21tU2Nv
cGUxKzApBgNVBAMMIkNvbW1TY29wZSBQdWJsaWMgVHJ1c3QgRUNDIFJvb3QtMDEw
djAQBgcqhkjOPQIBBgUrgQQAIgNiAARLNumuV16ocNfQj3Rid8NeeqrltqLxeP0C
flfdkXmcbLlSiFS8LwS+uM32ENEp7LXQoMPwiXAZu1FlxUOcw5tjnSCDPgYLpkJE
hRGnSjot6dZoL0hOUysHP029uax3OVejQjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYD
VR0PAQH/BAQDAgEGMB0GA1UdDgQWBBSOB2LAUN3GGQYARnQE9/OufXVNMDAKBggq
hkjOPQQDAwNoADBlAjEAnDPfQeMjqEI2Jpc1XHvr20v4qotzVRVcrHgpD7oh2MSg
2NED3W3ROT3Ek2DS43KyAjB8xX6I01D1HiXo+k515liWpDVfG2XqYZpwI7UNo5uS
Um9poIyNStDuiw7LR47QjRE=
-----END CERTIFICATE-----

# Issuer: CN=CommScope Public Trust ECC Root-02 O=CommScope
# Subject: CN=CommScope Public Trust ECC Root-02 O=CommScope
# Label: "CommScope Public Trust ECC Root-02"
# Serial: 234015080301808452132356021271193974922492992893
# MD5 Fingerprint: 59:b0:44:d5:65:4d:b8:5c:55:19:92:02:b6:d1:94:b2
# SHA1 Fingerprint: 3c:3f:ef:57:0f:fe:65:93:86:9e:a0:fe:b0:f6:ed:8e:d1:13:c7:e5
# SHA256 Fingerprint: 2f:fb:7f:81:3b:bb:b3:c8:9a:b4:e8:16:2d:0f:16:d7:15:09:a8:30:cc:9d:73:c2:62:e5:14:08:75:d1:ad:4a
-----BEGIN CERTIFICATE-----
MIICHDCCAaOgAwIBAgIUKP2ZYEFHpgE6yhR7H+/5aAiDXX0wCgYIKoZIzj0EAwMw
TjELMAkGA1UEBhMCVVMxEjAQBgNVBAoMCUNvbW1TY29wZTErMCkGA1UEAwwiQ29t
bVNjb3BlIFB1YmxpYyBUcnVzdCBFQ0MgUm9vdC0wMjAeFw0yMTA0MjgxNzQ0NTRa
Fw00NjA0MjgxNzQ0NTNaME4xCzAJBgNVBAYTAlVTMRIwEAYDVQQKDAlDb21tU2Nv
cGUxKzApBgNVBAMMIkNvbW1TY29wZSBQdWJsaWMgVHJ1c3QgRUNDIFJvb3QtMDIw
djAQBgcqhkjOPQIBBgUrgQQAIgNiAAR4MIHoYx7l63FRD/cHB8o5mXxO1Q/MMDAL
j2aTPs+9xYa9+bG3tD60B8jzljHz7aRP+KNOjSkVWLjVb3/ubCK1sK9IRQq9qEmU
v4RDsNuESgMjGWdqb8FuvAY5N9GIIvejQjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYD
VR0PAQH/BAQDAgEGMB0GA1UdDgQWBBTmGHX/72DehKT1RsfeSlXjMjZ59TAKBggq
hkjOPQQDAwNnADBkAjAmc0l6tqvmSfR9Uj/UQQSugEODZXW5hYA4O9Zv5JOGq4/n
ich/m35rChJVYaoR4HkCMHfoMXGsPHED1oQmHhS48zs73u1Z/GtMMH9ZzkXpc2AV
mkzw5l4lIhVtwodZ0LKOag==
-----END CERTIFICATE-----

# Issuer: CN=CommScope Public Trust RSA Root-01 O=CommScope
# Subject: CN=CommScope Public Trust RSA Root-01 O=CommScope
# Label: "CommScope Public Trust RSA Root-01"
# Serial: 354030733275608256394402989253558293562031411421
# MD5 Fingerprint: 0e:b4:15:bc:87:63:5d:5d:02:73:d4:26:38:68:73:d8
# SHA1 Fingerprint: 6d:0a:5f:f7:b4:23:06:b4:85:b3:b7:97:64:fc:ac:75:f5:33:f2:93
# SHA256 Fingerprint: 02:bd:f9:6e:2a:45:dd:9b:f1:8f:c7:e1:db:df:21:a0:37:9b:a3:c9:c2:61:03:44:cf:d8:d6:06:fe:c1:ed:81
-----BEGIN CERTIFICATE-----
MIIFbDCCA1SgAwIBAgIUPgNJgXUWdDGOTKvVxZAplsU5EN0wDQYJKoZIhvcNAQEL
BQAwTjELMAkGA1UEBhMCVVMxEjAQBgNVBAoMCUNvbW1TY29wZTErMCkGA1UEAwwi
Q29tbVNjb3BlIFB1YmxpYyBUcnVzdCBSU0EgUm9vdC0wMTAeFw0yMTA0MjgxNjQ1
NTRaFw00NjA0MjgxNjQ1NTNaME4xCzAJBgNVBAYTAlVTMRIwEAYDVQQKDAlDb21t
U2NvcGUxKzApBgNVBAMMIkNvbW1TY29wZSBQdWJsaWMgVHJ1c3QgUlNBIFJvb3Qt
MDEwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCwSGWjDR1C45FtnYSk
YZYSwu3D2iM0GXb26v1VWvZVAVMP8syMl0+5UMuzAURWlv2bKOx7dAvnQmtVzslh
suitQDy6uUEKBU8bJoWPQ7VAtYXR1HHcg0Hz9kXHgKKEUJdGzqAMxGBWBB0HW0al
DrJLpA6lfO741GIDuZNqihS4cPgugkY4Iw50x2tBt9Apo52AsH53k2NC+zSDO3Oj
WiE260f6GBfZumbCk6SP/F2krfxQapWsvCQz0b2If4b19bJzKo98rwjyGpg/qYFl
P8GMicWWMJoKz/TUyDTtnS+8jTiGU+6Xn6myY5QXjQ/cZip8UlF1y5mO6D1cv547
KI2DAg+pn3LiLCuz3GaXAEDQpFSOm117RTYm1nJD68/A6g3czhLmfTifBSeolz7p
UcZsBSjBAg/pGG3svZwG1KdJ9FQFa2ww8esD1eo9anbCyxooSU1/ZOD6K9pzg4H/
kQO9lLvkuI6cMmPNn7togbGEW682v3fuHX/3SZtS7NJ3Wn2RnU3COS3kuoL4b/JO
Hg9O5j9ZpSPcPYeoKFgo0fEbNttPxP/hjFtyjMcmAyejOQoBqsCyMWCDIqFPEgkB
Ea801M/XrmLTBQe0MXXgDW1XT2mH+VepuhX2yFJtocucH+X8eKg1mp9BFM6ltM6U
CBwJrVbl2rZJmkrqYxhTnCwuwwIDAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MA4G
A1UdDwEB/wQEAwIBBjAdBgNVHQ4EFgQUN12mmnQywsL5x6YVEFm45P3luG0wDQYJ
KoZIhvcNAQELBQADggIBAK+nz97/4L1CjU3lIpbfaOp9TSp90K09FlxD533Ahuh6
NWPxzIHIxgvoLlI1pKZJkGNRrDSsBTtXAOnTYtPZKdVUvhwQkZyybf5Z/Xn36lbQ
nmhUQo8mUuJM3y+Xpi/SB5io82BdS5pYV4jvguX6r2yBS5KPQJqTRlnLX3gWsWc+
QgvfKNmwrZggvkN80V4aCRckjXtdlemrwWCrWxhkgPut4AZ9HcpZuPN4KWfGVh2v
trV0KnahP/t1MJ+UXjulYPPLXAziDslg+MkfFoom3ecnf+slpoq9uC02EJqxWE2a
aE9gVOX2RhOOiKy8IUISrcZKiX2bwdgt6ZYD9KJ0DLwAHb/WNyVntHKLr4W96ioD
j8z7PEQkguIBpQtZtjSNMgsSDesnwv1B10A8ckYpwIzqug/xBpMu95yo9GA+o/E4
Xo4TwbM6l4c/ksp4qRyv0LAbJh6+cOx69TOY6lz/KwsETkPdY34Op054A5U+1C0w
lREQKC6/oAI+/15Z0wUOlV9TRe9rh9VIzRamloPh37MG88EU26fsHItdkJANclHn
YfkUyq+Dj7+vsQpZXdxc1+SWrVtgHdqul7I52Qb1dgAT+GhMIbA1xNxVssnBQVoc
icCMb3SgazNNtQEo/a2tiRc7ppqEvOuM6sRxJKi6KfkIsidWNTJf6jn7MZrVGczw
-----END CERTIFICATE-----

# Issuer: CN=CommScope Public Trust RSA Root-02 O=CommScope
# Subject: CN=CommScope Public Trust RSA Root-02 O=CommScope
# Label: "CommScope Public Trust RSA Root-02"
# Serial: 480062499834624527752716769107743131258796508494
# MD5 Fingerprint: e1:29:f9:62:7b:76:e2:96:6d:f3:d4:d7:0f:ae:1f:aa
# SHA1 Fingerprint: ea:b0:e2:52:1b:89:93:4c:11:68:f2:d8:9a:ac:22:4c:a3:8a:57:ae
# SHA256 Fingerprint: ff:e9:43:d7:93:42:4b:4f:7c:44:0c:1c:3d:64:8d:53:63:f3:4b:82:dc:87:aa:7a:9f:11:8f:c5:de:e1:01:f1
-----BEGIN CERTIFICATE-----
MIIFbDCCA1SgAwIBAgIUVBa/O345lXGN0aoApYYNK496BU4wDQYJKoZIhvcNAQEL
BQAwTjELMAkGA1UEBhMCVVMxEjAQBgNVBAoMCUNvbW1TY29wZTErMCkGA1UEAwwi
Q29tbVNjb3BlIFB1YmxpYyBUcnVzdCBSU0EgUm9vdC0wMjAeFw0yMTA0MjgxNzE2
NDNaFw00NjA0MjgxNzE2NDJaME4xCzAJBgNVBAYTAlVTMRIwEAYDVQQKDAlDb21t
U2NvcGUxKzApBgNVBAMMIkNvbW1TY29wZSBQdWJsaWMgVHJ1c3QgUlNBIFJvb3Qt
MDIwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQDh+g77aAASyE3VrCLE
NQE7xVTlWXZjpX/rwcRqmL0yjReA61260WI9JSMZNRTpf4mnG2I81lDnNJUDMrG0
kyI9p+Kx7eZ7Ti6Hmw0zdQreqjXnfuU2mKKuJZ6VszKWpCtYHu8//mI0SFHRtI1C
rWDaSWqVcN3SAOLMV2MCe5bdSZdbkk6V0/nLKR8YSvgBKtJjCW4k6YnS5cciTNxz
hkcAqg2Ijq6FfUrpuzNPDlJwnZXjfG2WWy09X6GDRl224yW4fKcZgBzqZUPckXk2
LHR88mcGyYnJ27/aaL8j7dxrrSiDeS/sOKUNNwFnJ5rpM9kzXzehxfCrPfp4sOcs
n/Y+n2Dg70jpkEUeBVF4GiwSLFworA2iI540jwXmojPOEXcT1A6kHkIfhs1w/tku
FT0du7jyU1fbzMZ0KZwYszZ1OC4PVKH4kh+Jlk+71O6d6Ts2QrUKOyrUZHk2EOH5
kQMreyBUzQ0ZGshBMjTRsJnhkB4BQDa1t/qp5Xd1pCKBXbCL5CcSD1SIxtuFdOa3
wNemKfrb3vOTlycEVS8KbzfFPROvCgCpLIscgSjX74Yxqa7ybrjKaixUR9gqiC6v
wQcQeKwRoi9C8DfF8rhW3Q5iLc4tVn5V8qdE9isy9COoR+jUKgF4z2rDN6ieZdIs
5fq6M8EGRPbmz6UNp2YINIos8wIDAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MA4G
A1UdDwEB/wQEAwIBBjAdBgNVHQ4EFgQUR9DnsSL/nSz12Vdgs7GxcJXvYXowDQYJ
KoZIhvcNAQELBQADggIBAIZpsU0v6Z9PIpNojuQhmaPORVMbc0RTAIFhzTHjCLqB
KCh6krm2qMhDnscTJk3C2OVVnJJdUNjCK9v+5qiXz1I6JMNlZFxHMaNlNRPDk7n3
+VGXu6TwYofF1gbTl4MgqX67tiHCpQ2EAOHyJxCDut0DgdXdaMNmEMjRdrSzbyme
APnCKfWxkxlSaRosTKCL4BWaMS/TiJVZbuXEs1DIFAhKm4sTg7GkcrI7djNB3Nyq
pgdvHSQSn8h2vS/ZjvQs7rfSOBAkNlEv41xdgSGn2rtO/+YHqP65DSdsu3BaVXoT
6fEqSWnHX4dXTEN5bTpl6TBcQe7rd6VzEojov32u5cSoHw2OHG1QAk8mGEPej1WF
sQs3BWDJVTkSBKEqz3EWnzZRSb9wO55nnPt7eck5HHisd5FUmrh1CoFSl+NmYWvt
PjgelmFV4ZFUjO2MJB+ByRCac5krFk5yAD9UG/iNuovnFNa2RU9g7Jauwy8CTl2d
lklyALKrdVwPaFsdZcJfMw8eD/A7hvWwTruc9+olBdytoptLFwG+Qt81IR2tq670
v64fG9PiO/yzcnMcmyiQiRM9HcEARwmWmjgb3bHPDcK0RPOWlc4yOo80nOAXx17O
rg3bhzjlP1v9mxnhMUF6cKojawHhRUzNlM47ni3niAIi9G7oyOzWPPO5std3eqx7
-----END CERTIFICATE-----

# Issuer: CN=Telekom Security TLS ECC Root 2020 O=Deutsche Telekom Security GmbH
# Subject: CN=Telekom Security TLS ECC Root 2020 O=Deutsche Telekom Security GmbH
# Label: "Telekom Security TLS ECC Root 2020"
# Serial: 72082518505882327255703894282316633856
# MD5 Fingerprint: c1:ab:fe:6a:10:2c:03:8d:bc:1c:22:32:c0:85:a7:fd
# SHA1 Fingerprint: c0:f8:96:c5:a9:3b:01:06:21:07:da:18:42:48:bc:e9:9d:88:d5:ec
# SHA256 Fingerprint: 57:8a:f4:de:d0:85:3f:4e:59:98:db:4a:ea:f9:cb:ea:8d:94:5f:60:b6:20:a3:8d:1a:3c:13:b2:bc:7b:a8:e1
-----BEGIN CERTIFICATE-----
MIICQjCCAcmgAwIBAgIQNjqWjMlcsljN0AFdxeVXADAKBggqhkjOPQQDAzBjMQsw
CQYDVQQGEwJERTEnMCUGA1UECgweRGV1dHNjaGUgVGVsZWtvbSBTZWN1cml0eSBH
bWJIMSswKQYDVQQDDCJUZWxla29tIFNlY3VyaXR5IFRMUyBFQ0MgUm9vdCAyMDIw
MB4XDTIwMDgyNTA3NDgyMFoXDTQ1MDgyNTIzNTk1OVowYzELMAkGA1UEBhMCREUx
JzAlBgNVBAoMHkRldXRzY2hlIFRlbGVrb20gU2VjdXJpdHkgR21iSDErMCkGA1UE
AwwiVGVsZWtvbSBTZWN1cml0eSBUTFMgRUNDIFJvb3QgMjAyMDB2MBAGByqGSM49
AgEGBSuBBAAiA2IABM6//leov9Wq9xCazbzREaK9Z0LMkOsVGJDZos0MKiXrPk/O
tdKPD/M12kOLAoC+b1EkHQ9rK8qfwm9QMuU3ILYg/4gND21Ju9sGpIeQkpT0CdDP
f8iAC8GXs7s1J8nCG6NCMEAwHQYDVR0OBBYEFONyzG6VmUex5rNhTNHLq+O6zd6f
MA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQDAgEGMAoGCCqGSM49BAMDA2cA
MGQCMHVSi7ekEE+uShCLsoRbQuHmKjYC2qBuGT8lv9pZMo7k+5Dck2TOrbRBR2Di
z6fLHgIwN0GMZt9Ba9aDAEH9L1r3ULRn0SyocddDypwnJJGDSA3PzfdUga/sf+Rn
27iQ7t0l
-----END CERTIFICATE-----

# Issuer: CN=Telekom Security TLS RSA Root 2023 O=Deutsche Telekom Security GmbH
# Subject: CN=Telekom Security TLS RSA Root 2023 O=Deutsche Telekom Security GmbH
# Label: "Telekom Security TLS RSA Root 2023"
# Serial: 44676229530606711399881795178081572759
# MD5 Fingerprint: bf:5b:eb:54:40:cd:48:71:c4:20:8d:7d:de:0a:42:f2
# SHA1 Fingerprint: 54:d3:ac:b3:bd:57:56:f6:85:9d:ce:e5:c3:21:e2:d4:ad:83:d0:93
# SHA256 Fingerprint: ef:c6:5c:ad:bb:59:ad:b6:ef:e8:4d:a2:23:11:b3:56:24:b7:1b:3b:1e:a0:da:8b:66:55:17:4e:c8:97:86:46
-----BEGIN CERTIFICATE-----
MIIFszCCA5ugAwIBAgIQIZxULej27HF3+k7ow3BXlzANBgkqhkiG9w0BAQwFADBj
MQswCQYDVQQGEwJERTEnMCUGA1UECgweRGV1dHNjaGUgVGVsZWtvbSBTZWN1cml0
eSBHbWJIMSswKQYDVQQDDCJUZWxla29tIFNlY3VyaXR5IFRMUyBSU0EgUm9vdCAy
MDIzMB4XDTIzMDMyODEyMTY0NVoXDTQ4MDMyNzIzNTk1OVowYzELMAkGA1UEBhMC
REUxJzAlBgNVBAoMHkRldXRzY2hlIFRlbGVrb20gU2VjdXJpdHkgR21iSDErMCkG
A1UEAwwiVGVsZWtvbSBTZWN1cml0eSBUTFMgUlNBIFJvb3QgMjAyMzCCAiIwDQYJ
KoZIhvcNAQEBBQADggIPADCCAgoCggIBAO01oYGA88tKaVvC+1GDrib94W7zgRJ9
cUD/h3VCKSHtgVIs3xLBGYSJwb3FKNXVS2xE1kzbB5ZKVXrKNoIENqil/Cf2SfHV
cp6R+SPWcHu79ZvB7JPPGeplfohwoHP89v+1VmLhc2o0mD6CuKyVU/QBoCcHcqMA
U6DksquDOFczJZSfvkgdmOGjup5czQRxUX11eKvzWarE4GC+j4NSuHUaQTXtvPM6
Y+mpFEXX5lLRbtLevOP1Czvm4MS9Q2QTps70mDdsipWol8hHD/BeEIvnHRz+sTug
BTNoBUGCwQMrAcjnj02r6LX2zWtEtefdi+zqJbQAIldNsLGyMcEWzv/9FIS3R/qy
8XDe24tsNlikfLMR0cN3f1+2JeANxdKz+bi4d9s3cXFH42AYTyS2dTd4uaNir73J
co4vzLuu2+QVUhkHM/tqty1LkCiCc/4YizWN26cEar7qwU02OxY2kTLvtkCJkUPg
8qKrBC7m8kwOFjQgrIfBLX7JZkcXFBGk8/ehJImr2BrIoVyxo/eMbcgByU/J7MT8
rFEz0ciD0cmfHdRHNCk+y7AO+oMLKFjlKdw/fKifybYKu6boRhYPluV75Gp6SG12
mAWl3G0eQh5C2hrgUve1g8Aae3g1LDj1H/1Joy7SWWO/gLCMk3PLNaaZlSJhZQNg
+y+TS/qanIA7AgMBAAGjYzBhMA4GA1UdDwEB/wQEAwIBBjAdBgNVHQ4EFgQUtqeX
gj10hZv3PJ+TmpV5dVKMbUcwDwYDVR0TAQH/BAUwAwEB/zAfBgNVHSMEGDAWgBS2
p5eCPXSFm/c8n5OalXl1UoxtRzANBgkqhkiG9w0BAQwFAAOCAgEAqMxhpr51nhVQ
pGv7qHBFfLp+sVr8WyP6Cnf4mHGCDG3gXkaqk/QeoMPhk9tLrbKmXauw1GLLXrtm
9S3ul0A8Yute1hTWjOKWi0FpkzXmuZlrYrShF2Y0pmtjxrlO8iLpWA1WQdH6DErw
M807u20hOq6OcrXDSvvpfeWxm4bu4uB9tPcy/SKE8YXJN3nptT+/XOR0so8RYgDd
GGah2XsjX/GO1WfoVNpbOms2b/mBsTNHM3dA+VKq3dSDz4V4mZqTuXNnQkYRIer+
CqkbGmVps4+uFrb2S1ayLfmlyOw7YqPta9BO1UAJpB+Y1zqlklkg5LB9zVtzaL1t
xKITDmcZuI1CfmwMmm6gJC3VRRvcxAIU/oVbZZfKTpBQCHpCNfnqwmbU+AGuHrS+
w6jv/naaoqYfRvaE7fzbzsQCzndILIyy7MMAo+wsVRjBfhnu4S/yrYObnqsZ38aK
L4x35bcF7DvB7L6Gs4a8wPfc5+pbrrLMtTWGS9DiP7bY+A4A7l3j941Y/8+LN+lj
X273CXE2whJdV/LItM3z7gLfEdxquVeEHVlNjM7IDiPCtyaaEBRx/pOyiriA8A4Q
ntOoUAw3gi/q4Iqd4Sw5/7W0cwDk90imc6y/st53BIe0o82bNSQ3+pCTE4FCxpgm
dTdmQRCsu/WU48IxK63nI1bMNSWSs1A=
-----END CERTIFICATE-----

# Issuer: CN=FIRMAPROFESIONAL CA ROOT-A WEB O=Firmaprofesional SA
# Subject: CN=FIRMAPROFESIONAL CA ROOT-A WEB O=Firmaprofesional SA
# Label: "FIRMAPROFESIONAL CA ROOT-A WEB"
# Serial: 65916896770016886708751106294915943533
# MD5 Fingerprint: 82:b2:ad:45:00:82:b0:66:63:f8:5f:c3:67:4e:ce:a3
# SHA1 Fingerprint: a8:31:11:74:a6:14:15:0d:ca:77:dd:0e:e4:0c:5d:58:fc:a0:72:a5
# SHA256 Fingerprint: be:f2:56:da:f2:6e:9c:69:bd:ec:16:02:35:97:98:f3:ca:f7:18:21:a0:3e:01:82:57:c5:3c:65:61:7f:3d:4a
-----BEGIN CERTIFICATE-----
MIICejCCAgCgAwIBAgIQMZch7a+JQn81QYehZ1ZMbTAKBggqhkjOPQQDAzBuMQsw
CQYDVQQGEwJFUzEcMBoGA1UECgwTRmlybWFwcm9mZXNpb25hbCBTQTEYMBYGA1UE
YQwPVkFURVMtQTYyNjM0MDY4MScwJQYDVQQDDB5GSVJNQVBST0ZFU0lPTkFMIENB
IFJPT1QtQSBXRUIwHhcNMjIwNDA2MDkwMTM2WhcNNDcwMzMxMDkwMTM2WjBuMQsw
CQYDVQQGEwJFUzEcMBoGA1UECgwTRmlybWFwcm9mZXNpb25hbCBTQTEYMBYGA1UE
YQwPVkFURVMtQTYyNjM0MDY4MScwJQYDVQQDDB5GSVJNQVBST0ZFU0lPTkFMIENB
IFJPT1QtQSBXRUIwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAARHU+osEaR3xyrq89Zf
e9MEkVz6iMYiuYMQYneEMy3pA4jU4DP37XcsSmDq5G+tbbT4TIqk5B/K6k84Si6C
cyvHZpsKjECcfIr28jlgst7L7Ljkb+qbXbdTkBgyVcUgt5SjYzBhMA8GA1UdEwEB
/wQFMAMBAf8wHwYDVR0jBBgwFoAUk+FDY1w8ndYn81LsF7Kpryz3dvgwHQYDVR0O
BBYEFJPhQ2NcPJ3WJ/NS7Beyqa8s93b4MA4GA1UdDwEB/wQEAwIBBjAKBggqhkjO
PQQDAwNoADBlAjAdfKR7w4l1M+E7qUW/Runpod3JIha3RxEL2Jq68cgLcFBTApFw
hVmpHqTm6iMxoAACMQD94vizrxa5HnPEluPBMBnYfubDl94cT7iJLzPrSA8Z94dG
XSaQpYXFuXqUPoeovQA=
-----END CERTIFICATE-----

# Issuer: CN=TWCA CYBER Root CA O=TAIWAN-CA OU=Root CA
# Subject: CN=TWCA CYBER Root CA O=TAIWAN-CA OU=Root CA
# Label: "TWCA CYBER Root CA"
# Serial: 85076849864375384482682434040119489222
# MD5 Fingerprint: 0b:33:a0:97:52:95:d4:a9:fd:bb:db:6e:a3:55:5b:51
# SHA1 Fingerprint: f6:b1:1c:1a:83:38:e9:7b:db:b3:a8:c8:33:24:e0:2d:9c:7f:26:66
# SHA256 Fingerprint: 3f:63:bb:28:14:be:17:4e:c8:b6:43:9c:f0:8d:6d:56:f0:b7:c4:05:88:3a:56:48:a3:34:42:4d:6b:3e:c5:58
-----BEGIN CERTIFICATE-----
MIIFjTCCA3WgAwIBAgIQQAE0jMIAAAAAAAAAATzyxjANBgkqhkiG9w0BAQwFADBQ
MQswCQYDVQQGEwJUVzESMBAGA1UEChMJVEFJV0FOLUNBMRAwDgYDVQQLEwdSb290
IENBMRswGQYDVQQDExJUV0NBIENZQkVSIFJvb3QgQ0EwHhcNMjIxMTIyMDY1NDI5
WhcNNDcxMTIyMTU1OTU5WjBQMQswCQYDVQQGEwJUVzESMBAGA1UEChMJVEFJV0FO
LUNBMRAwDgYDVQQLEwdSb290IENBMRswGQYDVQQDExJUV0NBIENZQkVSIFJvb3Qg
Q0EwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQDG+Moe2Qkgfh1sTs6P
40czRJzHyWmqOlt47nDSkvgEs1JSHWdyKKHfi12VCv7qze33Kc7wb3+szT3vsxxF
avcokPFhV8UMxKNQXd7UtcsZyoC5dc4pztKFIuwCY8xEMCDa6pFbVuYdHNWdZsc/
34bKS1PE2Y2yHer43CdTo0fhYcx9tbD47nORxc5zb87uEB8aBs/pJ2DFTxnk684i
JkXXYJndzk834H/nY62wuFm40AZoNWDTNq5xQwTxaWV4fPMf88oon1oglWa0zbfu
j3ikRRjpJi+NmykosaS3Om251Bw4ckVYsV7r8Cibt4LK/c/WMw+f+5eesRycnupf
Xtuq3VTpMCEobY5583WSjCb+3MX2w7DfRFlDo7YDKPYIMKoNM+HvnKkHIuNZW0CP
2oi3aQiotyMuRAlZN1vH4xfyIutuOVLF3lSnmMlLIJXcRolftBL5hSmO68gnFSDA
S9TMfAxsNAwmmyYxpjyn9tnQS6Jk/zuZQXLB4HCX8SS7K8R0IrGsayIyJNN4KsDA
oS/xUgXJP+92ZuJF2A09rZXIx4kmyA+upwMu+8Ff+iDhcK2wZSA3M2Cw1a/XDBzC
kHDXShi8fgGwsOsVHkQGzaRP6AzRwyAQ4VRlnrZR0Bp2a0JaWHY06rc3Ga4udfmW
5cFZ95RXKSWNOkyrTZpB0F8mAwIDAQABo2MwYTAOBgNVHQ8BAf8EBAMCAQYwDwYD
VR0TAQH/BAUwAwEB/zAfBgNVHSMEGDAWgBSdhWEUfMFib5do5E83QOGt4A1WNzAd
BgNVHQ4EFgQUnYVhFHzBYm+XaORPN0DhreANVjcwDQYJKoZIhvcNAQEMBQADggIB
AGSPesRiDrWIzLjHhg6hShbNcAu3p4ULs3a2D6f/CIsLJc+o1IN1KriWiLb73y0t
tGlTITVX1olNc79pj3CjYcya2x6a4CD4bLubIp1dhDGaLIrdaqHXKGnK/nZVekZn
68xDiBaiA9a5F/gZbG0jAn/xX9AKKSM70aoK7akXJlQKTcKlTfjF/biBzysseKNn
TKkHmvPfXvt89YnNdJdhEGoHK4Fa0o635yDRIG4kqIQnoVesqlVYL9zZyvpoBJ7t
RCT5dEA7IzOrg1oYJkK2bVS1FmAwbLGg+LhBoF1JSdJlBTrq/p1hvIbZv97Tujqx
f36SNI7JAG7cmL3c7IAFrQI932XtCwP39xaEBDG6k5TY8hL4iuO/Qq+n1M0RFxbI
Qh0UqEL20kCGoE8jypZFVmAGzbdVAaYBlGX+bgUJurSkquLvWL69J1bY73NxW0Qz
8ppy6rBePm6pUlvscG21h483XjyMnM7k8M4MZ0HMzvaAq07MTFb1wWFZk7Q+ptq4
NxKfKjLji7gh7MMrZQzvIt6IKTtM1/r+t+FHvpw+PoP7UV31aPcuIYXcv/Fa4nzX
xeSDwWrruoBa3lwtcHb4yOWHh8qgnaHlIhInD0Q9HWzq1MKLL295q39QpsQZp6F6
t5b5wR9iWqJDB0BeJsas7a5wFsWqynKKTbDPAYsDP27X
-----END CERTIFICATE-----

# Issuer: CN=SecureSign Root CA12 O=Cybertrust Japan Co., Ltd.
# Subject: CN=SecureSign Root CA12 O=Cybertrust Japan Co., Ltd.
# Label: "SecureSign Root CA12"
# Serial: 587887345431707215246142177076162061960426065942
# MD5 Fingerprint: c6:89:ca:64:42:9b:62:08:49:0b:1e:7f:e9:07:3d:e8
# SHA1 Fingerprint: 7a:22:1e:3d:de:1b:06:ac:9e:c8:47:70:16:8e:3c:e5:f7:6b:06:f4
# SHA256 Fingerprint: 3f:03:4b:b5:70:4d:44:b2:d0:85:45:a0:20:57:de:93:eb:f3:90:5f:ce:72:1a:cb:c7:30:c0:6d:da:ee:90:4e
-----BEGIN CERTIFICATE-----
MIIDcjCCAlqgAwIBAgIUZvnHwa/swlG07VOX5uaCwysckBYwDQYJKoZIhvcNAQEL
BQAwUTELMAkGA1UEBhMCSlAxIzAhBgNVBAoTGkN5YmVydHJ1c3QgSmFwYW4gQ28u
LCBMdGQuMR0wGwYDVQQDExRTZWN1cmVTaWduIFJvb3QgQ0ExMjAeFw0yMDA0MDgw
NTM2NDZaFw00MDA0MDgwNTM2NDZaMFExCzAJBgNVBAYTAkpQMSMwIQYDVQQKExpD
eWJlcnRydXN0IEphcGFuIENvLiwgTHRkLjEdMBsGA1UEAxMUU2VjdXJlU2lnbiBS
b290IENBMTIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC6OcE3emhF
KxS06+QT61d1I02PJC0W6K6OyX2kVzsqdiUzg2zqMoqUm048luT9Ub+ZyZN+v/mt
p7JIKwccJ/VMvHASd6SFVLX9kHrko+RRWAPNEHl57muTH2SOa2SroxPjcf59q5zd
J1M3s6oYwlkm7Fsf0uZlfO+TvdhYXAvA42VvPMfKWeP+bl+sg779XSVOKik71gur
FzJ4pOE+lEa+Ym6b3kaosRbnhW70CEBFEaCeVESE99g2zvVQR9wsMJvuwPWW0v4J
hscGWa5Pro4RmHvzC1KqYiaqId+OJTN5lxZJjfU+1UefNzFJM3IFTQy2VYzxV4+K
h9GtxRESOaCtAgMBAAGjQjBAMA8GA1UdEwEB/wQFMAMBAf8wDgYDVR0PAQH/BAQD
AgEGMB0GA1UdDgQWBBRXNPN0zwRL1SXm8UC2LEzZLemgrTANBgkqhkiG9w0BAQsF
AAOCAQEAPrvbFxbS8hQBICw4g0utvsqFepq2m2um4fylOqyttCg6r9cBg0krY6Ld
mmQOmFxv3Y67ilQiLUoT865AQ9tPkbeGGuwAtEGBpE/6aouIs3YIcipJQMPTw4WJ
mBClnW8Zt7vPemVV2zfrPIpyMpcemik+rY3moxtt9XUa5rBouVui7mlHJzWhhpmA
8zNL4WukJsPvdFlseqJkth5Ew1DgDzk9qTPxpfPSvWKErI4cqc1avTc7bgoitPQV
55FYxTpE05Uo2cBl6XLK0A+9H7MV2anjpEcJnuDLN/v9vZfVvhgaaaI5gdka9at/
yOPiZwud9AzqVN/Ssq+xIvEg37xEHA==
-----END CERTIFICATE-----

# Issuer: CN=SecureSign Root CA14 O=Cybertrust Japan Co., Ltd.
# Subject: CN=SecureSign Root CA14 O=Cybertrust Japan Co., Ltd.
# Label: "SecureSign Root CA14"
# Serial: 575790784512929437950770173562378038616896959179
# MD5 Fingerprint: 71:0d:72:fa:92:19:65:5e:89:04:ac:16:33:f0:bc:d5
# SHA1 Fingerprint: dd:50:c0:f7:79:b3:64:2e:74:a2:b8:9d:9f:d3:40:dd:bb:f0:f2:4f
# SHA256 Fingerprint: 4b:00:9c:10:34:49:4f:9a:b5:6b:ba:3b:a1:d6:27:31:fc:4d:20:d8:95:5a:dc:ec:10:a9:25:60:72:61:e3:38
-----BEGIN CERTIFICATE-----
MIIFcjCCA1qgAwIBAgIUZNtaDCBO6Ncpd8hQJ6JaJ90t8sswDQYJKoZIhvcNAQEM
BQAwUTELMAkGA1UEBhMCSlAxIzAhBgNVBAoTGkN5YmVydHJ1c3QgSmFwYW4gQ28u
LCBMdGQuMR0wGwYDVQQDExRTZWN1cmVTaWduIFJvb3QgQ0ExNDAeFw0yMDA0MDgw
NzA2MTlaFw00NTA0MDgwNzA2MTlaMFExCzAJBgNVBAYTAkpQMSMwIQYDVQQKExpD
eWJlcnRydXN0IEphcGFuIENvLiwgTHRkLjEdMBsGA1UEAxMUU2VjdXJlU2lnbiBS
b290IENBMTQwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQDF0nqh1oq/
FjHQmNE6lPxauG4iwWL3pwon71D2LrGeaBLwbCRjOfHw3xDG3rdSINVSW0KZnvOg
vlIfX8xnbacuUKLBl422+JX1sLrcneC+y9/3OPJH9aaakpUqYllQC6KxNedlsmGy
6pJxaeQp8E+BgQQ8sqVb1MWoWWd7VRxJq3qdwudzTe/NCcLEVxLbAQ4jeQkHO6Lo
/IrPj8BGJJw4J+CDnRugv3gVEOuGTgpa/d/aLIJ+7sr2KeH6caH3iGicnPCNvg9J
kdjqOvn90Ghx2+m1K06Ckm9mH+Dw3EzsytHqunQG+bOEkJTRX45zGRBdAuVwpcAQ
0BB8b8VYSbSwbprafZX1zNoCr7gsfXmPvkPx+SgojQlD+Ajda8iLLCSxjVIHvXib
y8posqTdDEx5YMaZ0ZPxMBoH064iwurO8YQJzOAUbn8/ftKChazcqRZOhaBgy/ac
18izju3Gm5h1DVXoX+WViwKkrkMpKBGk5hIwAUt1ax5mnXkvpXYvHUC0bcl9eQjs
0Wq2XSqypWa9a4X0dFbD9ed1Uigspf9mR6XU/v6eVL9lfgHWMI+lNpyiUBzuOIAB
SMbHdPTGrMNASRZhdCyvjG817XsYAFs2PJxQDcqSMxDxJklt33UkN4Ii1+iW/RVL
ApY+B3KVfqs9TC7XyvDf4Fg/LS8EmjijAQIDAQABo0IwQDAPBgNVHRMBAf8EBTAD
AQH/MA4GA1UdDwEB/wQEAwIBBjAdBgNVHQ4EFgQUBpOjCl4oaTeqYR3r6/wtbyPk
86AwDQYJKoZIhvcNAQEMBQADggIBAJaAcgkGfpzMkwQWu6A6jZJOtxEaCnFxEM0E
rX+lRVAQZk5KQaID2RFPeje5S+LGjzJmdSX7684/AykmjbgWHfYfM25I5uj4V7Ib
ed87hwriZLoAymzvftAj63iP/2SbNDefNWWipAA9EiOWWF3KY4fGoweITedpdopT
zfFP7ELyk+OZpDc8h7hi2/DsHzc/N19DzFGdtfCXwreFamgLRB7lUe6TzktuhsHS
DCRZNhqfLJGP4xjblJUK7ZGqDpncllPjYYPGFrojutzdfhrGe0K22VoF3Jpf1d+4
2kd92jjbrDnVHmtsKheMYc2xbXIBw8MgAGJoFjHVdqqGuw6qnsb58Nn4DSEC5MUo
FlkRudlpcyqSeLiSV5sI8jrlL5WwWLdrIBRtFO8KvH7YVdiI2i/6GaX7i+B/OfVy
K4XELKzvGUWSTLNhB9xNH27SgRNcmvMSZ4PPmz+Ln52kuaiWA3rF7iDeM9ovnhp6
dB7h7sxaOgTdsxoEqBRjrLdHEoOabPXm6RUVkRqEGQ6UROcSjiVbgGcZ3GOTEAtl
Lor6CZpO2oYofaphNdgOpygau1LgePhsumywbrmHXumZNTfxPWQrqaA0k89jL9WB
365jJ6UeTo3cKXhZ+PmhIIynJkBugnLNeLLIjzwec+fBH7/PzqUqm9tEZDKgu39c
JRNItX+S
-----END CERTIFICATE-----

# Issuer: CN=SecureSign Root CA15 O=Cybertrust Japan Co., Ltd.
# Subject: CN=SecureSign Root CA15 O=Cybertrust Japan Co., Ltd.
# Label: "SecureSign Root CA15"
# Serial: 126083514594751269499665114766174399806381178503
# MD5 Fingerprint: 13:30:fc:c4:62:a6:a9:de:b5:c1:68:af:b5:d2:31:47
# SHA1 Fingerprint: cb:ba:83:c8:c1:5a:5d:f1:f9:73:6f:ca:d7:ef:28:13:06:4a:07:7d
# SHA256 Fingerprint: e7:78:f0:f0:95:fe:84:37:29:cd:1a:00:82:17:9e:53:14:a9:c2:91:44:28:05:e1:fb:1d:8f:b6:b8:88:6c:3a
-----BEGIN CERTIFICATE-----
MIICIzCCAamgAwIBAgIUFhXHw9hJp75pDIqI7fBw+d23PocwCgYIKoZIzj0EAwMw
UTELMAkGA1UEBhMCSlAxIzAhBgNVBAoTGkN5YmVydHJ1c3QgSmFwYW4gQ28uLCBM
dGQuMR0wGwYDVQQDExRTZWN1cmVTaWduIFJvb3QgQ0ExNTAeFw0yMDA0MDgwODMy
NTZaFw00NTA0MDgwODMyNTZaMFExCzAJBgNVBAYTAkpQMSMwIQYDVQQKExpDeWJl
cnRydXN0IEphcGFuIENvLiwgTHRkLjEdMBsGA1UEAxMUU2VjdXJlU2lnbiBSb290
IENBMTUwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAAQLUHSNZDKZmbPSYAi4Io5GdCx4
wCtELW1fHcmuS1Iggz24FG1Th2CeX2yF2wYUleDHKP+dX+Sq8bOLbe1PL0vJSpSR
ZHX+AezB2Ot6lHhWGENfa4HL9rzatAy2KZMIaY+jQjBAMA8GA1UdEwEB/wQFMAMB
Af8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdDgQWBBTrQciu/NWeUUj1vYv0hyCTQSvT
9DAKBggqhkjOPQQDAwNoADBlAjEA2S6Jfl5OpBEHvVnCB96rMjhTKkZEBhd6zlHp
4P9mLQlO4E/0BdGF9jVg3PVys0Z9AjBEmEYagoUeYWmJSwdLZrWeqrqgHkHZAXQ6
bkU6iYAZezKYVWOr62Nuk22rGwlgMU4=
-----END CERTIFICATE-----
# File: distro-1.9.0.dist-info/WHEEL
Wheel-Version: 1.0
Generator: bdist_wheel (0.42.0)
Root-Is-Purelib: true
Tag: py3-none-any

# File: distro-1.9.0.dist-info/LICENSE
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "{}"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright {yyyy} {name of copyright owner}

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.

PK#@u\�e	_��distro-1.9.0.dist-info/RECORDnu�[���../../../bin/distro,sha256=6XbzdM4pKDrvNJcJIdM6ElyLIuSHQLFlXI5QqD6IVX4,212
distro-1.9.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
distro-1.9.0.dist-info/LICENSE,sha256=y16Ofl9KOYjhBjwULGDcLfdWBfTEZRXnduOspt-XbhQ,11325
distro-1.9.0.dist-info/METADATA,sha256=MWMqst5VkRMQkbM5e9zfeXcYV52Fp1GG8Gg53QwJ6B0,6791
distro-1.9.0.dist-info/RECORD,,
distro-1.9.0.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
distro-1.9.0.dist-info/WHEEL,sha256=oiQVh_5PnQM0E3gPdiz09WCNmwiHDMaGer_elqB3coM,92
distro-1.9.0.dist-info/entry_points.txt,sha256=3ObjqQMbh1xeQQwsWtgbfDNDMDD-EbggR1Oj_z8s9hc,46
distro-1.9.0.dist-info/top_level.txt,sha256=ikde_V_XEdSBqaGd5tEriN_wzYHLgTX_zVtlsGLHvwQ,7
distro/__init__.py,sha256=2fHjF-SfgPvjyNZ1iHh_wjqWdR_Yo5ODHwZC0jLBPhc,981
distro/__main__.py,sha256=bu9d3TifoKciZFcqRBuygV3GSuThnVD_m2IK4cz96Vs,64
distro/__pycache__/__init__.cpython-39.pyc,,
distro/__pycache__/__main__.cpython-39.pyc,,
distro/__pycache__/distro.cpython-39.pyc,,
distro/distro.py,sha256=XqbefacAhDT4zr_trnbA15eY8vdK4GTghgmvUGrEM_4,49430
distro/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
PK&@u\�û���distro-1.9.0.dist-info/METADATAnu�[���Metadata-Version: 2.1
Name: distro
Version: 1.9.0
Summary: Distro - an OS platform information API
Home-page: https://github.com/python-distro/distro
Author: Nir Cohen
Author-email: nir36g@gmail.com
License: Apache License, Version 2.0
Platform: All
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: System Administrators
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: POSIX :: Linux
Classifier: Operating System :: POSIX :: BSD
Classifier: Operating System :: POSIX :: BSD :: FreeBSD
Classifier: Operating System :: POSIX :: BSD :: NetBSD
Classifier: Operating System :: POSIX :: BSD :: OpenBSD
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: System :: Operating System
Requires-Python: >=3.6
Description-Content-Type: text/markdown
License-File: LICENSE

Distro - an OS platform information API
=======================================

[![CI Status](https://github.com/python-distro/distro/workflows/CI/badge.svg)](https://github.com/python-distro/distro/actions/workflows/ci.yaml)
[![PyPI version](http://img.shields.io/pypi/v/distro.svg)](https://pypi.python.org/pypi/distro)
[![Supported Python Versions](https://img.shields.io/pypi/pyversions/distro.svg)](https://img.shields.io/pypi/pyversions/distro.svg)
[![Code Coverage](https://codecov.io/github/python-distro/distro/coverage.svg?branch=master)](https://codecov.io/github/python-distro/distro?branch=master)
[![Is Wheel](https://img.shields.io/pypi/wheel/distro.svg?style=flat)](https://pypi.python.org/pypi/distro)
[![Latest Github Release](https://readthedocs.org/projects/distro/badge/?version=stable)](http://distro.readthedocs.io/en/latest/)

`distro` provides information about the
OS distribution it runs on, such as a reliable machine-readable ID or
version information.

It is the recommended replacement for Python's original
[`platform.linux_distribution`](https://docs.python.org/3.7/library/platform.html#platform.linux_distribution)
function (removed in Python 3.8). It also provides functionality that is not
necessarily Python-bound, such as a command-line interface.

Distro currently supports Linux and BSD-based systems, but [Windows and OS X support](https://github.com/python-distro/distro/issues/177) is also planned.

For Python 2.6 support, see https://github.com/python-distro/distro/tree/python2.6-support

## Installation

Installation of the latest released version from PyPI:

```shell
pip install distro
```

Installation of the latest development version:

```shell
pip install https://github.com/python-distro/distro/archive/master.tar.gz
```

To use as a standalone script, download `distro.py` directly:

```shell
curl -O https://raw.githubusercontent.com/python-distro/distro/master/src/distro/distro.py
python distro.py
```

``distro`` is safe to vendor within projects that do not wish to add
dependencies.

```shell
cd myproject
curl -O https://raw.githubusercontent.com/python-distro/distro/master/src/distro/distro.py
```

## Usage

```bash
$ distro
Name: Antergos Linux
Version: 2015.10 (ISO-Rolling)
Codename: ISO-Rolling

$ distro -j
{
    "codename": "ISO-Rolling",
    "id": "antergos",
    "like": "arch",
    "version": "16.9",
    "version_parts": {
        "build_number": "",
        "major": "16",
        "minor": "9"
    }
}


$ python
>>> import distro
>>> distro.name(pretty=True)
'CentOS Linux 8'
>>> distro.id()
'centos'
>>> distro.version(best=True)
'8.4.2105'
```


## Documentation

On top of the aforementioned API, several more functions are available. For a complete description of the
API, see the [latest API documentation](http://distro.readthedocs.org/en/latest/).

## Background

An alternative implementation became necessary because Python 3.5 deprecated
`platform.linux_distribution`, and Python 3.8 removed it altogether. Its
predecessor function
[`platform.dist`](https://docs.python.org/3.7/library/platform.html#platform.dist)
had already been deprecated since Python 2.6 and was likewise removed in
Python 3.8. Still, there
are many cases in which access to that information is needed. See [Python issue
1322](https://bugs.python.org/issue1322) for more information.

The `distro` package implements a robust and inclusive way of retrieving
information about a distribution based on new standards and old methods,
drawing on these data sources (from high to low precedence):

* The os-release file `/etc/os-release` if present, with a fall-back on `/usr/lib/os-release` if needed.
* The output of the `lsb_release` command, if available.
* The distro release file (`/etc/*(-|_)(release|version)`), if present.
* The `uname` command for BSD-based distributions.
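
The highest-precedence source, the os-release file, is a simple shell-compatible list of `KEY=value` assignments. As a minimal sketch of what reading it involves (the sample content below is illustrative; `distro` itself implements a much more complete parser with fallbacks):

```python
import shlex

def parse_os_release(text):
    """Parse os-release style KEY=value lines into a dict.

    Values may be shell-quoted; blank lines and comments are
    ignored, mirroring the format of /etc/os-release.
    """
    info = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # shlex.split strips surrounding single/double quotes
        parts = shlex.split(value)
        info[key] = parts[0] if parts else ""
    return info

sample = '''
NAME="CentOS Linux"
ID=centos
ID_LIKE="rhel fedora"
VERSION_ID="8"
PRETTY_NAME="CentOS Linux 8"
'''

print(parse_os_release(sample)["ID"])  # centos
```

In practice you would read `/etc/os-release` (falling back to `/usr/lib/os-release`) instead of a literal string; `distro` handles that lookup and the lower-precedence sources for you.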


## Python and Distribution Support

`distro` is supported and tested on Python 3.6+ and PyPy and on any
distribution that provides one or more of the data sources covered.

This package is tested with test data that mimics the exact behavior of the data sources of [a number of Linux distributions](https://github.com/python-distro/distro/tree/master/tests/resources/distros).


## Testing

```shell
git clone git@github.com:python-distro/distro.git
cd distro
pip install tox
tox
```


## Contributions

Pull requests are always welcome to deal with specific distributions or just
for general merriment.

See [CONTRIBUTIONS](https://github.com/python-distro/distro/blob/master/CONTRIBUTING.md) for contribution info.

Reference implementations for supporting additional distributions and file
formats can be found here:

* https://github.com/saltstack/salt/blob/develop/salt/grains/core.py#L1172
* https://github.com/chef/ohai/blob/master/lib/ohai/plugins/linux/platform.rb
* https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/facts/system/distribution.py
* https://github.com/puppetlabs/facter/blob/master/lib/src/facts/linux/os_linux.cc

## Package manager distributions

* https://src.fedoraproject.org/rpms/python-distro
* https://www.archlinux.org/packages/community/any/python-distro/
* https://launchpad.net/ubuntu/+source/python-distro
* https://packages.debian.org/stable/python3-distro
* https://packages.gentoo.org/packages/dev-python/distro
* https://pkgs.org/download/python3-distro
* https://slackbuilds.org/repository/14.2/python/python-distro/
PK(@u\ distro-1.9.0.dist-info/REQUESTEDnu�[���PK(@u\�f��$distro-1.9.0.dist-info/top_level.txtnu�[���distro
PK,@u\���..'distro-1.9.0.dist-info/entry_points.txtnu�[���[console_scripts]
distro = distro.distro:main
PK.@u\��� distro-1.9.0.dist-info/INSTALLERnu�[���pip
PK3@u\I��\\future-1.0.0.dist-info/WHEELnu�[���Wheel-Version: 1.0
Generator: bdist_wheel (0.41.2)
Root-Is-Purelib: true
Tag: py3-none-any

PK6@u\�E+��z�zfuture-1.0.0.dist-info/RECORDnu�[���../../../bin/futurize,sha256=sXCNiOHh_TxI0O2jpXervN8SaTGHsk3s7DWZVaVd7ps,215
../../../bin/pasteurize,sha256=Z6X36i6UI3tjKEgkWJ-dYlE1hFprHZZaMzOq0Pr6yu8,217
future-1.0.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
future-1.0.0.dist-info/LICENSE.txt,sha256=vroEEDw6n_-Si1nHfV--sDo-S9DXrD0eH4mUoh4w7js,1075
future-1.0.0.dist-info/METADATA,sha256=MNNaUKMRbZAmVYq5Am_-1NtYZ1ViECpSa8VsQjXbkT4,3959
future-1.0.0.dist-info/RECORD,,
future-1.0.0.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
future-1.0.0.dist-info/WHEEL,sha256=yQN5g4mg4AybRjkgi-9yy4iQEFibGQmlz78Pik5Or-A,92
future-1.0.0.dist-info/entry_points.txt,sha256=U9LtP60KSNXoj58mzV5TbotBF371gTWzrKrzJIH80Kw,88
future-1.0.0.dist-info/top_level.txt,sha256=DT0C3az2gb-uJaj-fs0h4WwHYlJVDp0EvLdud1y5Zyw,38
future/__init__.py,sha256=T9PNLu6ycmVtpETLxRurmufuRAaosICWmWdAExZb5a8,2938
future/__pycache__/__init__.cpython-39.pyc,,
future/backports/__init__.py,sha256=5QXvQ_jc5Xx6p4dSaHnZXPZazBEunKDKhbUjxZ0XD1I,530
future/backports/__pycache__/__init__.cpython-39.pyc,,
future/backports/__pycache__/_markupbase.cpython-39.pyc,,
future/backports/__pycache__/datetime.cpython-39.pyc,,
future/backports/__pycache__/misc.cpython-39.pyc,,
future/backports/__pycache__/socket.cpython-39.pyc,,
future/backports/__pycache__/socketserver.cpython-39.pyc,,
future/backports/__pycache__/total_ordering.cpython-39.pyc,,
future/backports/_markupbase.py,sha256=MDPTCykLq4J7Aea3PvYotATEE0CG4R_SjlxfJaLXTJM,16215
future/backports/datetime.py,sha256=jITCStolfadhCEhejFd99wCo59mBDF0Ruj8l7QcG7Ms,75553
future/backports/email/__init__.py,sha256=eH3AJr3FkuBy_D6yS1V2K76Q2CQ93y2zmAMWmn8FbHI,2269
future/backports/email/__pycache__/__init__.cpython-39.pyc,,
future/backports/email/__pycache__/_encoded_words.cpython-39.pyc,,
future/backports/email/__pycache__/_header_value_parser.cpython-39.pyc,,
future/backports/email/__pycache__/_parseaddr.cpython-39.pyc,,
future/backports/email/__pycache__/_policybase.cpython-39.pyc,,
future/backports/email/__pycache__/base64mime.cpython-39.pyc,,
future/backports/email/__pycache__/charset.cpython-39.pyc,,
future/backports/email/__pycache__/encoders.cpython-39.pyc,,
future/backports/email/__pycache__/errors.cpython-39.pyc,,
future/backports/email/__pycache__/feedparser.cpython-39.pyc,,
future/backports/email/__pycache__/generator.cpython-39.pyc,,
future/backports/email/__pycache__/header.cpython-39.pyc,,
future/backports/email/__pycache__/headerregistry.cpython-39.pyc,,
future/backports/email/__pycache__/iterators.cpython-39.pyc,,
future/backports/email/__pycache__/message.cpython-39.pyc,,
future/backports/email/__pycache__/parser.cpython-39.pyc,,
future/backports/email/__pycache__/policy.cpython-39.pyc,,
future/backports/email/__pycache__/quoprimime.cpython-39.pyc,,
future/backports/email/__pycache__/utils.cpython-39.pyc,,
future/backports/email/_encoded_words.py,sha256=m1vTRfxAQdg4VyWO7PF-1ih1mmq97V-BPyHHkuEwSME,8443
future/backports/email/_header_value_parser.py,sha256=GmSdr5PpG3xzedMiElSJOsQ6IwE3Tl5SNwp4m6ZT4aE,104692
future/backports/email/_parseaddr.py,sha256=KewEnos0YDM-SYX503z7E1MmVbG5VRaKjxjcl0Ipjbs,17389
future/backports/email/_policybase.py,sha256=2lJD9xouiz4uHvWGQ6j1nwlwWVQGwwzpy5JZoeQqhUc,14647
future/backports/email/base64mime.py,sha256=gXZFxh66jk6D2UqAmjRbmmyhOXbGUWZmFcdVOIolaYE,3761
future/backports/email/charset.py,sha256=CfE4iV2zAq6MQC0CHXHLnwTNW71zmhNITbzOcfxE4vY,17439
future/backports/email/encoders.py,sha256=Nn4Pcx1rOdRgoSIzB6T5RWHl5zxClbf32wgE6D0tUt8,2800
future/backports/email/errors.py,sha256=tRX8PP5g7mk2bAxL1jTCYrbfhD2gPZFNrh4_GJRM8OQ,3680
future/backports/email/feedparser.py,sha256=bvmhb4cdY-ipextPK2K2sDgMsNvTspmuQfYyCxc4zSc,22736
future/backports/email/generator.py,sha256=lpaLhZHneguvZ2QgRu7Figkjb7zmY28AGhj9iZTdI7s,19520
future/backports/email/header.py,sha256=uBHbNKO-yx5I9KBflernJpyy3fX4gImCB1QE7ICApLs,24448
future/backports/email/headerregistry.py,sha256=ZPbvLKXD0NMLSU4jXlVHfGyGcLMrFm-GQVURu_XHj88,20637
future/backports/email/iterators.py,sha256=kMRYFGy3SVVpo7HG7JJr2ZAlOoaX6CVPzKYwDSvLfV0,2348
future/backports/email/message.py,sha256=I6WW5cZDza7uwLOGJSvsDhGZC9K_Q570Lk2gt_vDUXM,35237
future/backports/email/mime/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
future/backports/email/mime/__pycache__/__init__.cpython-39.pyc,,
future/backports/email/mime/__pycache__/application.cpython-39.pyc,,
future/backports/email/mime/__pycache__/audio.cpython-39.pyc,,
future/backports/email/mime/__pycache__/base.cpython-39.pyc,,
future/backports/email/mime/__pycache__/image.cpython-39.pyc,,
future/backports/email/mime/__pycache__/message.cpython-39.pyc,,
future/backports/email/mime/__pycache__/multipart.cpython-39.pyc,,
future/backports/email/mime/__pycache__/nonmultipart.cpython-39.pyc,,
future/backports/email/mime/__pycache__/text.cpython-39.pyc,,
future/backports/email/mime/application.py,sha256=m-5a4mSxu2E32XAImnp9x9eMVX5Vme2iNgn2dMMNyss,1401
future/backports/email/mime/audio.py,sha256=2ognalFRadcsUYQYMUZbjv5i1xJbFhQN643doMuI7M4,2815
future/backports/email/mime/base.py,sha256=wV3ClQyMsOqmkXSXbk_wd_zPoPTvBx8kAIzq3rdM4lE,875
future/backports/email/mime/image.py,sha256=DpQk1sB-IMmO43AF4uadsXyf_y5TdEzJLfyhqR48bIw,1907
future/backports/email/mime/message.py,sha256=pFsMhXW07aRjsLq1peO847PApWFAl28-Z2Z7BP1Dn74,1429
future/backports/email/mime/multipart.py,sha256=j4Lf_sJmuwTbfgdQ6R35_t1_ha2DynJBJDvpjwbNObE,1699
future/backports/email/mime/nonmultipart.py,sha256=Ciba1Z8d2yLDDpxgDJuk3Bb-TqcpE9HCd8KfbW5vgl4,832
future/backports/email/mime/text.py,sha256=zV98BjoR4S_nX8c47x43LnsnifeGhIfNGwSAh575bs0,1552
future/backports/email/parser.py,sha256=NpTjmvjv6YDH6eImMJEYiIn_K7qe9-pPz3DmzTdMZUU,5310
future/backports/email/policy.py,sha256=gpcbhVRXuCohkK6MUqopTs1lv4E4-ZVUO6OVncoGEJE,8823
future/backports/email/quoprimime.py,sha256=w93W5XgdFpyGaDqDBJrnXF_v_npH5r20WuAxmrAzyQg,10923
future/backports/email/utils.py,sha256=vpfN0E8UjNbNw-2NFBQGCo4TNgrghMsqzpEYW5C_fBs,14270
future/backports/html/__init__.py,sha256=FKwqFtWMCoGNkhU97OPnR1fZSh6etAKfN1FU1KvXcV8,924
future/backports/html/__pycache__/__init__.cpython-39.pyc,,
future/backports/html/__pycache__/entities.cpython-39.pyc,,
future/backports/html/__pycache__/parser.cpython-39.pyc,,
future/backports/html/entities.py,sha256=kzoRnQyGk_3DgoucHLhL5QL1pglK9nvmxhPIGZFDTnc,75428
future/backports/html/parser.py,sha256=G2tUObvbHSotNt06JLY-BP1swaZNfDYFd_ENWDjPmRg,19770
future/backports/http/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
future/backports/http/__pycache__/__init__.cpython-39.pyc,,
future/backports/http/__pycache__/client.cpython-39.pyc,,
future/backports/http/__pycache__/cookiejar.cpython-39.pyc,,
future/backports/http/__pycache__/cookies.cpython-39.pyc,,
future/backports/http/__pycache__/server.cpython-39.pyc,,
future/backports/http/client.py,sha256=76EbhEZOtvdHFcU-jrjivoff13oQ9IMbdkZEdf5kQzQ,47602
future/backports/http/cookiejar.py,sha256=jqb27uvv8wB2mJm6kF9aC0w7B03nO6rzQ0_CF35yArg,76608
future/backports/http/cookies.py,sha256=DsyDUGDEbCXAA9Jq6suswSc76uSZqUu39adDDNj8XGw,21581
future/backports/http/server.py,sha256=1CaMxgzHf9lYhmTJyE7topgjRIlIn9cnjgw8YEvwJV4,45523
future/backports/misc.py,sha256=EGnCVRmU-_7xrzss1rqqCqwqlQVywaPAxxLogBeNpw4,33063
future/backports/socket.py,sha256=DH1V6IjKPpJ0tln8bYvxvQ7qnvZG-UoQtMA5yVleHiU,15663
future/backports/socketserver.py,sha256=Twvyk5FqVnOeiNcbVsyMDPTF1mNlkKfyofG7tKxTdD8,24286
future/backports/test/__init__.py,sha256=9dXxIZnkI095YfHC-XIaVF6d31GjeY1Ag8TEzcFgepM,264
future/backports/test/__pycache__/__init__.cpython-39.pyc,,
future/backports/test/__pycache__/pystone.cpython-39.pyc,,
future/backports/test/__pycache__/ssl_servers.cpython-39.pyc,,
future/backports/test/__pycache__/support.cpython-39.pyc,,
future/backports/test/badcert.pem,sha256=JioQeRZkHH8hGsWJjAF3U1zQvcWqhyzG6IOEJpTY9SE,1928
future/backports/test/badkey.pem,sha256=gaBK9px_gG7DmrLKxfD6f6i-toAmARBTVfs-YGFRQF0,2162
future/backports/test/dh512.pem,sha256=dUTsjtLbK-femrorUrTGF8qvLjhTiT_n4Uo5V6u__Gs,402
future/backports/test/https_svn_python_org_root.pem,sha256=wOB3Onnc62Iu9kEFd8GcHhd_suucYjpJNA3jyfHeJWA,2569
future/backports/test/keycert.passwd.pem,sha256=ZBfnVLpbBtAOf_2gCdiQ-yrBHmRsNzSf8VC3UpQZIjg,1830
future/backports/test/keycert.pem,sha256=xPXi5idPcQVbrhgxBqF2TNGm6sSZ2aLVVEt6DWzplL8,1783
future/backports/test/keycert2.pem,sha256=DB46FEAYv8BWwQJ-5RzC696FxPN7CON-Qsi-R4poJgc,1795
future/backports/test/nokia.pem,sha256=s00x0uPDSaa5DHJ_CwzlVhg3OVdJ47f4zgqQdd0SAfQ,1923
future/backports/test/nullbytecert.pem,sha256=NFRYWhmP_qT3jGfVjR6-iaC-EQdhIFjiXtTLN5ZPKnE,5435
future/backports/test/nullcert.pem,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
future/backports/test/pystone.py,sha256=fvyoJ_tVovTNaxbJmdJMwr9F6SngY-U4ibULnd_wUqA,7427
future/backports/test/sha256.pem,sha256=3wB-GQqEc7jq-PYwYAQaPbtTvvr7stk_DVmZxFgehfA,8344
future/backports/test/ssl_cert.pem,sha256=M607jJNeIeHG9BlTf_jaQkPJI4nOxSJPn-zmEAaW43M,867
future/backports/test/ssl_key.passwd.pem,sha256=I_WH4sBw9Vs9Z-BvmuXY0aw8tx8avv6rm5UL4S_pP00,963
future/backports/test/ssl_key.pem,sha256=VKGU-R3UYaZpVTXl7chWl4vEYEDeob69SfvRTQ8aq_4,916
future/backports/test/ssl_servers.py,sha256=-pd7HMZljuZfFRAbCAiAP_2G04orITJFj-S9ddr6o84,7209
future/backports/test/support.py,sha256=oTQ09QrLcbmFZXhMGqPz3VrYZddgxpJGEJPQhwfiG2k,69620
future/backports/total_ordering.py,sha256=O3M57_IisQ-zW5hW20uxkfk4fTGsr0EF2tAKx3BksQo,1929
future/backports/urllib/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
future/backports/urllib/__pycache__/__init__.cpython-39.pyc,,
future/backports/urllib/__pycache__/error.cpython-39.pyc,,
future/backports/urllib/__pycache__/parse.cpython-39.pyc,,
future/backports/urllib/__pycache__/request.cpython-39.pyc,,
future/backports/urllib/__pycache__/response.cpython-39.pyc,,
future/backports/urllib/__pycache__/robotparser.cpython-39.pyc,,
future/backports/urllib/error.py,sha256=ktikuK9ag4lS4f8Z0k5p1F11qF40N2AiOtjbXiF97ew,2715
future/backports/urllib/parse.py,sha256=67avrYqV1UK7i_22goRUrvJ8SffzjRdTja9wzq_ynXY,35792
future/backports/urllib/request.py,sha256=aR9ZMzfhV1C2Qk3wFsGvkwxqtdPTdsJVGRt5DUCwgJ8,96276
future/backports/urllib/response.py,sha256=ooQyswwbb-9N6IVi1Kwjss1aR-Kvm8ZNezoyVEonp8c,3180
future/backports/urllib/robotparser.py,sha256=pnAGTbKhdbCq_9yMZp7m8hj5q_NJpyQX6oQIZuYcnkw,6865
future/backports/xmlrpc/__init__.py,sha256=h61ciVTdVvu8oEUXv4dHf_Tc5XUXDH3RKB1-8fQhSsg,38
future/backports/xmlrpc/__pycache__/__init__.cpython-39.pyc,,
future/backports/xmlrpc/__pycache__/client.cpython-39.pyc,,
future/backports/xmlrpc/__pycache__/server.cpython-39.pyc,,
future/backports/xmlrpc/client.py,sha256=TIHPztKRlrStphmO_PfYOQxsy2xugzWKz77STC1OZ1U,48175
future/backports/xmlrpc/server.py,sha256=W_RW5hgYbNV2LGbnvngzm7akacRdK-XFY-Cy2HL-qsY,37285
future/builtins/__init__.py,sha256=rDAHzkhfXHSaye72FgzQz-HPN3yYBu-VXSs5PUJGA6o,1688
future/builtins/__pycache__/__init__.cpython-39.pyc,,
future/builtins/__pycache__/disabled.cpython-39.pyc,,
future/builtins/__pycache__/iterators.cpython-39.pyc,,
future/builtins/__pycache__/misc.cpython-39.pyc,,
future/builtins/__pycache__/new_min_max.cpython-39.pyc,,
future/builtins/__pycache__/newnext.cpython-39.pyc,,
future/builtins/__pycache__/newround.cpython-39.pyc,,
future/builtins/__pycache__/newsuper.cpython-39.pyc,,
future/builtins/disabled.py,sha256=Ysq74bsmwntpq7dzkwTAD7IHKrkXy66vJlPshVwgVBI,2109
future/builtins/iterators.py,sha256=l1Zawm2x82oqOuGGtCZRE76Ej98sMlHQwu9fZLK5RrA,1396
future/builtins/misc.py,sha256=hctlKKWUyN0Eoodxg4ySQHEqARTukOLR4L5K5c6PW9k,4550
future/builtins/new_min_max.py,sha256=7qQ4iiG4GDgRzjPzzzmg9pdby35Mtt6xNOOsyqHnIGY,1757
future/builtins/newnext.py,sha256=oxXB8baXqJv29YG40aCS9UXk9zObyoOjya8BJ7NdBJM,2009
future/builtins/newround.py,sha256=7YTWjBgfIAvSEl7hLCWgemhjqdKtzohbO18yMErKz4E,3190
future/builtins/newsuper.py,sha256=3Ygqq-8l3wh9gNvGbW5nAiTYT5WxxgSKN6RhNj7qi74,3849
future/moves/__init__.py,sha256=MsAW69Xp_fqUo4xODufcKM6AZf-ozHaz44WPZdsDFJA,220
future/moves/__pycache__/__init__.cpython-39.pyc,,
future/moves/__pycache__/_dummy_thread.cpython-39.pyc,,
future/moves/__pycache__/_markupbase.cpython-39.pyc,,
future/moves/__pycache__/_thread.cpython-39.pyc,,
future/moves/__pycache__/builtins.cpython-39.pyc,,
future/moves/__pycache__/collections.cpython-39.pyc,,
future/moves/__pycache__/configparser.cpython-39.pyc,,
future/moves/__pycache__/copyreg.cpython-39.pyc,,
future/moves/__pycache__/itertools.cpython-39.pyc,,
future/moves/__pycache__/multiprocessing.cpython-39.pyc,,
future/moves/__pycache__/pickle.cpython-39.pyc,,
future/moves/__pycache__/queue.cpython-39.pyc,,
future/moves/__pycache__/reprlib.cpython-39.pyc,,
future/moves/__pycache__/socketserver.cpython-39.pyc,,
future/moves/__pycache__/subprocess.cpython-39.pyc,,
future/moves/__pycache__/sys.cpython-39.pyc,,
future/moves/__pycache__/winreg.cpython-39.pyc,,
future/moves/_dummy_thread.py,sha256=ULUtLk1Luw9I1h-YPitnU3gqCbvNPoKC28N_Bk8jkR8,348
future/moves/_markupbase.py,sha256=W9wh_Gu3jDAMIhVBV1ZnCkJwYLHRk_v_su_HLALBkZQ,171
future/moves/_thread.py,sha256=rwY7L4BZMFPlrp_i6T2Un4_iKYwnrXJ-yV6FJZN8YDo,163
future/moves/builtins.py,sha256=4sjjKiylecJeL9da_RaBZjdymX2jtMs84oA9lCqb4Ug,281
future/moves/collections.py,sha256=OKQ-TfUgms_2bnZRn4hrclLDoiN2i-HSWcjs3BC2iY8,417
future/moves/configparser.py,sha256=TNy226uCbljjU-DjAVo7j7Effbj5zxXvDh0SdXehbzk,146
future/moves/copyreg.py,sha256=Y3UjLXIMSOxZggXtvZucE9yv4tkKZtVan45z8eix4sU,438
future/moves/dbm/__init__.py,sha256=_VkvQHC2UcIgZFPRroiX_P0Fs7HNqS_69flR0-oq2B8,488
future/moves/dbm/__pycache__/__init__.cpython-39.pyc,,
future/moves/dbm/__pycache__/dumb.cpython-39.pyc,,
future/moves/dbm/__pycache__/gnu.cpython-39.pyc,,
future/moves/dbm/__pycache__/ndbm.cpython-39.pyc,,
future/moves/dbm/dumb.py,sha256=HKdjjtO3EyP9EKi1Hgxh_eUU6yCQ0fBX9NN3n-zb8JE,166
future/moves/dbm/gnu.py,sha256=XoCSEpZ2QaOgo2h1m80GW7NUgj_b93BKtbcuwgtnaKo,162
future/moves/dbm/ndbm.py,sha256=OFnreyo_1YHDBl5YUm9gCzKlN1MHgWbfSQAZVls2jaM,162
future/moves/html/__init__.py,sha256=BSUFSHxXf2kGvHozlnrB1nn6bPE6p4PpN3DwA_Z5geo,1016
future/moves/html/__pycache__/__init__.cpython-39.pyc,,
future/moves/html/__pycache__/entities.cpython-39.pyc,,
future/moves/html/__pycache__/parser.cpython-39.pyc,,
future/moves/html/entities.py,sha256=lVvchdjK_RzRj759eg4RMvGWHfgBbj0tKGOoZ8dbRyY,177
future/moves/html/parser.py,sha256=V2XpHLKLCxQum3N9xlO3IUccAD7BIykZMqdEcWET3vY,167
future/moves/http/__init__.py,sha256=Mx1v_Tcks4udHCtDM8q2xnYUiQ01gD7EpPyeQwsP3-Q,71
future/moves/http/__pycache__/__init__.cpython-39.pyc,,
future/moves/http/__pycache__/client.cpython-39.pyc,,
future/moves/http/__pycache__/cookiejar.cpython-39.pyc,,
future/moves/http/__pycache__/cookies.cpython-39.pyc,,
future/moves/http/__pycache__/server.cpython-39.pyc,,
future/moves/http/client.py,sha256=hqEBq7GDXZidd1AscKnSyjSoMcuj8rERqGTmD7VheDQ,165
future/moves/http/cookiejar.py,sha256=Frr9ZZCg-145ymy0VGpiPJhvBEpJtVqRBYPaKhgT1Z4,173
future/moves/http/cookies.py,sha256=PPrHa1_oDbu3D_BhJGc6PvMgY1KoxyYq1jqeJwEcMvE,233
future/moves/http/server.py,sha256=8YQlSCShjAsB5rr5foVvZgp3IzwYFvTmGZCHhBSDtaI,606
future/moves/itertools.py,sha256=PVxFHRlBQl9ElS0cuGFPcUtj53eHX7Z1DmggzGfgQ6c,158
future/moves/multiprocessing.py,sha256=4L37igVf2NwBhXqmCHRA3slZ7lJeiQLzZdrGSGOOZ08,191
future/moves/pickle.py,sha256=r8j9skzfE8ZCeHyh_OB-WucOkRTIHN7zpRM7l7V3qS4,229
future/moves/queue.py,sha256=uxvLCChF-zxWWgrY1a_wxt8rp2jILdwO4PrnkBW6VTE,160
future/moves/reprlib.py,sha256=Nt5sUgMQ3jeVIukqSHOvB0UIsl6Y5t-mmT_13mpZmiY,161
future/moves/socketserver.py,sha256=v8ZLurDxHOgsubYm1iefjlpnnJQcx2VuRUGt9FCJB9k,174
future/moves/subprocess.py,sha256=oqRSMfFZkxM4MXkt3oD5N6eBwmmJ6rQ9KPhvSQKT_hM,251
future/moves/sys.py,sha256=HOMRX4Loim75FMbWawd3oEwuGNJR-ClMREEFkVpBsRs,132
future/moves/test/__init__.py,sha256=yB9F-fDQpzu1v8cBoKgIrL2ScUNqjlkqEztYrGVCQ-0,110
future/moves/test/__pycache__/__init__.cpython-39.pyc,,
future/moves/test/__pycache__/support.cpython-39.pyc,,
future/moves/test/support.py,sha256=TG5h0FVGwyJGtKQEXMhWtD4G9WZagHrMI_CeL9NlZYc,484
future/moves/tkinter/__init__.py,sha256=jV9vDx3wRl0bsoclU8oSe-5SqHQ3YpCbStmqtXnq1p4,620
future/moves/tkinter/__pycache__/__init__.cpython-39.pyc,,
future/moves/tkinter/__pycache__/colorchooser.cpython-39.pyc,,
future/moves/tkinter/__pycache__/commondialog.cpython-39.pyc,,
future/moves/tkinter/__pycache__/constants.cpython-39.pyc,,
future/moves/tkinter/__pycache__/dialog.cpython-39.pyc,,
future/moves/tkinter/__pycache__/dnd.cpython-39.pyc,,
future/moves/tkinter/__pycache__/filedialog.cpython-39.pyc,,
future/moves/tkinter/__pycache__/font.cpython-39.pyc,,
future/moves/tkinter/__pycache__/messagebox.cpython-39.pyc,,
future/moves/tkinter/__pycache__/scrolledtext.cpython-39.pyc,,
future/moves/tkinter/__pycache__/simpledialog.cpython-39.pyc,,
future/moves/tkinter/__pycache__/tix.cpython-39.pyc,,
future/moves/tkinter/__pycache__/ttk.cpython-39.pyc,,
future/moves/tkinter/colorchooser.py,sha256=kprlmpRtvDbW5Gq43H1mi2KmNJ2kuzLQOba0a5EwDkU,333
future/moves/tkinter/commondialog.py,sha256=mdUbq1IZqOGaSA7_8R367IukDCsMfzXiVHrTQQpp7Z0,333
future/moves/tkinter/constants.py,sha256=0qRUrZLRPdVxueABL9KTzzEWEsk6xM1rOjxK6OHxXtA,324
future/moves/tkinter/dialog.py,sha256=ksp-zvs-_A90P9RNHS8S27f1k8f48zG2Bel2jwZV5y0,311
future/moves/tkinter/dnd.py,sha256=C_Ah0Urnyf2XKE5u-oP6mWi16RzMSXgMA1uhBSAwKY8,306
future/moves/tkinter/filedialog.py,sha256=yNr30k-hDY1aMJHNsKqRqHqOOlzYKCubfQ3HjY1ZlrE,534
future/moves/tkinter/font.py,sha256=TXarflhJRxqepaRNSDw6JFUVGz5P1T1C4_uF9VRqj3w,309
future/moves/tkinter/messagebox.py,sha256=WJt4t83kLmr_UnpCWFuLoyazZr3wAUOEl6ADn3osoEA,327
future/moves/tkinter/scrolledtext.py,sha256=DRzN8aBAlDBUo1B2KDHzdpRSzXBfH4rOOz0iuHXbQcg,329
future/moves/tkinter/simpledialog.py,sha256=6MhuVhZCJV4XfPpPSUWKlDLLGEi0Y2ZlGQ9TbsmJFL0,329
future/moves/tkinter/tix.py,sha256=aNeOfbWSGmcN69UmEGf4tJ-QIxLT6SU5ynzm1iWgepA,302
future/moves/tkinter/ttk.py,sha256=rRrJpDjcP2gjQNukECu4F026P-CkW-3Ca2tN6Oia-Fw,302
future/moves/urllib/__init__.py,sha256=yB9F-fDQpzu1v8cBoKgIrL2ScUNqjlkqEztYrGVCQ-0,110
future/moves/urllib/__pycache__/__init__.cpython-39.pyc,,
future/moves/urllib/__pycache__/error.cpython-39.pyc,,
future/moves/urllib/__pycache__/parse.cpython-39.pyc,,
future/moves/urllib/__pycache__/request.cpython-39.pyc,,
future/moves/urllib/__pycache__/response.cpython-39.pyc,,
future/moves/urllib/__pycache__/robotparser.cpython-39.pyc,,
future/moves/urllib/error.py,sha256=gfrKzv-6W5OjzNIfjvJaQkxABRLym2KwjfKFXSdDB60,479
future/moves/urllib/parse.py,sha256=xLLUMIIB5MreCdYzRZ5zIRWrhTRCoMO8RZEH4WPFQDY,1045
future/moves/urllib/request.py,sha256=ttIzq60PwjRyrLQUGdAtfYvs4fziVwvcLe2Kw-hvE0g,3496
future/moves/urllib/response.py,sha256=ZEZML0FpbB--GIeBFPvSzbtlVJ6EsR4tCI4qB7D8sFQ,342
future/moves/urllib/robotparser.py,sha256=j24p6dMNzUpGZtT3BQxwRoE-F88iWmBpKgu0tRV61FQ,179
future/moves/winreg.py,sha256=2zNAG59QI7vFlCj7kqDh0JrAYTpexOnI55PEAIjYhqo,163
future/moves/xmlrpc/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
future/moves/xmlrpc/__pycache__/__init__.cpython-39.pyc,,
future/moves/xmlrpc/__pycache__/client.cpython-39.pyc,,
future/moves/xmlrpc/__pycache__/server.cpython-39.pyc,,
future/moves/xmlrpc/client.py,sha256=2PfnL5IbKVwdKP7C8B1OUviEtuBObwoH4pAPfvHIvQc,143
future/moves/xmlrpc/server.py,sha256=ESDXdpUgTKyeFmCDSnJmBp8zONjJklsRJOvy4OtaALc,143
future/standard_library/__init__.py,sha256=Nwbaqikyeh77wSiro-BHNjSsCmSmuLGAe91d4c5q_QE,28065
future/standard_library/__pycache__/__init__.cpython-39.pyc,,
future/tests/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
future/tests/__pycache__/__init__.cpython-39.pyc,,
future/tests/__pycache__/base.cpython-39.pyc,,
future/tests/base.py,sha256=7LTAKHJgUxOwmffD1kgcErVt2VouKcldPnq4iruqk_k,19956
future/types/__init__.py,sha256=5fBxWqf_OTQ8jZ7k2TS34rFH14togeR488F4zBHIQ-s,6831
future/types/__pycache__/__init__.cpython-39.pyc,,
future/types/__pycache__/newbytes.cpython-39.pyc,,
future/types/__pycache__/newdict.cpython-39.pyc,,
future/types/__pycache__/newint.cpython-39.pyc,,
future/types/__pycache__/newlist.cpython-39.pyc,,
future/types/__pycache__/newmemoryview.cpython-39.pyc,,
future/types/__pycache__/newobject.cpython-39.pyc,,
future/types/__pycache__/newopen.cpython-39.pyc,,
future/types/__pycache__/newrange.cpython-39.pyc,,
future/types/__pycache__/newstr.cpython-39.pyc,,
future/types/newbytes.py,sha256=D_kNDD9sbNJir2cUxxePiAuw2OW5irxVnu55uHmuK9E,16303
future/types/newdict.py,sha256=go-Lbl2MRWZJJRlwTAUlJNJRkg986YYeV0jCqEUEFNc,2011
future/types/newint.py,sha256=HH90HS2Y1ApS02LDpKzqt9V1Lwtp6tktMIYjavZUIh8,13406
future/types/newlist.py,sha256=-H5-fXodd-UQgTFnZBJdwE68CrgIL_jViYdv4w7q2rU,2284
future/types/newmemoryview.py,sha256=LnARgiKqQ2zLwwDZ3owu1atoonPQkOneWMfxJCwB4_o,712
future/types/newobject.py,sha256=AX_n8GwlDR2IY-xIwZCvu0Olj_Ca2aS57nlTihnFr-I,3358
future/types/newopen.py,sha256=lcRNHWZ1UjEn_0_XKis1ZA5U6l-4c-CHlC0WX1sY4NI,810
future/types/newrange.py,sha256=fcCL1imqqH-lqWsY9Lnml9d-WbJOtXrayAUPoUbM7Ck,5296
future/types/newstr.py,sha256=e0brkurI0IK--4ToQEO4Cz1FECZav4CyUGMKxlrcmK4,15758
future/utils/__init__.py,sha256=Er_tUl6bS4xp7_M1Z3hZrgM9hAGrRUvCAdcHDRgSOdE,21960
future/utils/__pycache__/__init__.cpython-39.pyc,,
future/utils/__pycache__/surrogateescape.cpython-39.pyc,,
future/utils/surrogateescape.py,sha256=7u4V4XlW83P5YSAJS2f92YUF8vsWthsiTnmAshOJL_M,6097
libfuturize/__init__.py,sha256=CZA_KgvTQOPAY1_MrlJeQ6eMh2Eei4_KIv4JuyAkpfw,31
libfuturize/__pycache__/__init__.cpython-39.pyc,,
libfuturize/__pycache__/fixer_util.cpython-39.pyc,,
libfuturize/__pycache__/main.cpython-39.pyc,,
libfuturize/fixer_util.py,sha256=hOmX8XLnicGJ6RGwlUxslhuhzhPc0cZimlylFQAeDOo,17357
libfuturize/fixes/__init__.py,sha256=5KEpUnjVsFCCsr_-zrikvJbLf9zslEJnFTH_5pBc33I,5236
libfuturize/fixes/__pycache__/__init__.cpython-39.pyc,,
libfuturize/fixes/__pycache__/fix_UserDict.cpython-39.pyc,,
libfuturize/fixes/__pycache__/fix_absolute_import.cpython-39.pyc,,
libfuturize/fixes/__pycache__/fix_add__future__imports_except_unicode_literals.cpython-39.pyc,,
libfuturize/fixes/__pycache__/fix_basestring.cpython-39.pyc,,
libfuturize/fixes/__pycache__/fix_bytes.cpython-39.pyc,,
libfuturize/fixes/__pycache__/fix_cmp.cpython-39.pyc,,
libfuturize/fixes/__pycache__/fix_division.cpython-39.pyc,,
libfuturize/fixes/__pycache__/fix_division_safe.cpython-39.pyc,,
libfuturize/fixes/__pycache__/fix_execfile.cpython-39.pyc,,
libfuturize/fixes/__pycache__/fix_future_builtins.cpython-39.pyc,,
libfuturize/fixes/__pycache__/fix_future_standard_library.cpython-39.pyc,,
libfuturize/fixes/__pycache__/fix_future_standard_library_urllib.cpython-39.pyc,,
libfuturize/fixes/__pycache__/fix_input.cpython-39.pyc,,
libfuturize/fixes/__pycache__/fix_metaclass.cpython-39.pyc,,
libfuturize/fixes/__pycache__/fix_next_call.cpython-39.pyc,,
libfuturize/fixes/__pycache__/fix_object.cpython-39.pyc,,
libfuturize/fixes/__pycache__/fix_oldstr_wrap.cpython-39.pyc,,
libfuturize/fixes/__pycache__/fix_order___future__imports.cpython-39.pyc,,
libfuturize/fixes/__pycache__/fix_print.cpython-39.pyc,,
libfuturize/fixes/__pycache__/fix_print_with_import.cpython-39.pyc,,
libfuturize/fixes/__pycache__/fix_raise.cpython-39.pyc,,
libfuturize/fixes/__pycache__/fix_remove_old__future__imports.cpython-39.pyc,,
libfuturize/fixes/__pycache__/fix_unicode_keep_u.cpython-39.pyc,,
libfuturize/fixes/__pycache__/fix_unicode_literals_import.cpython-39.pyc,,
libfuturize/fixes/__pycache__/fix_xrange_with_import.cpython-39.pyc,,
libfuturize/fixes/fix_UserDict.py,sha256=jL4jXnGaUQTkG8RKfGXbU_HVTkB3MWZMQwUkqMAjB6I,3840
libfuturize/fixes/fix_absolute_import.py,sha256=vkrF2FyQR5lSz2WmdqywzkEJVTC0eq4gh_REWBKHh7w,3140
libfuturize/fixes/fix_add__future__imports_except_unicode_literals.py,sha256=Fr219VAzR8KWXc2_bfiqLl10EgxAWjL6cI3Mowt--VU,662
libfuturize/fixes/fix_basestring.py,sha256=bHkKuMzhr5FMXwjXlMOjsod4S3rQkVdbzhoWV4-tl3Y,394
libfuturize/fixes/fix_bytes.py,sha256=AhzOJes6EnPwgzboDjvURANbWKqciG6ZGaYW07PYQK8,685
libfuturize/fixes/fix_cmp.py,sha256=Blq_Z0IGkYiKS83QzZ5wUgpJyZfQiZoEsWJ1VPyXgFY,701
libfuturize/fixes/fix_division.py,sha256=gnrAi7stquiVUyi_De1H8q--43iQaSUX0CjnOmQ6O2w,228
libfuturize/fixes/fix_division_safe.py,sha256=oz407p0Woc2EKw7jZHUL4CpDs81FFpekRum58NKsNp4,3631
libfuturize/fixes/fix_execfile.py,sha256=I5AcJ6vPZ7i70TChaq9inxqnZ4C04-yJyfAItGa8E3c,921
libfuturize/fixes/fix_future_builtins.py,sha256=QBCRpD9XA7tbtfP4wmOF2DXquB4lq-eupkQj-QAxp0s,2027
libfuturize/fixes/fix_future_standard_library.py,sha256=FVtflFt38efHe_SEX6k3m6IYAtKWjA4rAPZrlCv6yA0,733
libfuturize/fixes/fix_future_standard_library_urllib.py,sha256=Rf81XcAXA-vwNvrhskf5sLExbR--Wkr5fiUcMYGAKzs,1001
libfuturize/fixes/fix_input.py,sha256=bhaPNtMrZNbjWIDQCR7Iue5BxBj4rf0RJQ9_jiwvb-s,687
libfuturize/fixes/fix_metaclass.py,sha256=_CS1NDXYM-Mh6xpogLK_GtYx3rUUptu1-Z0Rx3lC9eQ,9570
libfuturize/fixes/fix_next_call.py,sha256=01STG86Av9o5QcpQDJ6UbPhvxt9kKrkatiPeddXRgvA,3158
libfuturize/fixes/fix_object.py,sha256=qalFIjn0VTWXG5sGOOoCvO65omjX5_9d40SUpwUjBdw,407
libfuturize/fixes/fix_oldstr_wrap.py,sha256=UCR6Q2l-pVqJSrRTnQAWMlaqBoX7oX1VpG_w6Q0XcyY,1214
libfuturize/fixes/fix_order___future__imports.py,sha256=ACUCw5NEGWvj6XA9rNj8BYha3ktxLvkM5Ssh5cyV644,829
libfuturize/fixes/fix_print.py,sha256=nbJdv5DbxtWzJIRIQ0tr7FfGkMkHScJTLzvpxv_hSNw,3881
libfuturize/fixes/fix_print_with_import.py,sha256=hVWn70Q1DPMUiHMyEqgUx-6sM1AylLj78v9pMc4LFw8,735
libfuturize/fixes/fix_raise.py,sha256=CkjqiSvHHD-enaLxYMkH-Nsi92NGShFLWd3fG-exmI4,3904
libfuturize/fixes/fix_remove_old__future__imports.py,sha256=j4EC1KEVgXhuQAqhYHnAruUjW6uczPjV_fTCSOLMuAw,851
libfuturize/fixes/fix_unicode_keep_u.py,sha256=M8fcFxHeFnWVOKoQRpkMsnpd9qmUFubI2oFhO4ZPk7A,779
libfuturize/fixes/fix_unicode_literals_import.py,sha256=wq-hb-9Yx3Az4ol-ylXZJPEDZ81EaPZeIy5VvpA0CEY,367
libfuturize/fixes/fix_xrange_with_import.py,sha256=f074qStjMz3OtLjt1bKKZSxQnRbbb7HzEbqHt9wgqdw,479
libfuturize/main.py,sha256=feICmcv0dzWhutvwz0unnIVxusbSlQZFDaxObkHebs8,13733
libpasteurize/__init__.py,sha256=CZA_KgvTQOPAY1_MrlJeQ6eMh2Eei4_KIv4JuyAkpfw,31
libpasteurize/__pycache__/__init__.cpython-39.pyc,,
libpasteurize/__pycache__/main.cpython-39.pyc,,
libpasteurize/fixes/__init__.py,sha256=ccdv-2MGjQMbq8XuEZBndHmbzGRrZnabksjXZLUv044,3719
libpasteurize/fixes/__pycache__/__init__.cpython-39.pyc,,
libpasteurize/fixes/__pycache__/feature_base.cpython-39.pyc,,
libpasteurize/fixes/__pycache__/fix_add_all__future__imports.cpython-39.pyc,,
libpasteurize/fixes/__pycache__/fix_add_all_future_builtins.cpython-39.pyc,,
libpasteurize/fixes/__pycache__/fix_add_future_standard_library_import.cpython-39.pyc,,
libpasteurize/fixes/__pycache__/fix_annotations.cpython-39.pyc,,
libpasteurize/fixes/__pycache__/fix_division.cpython-39.pyc,,
libpasteurize/fixes/__pycache__/fix_features.cpython-39.pyc,,
libpasteurize/fixes/__pycache__/fix_fullargspec.cpython-39.pyc,,
libpasteurize/fixes/__pycache__/fix_future_builtins.cpython-39.pyc,,
libpasteurize/fixes/__pycache__/fix_getcwd.cpython-39.pyc,,
libpasteurize/fixes/__pycache__/fix_imports.cpython-39.pyc,,
libpasteurize/fixes/__pycache__/fix_imports2.cpython-39.pyc,,
libpasteurize/fixes/__pycache__/fix_kwargs.cpython-39.pyc,,
libpasteurize/fixes/__pycache__/fix_memoryview.cpython-39.pyc,,
libpasteurize/fixes/__pycache__/fix_metaclass.cpython-39.pyc,,
libpasteurize/fixes/__pycache__/fix_newstyle.cpython-39.pyc,,
libpasteurize/fixes/__pycache__/fix_next.cpython-39.pyc,,
libpasteurize/fixes/__pycache__/fix_printfunction.cpython-39.pyc,,
libpasteurize/fixes/__pycache__/fix_raise.cpython-39.pyc,,
libpasteurize/fixes/__pycache__/fix_raise_.cpython-39.pyc,,
libpasteurize/fixes/__pycache__/fix_throw.cpython-39.pyc,,
libpasteurize/fixes/__pycache__/fix_unpacking.cpython-39.pyc,,
libpasteurize/fixes/feature_base.py,sha256=v7yLjBDBUPeNUc-YHGGlIsJDOQzFAM4Vo0RN5F1JHVU,1723
libpasteurize/fixes/fix_add_all__future__imports.py,sha256=mHet1LgbHn9GfgCYGNZXKo-rseDWreAvUcAjZwdgeTE,676
libpasteurize/fixes/fix_add_all_future_builtins.py,sha256=scfkY-Sz5j0yDtLYls2ENOcqEMPVxeDm9gFYYPINPB8,1269
libpasteurize/fixes/fix_add_future_standard_library_import.py,sha256=thTRbkBzy_SJjZ0bJteTp0sBTx8Wr69xFakH4styf7Y,663
libpasteurize/fixes/fix_annotations.py,sha256=VT_AorKY9AYWYZUZ17_CeUrJlEA7VGkwVLDQlwD1Bxo,1581
libpasteurize/fixes/fix_division.py,sha256=_TD_c5KniAYqEm11O7NJF0v2WEhYSNkRGcKG_94ZOas,904
libpasteurize/fixes/fix_features.py,sha256=NZn0n34_MYZpLNwyP1Tf51hOiN58Rg7A8tA9pK1S8-c,2675
libpasteurize/fixes/fix_fullargspec.py,sha256=VlZuIU6QNrClmRuvC4mtLICL3yMCi-RcGCnS9fD4b-Q,438
libpasteurize/fixes/fix_future_builtins.py,sha256=SlCK9I9u05m19Lr1wxlJxF8toZ5yu0yXBeDLxUN9_fw,1450
libpasteurize/fixes/fix_getcwd.py,sha256=uebvTvFboLqsROFCwdnzoP6ThziM0skz9TDXHoJcFsQ,873
libpasteurize/fixes/fix_imports.py,sha256=KH4Q-qMzsuN5VcfE1ZGS337yHhxbgrmLoRtpHtr2A94,5026
libpasteurize/fixes/fix_imports2.py,sha256=bs2V5Yv0v_8xLx-lNj9kNEAK2dLYXUXkZ2hxECg01CU,8580
libpasteurize/fixes/fix_kwargs.py,sha256=NB_Ap8YJk-9ncoJRbOiPY_VMIigFgVB8m8AuY29DDhE,5991
libpasteurize/fixes/fix_memoryview.py,sha256=Fwayx_ezpr22tbJ0-QrKdJ-FZTpU-m7y78l1h_N4xxc,551
libpasteurize/fixes/fix_metaclass.py,sha256=IcE2KjaDG8jUR3FYXECzOC_cr2pr5r95W1NTbMrK8Wc,3260
libpasteurize/fixes/fix_newstyle.py,sha256=78sazKOHm9DUoMyW4VdvQpMXZhicbXzorVPRhBpSUrM,888
libpasteurize/fixes/fix_next.py,sha256=VHqcyORRNVqKJ51jJ1OkhwxHuXRgp8qaldyqcMvA4J0,1233
libpasteurize/fixes/fix_printfunction.py,sha256=NDIfqVmUJBG3H9E6nrnN0cWZK8ch9pL4F-nMexdsa38,401
libpasteurize/fixes/fix_raise.py,sha256=zQ_AcMsGmCbtKMgrxZGcHLYNscw6tqXFvHQxgqtNbU8,1099
libpasteurize/fixes/fix_raise_.py,sha256=9STp633frUfYASjYzqhwxx_MXePNmMhfJClowRj8FLY,1225
libpasteurize/fixes/fix_throw.py,sha256=_ZREVre-WttUvk4sWjrqUNqm9Q1uFaATECN0_-PXKbk,835
libpasteurize/fixes/fix_unpacking.py,sha256=xZqxMYHgdeuIkermtY-evisvcKlGCPi5vg5t5pt-XCY,6041
libpasteurize/main.py,sha256=dVHYTQQeJonuOFDNrenJZl-rKHgOQKRMPP1OqnJogWQ,8186
past/__init__.py,sha256=2DxcQt5zgPH-e-TSDS2l7hI94A9eG7pPgD-V5FgH084,2892
past/__pycache__/__init__.cpython-39.pyc,,
past/builtins/__init__.py,sha256=7j_4OsUlN6q2eKr14do7mRQ1GwXRoXAMUR0A1fJpAls,1805
past/builtins/__pycache__/__init__.cpython-39.pyc,,
past/builtins/__pycache__/misc.cpython-39.pyc,,
past/builtins/__pycache__/noniterators.cpython-39.pyc,,
past/builtins/misc.py,sha256=I76Mpx_3wnHpJg7Ub9SZOBRqEFo02YgimZJpfoq17_0,5598
past/builtins/noniterators.py,sha256=LtdELnd7KyYdXg7GkW25cgkEPUC0ggZ5AYMtDe9N95I,9370
past/translation/__init__.py,sha256=oTtrOHD8ToM9c9RXat_BhjKhN33N7_Vg4HGS0if-UbU,14914
past/translation/__pycache__/__init__.cpython-39.pyc,,
past/types/__init__.py,sha256=RyJlgqg9uJ8oF-kJT9QlfhfdmhiMh3fShmtvd2CQycY,879
past/types/__pycache__/__init__.cpython-39.pyc,,
past/types/__pycache__/basestring.cpython-39.pyc,,
past/types/__pycache__/olddict.cpython-39.pyc,,
past/types/__pycache__/oldstr.cpython-39.pyc,,
past/types/basestring.py,sha256=lO66aHgOV02vka6kosnR6GWK0iNC0G28Nugb1MP69-E,774
past/types/olddict.py,sha256=0YtffZ55VY6AyQ_rwu4DZ4vcRsp6dz-dQzczeyN8hLk,2721
past/types/oldstr.py,sha256=JuF8VBBI4OGSgZ3PyhU6LxSAiTfEWzdHUx0Hwg13WSY,4333
past/utils/__init__.py,sha256=e8l1sOfdiDJ3dkckBWLNWvC1ahC5BX5haHC2TGdNgA8,2633
past/utils/__pycache__/__init__.cpython-39.pyc,,
future-1.0.0.dist-info/METADATA
Metadata-Version: 2.1
Name: future
Version: 1.0.0
Summary: Clean single-source support for Python 3 and 2
Home-page: https://python-future.org
Author: Ed Schofield
Author-email: ed@pythoncharmers.com
License: MIT
Project-URL: Source, https://github.com/PythonCharmers/python-future
Keywords: future past python3 migration futurize backport six 2to3 modernize pasteurize 3to2
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.6
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.3
Classifier: Programming Language :: Python :: 3.4
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: License :: OSI Approved
Classifier: License :: OSI Approved :: MIT License
Classifier: Development Status :: 6 - Mature
Classifier: Intended Audience :: Developers
Requires-Python: >=2.6, !=3.0.*, !=3.1.*, !=3.2.*
License-File: LICENSE.txt


future: Easy, safe support for Python 2/3 compatibility
=======================================================

``future`` is the missing compatibility layer between Python 2 and Python
3. It allows you to use a single, clean Python 3.x-compatible codebase to
support both Python 2 and Python 3 with minimal overhead.

It is designed to be used as follows::

    from __future__ import (absolute_import, division,
                            print_function, unicode_literals)
    from builtins import (
             bytes, dict, int, list, object, range, str,
             ascii, chr, hex, input, next, oct, open,
             pow, round, super,
             filter, map, zip)

followed by predominantly standard, idiomatic Python 3 code that then runs
similarly on Python 2.6/2.7 and Python 3.3+.

The imports have no effect on Python 3. On Python 2, they shadow the
corresponding builtins, which normally have different semantics on Python 3
versus 2, to provide their Python 3 semantics.
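On Python 3 the shadowed names already behave this way natively; the following stdlib-only sketch (it runs on plain Python 3, with no ``future`` import) illustrates the semantics that the imports above give Python 2:

```python
# Python 3 semantics that the builtins imports provide on Python 2:
r = range(5)
assert not isinstance(r, list)     # lazy range object, not a list
assert list(r) == [0, 1, 2, 3, 4]

m = map(str, [1, 2])
assert not isinstance(m, list)     # iterator, not a list
assert list(m) == ["1", "2"]

assert 7 / 2 == 3.5                # true division by default
assert isinstance("text", str)     # str is Unicode text
assert isinstance(b"raw", bytes)   # bytes is a distinct binary type
```

On Python 2 without ``future``, ``range`` and ``map`` would return lists, ``7 / 2`` would be ``3``, and ``str`` would be a byte string, which is exactly what the backported builtins change.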


Standard library reorganization
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``future`` supports the standard library reorganization (PEP 3108) through the
following Py3 interfaces:

    >>> # Top-level packages with Py3 names provided on Py2:
    >>> import html.parser
    >>> import queue
    >>> import tkinter.dialog
    >>> import xmlrpc.client
    >>> # etc.

    >>> # Aliases provided for extensions to existing Py2 module names:
    >>> from future.standard_library import install_aliases
    >>> install_aliases()

    >>> from collections import Counter, OrderedDict   # backported to Py2.6
    >>> from collections import UserDict, UserList, UserString
    >>> import urllib.request
    >>> from itertools import filterfalse, zip_longest
    >>> from subprocess import getoutput, getstatusoutput


Automatic conversion
--------------------

An included script called `futurize
<https://python-future.org/automatic_conversion.html>`_ aids in converting
code (from either Python 2 or Python 3) to code compatible with both
platforms. It is similar to ``python-modernize`` but goes further in
providing Python 3 compatibility through the use of the backported types
and builtin functions in ``future``.


Documentation
-------------

See: https://python-future.org


Credits
-------

:Author:  Ed Schofield, Jordan M. Adler, et al.
:Sponsor: Python Charmers: https://pythoncharmers.com
:Others:  See docs/credits.rst or https://python-future.org/credits.html


Licensing
---------
Copyright 2013-2024 Python Charmers, Australia.
The software is distributed under an MIT licence. See LICENSE.txt.

future-1.0.0.dist-info/LICENSE.txt
Copyright (c) 2013-2024 Python Charmers, Australia

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
future-1.0.0.dist-info/REQUESTED
future-1.0.0.dist-info/top_level.txt
future
libfuturize
libpasteurize
past
future-1.0.0.dist-info/entry_points.txt
[console_scripts]
futurize = libfuturize.main:main
pasteurize = libpasteurize.main:main
future-1.0.0.dist-info/INSTALLER
pip
configparser.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-

"""Convenience module importing everything from backports.configparser."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

from backports.configparser import (
    RawConfigParser,
    ConfigParser,
    SafeConfigParser,
    SectionProxy,
    Interpolation,
    BasicInterpolation,
    ExtendedInterpolation,
    LegacyInterpolation,
    NoSectionError,
    DuplicateSectionError,
    DuplicateOptionError,
    NoOptionError,
    InterpolationError,
    InterpolationMissingOptionError,
    InterpolationSyntaxError,
    InterpolationDepthError,
    ParsingError,
    MissingSectionHeaderError,
    ConverterMapping,
    DEFAULTSECT,
    MAX_INTERPOLATION_DEPTH,
)

from backports.configparser import Error, _UNSET, _default_dict, _ChainMap  # noqa: F401

__all__ = [
    "NoSectionError",
    "DuplicateOptionError",
    "DuplicateSectionError",
    "NoOptionError",
    "InterpolationError",
    "InterpolationDepthError",
    "InterpolationMissingOptionError",
    "InterpolationSyntaxError",
    "ParsingError",
    "MissingSectionHeaderError",
    "ConfigParser",
    "SafeConfigParser",
    "RawConfigParser",
    "Interpolation",
    "BasicInterpolation",
    "ExtendedInterpolation",
    "LegacyInterpolation",
    "SectionProxy",
    "ConverterMapping",
    "DEFAULTSECT",
    "MAX_INTERPOLATION_DEPTH",
]

# NOTE: names missing from __all__ imported anyway for backwards compatibility.
__pycache__/configparser.cpython-39.pyc
(compiled CPython 3.9 bytecode; binary contents omitted)
configparser-5.2.0.dist-info/WHEEL
Wheel-Version: 1.0
Generator: bdist_wheel (0.37.0)
Root-Is-Purelib: true
Tag: py3-none-any

configparser-5.2.0.dist-info/LICENSE
Copyright Jason R. Coombs

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to
deal in the Software without restriction, including without limitation the
rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
sell copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
IN THE SOFTWARE.
configparser-5.2.0.dist-info/RECORD
__pycache__/configparser.cpython-39.pyc,,
backports/configparser/__init__.py,sha256=zjgAe9lH7_cTCyt7e-iyfyi007PfaMAVPhJCjM_TUPs,54655
backports/configparser/__pycache__/__init__.cpython-39.pyc,,
backports/configparser/__pycache__/compat.cpython-39.pyc,,
backports/configparser/compat.py,sha256=Z3Fo6AI4BfFRfNd9Fj8n1Yw2NW2r112uQTFEnF659Lc,404
configparser-5.2.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
configparser-5.2.0.dist-info/LICENSE,sha256=2z8CRrH5J48VhFuZ_sR4uLUG63ZIeZNyL4xuJUKF-vg,1050
configparser-5.2.0.dist-info/METADATA,sha256=4MeDipErpJIH61Zwu6BQ9f951Qu5HRg5RKFu3LYNZck,11053
configparser-5.2.0.dist-info/RECORD,,
configparser-5.2.0.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
configparser-5.2.0.dist-info/WHEEL,sha256=ewwEueio1C2XeHTvT17n8dZUJgOvyCWCt0WVNLClP9o,92
configparser-5.2.0.dist-info/top_level.txt,sha256=mIs8gajd7cvEWhVluv4u6ocaHw_TJ9rOrpkZEFv-7Hc,23
configparser.py,sha256=4VADEswCwzy_RDVgvje3BmZhD6iwo3k4EkUZcgzLD4M,1546
configparser-5.2.0.dist-info/METADATA
Metadata-Version: 2.1
Name: configparser
Version: 5.2.0
Summary: Updated configparser from Python 3.8 for Python 2.6+.
Home-page: https://github.com/jaraco/configparser/
Author: Łukasz Langa
Author-email: lukasz@langa.pl
Maintainer: Jason R. Coombs
Maintainer-email: jaraco@jaraco.com
License: UNKNOWN
Keywords: configparser ini parsing conf cfg configuration file
Platform: any
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3 :: Only
Requires-Python: >=3.6
License-File: LICENSE
Provides-Extra: docs
Requires-Dist: sphinx ; extra == 'docs'
Requires-Dist: jaraco.packaging (>=8.2) ; extra == 'docs'
Requires-Dist: rst.linker (>=1.9) ; extra == 'docs'
Requires-Dist: jaraco.tidelift (>=1.4) ; extra == 'docs'
Provides-Extra: testing
Requires-Dist: pytest (>=6) ; extra == 'testing'
Requires-Dist: pytest-checkdocs (>=2.4) ; extra == 'testing'
Requires-Dist: pytest-flake8 ; extra == 'testing'
Requires-Dist: pytest-cov ; extra == 'testing'
Requires-Dist: pytest-enabler (>=1.0.1) ; extra == 'testing'
Requires-Dist: types-backports ; extra == 'testing'
Requires-Dist: pytest-black (>=0.3.7) ; (platform_python_implementation != "PyPy") and extra == 'testing'
Requires-Dist: pytest-mypy ; (platform_python_implementation != "PyPy") and extra == 'testing'

.. image:: https://img.shields.io/pypi/v/configparser.svg
   :target: `PyPI link`_

.. image:: https://img.shields.io/pypi/pyversions/configparser.svg
   :target: `PyPI link`_

.. _PyPI link: https://pypi.org/project/configparser

.. image:: https://github.com/jaraco/configparser/workflows/tests/badge.svg
   :target: https://github.com/jaraco/configparser/actions?query=workflow%3A%22tests%22
   :alt: tests

.. image:: https://img.shields.io/badge/code%20style-black-000000.svg
   :target: https://github.com/psf/black
   :alt: Code style: Black

.. image:: https://readthedocs.org/projects/configparser/badge/?version=latest
   :target: https://configparser.readthedocs.io/en/latest/?badge=latest

.. image:: https://img.shields.io/badge/skeleton-2021-informational
   :target: https://blog.jaraco.com/skeleton

.. image:: https://tidelift.com/badges/package/pypi/configparser
   :target: https://tidelift.com/subscription/pkg/pypi-configparser?utm_source=pypi-configparser&utm_medium=readme


This package is a backport of the refreshed and enhanced ConfigParser from
later Python versions. To use the backport instead of the built-in version,
simply import it explicitly as a backport::

  from backports import configparser

To use the backport on Python 2 and the built-in version on
Python 3, use the standard invocation::

  import configparser

For detailed documentation consult the vanilla version at
http://docs.python.org/3/library/configparser.html.

Why you'll love ``configparser``
--------------------------------

While almost completely compatible with its older namesake, ``configparser``
sports a number of interesting new features:

* full mapping protocol access (`more info
  <http://docs.python.org/3/library/configparser.html#mapping-protocol-access>`_)::

    >>> parser = ConfigParser()
    >>> parser.read_string("""
    ... [DEFAULT]
    ... location = upper left
    ... visible = yes
    ... editable = no
    ... color = blue
    ...
    ... [main]
    ... title = Main Menu
    ... color = green
    ...
    ... [options]
    ... title = Options
    ... """)
    >>> parser['main']['color']
    'green'
    >>> parser['main']['editable']
    'no'
    >>> section = parser['options']
    >>> section['title']
    'Options'
    >>> section['title'] = 'Options (editable: %(editable)s)'
    >>> section['title']
    'Options (editable: no)'

* there's now one default ``ConfigParser`` class, which basically is the old
  ``SafeConfigParser`` with a bunch of tweaks which make it more predictable for
  users. Don't need interpolation? Simply use
  ``ConfigParser(interpolation=None)``, no need to use a distinct
  ``RawConfigParser`` anymore.

* the parser is highly `customizable upon instantiation
  <http://docs.python.org/3/library/configparser.html#customizing-parser-behaviour>`__
  supporting things like changing option delimiters, comment characters, the
  name of the DEFAULT section, the interpolation syntax, etc.

* you can easily create your own interpolation syntax but there are two powerful
  implementations built-in (`more info
  <http://docs.python.org/3/library/configparser.html#interpolation-of-values>`__):

  * the classic ``%(string-like)s`` syntax (called ``BasicInterpolation``)

  * a new ``${buildout:like}`` syntax (called ``ExtendedInterpolation``)

* fallback values may be specified in getters (`more info
  <http://docs.python.org/3/library/configparser.html#fallback-values>`__)::

    >>> config.get('closet', 'monster',
    ...            fallback='No such things as monsters')
    'No such things as monsters'

* ``ConfigParser`` objects can now read data directly `from strings
  <http://docs.python.org/3/library/configparser.html#configparser.ConfigParser.read_string>`__
  and `from dictionaries
  <http://docs.python.org/3/library/configparser.html#configparser.ConfigParser.read_dict>`__.
  That means importing configuration from JSON or specifying default values for
  the whole configuration (multiple sections) is now a single line of code. Same
  goes for copying data from another ``ConfigParser`` instance, thanks to its
  mapping protocol support.

* many smaller tweaks, updates and fixes
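Several of the bullets above can be exercised against the stdlib ``configparser`` on Python 3, which exposes the same API as this backport (a minimal sketch):

```python
import configparser

# Interpolation disabled: no need for a separate RawConfigParser.
raw = configparser.ConfigParser(interpolation=None)
raw.read_string("[paths]\nlog = %(home)s/log\n")
assert raw["paths"]["log"] == "%(home)s/log"  # '%' passes through untouched

# ExtendedInterpolation: the ${section:option} syntax.
ext = configparser.ConfigParser(
    interpolation=configparser.ExtendedInterpolation())
ext.read_string("[common]\nroot = /srv\n\n[app]\ndata = ${common:root}/data\n")
assert ext["app"]["data"] == "/srv/data"

# read_dict: load a whole configuration from a plain dictionary in one call.
cp = configparser.ConfigParser()
cp.read_dict({"db": {"host": "localhost", "port": "5432"}})
assert cp.getint("db", "port") == 5432
```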

A few words about Unicode
-------------------------

``configparser`` comes from Python 3 and as such it works well with Unicode.
The library is generally cleaned up in terms of internal data storage and
reading/writing files.  There are a couple of incompatibilities with the old
``ConfigParser`` due to that. However, the work required to migrate is well
worth it as it shows the issues that would likely come up during migration of
your project to Python 3.

The design assumes that Unicode strings are used whenever possible [1]_.  That
gives you the certainty that what's stored in a configuration object is text.
Once your configuration is read, the rest of your application doesn't have to
deal with encoding issues. All you have is text [2]_. The only two points at
which you need to state an encoding explicitly are when you read from an
external source (e.g. a file) and when you write back.
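A sketch of that boundary, using the stdlib ``configparser`` on Python 3 (same API as the backport): the encoding is named only when touching the file system, while everything inside the parser is text.

```python
import configparser
import os
import tempfile

cp = configparser.ConfigParser()
cp["greeting"] = {"text": "grüß dich"}  # stored internally as text, not bytes

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "app.ini")
    # Encoding is stated only at the I/O boundary, on write...
    with open(path, "w", encoding="utf-8") as f:
        cp.write(f)
    # ...and again on read; the rest of the application sees only text.
    back = configparser.ConfigParser()
    back.read(path, encoding="utf-8")
    assert back["greeting"]["text"] == "grüß dich"
```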

Versioning
----------

This project uses `semver <https://semver.org/spec/v2.0.0.html>`_ to
communicate the impact of various releases while periodically syncing
with the upstream implementation in CPython.
The `history <https://configparser.readthedocs.io/en/latest/history.html>`_
serves as a reference indicating which versions incorporate
which upstream functionality.

Prior to the ``4.0.0`` release, `another scheme
<https://github.com/jaraco/configparser/blob/3.8.1/README.rst#versioning>`_
was used to associate the CPython and backports releases.

Maintenance
-----------

This backport was originally authored by Łukasz Langa, the current vanilla
``configparser`` maintainer for CPython and is currently maintained by
Jason R. Coombs:

* `configparser repository <https://github.com/jaraco/configparser>`_

* `configparser issue tracker <https://github.com/jaraco/configparser/issues>`_

For Enterprise
==============

Available as part of the Tidelift Subscription.

This project and the maintainers of thousands of other packages are working with Tidelift to deliver one enterprise subscription that covers all of the open source you use.

`Learn more <https://tidelift.com/subscription/pkg/pypi-configparser?utm_source=pypi-configparser&utm_medium=referral&utm_campaign=github>`_.

Security Contact
----------------

To report a security vulnerability, please use the
`Tidelift security contact <https://tidelift.com/security>`_.
Tidelift will coordinate the fix and disclosure.

Conversion Process
------------------

This section is technical and should concern you only if you are wondering how
this backport is produced. If the implementation details of this backport are
not important to you, feel free to skip the following content.

The project takes the following branching approach:

* The ``3.x`` branch holds unchanged files synchronized from the upstream
  CPython repository. The synchronization is currently done by manually copying
  the required files and stating from which CPython changeset they come.

* The ``main`` branch holds a version of the ``3.x`` code with some tweaks
  that make it compatible with older Pythons. Code on this branch must work
  on all supported Python versions. Test with ``tox`` or in CI.

The process works like this:

1. In the ``3.x`` branch, run ``pip-run -- sync-upstream.py``, which
   downloads the latest stable release of Python and copies the relevant
   files from there into their new locations and then commits those
   changes with a nice reference to the relevant upstream commit hash.

2. Check for new names in ``__all__`` and update imports in
   ``configparser.py`` accordingly. Optionally, run the tests on a late
   Python 3. Commit.

3. Merge the new commit to ``main``. Run tests. Commit.

4. Make any compatibility changes on ``main``. Run tests. Commit.

5. Update the docs and release the new version.


Footnotes
---------

.. [1] To somewhat ease migration, passing bytestrings is still supported but
       they are converted to Unicode for internal storage anyway. This means
       that for the vast majority of strings used in configuration files, it
       won't matter if you pass them as bytestrings or Unicode. However, if you
       pass a bytestring that cannot be converted to Unicode using the naive
       ASCII codec, a ``UnicodeDecodeError`` will be raised. This is purposeful
       and helps you manage proper encoding for all content you store in
       memory, read from various sources and write back.
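The behavior described in this footnote can be illustrated concretely. The snippet below is a minimal sketch using only the built-in ``bytes.decode`` method; it shows why a bytestring with non-ASCII content is rejected while plain ASCII passes through:

```python
# ASCII-only bytestrings decode transparently to text:
print(b"key = value".decode("ascii"))

# Non-ASCII bytes (here, UTF-8 encoded 'café') cannot be decoded
# with the naive ASCII codec and raise UnicodeDecodeError:
try:
    b"caf\xc3\xa9".decode("ascii")
except UnicodeDecodeError as exc:
    print("rejected:", exc.reason)
```

Failing loudly at the boundary, instead of silently storing mojibake, is exactly the purposeful behavior the footnote refers to.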

.. [2] Life gets much easier when you understand that you basically manage
       **text** in your application.  You don't care about bytes but about
       letters.  In that regard the concept of content encoding is meaningless.
       The only time when you deal with raw bytes is when you write the data to
       a file.  Then you have to specify how your text should be encoded.  On
       the other end, to get meaningful text from a file, the application
       reading it has to know which encoding was used during its creation.  But
       once the bytes are read and properly decoded, all you have is text.  This
       is especially powerful when you start interacting with multiple data
       sources.  Even if each of them uses a different encoding, inside your
       application data is held in abstract text form.  You can program your
       business logic without worrying about which data came from which source.
       You can freely exchange the data you store between sources.  Only
       reading/writing files requires encoding your text to bytes.
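The text-in-the-middle, bytes-at-the-boundary model described above can be sketched as follows. This is an illustrative example only (the file name and encodings are hypothetical), using just the standard library:

```python
import os
import tempfile

# Two "sources" that stored the same text with different encodings:
latin_bytes = "café".encode("latin-1")
utf8_bytes = "café".encode("utf-8")

# Decode at the boundary; inside the application everything is abstract text:
texts = [latin_bytes.decode("latin-1"), utf8_bytes.decode("utf-8")]
print(texts[0] == texts[1])  # the source encoding no longer matters

# Encoding matters again only when writing the text back out:
path = os.path.join(tempfile.mkdtemp(), "out.txt")
with open(path, "w", encoding="utf-8") as f:
    f.write(texts[0])
with open(path, encoding="utf-8") as f:
    print(f.read())
```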


PKa@u\&configparser-5.2.0.dist-info/REQUESTEDnu�[���PKa@u\�*configparser-5.2.0.dist-info/top_level.txtnu�[���backports
configparser
PKc@u\���&configparser-5.2.0.dist-info/INSTALLERnu�[���pip
PKi@u\h��[[!certifi-2024.8.30.dist-info/WHEELnu�[���Wheel-Version: 1.0
Generator: setuptools (74.0.0)
Root-Is-Purelib: true
Tag: py3-none-any

PKk@u\�+2��#certifi-2024.8.30.dist-info/LICENSEnu�[���This package contains a modified version of ca-bundle.crt:

ca-bundle.crt -- Bundle of CA Root Certificates

This is a bundle of X.509 certificates of public Certificate Authorities
(CA). These were automatically extracted from Mozilla's root certificates
file (certdata.txt).  This file can be found in the mozilla source tree:
https://hg.mozilla.org/mozilla-central/file/tip/security/nss/lib/ckfw/builtins/certdata.txt
It contains the certificates in PEM format and therefore
can be directly used with curl / libcurl / php_curl, or with
an Apache+mod_ssl webserver for SSL client authentication.
Just configure this file as the SSLCACertificateFile.

***** BEGIN LICENSE BLOCK *****
This Source Code Form is subject to the terms of the Mozilla Public License,
v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain
one at http://mozilla.org/MPL/2.0/.

***** END LICENSE BLOCK *****
@(#) $RCSfile: certdata.txt,v $ $Revision: 1.80 $ $Date: 2011/11/03 15:11:58 $
PKn@u\���%OO"certifi-2024.8.30.dist-info/RECORDnu�[���certifi-2024.8.30.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
certifi-2024.8.30.dist-info/LICENSE,sha256=6TcW2mucDVpKHfYP5pWzcPBpVgPSH2-D8FPkLPwQyvc,989
certifi-2024.8.30.dist-info/METADATA,sha256=GhBHRVUN6a4ZdUgE_N5wmukJfyuoE-QyIl8Y3ifNQBM,2222
certifi-2024.8.30.dist-info/RECORD,,
certifi-2024.8.30.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
certifi-2024.8.30.dist-info/WHEEL,sha256=UvcQYKBHoFqaQd6LKyqHw9fxEolWLQnlzP0h_LgJAfI,91
certifi-2024.8.30.dist-info/top_level.txt,sha256=KMu4vUCfsjLrkPbSNdgdekS-pVJzBAJFO__nI8NF6-U,8
certifi/__init__.py,sha256=p_GYZrjUwPBUhpLlCZoGb0miKBKSqDAyZC5DvIuqbHQ,94
certifi/__main__.py,sha256=xBBoj905TUWBLRGANOcf7oi6e-3dMP4cEoG9OyMs11g,243
certifi/__pycache__/__init__.cpython-39.pyc,,
certifi/__pycache__/__main__.cpython-39.pyc,,
certifi/__pycache__/core.cpython-39.pyc,,
certifi/cacert.pem,sha256=lO3rZukXdPyuk6BWUJFOKQliWaXH6HGh9l1GGrUgG0c,299427
certifi/core.py,sha256=qRDDFyXVJwTB_EmoGppaXU_R9qCZvhl-EzxPMuV3nTA,4426
certifi/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
PKp@u\�AϺ��$certifi-2024.8.30.dist-info/METADATAnu�[���Metadata-Version: 2.1
Name: certifi
Version: 2024.8.30
Summary: Python package for providing Mozilla's CA Bundle.
Home-page: https://github.com/certifi/python-certifi
Author: Kenneth Reitz
Author-email: me@kennethreitz.com
License: MPL-2.0
Project-URL: Source, https://github.com/certifi/python-certifi
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)
Classifier: Natural Language :: English
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Requires-Python: >=3.6
License-File: LICENSE

Certifi: Python SSL Certificates
================================

Certifi provides Mozilla's carefully curated collection of Root Certificates for
validating the trustworthiness of SSL certificates while verifying the identity
of TLS hosts. It has been extracted from the `Requests`_ project.

Installation
------------

``certifi`` is available on PyPI. Simply install it with ``pip``::

    $ pip install certifi

Usage
-----

To reference the installed certificate authority (CA) bundle, you can use the
built-in function::

    >>> import certifi

    >>> certifi.where()
    '/usr/local/lib/python3.7/site-packages/certifi/cacert.pem'

Or from the command line::

    $ python -m certifi
    /usr/local/lib/python3.7/site-packages/certifi/cacert.pem
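A common pattern is to hand the bundle to an SSL context via the standard-library ``ssl`` module. This is a sketch, not part of certifi's own API; the ``ImportError`` fallback below is hypothetical and only keeps the snippet runnable where certifi is not installed:

```python
import ssl

try:
    import certifi
    cafile = certifi.where()  # path to Mozilla's CA bundle
except ImportError:
    cafile = None  # fall back to the platform's default trust store

# A default context verifies certificates and checks hostnames:
context = ssl.create_default_context(cafile=cafile)
print(context.verify_mode == ssl.CERT_REQUIRED)
```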

Enjoy!

.. _`Requests`: https://requests.readthedocs.io/en/master/

Addition/Removal of Certificates
--------------------------------

Certifi does not support any addition/removal or other modification of the
CA trust store content. This project is intended to provide a reliable and
highly portable root of trust to python deployments. Look to upstream projects
for methods to use alternate trust.
PKs@u\%certifi-2024.8.30.dist-info/REQUESTEDnu�[���PKs@u\]��)certifi-2024.8.30.dist-info/top_level.txtnu�[���certifi
PKu@u\���%certifi-2024.8.30.dist-info/INSTALLERnu�[���pip
PK{@u\
^��LLpast/__init__.pynu�[���# coding=utf-8
"""
past: compatibility with Python 2 from Python 3
===============================================

``past`` is a package to aid with Python 2/3 compatibility. Whereas ``future``
contains backports of Python 3 constructs to Python 2, ``past`` provides
implementations of some Python 2 constructs in Python 3 and tools to import and
run Python 2 code in Python 3. It is intended to be used sparingly, as a way of
running old Python 2 code from Python 3 until the code is ported properly.

Potential uses for libraries:

- as a step in porting a Python 2 codebase to Python 3 (e.g. with the ``futurize`` script)
- to provide Python 3 support for previously Python 2-only libraries with the
  same APIs as on Python 2 -- particularly with regard to 8-bit strings (the
  ``past.builtins.str`` type).
- to aid in providing minimal-effort Python 3 support for applications using
  libraries that do not yet wish to upgrade their code properly to Python 3, or
  wish to upgrade it gradually to Python 3 style.


Here are some code examples that run identically on Python 3 and 2::

    >>> from past.builtins import str as oldstr

    >>> philosopher = oldstr(u'\u5b54\u5b50'.encode('utf-8'))
    >>> # This now behaves like a Py2 byte-string on both Py2 and Py3.
    >>> # For example, indexing returns a Python 2-like string object, not
    >>> # an integer:
    >>> philosopher[0]
    '\xe5'
    >>> type(philosopher[0])
    <past.builtins.oldstr>

    >>> # List-producing versions of range, reduce, map, filter
    >>> from past.builtins import range, reduce
    >>> range(10)
    [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
    >>> reduce(lambda x, y: x+y, [1, 2, 3, 4, 5])
    15

    >>> # Other functions removed in Python 3 are resurrected ...
    >>> from past.builtins import execfile
    >>> execfile('myfile.py')

    >>> from past.builtins import raw_input
    >>> name = raw_input('What is your name? ')
    What is your name? [cursor]

    >>> from past.builtins import reload
    >>> reload(mymodule)   # equivalent to imp.reload(mymodule) in Python 3

    >>> from past.builtins import xrange
    >>> for i in xrange(10):
    ...     pass


It also provides import hooks so you can import and use Python 2 modules like
this::

    $ python3

    >>> from past.translation import autotranslate
    >>> autotranslate('mypy2module')
    >>> import mypy2module

until the authors of the Python 2 modules have upgraded their code. Then, for
example::

    >>> mypy2module.func_taking_py2_string(oldstr(b'abcd'))


Credits
-------

:Author:  Ed Schofield, Jordan M. Adler, et al
:Sponsor: Python Charmers: https://pythoncharmers.com


Licensing
---------
Copyright 2013-2024 Python Charmers, Australia.
The software is distributed under an MIT licence. See LICENSE.txt.
"""

from future import __version__, __copyright__, __license__

__title__ = 'past'
__author__ = 'Ed Schofield'
PK�@u\8�p�B:B:past/translation/__init__.pynu�[���# -*- coding: utf-8 -*-
"""
past.translation
==================

The ``past.translation`` package provides an import hook for Python 3 which
transparently runs ``futurize`` fixers over Python 2 code on import to convert
print statements into functions, etc.

It is intended to assist users in migrating to Python 3.x even if some
dependencies still only support Python 2.x.

Usage
-----

Once your Py2 package is installed in the usual module search path, the import
hook is invoked as follows:

    >>> from past.translation import autotranslate
    >>> autotranslate('mypackagename')

Or:

    >>> autotranslate(['mypackage1', 'mypackage2'])

You can unregister the hook using::

    >>> from past.translation import remove_hooks
    >>> remove_hooks()

Author: Ed Schofield.
Inspired by and based on ``uprefix`` by Vinay M. Sajip.
"""

import sys
# the imp module is deprecated in Python 3; prefer importlib where available
if sys.version_info >= (3, 6):
    import importlib as imp
else:
    import imp
import logging
import os
import copy
from lib2to3.pgen2.parse import ParseError
from lib2to3.refactor import RefactoringTool

from libfuturize import fixes

try:
    from importlib.machinery import (
        PathFinder,
        SourceFileLoader,
    )
except ImportError:
    PathFinder = None
    SourceFileLoader = object

if sys.version_info[:2] < (3, 4):
    import imp

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

myfixes = (list(fixes.libfuturize_fix_names_stage1) +
           list(fixes.lib2to3_fix_names_stage1) +
           list(fixes.libfuturize_fix_names_stage2) +
           list(fixes.lib2to3_fix_names_stage2))


# We detect whether the code is Py2 or Py3 by applying certain lib2to3 fixers
# to it. If the diff is empty, it's Python 3 code.

py2_detect_fixers = [
# From stage 1:
    'lib2to3.fixes.fix_apply',
    # 'lib2to3.fixes.fix_dict',        # TODO: add support for utils.viewitems() etc. and move to stage2
    'lib2to3.fixes.fix_except',
    'lib2to3.fixes.fix_execfile',
    'lib2to3.fixes.fix_exitfunc',
    'lib2to3.fixes.fix_funcattrs',
    'lib2to3.fixes.fix_filter',
    'lib2to3.fixes.fix_has_key',
    'lib2to3.fixes.fix_idioms',
    'lib2to3.fixes.fix_import',    # makes any implicit relative imports explicit. (Use with ``from __future__ import absolute_import)
    'lib2to3.fixes.fix_intern',
    'lib2to3.fixes.fix_isinstance',
    'lib2to3.fixes.fix_methodattrs',
    'lib2to3.fixes.fix_ne',
    'lib2to3.fixes.fix_numliterals',    # turns 1L into 1, 0755 into 0o755
    'lib2to3.fixes.fix_paren',
    'lib2to3.fixes.fix_print',
    'lib2to3.fixes.fix_raise',   # uses incompatible with_traceback() method on exceptions
    'lib2to3.fixes.fix_renames',
    'lib2to3.fixes.fix_reduce',
    # 'lib2to3.fixes.fix_set_literal',  # this is unnecessary and breaks Py2.6 support
    'lib2to3.fixes.fix_repr',
    'lib2to3.fixes.fix_standarderror',
    'lib2to3.fixes.fix_sys_exc',
    'lib2to3.fixes.fix_throw',
    'lib2to3.fixes.fix_tuple_params',
    'lib2to3.fixes.fix_types',
    'lib2to3.fixes.fix_ws_comma',
    'lib2to3.fixes.fix_xreadlines',

# From stage 2:
    'lib2to3.fixes.fix_basestring',
    # 'lib2to3.fixes.fix_buffer',    # perhaps not safe. Test this.
    # 'lib2to3.fixes.fix_callable',  # not needed in Py3.2+
    # 'lib2to3.fixes.fix_dict',        # TODO: add support for utils.viewitems() etc.
    'lib2to3.fixes.fix_exec',
    # 'lib2to3.fixes.fix_future',    # we don't want to remove __future__ imports
    'lib2to3.fixes.fix_getcwdu',
    # 'lib2to3.fixes.fix_imports',   # called by libfuturize.fixes.fix_future_standard_library
    # 'lib2to3.fixes.fix_imports2',  # we don't handle this yet (dbm)
    # 'lib2to3.fixes.fix_input',
    # 'lib2to3.fixes.fix_itertools',
    # 'lib2to3.fixes.fix_itertools_imports',
    'lib2to3.fixes.fix_long',
    # 'lib2to3.fixes.fix_map',
    # 'lib2to3.fixes.fix_metaclass', # causes SyntaxError in Py2! Use the one from ``six`` instead
    'lib2to3.fixes.fix_next',
    'lib2to3.fixes.fix_nonzero',     # TODO: add a decorator for mapping __bool__ to __nonzero__
    # 'lib2to3.fixes.fix_operator',    # we will need support for this by e.g. extending the Py2 operator module to provide those functions in Py3
    'lib2to3.fixes.fix_raw_input',
    # 'lib2to3.fixes.fix_unicode',   # strips off the u'' prefix, which removes a potentially helpful source of information for disambiguating unicode/byte strings
    # 'lib2to3.fixes.fix_urllib',
    'lib2to3.fixes.fix_xrange',
    # 'lib2to3.fixes.fix_zip',
]


class RTs:
    """
    A namespace for the refactoring tools. This avoids creating these at
    the module level, which slows down the module import. (See issue #117).

    There are two possible grammars: with or without the print statement.
    Hence we have two possible refactoring tool implementations.
    """
    _rt = None
    _rtp = None
    _rt_py2_detect = None
    _rtp_py2_detect = None

    @staticmethod
    def setup():
        """
        Call this before using the refactoring tools to create them on demand
        if needed.
        """
        if None in [RTs._rt, RTs._rtp]:
            RTs._rt = RefactoringTool(myfixes)
            RTs._rtp = RefactoringTool(myfixes, {'print_function': True})


    @staticmethod
    def setup_detect_python2():
        """
        Call this before using the refactoring tools to create them on demand
        if needed.
        """
        if None in [RTs._rt_py2_detect, RTs._rtp_py2_detect]:
            RTs._rt_py2_detect = RefactoringTool(py2_detect_fixers)
            RTs._rtp_py2_detect = RefactoringTool(py2_detect_fixers,
                                                  {'print_function': True})


# We need to find a prefix for the standard library, as we don't want to
# process any files there (they will already be Python 3).
#
# The following method is used by Vinay Sajip in uprefix. This fails for
# ``conda`` environments:
#     # In a non-pythonv virtualenv, sys.real_prefix points to the installed Python.
#     # In a pythonv venv, sys.base_prefix points to the installed Python.
#     # Outside a virtual environment, sys.prefix points to the installed Python.

#     if hasattr(sys, 'real_prefix'):
#         _syslibprefix = sys.real_prefix
#     else:
#         _syslibprefix = getattr(sys, 'base_prefix', sys.prefix)

# Instead, we use the portion of the path common to both the stdlib modules
# ``math`` and ``urllib``.

def splitall(path):
    """
    Split a path into all components. From Python Cookbook.
    """
    allparts = []
    while True:
        parts = os.path.split(path)
        if parts[0] == path:  # sentinel for absolute paths
            allparts.insert(0, parts[0])
            break
        elif parts[1] == path: # sentinel for relative paths
            allparts.insert(0, parts[1])
            break
        else:
            path = parts[0]
            allparts.insert(0, parts[1])
    return allparts


def common_substring(s1, s2):
    """
    Returns the longest common substring to the two strings, starting from the
    left.
    """
    chunks = []
    path1 = splitall(s1)
    path2 = splitall(s2)
    for (dir1, dir2) in zip(path1, path2):
        if dir1 != dir2:
            break
        chunks.append(dir1)
    return os.path.join(*chunks)

# _stdlibprefix = common_substring(math.__file__, urllib.__file__)


def detect_python2(source, pathname):
    """
    Returns a bool indicating whether we think the code is Py2
    """
    RTs.setup_detect_python2()
    try:
        tree = RTs._rt_py2_detect.refactor_string(source, pathname)
    except ParseError as e:
        if e.msg != 'bad input' or e.value != '=':
            raise
        # Fall back to the detection tool built with the print-statement grammar
        tree = RTs._rtp_py2_detect.refactor_string(source, pathname)

    if source != str(tree)[:-1]:   # remove added newline
        # The above fixers made changes, so we conclude it's Python 2 code
        logger.debug('Detected Python 2 code: {0}'.format(pathname))
        return True
    else:
        logger.debug('Detected Python 3 code: {0}'.format(pathname))
        return False


def transform(source, pathname):
    # This implementation uses lib2to3,
    # you can override and use something else
    # if that's better for you

    # lib2to3 likes a newline at the end
    RTs.setup()
    source += '\n'
    try:
        tree = RTs._rt.refactor_string(source, pathname)
    except ParseError as e:
        if e.msg != 'bad input' or e.value != '=':
            raise
        tree = RTs._rtp.refactor_string(source, pathname)
    # could optimise a bit for only doing str(tree) if
    # getattr(tree, 'was_changed', False) returns True
    return str(tree)[:-1]  # remove added newline


class PastSourceFileLoader(SourceFileLoader):
    exclude_paths = []
    include_paths = []

    def _convert_needed(self):
        fullname = self.name
        if any(fullname.startswith(path) for path in self.exclude_paths):
            convert = False
        elif any(fullname.startswith(path) for path in self.include_paths):
            convert = True
        else:
            convert = False
        return convert

    def _exec_transformed_module(self, module):
        source = self.get_source(self.name)
        pathname = self.path
        if detect_python2(source, pathname):
            source = transform(source, pathname)
        code = compile(source, pathname, "exec")
        exec(code, module.__dict__)

    # For Python 3.3
    def load_module(self, fullname):
        logger.debug("Running load_module for %s", fullname)
        if fullname in sys.modules:
            mod = sys.modules[fullname]
        else:
            if self._convert_needed():
                logger.debug("Autoconverting %s", fullname)
                mod = imp.new_module(fullname)
                sys.modules[fullname] = mod

                # required by PEP 302
                mod.__file__ = self.path
                mod.__loader__ = self
                if self.is_package(fullname):
                    mod.__path__ = []
                    mod.__package__ = fullname
                else:
                    mod.__package__ = fullname.rpartition('.')[0]
                self._exec_transformed_module(mod)
            else:
                mod = super().load_module(fullname)
        return mod

    # For Python >=3.4
    def exec_module(self, module):
        logger.debug("Running exec_module for %s", module)
        if self._convert_needed():
            logger.debug("Autoconverting %s", self.name)
            self._exec_transformed_module(module)
        else:
            super().exec_module(module)


class Py2Fixer(object):
    """
    An import hook class that uses lib2to3 for source-to-source translation of
    Py2 code to Py3.
    """

    # See the comments on :class:future.standard_library.RenameImport.
    # We add this attribute here so remove_hooks() and install_hooks() can
    # unambiguously detect whether the import hook is installed:
    PY2FIXER = True

    def __init__(self):
        self.found = None
        self.base_exclude_paths = ['future', 'past']
        self.exclude_paths = copy.copy(self.base_exclude_paths)
        self.include_paths = []

    def include(self, paths):
        """
        Pass in a sequence of module names such as 'plotrique.plotting' that,
        if present at the leftmost side of the full package name, would
        specify the module to be transformed from Py2 to Py3.
        """
        self.include_paths += paths

    def exclude(self, paths):
        """
        Pass in a sequence of strings such as 'mymodule' that, if
        present at the leftmost side of the full package name, would cause
        the module not to undergo any source transformation.
        """
        self.exclude_paths += paths

    # For Python 3.3
    def find_module(self, fullname, path=None):
        logger.debug("Running find_module: (%s, %s)", fullname, path)
        loader = PathFinder.find_module(fullname, path)
        if not loader:
            logger.debug("Py2Fixer could not find %s", fullname)
            return None
        loader.__class__ = PastSourceFileLoader
        loader.exclude_paths = self.exclude_paths
        loader.include_paths = self.include_paths
        return loader

    # For Python >=3.4
    def find_spec(self, fullname, path=None, target=None):
        logger.debug("Running find_spec: (%s, %s, %s)", fullname, path, target)
        spec = PathFinder.find_spec(fullname, path, target)
        if not spec:
            logger.debug("Py2Fixer could not find %s", fullname)
            return None
        spec.loader.__class__ = PastSourceFileLoader
        spec.loader.exclude_paths = self.exclude_paths
        spec.loader.include_paths = self.include_paths
        return spec


_hook = Py2Fixer()


def install_hooks(include_paths=(), exclude_paths=()):
    if isinstance(include_paths, str):
        include_paths = (include_paths,)
    if isinstance(exclude_paths, str):
        exclude_paths = (exclude_paths,)
    assert len(include_paths) + len(exclude_paths) > 0, 'Pass at least one argument'
    _hook.include(include_paths)
    _hook.exclude(exclude_paths)
    # _hook.debug = debug
    enable = sys.version_info[0] >= 3   # enabled for all 3.x+
    if enable and _hook not in sys.meta_path:
        sys.meta_path.insert(0, _hook)  # insert at beginning. This could be made a parameter

    # We could return the hook when there are ways of configuring it
    #return _hook


def remove_hooks():
    if _hook in sys.meta_path:
        sys.meta_path.remove(_hook)


def detect_hooks():
    """
    Returns True if the import hooks are installed, False if not.
    """
    return _hook in sys.meta_path
    # present = any([hasattr(hook, 'PY2FIXER') for hook in sys.meta_path])
    # return present


class hooks(object):
    """
    Acts as a context manager. Use like this:

    >>> from past import translation
    >>> with translation.hooks():
    ...     import mypy2module
    >>> import requests        # py2/3 compatible anyway
    >>> # etc.
    """
    def __enter__(self):
        self.hooks_were_installed = detect_hooks()
        install_hooks()
        return self

    def __exit__(self, *args):
        if not self.hooks_were_installed:
            remove_hooks()


class suspend_hooks(object):
    """
    Acts as a context manager. Use like this:

    >>> from past import translation
    >>> translation.install_hooks()
    >>> import http.client
    >>> # ...
    >>> with translation.suspend_hooks():
    >>>     import requests     # or others that support Py2/3

    If the hooks were disabled before the context, they are not installed when
    the context is left.
    """
    def __enter__(self):
        self.hooks_were_installed = detect_hooks()
        remove_hooks()
        return self
    def __exit__(self, *args):
        if self.hooks_were_installed:
            install_hooks()


# alias
autotranslate = install_hooks
PK past/translation/__pycache__/__init__.cpython-39.pyc  [compiled CPython 3.9 bytecode; binary payload not representable as text]
	PK�@u\zD���past/builtins/misc.pynu�[���from __future__ import unicode_literals

import inspect
import sys
import math
import numbers

from future.utils import PY2, PY3, exec_


if PY2:
    from collections import Mapping
else:
    from collections.abc import Mapping

if PY3:
    import builtins

    def apply(f, *args, **kw):
        return f(*args, **kw)

    from past.builtins import str as oldstr

    def chr(i):
        """
        Return a byte-string of one character with ordinal i; 0 <= i < 256
        """
        return oldstr(bytes((i,)))

    def cmp(x, y):
        """
        cmp(x, y) -> integer

        Return negative if x<y, zero if x==y, positive if x>y.
        Python 2 allowed looser comparisons: cmp() accepted None, mixed
        non-numeric types, and collections. Try to match that old behavior.
        """
        if isinstance(x, set) and isinstance(y, set):
            raise TypeError('cannot compare sets using cmp()',)
        try:
            if isinstance(x, numbers.Number) and math.isnan(x):
                if not isinstance(y, numbers.Number):
                    raise TypeError('cannot compare float("nan"), {type_y} with cmp'.format(type_y=type(y)))
                if isinstance(y, int):
                    return 1
                else:
                    return -1
            if isinstance(y, numbers.Number) and math.isnan(y):
                if not isinstance(x, numbers.Number):
                    raise TypeError('cannot compare {type_x}, float("nan") with cmp'.format(type_x=type(x)))
                if isinstance(x, int):
                    return -1
                else:
                    return 1
            return (x > y) - (x < y)
        except TypeError:
            if x == y:
                return 0
            type_order = [
                type(None),
                numbers.Number,
                dict, list,
                set,
                (str, bytes),
            ]
            x_type_index = y_type_index = None
            for i, type_match in enumerate(type_order):
                if isinstance(x, type_match):
                    x_type_index = i
                if isinstance(y, type_match):
                    y_type_index = i
            if cmp(x_type_index, y_type_index) == 0:
                if isinstance(x, bytes) and isinstance(y, str):
                    return cmp(x.decode('ascii'), y)
                if isinstance(y, bytes) and isinstance(x, str):
                    return cmp(x, y.decode('ascii'))
                elif isinstance(x, list):
                    # if both arguments are lists take the comparison of the first non equal value
                    for x_elem, y_elem in zip(x, y):
                        elem_cmp_val = cmp(x_elem, y_elem)
                        if elem_cmp_val != 0:
                            return elem_cmp_val
                    # if all elements are equal, return equal/0
                    return 0
                elif isinstance(x, dict):
                    if len(x) != len(y):
                        return cmp(len(x), len(y))
                    else:
                        x_key = min(a for a in x if a not in y or x[a] != y[a])
                        y_key = min(b for b in y if b not in x or x[b] != y[b])
                        if x_key != y_key:
                            return cmp(x_key, y_key)
                        else:
                            return cmp(x[x_key], y[y_key])
            return cmp(x_type_index, y_type_index)
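    # The fallback above ultimately reduces to one compact Py3 idiom. A
    # standalone sketch (illustrative helper name, not part of this module):

```python
# Core of the Py3 cmp() replacement: for operands that support ordering,
# (x > y) - (x < y) yields 1, 0 or -1, just like Py2's builtin cmp().
def simple_cmp(x, y):
    return (x > y) - (x < y)

assert simple_cmp(1, 2) == -1
assert simple_cmp(2, 2) == 0
assert simple_cmp("b", "a") == 1
```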

    from sys import intern

    def oct(number):
        """oct(number) -> string

        Return the octal representation of an integer
        """
        # Replace the '0o' prefix so negative numbers keep their sign.
        return builtins.oct(number).replace('0o', '0')

    raw_input = input
    # imp was deprecated in python 3.6
    if sys.version_info >= (3, 6):
        from importlib import reload
    else:
        # for python3 < 3.6
        from imp import reload
    unicode = str
    unichr = chr
    xrange = range
else:
    import __builtin__
    from collections import Mapping
    apply = __builtin__.apply
    chr = __builtin__.chr
    cmp = __builtin__.cmp
    execfile = __builtin__.execfile
    intern = __builtin__.intern
    oct = __builtin__.oct
    raw_input = __builtin__.raw_input
    reload = __builtin__.reload
    unicode = __builtin__.unicode
    unichr = __builtin__.unichr
    xrange = __builtin__.xrange


if PY3:
    def execfile(filename, myglobals=None, mylocals=None):
        """
        Read and execute a Python script from a file in the given namespaces.
        The globals and locals are dictionaries, defaulting to the current
        globals and locals. If only globals is given, locals defaults to it.
        """
        if myglobals is None:
            # There seems to be no alternative to frame hacking here.
            caller_frame = inspect.stack()[1]
            myglobals = caller_frame[0].f_globals
            mylocals = caller_frame[0].f_locals
        elif mylocals is None:
            # Only if myglobals is given do we set mylocals to it.
            mylocals = myglobals
        if not isinstance(myglobals, Mapping):
            raise TypeError('globals must be a mapping')
        if not isinstance(mylocals, Mapping):
            raise TypeError('locals must be a mapping')
        with open(filename, "rb") as fin:
            source = fin.read()
        code = compile(source, filename, "exec")
        exec_(code, myglobals, mylocals)
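    # The read -> compile -> exec sequence above can be exercised end to end.
    # A self-contained sketch using the builtin exec rather than future's
    # exec_, with a hypothetical run_script helper:

```python
import os
import tempfile

def run_script(filename, namespace):
    # Same read -> compile -> exec sequence as execfile() above.
    with open(filename, "rb") as fin:
        source = fin.read()
    code = compile(source, filename, "exec")
    exec(code, namespace)

# Write a throwaway script, run it, and inspect the resulting namespace.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("answer = 6 * 7\n")
    path = f.name
ns = {}
run_script(path, ns)
os.unlink(path)
assert ns["answer"] == 42
```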


if PY3:
    __all__ = ['apply', 'chr', 'cmp', 'execfile', 'intern', 'raw_input',
               'reload', 'unichr', 'unicode', 'xrange']
else:
    __all__ = []
# File: past/builtins/noniterators.py
"""
This module is designed to be used as follows::

    from past.builtins.noniterators import filter, map, range, reduce, zip

And then, for example::

    assert isinstance(range(5), list)

The list-producing functions this brings in are:

- ``filter``
- ``map``
- ``range``
- ``reduce``
- ``zip``

"""

from __future__ import division, absolute_import, print_function

from itertools import chain, starmap
import itertools       # since zip_longest doesn't exist on Py2
from past.types import basestring
from past.utils import PY3


def flatmap(f, items):
    return chain.from_iterable(map(f, items))


if PY3:
    import builtins

    # list-producing versions of the major Python iterating functions
    def oldfilter(*args):
        """
        filter(function or None, sequence) -> list, tuple, or string

        Return those items of sequence for which function(item) is true.
        If function is None, return the items that are true.  If sequence
        is a tuple or string, return the same type, else return a list.
        """
        mytype = type(args[1])
        if isinstance(args[1], basestring):
            return mytype().join(builtins.filter(*args))
        elif isinstance(args[1], (tuple, list)):
            return mytype(builtins.filter(*args))
        else:
            # Fall back to list. Is this the right thing to do?
            return list(builtins.filter(*args))
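    # The dispatch above preserves the input's type. A standalone sketch of
    # the same idea, simplified to str/tuple/list (illustrative name):

```python
def typed_filter(func, seq):
    # Filtering a str returns a str, a tuple a tuple; everything else
    # falls back to a list -- mirroring Py2 filter()'s behaviour.
    if isinstance(seq, str):
        return "".join(filter(func, seq))
    if isinstance(seq, tuple):
        return tuple(filter(func, seq))
    return list(filter(func, seq))

assert typed_filter(str.isalpha, "a1b2c3") == "abc"
assert typed_filter(None, (0, 1, 2)) == (1, 2)
assert typed_filter(None, range(3)) == [1, 2]
```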

    # This is surprisingly difficult to get right. For example, the
    # solutions here fail with the test cases in the docstring below:
    # http://stackoverflow.com/questions/8072755/
    def oldmap(func, *iterables):
        """
        map(function, sequence[, sequence, ...]) -> list

        Return a list of the results of applying the function to the
        items of the argument sequence(s).  If more than one sequence is
        given, the function is called with an argument list consisting of
        the corresponding item of each sequence, substituting None for
        missing values when not all sequences have the same length.  If
        the function is None, return a list of the items of the sequence
        (or a list of tuples if more than one sequence).

        Test cases:
        >>> oldmap(None, 'hello world')
        ['h', 'e', 'l', 'l', 'o', ' ', 'w', 'o', 'r', 'l', 'd']

        >>> oldmap(None, range(4))
        [0, 1, 2, 3]

        More test cases are in test_past.test_builtins.
        """
        zipped = itertools.zip_longest(*iterables)
        l = list(zipped)
        if len(l) == 0:
            return []
        if func is None:
            result = l
        else:
            result = list(starmap(func, l))

        # Inspect to see whether it's a simple sequence of tuples
        try:
            if max([len(item) for item in result]) == 1:
                return list(chain.from_iterable(result))
            # return list(flatmap(func, result))
        except TypeError as e:
            # Simple objects like ints have no len()
            pass
        return result

        ############################
        ### For reference, the source code for Py2.7 map function:
        # static PyObject *
        # builtin_map(PyObject *self, PyObject *args)
        # {
        #     typedef struct {
        #         PyObject *it;           /* the iterator object */
        #         int saw_StopIteration;  /* bool:  did the iterator end? */
        #     } sequence;
        #
        #     PyObject *func, *result;
        #     sequence *seqs = NULL, *sqp;
        #     Py_ssize_t n, len;
        #     register int i, j;
        #
        #     n = PyTuple_Size(args);
        #     if (n < 2) {
        #         PyErr_SetString(PyExc_TypeError,
        #                         "map() requires at least two args");
        #         return NULL;
        #     }
        #
        #     func = PyTuple_GetItem(args, 0);
        #     n--;
        #
        #     if (func == Py_None) {
        #         if (PyErr_WarnPy3k("map(None, ...) not supported in 3.x; "
        #                            "use list(...)", 1) < 0)
        #             return NULL;
        #         if (n == 1) {
        #             /* map(None, S) is the same as list(S). */
        #             return PySequence_List(PyTuple_GetItem(args, 1));
        #         }
        #     }
        #
        #     /* Get space for sequence descriptors.  Must NULL out the iterator
        #      * pointers so that jumping to Fail_2 later doesn't see trash.
        #      */
        #     if ((seqs = PyMem_NEW(sequence, n)) == NULL) {
        #         PyErr_NoMemory();
        #         return NULL;
        #     }
        #     for (i = 0; i < n; ++i) {
        #         seqs[i].it = (PyObject*)NULL;
        #         seqs[i].saw_StopIteration = 0;
        #     }
        #
        #     /* Do a first pass to obtain iterators for the arguments, and set len
        #      * to the largest of their lengths.
        #      */
        #     len = 0;
        #     for (i = 0, sqp = seqs; i < n; ++i, ++sqp) {
        #         PyObject *curseq;
        #         Py_ssize_t curlen;
        #
        #         /* Get iterator. */
        #         curseq = PyTuple_GetItem(args, i+1);
        #         sqp->it = PyObject_GetIter(curseq);
        #         if (sqp->it == NULL) {
        #             static char errmsg[] =
        #                 "argument %d to map() must support iteration";
        #             char errbuf[sizeof(errmsg) + 25];
        #             PyOS_snprintf(errbuf, sizeof(errbuf), errmsg, i+2);
        #             PyErr_SetString(PyExc_TypeError, errbuf);
        #             goto Fail_2;
        #         }
        #
        #         /* Update len. */
        #         curlen = _PyObject_LengthHint(curseq, 8);
        #         if (curlen > len)
        #             len = curlen;
        #     }
        #
        #     /* Get space for the result list. */
        #     if ((result = (PyObject *) PyList_New(len)) == NULL)
        #         goto Fail_2;
        #
        #     /* Iterate over the sequences until all have stopped. */
        #     for (i = 0; ; ++i) {
        #         PyObject *alist, *item=NULL, *value;
        #         int numactive = 0;
        #
        #         if (func == Py_None && n == 1)
        #             alist = NULL;
        #         else if ((alist = PyTuple_New(n)) == NULL)
        #             goto Fail_1;
        #
        #         for (j = 0, sqp = seqs; j < n; ++j, ++sqp) {
        #             if (sqp->saw_StopIteration) {
        #                 Py_INCREF(Py_None);
        #                 item = Py_None;
        #             }
        #             else {
        #                 item = PyIter_Next(sqp->it);
        #                 if (item)
        #                     ++numactive;
        #                 else {
        #                     if (PyErr_Occurred()) {
        #                         Py_XDECREF(alist);
        #                         goto Fail_1;
        #                     }
        #                     Py_INCREF(Py_None);
        #                     item = Py_None;
        #                     sqp->saw_StopIteration = 1;
        #                 }
        #             }
        #             if (alist)
        #                 PyTuple_SET_ITEM(alist, j, item);
        #             else
        #                 break;
        #         }
        #
        #         if (!alist)
        #             alist = item;
        #
        #         if (numactive == 0) {
        #             Py_DECREF(alist);
        #             break;
        #         }
        #
        #         if (func == Py_None)
        #             value = alist;
        #         else {
        #             value = PyEval_CallObject(func, alist);
        #             Py_DECREF(alist);
        #             if (value == NULL)
        #                 goto Fail_1;
        #         }
        #         if (i >= len) {
        #             int status = PyList_Append(result, value);
        #             Py_DECREF(value);
        #             if (status < 0)
        #                 goto Fail_1;
        #         }
        #         else if (PyList_SetItem(result, i, value) < 0)
        #             goto Fail_1;
        #     }
        #
        #     if (i < len && PyList_SetSlice(result, i, len, NULL) < 0)
        #         goto Fail_1;
        #
        #     goto Succeed;
        #
        # Fail_1:
        #     Py_DECREF(result);
        # Fail_2:
        #     result = NULL;
        # Succeed:
        #     assert(seqs);
        #     for (i = 0; i < n; ++i)
        #         Py_XDECREF(seqs[i].it);
        #     PyMem_DEL(seqs);
        #     return result;
        # }
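    # Stepping back from the C reference above: the padding semantics that
    # oldmap borrows from itertools.zip_longest can be checked directly.

```python
from itertools import starmap, zip_longest

# Unequal-length inputs are padded with None, as Py2's map() did.
rows = list(zip_longest([1, 2, 3], [10, 20]))
assert rows == [(1, 10), (2, 20), (3, None)]

# starmap then applies the function to each padded tuple, as in oldmap.
add = lambda a, b: a + (b or 0)
assert list(starmap(add, rows)) == [11, 22, 3]
```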

    def oldrange(*args, **kwargs):
        return list(builtins.range(*args, **kwargs))

    def oldzip(*args, **kwargs):
        return list(builtins.zip(*args, **kwargs))

    filter = oldfilter
    map = oldmap
    range = oldrange
    from functools import reduce
    zip = oldzip
    __all__ = ['filter', 'map', 'range', 'reduce', 'zip']

else:
    import __builtin__
    # Python 2-builtin ranges produce lists
    filter = __builtin__.filter
    map = __builtin__.map
    range = __builtin__.range
    reduce = __builtin__.reduce
    zip = __builtin__.zip
    __all__ = []
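
# For reference, the two thin wrappers in the PY3 branch reduce to
# materialising the Py3 iterators. A self-contained restatement:

```python
import builtins

def oldrange(*args):
    # Py2 range() returned a list, so just materialise the Py3 iterator.
    return list(builtins.range(*args))

def oldzip(*args):
    return list(builtins.zip(*args))

assert oldrange(5) == [0, 1, 2, 3, 4]
assert oldzip("ab", [1, 2]) == [("a", 1), ("b", 2)]
```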
# File: past/builtins/__init__.py
"""
A resurrection of some old functions from Python 2 for use in Python 3. These
should be used sparingly, to help with porting efforts, since code using them
is no longer standard Python 3 code.

This module provides the following:

1. Implementations of these builtin functions which have no equivalent on Py3:

- apply
- chr
- cmp
- execfile

2. Aliases:

- intern <- sys.intern
- raw_input <- input
- reduce <- functools.reduce
- reload <- imp.reload
- unichr <- chr
- unicode <- str
- xrange <- range

3. List-producing versions of the corresponding Python 3 iterator-producing functions:

- filter
- map
- range
- zip

4. Forward-ported Py2 types:

- basestring
- dict
- str
- long
- unicode

"""

from future.utils import PY3
from past.builtins.noniterators import (filter, map, range, reduce, zip)
# from past.builtins.misc import (ascii, hex, input, oct, open)
if PY3:
    from past.types import (basestring,
                            olddict as dict,
                            oldstr as str,
                            long,
                            unicode)
else:
    from __builtin__ import (basestring, dict, str, long, unicode)

from past.builtins.misc import (apply, chr, cmp, execfile, intern, oct,
                                raw_input, reload, unichr, unicode, xrange)
from past import utils


if utils.PY3:
    # We only import names that shadow the builtins on Py3. No other namespace
    # pollution on Py3.

    # Only shadow builtins on Py3; no new names
    __all__ = ['filter', 'map', 'range', 'reduce', 'zip',
               'basestring', 'dict', 'str', 'long', 'unicode',
               'apply', 'chr', 'cmp', 'execfile', 'intern', 'raw_input',
               'reload', 'unichr', 'xrange'
              ]

else:
    # No namespace pollution on Py2
    __all__ = []
# File: past/__init__.py (module docstring recovered from the
# __pycache__/__init__.cpython-39.pyc entry; the rest of that file is bytecode)
"""

past: compatibility with Python 2 from Python 3
===============================================

``past`` is a package to aid with Python 2/3 compatibility. Whereas ``future``
contains backports of Python 3 constructs to Python 2, ``past`` provides
implementations of some Python 2 constructs in Python 3 and tools to import and
run Python 2 code in Python 3. It is intended to be used sparingly, as a way of
running old Python 2 code from Python 3 until the code is ported properly.

Potential uses for libraries:

- as a step in porting a Python 2 codebase to Python 3 (e.g. with the ``futurize`` script)
- to provide Python 3 support for previously Python 2-only libraries with the
  same APIs as on Python 2 -- particularly with regard to 8-bit strings (the
  ``past.builtins.str`` type).
- to aid in providing minimal-effort Python 3 support for applications using
  libraries that do not yet wish to upgrade their code properly to Python 3, or
  wish to upgrade it gradually to Python 3 style.


Here are some code examples that run identically on Python 3 and 2::

    >>> from past.builtins import str as oldstr

    >>> philosopher = oldstr(u'孔子'.encode('utf-8'))
    >>> # This now behaves like a Py2 byte-string on both Py2 and Py3.
    >>> # For example, indexing returns a Python 2-like string object, not
    >>> # an integer:
    >>> philosopher[0]
    'å'
    >>> type(philosopher[0])
    <past.builtins.oldstr>

    >>> # List-producing versions of range, reduce, map, filter
    >>> from past.builtins import range, reduce
    >>> range(10)
    [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
    >>> reduce(lambda x, y: x+y, [1, 2, 3, 4, 5])
    15

    >>> # Other functions removed in Python 3 are resurrected ...
    >>> from past.builtins import execfile
    >>> execfile('myfile.py')

    >>> from past.builtins import raw_input
    >>> name = raw_input('What is your name? ')
    What is your name? [cursor]

    >>> from past.builtins import reload
    >>> reload(mymodule)   # equivalent to imp.reload(mymodule) in Python 3

    >>> from past.builtins import xrange
    >>> for i in xrange(10):
    ...     pass


It also provides import hooks so you can import and use Python 2 modules like
this::

    $ python3

    >>> from past.translation import autotranslate
    >>> autotranslate('mypy2module')
    >>> import mypy2module

until the authors of the Python 2 modules have upgraded their code. Then, for
example::

    >>> mypy2module.func_taking_py2_string(oldstr(b'abcd'))


Credits
-------

:Author:  Ed Schofield, Jordan M. Adler, et al
:Sponsor: Python Charmers: https://pythoncharmers.com


Licensing
---------
Copyright 2013-2024 Python Charmers, Australia.
The software is distributed under an MIT licence. See LICENSE.txt.
"""

# File: past/types/basestring.py
"""
An implementation of the basestring type for Python 3

Example use:

>>> s = b'abc'
>>> assert isinstance(s, basestring)
>>> from past.types import str as oldstr
>>> s2 = oldstr(b'abc')
>>> assert isinstance(s2, basestring)

"""

import sys

from past.utils import with_metaclass, PY2

if PY2:
    str = unicode

ver = sys.version_info[:2]


class BaseBaseString(type):
    def __instancecheck__(cls, instance):
        return isinstance(instance, (bytes, str))

    def __subclasscheck__(cls, subclass):
        return super(BaseBaseString, cls).__subclasscheck__(subclass) or issubclass(subclass, (bytes, str))


class basestring(with_metaclass(BaseBaseString)):
    """
    A minimal backport of the Python 2 basestring type to Py3
    """


__all__ = ['basestring']
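
# Self-contained sketch of the metaclass trick used above: overriding
# __instancecheck__ makes isinstance() treat bytes and str as one type
# (illustrative names, not the real basestring implementation):

```python
class _MetaBaseString(type):
    def __instancecheck__(cls, instance):
        # isinstance(x, mybasestring) delegates to this hook.
        return isinstance(instance, (bytes, str))

class mybasestring(metaclass=_MetaBaseString):
    pass

assert isinstance(b"abc", mybasestring)
assert isinstance("abc", mybasestring)
assert not isinstance(123, mybasestring)
```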
# File: past/types/__init__.py
"""
Forward-ports of types from Python 2 for use with Python 3:

- ``basestring``: equivalent to ``(str, bytes)`` in ``isinstance`` checks
- ``dict``: with list-producing .keys() etc. methods
- ``str``: bytes-like, but iterating over them doesn't produce integers
- ``long``: alias of Py3 int with ``L`` suffix in the ``repr``
- ``unicode``: alias of Py3 str with ``u`` prefix in the ``repr``

"""

from past import utils

if utils.PY2:
    import __builtin__
    basestring = __builtin__.basestring
    dict = __builtin__.dict
    str = __builtin__.str
    long = __builtin__.long
    unicode = __builtin__.unicode
    __all__ = []
else:
    from .basestring import basestring
    from .olddict import olddict
    from .oldstr import oldstr
    long = int
    unicode = str
    # from .unicode import unicode
    __all__ = ['basestring', 'olddict', 'oldstr', 'long', 'unicode']
# File: past/types/olddict.py
"""
A dict subclass for Python 3 that behaves like Python 2's dict

Example use:

>>> from past.builtins import dict
>>> d1 = dict()    # instead of {} for an empty dict
>>> d2 = dict(key1='value1', key2='value2')

The keys, values and items methods now return lists on Python 3.x and there are
methods for iterkeys, itervalues, iteritems, and viewkeys etc.

>>> for d in (d1, d2):
...     assert isinstance(d.keys(), list)
...     assert isinstance(d.values(), list)
...     assert isinstance(d.items(), list)
"""

import sys

from past.utils import with_metaclass


_builtin_dict = dict
ver = sys.version_info[:2]


class BaseOldDict(type):
    def __instancecheck__(cls, instance):
        return isinstance(instance, _builtin_dict)


class olddict(with_metaclass(BaseOldDict, _builtin_dict)):
    """
    A forward port of the Python 2 dict object to Py3
    """
    iterkeys = _builtin_dict.keys
    viewkeys = _builtin_dict.keys

    def keys(self):
        return list(super(olddict, self).keys())

    itervalues = _builtin_dict.values
    viewvalues = _builtin_dict.values

    def values(self):
        return list(super(olddict, self).values())

    iteritems = _builtin_dict.items
    viewitems = _builtin_dict.items

    def items(self):
        return list(super(olddict, self).items())

    def has_key(self, k):
        """
        D.has_key(k) -> True if D has a key k, else False
        """
        return k in self

    # def __new__(cls, *args, **kwargs):
    #     """
    #     dict() -> new empty dictionary
    #     dict(mapping) -> new dictionary initialized from a mapping object's
    #         (key, value) pairs
    #     dict(iterable) -> new dictionary initialized as if via:
    #         d = {}
    #         for k, v in iterable:
    #             d[k] = v
    #     dict(**kwargs) -> new dictionary initialized with the name=value pairs
    #         in the keyword argument list.  For example:  dict(one=1, two=2)

    #     """
    #
    #     if len(args) == 0:
    #         return super(olddict, cls).__new__(cls)
    #     # Was: elif isinstance(args[0], newbytes):
    #     # We use type() instead of the above because we're redefining
    #     # this to be True for all unicode string subclasses. Warning:
    #     # This may render newstr un-subclassable.
    #     elif type(args[0]) == olddict:
    #         return args[0]
    #     # elif isinstance(args[0], _builtin_dict):
    #     #     value = args[0]
    #     else:
    #         value = args[0]
    #     return super(olddict, cls).__new__(cls, value)

    def __native__(self):
        """
        Hook for the past.utils.native() function
        """
        return super(olddict, self)


__all__ = ['olddict']
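
# Minimal sketch of the list-producing dict pattern above -- a plain
# subclass, without the real olddict's metaclass (illustrative name):

```python
class listdict(dict):
    # Py2-style: keys()/values()/items() return lists, not views.
    def keys(self):
        return list(super().keys())

    def values(self):
        return list(super().values())

    def items(self):
        return list(super().items())

    def has_key(self, k):
        return k in self

d = listdict(a=1, b=2)
assert isinstance(d.keys(), list)
assert sorted(d.items()) == [("a", 1), ("b", 2)]
assert d.has_key("a")
```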
# File: past/types/oldstr.py
"""
Pure-Python implementation of a Python 2-like str object for Python 3.
"""

from numbers import Integral

from past.utils import PY2, with_metaclass

if PY2:
    from collections import Iterable
else:
    from collections.abc import Iterable

_builtin_bytes = bytes


class BaseOldStr(type):
    def __instancecheck__(cls, instance):
        return isinstance(instance, _builtin_bytes)


def unescape(s):
    r"""
    Interprets strings with escape sequences

    Example:
    >>> s = unescape(r'abc\\def')   # i.e. 'abc\\\\def'
    >>> print(s)
    abc\def
    >>> s2 = unescape('abc\\ndef')
    >>> len(s2)
    7
    >>> print(s2)
    abc
    def
    """
    return s.encode().decode('unicode_escape')
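
# The docstring examples above boil down to this round-trip: encode to
# bytes, then decode with the 'unicode_escape' codec. Counting backslashes
# by eye is error-prone, so the lengths are asserted explicitly.

```python
raw = 'abc\\ndef'                     # 8 chars: backslash + 'n' are two
cooked = raw.encode().decode('unicode_escape')
assert len(raw) == 8
assert cooked == 'abc\ndef'           # 7 chars: a real newline now
assert len(cooked) == 7
```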


class oldstr(with_metaclass(BaseOldStr, _builtin_bytes)):
    """
    A forward port of the Python 2 8-bit string object to Py3
    """
    # Python 2 strings have no __iter__ method:
    @property
    def __iter__(self):
        raise AttributeError

    def __dir__(self):
        return [thing for thing in dir(_builtin_bytes) if thing != '__iter__']

    # def __new__(cls, *args, **kwargs):
    #     """
    #     From the Py3 bytes docstring:

    #     bytes(iterable_of_ints) -> bytes
    #     bytes(string, encoding[, errors]) -> bytes
    #     bytes(bytes_or_buffer) -> immutable copy of bytes_or_buffer
    #     bytes(int) -> bytes object of size given by the parameter initialized with null bytes
    #     bytes() -> empty bytes object
    #
    #     Construct an immutable array of bytes from:
    #       - an iterable yielding integers in range(256)
    #       - a text string encoded using the specified encoding
    #       - any object implementing the buffer API.
    #       - an integer
    #     """
    #
    #     if len(args) == 0:
    #         return super(oldstr, cls).__new__(cls)
    #     # Was: elif isinstance(args[0], oldstr):
    #     # We use type() instead of the above because we're redefining
    #     # this to be True for all unicode string subclasses. Warning:
    #     # This may render oldstr un-subclassable.
    #     elif type(args[0]) == oldstr:
    #         return args[0]
    #     elif isinstance(args[0], _builtin_bytes):
    #         value = args[0]
    #     elif isinstance(args[0], unicode):
    #         if 'encoding' not in kwargs:
    #             raise TypeError('unicode string argument without an encoding')
    #         ###
    #         # Was:   value = args[0].encode(**kwargs)
    #         # Python 2.6 string encode() method doesn't take kwargs:
    #         # Use this instead:
    #         newargs = [kwargs['encoding']]
    #         if 'errors' in kwargs:
    #             newargs.append(kwargs['errors'])
    #         value = args[0].encode(*newargs)
    #         ###
    #     elif isinstance(args[0], Iterable):
    #         if len(args[0]) == 0:
    #             # What is this?
    #             raise ValueError('unknown argument type')
    #         elif len(args[0]) > 0 and isinstance(args[0][0], Integral):
    #             # It's a list of integers
    #             value = b''.join([chr(x) for x in args[0]])
    #         else:
    #             raise ValueError('item cannot be interpreted as an integer')
    #     elif isinstance(args[0], Integral):
    #         if args[0] < 0:
    #             raise ValueError('negative count')
    #         value = b'\x00' * args[0]
    #     else:
    #         value = args[0]
    #     return super(oldstr, cls).__new__(cls, value)

    def __repr__(self):
        s = super(oldstr, self).__repr__()   # e.g. "b'abc'" on Py3; strip the leading 'b'
        return s[1:]

    def __str__(self):
        s = super(oldstr, self).__str__()   # e.g. "b'abc'" or "b'abc\\ndef'"
        # TODO: fix this:
        assert s[:2] == "b'" and s[-1] == "'"
        return unescape(s[2:-1])            # e.g. 'abc'    or 'abc\ndef'

    def __getitem__(self, y):
        if isinstance(y, Integral):
            return super(oldstr, self).__getitem__(slice(y, y+1))
        else:
            return super(oldstr, self).__getitem__(y)

    def __getslice__(self, *args):
        return self.__getitem__(slice(*args))

    def __contains__(self, key):
        if isinstance(key, int):
            # Py2 strings never contain integers
            return False
        return super(oldstr, self).__contains__(key)

    def __native__(self):
        return bytes(self)


__all__ = ['oldstr']
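The `BaseOldStr.__instancecheck__` override above is what makes `isinstance(b'abc', oldstr)` succeed for plain bytes objects. A minimal self-contained sketch of the trick, using Py3's native metaclass syntax rather than `with_metaclass`:

```python
# Sketch of the metaclass trick used by oldstr above: overriding
# __instancecheck__ on the metaclass makes isinstance() accept every
# plain bytes object, not just direct oldstr instances.
class BaseOldStr(type):
    def __instancecheck__(cls, instance):
        return isinstance(instance, bytes)

class oldstr(bytes, metaclass=BaseOldStr):
    pass

# Plain bytes pass the isinstance check even though their type is bytes:
assert isinstance(b'abc', oldstr)
# oldstr instances are still genuine bytes subclass instances:
assert issubclass(oldstr, bytes)
```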
# [compiled .pyc entries elided: past/types/__pycache__/{__init__,oldstr,olddict,basestring}.cpython-39.pyc]

# past/utils/__init__.py
"""
Various non-built-in utility functions and definitions for Py2
compatibility in Py3.

For example:

    >>> # The old_div() function behaves like Python 2's / operator
    >>> # without "from __future__ import division"
    >>> from past.utils import old_div
    >>> old_div(3, 2)    # like 3/2 in Py2
    1
    >>> old_div(3, 2.0)  # like 3/2.0 in Py2
    1.5
"""

import sys
import numbers

PY3 = sys.version_info[0] >= 3
PY2 = sys.version_info[0] == 2
PYPY = hasattr(sys, 'pypy_translation_info')


def with_metaclass(meta, *bases):
    """
    Function from jinja2/_compat.py. License: BSD.

    Use it like this::

        class BaseForm(object):
            pass

        class FormType(type):
            pass

        class Form(with_metaclass(FormType, BaseForm)):
            pass

    This requires a bit of explanation: the basic idea is to make a
    dummy metaclass for one level of class instantiation that replaces
    itself with the actual metaclass.  Because of internal type checks
    we also need to make sure that we downgrade the custom metaclass
    for one level to something closer to type (that's why __call__ and
    __init__ comes back from type etc.).

    This has the advantage over six.with_metaclass of not introducing
    dummy classes into the final MRO.
    """
    class metaclass(meta):
        __call__ = type.__call__
        __init__ = type.__init__
        def __new__(cls, name, this_bases, d):
            if this_bases is None:
                return type.__new__(cls, name, (), d)
            return meta(name, bases, d)
    return metaclass('temporary_class', None, {})
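The docstring above explains the dummy-metaclass substitution in words; a self-contained check (the helper is restated verbatim so it can run on its own) shows the real metaclass being applied with no dummy class left in the MRO:

```python
# Self-contained check of the with_metaclass() pattern above: the temporary
# class is replaced during creation of Form, the real metaclass (FormType)
# takes over, and no dummy base ends up in the MRO.
def with_metaclass(meta, *bases):
    class metaclass(meta):
        __call__ = type.__call__
        __init__ = type.__init__
        def __new__(cls, name, this_bases, d):
            if this_bases is None:
                return type.__new__(cls, name, (), d)
            return meta(name, bases, d)
    return metaclass('temporary_class', None, {})

class FormType(type):
    pass

class Form(with_metaclass(FormType, object)):
    pass

assert type(Form) is FormType            # real metaclass applied
assert Form.__mro__ == (Form, object)    # no dummy class in the MRO
```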


def native(obj):
    """
    On Py2, this is a no-op: native(obj) -> obj

    On Py3, returns the corresponding native Py3 types that are
    superclasses for forward-ported objects from Py2:

    >>> from past.builtins import str, dict

    >>> native(str(b'ABC'))   # Output on Py3 follows. On Py2, output is 'ABC'
    b'ABC'
    >>> type(native(str(b'ABC')))
    bytes

    Existing native types on Py3 will be returned unchanged:

    >>> type(native(b'ABC'))
    bytes
    """
    if hasattr(obj, '__native__'):
        return obj.__native__()
    else:
        return obj


# An alias for future.utils.old_div():
def old_div(a, b):
    """
    Equivalent to ``a / b`` on Python 2 without ``from __future__ import
    division``.

    TODO: generalize this to other objects (like arrays etc.)
    """
    if isinstance(a, numbers.Integral) and isinstance(b, numbers.Integral):
        return a // b
    else:
        return a / b

__all__ = ['PY3', 'PY2', 'PYPY', 'with_metaclass', 'native', 'old_div']
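A few worked cases make the `old_div()` dispatch above concrete (the function is restated so the example is self-contained):

```python
# Worked examples of old_div() above: two integral operands floor-divide
# (Python 2 `/` semantics); any float operand restores true division.
import numbers

def old_div(a, b):
    if isinstance(a, numbers.Integral) and isinstance(b, numbers.Integral):
        return a // b
    return a / b

assert old_div(3, 2) == 1        # like 3/2 in Py2
assert old_div(3, 2.0) == 1.5    # like 3/2.0 in Py2
assert old_div(-7, 2) == -4      # floors toward -inf, not truncation
```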
# [compiled .pyc entry elided: past/utils/__pycache__/__init__.cpython-39.pyc]

# backports/configparser/__init__.py
"""Configuration file parser.

A configuration file consists of sections, led by a "[section]" header,
and followed by "name: value" entries, with continuations and such in
the style of RFC 822.

Intrinsic defaults can be specified by passing them into the
ConfigParser constructor as a dictionary.

class:

ConfigParser -- responsible for parsing a list of
                    configuration files, and managing the parsed database.

    methods:

    __init__(defaults=None, dict_type=_default_dict, allow_no_value=False,
             delimiters=('=', ':'), comment_prefixes=('#', ';'),
             inline_comment_prefixes=None, strict=True,
             empty_lines_in_values=True, default_section='DEFAULT',
             interpolation=<unset>, converters=<unset>):
        Create the parser. When `defaults' is given, it is initialized into the
        dictionary of intrinsic defaults. The keys must be strings, the values
        must be appropriate for %()s string interpolation.

        When `dict_type' is given, it will be used to create the dictionary
        objects for the list of sections, for the options within a section, and
        for the default values.

        When `delimiters' is given, it will be used as the set of substrings
        that divide keys from values.

        When `comment_prefixes' is given, it will be used as the set of
        substrings that prefix comments in empty lines. Comments can be
        indented.

        When `inline_comment_prefixes' is given, it will be used as the set of
        substrings that prefix comments in non-empty lines.

        When `strict` is True, the parser won't allow for any section or option
        duplicates while reading from a single source (file, string or
        dictionary). Default is True.

        When `empty_lines_in_values' is False (default: True), each empty line
        marks the end of an option. Otherwise, internal empty lines of
        a multiline option are kept as part of the value.

        When `allow_no_value' is True (default: False), options without
        values are accepted; the value presented for these is None.

        When `default_section' is given, the name of the special section is
        named accordingly. By default it is called ``"DEFAULT"`` but this can
        be customized to point to any other valid section name. Its current
        value can be retrieved using the ``parser_instance.default_section``
        attribute and may be modified at runtime.

        When `interpolation` is given, it should be an Interpolation subclass
        instance. It will be used as the handler for option value
        pre-processing when using getters. RawConfigParser objects don't do
        any sort of interpolation, whereas ConfigParser uses an instance of
        BasicInterpolation. The library also provides a ``zc.buildout``
        inspired ExtendedInterpolation implementation.

        When `converters` is given, it should be a dictionary where each key
        represents the name of a type converter and each value is a callable
        implementing the conversion from string to the desired datatype. Every
        converter gets its corresponding get*() method on the parser object and
        section proxies.

    sections()
        Return all the configuration section names, sans DEFAULT.

    has_section(section)
        Return whether the given section exists.

    has_option(section, option)
        Return whether the given option exists in the given section.

    options(section)
        Return list of configuration options for the named section.

    read(filenames, encoding=None)
        Read and parse the iterable of named configuration files, given by
        name.  A single filename is also allowed.  Non-existing files
        are ignored.  Return list of successfully read files.

    read_file(f, filename=None)
        Read and parse one configuration file, given as a file object.
        The filename defaults to f.name; it is only used in error
        messages (if f has no `name' attribute, the string `<???>' is used).

    read_string(string)
        Read configuration from a given string.

    read_dict(dictionary)
        Read configuration from a dictionary. Keys are section names,
        values are dictionaries with keys and values that should be present
        in the section. If the used dictionary type preserves order, sections
        and their keys will be added in order. Values are automatically
        converted to strings.

    get(section, option, raw=False, vars=None, fallback=_UNSET)
        Return a string value for the named option.  All % interpolations are
        expanded in the return values, based on the defaults passed into the
        constructor and the DEFAULT section.  Additional substitutions may be
        provided using the `vars' argument, which must be a dictionary whose
        contents override any pre-existing defaults. If `option' is a key in
        `vars', the value from `vars' is used.

    getint(section, options, raw=False, vars=None, fallback=_UNSET)
        Like get(), but convert value to an integer.

    getfloat(section, options, raw=False, vars=None, fallback=_UNSET)
        Like get(), but convert value to a float.

    getboolean(section, options, raw=False, vars=None, fallback=_UNSET)
        Like get(), but convert value to a boolean (currently case
        insensitively defined as 0, false, no, off for False, and 1, true,
        yes, on for True).  Returns False or True.

    items(section=_UNSET, raw=False, vars=None)
        If section is given, return a list of tuples with (name, value) for
        each option in the section. Otherwise, return a list of tuples with
        (section_name, section_proxy) for each section, including DEFAULTSECT.

    remove_section(section)
        Remove the given file section and all its options.

    remove_option(section, option)
        Remove the given option from the given section.

    set(section, option, value)
        Set the given option.

    write(fp, space_around_delimiters=True)
        Write the configuration state in .ini format. If
        `space_around_delimiters' is True (the default), delimiters
        between keys and values are surrounded by spaces.
"""

from collections.abc import MutableMapping
from collections import ChainMap as _ChainMap
import functools
from .compat import io
import itertools
import os
import re
import sys
import warnings


__all__ = [
    "NoSectionError",
    "DuplicateOptionError",
    "DuplicateSectionError",
    "NoOptionError",
    "InterpolationError",
    "InterpolationDepthError",
    "InterpolationMissingOptionError",
    "InterpolationSyntaxError",
    "ParsingError",
    "MissingSectionHeaderError",
    "ConfigParser",
    "SafeConfigParser",
    "RawConfigParser",
    "Interpolation",
    "BasicInterpolation",
    "ExtendedInterpolation",
    "LegacyInterpolation",
    "SectionProxy",
    "ConverterMapping",
    "DEFAULTSECT",
    "MAX_INTERPOLATION_DEPTH",
]

_default_dict = dict
DEFAULTSECT = "DEFAULT"

MAX_INTERPOLATION_DEPTH = 10


# exception classes
class Error(Exception):
    """Base class for ConfigParser exceptions."""

    def __init__(self, msg=''):
        self.message = msg
        Exception.__init__(self, msg)

    def __repr__(self):
        return self.message

    __str__ = __repr__


class NoSectionError(Error):
    """Raised when no section matches a requested option."""

    def __init__(self, section):
        Error.__init__(self, 'No section: %r' % (section,))
        self.section = section
        self.args = (section,)


class DuplicateSectionError(Error):
    """Raised when a section is repeated in an input source.

    Possible repetitions that raise this exception are: multiple creation
    using the API or in strict parsers when a section is found more than once
    in a single input file, string or dictionary.
    """

    def __init__(self, section, source=None, lineno=None):
        msg = [repr(section), " already exists"]
        if source is not None:
            message = ["While reading from ", repr(source)]
            if lineno is not None:
                message.append(" [line {0:2d}]".format(lineno))
            message.append(": section ")
            message.extend(msg)
            msg = message
        else:
            msg.insert(0, "Section ")
        Error.__init__(self, "".join(msg))
        self.section = section
        self.source = source
        self.lineno = lineno
        self.args = (section, source, lineno)


class DuplicateOptionError(Error):
    """Raised by strict parsers when an option is repeated in an input source.

    Current implementation raises this exception only when an option is found
    more than once in a single file, string or dictionary.
    """

    def __init__(self, section, option, source=None, lineno=None):
        msg = [repr(option), " in section ", repr(section), " already exists"]
        if source is not None:
            message = ["While reading from ", repr(source)]
            if lineno is not None:
                message.append(" [line {0:2d}]".format(lineno))
            message.append(": option ")
            message.extend(msg)
            msg = message
        else:
            msg.insert(0, "Option ")
        Error.__init__(self, "".join(msg))
        self.section = section
        self.option = option
        self.source = source
        self.lineno = lineno
        self.args = (section, option, source, lineno)
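The strict-mode behaviour that `DuplicateSectionError` and `DuplicateOptionError` describe can be triggered from a single source; this uses the stdlib `configparser`, where `strict=True` is the default:

```python
# Strict parsers reject a repeated section within one input source,
# raising the DuplicateSectionError defined above.
import configparser

parser = configparser.ConfigParser(strict=True)
try:
    parser.read_string("[a]\nx = 1\n\n[a]\ny = 2\n")
    raised = False
except configparser.DuplicateSectionError as err:
    raised = True
    assert err.section == 'a'   # offending section name is recorded
assert raised
```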


class NoOptionError(Error):
    """A requested option was not found."""

    def __init__(self, option, section):
        Error.__init__(self, "No option %r in section: %r" % (option, section))
        self.option = option
        self.section = section
        self.args = (option, section)


class InterpolationError(Error):
    """Base class for interpolation-related exceptions."""

    def __init__(self, option, section, msg):
        Error.__init__(self, msg)
        self.option = option
        self.section = section
        self.args = (option, section, msg)


class InterpolationMissingOptionError(InterpolationError):
    """A string substitution required a setting which was not available."""

    def __init__(self, option, section, rawval, reference):
        msg = (
            "Bad value substitution: option {!r} in section {!r} contains "
            "an interpolation key {!r} which is not a valid option name. "
            "Raw value: {!r}".format(option, section, reference, rawval)
        )
        InterpolationError.__init__(self, option, section, msg)
        self.reference = reference
        self.args = (option, section, rawval, reference)


class InterpolationSyntaxError(InterpolationError):
    """Raised when the source text contains invalid syntax.

    Current implementation raises this exception when the source text into
    which substitutions are made does not conform to the required syntax.
    """


class InterpolationDepthError(InterpolationError):
    """Raised when substitutions are nested too deeply."""

    def __init__(self, option, section, rawval):
        msg = (
            "Recursion limit exceeded in value substitution: option {!r} "
            "in section {!r} contains an interpolation key which "
            "cannot be substituted in {} steps. Raw value: {!r}"
            "".format(option, section, MAX_INTERPOLATION_DEPTH, rawval)
        )
        InterpolationError.__init__(self, option, section, msg)
        self.args = (option, section, rawval)


class ParsingError(Error):
    """Raised when a configuration file does not follow legal syntax."""

    def __init__(self, source=None, filename=None):
        # Exactly one of `source'/`filename' arguments has to be given.
        # `filename' kept for compatibility.
        if filename and source:
            raise ValueError(
                "Cannot specify both `filename' and `source'. " "Use `source'."
            )
        elif not filename and not source:
            raise ValueError("Required argument `source' not given.")
        elif filename:
            source = filename
        Error.__init__(self, 'Source contains parsing errors: %r' % source)
        self.source = source
        self.errors = []
        self.args = (source,)

    @property
    def filename(self):
        """Deprecated, use `source'."""
        warnings.warn(
            "The 'filename' attribute will be removed in future versions.  "
            "Use 'source' instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        return self.source

    @filename.setter
    def filename(self, value):
        """Deprecated, user `source'."""
        warnings.warn(
            "The 'filename' attribute will be removed in future versions.  "
            "Use 'source' instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        self.source = value

    def append(self, lineno, line):
        self.errors.append((lineno, line))
        self.message += '\n\t[line %2d]: %s' % (lineno, line)


class MissingSectionHeaderError(ParsingError):
    """Raised when a key-value pair is found before any section header."""

    def __init__(self, filename, lineno, line):
        Error.__init__(
            self,
            'File contains no section headers.\nfile: %r, line: %d\n%r'
            % (filename, lineno, line),
        )
        self.source = filename
        self.lineno = lineno
        self.line = line
        self.args = (filename, lineno, line)


# Used in parser getters to indicate the default behaviour when a specific
# option is not found it to raise an exception. Created to enable `None' as
# a valid fallback value.
_UNSET = object()


class Interpolation:
    """Dummy interpolation that passes the value through with no changes."""

    def before_get(self, parser, section, option, value, defaults):
        return value

    def before_set(self, parser, section, option, value):
        return value

    def before_read(self, parser, section, option, value):
        return value

    def before_write(self, parser, section, option, value):
        return value


class BasicInterpolation(Interpolation):
    """Interpolation as implemented in the classic ConfigParser.

    The option values can contain format strings which refer to other values in
    the same section, or values in the special default section.

    For example:

        something: %(dir)s/whatever

    would resolve the "%(dir)s" to the value of dir.  All reference
    expansions are done late, on demand. If a user needs to use a bare % in
    a configuration file, she can escape it by writing %%. Other % usage
    is considered a user error and raises `InterpolationSyntaxError'."""

    _KEYCRE = re.compile(r"%\(([^)]+)\)s")

    def before_get(self, parser, section, option, value, defaults):
        L = []
        self._interpolate_some(parser, option, L, value, section, defaults, 1)
        return ''.join(L)

    def before_set(self, parser, section, option, value):
        tmp_value = value.replace('%%', '')  # escaped percent signs
        tmp_value = self._KEYCRE.sub('', tmp_value)  # valid syntax
        if '%' in tmp_value:
            raise ValueError(
                "invalid interpolation syntax in %r at "
                "position %d" % (value, tmp_value.find('%'))
            )
        return value

    def _interpolate_some(  # noqa: C901
        self, parser, option, accum, rest, section, map, depth
    ):
        rawval = parser.get(section, option, raw=True, fallback=rest)
        if depth > MAX_INTERPOLATION_DEPTH:
            raise InterpolationDepthError(option, section, rawval)
        while rest:
            p = rest.find("%")
            if p < 0:
                accum.append(rest)
                return
            if p > 0:
                accum.append(rest[:p])
                rest = rest[p:]
            # p is no longer used
            c = rest[1:2]
            if c == "%":
                accum.append("%")
                rest = rest[2:]
            elif c == "(":
                m = self._KEYCRE.match(rest)
                if m is None:
                    raise InterpolationSyntaxError(
                        option,
                        section,
                        "bad interpolation variable reference %r" % rest,
                    )
                var = parser.optionxform(m.group(1))
                rest = rest[m.end() :]
                try:
                    v = map[var]
                except KeyError:
                    raise InterpolationMissingOptionError(
                        option, section, rawval, var
                    ) from None
                if "%" in v:
                    self._interpolate_some(
                        parser, option, accum, v, section, map, depth + 1
                    )
                else:
                    accum.append(v)
            else:
                raise InterpolationSyntaxError(
                    option,
                    section,
                    "'%%' must be followed by '%%' or '(', " "found: %r" % (rest,),
                )
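The `%(name)s` expansion that `BasicInterpolation` implements above can be seen end to end via the stdlib `configparser`, whose default interpolation is this same class:

```python
# Demo of the "%(name)s" expansion implemented by BasicInterpolation above:
# references resolve lazily on get(), %% escapes a literal percent, and
# raw=True skips interpolation entirely.
import configparser

parser = configparser.ConfigParser()   # BasicInterpolation by default
parser.read_string("""
[paths]
dir = /var/data
something = %(dir)s/whatever
escaped = 100%%
""")

assert parser.get('paths', 'something') == '/var/data/whatever'
assert parser.get('paths', 'escaped') == '100%'
assert parser.get('paths', 'something', raw=True) == '%(dir)s/whatever'
```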


class ExtendedInterpolation(Interpolation):
    """Advanced variant of interpolation, supports the syntax used by
    `zc.buildout'. Enables interpolation between sections."""

    _KEYCRE = re.compile(r"\$\{([^}]+)\}")

    def before_get(self, parser, section, option, value, defaults):
        L = []
        self._interpolate_some(parser, option, L, value, section, defaults, 1)
        return ''.join(L)

    def before_set(self, parser, section, option, value):
        tmp_value = value.replace('$$', '')  # escaped dollar signs
        tmp_value = self._KEYCRE.sub('', tmp_value)  # valid syntax
        if '$' in tmp_value:
            raise ValueError(
                "invalid interpolation syntax in %r at "
                "position %d" % (value, tmp_value.find('$'))
            )
        return value

    def _interpolate_some(  # noqa: C901
        self, parser, option, accum, rest, section, map, depth
    ):
        rawval = parser.get(section, option, raw=True, fallback=rest)
        if depth > MAX_INTERPOLATION_DEPTH:
            raise InterpolationDepthError(option, section, rawval)
        while rest:
            p = rest.find("$")
            if p < 0:
                accum.append(rest)
                return
            if p > 0:
                accum.append(rest[:p])
                rest = rest[p:]
            # p is no longer used
            c = rest[1:2]
            if c == "$":
                accum.append("$")
                rest = rest[2:]
            elif c == "{":
                m = self._KEYCRE.match(rest)
                if m is None:
                    raise InterpolationSyntaxError(
                        option,
                        section,
                        "bad interpolation variable reference %r" % rest,
                    )
                path = m.group(1).split(':')
                rest = rest[m.end() :]
                sect = section
                opt = option
                try:
                    if len(path) == 1:
                        opt = parser.optionxform(path[0])
                        v = map[opt]
                    elif len(path) == 2:
                        sect = path[0]
                        opt = parser.optionxform(path[1])
                        v = parser.get(sect, opt, raw=True)
                    else:
                        raise InterpolationSyntaxError(
                            option, section, "More than one ':' found: %r" % (rest,)
                        )
                except (KeyError, NoSectionError, NoOptionError):
                    raise InterpolationMissingOptionError(
                        option, section, rawval, ":".join(path)
                    ) from None
                if "$" in v:
                    self._interpolate_some(
                        parser,
                        opt,
                        accum,
                        v,
                        sect,
                        dict(parser.items(sect, raw=True)),
                        depth + 1,
                    )
                else:
                    accum.append(v)
            else:
                raise InterpolationSyntaxError(
                    option,
                    section,
                    "'$' must be followed by '$' or '{', found: %r" % (rest,),
                )

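Usage sketch (not part of the module): the `${option}` and `${section:option}` forms handled by `_interpolate_some` above, exercised through the stdlib `configparser`, whose `ExtendedInterpolation` this vendored copy mirrors.

```python
import configparser

parser = configparser.ConfigParser(
    interpolation=configparser.ExtendedInterpolation()
)
parser.read_string("""
[paths]
home = /home/user
docs = ${home}/docs

[app]
config = ${paths:docs}/app.ini
""")

# ${home} resolves within the same section; ${paths:docs} crosses
# sections and is itself interpolated recursively (depth + 1 above).
print(parser.get("paths", "docs"))   # /home/user/docs
print(parser.get("app", "config"))   # /home/user/docs/app.ini
```

Note that the cross-section lookup fetches the raw value first, so nested `${...}` references are expanded recursively up to `MAX_INTERPOLATION_DEPTH`.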

class LegacyInterpolation(Interpolation):
    """Deprecated interpolation used in old versions of ConfigParser.
    Use BasicInterpolation or ExtendedInterpolation instead."""

    _KEYCRE = re.compile(r"%\(([^)]*)\)s|.")

    def before_get(self, parser, section, option, value, vars):
        rawval = value
        depth = MAX_INTERPOLATION_DEPTH
        while depth:  # Loop through this until it's done
            depth -= 1
            if value and "%(" in value:
                replace = functools.partial(self._interpolation_replace, parser=parser)
                value = self._KEYCRE.sub(replace, value)
                try:
                    value = value % vars
                except KeyError as e:
                    raise InterpolationMissingOptionError(
                        option, section, rawval, e.args[0]
                    ) from None
            else:
                break
        if value and "%(" in value:
            raise InterpolationDepthError(option, section, rawval)
        return value

    def before_set(self, parser, section, option, value):
        return value

    @staticmethod
    def _interpolation_replace(match, parser):
        s = match.group(1)
        if s is None:
            return match.group()
        else:
            return "%%(%s)s" % parser.optionxform(s)


class RawConfigParser(MutableMapping):
    """ConfigParser that does not do interpolation."""

    # Regular expressions for parsing section headers and options
    _SECT_TMPL = r"""
        \[                                 # [
        (?P<header>.+)                     # very permissive!
        \]                                 # ]
        """
    _OPT_TMPL = r"""
        (?P<option>.*?)                    # very permissive!
        \s*(?P<vi>{delim})\s*              # any number of space/tab,
                                           # followed by any of the
                                           # allowed delimiters,
                                           # followed by any space/tab
        (?P<value>.*)$                     # everything up to eol
        """
    _OPT_NV_TMPL = r"""
        (?P<option>.*?)                    # very permissive!
        \s*(?:                             # any number of space/tab,
        (?P<vi>{delim})\s*                 # optionally followed by
                                           # any of the allowed
                                           # delimiters, followed by any
                                           # space/tab
        (?P<value>.*))?$                   # everything up to eol
        """
    # Interpolation algorithm to be used if the user does not specify another
    _DEFAULT_INTERPOLATION = Interpolation()
    # Compiled regular expression for matching sections
    SECTCRE = re.compile(_SECT_TMPL, re.VERBOSE)
    # Compiled regular expression for matching options with typical separators
    OPTCRE = re.compile(_OPT_TMPL.format(delim="=|:"), re.VERBOSE)
    # Compiled regular expression for matching options with optional values
    # delimited using typical separators
    OPTCRE_NV = re.compile(_OPT_NV_TMPL.format(delim="=|:"), re.VERBOSE)
    # Compiled regular expression for matching leading whitespace in a line
    NONSPACECRE = re.compile(r"\S")
    # Possible boolean values in the configuration.
    BOOLEAN_STATES = {
        '1': True,
        'yes': True,
        'true': True,
        'on': True,
        '0': False,
        'no': False,
        'false': False,
        'off': False,
    }

    def __init__(
        self,
        defaults=None,
        dict_type=_default_dict,
        allow_no_value=False,
        *,
        delimiters=('=', ':'),
        comment_prefixes=('#', ';'),
        inline_comment_prefixes=None,
        strict=True,
        empty_lines_in_values=True,
        default_section=DEFAULTSECT,
        interpolation=_UNSET,
        converters=_UNSET,
    ):

        self._dict = dict_type
        self._sections = self._dict()
        self._defaults = self._dict()
        self._converters = ConverterMapping(self)
        self._proxies = self._dict()
        self._proxies[default_section] = SectionProxy(self, default_section)
        self._delimiters = tuple(delimiters)
        if delimiters == ('=', ':'):
            self._optcre = self.OPTCRE_NV if allow_no_value else self.OPTCRE
        else:
            d = "|".join(re.escape(d) for d in delimiters)
            if allow_no_value:
                self._optcre = re.compile(self._OPT_NV_TMPL.format(delim=d), re.VERBOSE)
            else:
                self._optcre = re.compile(self._OPT_TMPL.format(delim=d), re.VERBOSE)
        self._comment_prefixes = tuple(comment_prefixes or ())
        self._inline_comment_prefixes = tuple(inline_comment_prefixes or ())
        self._strict = strict
        self._allow_no_value = allow_no_value
        self._empty_lines_in_values = empty_lines_in_values
        self.default_section = default_section
        self._interpolation = interpolation
        if self._interpolation is _UNSET:
            self._interpolation = self._DEFAULT_INTERPOLATION
        if self._interpolation is None:
            self._interpolation = Interpolation()
        if converters is not _UNSET:
            self._converters.update(converters)
        if defaults:
            self._read_defaults(defaults)

    def defaults(self):
        return self._defaults

    def sections(self):
        """Return a list of section names, excluding [DEFAULT]"""
        # self._sections will never have [DEFAULT] in it
        return list(self._sections.keys())

    def add_section(self, section):
        """Create a new section in the configuration.

        Raise DuplicateSectionError if a section by the specified name
        already exists. Raise ValueError if name is DEFAULT.
        """
        if section == self.default_section:
            raise ValueError('Invalid section name: %r' % section)

        if section in self._sections:
            raise DuplicateSectionError(section)
        self._sections[section] = self._dict()
        self._proxies[section] = SectionProxy(self, section)

    def has_section(self, section):
        """Indicate whether the named section is present in the configuration.

        The DEFAULT section is not acknowledged.
        """
        return section in self._sections

    def options(self, section):
        """Return a list of option names for the given section name."""
        try:
            opts = self._sections[section].copy()
        except KeyError:
            raise NoSectionError(section) from None
        opts.update(self._defaults)
        return list(opts.keys())

    def read(self, filenames, encoding=None):
        """Read and parse a filename or an iterable of filenames.

        Files that cannot be opened are silently ignored; this is
        designed so that you can specify an iterable of potential
        configuration file locations (e.g. current directory, user's
        home directory, systemwide directory), and all existing
        configuration files in the iterable will be read.  A single
        filename may also be given.

        Return list of successfully read files.
        """
        if isinstance(filenames, (str, bytes, os.PathLike)):
            filenames = [filenames]
        encoding = io.text_encoding(encoding)
        read_ok = []
        for filename in filenames:
            try:
                with open(filename, encoding=encoding) as fp:
                    self._read(fp, filename)
            except OSError:
                continue
            if isinstance(filename, os.PathLike):
                filename = os.fspath(filename)
            read_ok.append(filename)
        return read_ok

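Behavior sketch for `read()` (assuming the stdlib `configparser` matches this vendored copy): files that cannot be opened are skipped silently, and only the filenames actually parsed are returned.

```python
import configparser
import os
import tempfile

parser = configparser.ConfigParser()
with tempfile.TemporaryDirectory() as tmp:
    real = os.path.join(tmp, "app.ini")
    with open(real, "w", encoding="utf-8") as fp:
        fp.write("[server]\nport = 8080\n")
    missing = os.path.join(tmp, "nope.ini")  # never created

    # The missing file raises OSError internally and is skipped.
    read_ok = parser.read([missing, real])

print(read_ok == [real])              # True
print(parser.get("server", "port"))   # 8080
```

This is why passing a search path of candidate locations works: absent files simply do not appear in the returned list.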
    def read_file(self, f, source=None):
        """Like read() but the argument must be a file-like object.

        The `f' argument must be iterable, returning one line at a time.
        Optional second argument is the `source' specifying the name of the
        file being read. If not given, it is taken from f.name. If `f' has no
        `name' attribute, `<???>' is used.
        """
        if source is None:
            try:
                source = f.name
            except AttributeError:
                source = '<???>'
        self._read(f, source)

    def read_string(self, string, source='<string>'):
        """Read configuration from a given string."""
        sfile = io.StringIO(string)
        self.read_file(sfile, source)

    def read_dict(self, dictionary, source='<dict>'):
        """Read configuration from a dictionary.

        Keys are section names, values are dictionaries with keys and values
        that should be present in the section. If the used dictionary type
        preserves order, sections and their keys will be added in order.

        All types held in the dictionary are converted to strings during
        reading, including section names, option names and keys.

        Optional second argument is the `source' specifying the name of the
        dictionary being read.
        """
        elements_added = set()
        for section, keys in dictionary.items():
            section = str(section)
            try:
                self.add_section(section)
            except (DuplicateSectionError, ValueError):
                if self._strict and section in elements_added:
                    raise
            elements_added.add(section)
            for key, value in keys.items():
                key = self.optionxform(str(key))
                if value is not None:
                    value = str(value)
                if self._strict and (section, key) in elements_added:
                    raise DuplicateOptionError(section, key, source)
                elements_added.add((section, key))
                self.set(section, key, value)

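Sketch of `read_dict()` (stdlib `configparser` assumed equivalent): section names, keys, and values are all coerced to `str`, and insertion order is preserved when the source mapping preserves order.

```python
import configparser

parser = configparser.ConfigParser()
parser.read_dict({
    "limits": {"retries": 3, "timeout": 2.5},  # non-string values are str()-ed
    "flags": {"debug": True},
})

print(parser.sections())                 # ['limits', 'flags']
print(parser.get("limits", "retries"))   # 3 (as the string '3')
print(parser.get("limits", "timeout"))   # 2.5 (as the string '2.5')
```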
    def readfp(self, fp, filename=None):
        """Deprecated, use read_file instead."""
        warnings.warn(
            "This method will be removed in future versions.  "
            "Use 'parser.read_file()' instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        self.read_file(fp, source=filename)

    def get(self, section, option, *, raw=False, vars=None, fallback=_UNSET):
        """Get an option value for a given section.

        If `vars' is provided, it must be a dictionary. The option is looked up
        in `vars' (if provided), `section', and in `DEFAULTSECT' in that order.
        If the key is not found and `fallback' is provided, it is used as
        a fallback value. `None' can be provided as a `fallback' value.

        If interpolation is enabled and the optional argument `raw' is False,
        all interpolations are expanded in the return values.

        Arguments `raw', `vars', and `fallback' are keyword only.

        The section DEFAULT is special.
        """
        try:
            d = self._unify_values(section, vars)
        except NoSectionError:
            if fallback is _UNSET:
                raise
            else:
                return fallback
        option = self.optionxform(option)
        try:
            value = d[option]
        except KeyError:
            if fallback is _UNSET:
                raise NoOptionError(option, section)
            else:
                return fallback

        if raw or value is None:
            return value
        else:
            return self._interpolation.before_get(self, section, option, value, d)

    def _get(self, section, conv, option, **kwargs):
        return conv(self.get(section, option, **kwargs))

    def _get_conv(
        self, section, option, conv, *, raw=False, vars=None, fallback=_UNSET, **kwargs
    ):
        try:
            return self._get(section, conv, option, raw=raw, vars=vars, **kwargs)
        except (NoSectionError, NoOptionError):
            if fallback is _UNSET:
                raise
            return fallback

    # getint, getfloat and getboolean provided directly for backwards compat
    def getint(
        self, section, option, *, raw=False, vars=None, fallback=_UNSET, **kwargs
    ):
        return self._get_conv(
            section, option, int, raw=raw, vars=vars, fallback=fallback, **kwargs
        )

    def getfloat(
        self, section, option, *, raw=False, vars=None, fallback=_UNSET, **kwargs
    ):
        return self._get_conv(
            section, option, float, raw=raw, vars=vars, fallback=fallback, **kwargs
        )

    def getboolean(
        self, section, option, *, raw=False, vars=None, fallback=_UNSET, **kwargs
    ):
        return self._get_conv(
            section,
            option,
            self._convert_to_boolean,
            raw=raw,
            vars=vars,
            fallback=fallback,
            **kwargs,
        )

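Sketch of `getboolean()` (stdlib `configparser` assumed to match this vendored copy): it accepts exactly the strings in `BOOLEAN_STATES` above, case-insensitively, and raises `ValueError` for anything else.

```python
import configparser

parser = configparser.ConfigParser()
parser.read_string("[features]\ncache = Yes\nverbose = off\nbad = maybe\n")

print(parser.getboolean("features", "cache"))    # True  ('yes' state)
print(parser.getboolean("features", "verbose"))  # False ('off' state)
try:
    parser.getboolean("features", "bad")
except ValueError as exc:
    print(exc)                                   # Not a boolean: maybe
```

The `fallback` argument does not rescue a malformed value: it only applies when the section or option is missing, not when conversion fails.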
    def items(self, section=_UNSET, raw=False, vars=None):
        """Return a list of (name, value) tuples for each option in a section.

        All % interpolations are expanded in the return values, based on the
        defaults passed into the constructor, unless the optional argument
        `raw' is true.  Additional substitutions may be provided using the
        `vars' argument, which must be a dictionary whose contents override
        any pre-existing defaults.

        The section DEFAULT is special.
        """
        if section is _UNSET:
            return super(RawConfigParser, self).items()
        d = self._defaults.copy()
        try:
            d.update(self._sections[section])
        except KeyError:
            if section != self.default_section:
                raise NoSectionError(section)
        orig_keys = list(d.keys())
        # Update with the entry specific variables
        if vars:
            for key, value in vars.items():
                d[self.optionxform(key)] = value

        def value_getter_interp(option):
            return self._interpolation.before_get(self, section, option, d[option], d)

        def value_getter_raw(option):
            return d[option]

        value_getter = value_getter_raw if raw else value_getter_interp

        return [(option, value_getter(option)) for option in orig_keys]

    def popitem(self):
        """Remove a section from the parser and return it as
        a (section_name, section_proxy) tuple. If no section is present, raise
        KeyError.

        The section DEFAULT is never returned because it cannot be removed.
        """
        for key in self.sections():
            value = self[key]
            del self[key]
            return key, value
        raise KeyError

    def optionxform(self, optionstr):
        return optionstr.lower()

    def has_option(self, section, option):
        """Check for the existence of a given option in a given section.
        If the specified `section' is None or an empty string, DEFAULT is
        assumed. If the specified `section' does not exist, returns False."""
        if not section or section == self.default_section:
            option = self.optionxform(option)
            return option in self._defaults
        elif section not in self._sections:
            return False
        else:
            option = self.optionxform(option)
            return option in self._sections[section] or option in self._defaults

    def set(self, section, option, value=None):
        """Set an option."""
        if value:
            value = self._interpolation.before_set(self, section, option, value)
        if not section or section == self.default_section:
            sectdict = self._defaults
        else:
            try:
                sectdict = self._sections[section]
            except KeyError:
                raise NoSectionError(section) from None
        sectdict[self.optionxform(option)] = value

    def write(self, fp, space_around_delimiters=True):
        """Write an .ini-format representation of the configuration state.

        If `space_around_delimiters' is True (the default), delimiters
        between keys and values are surrounded by spaces.

        Please note that comments in the original configuration file are not
        preserved when writing the configuration back.
        """
        if space_around_delimiters:
            d = " {} ".format(self._delimiters[0])
        else:
            d = self._delimiters[0]
        if self._defaults:
            self._write_section(fp, self.default_section, self._defaults.items(), d)
        for section in self._sections:
            self._write_section(fp, section, self._sections[section].items(), d)

    def _write_section(self, fp, section_name, section_items, delimiter):
        """Write a single section to the specified `fp'."""
        fp.write("[{}]\n".format(section_name))
        for key, value in section_items:
            value = self._interpolation.before_write(self, section_name, key, value)
            if value is not None or not self._allow_no_value:
                value = delimiter + str(value).replace('\n', '\n\t')
            else:
                value = ""
            fp.write("{}{}\n".format(key, value))
        fp.write("\n")

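Round-trip sketch for `write()` (stdlib `configparser` assumed equivalent): one `[section]` header per section, spaces around the first delimiter by default, and, as the docstring warns, comments are not preserved.

```python
import configparser
import io

parser = configparser.ConfigParser()
parser.read_string("[db]\n# a comment\nhost = localhost\n")

buf = io.StringIO()
parser.write(buf)

# The comment was stripped at read time, so it cannot be written back.
print(repr(buf.getvalue()))   # '[db]\nhost = localhost\n\n'
```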
    def remove_option(self, section, option):
        """Remove an option."""
        if not section or section == self.default_section:
            sectdict = self._defaults
        else:
            try:
                sectdict = self._sections[section]
            except KeyError:
                raise NoSectionError(section) from None
        option = self.optionxform(option)
        existed = option in sectdict
        if existed:
            del sectdict[option]
        return existed

    def remove_section(self, section):
        """Remove a file section."""
        existed = section in self._sections
        if existed:
            del self._sections[section]
            del self._proxies[section]
        return existed

    def __getitem__(self, key):
        if key != self.default_section and not self.has_section(key):
            raise KeyError(key)
        return self._proxies[key]

    def __setitem__(self, key, value):
        # To conform with the mapping protocol, overwrites existing values in
        # the section.
        if key in self and self[key] is value:
            return
        # XXX this is not atomic if read_dict fails at any point. Then again,
        # no update method in configparser is atomic in this implementation.
        if key == self.default_section:
            self._defaults.clear()
        elif key in self._sections:
            self._sections[key].clear()
        self.read_dict({key: value})

    def __delitem__(self, key):
        if key == self.default_section:
            raise ValueError("Cannot remove the default section.")
        if not self.has_section(key):
            raise KeyError(key)
        self.remove_section(key)

    def __contains__(self, key):
        return key == self.default_section or self.has_section(key)

    def __len__(self):
        return len(self._sections) + 1  # the default section

    def __iter__(self):
        # XXX does it break when underlying container state changed?
        return itertools.chain((self.default_section,), self._sections.keys())

    def _read(self, fp, fpname):  # noqa: C901
        """Parse a sectioned configuration file.

        Each section in a configuration file contains a header, indicated by
        a name in square brackets (`[]'), plus key/value options, indicated by
        `name' and `value' delimited with a specific substring (`=' or `:' by
        default).

        Values can span multiple lines, as long as they are indented deeper
        than the first line of the value. Depending on the parser's mode, blank
        lines may be treated as parts of multiline values or ignored.

        Configuration files may include comments, prefixed by specific
        characters (`#' and `;' by default). Comments may appear on their own
        in an otherwise empty line or may be entered in lines holding values or
        section names. Please note that comments get stripped off when reading
        configuration files.
        """
        elements_added = set()
        cursect = None  # None, or a dictionary
        sectname = None
        optname = None
        lineno = 0
        indent_level = 0
        e = None  # None, or an exception
        for lineno, line in enumerate(fp, start=1):
            comment_start = sys.maxsize
            # strip inline comments
            inline_prefixes = {p: -1 for p in self._inline_comment_prefixes}
            while comment_start == sys.maxsize and inline_prefixes:
                next_prefixes = {}
                for prefix, index in inline_prefixes.items():
                    index = line.find(prefix, index + 1)
                    if index == -1:
                        continue
                    next_prefixes[prefix] = index
                    if index == 0 or (index > 0 and line[index - 1].isspace()):
                        comment_start = min(comment_start, index)
                inline_prefixes = next_prefixes
            # strip full line comments
            for prefix in self._comment_prefixes:
                if line.strip().startswith(prefix):
                    comment_start = 0
                    break
            if comment_start == sys.maxsize:
                comment_start = None
            value = line[:comment_start].strip()
            if not value:
                if self._empty_lines_in_values:
                    # add empty line to the value, but only if there was no
                    # comment on the line
                    if (
                        comment_start is None
                        and cursect is not None
                        and optname
                        and cursect[optname] is not None
                    ):
                        cursect[optname].append('')  # newlines added at join
                else:
                    # empty line marks end of value
                    indent_level = sys.maxsize
                continue
            # continuation line?
            first_nonspace = self.NONSPACECRE.search(line)
            cur_indent_level = first_nonspace.start() if first_nonspace else 0
            if cursect is not None and optname and cur_indent_level > indent_level:
                cursect[optname].append(value)
            # a section header or option header?
            else:
                indent_level = cur_indent_level
                # is it a section header?
                mo = self.SECTCRE.match(value)
                if mo:
                    sectname = mo.group('header')
                    if sectname in self._sections:
                        if self._strict and sectname in elements_added:
                            raise DuplicateSectionError(sectname, fpname, lineno)
                        cursect = self._sections[sectname]
                        elements_added.add(sectname)
                    elif sectname == self.default_section:
                        cursect = self._defaults
                    else:
                        cursect = self._dict()
                        self._sections[sectname] = cursect
                        self._proxies[sectname] = SectionProxy(self, sectname)
                        elements_added.add(sectname)
                    # So sections can't start with a continuation line
                    optname = None
                # no section header in the file?
                elif cursect is None:
                    raise MissingSectionHeaderError(fpname, lineno, line)
                # an option line?
                else:
                    mo = self._optcre.match(value)
                    if mo:
                        optname, vi, optval = mo.group('option', 'vi', 'value')
                        if not optname:
                            e = self._handle_error(e, fpname, lineno, line)
                        optname = self.optionxform(optname.rstrip())
                        if self._strict and (sectname, optname) in elements_added:
                            raise DuplicateOptionError(
                                sectname, optname, fpname, lineno
                            )
                        elements_added.add((sectname, optname))
                        # This check is fine because the OPTCRE cannot
                        # match if it would set optval to None
                        if optval is not None:
                            optval = optval.strip()
                            cursect[optname] = [optval]
                        else:
                            # valueless option handling
                            cursect[optname] = None
                    else:
                        # a non-fatal parsing error occurred. set up the
                        # exception but keep going. the exception will be
                        # raised at the end of the file and will contain a
                        # list of all bogus lines
                        e = self._handle_error(e, fpname, lineno, line)
        self._join_multiline_values()
        # if any parsing errors occurred, raise an exception
        if e:
            raise e

    def _join_multiline_values(self):
        defaults = self.default_section, self._defaults
        all_sections = itertools.chain((defaults,), self._sections.items())
        for section, options in all_sections:
            for name, val in options.items():
                if isinstance(val, list):
                    val = '\n'.join(val).rstrip()
                options[name] = self._interpolation.before_read(
                    self, section, name, val
                )

    def _read_defaults(self, defaults):
        """Read the defaults passed in the initializer.
        Note: values can be non-string."""
        for key, value in defaults.items():
            self._defaults[self.optionxform(key)] = value

    def _handle_error(self, exc, fpname, lineno, line):
        if not exc:
            exc = ParsingError(fpname)
        exc.append(lineno, repr(line))
        return exc

    def _unify_values(self, section, vars):
        """Create a sequence of lookups with 'vars' taking priority over
        the 'section' which takes priority over the DEFAULTSECT.

        """
        sectiondict = {}
        try:
            sectiondict = self._sections[section]
        except KeyError:
            if section != self.default_section:
                raise NoSectionError(section)
        # Update with the entry specific variables
        vardict = {}
        if vars:
            for key, value in vars.items():
                if value is not None:
                    value = str(value)
                vardict[self.optionxform(key)] = value
        return _ChainMap(vardict, sectiondict, self._defaults)

    def _convert_to_boolean(self, value):
        """Return a boolean value translating from other types if necessary."""
        if value.lower() not in self.BOOLEAN_STATES:
            raise ValueError('Not a boolean: %s' % value)
        return self.BOOLEAN_STATES[value.lower()]

    def _validate_value_types(self, *, section="", option="", value=""):
        """Raises a TypeError for non-string values.

        When valueless options are allowed, the only legal non-string
        value is None. The value must therefore be a string if:
        - we do not allow valueless options, or
        - we allow valueless options but the value is not None

        For compatibility reasons this method is not used in classic set()
        for RawConfigParsers. It is invoked in every case for mapping protocol
        access and in ConfigParser.set().
        """
        if not isinstance(section, str):
            raise TypeError("section names must be strings")
        if not isinstance(option, str):
            raise TypeError("option keys must be strings")
        if not self._allow_no_value or value:
            if not isinstance(value, str):
                raise TypeError("option values must be strings")

    @property
    def converters(self):
        return self._converters


class ConfigParser(RawConfigParser):
    """ConfigParser implementing interpolation."""

    _DEFAULT_INTERPOLATION = BasicInterpolation()

    def set(self, section, option, value=None):
        """Set an option.  Extends RawConfigParser.set by validating type and
        interpolation syntax on the value."""
        self._validate_value_types(option=option, value=value)
        super().set(section, option, value)

    def add_section(self, section):
        """Create a new section in the configuration.  Extends
        RawConfigParser.add_section by validating if the section name is
        a string."""
        self._validate_value_types(section=section)
        super().add_section(section)

    def _read_defaults(self, defaults):
        """Reads the defaults passed in the initializer, implicitly converting
        values to strings like the rest of the API.

        Does not perform interpolation for backwards compatibility.
        """
        try:
            hold_interpolation = self._interpolation
            self._interpolation = Interpolation()
            self.read_dict({self.default_section: defaults})
        finally:
            self._interpolation = hold_interpolation


class SafeConfigParser(ConfigParser):
    """ConfigParser alias for backwards compatibility purposes."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        warnings.warn(
            "The SafeConfigParser class has been renamed to ConfigParser "
            "in Python 3.2. This alias will be removed in future versions."
            " Use ConfigParser directly instead.",
            DeprecationWarning,
            stacklevel=2,
        )


class SectionProxy(MutableMapping):
    """A proxy for a single section from a parser."""

    def __init__(self, parser, name):
        """Creates a view on a section of the specified `name` in `parser`."""
        self._parser = parser
        self._name = name
        for conv in parser.converters:
            key = 'get' + conv
            getter = functools.partial(self.get, _impl=getattr(parser, key))
            setattr(self, key, getter)

    def __repr__(self):
        return '<Section: {}>'.format(self._name)

    def __getitem__(self, key):
        if not self._parser.has_option(self._name, key):
            raise KeyError(key)
        return self._parser.get(self._name, key)

    def __setitem__(self, key, value):
        self._parser._validate_value_types(option=key, value=value)
        return self._parser.set(self._name, key, value)

    def __delitem__(self, key):
        if not (
            self._parser.has_option(self._name, key)
            and self._parser.remove_option(self._name, key)
        ):
            raise KeyError(key)

    def __contains__(self, key):
        return self._parser.has_option(self._name, key)

    def __len__(self):
        return len(self._options())

    def __iter__(self):
        return self._options().__iter__()

    def _options(self):
        if self._name != self._parser.default_section:
            return self._parser.options(self._name)
        else:
            return self._parser.defaults()

    @property
    def parser(self):
        # The parser object of the proxy is read-only.
        return self._parser

    @property
    def name(self):
        # The name of the section on a proxy is read-only.
        return self._name

    def get(self, option, fallback=None, *, raw=False, vars=None, _impl=None, **kwargs):
        """Get an option value.

        Unless `fallback` is provided, `None` will be returned if the option
        is not found.

        """
        # If `_impl` is provided, it should be a getter method on the parser
        # object that provides the desired type conversion.
        if not _impl:
            _impl = self._parser.get
        return _impl(
            self._name, option, raw=raw, vars=vars, fallback=fallback, **kwargs
        )

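# SectionProxy is what `parser['section']` returns: mapping operations are
# forwarded to the parser, and the converter getters bound in __init__ are
# available directly on the proxy. A short sketch against the stdlib
# `configparser`; the 'server' section and its options are made up:

```python
import configparser

parser = configparser.ConfigParser()
parser.read_string("""
[server]
host = example.org
port = 8080
""")

proxy = parser['server']             # a SectionProxy
print(proxy)                         # <Section: server>
assert proxy['host'] == 'example.org'
assert proxy.getint('port') == 8080  # getter bound from the parser

# get() returns the fallback (default None) for missing options.
assert proxy.get('missing') is None
assert proxy.get('missing', fallback='n/a') == 'n/a'

# __delitem__ raises KeyError for options that are not present.
try:
    del proxy['missing']
except KeyError:
    print('KeyError as expected')
```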

class ConverterMapping(MutableMapping):
    """Enables reuse of get*() methods between the parser and section proxies.

    If a parser class implements a getter directly, the value for the given
    key will be ``None``. The presence of the converter name here enables
    section proxies to find and use the implementation on the parser class.
    """

    GETTERCRE = re.compile(r"^get(?P<name>.+)$")

    def __init__(self, parser):
        self._parser = parser
        self._data = {}
        for getter in dir(self._parser):
            m = self.GETTERCRE.match(getter)
            if not m or not callable(getattr(self._parser, getter)):
                continue
            self._data[m.group('name')] = None  # See class docstring.

    def __getitem__(self, key):
        return self._data[key]

    def __setitem__(self, key, value):
        try:
            k = 'get' + key
        except TypeError:
            raise ValueError(
                'Incompatible key: {} (type: {})'.format(key, type(key))
            )
        if k == 'get':
            raise ValueError('Incompatible key: cannot use "" as a name')
        self._data[key] = value
        func = functools.partial(self._parser._get_conv, conv=value)
        func.converter = value
        setattr(self._parser, k, func)
        for proxy in self._parser.values():
            getter = functools.partial(proxy.get, _impl=func)
            setattr(proxy, k, getter)

    def __delitem__(self, key):
        try:
            # `key or None` maps a falsy key ('' or None) to None so the
            # concatenation raises TypeError, converted to KeyError below.
            k = 'get' + (key or None)
        except TypeError:
            raise KeyError(key)
        del self._data[key]
        for inst in itertools.chain((self._parser,), self._parser.values()):
            try:
                delattr(inst, k)
            except AttributeError:
                # don't raise since the entry was present in _data, silently
                # clean up
                continue

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)
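# ConverterMapping is what makes the `converters=` constructor argument work:
# every name registered through __setitem__ grows a matching get<name>()
# method on the parser and on each section proxy, and __delitem__ removes it
# from both again. A sketch with a hypothetical 'list' converter, using the
# stdlib `configparser`:

```python
import configparser

# A made-up converter that splits comma-separated values.
parser = configparser.ConfigParser(
    converters={'list': lambda raw: [item.strip() for item in raw.split(',')]}
)
parser.read_string("""
[main]
hosts = alpha, beta, gamma
""")

assert 'list' in parser.converters   # registered under its bare name
assert parser.getlist('main', 'hosts') == ['alpha', 'beta', 'gamma']
assert parser['main'].getlist('hosts') == ['alpha', 'beta', 'gamma']

# Deleting the converter removes getlist() from parser and proxies alike.
del parser.converters['list']
assert not hasattr(parser, 'getlist')
```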
��zSectionProxy.__delitem__cCs|j�|j|�Sr)rLrrMrr%r%r&r>szSectionProxy.__contains__cCst|���Sr)r��_optionsr'r%r%r&rAszSectionProxy.__len__cCs|����Sr)rQrr'r%r%r&rDszSectionProxy.__iter__cCs*|j|jjkr|j�|j�S|j��SdSr)rMrLr�r�rRr'r%r%r&rQGszSectionProxy._optionscCs|jSr)rLr'r%r%r&rQMszSectionProxy.parsercCs|jSr)rMr'r%r%r&r�RszSectionProxy.nameNF)rgr�rKcKs(|s|jj}||j|f|||d�|��S)z�Get an option value.

        Unless `fallback` is provided, `None` will be returned if the option
        is not found.

        r�)rLrkrM)r#r=rhrgr�rKr�r%r%r&rkWs	���zSectionProxy.get)N)r)r*r+r,r"r(rrrrrrrQrOrQr�rkr%r%r%r&rs	

rc@sJeZdZdZe�d�Zdd�Zdd�Zdd�Z	d	d
�Z
dd�Zd
d�ZdS)ra/Enables reuse of get*() methods between the parser and section proxies.

    If a parser class implements a getter directly, the value for the given
    key will be ``None``. The presence of the converter name here enables
    section proxies to find and use the implementation on the parser class.
    z^get(?P<name>.+)$cCsR||_i|_t|j�D]6}|j�|�}|rtt|j|��s<qd|j|�d�<qdS)Nr�)rL�_data�dir�	GETTERCRErl�callablerNrn)r#rQrPrwr%r%r&r"qszConverterMapping.__init__cCs
|j|Sr)rRrr%r%r&rzszConverterMapping.__getitem__c	Cs�zd|}Wn&ty2td�|t|����Yn0|dkrDtd��||j|<tj|jj|d�}||_	t
|j||�|j��D] }tj|j|d�}t
|||�q~dS)NrkzIncompatible key: {} (type: {})z)Incompatible key: cannot use "" as a name)r�rJ)
rBrAr7�typerRr�r�rLr��	converterrO�valuesrk)r#r�rM�k�func�proxyrPr%r%r&r}s�

zConverterMapping.__setitem__c	Cszzd|p
d}Wnty*t|��Yn0|j|=t�|jf|j���D]*}zt||�WqJtyrYqJYqJ0qJdS)Nrk)	rBrprRrrrLrX�delattrr�)r#r�rY�instr%r%r&r�szConverterMapping.__delitem__cCs
t|j�Sr)�iterrRr'r%r%r&r�szConverterMapping.__iter__cCs
t|j�Sr)r�rRr'r%r%r&r�szConverterMapping.__len__N)
r)r*r+r,r{r|rTr"rrrrrr%r%r%r&rgs
	r)*r,�collections.abcr�collectionsrr>r��compatrrr�r{r!rI�__all__r�rFrrr!rrrrr	r
rr
rrr�objectr�rrrrrrrrrr%r%r%r&�<module>sP

	

backports/configparser/compat.py
import types
import io as _io


def text_encoding(encoding, stacklevel=2):
    """
    Stubbed version of io.text_encoding as found in Python 3.10
    """
    return encoding


def copy_module(mod, **defaults):
    copy = types.ModuleType(mod.__name__, doc=mod.__doc__)
    vars(copy).update(defaults)
    vars(copy).update(vars(mod))
    return copy


io = copy_module(_io, text_encoding=text_encoding)
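The ordering in `copy_module` matters: defaults are applied first and then overwritten by the real module's attributes, so the `text_encoding` stub only survives on Pythons whose `io` lacks one. A standalone sketch of that behaviour (the `extra` attribute is illustrative):

```python
import types
import math


def copy_module(mod, **defaults):
    copy = types.ModuleType(mod.__name__, doc=mod.__doc__)
    vars(copy).update(defaults)     # stand-in attributes first...
    vars(copy).update(vars(mod))    # ...then real attributes win on any clash
    return copy


m = copy_module(math, extra=42, pi=3)
print(m.extra)          # 42 -- filled from defaults; math has no 'extra'
print(m.pi == math.pi)  # True -- the real attribute overwrote the default
```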
libfuturize/fixes/fix_add__future__imports_except_unicode_literals.py
"""
Fixer for adding:

    from __future__ import absolute_import
    from __future__ import division
    from __future__ import print_function

This is "stage 1": hopefully uncontroversial changes.

Stage 2 adds ``unicode_literals``.
"""

from lib2to3 import fixer_base
from libfuturize.fixer_util import future_import

class FixAddFutureImportsExceptUnicodeLiterals(fixer_base.BaseFix):
    BM_compatible = True
    PATTERN = "file_input"

    run_order = 9

    def transform(self, node, results):
        # Reverse order:
        future_import(u"absolute_import", node)
        future_import(u"division", node)
        future_import(u"print_function", node)
libfuturize/fixes/fix_division.py
"""
UNFINISHED
For the ``future`` package.

Adds this import line:

    from __future__ import division

at the top so the code runs identically on Py3 and Py2.6/2.7
"""

from libpasteurize.fixes.fix_division import FixDivision
libfuturize/fixes/fix_bytes.py
"""Optional fixer that changes all unprefixed string literals "..." to b"...".

br'abcd' is a SyntaxError on Python 2 but valid on Python 3.
ur'abcd' is a SyntaxError on Python 3 but valid on Python 2.

"""
from __future__ import unicode_literals

import re
from lib2to3.pgen2 import token
from lib2to3 import fixer_base

_literal_re = re.compile(r"[^bBuUrR]?[\'\"]")

class FixBytes(fixer_base.BaseFix):
    BM_compatible = True
    PATTERN = "STRING"

    def transform(self, node, results):
        if node.type == token.STRING:
            if _literal_re.match(node.value):
                new = node.clone()
                new.value = u'b' + new.value
                return new
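A quick standalone check of the prefix regex used above: it matches string literals whose (optional) single-character prefix is not one of b/B/u/U/r/R, i.e. plain unprefixed strings.

```python
import re

# Same pattern as _literal_re in the fixer above.
literal_re = re.compile(r"[^bBuUrR]?[\'\"]")

print(bool(literal_re.match('"abcd"')))   # True  -- unprefixed, would gain b
print(bool(literal_re.match("b'abcd'")))  # False -- already a bytes literal
print(bool(literal_re.match("u'abcd'")))  # False -- explicit unicode literal
```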
libfuturize/fixes/fix_next_call.py
"""
Based on fix_next.py by Collin Winter.

Replaces it.next() -> next(it), per PEP 3114.

Unlike fix_next.py, this fixer doesn't replace the name of a next method with __next__,
which would break Python 2 compatibility without further help from fixers in
stage 2.
"""

# Local imports
from lib2to3.pgen2 import token
from lib2to3.pygram import python_symbols as syms
from lib2to3 import fixer_base
from lib2to3.fixer_util import Name, Call, find_binding

bind_warning = "Calls to builtin next() possibly shadowed by global binding"


class FixNextCall(fixer_base.BaseFix):
    BM_compatible = True
    PATTERN = """
    power< base=any+ trailer< '.' attr='next' > trailer< '(' ')' > >
    |
    power< head=any+ trailer< '.' attr='next' > not trailer< '(' ')' > >
    |
    global=global_stmt< 'global' any* 'next' any* >
    """

    order = "pre" # Pre-order tree traversal

    def start_tree(self, tree, filename):
        super(FixNextCall, self).start_tree(tree, filename)

        n = find_binding('next', tree)
        if n:
            self.warning(n, bind_warning)
            self.shadowed_next = True
        else:
            self.shadowed_next = False

    def transform(self, node, results):
        assert results

        base = results.get("base")
        attr = results.get("attr")
        name = results.get("name")

        if base:
            if self.shadowed_next:
                # Omit this:
                # attr.replace(Name("__next__", prefix=attr.prefix))
                pass
            else:
                base = [n.clone() for n in base]
                base[0].prefix = ""
                node.replace(Call(Name("next", prefix=node.prefix), base))
        elif name:
            # Omit this:
            # n = Name("__next__", prefix=name.prefix)
            # name.replace(n)
            pass
        elif attr:
            # We don't do this transformation if we're assigning to "x.next".
            # Unfortunately, it doesn't seem possible to do this in PATTERN,
            #  so it's being done here.
            if is_assign_target(node):
                head = results["head"]
                if "".join([str(n) for n in head]).strip() == '__builtin__':
                    self.warning(node, bind_warning)
                return
            # Omit this:
            # attr.replace(Name("__next__"))
        elif "global" in results:
            self.warning(node, bind_warning)
            self.shadowed_next = True


### The following functions help test if node is part of an assignment
###  target.

def is_assign_target(node):
    assign = find_assign(node)
    if assign is None:
        return False

    for child in assign.children:
        if child.type == token.EQUAL:
            return False
        elif is_subtree(child, node):
            return True
    return False

def find_assign(node):
    if node.type == syms.expr_stmt:
        return node
    if node.type == syms.simple_stmt or node.parent is None:
        return None
    return find_assign(node.parent)

def is_subtree(root, node):
    if root == node:
        return True
    return any(is_subtree(c, node) for c in root.children)
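The runtime motivation for this rewrite, shown without lib2to3: Python 3 renamed the iterator method to `__next__`, and the builtin `next()` hides that difference, which is why `it.next()` is rewritten as `next(it)` while the method name itself is left alone until stage 2.

```python
it = iter([1, 2, 3])
print(next(it))       # 1 -- the portable spelling the fixer produces
print(it.__next__())  # 2 -- the Py3-only method this fixer deliberately
                      #      leaves untouched
```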
libfuturize/fixes/fix_UserDict.py
"""Fix UserDict.

Incomplete!

TODO: base this on fix_urllib perhaps?
"""


# Local imports
from lib2to3 import fixer_base
from lib2to3.fixer_util import Name, attr_chain
from lib2to3.fixes.fix_imports import alternates, build_pattern, FixImports

MAPPING = {'UserDict':  'collections',
}

# def alternates(members):
#     return "(" + "|".join(map(repr, members)) + ")"
#
#
# def build_pattern(mapping=MAPPING):
#     mod_list = ' | '.join(["module_name='%s'" % key for key in mapping])
#     bare_names = alternates(mapping.keys())
#
#     yield """name_import=import_name< 'import' ((%s) |
#                multiple_imports=dotted_as_names< any* (%s) any* >) >
#           """ % (mod_list, mod_list)
#     yield """import_from< 'from' (%s) 'import' ['(']
#               ( any | import_as_name< any 'as' any > |
#                 import_as_names< any* >)  [')'] >
#           """ % mod_list
#     yield """import_name< 'import' (dotted_as_name< (%s) 'as' any > |
#                multiple_imports=dotted_as_names<
#                  any* dotted_as_name< (%s) 'as' any > any* >) >
#           """ % (mod_list, mod_list)
#
#     # Find usages of module members in code e.g. thread.foo(bar)
#     yield "power< bare_with_attr=(%s) trailer<'.' any > any* >" % bare_names


# class FixUserDict(fixer_base.BaseFix):
class FixUserdict(FixImports):

    BM_compatible = True
    keep_line_order = True
    # This is overridden in fix_imports2.
    mapping = MAPPING

    # We want to run this fixer late, so fix_import doesn't try to make stdlib
    # renames into relative imports.
    run_order = 6

    def build_pattern(self):
        return "|".join(build_pattern(self.mapping))

    def compile_pattern(self):
        # We override this, so MAPPING can be programmatically altered and the
        # changes will be reflected in PATTERN.
        self.PATTERN = self.build_pattern()
        super(FixImports, self).compile_pattern()

    # Don't match the node if it's within another match.
    def match(self, node):
        match = super(FixImports, self).match
        results = match(node)
        if results:
            # Module usage could be in the trailer of an attribute lookup, so we
            # might have nested matches when "bare_with_attr" is present.
            if "bare_with_attr" not in results and \
                    any(match(obj) for obj in attr_chain(node, "parent")):
                return False
            return results
        return False

    def start_tree(self, tree, filename):
        super(FixImports, self).start_tree(tree, filename)
        self.replace = {}

    def transform(self, node, results):
        import_mod = results.get("module_name")
        if import_mod:
            mod_name = import_mod.value
            new_name = str(self.mapping[mod_name])
            import_mod.replace(Name(new_name, prefix=import_mod.prefix))
            if "name_import" in results:
                # If it's not a "from x import x, y" or "import x as y" import,
                # mark its usage to be replaced.
                self.replace[mod_name] = new_name
            if "multiple_imports" in results:
                # This is a nasty hack to fix multiple imports on a line (e.g.,
                # "import StringIO, urlparse"). The problem is that I can't
                # figure out an easy way to make a pattern recognize the keys of
                # MAPPING randomly sprinkled in an import statement.
                results = self.match(node)
                if results:
                    self.transform(node, results)
        else:
            # Replace usage of the module.
            bare_name = results["bare_with_attr"][0]
            new_name = self.replace.get(bare_name.value)
            if new_name:
                bare_name.replace(Name(new_name, prefix=bare_name.prefix))
libfuturize/fixes/__init__.py
import sys
from lib2to3 import refactor

# The following fixers are "safe": they convert Python 2 code to more
# modern Python 2 code. They should be uncontroversial to apply to most
# projects that are happy to drop support for Py2.5 and below. Applying
# them first will reduce the size of the patch set for the real porting.
lib2to3_fix_names_stage1 = set([
    'lib2to3.fixes.fix_apply',
    'lib2to3.fixes.fix_except',
    'lib2to3.fixes.fix_exec',
    'lib2to3.fixes.fix_exitfunc',
    'lib2to3.fixes.fix_funcattrs',
    'lib2to3.fixes.fix_has_key',
    'lib2to3.fixes.fix_idioms',
    # 'lib2to3.fixes.fix_import',    # makes any implicit relative imports explicit. (Use with ``from __future__ import absolute_import``.)
    'lib2to3.fixes.fix_intern',
    'lib2to3.fixes.fix_isinstance',
    'lib2to3.fixes.fix_methodattrs',
    'lib2to3.fixes.fix_ne',
    # 'lib2to3.fixes.fix_next',         # would replace ``next`` method names
                                        # with ``__next__``.
    'lib2to3.fixes.fix_numliterals',    # turns 1L into 1, 0755 into 0o755
    'lib2to3.fixes.fix_paren',
    # 'lib2to3.fixes.fix_print',        # see the libfuturize fixer that also
                                        # adds ``from __future__ import print_function``
    # 'lib2to3.fixes.fix_raise',   # uses incompatible with_traceback() method on exceptions
    'lib2to3.fixes.fix_reduce',    # reduce is available in functools on Py2.6/Py2.7
    'lib2to3.fixes.fix_renames',        # sys.maxint -> sys.maxsize
    # 'lib2to3.fixes.fix_set_literal',  # this is unnecessary and breaks Py2.6 support
    'lib2to3.fixes.fix_repr',
    'lib2to3.fixes.fix_standarderror',
    'lib2to3.fixes.fix_sys_exc',
    'lib2to3.fixes.fix_throw',
    'lib2to3.fixes.fix_tuple_params',
    'lib2to3.fixes.fix_types',
    'lib2to3.fixes.fix_ws_comma',       # can perhaps decrease readability: see issue #58
    'lib2to3.fixes.fix_xreadlines',
])

# The following fixers add a dependency on the ``future`` package in order to
# support Python 2:
lib2to3_fix_names_stage2 = set([
    # 'lib2to3.fixes.fix_buffer',    # perhaps not safe. Test this.
    # 'lib2to3.fixes.fix_callable',  # not needed in Py3.2+
    'lib2to3.fixes.fix_dict',        # TODO: add support for utils.viewitems() etc. and move to stage2
    # 'lib2to3.fixes.fix_execfile',  # some problems: see issue #37.
                                     # We use a custom fixer instead (see below)
    # 'lib2to3.fixes.fix_future',    # we don't want to remove __future__ imports
    'lib2to3.fixes.fix_getcwdu',
    # 'lib2to3.fixes.fix_imports',   # called by libfuturize.fixes.fix_future_standard_library
    # 'lib2to3.fixes.fix_imports2',  # we don't handle this yet (dbm)
    # 'lib2to3.fixes.fix_input',     # Called conditionally by libfuturize.fixes.fix_input
    'lib2to3.fixes.fix_itertools',
    'lib2to3.fixes.fix_itertools_imports',
    'lib2to3.fixes.fix_filter',
    'lib2to3.fixes.fix_long',
    'lib2to3.fixes.fix_map',
    # 'lib2to3.fixes.fix_metaclass', # causes SyntaxError in Py2! Use the one from ``six`` instead
    'lib2to3.fixes.fix_next',
    'lib2to3.fixes.fix_nonzero',     # TODO: cause this to import ``object`` and/or add a decorator for mapping __bool__ to __nonzero__
    'lib2to3.fixes.fix_operator',    # we will need support for this by e.g. extending the Py2 operator module to provide those functions in Py3
    'lib2to3.fixes.fix_raw_input',
    # 'lib2to3.fixes.fix_unicode',   # strips off the u'' prefix, which removes a potentially helpful source of information for disambiguating unicode/byte strings
    # 'lib2to3.fixes.fix_urllib',    # included in libfuturize.fix_future_standard_library_urllib
    # 'lib2to3.fixes.fix_xrange',    # custom one because of a bug with Py3.3's lib2to3
    'lib2to3.fixes.fix_zip',
])

libfuturize_fix_names_stage1 = set([
    'libfuturize.fixes.fix_absolute_import',
    'libfuturize.fixes.fix_next_call',  # obj.next() -> next(obj). Unlike
                                        # lib2to3.fixes.fix_next, doesn't change
                                        # the ``next`` method to ``__next__``.
    'libfuturize.fixes.fix_print_with_import',
    'libfuturize.fixes.fix_raise',
    # 'libfuturize.fixes.fix_order___future__imports',  # TODO: consolidate to a single line to simplify testing
])

libfuturize_fix_names_stage2 = set([
    'libfuturize.fixes.fix_basestring',
    # 'libfuturize.fixes.fix_add__future__imports_except_unicode_literals',  # just in case
    'libfuturize.fixes.fix_cmp',
    'libfuturize.fixes.fix_division_safe',
    'libfuturize.fixes.fix_execfile',
    'libfuturize.fixes.fix_future_builtins',
    'libfuturize.fixes.fix_future_standard_library',
    'libfuturize.fixes.fix_future_standard_library_urllib',
    'libfuturize.fixes.fix_input',
    'libfuturize.fixes.fix_metaclass',
    'libpasteurize.fixes.fix_newstyle',
    'libfuturize.fixes.fix_object',
    # 'libfuturize.fixes.fix_order___future__imports',  # TODO: consolidate to a single line to simplify testing
    'libfuturize.fixes.fix_unicode_keep_u',
    # 'libfuturize.fixes.fix_unicode_literals_import',
    'libfuturize.fixes.fix_xrange_with_import',  # custom one because of a bug with Py3.3's lib2to3
])
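How these sets are consumed can be sketched with plain set arithmetic: a stage selects the union of the matching lib2to3 and libfuturize name sets, and the resulting fixer names are handed to lib2to3's refactoring machinery. The subsets below are abridged, illustrative stand-ins for the full sets defined above:

```python
# Hypothetical, abridged subsets of the stage-1 sets:
lib2to3_stage1 = {'lib2to3.fixes.fix_apply', 'lib2to3.fixes.fix_ne'}
libfuturize_stage1 = {'libfuturize.fixes.fix_raise'}

# A stage-1 run applies the union of both sets (sorted here for a stable
# demonstration); the real tool passes such names to
# lib2to3.refactor.RefactoringTool.
fixers = sorted(lib2to3_stage1 | libfuturize_stage1)
print(fixers)
# ['lib2to3.fixes.fix_apply', 'lib2to3.fixes.fix_ne', 'libfuturize.fixes.fix_raise']
```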
libfuturize/fixes/fix_xrange_with_import.py
"""
For the ``future`` package.

Turns any xrange calls into range calls and adds this import line:

    from builtins import range

at the top.
"""

from lib2to3.fixes.fix_xrange import FixXrange

from libfuturize.fixer_util import touch_import_top


class FixXrangeWithImport(FixXrange):
    def transform(self, node, results):
        result = super(FixXrangeWithImport, self).transform(node, results)
        touch_import_top('builtins', 'range', node)
        return result
libfuturize/fixes/fix_execfile.py
# coding: utf-8
"""
Fixer for the execfile() function on Py2, which was removed in Py3.

The Lib/lib2to3/fixes/fix_execfile.py module has some problems: see
python-future issue #37. This fixer merely imports execfile() from
past.builtins and leaves the code alone.

Adds this import line::

    from past.builtins import execfile

for the function execfile() that was removed from Py3.
"""

from __future__ import unicode_literals
from lib2to3 import fixer_base

from libfuturize.fixer_util import touch_import_top


expression = "name='execfile'"


class FixExecfile(fixer_base.BaseFix):
    BM_compatible = True
    run_order = 9

    PATTERN = """
              power<
                 ({0}) trailer< '(' args=[any] ')' >
              rest=any* >
              """.format(expression)

    def transform(self, node, results):
        name = results["name"]
        touch_import_top(u'past.builtins', name.value, node)
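For context, the behaviour that `past.builtins.execfile` restores can be approximated in pure Python 3. This is a simplified sketch, not the package's actual implementation, and it skips Py2 edge cases around default namespaces:

```python
import tempfile


def execfile(filename, globals=None, locals=None):
    # Read, compile and exec the file's contents -- roughly what the
    # Python 2 builtin did.
    with open(filename) as f:
        code = compile(f.read(), filename, 'exec')
    exec(code, globals, locals)


with tempfile.NamedTemporaryFile('w', suffix='.py', delete=False) as tmp:
    tmp.write("answer = 6 * 7\n")

ns = {}
execfile(tmp.name, ns)
print(ns['answer'])  # 42
```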
libfuturize/fixes/fix_raise.py
"""Fixer for 'raise E, V'

From Armin Ronacher's ``python-modernize``.

raise         -> raise
raise E       -> raise E
raise E, 5    -> raise E(5)
raise E, 5, T -> raise E(5).with_traceback(T)
raise E, None, T -> raise E.with_traceback(T)

raise (((E, E'), E''), E'''), 5 -> raise E(5)
raise "foo", V, T               -> warns about string exceptions

raise E, (V1, V2) -> raise E(V1, V2)
raise E, (V1, V2), T -> raise E(V1, V2).with_traceback(T)


CAVEATS:
1) "raise E, V, T" cannot be translated safely in general. If V
   is not a tuple or a (number, string, None) literal, then:

   raise E, V, T -> from future.utils import raise_
                    raise_(E, V, T)
"""
# Author: Collin Winter, Armin Ronacher, Mark Huang

# Local imports
from lib2to3 import pytree, fixer_base
from lib2to3.pgen2 import token
from lib2to3.fixer_util import Name, Call, is_tuple, Comma, Attr, ArgList

from libfuturize.fixer_util import touch_import_top


class FixRaise(fixer_base.BaseFix):

    BM_compatible = True
    PATTERN = """
    raise_stmt< 'raise' exc=any [',' val=any [',' tb=any]] >
    """

    def transform(self, node, results):
        syms = self.syms

        exc = results["exc"].clone()
        if exc.type == token.STRING:
            msg = "Python 3 does not support string exceptions"
            self.cannot_convert(node, msg)
            return

        # Python 2 supports
        #  raise ((((E1, E2), E3), E4), E5), V
        # as a synonym for
        #  raise E1, V
        # Since Python 3 will not support this, we recurse down any tuple
        # literals, always taking the first element.
        if is_tuple(exc):
            while is_tuple(exc):
                # exc.children[1:-1] is the unparenthesized tuple
                # exc.children[1].children[0] is the first element of the tuple
                exc = exc.children[1].children[0].clone()
            exc.prefix = u" "

        if "tb" in results:
            tb = results["tb"].clone()
        else:
            tb = None

        if "val" in results:
            val = results["val"].clone()
            if is_tuple(val):
                # Assume that exc is a subclass of Exception and call exc(*val).
                args = [c.clone() for c in val.children[1:-1]]
                exc = Call(exc, args)
            elif val.type in (token.NUMBER, token.STRING):
                # Handle numeric and string literals specially, e.g.
                # "raise Exception, 5" -> "raise Exception(5)".
                val.prefix = u""
                exc = Call(exc, [val])
            elif val.type == token.NAME and val.value == u"None":
                # Handle None specially, e.g.
                # "raise Exception, None" -> "raise Exception".
                pass
            else:
                # val is some other expression. If val evaluates to an instance
                # of exc, it should just be raised. If val evaluates to None,
                # a default instance of exc should be raised (as above). If val
                # evaluates to a tuple, exc(*val) should be called (as
                # above). Otherwise, exc(val) should be called. We can only
                # tell what to do at runtime, so defer to future.utils.raise_(),
                # which handles all of these cases.
                touch_import_top(u"future.utils", u"raise_", node)
                exc.prefix = u""
                args = [exc, Comma(), val]
                if tb is not None:
                    args += [Comma(), tb]
                return Call(Name(u"raise_"), args, prefix=node.prefix)

        if tb is not None:
            tb.prefix = ""
            exc_list = Attr(exc, Name('with_traceback')) + [ArgList([tb])]
        else:
            exc_list = [exc]

        return pytree.Node(syms.raise_stmt,
                           [Name(u"raise")] + exc_list,
                           prefix=node.prefix)
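The `with_traceback()` form the fixer emits for `raise E, V, T` behaves like this at runtime, sketched standalone:

```python
import sys

try:
    raise ValueError("original failure")
except ValueError:
    tb = sys.exc_info()[2]  # the T in `raise E, V, T`

try:
    # Py3 spelling of the Py2 statement `raise RuntimeError, "rewrapped", tb`:
    raise RuntimeError("rewrapped").with_traceback(tb)
except RuntimeError as e:
    caught = e  # `e` is unbound after the except block, so keep a reference

print(caught)                            # rewrapped
print(caught.__traceback__ is not None)  # True -- the old traceback is chained on
```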
libfuturize/fixes/fix_input.py
"""
Fixer for input.

Does a check for `from builtins import input` before running the lib2to3 fixer.
The fixer will not run when the import is already present.


this:
    a = input()
becomes:
    from builtins import input
    a = eval(input())

and this:
    from builtins import input
    a = input()
becomes (no change):
    from builtins import input
    a = input()
"""

import lib2to3.fixes.fix_input
from lib2to3.fixer_util import does_tree_import


class FixInput(lib2to3.fixes.fix_input.FixInput):
    def transform(self, node, results):

        if does_tree_import('builtins', 'input', node):
            return

        return super(FixInput, self).transform(node, results)
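The semantic gap being papered over, shown with a canned string rather than an interactive prompt: Python 2's `input()` evaluated what was typed, while `builtins.input` (like Python 3's) returns it verbatim, hence the `eval(input())` rewrite.

```python
typed = "2 + 3"     # what the user would have typed at the prompt
print(eval(typed))  # 5      -- Python 2 input() behaviour
print(typed)        # 2 + 3  -- Python 3 / builtins input() behaviour
```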
libfuturize/fixes/fix_cmp.py
# coding: utf-8
"""
Fixer for the cmp() function on Py2, which was removed in Py3.

Adds this import line::

    from past.builtins import cmp

if cmp() is called in the code.
"""

from __future__ import unicode_literals
from lib2to3 import fixer_base

from libfuturize.fixer_util import touch_import_top


expression = "name='cmp'"


class FixCmp(fixer_base.BaseFix):
    BM_compatible = True
    run_order = 9

    PATTERN = """
              power<
                 ({0}) trailer< '(' args=[any] ')' >
              rest=any* >
              """.format(expression)

    def transform(self, node, results):
        name = results["name"]
        touch_import_top(u'past.builtins', name.value, node)
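`past.builtins.cmp` restores Python 2's three-way comparison. The usual pure-Py3 equivalent for orderable operands (an assumption here, not necessarily the package's exact code) is:

```python
def cmp(x, y):
    # -1 if x < y, 0 if equal, 1 if x > y -- Python 2's builtin semantics
    # for orderable operands.
    return (x > y) - (x < y)


print(cmp(1, 2), cmp(2, 2), cmp(3, 2))  # -1 0 1
```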
past.builtins and leaves the code alone.

Adds this import line::

    from past.builtins import execfile

for the function execfile() that was removed from Py3.
�)�unicode_literals)�
fixer_base)�touch_import_topzname='execfile'c@s&eZdZdZdZd�e�Zdd�ZdS)�FixExecfileT�	zs
              power<
                 ({0}) trailer< '(' args=[any] ')' >
              rest=any* >
              cCs|d}td|j|�dS)N�namez
past.builtins)r�value)�self�node�resultsr�r�H/usr/local/lib/python3.9/site-packages/libfuturize/fixes/fix_execfile.py�	transform#szFixExecfile.transformN)	�__name__�
__module__�__qualname__Z
BM_compatibleZ	run_order�format�
expressionZPATTERNrrrrr
rs�rN)
�__doc__�
__future__rZlib2to3rZlibfuturize.fixer_utilrrZBaseFixrrrrr
�<module>s
PKAu\��޺	�	6libfuturize/fixes/__pycache__/fix_raise.cpython-39.pycnu�[���a

��?h@�@sbdZddlmZmZddlmZddlmZmZm	Z	m
Z
mZmZddl
mZGdd�dej�ZdS)	a�Fixer for 'raise E, V'

From Armin Ronacher's ``python-modernize``.

raise         -> raise
raise E       -> raise E
raise E, 5    -> raise E(5)
raise E, 5, T -> raise E(5).with_traceback(T)
raise E, None, T -> raise E.with_traceback(T)

raise (((E, E'), E''), E'''), 5 -> raise E(5)
raise "foo", V, T               -> warns about string exceptions

raise E, (V1, V2) -> raise E(V1, V2)
raise E, (V1, V2), T -> raise E(V1, V2).with_traceback(T)


CAVEATS:
1) "raise E, V, T" cannot be translated safely in general. If V
   is not a tuple or a (number, string, None) literal, then:

   raise E, V, T -> from future.utils import raise_
                    raise_(E, V, T)
�)�pytree�
fixer_base)�token)�Name�Call�is_tuple�Comma�Attr�ArgList)�touch_import_topc@seZdZdZdZdd�ZdS)�FixRaiseTzB
    raise_stmt< 'raise' exc=any [',' val=any [',' tb=any]] >
    c
Cs�|j}|d��}|jtjkr2d}|�||�dSt|�r^t|�rX|jdjd��}q:d|_d|vrt|d��}nd}d|v�rB|d��}t|�r�dd	�|jdd
�D�}t	||�}n�|jtj
tjfvr�d|_t	||g�}nb|jtjkr�|jdkr�nJt
d
d|�d|_|t�|g}|du�r.|t�|g7}t	td�||jd�S|du�rnd|_t|td��t|g�g}	n|g}	tj|jtd�g|	|jd�S)N�excz+Python 3 does not support string exceptions�r� �tb�valcSsg|]}|���qS�)�clone)�.0�crr�E/usr/local/lib/python3.9/site-packages/libfuturize/fixes/fix_raise.py�
<listcomp>I�z&FixRaise.transform.<locals>.<listcomp>�����Nonezfuture.utilsZraise_)�prefix�with_traceback�raise)�symsr�typer�STRINGZcannot_convertr�childrenrr�NUMBER�NAME�valuerrrr	r
r�NodeZ
raise_stmt)
�self�node�resultsrr
�msgrr�argsZexc_listrrr�	transform*sJ
	

�zFixRaise.transformN)�__name__�
__module__�__qualname__Z
BM_compatibleZPATTERNr,rrrrr#srN)�__doc__Zlib2to3rrZ
lib2to3.pgen2rZlib2to3.fixer_utilrrrrr	r
Zlibfuturize.fixer_utilrZBaseFixrrrrr�<module>s
 PK$Au\v�OC7libfuturize/fixes/__pycache__/fix_object.cpython-39.pycnu�[���a

��?h��@s2dZddlmZddlmZGdd�dej�ZdS)zf
Fixer that adds ``from builtins import object`` if there is a line
like this:
    class Foo(object):
�)�
fixer_base��touch_import_topc@seZdZdZdd�ZdS)�	FixObjectz<classdef< 'class' NAME '(' name='object' ')' colon=':' any >cCstdd|�dS)N�builtins�objectr)�self�node�results�r�F/usr/local/lib/python3.9/site-packages/libfuturize/fixes/fix_object.py�	transformszFixObject.transformN)�__name__�
__module__�__qualname__ZPATTERNr
rrrrrsrN)�__doc__Zlib2to3rZlibfuturize.fixer_utilrZBaseFixrrrrr�<module>sPK'Au\�&D�334libfuturize/fixes/__pycache__/fix_cmp.cpython-39.pycnu�[���a

��?h��@sBdZddlmZddlmZddlmZdZGdd�dej�Z	dS)	z�
Fixer for the cmp() function on Py2, which was removed in Py3.

Adds this import line::

    from past.builtins import cmp

if cmp() is called in the code.
�)�unicode_literals)�
fixer_base)�touch_import_topz
name='cmp'c@s&eZdZdZdZd�e�Zdd�ZdS)�FixCmpT�	zs
              power<
                 ({0}) trailer< '(' args=[any] ')' >
              rest=any* >
              cCs|d}td|j|�dS)N�namez
past.builtins)r�value)�self�node�resultsr�r�C/usr/local/lib/python3.9/site-packages/libfuturize/fixes/fix_cmp.py�	transformszFixCmp.transformN)	�__name__�
__module__�__qualname__Z
BM_compatibleZ	run_order�format�
expressionZPATTERNrrrrr
rs�rN)
�__doc__�
__future__rZlib2to3rZlibfuturize.fixer_utilrrZBaseFixrrrrr
�<module>s

PK*Au\�#N�6libfuturize/fixes/__pycache__/fix_bytes.cpython-39.pycnu�[���a

��?h��@sPdZddlmZddlZddlmZddlmZe�d�Z	Gdd�dej
�ZdS)	z�Optional fixer that changes all unprefixed string literals "..." to b"...".

br'abcd' is a SyntaxError on Python 2 but valid on Python 3.
ur'abcd' is a SyntaxError on Python 3 but valid on Python 2.

�)�unicode_literalsN)�token)�
fixer_basez[^bBuUrR]?[\'\"]c@seZdZdZdZdd�ZdS)�FixBytesT�STRINGcCs4|jtjkr0t�|j�r0|��}d|j|_|SdS)N�b)�typerr�_literal_re�match�value�clone)�self�node�results�new�r�E/usr/local/lib/python3.9/site-packages/libfuturize/fixes/fix_bytes.py�	transforms
zFixBytes.transformN)�__name__�
__module__�__qualname__Z
BM_compatibleZPATTERNrrrrrrsr)�__doc__�
__future__r�reZ
lib2to3.pgen2rZlib2to3r�compiler	ZBaseFixrrrrr�<module>s
PK,Au\�o��e
e
6libfuturize/fixes/__pycache__/fix_print.cpython-39.pycnu�[���a

��?h)�@s\dZddlmZmZmZddlmZddlmZm	Z	m
Z
mZe�d�Z
Gdd�dej�ZdS)	a,Fixer for print.

Change:
    "print"          into "print()"
    "print ..."      into "print(...)"
    "print(...)"     not changed
    "print ... ,"    into "print(..., end=' ')"
    "print >>x, ..." into "print(..., file=x)"

No changes are applied if print_function is imported from __future__

�)�patcomp�pytree�
fixer_base)�token)�Name�Call�Comma�Stringz8atom< '(' [arith_expr|atom|power|term|STRING|NAME] ')' >c@s$eZdZdZdZdd�Zdd�ZdS)�FixPrintTzP
              simple_stmt< any* bare='print' any* > | print_stmt
              cCs�|sJ�|�d�}|r4|�ttd�g|jd��dS|jdtd�ksJJ�|jdd�}t|�dkrvt�|d�rvdSd}}}|r�|dt	�kr�|dd�}d}dd	�|d�
�D�}|r�|djdd
kr�|djdd�dvr�d
}|�r2|dt�
tjd�k�r2t|�dk�sJ�|d��}|dd�}dd	�|D�}	|	�rPd
|	d_|du�sn|du�sn|du�r�|du�r�|�|	dtt|���|du�r�|�|	dtt|���|du�r�|�|	d|�ttd�|	�}
|j|
_|
S)NZbare�print)�prefixr����� cSsg|]}|jtjkr|�qS�)�typer�STRING)�.0Zleafrr�E/usr/local/lib/python3.9/site-packages/libfuturize/fixes/fix_print.py�
<listcomp>?�z&FixPrint.transform.<locals>.<listcomp>�r���)z\tz\nz\r�z>>��cSsg|]}|���qSr)�clone)r�argrrrrKr�sep�end�file)�get�replacerrr�children�len�parend_expr�matchrZleaves�valuer�Leafr�
RIGHTSHIFTr�	add_kwargr	�repr)�self�node�resultsZ
bare_print�argsrrr Z
string_leavesZl_argsZn_stmtrrr�	transform$sP
����



zFixPrint.transformcCsNd|_t�|jjt|�t�tjd�|f�}|r@|�	t
��d|_|�	|�dS)Nr�=r)rr�NodeZsyms�argumentrr(r�EQUAL�appendr)r,Zl_nodesZs_kwdZn_exprZ
n_argumentrrrr*^s
��zFixPrint.add_kwargN)�__name__�
__module__�__qualname__Z
BM_compatibleZPATTERNr0r*rrrrr
s:r
N)�__doc__Zlib2to3rrrZ
lib2to3.pgen2rZlib2to3.fixer_utilrrrr	�compile_patternr%ZBaseFixr
rrrr�<module>s�PK/Au\�����:libfuturize/fixes/__pycache__/fix_metaclass.cpython-39.pycnu�[���a

��?hb%�@s�dZddlmZddlmZddlmZmZmZm	Z	m
Z
mZmZm
Z
mZdd�Zdd�Zd	d
�Zdd�Zd
d�Zdd�ZGdd�dej�ZdS)a�Fixer for __metaclass__ = X -> (future.utils.with_metaclass(X)) methods.

   The various forms of classef (inherits nothing, inherits once, inherints
   many) don't parse the same in the CST so we look at ALL classes for
   a __metaclass__ and if we find one normalize the inherits to all be
   an arglist.

   For one-liner classes ('class X: pass') there is no indent/dedent so
   we normalize those into having a suite.

   Moving the __metaclass__ into the classdef can also cause the class
   body to be empty so there is some special casing for that as well.

   This fixer also tries very hard to keep original indenting and spacing
   in all those corner cases.
�)�
fixer_base)�token)	�Name�syms�Node�Leaf�touch_import�Call�String�Comma�parenthesizecCsz|jD]n}|jtjkr"t|�S|jtjkr|jr|jd}|jtjkr|jr|jd}t|t�r|j	dkrdSqdS)z� we have to check the cls_node without changing it.
        There are two possibilities:
          1)  clsdef => suite => simple_stmt => expr_stmt => Leaf('__meta')
          2)  clsdef => simple_stmt => expr_stmt => Leaf('__meta')
    r�
__metaclass__TF)
�children�typer�suite�
has_metaclass�simple_stmt�	expr_stmt�
isinstancer�value)�parent�node�	expr_nodeZ	left_side�r�I/usr/local/lib/python3.9/site-packages/libfuturize/fixes/fix_metaclass.pyr&s



�rcCs�|jD]}|jtjkrdSqt|j�D]\}}|jtjkr(qJq(td��ttjg�}|j|dd�r�|j|d}|�	|�
��|��qV|�	|�|}dS)zf one-line classes don't get a suite in the parse tree so we add
        one to normalize the tree
    NzNo class suite and no ':'!�)rrrr�	enumerater�COLON�
ValueErrorr�append_child�clone�remove)�cls_noder�ir�	move_noderrr�fixup_parse_tree9s


r%c
Cs�t|j�D]\}}|jtjkr
q(q
dS|��ttjg�}ttj	|g�}|j|d�rz|j|}|�
|���|��qJ|�||�|jdjd}|jdjd}	|	j
|_
dS)z� if there is a semi-colon all the parts count as part of the same
        simple_stmt.  We just want the __metaclass__ part so we move
        everything efter the semi-colon into its own simple_stmt node
    Nr)rrrr�SEMIr!rrrrrr �insert_child�prefix)
rr#Z	stmt_nodeZsemi_indrZnew_exprZnew_stmtr$Z	new_leaf1Z	old_leaf1rrr�fixup_simple_stmtSs

r)cCs*|jr&|jdjtjkr&|jd��dS)N���)rrr�NEWLINEr!)rrrr�remove_trailing_newlineksr,ccs�|jD]}|jtjkrq$qtd��tt|j��D]t\}}|jtjkr2|jr2|jd}|jtjkr2|jr2|jd}t	|t
�r2|jdkr2t|||�t
|�|||fVq2dS)NzNo class suite!rr
)rrrrr�listrrrrrrr)r,)r"rr#Zsimple_noderZ	left_noderrr�
find_metasps



�r.cCsz|jddd�}|r,|��}|jtjkrq,q|rv|��}t|t�r^|jtjkr^|jrZd|_dS|�	|jddd��q,dS)z� If an INDENT is followed by a thing with a prefix then nuke the prefix
        Otherwise we get in trouble when removing __metaclass__ at suite start
    Nr*�)
r�poprr�INDENTrr�DEDENTr(�extend)rZkidsrrrr�fixup_indent�sr4c@seZdZdZdZdd�ZdS)�FixMetaclassTz
    classdef<any*>
    cCs�t|�sdSt|�d}t|�D]\}}}|}|��q |jdj}t|j�dkr�|jdjtjkrp|jd}n(|jd�	�}	t
tj|	g�}|�d|�n�t|j�dkr�t
tjg�}|�d|�nZt|j�dk�rt
tjg�}|�dt
tjd��|�d|�|�dt
tjd��ntd	��|jdjd}
d
|
_|
j}tdd|�|jdjd�	�}d
|_|g}
|j�r�t|j�dk�r�|jd�	�}d|_nVt|�	��}d|_ttd�td�t�|t�t
tjt
tjd�t
tjd�gdd�gdd�}|
�t�|g�|�ttd|jd�|
��t|�|j�sX|��t
|d�}||_|� |�|� t
tj!d��nbt|j�dk�r�|jdjtj"k�r�|jdjtj#k�r�t
|d�}|�d|�|�dt
tj!d��dS)Nr������)�(zUnexpected class definition�	metaclasszfuture.utils�with_metaclassr/r� rz	'NewBase'�{�})r(�pass�
���r*)$rr%r.r!rr�lenr�arglistr rZ	set_childr'rr�RPAR�LPARrrr(rrr	rr
rZatom�LBRACE�RBRACEr3�replacer4rr+r1r2)�selfr�resultsZlast_metaclassrr#�stmt�	text_typerFrZmeta_txtZorig_meta_prefixr=�	arguments�base�basesZ	pass_leafrrr�	transform�s�
��
��

��
zFixMetaclass.transformN)�__name__�
__module__�__qualname__Z
BM_compatibleZPATTERNrSrrrrr5�sr5N)�__doc__Zlib2to3rZlib2to3.pygramrZlib2to3.fixer_utilrrrrrr	r
rrrr%r)r,r.r4ZBaseFixr5rrrr�<module>s,PK1Au\�"8���9libfuturize/fixes/__pycache__/fix_UserDict.cpython-39.pycnu�[���a

��?h�@sPdZddlmZddlmZmZddlmZmZm	Z	ddiZ
Gdd�de	�Zd	S)
zCFix UserDict.

Incomplete!

TODO: base this on fix_urllib perhaps?
�)�
fixer_base)�Name�
attr_chain)�
alternates�
build_pattern�
FixImports�UserDict�collectionscsTeZdZdZdZeZdZdd�Z�fdd�Z	�fdd�Z
�fd	d
�Zdd�Z�Z
S)
�FixUserdictT�cCsd�t|j��S)N�|)�joinr�mapping��self�r�H/usr/local/lib/python3.9/site-packages/libfuturize/fixes/fix_UserDict.pyr5szFixUserdict.build_patterncs|��|_tt|���dS�N)rZPATTERN�superr�compile_patternr��	__class__rrr8s
zFixUserdict.compile_patterncsHtt|�j��|�}|rDd|vr@t�fdd�t|d�D��r@dS|SdS)N�bare_with_attrc3s|]}�|�VqdSrr)�.0�obj��matchrr�	<genexpr>F�z$FixUserdict.match.<locals>.<genexpr>�parentF)rrr�anyr)r�node�resultsrrrr?s�zFixUserdict.matchcstt|��||�i|_dSr)rr�
start_tree�replace)r�tree�filenamerrrr#KszFixUserdict.start_treecCs�|�d�}|rl|j}t|j|�}|�t||jd��d|vrH||j|<d|vr�|�|�}|r�|�||�n2|dd}|j�|j�}|r�|�t||jd��dS)N�module_name)�prefixZname_importZmultiple_importsrr)	�get�value�unicoderr$rr(r�	transform)rr!r"Z
import_mod�mod_name�new_nameZ	bare_namerrrr,Os


zFixUserdict.transform)�__name__�
__module__�__qualname__Z
BM_compatibleZkeep_line_order�MAPPINGrZ	run_orderrrrr#r,�
__classcell__rrrrr
*sr
N)�__doc__Zlib2to3rZlib2to3.fixer_utilrrZlib2to3.fixes.fix_importsrrrr2r
rrrr�<module>s
	PK4Au\�+�((Hlibfuturize/fixes/__pycache__/fix_future_standard_library.cpython-39.pycnu�[���a

��?h��@s0dZddlmZddlmZGdd�de�ZdS)a
For the ``future`` package.

Changes any imports needed to reflect the standard library reorganization. Also
Also adds these import lines:

    from future import standard_library
    standard_library.install_aliases()

after any __future__ imports but before any other imports.
�)�
FixImports)�touch_import_topcs eZdZdZ�fdd�Z�ZS)�FixFutureStandardLibrary�cs"tt|��||�}tdd|�|S)N�futureZstandard_library)�superr�	transformr)�self�node�results�result��	__class__��W/usr/local/lib/python3.9/site-packages/libfuturize/fixes/fix_future_standard_library.pyrsz"FixFutureStandardLibrary.transform)�__name__�
__module__�__qualname__Z	run_orderr�
__classcell__rrr
rrsrN)�__doc__Zlib2to3.fixes.fix_importsrZlibfuturize.fixer_utilrrrrrr�<module>sPK7Au\�ú�kkClibfuturize/fixes/__pycache__/fix_xrange_with_import.cpython-39.pycnu�[���a

��?h��@s0dZddlmZddlmZGdd�de�ZdS)z�
For the ``future`` package.

Turns any xrange calls into range calls and adds this import line:

    from builtins import range

at the top.
�)�	FixXrange)�touch_import_topcseZdZ�fdd�Z�ZS)�FixXrangeWithImportcs"tt|��||�}tdd|�|S)N�builtins�range)�superr�	transformr)�self�node�results�result��	__class__��R/usr/local/lib/python3.9/site-packages/libfuturize/fixes/fix_xrange_with_import.pyrszFixXrangeWithImport.transform)�__name__�
__module__�__qualname__r�
__classcell__rrr
rrsrN)�__doc__Zlib2to3.fixes.fix_xrangerZlibfuturize.fixer_utilrrrrrr�<module>s
PK:Au\�h�,,6libfuturize/fixes/__pycache__/fix_input.cpython-39.pycnu�[���a

��?h��@s2dZddlZddlmZGdd�dejjj�ZdS)aq
Fixer for input.

Does a check for `from builtins import input` before running the lib2to3 fixer.
The fixer will not run when the input is already present.


this:
    a = input()
becomes:
    from builtins import input
    a = eval(input())

and this:
    from builtins import input
    a = input()
becomes (no change):
    from builtins import input
    a = input()
�N)�does_tree_importcseZdZ�fdd�Z�ZS)�FixInputcs"tdd|�rdStt|��||�S)N�builtins�input)r�superr�	transform)�self�node�results��	__class__��E/usr/local/lib/python3.9/site-packages/libfuturize/fixes/fix_input.pyrszFixInput.transform)�__name__�
__module__�__qualname__r�
__classcell__r
r
rrrsr)�__doc__Zlib2to3.fixes.fix_inputZlib2to3Zlib2to3.fixer_utilr�fixesZ	fix_inputrr
r
r
r�<module>sPK=Au\X�Xebb<libfuturize/fixes/__pycache__/fix_oldstr_wrap.cpython-39.pycnu�[���a

��?h��@spdZddlmZddlZddlmZddlmZddlm	Z	ddl
mZmZm
Z
e�d�ZGd	d
�d
ej�ZdS)a
For the ``future`` package.

Adds this import line:

    from past.builtins import str as oldstr

at the top and wraps any unadorned string literals 'abc' or explicit byte-string
literals b'abc' in oldstr() calls so the code has the same behaviour on Py3 as
on Py2.6/2.7.
�)�unicode_literalsN)�
fixer_base)�token)�syms)�
future_import�touch_import_top�wrap_in_fn_callz[^uUrR]?[\'\"]c@seZdZdZdZdd�ZdS)�
FixOldstrWrapT�STRINGcCsX|jtjkrTtdd|�t�|j�rT|��}d|_d|j|_t	d|g|jd�}|SdS)Nz
past.typesZoldstr��b)�prefix)
�typerr
r�_literal_re�match�value�cloner
r)�self�node�results�new�wrapped�r�K/usr/local/lib/python3.9/site-packages/libfuturize/fixes/fix_oldstr_wrap.py�	transformszFixOldstrWrap.transformN)�__name__�
__module__�__qualname__Z
BM_compatibleZPATTERNrrrrrr	sr	)�__doc__�
__future__r�reZlib2to3rZ
lib2to3.pgen2rZlib2to3.fixer_utilrZlibfuturize.fixer_utilrrr�compilerZBaseFixr	rrrr�<module>s
PKAAu\�JD;�	�	@libfuturize/fixes/__pycache__/fix_absolute_import.cpython-39.pycnu�[���a

��?hD�@sddZddlmZmZmZmZddlmZddlm	Z	m
Z
ddlmZddlm
Z
Gdd�de�Zd	S)
a�
Fixer for import statements, with a __future__ import line.

Based on lib2to3/fixes/fix_import.py, but extended slightly so it also
supports Cython modules.

If spam is being imported from the local directory, this import:
    from spam import eggs
becomes:
    from __future__ import absolute_import
    from .spam import eggs

and this import:
    import spam
becomes:
    from __future__ import absolute_import
    from . import spam
�)�dirname�join�exists�sep)�	FixImport)�
FromImport�syms)�traverse_imports)�
future_importc@s eZdZdZdd�Zdd�ZdS)�FixAbsoluteImport�	cCs�|jr
dS|d}|jtjkr`t|d�s4|jd}q|�|j�r�d|j|_|��t	d|�ndd}d}t
|�D]}|�|�r�d}qpd}qp|r�|r�|�|d	�dStd|g�}|j
|_
t	d|�|SdS)
z�
        Copied from FixImport.transform(), but with this line added in
        any modules that had implicit relative imports changed:

            from __future__ import absolute_import"
        N�imp�valuer�.�absolute_importFTz#absolute and local imports together)�skip�typerZimport_from�hasattr�children�probably_a_local_importr�changedr
r	�warningr�prefix)�self�node�resultsr
Z
have_localZ
have_absolute�mod_name�new�r�O/usr/local/lib/python3.9/site-packages/libfuturize/fixes/fix_absolute_import.py�	transforms0


zFixAbsoluteImport.transformcCsv|�d�rdS|�dd�d}t|j�}t||�}ttt|�d��sHdSdtddd	d
dfD]}t||�rZdSqZdS)
zq
        Like the corresponding method in the base class, but this also
        supports Cython modules.
        rF�rz__init__.pyz.pyz.pycz.soz.slz.pydz.pyxT)�
startswith�splitr�filenamerrr)rZimp_name�	base_path�extrrrrIs


z)FixAbsoluteImport.probably_a_local_importN)�__name__�
__module__�__qualname__Z	run_orderr rrrrrrs*rN)�__doc__�os.pathrrrrZlib2to3.fixes.fix_importrZlib2to3.fixer_utilrrr	Zlibfuturize.fixer_utilr
rrrrr�<module>sPKDAu\Iڭ;libfuturize/fixes/__pycache__/fix_basestring.cpython-39.pycnu�[���a

��?h��@s2dZddlmZddlmZGdd�dej�ZdS)zd
Fixer that adds ``from past.builtins import basestring`` if there is a
reference to ``basestring``
�)�
fixer_base��touch_import_topc@seZdZdZdZdd�ZdS)�
FixBasestringTz'basestring'cCstdd|�dS)Nz
past.builtins�
basestringr)�self�node�results�r
�J/usr/local/lib/python3.9/site-packages/libfuturize/fixes/fix_basestring.py�	transformszFixBasestring.transformN)�__name__�
__module__�__qualname__Z
BM_compatibleZPATTERNrr
r
r
rrsrN)�__doc__Zlib2to3rZlibfuturize.fixer_utilrZBaseFixrr
r
r
r�<module>sPKFAu\�&�˰�Blibfuturize/fixes/__pycache__/fix_print_with_import.cpython-39.pycnu�[���a

��?h��@s0dZddlmZddlmZGdd�de�ZdS)z�
For the ``future`` package.

Turns any print statements into functions and adds this import line:

    from __future__ import print_function

at the top to retain compatibility with Python 2.6+.
�)�FixPrint)�
future_importcs eZdZdZ�fdd�Z�ZS)�FixPrintWithImport�cs td|�tt|��||�}|S)N�print_function)r�superr�	transform)�self�node�resultsZn_stmt��	__class__��Q/usr/local/lib/python3.9/site-packages/libfuturize/fixes/fix_print_with_import.pyrs
zFixPrintWithImport.transform)�__name__�
__module__�__qualname__Z	run_orderr�
__classcell__rrrrrsrN)�__doc__Zlibfuturize.fixes.fix_printrZlibfuturize.fixer_utilrrrrrr�<module>s
PKIAu\H�aU))]libfuturize/fixes/__pycache__/fix_add__future__imports_except_unicode_literals.cpython-39.pycnu�[���a

��?h��@s2dZddlmZddlmZGdd�dej�ZdS)z�
Fixer for adding:

    from __future__ import absolute_import
    from __future__ import division
    from __future__ import print_function

This is "stage 1": hopefully uncontroversial changes.

Stage 2 adds ``unicode_literals``.
�)�
fixer_base��
future_importc@s eZdZdZdZdZdd�ZdS)�(FixAddFutureImportsExceptUnicodeLiteralsTZ
file_input�	cCs"td|�td|�td|�dS)N�absolute_import�division�print_functionr)�self�node�results�r
�l/usr/local/lib/python3.9/site-packages/libfuturize/fixes/fix_add__future__imports_except_unicode_literals.py�	transforms

z2FixAddFutureImportsExceptUnicodeLiterals.transformN)�__name__�
__module__�__qualname__Z
BM_compatibleZPATTERNZ	run_orderrr
r
r
rrsrN)�__doc__Zlib2to3rZlibfuturize.fixer_utilrZBaseFixrr
r
r
r�<module>sPKKAu\Yu��>libfuturize/fixes/__pycache__/fix_division_safe.cpython-39.pycnu�[���a

��?h/�@s|dZddlZddlmZmZmZddlmZddlm	Z	m
Z
mZmZdd�Z
e�d�Zd	d
�Zdd�ZGd
d�dej�ZdS)aL
For the ``future`` package.

Adds this import line:

    from __future__ import division

at the top and changes any old-style divisions to be calls to
past.utils.old_div so the code runs as before on Py2.6/2.7 and has the same
behaviour on Py3.

If "from __future__ import division" is already in effect, this fixer does
nothing.
�N)�Leaf�Node�Comma)�
fixer_base)�token�
future_import�touch_import_top�wrap_in_fn_callcCs,tj}|j|ko*|jj|ko*|jj|kS)zw
    __future__.division redefines the meaning of a single slash for division,
    so we match that and only that.
    )r�SLASH�type�next_sibling�prev_sibling)�nodeZslash�r�M/usr/local/lib/python3.9/site-packages/libfuturize/fixes/fix_division_safe.py�match_divisions�rz^[0-9]*[.][0-9]*$cCst|j�pt|j�S)N)�
_is_floatyr
r)rrrr�	is_floaty"srcCsVt|t�r|d}t|t�r(t�|j�St|t�rRt|jdt�rR|jdjdkSdS)Nr�floatF)�
isinstance�listr�const_re�match�valuer�children)�exprrrrr&s


rcs:eZdZdZejZdZ�fdd�Zdd�Z	dd�Z
�ZS)	�FixDivisionSafe�z4
    term<(not('/') any)+ '/' ((not('/') any))>
    cs"tt|��||�d|jv|_dS)zO
        Skip this fixer if "__future__.division" is already imported.
        �divisionN)�superr�
start_treeZfuture_features�skip)�self�tree�name��	__class__rrr >szFixDivisionSafe.start_treecCs�|j|jjkr�d}d}g}|jD]b}|r.d}q t|�rtt|�std}d|d_td|t�|j	�
�g|jd�g}d}q |�|�
��q |r�tt
d�r�t
|j||jd�St
|j|�SdS)	z�
        Since the tree needs to be fixed once and only once if and only if it
        matches, we can start discarding matches after the first.
        FT�r�old_div)�prefix�fixers_applied)r*)rZsymsZtermrrrr)r	rr�clone�append�hasattrrr*)r"r�matchedr!r�childrrrrEs,

�
zFixDivisionSafe.matchcCs$|jr
dStd|�tdd|�|S)Nrz
past.utilsr()r!rr)r"r�resultsrrr�	transformhs

zFixDivisionSafe.transform)�__name__�
__module__�__qualname__Z	run_orderrr
Z_accept_typeZPATTERNr rr1�
__classcell__rrr%rr4s#r)�__doc__�reZlib2to3.fixer_utilrrrZlib2to3rZlibfuturize.fixer_utilrrrr	r�compilerrrZBaseFixrrrrr�<module>s	
PKNAu\;ɧ���Llibfuturize/fixes/__pycache__/fix_remove_old__future__imports.cpython-39.pycnu�[���a

��?hS�@s2dZddlmZddlmZGdd�dej�ZdS)a�
Fixer for removing any of these lines:

    from __future__ import with_statement
    from __future__ import nested_scopes
    from __future__ import generators

The reason is that __future__ imports like these are required to be the first
line of code (after docstrings) on Python 2.6+, which can get in the way.

These imports are always enabled in Python 2.6+, which is the minimum sane
version to target for Py2/3 compatibility.
�)�
fixer_base��remove_future_importc@s eZdZdZdZdZdd�ZdS)�FixRemoveOldFutureImportsTZ
file_input�cCs"td|�td|�td|�dS)N�with_statement�
nested_scopes�
generatorsr)�self�node�results�r
�[/usr/local/lib/python3.9/site-packages/libfuturize/fixes/fix_remove_old__future__imports.py�	transforms

z#FixRemoveOldFutureImports.transformN)�__name__�
__module__�__qualname__Z
BM_compatibleZPATTERNZ	run_orderrr
r
r
rrsrN)�__doc__Zlib2to3rZlibfuturize.fixer_utilrZBaseFixrr
r
r
r�<module>sPKPAu\�ވHlibfuturize/fixes/__pycache__/fix_unicode_literals_import.cpython-39.pycnu�[���a

��?ho�@s2dZddlmZddlmZGdd�dej�ZdS)zA
Adds this import:

    from __future__ import unicode_literals

�)�
fixer_base��
future_importc@s eZdZdZdZdZdd�ZdS)�FixUnicodeLiteralsImportTZ
file_input�	cCstd|�dS)N�unicode_literalsr)�self�node�results�r�W/usr/local/lib/python3.9/site-packages/libfuturize/fixes/fix_unicode_literals_import.py�	transformsz"FixUnicodeLiteralsImport.transformN)�__name__�
__module__�__qualname__Z
BM_compatibleZPATTERNZ	run_orderr
rrrrrsrN)�__doc__Zlib2to3rZlibfuturize.fixer_utilrZBaseFixrrrrr�<module>sPKVAu\*���$libfuturize/fixes/fix_oldstr_wrap.pynu�[���"""
For the ``future`` package.

Adds this import line:

    from past.builtins import str as oldstr

at the top and wraps any unadorned string literals 'abc' or explicit byte-string
literals b'abc' in oldstr() calls so the code has the same behaviour on Py3 as
on Py2.6/2.7.
"""

from __future__ import unicode_literals
import re
from lib2to3 import fixer_base
from lib2to3.pgen2 import token
from lib2to3.fixer_util import syms
from libfuturize.fixer_util import (future_import, touch_import_top,
                                    wrap_in_fn_call)


_literal_re = re.compile(r"[^uUrR]?[\'\"]")


class FixOldstrWrap(fixer_base.BaseFix):
    BM_compatible = True
    PATTERN = "STRING"

    def transform(self, node, results):
        if node.type == token.STRING:
            touch_import_top(u'past.types', u'oldstr', node)
            if _literal_re.match(node.value):
                new = node.clone()
                # Strip any leading space or comments:
                # TODO: check: do we really want to do this?
                new.prefix = u''
                new.value = u'b' + new.value
                wrapped = wrap_in_fn_call("oldstr", [new], prefix=node.prefix)
                return wrapped
PKXAu\�S��b%b%"libfuturize/fixes/fix_metaclass.pynu�[���# coding: utf-8
"""Fixer for __metaclass__ = X -> (future.utils.with_metaclass(X)) methods.

   The various forms of classef (inherits nothing, inherits once, inherints
   many) don't parse the same in the CST so we look at ALL classes for
   a __metaclass__ and if we find one normalize the inherits to all be
   an arglist.

   For one-liner classes ('class X: pass') there is no indent/dedent so
   we normalize those into having a suite.

   Moving the __metaclass__ into the classdef can also cause the class
   body to be empty so there is some special casing for that as well.

   This fixer also tries very hard to keep original indenting and spacing
   in all those corner cases.
"""
# This is a derived work of Lib/lib2to3/fixes/fix_metaclass.py under the
# copyright of the Python Software Foundation, licensed under the Python
# Software Foundation License 2.
#
# Copyright notice:
#
#     Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010,
#     2011, 2012, 2013 Python Software Foundation. All rights reserved.
#
# Full license text: http://docs.python.org/3.4/license.html

# Author: Jack Diederich, Daniel Neuhäuser

# Local imports
from lib2to3 import fixer_base
from lib2to3.pygram import token
from lib2to3.fixer_util import Name, syms, Node, Leaf, touch_import, Call, \
    String, Comma, parenthesize


def has_metaclass(parent):
    """ we have to check the cls_node without changing it.
        There are two possibilities:
          1)  clsdef => suite => simple_stmt => expr_stmt => Leaf('__meta')
          2)  clsdef => simple_stmt => expr_stmt => Leaf('__meta')
    """
    for node in parent.children:
        if node.type == syms.suite:
            return has_metaclass(node)
        elif node.type == syms.simple_stmt and node.children:
            expr_node = node.children[0]
            if expr_node.type == syms.expr_stmt and expr_node.children:
                left_side = expr_node.children[0]
                if isinstance(left_side, Leaf) and \
                        left_side.value == '__metaclass__':
                    return True
    return False


def fixup_parse_tree(cls_node):
    """ one-line classes don't get a suite in the parse tree so we add
        one to normalize the tree
    """
    for node in cls_node.children:
        if node.type == syms.suite:
            # already in the preferred format, do nothing
            return

    # !%@#! one-liners have no suite node, we have to fake one up
    for i, node in enumerate(cls_node.children):
        if node.type == token.COLON:
            break
    else:
        raise ValueError("No class suite and no ':'!")

    # move everything into a suite node
    suite = Node(syms.suite, [])
    while cls_node.children[i+1:]:
        move_node = cls_node.children[i+1]
        suite.append_child(move_node.clone())
        move_node.remove()
    cls_node.append_child(suite)
    node = suite


def fixup_simple_stmt(parent, i, stmt_node):
    """ if there is a semi-colon all the parts count as part of the same
        simple_stmt.  We just want the __metaclass__ part so we move
        everything efter the semi-colon into its own simple_stmt node
    """
    for semi_ind, node in enumerate(stmt_node.children):
        if node.type == token.SEMI: # *sigh*
            break
    else:
        return

    node.remove() # kill the semicolon
    new_expr = Node(syms.expr_stmt, [])
    new_stmt = Node(syms.simple_stmt, [new_expr])
    while stmt_node.children[semi_ind:]:
        move_node = stmt_node.children[semi_ind]
        new_expr.append_child(move_node.clone())
        move_node.remove()
    parent.insert_child(i, new_stmt)
    new_leaf1 = new_stmt.children[0].children[0]
    old_leaf1 = stmt_node.children[0].children[0]
    new_leaf1.prefix = old_leaf1.prefix


def remove_trailing_newline(node):
    if node.children and node.children[-1].type == token.NEWLINE:
        node.children[-1].remove()


def find_metas(cls_node):
    # find the suite node (Mmm, sweet nodes)
    for node in cls_node.children:
        if node.type == syms.suite:
            break
    else:
        raise ValueError("No class suite!")

    # look for simple_stmt[ expr_stmt[ Leaf('__metaclass__') ] ]
    for i, simple_node in list(enumerate(node.children)):
        if simple_node.type == syms.simple_stmt and simple_node.children:
            expr_node = simple_node.children[0]
            if expr_node.type == syms.expr_stmt and expr_node.children:
                # Check if the expr_node is a simple assignment.
                left_node = expr_node.children[0]
                if isinstance(left_node, Leaf) and \
                        left_node.value == u'__metaclass__':
                    # We found a assignment to __metaclass__.
                    fixup_simple_stmt(node, i, simple_node)
                    remove_trailing_newline(simple_node)
                    yield (node, i, simple_node)


def fixup_indent(suite):
    """ If an INDENT is followed by a thing with a prefix then nuke the prefix
        Otherwise we get in trouble when removing __metaclass__ at suite start
    """
    kids = suite.children[::-1]
    # find the first indent
    while kids:
        node = kids.pop()
        if node.type == token.INDENT:
            break

    # find the first Leaf
    while kids:
        node = kids.pop()
        if isinstance(node, Leaf) and node.type != token.DEDENT:
            if node.prefix:
                node.prefix = u''
            return
        else:
            kids.extend(node.children[::-1])


class FixMetaclass(fixer_base.BaseFix):
    BM_compatible = True

    PATTERN = """
    classdef<any*>
    """

    def transform(self, node, results):
        if not has_metaclass(node):
            return

        fixup_parse_tree(node)

        # find metaclasses, keep the last one
        last_metaclass = None
        for suite, i, stmt in find_metas(node):
            last_metaclass = stmt
            stmt.remove()

        text_type = node.children[0].type # always Leaf(nnn, 'class')

        # figure out what kind of classdef we have
        if len(node.children) == 7:
            # Node(classdef, ['class', 'name', '(', arglist, ')', ':', suite])
            #                 0        1       2    3        4    5    6
            if node.children[3].type == syms.arglist:
                arglist = node.children[3]
            # Node(classdef, ['class', 'name', '(', 'Parent', ')', ':', suite])
            else:
                parent = node.children[3].clone()
                arglist = Node(syms.arglist, [parent])
                node.set_child(3, arglist)
        elif len(node.children) == 6:
            # Node(classdef, ['class', 'name', '(',  ')', ':', suite])
            #                 0        1       2     3    4    5
            arglist = Node(syms.arglist, [])
            node.insert_child(3, arglist)
        elif len(node.children) == 4:
            # Node(classdef, ['class', 'name', ':', suite])
            #                 0        1       2    3
            arglist = Node(syms.arglist, [])
            node.insert_child(2, Leaf(token.RPAR, u')'))
            node.insert_child(2, arglist)
            node.insert_child(2, Leaf(token.LPAR, u'('))
        else:
            raise ValueError("Unexpected class definition")

        # now stick the metaclass in the arglist
        meta_txt = last_metaclass.children[0].children[0]
        meta_txt.value = 'metaclass'
        orig_meta_prefix = meta_txt.prefix

        # Was: touch_import(None, u'future.utils', node)
        touch_import(u'future.utils', u'with_metaclass', node)

        metaclass = last_metaclass.children[0].children[2].clone()
        metaclass.prefix = u''

        arguments = [metaclass]

        if arglist.children:
            if len(arglist.children) == 1:
                base = arglist.children[0].clone()
                base.prefix = u' '
            else:
                # Unfortunately six.with_metaclass() only allows one base
                # class, so we have to dynamically generate a base class if
                # there is more than one.
                bases = parenthesize(arglist.clone())
                bases.prefix = u' '
                base = Call(Name('type'), [
                    String("'NewBase'"),
                    Comma(),
                    bases,
                    Comma(),
                    Node(
                        syms.atom,
                        [Leaf(token.LBRACE, u'{'), Leaf(token.RBRACE, u'}')],
                        prefix=u' '
                    )
                ], prefix=u' ')
            arguments.extend([Comma(), base])

        arglist.replace(Call(
            Name(u'with_metaclass', prefix=arglist.prefix),
            arguments
        ))

        fixup_indent(suite)

        # check for empty suite
        if not suite.children:
            # one-liner that was just __metaclass__
            suite.remove()
            pass_leaf = Leaf(text_type, u'pass')
            pass_leaf.prefix = orig_meta_prefix
            node.append_child(pass_leaf)
            node.append_child(Leaf(token.NEWLINE, u'\n'))

        elif len(suite.children) > 1 and \
                 (suite.children[-2].type == token.INDENT and
                  suite.children[-1].type == token.DEDENT):
            # there was only one line in the class body and it was __metaclass__
            pass_leaf = Leaf(text_type, u'pass')
            suite.insert_child(-1, pass_leaf)
            suite.insert_child(-1, Leaf(token.NEWLINE, u'\n'))
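# The transform above rewrites ``class C(Base): __metaclass__ = Meta`` into
# ``class C(with_metaclass(Meta, Base))``. For context, here is a minimal
# sketch of the six/future-style with_metaclass() recipe that the rewritten
# code depends on -- an illustrative reimplementation, not this package's
# actual helper:

```python
def with_metaclass(meta, *bases):
    # A temporary metaclass: when the class statement executes, it hands
    # creation off to the real metaclass with the real bases.
    class metaclass(type):
        def __new__(cls, name, this_bases, d):
            return meta(name, bases, d)
    return type.__new__(metaclass, 'temporary_class', (), {})

class Meta(type):
    pass

class Base(object):
    pass

class C(with_metaclass(Meta, Base)):
    pass

assert type(C) is Meta       # the metaclass was applied
assert issubclass(C, Base)   # the real base class was preserved
```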

# File: libfuturize/fixes/fix_basestring.py
"""
Fixer that adds ``from past.builtins import basestring`` if there is a
reference to ``basestring``
"""

from lib2to3 import fixer_base

from libfuturize.fixer_util import touch_import_top


class FixBasestring(fixer_base.BaseFix):
    BM_compatible = True

    PATTERN = "'basestring'"

    def transform(self, node, results):
        touch_import_top(u'past.builtins', 'basestring', node)
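# For reference, on Python 3 the imported ``past.builtins.basestring`` behaves
# roughly like a (str, bytes) alias in isinstance() checks. The stand-in below
# is an approximation of that documented behavior, not the package's exact
# implementation:

```python
# Rough stand-in for past.builtins.basestring on Python 3 (assumption: the
# real object also accepts both text and byte strings in isinstance checks).
basestring = (str, bytes)

assert isinstance(u"text", basestring)
assert isinstance(b"bytes", basestring)
assert not isinstance(3.14, basestring)
```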

# File: libfuturize/fixes/fix_print.py
# Copyright 2006 Google, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

"""Fixer for print.

Change:
    "print"          into "print()"
    "print ..."      into "print(...)"
    "print(...)"     not changed
    "print ... ,"    into "print(..., end=' ')"
    "print >>x, ..." into "print(..., file=x)"

No changes are applied if print_function is imported from __future__

"""

# Local imports
from lib2to3 import patcomp, pytree, fixer_base
from lib2to3.pgen2 import token
from lib2to3.fixer_util import Name, Call, Comma, String
# from libmodernize import add_future

parend_expr = patcomp.compile_pattern(
              """atom< '(' [arith_expr|atom|power|term|STRING|NAME] ')' >"""
              )


class FixPrint(fixer_base.BaseFix):

    BM_compatible = True

    PATTERN = """
              simple_stmt< any* bare='print' any* > | print_stmt
              """

    def transform(self, node, results):
        assert results

        bare_print = results.get("bare")

        if bare_print:
            # Special-case print all by itself.
            bare_print.replace(Call(Name(u"print"), [],
                               prefix=bare_print.prefix))
            # The "from __future__ import print_function" declaration is added
            # by the fix_print_with_import fixer, so we skip it here.
            # add_future(node, u'print_function')
            return
        assert node.children[0] == Name(u"print")
        args = node.children[1:]
        if len(args) == 1 and parend_expr.match(args[0]):
            # We don't want to keep sticking parens around an
            # already-parenthesised expression.
            return

        sep = end = file = None
        if args and args[-1] == Comma():
            args = args[:-1]
            end = " "

            # try to determine if the string ends in a non-space whitespace character, in which
            # case there should be no space at the end of the conversion
            string_leaves = [leaf for leaf in args[-1].leaves() if leaf.type == token.STRING]
            if (
                string_leaves
                and string_leaves[-1].value[0] != "r"  # "raw" string
                and string_leaves[-1].value[-3:-1] in (r"\t", r"\n", r"\r")
            ):
                end = ""
        if args and args[0] == pytree.Leaf(token.RIGHTSHIFT, u">>"):
            assert len(args) >= 2
            file = args[1].clone()
            args = args[3:] # Strip a possible comma after the file expression
        # Now synthesize a print(args, sep=..., end=..., file=...) node.
        l_args = [arg.clone() for arg in args]
        if l_args:
            l_args[0].prefix = u""
        if sep is not None or end is not None or file is not None:
            if sep is not None:
                self.add_kwarg(l_args, u"sep", String(repr(sep)))
            if end is not None:
                self.add_kwarg(l_args, u"end", String(repr(end)))
            if file is not None:
                self.add_kwarg(l_args, u"file", file)
        n_stmt = Call(Name(u"print"), l_args)
        n_stmt.prefix = node.prefix

        # Note that there are corner cases where adding this future-import is
        # incorrect, for example when the file also has a 'print ()' statement
        # that was intended to print "()".
        # add_future(node, u'print_function')
        return n_stmt

    def add_kwarg(self, l_nodes, s_kwd, n_expr):
        # XXX All this prefix-setting may lose comments (though rarely)
        n_expr.prefix = u""
        n_argument = pytree.Node(self.syms.argument,
                                 (Name(s_kwd),
                                  pytree.Leaf(token.EQUAL, u"="),
                                  n_expr))
        if l_nodes:
            l_nodes.append(Comma())
            n_argument.prefix = u" "
        l_nodes.append(n_argument)
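# The keyword arguments synthesized above rely on straightforward print()
# runtime equivalences, which can be sanity-checked directly (plain Python 3,
# no lib2to3 required):

```python
import io

# "print >>x, a, b"  becomes  print(a, b, file=x)
buf = io.StringIO()
print("a", "b", file=buf)
assert buf.getvalue() == "a b\n"

# "print a,"  (trailing comma)  becomes  print(a, end=' ')
buf = io.StringIO()
print("a", end=" ", file=buf)
assert buf.getvalue() == "a "
```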

# File: libfuturize/fixes/fix_future_standard_library_urllib.py
"""
For the ``future`` package.

A special fixer that ensures that these lines have been added::

    from future import standard_library
    standard_library.install_hooks()

even if the only module imported was ``urllib``, in which case the regular fixer
wouldn't have added these lines.

"""

from lib2to3.fixes.fix_urllib import FixUrllib
from libfuturize.fixer_util import touch_import_top, find_root


class FixFutureStandardLibraryUrllib(FixUrllib):     # not a subclass of FixImports
    run_order = 8

    def transform(self, node, results):
        # transform_member() in lib2to3/fixes/fix_urllib.py breaks node so find_root(node)
        # no longer works after the super() call below. So we find the root first:
        root = find_root(node)
        result = super(FixFutureStandardLibraryUrllib, self).transform(node, results)
        # TODO: add a blank line between any __future__ imports and this?
        touch_import_top(u'future', u'standard_library', root)
        return result

# File: libfuturize/fixes/fix_future_standard_library.py
"""
For the ``future`` package.

Changes any imports needed to reflect the standard library reorganization. It
also adds these import lines:

    from future import standard_library
    standard_library.install_aliases()

after any __future__ imports but before any other imports.
"""

from lib2to3.fixes.fix_imports import FixImports
from libfuturize.fixer_util import touch_import_top


class FixFutureStandardLibrary(FixImports):
    run_order = 8

    def transform(self, node, results):
        result = super(FixFutureStandardLibrary, self).transform(node, results)
        # TODO: add a blank line between any __future__ imports and this?
        touch_import_top(u'future', u'standard_library', node)
        return result

# File: libfuturize/fixes/fix_print_with_import.py
"""
For the ``future`` package.

Turns any print statements into functions and adds this import line:

    from __future__ import print_function

at the top to retain compatibility with Python 2.6+.
"""

from libfuturize.fixes.fix_print import FixPrint
from libfuturize.fixer_util import future_import

class FixPrintWithImport(FixPrint):
    run_order = 7
    def transform(self, node, results):
        # Add the __future__ import first. (Otherwise any shebang or encoding
        # comment line attached as a prefix to the print statement will be
        # copied twice and appear twice.)
        future_import(u'print_function', node)
        n_stmt = super(FixPrintWithImport, self).transform(node, results)
        return n_stmt

# File: libfuturize/fixes/fix_remove_old__future__imports.py
"""
Fixer for removing any of these lines:

    from __future__ import with_statement
    from __future__ import nested_scopes
    from __future__ import generators

The reason is that __future__ imports like these are required to be the first
line of code (after docstrings) on Python 2.6+, which can get in the way.

These imports are always enabled in Python 2.6+, which is the minimum sane
version to target for Py2/3 compatibility.
"""

from lib2to3 import fixer_base
from libfuturize.fixer_util import remove_future_import

class FixRemoveOldFutureImports(fixer_base.BaseFix):
    BM_compatible = True
    PATTERN = "file_input"
    run_order = 1

    def transform(self, node, results):
        remove_future_import(u"with_statement", node)
        remove_future_import(u"nested_scopes", node)
        remove_future_import(u"generators", node)

# File: libfuturize/fixes/fix_division_safe.py
"""
For the ``future`` package.

Adds this import line:

    from __future__ import division

at the top and changes any old-style divisions to be calls to
past.utils.old_div so the code runs as before on Py2.6/2.7 and has the same
behaviour on Py3.

If "from __future__ import division" is already in effect, this fixer does
nothing.
"""

import re
from lib2to3.fixer_util import Leaf, Node, Comma
from lib2to3 import fixer_base
from libfuturize.fixer_util import (token, future_import, touch_import_top,
                                    wrap_in_fn_call)


def match_division(node):
    u"""
    __future__.division redefines the meaning of a single slash for division,
    so we match that and only that.
    """
    slash = token.SLASH
    return node.type == slash and not node.next_sibling.type == slash and \
                                  not node.prev_sibling.type == slash

const_re = re.compile('^[0-9]*[.][0-9]*$')

def is_floaty(node):
    return _is_floaty(node.prev_sibling) or _is_floaty(node.next_sibling)


def _is_floaty(expr):
    if isinstance(expr, list):
        expr = expr[0]

    if isinstance(expr, Leaf):
        # If it's a leaf, let's see if it's a numeric constant containing a '.'
        return const_re.match(expr.value)
    elif isinstance(expr, Node):
        # If the expression is a node, let's see if it's a direct cast to float
        if isinstance(expr.children[0], Leaf):
            return expr.children[0].value == u'float'
    return False


class FixDivisionSafe(fixer_base.BaseFix):
    # BM_compatible = True
    run_order = 4    # this seems to be ignored?

    _accept_type = token.SLASH

    PATTERN = """
    term<(not('/') any)+ '/' ((not('/') any))>
    """

    def start_tree(self, tree, name):
        """
        Skip this fixer if "__future__.division" is already imported.
        """
        super(FixDivisionSafe, self).start_tree(tree, name)
        self.skip = "division" in tree.future_features

    def match(self, node):
        u"""
        Since the tree needs to be fixed once and only once if and only if it
        matches, we can start discarding matches after the first.
        """
        if node.type == self.syms.term:
            matched = False
            skip = False
            children = []
            for child in node.children:
                if skip:
                    skip = False
                    continue
                if match_division(child) and not is_floaty(child):
                    matched = True

                    # Strip any leading space for the first number:
                    children[0].prefix = u''

                    children = [wrap_in_fn_call("old_div",
                                                children + [Comma(), child.next_sibling.clone()],
                                                prefix=node.prefix)]
                    skip = True
                else:
                    children.append(child.clone())
            if matched:
                # In Python 2.6, `Node` does not have the fixers_applied attribute
                # https://github.com/python/cpython/blob/8493c0cd66cfc181ac1517268a74f077e9998701/Lib/lib2to3/pytree.py#L235
                if hasattr(Node, "fixers_applied"):
                    return Node(node.type, children, fixers_applied=node.fixers_applied)
                else:
                    return Node(node.type, children)

        return False

    def transform(self, node, results):
        if self.skip:
            return
        future_import(u"division", node)
        touch_import_top(u'past.utils', u'old_div', node)
        return results
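# The old_div() call that this fixer wraps divisions in restores Python 2
# semantics: floor division when both operands are integers, true division
# otherwise. A minimal sketch of that behavior, assumed from the module
# docstring above rather than copied from past.utils:

```python
import numbers

def old_div(a, b):
    # Py2-style '/': floor division only when both operands are integral.
    if isinstance(a, numbers.Integral) and isinstance(b, numbers.Integral):
        return a // b
    return a / b

assert old_div(7, 2) == 3        # Py2 integer division
assert old_div(7.0, 2) == 3.5    # float division unchanged
assert old_div(-7, 2) == -4      # floors toward negative infinity, as Py2 did
```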

# File: libfuturize/fixes/fix_unicode_keep_u.py
"""Fixer that changes unicode to str and unichr to chr, but -- unlike the
lib2to3 fix_unicode.py fixer, does not change u"..." into "...".

The reason is that Py3.3+ supports the u"..." string prefix, and, if
present, the prefix may provide useful information for disambiguating
between byte strings and unicode strings, which is often the hardest part
of the porting task.

"""

from lib2to3.pgen2 import token
from lib2to3 import fixer_base

_mapping = {u"unichr" : u"chr", u"unicode" : u"str"}

class FixUnicodeKeepU(fixer_base.BaseFix):
    BM_compatible = True
    PATTERN = "'unicode' | 'unichr'"

    def transform(self, node, results):
        if node.type == token.NAME:
            new = node.clone()
            new.value = _mapping[node.value]
            return new

# File: libfuturize/fixes/fix_unicode_literals_import.py
"""
Adds this import:

    from __future__ import unicode_literals

"""

from lib2to3 import fixer_base
from libfuturize.fixer_util import future_import

class FixUnicodeLiteralsImport(fixer_base.BaseFix):
    BM_compatible = True
    PATTERN = "file_input"

    run_order = 9

    def transform(self, node, results):
        future_import(u"unicode_literals", node)

# File: libfuturize/fixes/fix_future_builtins.py
"""
For the ``future`` package.

Adds this import line::

    from builtins import XYZ

for each of the functions XYZ that is used in the module.

Adds these imports after any other imports (in an initial block of them).
"""

from __future__ import unicode_literals

from lib2to3 import fixer_base
from lib2to3.pygram import python_symbols as syms
from lib2to3.fixer_util import Name, Call, in_special_context

from libfuturize.fixer_util import touch_import_top

# All builtins are:
#     from future.builtins.iterators import (filter, map, zip)
#     from future.builtins.misc import (ascii, chr, hex, input, isinstance, oct, open, round, super)
#     from future.types import (bytes, dict, int, range, str)
# We don't need isinstance any more.

replaced_builtin_fns = '''filter map zip
                       ascii chr hex input next oct
                       bytes range str raw_input'''.split()
                       # This includes raw_input as a workaround for the
                       # lib2to3 fixer for raw_input on Py3 (only), allowing
                       # the correct import to be included. (Py3 seems to run
                       # the fixers the wrong way around, perhaps ignoring the
                       # run_order class attribute below ...)

expression = '|'.join(["name='{0}'".format(name) for name in replaced_builtin_fns])


class FixFutureBuiltins(fixer_base.BaseFix):
    BM_compatible = True
    run_order = 7

    # Currently we only match uses as a function. This doesn't match e.g.:
    #     if isinstance(s, str):
    #         ...
    PATTERN = """
              power<
                 ({0}) trailer< '(' [arglist=any] ')' >
              rest=any* >
              |
              power<
                  'map' trailer< '(' [arglist=any] ')' >
              >
              """.format(expression)

    def transform(self, node, results):
        name = results["name"]
        touch_import_top(u'builtins', name.value, node)
        # name.replace(Name(u"input", prefix=name.prefix))
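# For reference, the alternation interpolated into PATTERN above expands
# deterministically from replaced_builtin_fns; rebuilding it the same way
# shows exactly what the pattern matches:

```python
# Rebuild the pattern alternation exactly as the module above does.
names = '''filter map zip
           ascii chr hex input next oct
           bytes range str raw_input'''.split()
expr = '|'.join(["name='{0}'".format(n) for n in names])

assert expr.startswith("name='filter'|name='map'|name='zip'")
assert expr.endswith("name='raw_input'")
assert expr.count("|") == len(names) - 1  # one separator between each name
```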

# File: libfuturize/fixes/fix_object.py
"""
Fixer that adds ``from builtins import object`` if there is a line
like this:
    class Foo(object):
"""

from lib2to3 import fixer_base

from libfuturize.fixer_util import touch_import_top


class FixObject(fixer_base.BaseFix):

    PATTERN = u"classdef< 'class' NAME '(' name='object' ')' colon=':' any >"

    def transform(self, node, results):
        touch_import_top(u'builtins', 'object', node)

# File: libfuturize/fixes/fix_absolute_import.py
"""
Fixer for import statements, with a __future__ import line.

Based on lib2to3/fixes/fix_import.py, but extended slightly so it also
supports Cython modules.

If spam is being imported from the local directory, this import:
    from spam import eggs
becomes:
    from __future__ import absolute_import
    from .spam import eggs

and this import:
    import spam
becomes:
    from __future__ import absolute_import
    from . import spam
"""

from os.path import dirname, join, exists, sep
from lib2to3.fixes.fix_import import FixImport
from lib2to3.fixer_util import FromImport, syms
from lib2to3.fixes.fix_import import traverse_imports

from libfuturize.fixer_util import future_import


class FixAbsoluteImport(FixImport):
    run_order = 9

    def transform(self, node, results):
        """
        Copied from FixImport.transform(), but with this line added in
        any modules that had implicit relative imports changed:

            from __future__ import absolute_import
        """
        if self.skip:
            return
        imp = results['imp']

        if node.type == syms.import_from:
            # Some imps are top-level (eg: 'import ham')
            # some are first level (eg: 'import ham.eggs')
            # some are third level (eg: 'import ham.eggs as spam')
            # Hence, the loop
            while not hasattr(imp, 'value'):
                imp = imp.children[0]
            if self.probably_a_local_import(imp.value):
                imp.value = u"." + imp.value
                imp.changed()
                future_import(u"absolute_import", node)
        else:
            have_local = False
            have_absolute = False
            for mod_name in traverse_imports(imp):
                if self.probably_a_local_import(mod_name):
                    have_local = True
                else:
                    have_absolute = True
            if have_absolute:
                if have_local:
                    # We won't handle both sibling and absolute imports in the
                    # same statement at the moment.
                    self.warning(node, "absolute and local imports together")
                return

            new = FromImport(u".", [imp])
            new.prefix = node.prefix
            future_import(u"absolute_import", node)
            return new

    def probably_a_local_import(self, imp_name):
        """
        Like the corresponding method in the base class, but this also
        supports Cython modules.
        """
        if imp_name.startswith(u"."):
            # Relative imports are certainly not local imports.
            return False
        imp_name = imp_name.split(u".", 1)[0]
        base_path = dirname(self.filename)
        base_path = join(base_path, imp_name)
        # If there is no __init__.py next to the file, it's not in a package,
        # so this can't be a relative import.
        if not exists(join(dirname(base_path), "__init__.py")):
            return False
        for ext in [".py", sep, ".pyc", ".so", ".sl", ".pyd", ".pyx"]:
            if exists(base_path + ext):
                return True
        return False
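# The heuristic above can be exercised standalone. Below is a simplified,
# hypothetical extraction of the same logic (the real method lives on
# FixAbsoluteImport and takes the filename from the fixer state):

```python
from os.path import dirname, join, exists, sep

def probably_a_local_import(imp_name, filename):
    if imp_name.startswith("."):
        return False          # already relative, nothing to fix
    imp_name = imp_name.split(".", 1)[0]
    base_path = join(dirname(filename), imp_name)
    if not exists(join(dirname(base_path), "__init__.py")):
        return False          # enclosing directory is not a package
    for ext in [".py", sep, ".pyc", ".so", ".sl", ".pyd", ".pyx"]:
        if exists(base_path + ext):
            return True
    return False

# Names resolved from a directory that is not a package are never "local":
assert probably_a_local_import("os", "/tmp/no-such-pkg-xyz/mod.py") is False
assert probably_a_local_import(".spam", "/tmp/no-such-pkg-xyz/mod.py") is False
```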

# File: libfuturize/fixes/fix_order___future__imports.py
"""
UNFINISHED

Fixer for turning multiple lines like these:

    from __future__ import division
    from __future__ import absolute_import
    from __future__ import print_function

into a single line like this:

    from __future__ import (absolute_import, division, print_function)

This helps with testing of ``futurize``.
"""

from lib2to3 import fixer_base
from libfuturize.fixer_util import future_import

class FixOrderFutureImports(fixer_base.BaseFix):
    BM_compatible = True
    PATTERN = "file_input"

    run_order = 10

    # def match(self, node):
    #     """
    #     Match only once per file
    #     """
    #     if hasattr(node, 'type') and node.type == syms.file_input:
    #         return True
    #     return False

    def transform(self, node, results):
        # TODO    # write me
        pass

# File: libfuturize/__init__.py
# empty to make this a package

# File: libfuturize/__pycache__/__init__.cpython-39.pyc
# File: libfuturize/__pycache__/main.cpython-39.pyc
# [Compiled CPython 3.9 bytecode: binary content omitted as unrecoverable
#  text. The only fragment recoverable as prose is the embedded module
#  docstring of libfuturize.main, reproduced below.]
futurize: automatic conversion to clean 2/3 code using ``python-future``
======================================================================

Like Armin Ronacher's modernize.py, ``futurize`` attempts to produce clean
standard Python 3 code that runs on both Py2 and Py3.

One pass
--------

Use it like this on Python 2 code:

  $ futurize --verbose mypython2script.py

This will attempt to port the code to standard Py3 code that also
provides Py2 compatibility with the help of the right imports from
``future``.

To write changes to the files, use the -w flag.

Two stages
----------

The ``futurize`` script can also be called in two separate stages. First:

  $ futurize --stage1 mypython2script.py

This produces more modern Python 2 code that is not yet compatible with Python
3. The tests should still run and the diff should be uncontroversial to apply to
most Python projects that are willing to drop support for Python 2.5 and lower.

After this, the recommended approach is to explicitly mark all strings that must
be byte-strings with a b'' prefix and all text (unicode) strings with a u''
prefix, and then invoke the second stage of Python 2 to 2/3 conversion with::

  $ futurize --stage2 mypython2script.py

Stage 2 adds a dependency on ``future``. It converts most remaining Python
2-specific code to Python 3 code and adds appropriate imports from ``future``
to restore Py2 support.

The command above leaves all unadorned string literals as native strings
(byte-strings on Py2, unicode strings on Py3). If instead you would like all
unadorned string literals to be promoted to unicode, you can also pass this
flag:

  $ futurize --stage2 --unicode-literals mypython2script.py

This adds the declaration ``from __future__ import unicode_literals`` to the
top of each file, which implicitly declares all unadorned string literals to be
unicode strings (``unicode`` on Py2).

All imports
-----------

The --all-imports option forces adding all ``__future__`` imports,
``builtins`` imports, and standard library aliases, even if they don't
seem necessary for the current state of each module. (This can simplify
testing, and can reduce the need to think about Py2 compatibility when editing
the code further.)

# [Remainder of main.cpython-39.pyc: compiled bytecode omitted.]

# File: libfuturize/__pycache__/fixer_util.cpython-39.pyc
# [Compiled CPython 3.9 bytecode omitted. Recoverable fragments: the module
#  docstring "Utility functions from 2to3, 3to2 and python-modernize (and
#  some home-grown ones)" and the docstring of canonical_fix_name(), which
#  maps names like 'wrap_text_literals' to
#  'libfuturize.fixes.fix_wrap_text_literals'.]
    tabs and spaces mixed won't get this far, so those are synonymous.)
    css |]}|jtjkr|jVqdS�N)r1r
r4r5�r�irrrrirz#indentation_step.<locals>.<genexpr>z    N)r�setZ	pre_order�min)r7�rZall_indentsrrr�indentation_step`s
r?cCs�|jD]}|jtjkrdSqt|j�D]\}}|jtjkr(qJq(td��ttjt	�t
tjt|�t
|��g�}|j|d}|��d|_|�|�|�|�dS)zj
    Turn the stuff after the first colon in parent's children
    into a suite, if it wasn't already
    NzNo class suite and no ':'!rr/)�childrenr1r2r3�	enumerater
�COLONrrrr
r4r8r?�remover$Zappend_child)r0r7r;r3Zone_noderrr�suitifyps
&
rDcCsN|durd}td|d�|g}|durB|�tddd�t|dd�g�ttj|�S)z�
    Accepts a package (Name node), name to import it as (string), and
    optional prefix and returns a node:
    import <package> [as <as_name>]
    Nr/�importr#�as� )r�extendrr2�import_name)�package�as_namer$r@rrr�
NameImport�s
�rLccs|jtvsJ�|j}|jtjkrD|j}|jtjkr6qDn|V|j}q|j}|jtjksZJ�|j}|durv|V|j}q`|j}|jt	vr�|}|jdur�|jV|j}q�|j}|j}|dur�dS|jt	vr�|jtj
kr�|V|j}|dur�|j}|j}|dur�q�q�dS)z�
    Generator yields all nodes for which a node (an import_stmt) has scope
    The purpose of this is for a call to _find() on each of them
    N)r1�
_import_stmts�next_siblingr
�SEMI�NEWLINEr0r2�simple_stmt�_compound_stmtsr3)r7�testZnxtr0�context�c�prrr�import_binding_scope�sB


rWcCsDt|�}tddd�}t|dd�}ttj|||g�}|dur@||_|S)NrFrGr#)rrr2�import_as_namer$)�namerKr$�new_nameZnew_asZnew_as_name�new_noderrr�ImportAsName�sr\cCs,|jtjko*t|j�dko*|jdjtjkS)z<
    Returns True if the node appears to be a docstring
    r)r1r2rQrr@r
�STRINGr6rrr�is_docstring�s
��r^cCs�t|�}td||�rdSd}t|j�D]D\}}t|�s>t|�rB|}t|�rLq&t|�}|s\ql||vr&dSq&tdt	t
j|dd�g�}|dkr�|dkr�|jdj|_d|jd_|t
�g}|�|ttj|��dS)z
    This seems to work
    �
__future__NrGr#rr/)rrrAr@�is_shebang_comment�is_encoding_commentr^�check_future_importrr
r
�NAMEr$r�insert_childrr2rQ)�featurer7�rootZshebang_encoding_idx�idx�names�import_r@rrr�
future_import�s(
rjc	Cs�t|�}td||�rdSd}t|j�D]8\}}|jtjkr&|jr&|jdjtjkr&|d}q`q&|j|d�D]*}|jtj	kr�|d7}qn|j
}d|_
q�qnd}tdttj
|dd�g�}|t�g}|�|ttj||d��dS)zD
    An alternative to future_import() which might not work ...
    r_Nrrr/rGr#)rrrAr@r1r2rQr
r]rPr$rr
rcrrdr)	rer7rf�
insert_posrgZthing_afterr$rir@rrr�future_import2s*�
rlcCs�dd�|D�}tdd�|D��}t|�D]R\}}|jtjkrj|jdjtjkrj|jdj}|jd||<q(||}|||<q(|S)z/
    Parse a list of arguments into a dict
    cSsg|]}|jtjkr|�qSr)r1r
�COMMAr:rrrr2rzparse_args.<locals>.<listcomp>cSsg|]}|df�qSr9r)r�krrrr4rrr�)	�dictrAr1r2�argumentr@r
�EQUALr5)Zarglist�schemeZret_mappingr;�argZslotrrr�
parse_args.s
rucCs |jtjko|jot|jd�S)Nr)r1r2rQr@rr6rrr�is_import_stmtHs�rvcCst|�}t|||�rdSd}dD]}td||�r d}q:q |r�d\}}t|j�D]>\}}t|�rP|}|}	|r�|j}|	d7}	t|�sh|	}q�qhq�qP|dus�J�|dus�J�|}
n4t|j�D]$\}}|jtjkr�q�t	|�s�q�q�|}
g}|du�rt
tjtt
jd�tt
j|d	d
�g�}n�t|tt
j|d	d
�g�}|dk�r�t
tjt
tjtt
jd�t
tjtt
jd�tt
jd
�g�t
tjtt
jd�tt
jd�g�g�g�}
|
t�g}|t�g}|j|
j}d|j|
_|�|
t
tj||d
��t|�dk�r�|�|
dt
tj|��dS)a�Works like `does_tree_import` but adds an import statement at the
    top if it was not imported (but below any __future__ imports) and below any
    comments such as shebang lines).

    Based on lib2to3.fixer_util.touch_import()

    Calling this multiple times adds the imports in reverse order.

    Also adds "standard_library.install_aliases()" after "from future import
    standard_library".  This should probably be factored into another function.
    NF)�absolute_import�division�print_function�unicode_literalsr_T)NNrrErGr#Zstandard_library�.Zinstall_aliases�(�)r/r)rrrAr@rbrNr1r2rQr^rrIr
r
rcr�power�trailer�DOT�LPAR�RPARrr$rdr)rJZname_to_importr7rfr rY�start�endrgZidx2rkZchildren_hooksriZ
install_hooksZchildren_importZ
old_prefixrrr�touch_import_topMsr


�


�
����


r�cCsT|}|jtjkr|jst�S|jd}|jtjkrRt|jdd�rR|jdjdksXt�S|jdjtj	krv|jd}n
|jd}|jtj
kr�t�}|jD]P}|jtjkr�|�|j�q�|jtj
kr�|jd}|jtjks�J�|�|j�q�|S|jtj
k�r$|jd}|jtjk�sJ�t|jg�S|jtjk�r>t|jg�Sd�sPJd|��d	S)
zZIf this is a future import, return set of symbols that are imported,
    else return None.rrr5r_�rFzstrange import: %sN)r1r2rQr@r<�import_from�hasattrr5r
r�Zimport_as_namesrc�addrX)r7Zsavenode�result�nrrrrb�s<
��



rbz^#!.*pythonz^#.*coding[:=]\s*([-\w.]+)cCstt�t|j��S)z�
    Comments are prefixes for Leaf nodes. Returns whether the given node has a
    prefix that looks like a shebang line or an encoding line:

        #!/usr/bin/env python
        #!/usr/bin/python3
    )�bool�re�match�
SHEBANG_REGEXr$r6rrrr`�sr`cCstt�t|j��S)a
    Comments are prefixes for Leaf nodes. Returns whether the given node has a
    prefix that looks like an encoding line:

        # coding: utf-8
        # encoding: utf-8
        # -*- coding: <encoding name> -*-
        # vim: set fileencoding=<encoding name> :
    )r�r�r��ENCODING_REGEXr$r6rrrra�s
racCsHt|�dksJ�t|�dkr2|\}}|t�|g}n|}tt|�||d�S)z�
    Example:
    >>> wrap_in_fn_call("oldstr", (arg,))
    oldstr(arg)

    >>> wrap_in_fn_call("olddiv", (arg1, arg2))
    olddiv(arg1, arg2)

    >>> wrap_in_fn_call("olddiv", [arg1, comma, arg2, comma, arg3])
    olddiv(arg1, arg2, arg3)
    rror#)rr	rr)�fn_name�argsr$�expr1Zexpr2Znewargsrrr�wrap_in_fn_call�sr�)N)N)N)NN)N)N)2�__doc__Zlib2to3.fixer_utilrrrrrrrr	Zlib2to3.pytreer
rZlib2to3.pygramrr2r
r�r!r&r(r+r.r8r?rDrLZif_stmtZ
while_stmtZfor_stmtZtry_stmtZ	with_stmtrRrIr�rMrWr\r^rjrlrurvr�rbr�r�r`rar�rrrr�<module>s<
("



;

( b)
PK�Au\j��/�C�Clibfuturize/fixer_util.pynu�[���"""
Utility functions from 2to3, 3to2 and python-modernize (and some home-grown
ones).

Licences:
2to3: PSF License v2
3to2: Apache Software License (from 3to2/setup.py)
python-modernize licence: BSD (from python-modernize/LICENSE)
"""

from lib2to3.fixer_util import (FromImport, Newline, is_import,
                                find_root, does_tree_import,
                                Call, Name, Comma)
from lib2to3.pytree import Leaf, Node
from lib2to3.pygram import python_symbols as syms
from lib2to3.pygram import token
import re


def canonical_fix_name(fix, avail_fixes):
    """
    Examples:
    >>> canonical_fix_name('fix_wrap_text_literals')
    'libfuturize.fixes.fix_wrap_text_literals'
    >>> canonical_fix_name('wrap_text_literals')
    'libfuturize.fixes.fix_wrap_text_literals'
    >>> canonical_fix_name('wrap_te')
    ValueError("Unknown fixer. Use --list-fixes or -l for a list.")
    >>> canonical_fix_name('wrap')
    ValueError("Ambiguous fixer name. Choose a fully qualified module name instead ...")
    """
    if ".fix_" in fix:
        return fix
    else:
        if fix.startswith('fix_'):
            fix = fix[4:]
        # Infer the full module name for the fixer.
        # First ensure that no names clash (e.g.
        # lib2to3.fixes.fix_blah and libfuturize.fixes.fix_blah):
        found = [f for f in avail_fixes
                 if f.endswith('fix_{0}'.format(fix))]
        if len(found) > 1:
            raise ValueError("Ambiguous fixer name. Choose a fully qualified "
                  "module name instead from these:\n" +
                  "\n".join("  " + myf for myf in found))
        elif len(found) == 0:
            raise ValueError("Unknown fixer. Use --list-fixes or -l for a list.")
        return found[0]
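# The resolution rules above can be exercised in isolation. In the following
# standalone sketch, resolve_fix and its hard-coded fixer set are hypothetical
# stand-ins for canonical_fix_name and the real lib2to3/libfuturize fixer
# modules; the matching logic mirrors the function above.

```python
# Standalone sketch of the name-resolution behaviour of canonical_fix_name().
def resolve_fix(fix, avail_fixes):
    if ".fix_" in fix:           # already fully qualified: pass through
        return fix
    if fix.startswith("fix_"):   # strip the conventional prefix
        fix = fix[4:]
    found = [f for f in avail_fixes if f.endswith("fix_{0}".format(fix))]
    if len(found) > 1:
        raise ValueError("ambiguous fixer name")
    if not found:
        raise ValueError("unknown fixer name")
    return found[0]

fixes = {"libfuturize.fixes.fix_division", "lib2to3.fixes.fix_types"}
# A short name is expanded to the single matching module:
assert resolve_fix("division", fixes) == "libfuturize.fixes.fix_division"
# A fully qualified name is returned unchanged:
assert resolve_fix("a.b.fix_x", fixes) == "a.b.fix_x"
```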



## These functions are from 3to2 by Joe Amenta:

def Star(prefix=None):
    return Leaf(token.STAR, u'*', prefix=prefix)

def DoubleStar(prefix=None):
    return Leaf(token.DOUBLESTAR, u'**', prefix=prefix)

def Minus(prefix=None):
    return Leaf(token.MINUS, u'-', prefix=prefix)

def commatize(leafs):
    """
    Accepts/turns: (Name, Name, ..., Name, Name)
    Returns/into: (Name, Comma, Name, Comma, ..., Name, Comma, Name)
    """
    new_leafs = []
    for leaf in leafs:
        new_leafs.append(leaf)
        new_leafs.append(Comma())
    del new_leafs[-1]
    return new_leafs
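# The interleaving performed by commatize() is independent of lib2to3; the
# sketch below uses plain strings standing in for pytree leaves (the real
# function appends Comma() leaf nodes instead of "," strings).

```python
# Sketch of commatize() on plain strings standing in for pytree leaves.
def commatize_demo(leafs):
    new_leafs = []
    for leaf in leafs:
        new_leafs.append(leaf)
        new_leafs.append(",")
    del new_leafs[-1]   # drop the trailing separator
    return new_leafs

assert commatize_demo(["a", "b", "c"]) == ["a", ",", "b", ",", "c"]
```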

def indentation(node):
    """
    Returns the indentation for this node
    Iff a node is in a suite, then it has indentation.
    """
    while node.parent is not None and node.parent.type != syms.suite:
        node = node.parent
    if node.parent is None:
        return u""
    # The first three children of a suite are NEWLINE, INDENT, (some other node)
    # INDENT.value contains the indentation for this suite
    # anything after (some other node) has the indentation as its prefix.
    if node.type == token.INDENT:
        return node.value
    elif node.prev_sibling is not None and node.prev_sibling.type == token.INDENT:
        return node.prev_sibling.value
    elif node.prev_sibling is None:
        return u""
    else:
        return node.prefix

def indentation_step(node):
    """
    Dirty little trick to get the difference between each indentation level
    Implemented by finding the shortest indentation string
    (technically, the "least" of all of the indentation strings, but
    tabs and spaces mixed won't get this far, so those are synonymous.)
    """
    r = find_root(node)
    # Collect all indentations into one set.
    all_indents = set(i.value for i in r.pre_order() if i.type == token.INDENT)
    if not all_indents:
        # nothing is indented anywhere, so we get to pick what we want
        return u"    " # four spaces is a popular convention
    else:
        return min(all_indents)
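# Why min() picks out the single-step indent: with unmixed indentation, one
# level is a prefix of two levels, and a proper prefix always sorts first
# lexicographically. A small self-contained check:

```python
# One, two and three levels of four-space indentation:
indents = {"    ", "        ", "            "}
assert min(indents) == "    "

# The same holds for tabs:
tabs = {"\t", "\t\t"}
assert min(tabs) == "\t"
```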

def suitify(parent):
    """
    Turn the stuff after the first colon in parent's children
    into a suite, if it wasn't already
    """
    for node in parent.children:
        if node.type == syms.suite:
            # already in the preferred format, do nothing
            return

    # One-liners have no suite node, we have to fake one up
    for i, node in enumerate(parent.children):
        if node.type == token.COLON:
            break
    else:
        raise ValueError(u"No class suite and no ':'!")
    # Move everything into a suite node
    suite = Node(syms.suite, [Newline(), Leaf(token.INDENT, indentation(node) + indentation_step(node))])
    one_node = parent.children[i+1]
    one_node.remove()
    one_node.prefix = u''
    suite.append_child(one_node)
    parent.append_child(suite)

def NameImport(package, as_name=None, prefix=None):
    """
    Accepts a package (Name node), name to import it as (string), and
    optional prefix and returns a node:
    import <package> [as <as_name>]
    """
    if prefix is None:
        prefix = u""
    children = [Name(u"import", prefix=prefix), package]
    if as_name is not None:
        children.extend([Name(u"as", prefix=u" "),
                         Name(as_name, prefix=u" ")])
    return Node(syms.import_name, children)

_compound_stmts = (syms.if_stmt, syms.while_stmt, syms.for_stmt, syms.try_stmt, syms.with_stmt)
_import_stmts = (syms.import_name, syms.import_from)

def import_binding_scope(node):
    """
    Generator yields all nodes for which a node (an import_stmt) has scope
    The purpose of this is for a call to _find() on each of them
    """
    # import_name / import_from are small_stmts
    assert node.type in _import_stmts
    test = node.next_sibling
    # A small_stmt can only be followed by a SEMI or a NEWLINE.
    while test.type == token.SEMI:
        nxt = test.next_sibling
        # A SEMI can only be followed by a small_stmt or a NEWLINE
        if nxt.type == token.NEWLINE:
            break
        else:
            yield nxt
        # A small_stmt can only be followed by either a SEMI or a NEWLINE
        test = nxt.next_sibling
    # Covered all subsequent small_stmts after the import_stmt
    # Now to cover all subsequent stmts after the parent simple_stmt
    parent = node.parent
    assert parent.type == syms.simple_stmt
    test = parent.next_sibling
    while test is not None:
        # Yes, this will yield NEWLINE and DEDENT.  Deal with it.
        yield test
        test = test.next_sibling

    context = parent.parent
    # Recursively yield nodes following imports inside of a if/while/for/try/with statement
    if context.type in _compound_stmts:
        # import is in a one-liner
        c = context
        while c.next_sibling is not None:
            yield c.next_sibling
            c = c.next_sibling
        context = context.parent

    # Can't chain one-liners on one line, so that takes care of that.

    p = context.parent
    if p is None:
        return

    # in a multi-line suite

    while p.type in _compound_stmts:

        if context.type == syms.suite:
            yield context

        context = context.next_sibling

        if context is None:
            context = p.parent
            p = context.parent
            if p is None:
                break

def ImportAsName(name, as_name, prefix=None):
    new_name = Name(name)
    new_as = Name(u"as", prefix=u" ")
    new_as_name = Name(as_name, prefix=u" ")
    new_node = Node(syms.import_as_name, [new_name, new_as, new_as_name])
    if prefix is not None:
        new_node.prefix = prefix
    return new_node


def is_docstring(node):
    """
    Returns True if the node appears to be a docstring
    """
    return (node.type == syms.simple_stmt and
            len(node.children) > 0 and node.children[0].type == token.STRING)


def future_import(feature, node):
    """
    This seems to work
    """
    root = find_root(node)

    if does_tree_import(u"__future__", feature, node):
        return

    # Look for a shebang or encoding line
    shebang_encoding_idx = None

    for idx, node in enumerate(root.children):
        # Is it a shebang or encoding line?
        if is_shebang_comment(node) or is_encoding_comment(node):
            shebang_encoding_idx = idx
        if is_docstring(node):
            # skip over docstring
            continue
        names = check_future_import(node)
        if not names:
            # not a future statement; need to insert before this
            break
        if feature in names:
            # already imported
            return

    import_ = FromImport(u'__future__', [Leaf(token.NAME, feature, prefix=" ")])
    if shebang_encoding_idx == 0 and idx == 0:
        # If this __future__ import would go on the first line,
        # detach the shebang / encoding prefix from the current first line.
        # and attach it to our new __future__ import node.
        import_.prefix = root.children[0].prefix
        root.children[0].prefix = u''
        # End the __future__ import line with a newline and add a blank line
        # afterwards:
    children = [import_ , Newline()]
    root.insert_child(idx, Node(syms.simple_stmt, children))


def future_import2(feature, node):
    """
    An alternative to future_import() which might not work ...
    """
    root = find_root(node)

    if does_tree_import(u"__future__", feature, node):
        return

    insert_pos = 0
    for idx, node in enumerate(root.children):
        if node.type == syms.simple_stmt and node.children and \
           node.children[0].type == token.STRING:
            insert_pos = idx + 1
            break

    for thing_after in root.children[insert_pos:]:
        if thing_after.type == token.NEWLINE:
            insert_pos += 1
            continue

        prefix = thing_after.prefix
        thing_after.prefix = u""
        break
    else:
        prefix = u""

    import_ = FromImport(u"__future__", [Leaf(token.NAME, feature, prefix=u" ")])

    children = [import_, Newline()]
    root.insert_child(insert_pos, Node(syms.simple_stmt, children, prefix=prefix))

def parse_args(arglist, scheme):
    u"""
    Parse a list of arguments into a dict
    """
    arglist = [i for i in arglist if i.type != token.COMMA]

    ret_mapping = dict([(k, None) for k in scheme])

    for i, arg in enumerate(arglist):
        if arg.type == syms.argument and arg.children[1].type == token.EQUAL:
            # argument < NAME '=' any >
            slot = arg.children[0].value
            ret_mapping[slot] = arg.children[2]
        else:
            slot = scheme[i]
            ret_mapping[slot] = arg

    return ret_mapping
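# The positional/keyword mapping performed by parse_args() can be sketched
# without lib2to3. Arg below is a hypothetical stand-in for an argument node:
# kw carries the keyword name, or None for a positional argument; the real
# function inspects syms.argument / token.EQUAL children instead.

```python
# Sketch of the slot-filling logic of parse_args().
class Arg(object):
    def __init__(self, value, kw=None):
        self.value, self.kw = value, kw

def parse_args_demo(arglist, scheme):
    ret_mapping = dict((k, None) for k in scheme)
    for i, arg in enumerate(arglist):
        if arg.kw is not None:            # keyword argument: use its name
            ret_mapping[arg.kw] = arg.value
        else:                             # positional: use the scheme slot
            ret_mapping[scheme[i]] = arg.value
    return ret_mapping

mapping = parse_args_demo([Arg("1"), Arg("2", kw="stop")],
                          ["start", "stop", "step"])
assert mapping == {"start": "1", "stop": "2", "step": None}
```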


# def is_import_from(node):
#     """Returns true if the node is a statement "from ... import ..."
#     """
#     return node.type == syms.import_from


def is_import_stmt(node):
    return (node.type == syms.simple_stmt and node.children and
            is_import(node.children[0]))


def touch_import_top(package, name_to_import, node):
    """Works like `does_tree_import` but adds an import statement at the
    top if it was not imported (but below any __future__ imports and below any
    comments such as shebang lines).

    Based on lib2to3.fixer_util.touch_import()

    Calling this multiple times adds the imports in reverse order.

    Also adds "standard_library.install_aliases()" after "from future import
    standard_library".  This should probably be factored into another function.
    """

    root = find_root(node)

    if does_tree_import(package, name_to_import, root):
        return

    # Ideally, we would look for whether futurize --all-imports has been run,
    # as indicated by the presence of ``from builtins import (ascii, ...,
    # zip)`` -- and, if it has, we wouldn't import the name again.

    # Look for __future__ imports and insert below them
    found = False
    for name in ['absolute_import', 'division', 'print_function',
                 'unicode_literals']:
        if does_tree_import('__future__', name, root):
            found = True
            break
    if found:
        # At least one __future__ import. We want to loop until we've seen them
        # all.
        start, end = None, None
        for idx, node in enumerate(root.children):
            if check_future_import(node):
                start = idx
                # Start looping
                idx2 = start
                while node:
                    node = node.next_sibling
                    idx2 += 1
                    if not check_future_import(node):
                        end = idx2
                        break
                break
        assert start is not None
        assert end is not None
        insert_pos = end
    else:
        # No __future__ imports.
        # We look for a docstring and insert the new node below that. If no docstring
        # exists, just insert the node at the top.
        for idx, node in enumerate(root.children):
            if node.type != syms.simple_stmt:
                break
            if not is_docstring(node):
                # This is the usual case.
                break
        insert_pos = idx

    children_hooks = []
    if package is None:
        import_ = Node(syms.import_name, [
            Leaf(token.NAME, u"import"),
            Leaf(token.NAME, name_to_import, prefix=u" ")
        ])
    else:
        import_ = FromImport(package, [Leaf(token.NAME, name_to_import, prefix=u" ")])
        if name_to_import == u'standard_library':
            # Add:
            #     standard_library.install_aliases()
            # after:
            #     from future import standard_library
            install_hooks = Node(syms.simple_stmt,
                                 [Node(syms.power,
                                       [Leaf(token.NAME, u'standard_library'),
                                        Node(syms.trailer, [Leaf(token.DOT, u'.'),
                                        Leaf(token.NAME, u'install_aliases')]),
                                        Node(syms.trailer, [Leaf(token.LPAR, u'('),
                                                            Leaf(token.RPAR, u')')])
                                       ])
                                 ]
                                )
            children_hooks = [install_hooks, Newline()]

        # FromImport(package, [Leaf(token.NAME, name_to_import, prefix=u" ")])

    children_import = [import_, Newline()]
    old_prefix = root.children[insert_pos].prefix
    root.children[insert_pos].prefix = u''
    root.insert_child(insert_pos, Node(syms.simple_stmt, children_import, prefix=old_prefix))
    if len(children_hooks) > 0:
        root.insert_child(insert_pos + 1, Node(syms.simple_stmt, children_hooks))


## The following functions are from python-modernize by Armin Ronacher:
# (a little edited).

def check_future_import(node):
    """If this is a future import, return set of symbols that are imported,
    else return None."""
    # node should be the import statement here
    savenode = node
    if not (node.type == syms.simple_stmt and node.children):
        return set()
    node = node.children[0]
    # now node is the import_from node
    if not (node.type == syms.import_from and
            # node.type == token.NAME and      # seems to break it
            hasattr(node.children[1], 'value') and
            node.children[1].value == u'__future__'):
        return set()
    if node.children[3].type == token.LPAR:
        node = node.children[4]
    else:
        node = node.children[3]
    # now node is the import_as_name[s]
    if node.type == syms.import_as_names:
        result = set()
        for n in node.children:
            if n.type == token.NAME:
                result.add(n.value)
            elif n.type == syms.import_as_name:
                n = n.children[0]
                assert n.type == token.NAME
                result.add(n.value)
        return result
    elif node.type == syms.import_as_name:
        node = node.children[0]
        assert node.type == token.NAME
        return set([node.value])
    elif node.type == token.NAME:
        return set([node.value])
    else:
        # TODO: handle brackets like this:
        #     from __future__ import (absolute_import, division)
        assert False, "strange import: %s" % savenode


SHEBANG_REGEX = r'^#!.*python'
ENCODING_REGEX = r"^#.*coding[:=]\s*([-\w.]+)"


def is_shebang_comment(node):
    """
    Comments are prefixes for Leaf nodes. Returns whether the given node has a
    prefix that looks like a shebang line or an encoding line:

        #!/usr/bin/env python
        #!/usr/bin/python3
    """
    return bool(re.match(SHEBANG_REGEX, node.prefix))


def is_encoding_comment(node):
    """
    Comments are prefixes for Leaf nodes. Returns whether the given node has a
    prefix that looks like an encoding line:

        # coding: utf-8
        # encoding: utf-8
        # -*- coding: <encoding name> -*-
        # vim: set fileencoding=<encoding name> :
    """
    return bool(re.match(ENCODING_REGEX, node.prefix))
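# The two patterns above can be exercised directly on the comment text that
# lib2to3 stores in a leaf's prefix. SHEBANG and ENCODING below are copies of
# the module-level patterns, repeated so the snippet is self-contained:

```python
import re

SHEBANG = r'^#!.*python'
ENCODING = r"^#.*coding[:=]\s*([-\w.]+)"

assert re.match(SHEBANG, "#!/usr/bin/env python\n")
assert re.match(ENCODING, "# -*- coding: utf-8 -*-\n")
assert re.match(ENCODING, "# vim: set fileencoding=latin-1 :\n")
assert re.match(SHEBANG, "# plain comment") is None
```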


def wrap_in_fn_call(fn_name, args, prefix=None):
    """
    Example:
    >>> wrap_in_fn_call("oldstr", (arg,))
    oldstr(arg)

    >>> wrap_in_fn_call("olddiv", (arg1, arg2))
    olddiv(arg1, arg2)

    >>> wrap_in_fn_call("olddiv", [arg1, comma, arg2, comma, arg3])
    olddiv(arg1, arg2, arg3)
    """
    assert len(args) > 0
    if len(args) == 2:
        expr1, expr2 = args
        newargs = [expr1, Comma(), expr2]
    else:
        newargs = args
    return Call(Name(fn_name), newargs, prefix=prefix)
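# The argument handling of wrap_in_fn_call() can be sketched with strings
# standing in for lib2to3 nodes: a two-element tuple gets a comma inserted
# between the operands, while longer sequences are assumed to already carry
# their own comma leaves. wrap_args_demo is a hypothetical stand-in.

```python
# Sketch of the comma-insertion rule in wrap_in_fn_call().
def wrap_args_demo(args):
    if len(args) == 2:
        a, b = args
        return [a, ",", b]    # real code inserts a Comma() leaf here
    return list(args)

assert wrap_args_demo(("x", "y")) == ["x", ",", "y"]
assert wrap_args_demo(["x", ",", "y", ",", "z"]) == ["x", ",", "y", ",", "z"]
```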
# ===== libfuturize/main.py =====
"""
futurize: automatic conversion to clean 2/3 code using ``python-future``
======================================================================

Like Armin Ronacher's modernize.py, ``futurize`` attempts to produce clean
standard Python 3 code that runs on both Py2 and Py3.

One pass
--------

Use it like this on Python 2 code:

  $ futurize --verbose mypython2script.py

This will attempt to port the code to standard Py3 code that also
provides Py2 compatibility with the help of the right imports from
``future``.

To write changes to the files, use the -w flag.

Two stages
----------

The ``futurize`` script can also be called in two separate stages. First:

  $ futurize --stage1 mypython2script.py

This produces more modern Python 2 code that is not yet compatible with Python
3. The tests should still run and the diff should be uncontroversial to apply to
most Python projects that are willing to drop support for Python 2.5 and lower.

After this, the recommended approach is to explicitly mark all strings that must
be byte-strings with a b'' prefix and all text (unicode) strings with a u''
prefix, and then invoke the second stage of Python 2 to 2/3 conversion with::

  $ futurize --stage2 mypython2script.py

Stage 2 adds a dependency on ``future``. It converts most remaining Python
2-specific code to Python 3 code and adds appropriate imports from ``future``
to restore Py2 support.

The command above leaves all unadorned string literals as native strings
(byte-strings on Py2, unicode strings on Py3). If instead you would like all
unadorned string literals to be promoted to unicode, you can also pass this
flag:

  $ futurize --stage2 --unicode-literals mypython2script.py

This adds the declaration ``from __future__ import unicode_literals`` to the
top of each file, which implicitly declares all unadorned string literals to be
unicode strings (``unicode`` on Py2).

All imports
-----------

The --all-imports option forces adding all ``__future__`` imports,
``builtins`` imports, and standard library aliases, even if they don't
seem necessary for the current state of each module. (This can simplify
testing, and can reduce the need to think about Py2 compatibility when editing
the code further.)

"""

from __future__ import (absolute_import, print_function, unicode_literals)
import future.utils
from future import __version__

import sys
import logging
import optparse
import os

from lib2to3.main import warn, StdoutRefactoringTool
from lib2to3 import refactor

from libfuturize.fixes import (lib2to3_fix_names_stage1,
                               lib2to3_fix_names_stage2,
                               libfuturize_fix_names_stage1,
                               libfuturize_fix_names_stage2)

fixer_pkg = 'libfuturize.fixes'


def main(args=None):
    """Main program.

    Args:
        args: optional; a list of command line arguments. If omitted,
              sys.argv[1:] is used.

    Returns a suggested exit status (0, 1, 2).
    """

    # Set up option parser
    parser = optparse.OptionParser(usage="futurize [options] file|dir ...")
    parser.add_option("-V", "--version", action="store_true",
                      help="Report the version number of futurize")
    parser.add_option("-a", "--all-imports", action="store_true",
                      help="Add all __future__ and future imports to each module")
    parser.add_option("-1", "--stage1", action="store_true",
                      help="Modernize Python 2 code only; no compatibility with Python 3 (or dependency on ``future``)")
    parser.add_option("-2", "--stage2", action="store_true",
                      help="Take modernized (stage1) code and add a dependency on ``future`` to provide Py3 compatibility.")
    parser.add_option("-0", "--both-stages", action="store_true",
                      help="Apply both stages 1 and 2")
    parser.add_option("-u", "--unicode-literals", action="store_true",
                      help="Add ``from __future__ import unicode_literals`` to implicitly convert all unadorned string literals '' into unicode strings")
    parser.add_option("-f", "--fix", action="append", default=[],
                      help="Each FIX specifies a transformation; default: all.\nEither use '-f division -f metaclass' etc. or use the fully-qualified module name: '-f lib2to3.fixes.fix_types -f libfuturize.fixes.fix_unicode_keep_u'")
    parser.add_option("-j", "--processes", action="store", default=1,
                      type="int", help="Run 2to3 concurrently")
    parser.add_option("-x", "--nofix", action="append", default=[],
                      help="Prevent a fixer from being run.")
    parser.add_option("-l", "--list-fixes", action="store_true",
                      help="List available transformations")
    parser.add_option("-p", "--print-function", action="store_true",
                      help="Modify the grammar so that print() is a function")
    parser.add_option("-v", "--verbose", action="store_true",
                      help="More verbose logging")
    parser.add_option("--no-diffs", action="store_true",
                      help="Don't show diffs of the refactoring")
    parser.add_option("-w", "--write", action="store_true",
                      help="Write back modified files")
    parser.add_option("-n", "--nobackups", action="store_true", default=False,
                      help="Don't write backups for modified files.")
    parser.add_option("-o", "--output-dir", action="store", type="str",
                      default="", help="Put output files in this directory "
                      "instead of overwriting the input files.  Requires -n. "
                      "For Python >= 2.7 only.")
    parser.add_option("-W", "--write-unchanged-files", action="store_true",
                      help="Also write files even if no changes were required"
                      " (useful with --output-dir); implies -w.")
    parser.add_option("--add-suffix", action="store", type="str", default="",
                      help="Append this string to all output filenames."
                      " Requires -n if non-empty. For Python >= 2.7 only."
                      "ex: --add-suffix='3' will generate .py3 files.")

    # Parse command line arguments
    flags = {}
    refactor_stdin = False
    options, args = parser.parse_args(args)

    if options.write_unchanged_files:
        flags["write_unchanged_files"] = True
        if not options.write:
            warn("--write-unchanged-files/-W implies -w.")
        options.write = True
    # If we allowed these, the original files would be renamed to backup names
    # but not replaced.
    if options.output_dir and not options.nobackups:
        parser.error("Can't use --output-dir/-o without -n.")
    if options.add_suffix and not options.nobackups:
        parser.error("Can't use --add-suffix without -n.")

    if not options.write and options.no_diffs:
        warn("not writing files and not printing diffs; that's not very useful")
    if not options.write and options.nobackups:
        parser.error("Can't use -n without -w")
    if "-" in args:
        refactor_stdin = True
        if options.write:
            print("Can't write to stdin.", file=sys.stderr)
            return 2
    # Is this ever necessary?
    if options.print_function:
        flags["print_function"] = True

    # Set up logging handler
    level = logging.DEBUG if options.verbose else logging.INFO
    logging.basicConfig(format='%(name)s: %(message)s', level=level)
    logger = logging.getLogger('libfuturize.main')

    if options.stage1 or options.stage2:
        assert options.both_stages is None
        options.both_stages = False
    else:
        options.both_stages = True

    avail_fixes = set()

    if options.stage1 or options.both_stages:
        avail_fixes.update(lib2to3_fix_names_stage1)
        avail_fixes.update(libfuturize_fix_names_stage1)
    if options.stage2 or options.both_stages:
        avail_fixes.update(lib2to3_fix_names_stage2)
        avail_fixes.update(libfuturize_fix_names_stage2)

    if options.unicode_literals:
        avail_fixes.add('libfuturize.fixes.fix_unicode_literals_import')

    if options.version:
        print(__version__)
        return 0
    if options.list_fixes:
        print("Available transformations for the -f/--fix option:")
        # for fixname in sorted(refactor.get_all_fix_names(fixer_pkg)):
        for fixname in sorted(avail_fixes):
            print(fixname)
        if not args:
            return 0
    if not args:
        print("At least one file or directory argument required.",
              file=sys.stderr)
        print("Use --help to show usage.", file=sys.stderr)
        return 2

    unwanted_fixes = set()
    for fix in options.nofix:
        if ".fix_" in fix:
            unwanted_fixes.add(fix)
        else:
            # Infer the full module name for the fixer.
            # First ensure that no names clash (e.g.
            # lib2to3.fixes.fix_blah and libfuturize.fixes.fix_blah):
            found = [f for f in avail_fixes
                     if f.endswith('fix_{0}'.format(fix))]
            if len(found) > 1:
                print("Ambiguous fixer name. Choose a fully qualified "
                      "module name instead from these:\n" +
                      "\n".join("  " + myf for myf in found),
                      file=sys.stderr)
                return 2
            elif len(found) == 0:
                print("Unknown fixer. Use --list-fixes or -l for a list.",
                      file=sys.stderr)
                return 2
            unwanted_fixes.add(found[0])

    extra_fixes = set()
    if options.all_imports:
        if options.stage1:
            prefix = 'libfuturize.fixes.'
            extra_fixes.add(prefix +
                            'fix_add__future__imports_except_unicode_literals')
        else:
            # In case the user hasn't run stage1 for some reason:
            prefix = 'libpasteurize.fixes.'
            extra_fixes.add(prefix + 'fix_add_all__future__imports')
            extra_fixes.add(prefix + 'fix_add_future_standard_library_import')
            extra_fixes.add(prefix + 'fix_add_all_future_builtins')
    explicit = set()
    if options.fix:
        all_present = False
        for fix in options.fix:
            if fix == 'all':
                all_present = True
            else:
                if ".fix_" in fix:
                    explicit.add(fix)
                else:
                    # Infer the full module name for the fixer.
                    # First ensure that no names clash (e.g.
                    # lib2to3.fixes.fix_blah and libfuturize.fixes.fix_blah):
                    found = [f for f in avail_fixes
                             if f.endswith('fix_{0}'.format(fix))]
                    if len(found) > 1:
                        print("Ambiguous fixer name. Choose a fully qualified "
                              "module name instead from these:\n" +
                              "\n".join("  " + myf for myf in found),
                              file=sys.stderr)
                        return 2
                    elif len(found) == 0:
                        print("Unknown fixer. Use --list-fixes or -l for a list.",
                              file=sys.stderr)
                        return 2
                    explicit.add(found[0])
        if len(explicit & unwanted_fixes) > 0:
            print("Conflicting usage: the following fixers have been "
                  "simultaneously requested and disallowed:\n" +
                  "\n".join("  " + myf for myf in (explicit & unwanted_fixes)),
                  file=sys.stderr)
            return 2
        requested = avail_fixes.union(explicit) if all_present else explicit
    else:
        requested = avail_fixes.union(explicit)
    fixer_names = (requested | extra_fixes) - unwanted_fixes

    input_base_dir = os.path.commonprefix(args)
    if (input_base_dir and not input_base_dir.endswith(os.sep)
        and not os.path.isdir(input_base_dir)):
        # One or more similar names were passed; their directory is the base.
        # os.path.commonprefix() matches character by character rather than
        # whole path elements (e.g. '/a/spam.py' and '/a/spat.py' share the
        # prefix '/a/spa'), so this corrects for that weird API.
        input_base_dir = os.path.dirname(input_base_dir)
    if options.output_dir:
        input_base_dir = input_base_dir.rstrip(os.sep)
        logger.info('Output in %r will mirror the input directory %r layout.',
                    options.output_dir, input_base_dir)

    # Initialize the refactoring tool
    if future.utils.PY26:
        extra_kwargs = {}
    else:
        extra_kwargs = {
                        'append_suffix': options.add_suffix,
                        'output_dir': options.output_dir,
                        'input_base_dir': input_base_dir,
                       }

    rt = StdoutRefactoringTool(
            sorted(fixer_names), flags, sorted(explicit),
            options.nobackups, not options.no_diffs,
            **extra_kwargs)

    # Refactor all files and directories passed as arguments
    if not rt.errors:
        if refactor_stdin:
            rt.refactor_stdin()
        else:
            try:
                rt.refactor(args, options.write, None,
                            options.processes)
            except refactor.MultiprocessingUnsupported:
                assert options.processes > 1
                print("Sorry, -j isn't " \
                      "supported on this platform.", file=sys.stderr)
                return 1
        rt.summarize()

    # Return error status (0 if rt.errors is zero)
    return int(bool(rt.errors))
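The unqualified fixer-name resolution used twice in main() above (once for --nofix, once for -f/--fix) can be sketched as a standalone helper. This is an editor's illustration, not part of the package; the `avail_fixes` entries below are hypothetical examples, not the real fixer list.

```python
# Standalone sketch of the fixer-name resolution in main() above.
# The avail_fixes entries are hypothetical examples, not the real fixer list.
avail_fixes = {
    'lib2to3.fixes.fix_idioms',
    'lib2to3.fixes.fix_print',
    'libfuturize.fixes.fix_print',
}

def resolve_fixer(fix, avail_fixes):
    """Map a short name like 'idioms' to a fully qualified fixer module."""
    if '.fix_' in fix:
        return fix  # already fully qualified
    found = [f for f in avail_fixes if f.endswith('fix_{0}'.format(fix))]
    if len(found) > 1:
        # main() prints the candidates to stderr and returns 2 instead.
        raise ValueError('Ambiguous fixer name: %s' % ', '.join(sorted(found)))
    if len(found) == 0:
        raise ValueError('Unknown fixer: %s' % fix)
    return found[0]
```

main() reports both failure modes on stderr and returns exit status 2 rather than raising.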
# ----- future/tests/__init__.py (empty) -----
# ----- future/tests/base.py -----
from __future__ import print_function, absolute_import
import os
import tempfile
import unittest
import sys
import re
import warnings
import io
from textwrap import dedent

from future.utils import bind_method, PY26, PY3, PY2, PY27
from future.moves.subprocess import check_output, STDOUT, CalledProcessError

if PY26:
    import unittest2 as unittest


def reformat_code(code):
    """
    Removes any leading \n and dedents.
    """
    if code.startswith('\n'):
        code = code[1:]
    return dedent(code)
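reformat_code() exists so tests can write code blocks as indented triple-quoted strings. A quick self-contained illustration (the function body is repeated here verbatim so the example runs alone; the sample string is an editor's example):

```python
from textwrap import dedent

def reformat_code(code):
    # Same logic as above: drop a single leading newline, then dedent.
    if code.startswith('\n'):
        code = code[1:]
    return dedent(code)

# A typical indented snippet, as it would appear inside a test method:
src = '''
    x = 1
    y = 2
'''
clean = reformat_code(src)
```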


def order_future_lines(code):
    """
    Returns the code block with any ``__future__`` import lines sorted, and
    then any ``future`` import lines sorted, then any ``builtins`` import lines
    sorted.

    This only sorts the lines within the expected blocks.

    See test_order_future_lines() for an example.
    """

    # We need .splitlines(keepends=True), which doesn't exist on Py2,
    # so we use this instead:
    lines = code.split('\n')

    uufuture_line_numbers = [i for i, line in enumerate(lines)
                               if line.startswith('from __future__ import ')]

    future_line_numbers = [i for i, line in enumerate(lines)
                             if line.startswith('from future')
                             or line.startswith('from past')]

    builtins_line_numbers = [i for i, line in enumerate(lines)
                             if line.startswith('from builtins')]

    assert code.lstrip() == code, ('internal usage error: '
            'dedent the code before calling order_future_lines()')

    def mymax(numbers):
        return max(numbers) if len(numbers) > 0 else 0

    def mymin(numbers):
        return min(numbers) if len(numbers) > 0 else float('inf')

    assert mymax(uufuture_line_numbers) <= mymin(future_line_numbers), \
            'the __future__ and future imports are out of order'

    # assert mymax(future_line_numbers) <= mymin(builtins_line_numbers), \
    #         'the future and builtins imports are out of order'

    uul = sorted([lines[i] for i in uufuture_line_numbers])
    sorted_uufuture_lines = dict(zip(uufuture_line_numbers, uul))

    fl = sorted([lines[i] for i in future_line_numbers])
    sorted_future_lines = dict(zip(future_line_numbers, fl))

    bl = sorted([lines[i] for i in builtins_line_numbers])
    sorted_builtins_lines = dict(zip(builtins_line_numbers, bl))

    # Replace the old unsorted "from __future__ import ..." lines with the
    # new sorted ones:
    new_lines = []
    for i in range(len(lines)):
        if i in uufuture_line_numbers:
            new_lines.append(sorted_uufuture_lines[i])
        elif i in future_line_numbers:
            new_lines.append(sorted_future_lines[i])
        elif i in builtins_line_numbers:
            new_lines.append(sorted_builtins_lines[i])
        else:
            new_lines.append(lines[i])
    return '\n'.join(new_lines)
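A worked example of the block-wise sort above, reduced to just the `__future__` block (the `future`/`past` and `builtins` blocks are handled identically); the input string is an editor's example:

```python
# Miniature of order_future_lines(): sort the lines within the __future__
# block while every other line keeps its position.
code = ('from __future__ import print_function\n'
        'from __future__ import absolute_import\n'
        'import os\n'
        'x = 1')

lines = code.split('\n')
idx = [i for i, line in enumerate(lines)
       if line.startswith('from __future__ import ')]
in_place = dict(zip(idx, sorted(lines[i] for i in idx)))
result = '\n'.join(in_place.get(i, line) for i, line in enumerate(lines))
```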


class VerboseCalledProcessError(CalledProcessError):
    """
    Like CalledProcessError, but it displays more information (message and
    script output) for diagnosing test failures etc.
    """
    def __init__(self, msg, returncode, cmd, output=None):
        self.msg = msg
        self.returncode = returncode
        self.cmd = cmd
        self.output = output

    def __str__(self):
        return ("Command '%s' failed with exit status %d\nMessage: %s\nOutput: %s"
                % (self.cmd, self.returncode, self.msg, self.output))

class FuturizeError(VerboseCalledProcessError):
    pass

class PasteurizeError(VerboseCalledProcessError):
    pass
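To show what the richer __str__ buys over a plain CalledProcessError, here is the formatting exercised directly. This is a self-contained copy for illustration only: the real class subclasses future.moves.subprocess.CalledProcessError, and the message, command, and output values below are made up.

```python
class VerboseCalledProcessError(Exception):  # real base: CalledProcessError
    def __init__(self, msg, returncode, cmd, output=None):
        self.msg = msg
        self.returncode = returncode
        self.cmd = cmd
        self.output = output

    def __str__(self):
        return ("Command '%s' failed with exit status %d\nMessage: %s\nOutput: %s"
                % (self.cmd, self.returncode, self.msg, self.output))

err = VerboseCalledProcessError('script raised SyntaxError', 1,
                                'python mytestscript.py', output='Traceback ...')
rendered = str(err)
```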


class CodeHandler(unittest.TestCase):
    """
    Handy mixin for test classes for writing / reading / futurizing /
    running .py files in the test suite.
    """
    def setUp(self):
        """
        The outputs from the various futurize stages should have the
        following headers:
        """
        # After stage1:
        # TODO: use this form after implementing a fixer to consolidate
        #       __future__ imports into a single line:
        # self.headers1 = """
        # from __future__ import absolute_import, division, print_function
        # """
        self.headers1 = reformat_code("""
        from __future__ import absolute_import
        from __future__ import division
        from __future__ import print_function
        """)

        # After stage2 --all-imports:
        # TODO: use this form after implementing a fixer to consolidate
        #       __future__ imports into a single line:
        # self.headers2 = """
        # from __future__ import (absolute_import, division,
        #                         print_function, unicode_literals)
        # from future import standard_library
        # from future.builtins import *
        # """
        self.headers2 = reformat_code("""
        from __future__ import absolute_import
        from __future__ import division
        from __future__ import print_function
        from __future__ import unicode_literals
        from future import standard_library
        standard_library.install_aliases()
        from builtins import *
        """)
        self.interpreters = [sys.executable]
        self.tempdir = tempfile.mkdtemp() + os.path.sep
        pypath = os.getenv('PYTHONPATH')
        if pypath:
            self.env = {'PYTHONPATH': os.getcwd() + os.pathsep + pypath}
        else:
            self.env = {'PYTHONPATH': os.getcwd()}

    def convert(self, code, stages=(1, 2), all_imports=False, from3=False,
                reformat=True, run=True, conservative=False):
        """
        Converts the code block using ``futurize`` and returns the
        resulting code.

        Passing stages=[1] or stages=[2] passes the flag ``--stage1`` or
        ``stage2`` to ``futurize``. Passing both stages runs ``futurize``
        with both stages by default.

        If from3 is False, runs ``futurize``, converting from Python 2 to
        both 2 and 3. If from3 is True, runs ``pasteurize`` to convert
        from Python 3 to both 2 and 3.

        Optionally reformats the code block first using the reformat() function.

        If run is True, runs the resulting code under all Python
        interpreters in self.interpreters.
        """
        if reformat:
            code = reformat_code(code)
        self._write_test_script(code)
        self._futurize_test_script(stages=stages, all_imports=all_imports,
                                   from3=from3, conservative=conservative)
        output = self._read_test_script()
        if run:
            for interpreter in self.interpreters:
                _ = self._run_test_script(interpreter=interpreter)
        return output

    def compare(self, output, expected, ignore_imports=True):
        """
        Compares whether the code blocks are equal. If not, raises an
        exception so the test fails. Ignores any trailing whitespace like
        blank lines.

        If ignore_imports is True, passes the code blocks into the
        strip_future_imports method.

        If one code block is a unicode string and the other a
        byte-string, it assumes the byte-string is encoded as utf-8.
        """
        if ignore_imports:
            output = self.strip_future_imports(output)
            expected = self.strip_future_imports(expected)
        if isinstance(output, bytes) and not isinstance(expected, bytes):
            output = output.decode('utf-8')
        if isinstance(expected, bytes) and not isinstance(output, bytes):
            expected = expected.decode('utf-8')
        self.assertEqual(order_future_lines(output.rstrip()),
                         expected.rstrip())

    def strip_future_imports(self, code):
        """
        Strips any of these import lines:

            from __future__ import <anything>
            from future <anything>
            from future.<anything>
            from builtins <anything>

        or any line containing:
            install_hooks()
        or:
            install_aliases()

        Limitation: doesn't handle imports split across multiple lines like
        this:

            from __future__ import (absolute_import, division, print_function,
                                    unicode_literals)
        """
        output = []
        # We need .splitlines(keepends=True), which doesn't exist on Py2,
        # so we use this instead:
        for line in code.split('\n'):
            if not (line.startswith('from __future__ import ')
                    or line.startswith('from future ')
                    or line.startswith('from builtins ')
                    or 'install_hooks()' in line
                    or 'install_aliases()' in line
                    # but don't match "from future_builtins" :)
                    or line.startswith('from future.')):
                output.append(line)
        return '\n'.join(output)
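The filter above can be exercised standalone. Note the edge case the inline comment alludes to: a "from future_builtins ..." line is kept, because the dropped prefixes are 'from future ' (with a trailing space) and 'from future.'. The body is repeated verbatim as a free function so the example runs alone; the sample input is an editor's example.

```python
def strip_future_imports(code):
    # Same filter as the method above, lifted out as a free function.
    output = []
    for line in code.split('\n'):
        if not (line.startswith('from __future__ import ')
                or line.startswith('from future ')
                or line.startswith('from builtins ')
                or 'install_hooks()' in line
                or 'install_aliases()' in line
                or line.startswith('from future.')):
            output.append(line)
    return '\n'.join(output)

code = ('from __future__ import print_function\n'
        'from future import standard_library\n'
        'standard_library.install_aliases()\n'
        'from future_builtins import map\n'
        'print("hi")')
```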

    def convert_check(self, before, expected, stages=(1, 2), all_imports=False,
                      ignore_imports=True, from3=False, run=True,
                      conservative=False):
        """
        Convenience method that calls convert() and compare().

        Reformats the code blocks automatically using the reformat_code()
        function.

        If all_imports is passed, we add the appropriate import headers
        for the stage(s) selected to the ``expected`` code-block, so they
        needn't appear repeatedly in the test code.

        If ignore_imports is True, ignores the presence of any lines
        beginning:

            from __future__ import ...
            from future import ...

        for the purpose of the comparison.
        """
        output = self.convert(before, stages=stages, all_imports=all_imports,
                              from3=from3, run=run, conservative=conservative)
        if all_imports:
            headers = self.headers2 if 2 in stages else self.headers1
        else:
            headers = ''

        reformatted = reformat_code(expected)
        if headers in reformatted:
            headers = ''

        self.compare(output, headers + reformatted,
                     ignore_imports=ignore_imports)

    def unchanged(self, code, **kwargs):
        """
        Convenience method to ensure the code is unchanged by the
        futurize process.
        """
        self.convert_check(code, code, **kwargs)

    def _write_test_script(self, code, filename='mytestscript.py'):
        """
        Dedents the given code (a multiline string) and writes it out to
        a file in a temporary folder like /tmp/tmpUDCn7x/mytestscript.py.
        """
        if isinstance(code, bytes):
            code = code.decode('utf-8')
        # Be explicit about encoding the temp file as UTF-8 (issue #63):
        with io.open(self.tempdir + filename, 'wt', encoding='utf-8') as f:
            f.write(dedent(code))

    def _read_test_script(self, filename='mytestscript.py'):
        with io.open(self.tempdir + filename, 'rt', encoding='utf-8') as f:
            newsource = f.read()
        return newsource

    def _futurize_test_script(self, filename='mytestscript.py', stages=(1, 2),
                              all_imports=False, from3=False,
                              conservative=False):
        params = []
        stages = list(stages)
        if all_imports:
            params.append('--all-imports')
        if from3:
            script = 'pasteurize.py'
        else:
            script = 'futurize.py'
            if stages == [1]:
                params.append('--stage1')
            elif stages == [2]:
                params.append('--stage2')
            else:
                assert stages == [1, 2]
            if conservative:
                params.append('--conservative')
            # No extra params needed

        # Absolute file path:
        fn = self.tempdir + filename
        call_args = [sys.executable, script] + params + ['-w', fn]
        try:
            output = check_output(call_args, stderr=STDOUT, env=self.env)
        except CalledProcessError as e:
            with open(fn) as f:
                msg = (
                    'Error running the command %s\n'
                    '%s\n'
                    'Contents of file %s:\n'
                    '\n'
                    '%s') % (
                        ' '.join(call_args),
                        'env=%s' % self.env,
                        fn,
                        '----\n%s\n----' % f.read(),
                    )
            ErrorClass = (FuturizeError if 'futurize' in script else PasteurizeError)

            if not hasattr(e, 'output'):
                # The attribute CalledProcessError.output doesn't exist on Py2.6
                e.output = None
            raise ErrorClass(msg, e.returncode, e.cmd, output=e.output)
        return output

    def _run_test_script(self, filename='mytestscript.py',
                         interpreter=sys.executable):
        # Absolute file path:
        fn = self.tempdir + filename
        try:
            output = check_output([interpreter, fn],
                                  env=self.env, stderr=STDOUT)
        except CalledProcessError as e:
            with open(fn) as f:
                msg = (
                    'Error running the command %s\n'
                    '%s\n'
                    'Contents of file %s:\n'
                    '\n'
                    '%s') % (
                        ' '.join([interpreter, fn]),
                        'env=%s' % self.env,
                        fn,
                        '----\n%s\n----' % f.read(),
                    )
            if not hasattr(e, 'output'):
                # The attribute CalledProcessError.output doesn't exist on Py2.6
                e.output = None
            raise VerboseCalledProcessError(msg, e.returncode, e.cmd, output=e.output)
        return output
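The two wrappers above share one error-handling pattern: capture combined stdout/stderr via `check_output(..., stderr=STDOUT)` and, on failure, re-raise with the full command line and output in the message. A minimal standalone sketch of that pattern (`run_verbose` is an illustrative name, not part of the test suite):

```python
import subprocess
import sys

def run_verbose(cmd, env=None):
    # Capture stdout and stderr together; on failure, re-raise with the
    # full command line and the captured output, like the wrappers above.
    try:
        return subprocess.check_output(cmd, stderr=subprocess.STDOUT, env=env)
    except subprocess.CalledProcessError as e:
        raise RuntimeError('Error running the command %s\n%r'
                           % (' '.join(cmd), e.output))

out = run_verbose([sys.executable, '-c', 'print("ok")'])
assert out.strip() == b'ok'
```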


# Decorator to skip some tests on Python 2.6 ...
skip26 = unittest.skipIf(PY26, "this test is known to fail on Py2.6")


def expectedFailurePY3(func):
    if not PY3:
        return func
    return unittest.expectedFailure(func)

def expectedFailurePY26(func):
    if not PY26:
        return func
    return unittest.expectedFailure(func)


def expectedFailurePY27(func):
    if not PY27:
        return func
    return unittest.expectedFailure(func)


def expectedFailurePY2(func):
    if not PY2:
        return func
    return unittest.expectedFailure(func)


# Renamed in Py3.3:
if not hasattr(unittest.TestCase, 'assertRaisesRegex'):
    unittest.TestCase.assertRaisesRegex = unittest.TestCase.assertRaisesRegexp

# From Py3.3:
def assertRegex(self, text, expected_regex, msg=None):
    """Fail the test unless the text matches the regular expression."""
    if isinstance(expected_regex, (str, unicode)):
        assert expected_regex, "expected_regex must not be empty."
        expected_regex = re.compile(expected_regex)
    if not expected_regex.search(text):
        msg = msg or "Regex didn't match"
        msg = '%s: %r not found in %r' % (msg, expected_regex.pattern, text)
        raise self.failureException(msg)

if not hasattr(unittest.TestCase, 'assertRegex'):
    bind_method(unittest.TestCase, 'assertRegex', assertRegex)
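The `hasattr`-guarded binding above is the standard way to backport a missing `TestCase` method without clobbering a native one. A minimal sketch of the same guard (`bind_if_missing`, `Dummy` and `greet` are illustrative names):

```python
def bind_if_missing(cls, name, func):
    # Attach func only when the class lacks the attribute -- the same
    # guard used above for assertRaisesRegex / assertRegex.
    if not hasattr(cls, name):
        setattr(cls, name, func)

class Dummy(object):
    pass

bind_if_missing(Dummy, 'greet', lambda self: 'hello')
bind_if_missing(Dummy, 'greet', lambda self: 'clobbered')  # no-op: already bound

assert Dummy().greet() == 'hello'
```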

class _AssertRaisesBaseContext(object):

    def __init__(self, expected, test_case, callable_obj=None,
                 expected_regex=None):
        self.expected = expected
        self.test_case = test_case
        if callable_obj is not None:
            try:
                self.obj_name = callable_obj.__name__
            except AttributeError:
                self.obj_name = str(callable_obj)
        else:
            self.obj_name = None
        if isinstance(expected_regex, (bytes, str)):
            expected_regex = re.compile(expected_regex)
        self.expected_regex = expected_regex
        self.msg = None

    def _raiseFailure(self, standardMsg):
        msg = self.test_case._formatMessage(self.msg, standardMsg)
        raise self.test_case.failureException(msg)

    def handle(self, name, callable_obj, args, kwargs):
        """
        If callable_obj is None, assertRaises/Warns is being used as a
        context manager, so check for a 'msg' kwarg and return self.
        If callable_obj is not None, call it passing args and kwargs.
        """
        if callable_obj is None:
            self.msg = kwargs.pop('msg', None)
            return self
        with self:
            callable_obj(*args, **kwargs)

class _AssertWarnsContext(_AssertRaisesBaseContext):
    """A context manager used to implement TestCase.assertWarns* methods."""

    def __enter__(self):
        # The __warningregistry__'s need to be in a pristine state for tests
        # to work properly.
        for v in sys.modules.values():
            if getattr(v, '__warningregistry__', None):
                v.__warningregistry__ = {}
        self.warnings_manager = warnings.catch_warnings(record=True)
        self.warnings = self.warnings_manager.__enter__()
        warnings.simplefilter("always", self.expected)
        return self

    def __exit__(self, exc_type, exc_value, tb):
        self.warnings_manager.__exit__(exc_type, exc_value, tb)
        if exc_type is not None:
            # let unexpected exceptions pass through
            return
        try:
            exc_name = self.expected.__name__
        except AttributeError:
            exc_name = str(self.expected)
        first_matching = None
        for m in self.warnings:
            w = m.message
            if not isinstance(w, self.expected):
                continue
            if first_matching is None:
                first_matching = w
            if (self.expected_regex is not None and
                not self.expected_regex.search(str(w))):
                continue
            # store warning for later retrieval
            self.warning = w
            self.filename = m.filename
            self.lineno = m.lineno
            return
        # Now we simply try to choose a helpful failure message
        if first_matching is not None:
            self._raiseFailure('"{}" does not match "{}"'.format(
                     self.expected_regex.pattern, str(first_matching)))
        if self.obj_name:
            self._raiseFailure("{} not triggered by {}".format(exc_name,
                                                               self.obj_name))
        else:
            self._raiseFailure("{} not triggered".format(exc_name))


def assertWarns(self, expected_warning, callable_obj=None, *args, **kwargs):
    """Fail unless a warning of class warnClass is triggered
       by callable_obj when invoked with arguments args and keyword
       arguments kwargs.  If a different type of warning is
       triggered, it will not be handled: depending on the other
       warning filtering rules in effect, it might be silenced, printed
       out, or raised as an exception.

       If called with callable_obj omitted or None, will return a
       context object used like this::

            with self.assertWarns(SomeWarning):
                do_something()

       An optional keyword argument 'msg' can be provided when assertWarns
       is used as a context object.

       The context manager keeps a reference to the first matching
       warning as the 'warning' attribute; similarly, the 'filename'
       and 'lineno' attributes give you information about the line
       of Python code from which the warning was triggered.
       This allows you to inspect the warning after the assertion::

           with self.assertWarns(SomeWarning) as cm:
               do_something()
           the_warning = cm.warning
           self.assertEqual(the_warning.some_attribute, 147)
    """
    context = _AssertWarnsContext(expected_warning, self, callable_obj)
    return context.handle('assertWarns', callable_obj, args, kwargs)

if not hasattr(unittest.TestCase, 'assertWarns'):
    bind_method(unittest.TestCase, 'assertWarns', assertWarns)
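Underneath, `_AssertWarnsContext` boils down to `warnings.catch_warnings(record=True)` plus a category filter over the recorded messages. A standalone sketch of that capture pattern (`capture_warnings` is an illustrative name):

```python
import warnings

def capture_warnings(func, category=Warning):
    # Record every warning raised while func() runs, mirroring the
    # catch_warnings(record=True) setup in _AssertWarnsContext.__enter__.
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always", category)
        func()
    # Keep only warnings of the expected category, like the __exit__ loop.
    return [w for w in caught if isinstance(w.message, category)]

matched = capture_warnings(
    lambda: warnings.warn("deprecated soon", DeprecationWarning),
    DeprecationWarning,
)
assert len(matched) == 1
assert "deprecated soon" in str(matched[0].message)
```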
PK�Au\fa�zzfuture/__init__.pynu�[���"""
future: Easy, safe support for Python 2/3 compatibility
=======================================================

``future`` is the missing compatibility layer between Python 2 and Python
3. It allows you to use a single, clean Python 3.x-compatible codebase to
support both Python 2 and Python 3 with minimal overhead.

It is designed to be used as follows::

    from __future__ import (absolute_import, division,
                            print_function, unicode_literals)
    from builtins import (
             bytes, dict, int, list, object, range, str,
             ascii, chr, hex, input, next, oct, open,
             pow, round, super,
             filter, map, zip)

followed by predominantly standard, idiomatic Python 3 code that then runs
similarly on Python 2.6/2.7 and Python 3.3+.

The imports have no effect on Python 3. On Python 2, they shadow the
corresponding builtins, which normally have different semantics on Python 3
versus 2, to provide their Python 3 semantics.


Standard library reorganization
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``future`` supports the standard library reorganization (PEP 3108) through the
following Py3 interfaces:

    >>> # Top-level packages with Py3 names provided on Py2:
    >>> import html.parser
    >>> import queue
    >>> import tkinter.dialog
    >>> import xmlrpc.client
    >>> # etc.

    >>> # Aliases provided for extensions to existing Py2 module names:
    >>> from future.standard_library import install_aliases
    >>> install_aliases()

    >>> from collections import Counter, OrderedDict   # backported to Py2.6
    >>> from collections import UserDict, UserList, UserString
    >>> import urllib.request
    >>> from itertools import filterfalse, zip_longest
    >>> from subprocess import getoutput, getstatusoutput


Automatic conversion
--------------------

An included script called `futurize
<https://python-future.org/automatic_conversion.html>`_ aids in converting
code (from either Python 2 or Python 3) to code compatible with both
platforms. It is similar to ``python-modernize`` but goes further in
providing Python 3 compatibility through the use of the backported types
and builtin functions in ``future``.


Documentation
-------------

See: https://python-future.org


Credits
-------

:Author:  Ed Schofield, Jordan M. Adler, et al
:Sponsor: Python Charmers: https://pythoncharmers.com
:Others:  See docs/credits.rst or https://python-future.org/credits.html


Licensing
---------
Copyright 2013-2024 Python Charmers, Australia.
The software is distributed under an MIT licence. See LICENSE.txt.

"""

__title__ = 'future'
__author__ = 'Ed Schofield'
__license__ = 'MIT'
__copyright__ = 'Copyright 2013-2024 Python Charmers (https://pythoncharmers.com)'
__ver_major__ = 1
__ver_minor__ = 0
__ver_patch__ = 0
__ver_sub__ = ''
__version__ = "%d.%d.%d%s" % (__ver_major__, __ver_minor__,
                              __ver_patch__, __ver_sub__)
PK�Au\n����future/builtins/misc.pynu�[���"""
A module that brings in equivalents of various modified Python 3 builtins
into Py2. Has no effect on Py3.

The builtin functions are:

- ``ascii`` (from Py2's future_builtins module)
- ``hex`` (from Py2's future_builtins module)
- ``oct`` (from Py2's future_builtins module)
- ``chr`` (equivalent to ``unichr`` on Py2)
- ``input`` (equivalent to ``raw_input`` on Py2)
- ``next`` (calls ``__next__`` if it exists, else ``next`` method)
- ``open`` (equivalent to io.open on Py2)
- ``super`` (backport of Py3's magic zero-argument super() function)
- ``round`` (new "Banker's Rounding" behaviour from Py3)
- ``max`` (new default option from Py3.4)
- ``min`` (new default option from Py3.4)

``isinstance`` is also currently exported for backwards compatibility
with v0.8.2, although this has been deprecated since v0.9.


input()
-------
Like the new ``input()`` function from Python 3 (without eval()), except
that it returns bytes. Equivalent to Python 2's ``raw_input()``.

Warning: Python 2's ``input()`` is unsafe: forgetting to import the new
``input`` from ``future`` might otherwise lead to a security
vulnerability (shell injection) on Python 2. Versions of ``future``
before 0.11 therefore removed the old ``input()`` entirely from
``__builtin__``; this module now keeps it in ``__builtin__`` but shadows
the name, so the original remains available as ``__builtin__.input``.

Fortunately, ``input()`` seems to be seldom used in the wild in Python
2...

"""

from future import utils


if utils.PY2:
    from io import open
    from future_builtins import ascii, oct, hex
    from __builtin__ import unichr as chr, pow as _builtin_pow
    import __builtin__

    # Only for backward compatibility with future v0.8.2:
    isinstance = __builtin__.isinstance

    # Warning: Python 2's input() is unsafe and MUST NOT be usable
    # accidentally by someone who expects Python 3 semantics but forgets
    # to import it on Python 2. Versions of ``future`` prior to 0.11
    # deleted it from __builtin__.  Now we keep it in __builtin__ but
    # shadow the name like all others. Just be sure to import ``input``.

    input = raw_input

    from future.builtins.newnext import newnext as next
    from future.builtins.newround import newround as round
    from future.builtins.newsuper import newsuper as super
    from future.builtins.new_min_max import newmax as max
    from future.builtins.new_min_max import newmin as min
    from future.types.newint import newint

    _SENTINEL = object()

    def pow(x, y, z=_SENTINEL):
        """
        pow(x, y[, z]) -> number

        With two arguments, equivalent to x**y.  With three arguments,
        equivalent to (x**y) % z, but may be more efficient (e.g. for ints).
        """
        # Handle newints
        if isinstance(x, newint):
            x = long(x)
        if isinstance(y, newint):
            y = long(y)
        if isinstance(z, newint):
            z = long(z)

        try:
            if z is _SENTINEL:
                return _builtin_pow(x, y)
            else:
                return _builtin_pow(x, y, z)
        except ValueError:
            if z is _SENTINEL:
                return _builtin_pow(x+0j, y)
            else:
                return _builtin_pow(x+0j, y, z)


    # ``future`` doesn't support Py3.0/3.1. If we ever did, we'd add this:
    #     callable = __builtin__.callable

    __all__ = ['ascii', 'chr', 'hex', 'input', 'isinstance', 'next', 'oct',
               'open', 'pow', 'round', 'super', 'max', 'min']

else:
    import builtins
    ascii = builtins.ascii
    chr = builtins.chr
    hex = builtins.hex
    input = builtins.input
    next = builtins.next
    # Only for backward compatibility with future v0.8.2:
    isinstance = builtins.isinstance
    oct = builtins.oct
    open = builtins.open
    pow = builtins.pow
    round = builtins.round
    super = builtins.super
    if utils.PY34_PLUS:
        max = builtins.max
        min = builtins.min
        __all__ = []
    else:
        from future.builtins.new_min_max import newmax as max
        from future.builtins.new_min_max import newmin as min
        __all__ = ['min', 'max']

    # The callable() function was removed from Py3.0 and 3.1 and
    # reintroduced into Py3.2+. ``future`` doesn't support Py3.0/3.1. If we ever
    # did, we'd add this:
    # try:
    #     callable = builtins.callable
    # except AttributeError:
    #     # Definition from Pandas
    #     def callable(obj):
    #         return any("__call__" in klass.__dict__ for klass in type(obj).__mro__)
    #     __all__.append('callable')
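On Python 3 the builtin ``round()`` and ``pow()`` already behave as the backports described above; a quick illustration of banker's rounding and three-argument (modular) ``pow``:

```python
# Banker's rounding: halfway cases go to the nearest even integer,
# which is what the newround backport reproduces on Python 2.
assert round(0.5) == 0
assert round(1.5) == 2
assert round(2.5) == 2
assert round(3.5) == 4

# Three-argument pow is modular exponentiation, (x ** y) % z, computed
# without materialising the full power.
assert pow(7, 128, 13) == (7 ** 128) % 13
```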
PK�Au\�����future/builtins/new_min_max.pynu�[���import itertools

from future import utils
if utils.PY2:
    from __builtin__ import max as _builtin_max, min as _builtin_min
else:
    from builtins import max as _builtin_max, min as _builtin_min

_SENTINEL = object()


def newmin(*args, **kwargs):
    return new_min_max(_builtin_min, *args, **kwargs)


def newmax(*args, **kwargs):
    return new_min_max(_builtin_max, *args, **kwargs)


def new_min_max(_builtin_func, *args, **kwargs):
    """
    To support the argument "default" introduced in python 3.4 for min and max
    :param _builtin_func: builtin min or builtin max
    :param args:
    :param kwargs:
    :return: returns the min or max based on the arguments passed
    """

    for key in kwargs:
        if key not in ('key', 'default'):
            raise TypeError('Illegal argument %s' % key)

    if len(args) == 0:
        raise TypeError('{}() expected at least 1 argument, got 0'.format(
            _builtin_func.__name__))

    if len(args) != 1 and kwargs.get('default', _SENTINEL) is not _SENTINEL:
        raise TypeError('Cannot specify a default for {}() with multiple '
                        'positional arguments'.format(_builtin_func.__name__))

    if len(args) == 1:
        iterator = iter(args[0])
        try:
            first = next(iterator)
        except StopIteration:
            if kwargs.get('default', _SENTINEL) is not _SENTINEL:
                return kwargs.get('default')
            else:
                raise ValueError('{}() arg is an empty sequence'.format(_builtin_func.__name__))
        else:
            iterator = itertools.chain([first], iterator)
        if kwargs.get('key') is not None:
            return _builtin_func(iterator, key=kwargs.get('key'))
        else:
            return _builtin_func(iterator)

    if len(args) > 1:
        if kwargs.get('key') is not None:
            return _builtin_func(args, key=kwargs.get('key'))
        else:
            return _builtin_func(args)
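``new_min_max()`` above reproduces the Python 3.4 semantics of the ``default`` keyword; on Python 3.4+ the builtins themselves show the target behaviour:

```python
# `default` is returned only when the single iterable argument is empty.
assert min([], default=0) == 0
assert max([], default=-1) == -1
assert min([5, 3, 9], default=0) == 3

# Mixing `default` with multiple positional arguments is a TypeError,
# exactly the case new_min_max() rejects above.
try:
    min(1, 2, default=0)
except TypeError:
    pass
else:
    raise AssertionError('expected TypeError')

# `key` and `default` combine as expected.
assert max(['aa', 'b'], key=len, default='') == 'aa'
```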
PK�Au\6���==future/builtins/disabled.pynu�[���"""
This disables builtin functions (and one exception class) which are
removed from Python 3.3.

This module is designed to be used like this::

    from future.builtins.disabled import *

This disables the following obsolete Py2 builtin functions::

    apply, cmp, coerce, execfile, file, input, long,
    raw_input, reduce, reload, unicode, xrange

We don't hack __builtin__, which is very fragile because it contaminates
imported modules too. Instead, we just create new functions with
the same names as the obsolete builtins from Python 2 which raise
NameError exceptions when called.

Note that both ``input()`` and ``raw_input()`` are among the disabled
functions (in this module). Although ``input()`` exists as a builtin in
Python 3, the Python 2 ``input()`` builtin is unsafe to use because it
can lead to shell injection. Therefore we shadow it by default upon ``from
future.builtins.disabled import *``, in case someone forgets to import our
replacement ``input()`` somehow and expects Python 3 semantics.

See the ``future.builtins.misc`` module for a working version of
``input`` with Python 3 semantics.

(Note that callable() is not among the functions disabled; this was
reintroduced into Python 3.2.)

This exception class is also disabled:

    StandardError

"""

from __future__ import division, absolute_import, print_function

from future import utils


OBSOLETE_BUILTINS = ['apply', 'chr', 'cmp', 'coerce', 'execfile', 'file',
                     'input', 'long', 'raw_input', 'reduce', 'reload',
                     'unicode', 'xrange', 'StandardError']


def disabled_function(name):
    '''
    Returns a function that cannot be called
    '''
    def disabled(*args, **kwargs):
        '''
        A function disabled by the ``future`` module. This function is
        no longer a builtin in Python 3.
        '''
        raise NameError('obsolete Python 2 builtin {0} is disabled'.format(name))
    return disabled


if not utils.PY3:
    for fname in OBSOLETE_BUILTINS:
        locals()[fname] = disabled_function(fname)
    __all__ = OBSOLETE_BUILTINS
else:
    __all__ = []
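The ``disabled_function()`` closure can be exercised on its own; a minimal sketch (copied from the definition above, with ``xrange_stub`` as an illustrative name):

```python
def disabled_function(name):
    # Same closure as above: the returned stub raises NameError no
    # matter how it is called.
    def disabled(*args, **kwargs):
        raise NameError(
            'obsolete Python 2 builtin {0} is disabled'.format(name))
    return disabled

xrange_stub = disabled_function('xrange')
try:
    xrange_stub(10)
except NameError as e:
    assert 'xrange' in str(e)
else:
    raise AssertionError('expected NameError')
```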
PK�Au\�C��future/builtins/__init__.pynu�[���"""
A module that brings in equivalents of the new and modified Python 3
builtins into Py2. Has no effect on Py3.

See the docs `here <https://python-future.org/what-else.html>`_
(``docs/what-else.rst``) for more information.

"""

from future.builtins.iterators import (filter, map, zip)
# The isinstance import is no longer needed. We provide it only for
# backward-compatibility with future v0.8.2. It will be removed in future v1.0.
from future.builtins.misc import (ascii, chr, hex, input, isinstance, next,
                                  oct, open, pow, round, super, max, min)
from future.utils import PY3

if PY3:
    import builtins
    bytes = builtins.bytes
    dict = builtins.dict
    int = builtins.int
    list = builtins.list
    object = builtins.object
    range = builtins.range
    str = builtins.str
    __all__ = []
else:
    from future.types import (newbytes as bytes,
                              newdict as dict,
                              newint as int,
                              newlist as list,
                              newobject as object,
                              newrange as range,
                              newstr as str)
from future import utils


if not utils.PY3:
    # We only import names that shadow the builtins on Py2. No other namespace
    # pollution on Py2.

    # Only shadow builtins on Py2; no new names
    __all__ = ['filter', 'map', 'zip',
               'ascii', 'chr', 'hex', 'input', 'next', 'oct', 'open', 'pow',
               'round', 'super',
               'bytes', 'dict', 'int', 'list', 'object', 'range', 'str', 'max', 'min'
              ]

else:
    # No namespace pollution on Py3
    __all__ = []
PK�Au\�&	ttfuture/builtins/iterators.pynu�[���"""
This module is designed to be used as follows::

    from future.builtins.iterators import *

And then, for example::

    for i in range(10**15):
        pass

    for (a, b) in zip(range(10**15), range(-10**15, 0)):
        pass

Note that this is standard Python 3 code, plus some imports that do
nothing on Python 3.

The iterators this brings in are::

- ``range``
- ``filter``
- ``map``
- ``zip``

On Python 2, ``range`` is a pure-Python backport of Python 3's ``range``
iterator with slicing support. The other iterators (``filter``, ``map``,
``zip``) are from the ``itertools`` module on Python 2. On Python 3 these
are available in the module namespace but not exported for * imports via
__all__ (zero namespace pollution).

Note that these are also available in the standard library
``future_builtins`` module on Python 2 -- but not Python 3, so using
the standard library version is not portable, nor anywhere near complete.
"""

from __future__ import division, absolute_import, print_function

import itertools
from future import utils

if not utils.PY3:
    filter = itertools.ifilter
    map = itertools.imap
    from future.types import newrange as range
    zip = itertools.izip
    __all__ = ['filter', 'map', 'range', 'zip']
else:
    import builtins
    filter = builtins.filter
    map = builtins.map
    range = builtins.range
    zip = builtins.zip
    __all__ = []
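The docstring's ``10**15`` examples work because on Python 3 (and with these backports on Python 2) ``range``, ``zip``, ``map`` and ``filter`` are lazy; a quick check on Python 3:

```python
from itertools import islice

# range(10**15) allocates no list; only the items actually pulled exist.
big = range(10**15)
assert list(islice(big, 3)) == [0, 1, 2]

# zip stops at the shorter input, so pairing a huge range with a
# two-character string touches only two elements.
assert list(zip(range(10**15), 'ab')) == [(0, 'a'), (1, 'b')]

# map and filter are likewise iterators, not lists.
assert list(map(abs, [-1, -2])) == [1, 2]
assert list(filter(None, [0, 1, '', 'x'])) == [1, 'x']
```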
PK�Au\�Kh��future/builtins/newnext.pynu�[���'''
This module provides a newnext() function in Python 2 that mimics the
behaviour of ``next()`` in Python 3, falling back to Python 2's behaviour for
compatibility if this fails.

``newnext(iterator)`` calls the iterator's ``__next__()`` method if it exists. If this
doesn't exist, it falls back to calling a ``next()`` method.

For example:

    >>> class Odds(object):
    ...     def __init__(self, start=1):
    ...         self.value = start - 2
    ...     def __next__(self):                 # note the Py3 interface
    ...         self.value += 2
    ...         return self.value
    ...     def __iter__(self):
    ...         return self
    ...
    >>> iterator = Odds()
    >>> next(iterator)
    1
    >>> next(iterator)
    3

If you are defining your own custom iterator class as above, it is preferable
to explicitly decorate the class with the @implements_iterator decorator from
``future.utils`` as follows:

    >>> @implements_iterator
    ... class Odds(object):
    ...     # etc
    ...     pass

This next() function is primarily for consuming iterators defined in Python 3
code elsewhere that we would like to run on Python 2 or 3.
'''

_builtin_next = next

_SENTINEL = object()

def newnext(iterator, default=_SENTINEL):
    """
    next(iterator[, default])

    Return the next item from the iterator. If default is given and the iterator
    is exhausted, it is returned instead of raising StopIteration.
    """

    # args = []
    # if default is not _SENTINEL:
    #     args.append(default)
    try:
        try:
            return iterator.__next__()
        except AttributeError:
            try:
                return iterator.next()
            except AttributeError:
                raise TypeError("'{0}' object is not an iterator".format(
                                           iterator.__class__.__name__))
    except StopIteration as e:
        if default is _SENTINEL:
            raise e
        else:
            return default


__all__ = ['newnext']
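The ``__next__``-then-``next`` dispatch above can be demonstrated against a Py2-style iterator; a condensed, standalone sketch (the ``Legacy`` class is illustrative):

```python
_SENTINEL = object()

def newnext(iterator, default=_SENTINEL):
    # Condensed copy of the dispatch above: prefer __next__(), fall back
    # to the old-style next() method, translate exhaustion to `default`.
    try:
        try:
            return iterator.__next__()
        except AttributeError:
            return iterator.next()
    except StopIteration:
        if default is _SENTINEL:
            raise
        return default

class Legacy(object):
    # Py2-style iterator: defines next(), not __next__.
    def __init__(self):
        self.items = [1, 2]
    def next(self):
        if not self.items:
            raise StopIteration
        return self.items.pop(0)

it = Legacy()
assert newnext(it) == 1
assert newnext(it) == 2
assert newnext(it, 'done') == 'done'
```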
    >>> Decimal.from_float(-0.0)
    Decimal('-0')

    rN)�_dec_from_tripleg�?�cSs$|dkrttt|���dSdSdS)Nr�)�len�bin�abs)rrrr�
bit_length_sz!from_float_26.<locals>.bit_length�)�math�decimalrrr�longr�isinf�isnanr�copysignr!�as_integer_ratio�str)	�fZ_mathr�sign�nrr"�krrrrr=sr)N)
�__doc__�
__future__rZfuture.utilsrrrr%rrrr�__all__rrrr�<module>s
0,PK�Au\5����2future/builtins/__pycache__/newnext.cpython-39.pycnu�[���a

��?h��@s$dZeZe�Zefdd�ZdgZdS)a�
This module provides a newnext() function in Python 2 that mimics the
behaviour of ``next()`` in Python 3, falling back to Python 2's behaviour for
compatibility if this fails.

``newnext(iterator)`` calls the iterator's ``__next__()`` method if it exists. If this
doesn't exist, it falls back to calling a ``next()`` method.

For example:

    >>> class Odds(object):
    ...     def __init__(self, start=1):
    ...         self.value = start - 2
    ...     def __next__(self):                 # note the Py3 interface
    ...         self.value += 2
    ...         return self.value
    ...     def __iter__(self):
    ...         return self
    ...
    >>> iterator = Odds()
    >>> next(iterator)
    1
    >>> next(iterator)
    3

If you are defining your own custom iterator class as above, it is preferable
to explicitly decorate the class with the @implements_iterator decorator from
``future.utils`` as follows:

    >>> @implements_iterator
    ... class Odds(object):
    ...     # etc
    ...     pass

This next() function is primarily for consuming iterators defined in Python 3
code elsewhere that we would like to run on Python 2 or 3.
cCs�zZz|��WWStyVz|��WYWStyPtd�|jj���Yn0Yn0WnBty�}z*|turx|�n|WYd}~SWYd}~n
d}~00dS)z�
    next(iterator[, default])

    Return the next item from the iterator. If default is given and the iterator
    is exhausted, it is returned instead of raising StopIteration.
    z'{0}' object is not an iteratorN)	�__next__�AttributeError�next�	TypeError�format�	__class__�__name__�
StopIteration�	_SENTINEL)�iterator�default�e�r
�A/usr/local/lib/python3.9/site-packages/future/builtins/newnext.py�newnext+s�rN)�__doc__rZ
_builtin_next�objectr	r�__all__r
r
r
r�<module>s&PK�Au\��Ï6future/builtins/__pycache__/new_min_max.cpython-39.pycnu�[���a

��?h��@s^ddlZddlmZejr,ddlmZmZnddl	mZmZe
�Zdd�Zdd�Z
dd	�ZdS)
�N)�utils)�max�mincOsttg|�Ri|��S�N)�new_min_max�_builtin_min��args�kwargs�r�E/usr/local/lib/python3.9/site-packages/future/builtins/new_min_max.py�newminsr
cOsttg|�Ri|��Sr)r�_builtin_maxrrrr�newmaxsrcOs4|��D]"\}}|tddg�vrtd|��qt|�dkr<t�t|�dkr\|�dt�tur\t�t|�dkr�t|d�}zt|�}Wn@ty�|�dt�tur�|�d�YSt	d�
|j���Yn0t�
|g|�}|�d�dur�|||�d�d�S||�St|�dk�r0|�d�du�r(|||�d�d�S||�SdS)	z�
    To support the argument "default" introduced in python 3.4 for min and max
    :param _builtin_func: builtin min or builtin max
    :param args:
    :param kwargs:
    :return: returns the min or max based on the arguments passed
    �key�defaultzIllegal argument %sr�z{}() arg is an empty sequenceN)r)�items�set�	TypeError�len�get�	_SENTINEL�iter�next�
StopIteration�
ValueError�format�__name__�	itertools�chain)Z
_builtin_funcr	r
r�_�iterator�firstrrrrs.	r)r�futurer�PY2�__builtin__rrrr�builtins�objectrr
future/builtins/newsuper.py
'''
This module provides a newsuper() function in Python 2 that mimics the
behaviour of super() in Python 3. It is designed to be used as follows:

    from __future__ import division, absolute_import, print_function
    from future.builtins import super

And then, for example:

    class VerboseList(list):
        def append(self, item):
            print('Adding an item')
            super().append(item)        # new simpler super() function

Importing this module on Python 3 has no effect.

This is based on (i.e. almost identical to) Ryan Kelly's magicsuper
module here:

    https://github.com/rfk/magicsuper.git

Excerpts from Ryan's docstring:

  "Of course, you can still explicitly pass in the arguments if you want
  to do something strange.  Sometimes you really do want that, e.g. to
  skip over some classes in the method resolution order.

  "How does it work?  By inspecting the calling frame to determine the
  function object being executed and the object on which it's being
  called, and then walking the object's __mro__ chain to find out where
  that function was defined.  Yuck, but it seems to work..."
'''

from __future__ import absolute_import
import sys
from types import FunctionType

from future.utils import PY3, PY26


_builtin_super = super

_SENTINEL = object()

def newsuper(typ=_SENTINEL, type_or_obj=_SENTINEL, framedepth=1):
    '''Like builtin super(), but capable of magic.

    This acts just like the builtin super() function, but if called
    without any arguments it attempts to infer them at runtime.
    '''
    #  Infer the correct call if used without arguments.
    if typ is _SENTINEL:
        # We'll need to do some frame hacking.
        f = sys._getframe(framedepth)

        try:
            # Get the function's first positional argument.
            type_or_obj = f.f_locals[f.f_code.co_varnames[0]]
        except (IndexError, KeyError,):
            raise RuntimeError('super() used in a function with no args')

        try:
            typ = find_owner(type_or_obj, f.f_code)
        except (AttributeError, RuntimeError, TypeError):
            # see issues #160, #267
            try:
                typ = find_owner(type_or_obj.__class__, f.f_code)
            except AttributeError:
                raise RuntimeError('super() used with an old-style class')
            except TypeError:
                raise RuntimeError('super() called outside a method')

    #  Dispatch to builtin super().
    if type_or_obj is not _SENTINEL:
        return _builtin_super(typ, type_or_obj)
    return _builtin_super(typ)


def find_owner(cls, code):
    '''Find the class that owns the currently-executing method.
    '''
    for typ in cls.__mro__:
        for meth in typ.__dict__.values():
            # Drill down through any wrappers to the underlying func.
            # This handles e.g. classmethod() and staticmethod().
            try:
                while not isinstance(meth,FunctionType):
                    if isinstance(meth, property):
                        # Calling __get__ on the property will invoke
                        # user code which might throw exceptions or have
                        # side effects
                        meth = meth.fget
                    else:
                        try:
                            meth = meth.__func__
                        except AttributeError:
                            meth = meth.__get__(cls, typ)
            except (AttributeError, TypeError):
                continue
            if meth.func_code is code:
                return typ   # Aha!  Found you.
        #  Not found! Move onto the next class in MRO.

    raise TypeError


def superm(*args, **kwds):
    f = sys._getframe(1)
    nm = f.f_code.co_name
    return getattr(newsuper(framedepth=2),nm)(*args, **kwds)


__all__ = ['newsuper']
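The frame-inspection trick described in the docstring above (read the calling frame's first positional argument, then walk the MRO to find the defining class) can be observed on Python 3 as well. A minimal sketch; the ``Base``/``Child`` classes are hypothetical and exist only for illustration:

```python
import sys


class Base:
    def greet(self):
        return "base"


class Child(Base):
    def greet(self):
        # Step 1 of newsuper(): read the calling frame and recover the
        # method's first positional argument (conventionally `self`).
        f = sys._getframe(0)
        inferred = f.f_locals[f.f_code.co_varnames[0]]
        assert inferred is self

        # Step 2: walk the MRO looking for the class whose namespace
        # holds this code object, as find_owner() does.
        owner = None
        for typ in type(inferred).__mro__:
            if any(getattr(m, "__code__", None) is f.f_code
                   for m in vars(typ).values()):
                owner = typ
                break
        assert owner is Child
        return "child+" + super().greet()


assert Child().greet() == "child+base"
```

On Python 3 the zero-argument ``super()`` call at the end is native; the sketch only demonstrates that the inference steps newsuper() performs reach the same answer.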
future/builtins/newround.py
"""
``python-future``: pure Python implementation of Python 3 round().
"""

from __future__ import division
from future.utils import PYPY, PY26, bind_method

# Use the decimal module for simplicity of implementation (and
# hopefully correctness).
from decimal import Decimal, ROUND_HALF_EVEN


def newround(number, ndigits=None):
    """
    See Python 3 documentation: uses Banker's Rounding.

    Delegates to the __round__ method if for some reason this exists.

    If not, rounds a number to a given precision in decimal digits (default
    0 digits). This returns an int when called with one argument,
    otherwise the same type as the number. ndigits may be negative.

    See the test_round method in future/tests/test_builtins.py for
    examples.
    """
    return_int = False
    if ndigits is None:
        return_int = True
        ndigits = 0
    if hasattr(number, '__round__'):
        return number.__round__(ndigits)

    exponent = Decimal('10') ** (-ndigits)

    # Work around issue #24: round() breaks on PyPy with NumPy's types
    # Also breaks on CPython with NumPy's specialized int types like uint64
    if 'numpy' in repr(type(number)):
        number = float(number)

    if isinstance(number, Decimal):
        d = number
    else:
        if not PY26:
            d = Decimal.from_float(number)
        else:
            d = from_float_26(number)

    if ndigits < 0:
        result = newround(d / exponent) * exponent
    else:
        result = d.quantize(exponent, rounding=ROUND_HALF_EVEN)

    if return_int:
        return int(result)
    else:
        return float(result)


### From Python 2.7's decimal.py. Only needed to support Py2.6:

def from_float_26(f):
    """Converts a float to a decimal number, exactly.

    Note that Decimal.from_float(0.1) is not the same as Decimal('0.1').
    Since 0.1 is not exactly representable in binary floating point, the
    value is stored as the nearest representable value which is
    0x1.999999999999ap-4.  The exact equivalent of the value in decimal
    is 0.1000000000000000055511151231257827021181583404541015625.

    >>> Decimal.from_float(0.1)
    Decimal('0.1000000000000000055511151231257827021181583404541015625')
    >>> Decimal.from_float(float('nan'))
    Decimal('NaN')
    >>> Decimal.from_float(float('inf'))
    Decimal('Infinity')
    >>> Decimal.from_float(-float('inf'))
    Decimal('-Infinity')
    >>> Decimal.from_float(-0.0)
    Decimal('-0')

    """
    import math as _math
    from decimal import _dec_from_triple    # only available on Py2.6 and Py2.7 (not 3.3)

    if isinstance(f, (int, long)):        # handle integer inputs
        return Decimal(f)
    if _math.isinf(f) or _math.isnan(f):  # raises TypeError if not a float
        return Decimal(repr(f))
    if _math.copysign(1.0, f) == 1.0:
        sign = 0
    else:
        sign = 1
    n, d = abs(f).as_integer_ratio()
    # int.bit_length() method doesn't exist on Py2.6:
    def bit_length(d):
        if d != 0:
            return len(bin(abs(d))) - 2
        else:
            return 0
    k = bit_length(d) - 1
    result = _dec_from_triple(sign, str(n*5**k), -k)
    return result


__all__ = ['newround']
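The core step of ``newround()`` above, quantizing against ``10 ** -ndigits`` with ``ROUND_HALF_EVEN``, can be exercised in isolation. A condensed sketch for ``ndigits >= 0`` only (it uses ``repr()`` instead of ``Decimal.from_float`` for brevity, so it is not byte-for-byte the same as newround, which also handles negative ndigits by recursing):

```python
from decimal import Decimal, ROUND_HALF_EVEN


def round_half_even(number, ndigits=0):
    # Same core step as newround(): quantize to 10**-ndigits, ties-to-even.
    exponent = Decimal("10") ** (-ndigits)
    return float(Decimal(repr(number)).quantize(exponent,
                                                rounding=ROUND_HALF_EVEN))


# Ties go to the even neighbour:
assert round_half_even(2.5) == 2.0
assert round_half_even(3.5) == 4.0
# Going through repr() avoids the binary-float surprise for this case:
assert round_half_even(2.675, 2) == 2.68
```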
[compiled bytecode: future/__pycache__/__init__.cpython-39.pyc; binary content omitted. Recoverable module docstring:]

future: Easy, safe support for Python 2/3 compatibility
=======================================================

``future`` is the missing compatibility layer between Python 2 and Python
3. It allows you to use a single, clean Python 3.x-compatible codebase to
support both Python 2 and Python 3 with minimal overhead.

It is designed to be used as follows::

    from __future__ import (absolute_import, division,
                            print_function, unicode_literals)
    from builtins import (
             bytes, dict, int, list, object, range, str,
             ascii, chr, hex, input, next, oct, open,
             pow, round, super,
             filter, map, zip)

followed by predominantly standard, idiomatic Python 3 code that then runs
similarly on Python 2.6/2.7 and Python 3.3+.

The imports have no effect on Python 3. On Python 2, they shadow the
corresponding builtins, which normally have different semantics on Python 3
versus 2, to provide their Python 3 semantics.


Standard library reorganization
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``future`` supports the standard library reorganization (PEP 3108) through the
following Py3 interfaces:

    >>> # Top-level packages with Py3 names provided on Py2:
    >>> import html.parser
    >>> import queue
    >>> import tkinter.dialog
    >>> import xmlrpc.client
    >>> # etc.

    >>> # Aliases provided for extensions to existing Py2 module names:
    >>> from future.standard_library import install_aliases
    >>> install_aliases()

    >>> from collections import Counter, OrderedDict   # backported to Py2.6
    >>> from collections import UserDict, UserList, UserString
    >>> import urllib.request
    >>> from itertools import filterfalse, zip_longest
    >>> from subprocess import getoutput, getstatusoutput


Automatic conversion
--------------------

An included script called `futurize
<https://python-future.org/automatic_conversion.html>`_ aids in converting
code (from either Python 2 or Python 3) to code compatible with both
platforms. It is similar to ``python-modernize`` but goes further in
providing Python 3 compatibility through the use of the backported types
and builtin functions in ``future``.


Documentation
-------------

See: https://python-future.org


Credits
-------

:Author:  Ed Schofield, Jordan M. Adler, et al
:Sponsor: Python Charmers: https://pythoncharmers.com
:Others:  See docs/credits.rst or https://python-future.org/credits.html


Licensing
---------
Copyright 2013-2024 Python Charmers, Australia.
The software is distributed under an MIT licence. See LICENSE.txt.

[compiled bytecode footer; recoverable module constants:]

__title__ = 'future'
__author__ = 'Ed Schofield'
__license__ = 'MIT'
__copyright__ = 'Copyright 2013-2024 Python Charmers (https://pythoncharmers.com)'
__version__ = '%d.%d.%d%s' % (__ver_major__, __ver_minor__, __ver_patch__, __ver_sub__)

[the numeric __ver_major__/__ver_minor__/__ver_patch__/__ver_sub__ values are stored as binary constants and are not recoverable from this dump]

future/types/newopen.py
"""
A substitute for the Python 3 open() function.

Note that io.open() is more complete but maybe slower. Even so, the
completeness may be a better default. TODO: compare these
"""

_builtin_open = open

class newopen(object):
    """Wrapper providing key part of Python 3 open() interface.

    From IPython's py3compat.py module. License: BSD.
    """
    def __init__(self, fname, mode="r", encoding="utf-8"):
        self.f = _builtin_open(fname, mode)
        self.enc = encoding

    def write(self, s):
        return self.f.write(s.encode(self.enc))

    def read(self, size=-1):
        return self.f.read(size).decode(self.enc)

    def close(self):
        return self.f.close()

    def __enter__(self):
        return self

    def __exit__(self, etype, value, traceback):
        self.f.close()
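The wrapper above is essentially ``io.open`` with a fixed encoding, encoding on write and decoding on read; the equivalence can be sketched with a round-trip (the temporary path is created only for this demo):

```python
import io
import os
import tempfile

# Write and read back non-ASCII text, as newopen does via encode()/decode().
path = os.path.join(tempfile.mkdtemp(), "demo.txt")
with io.open(path, "w", encoding="utf-8") as f:
    f.write(u"caf\u00e9")
with io.open(path, "r", encoding="utf-8") as f:
    assert f.read() == u"caf\u00e9"
os.remove(path)
```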
future/types/newstr.py
"""
This module redefines ``str`` on Python 2.x to be a subclass of the Py2
``unicode`` type that behaves like the Python 3.x ``str``.

The main differences between ``newstr`` and Python 2.x's ``unicode`` type are
the stricter type-checking and absence of a `u''` prefix in the representation.

It is designed to be used together with the ``unicode_literals`` import
as follows:

    >>> from __future__ import unicode_literals
    >>> from builtins import str, isinstance

On Python 3.x and normally on Python 2.x, these expressions hold

    >>> str('blah') is 'blah'
    True
    >>> isinstance('blah', str)
    True

However, on Python 2.x, with this import:

    >>> from __future__ import unicode_literals

the same expressions are False:

    >>> str('blah') is 'blah'
    False
    >>> isinstance('blah', str)
    False

This module is designed to be imported together with ``unicode_literals`` on
Python 2 to bring the meaning of ``str`` back into alignment with unprefixed
string literals (i.e. ``unicode`` subclasses).

Note that ``str()`` (and ``print()``) would then normally call the
``__unicode__`` method on objects in Python 2. To define string
representations of your objects portably across Py3 and Py2, use the
:func:`python_2_unicode_compatible` decorator in  :mod:`future.utils`.

"""

from numbers import Number

from future.utils import PY3, istext, with_metaclass, isnewbytes
from future.types import no, issubset
from future.types.newobject import newobject


if PY3:
    # We'll probably never use newstr on Py3 anyway...
    unicode = str
    from collections.abc import Iterable
else:
    from collections import Iterable


class BaseNewStr(type):
    def __instancecheck__(cls, instance):
        if cls == newstr:
            return isinstance(instance, unicode)
        else:
            return issubclass(instance.__class__, cls)


class newstr(with_metaclass(BaseNewStr, unicode)):
    """
    A backport of the Python 3 str object to Py2
    """
    no_convert_msg = "Can't convert '{0}' object to str implicitly"

    def __new__(cls, *args, **kwargs):
        """
        From the Py3 str docstring:

          str(object='') -> str
          str(bytes_or_buffer[, encoding[, errors]]) -> str

          Create a new string object from the given object. If encoding or
          errors is specified, then the object must expose a data buffer
          that will be decoded using the given encoding and error handler.
          Otherwise, returns the result of object.__str__() (if defined)
          or repr(object).
          encoding defaults to sys.getdefaultencoding().
          errors defaults to 'strict'.

        """
        if len(args) == 0:
            return super(newstr, cls).__new__(cls)
        # Special case: If someone requests str(str(u'abc')), return the same
        # object (same id) for consistency with Py3.3. This is not true for
        # other objects like list or dict.
        elif type(args[0]) == newstr and cls == newstr:
            return args[0]
        elif isinstance(args[0], unicode):
            value = args[0]
        elif isinstance(args[0], bytes):   # i.e. Py2 bytes or newbytes
            if 'encoding' in kwargs or len(args) > 1:
                value = args[0].decode(*args[1:], **kwargs)
            else:
                value = args[0].__str__()
        else:
            value = args[0]
        return super(newstr, cls).__new__(cls, value)

    def __repr__(self):
        """
        Without the u prefix
        """

        value = super(newstr, self).__repr__()
        # assert value[0] == u'u'
        return value[1:]

    def __getitem__(self, y):
        """
        Warning: Python <= 2.7.6 has a bug that causes this method never to be called
        when y is a slice object. Therefore the type of newstr()[:2] is wrong
        (unicode instead of newstr).
        """
        return newstr(super(newstr, self).__getitem__(y))

    def __contains__(self, key):
        errmsg = "'in <string>' requires string as left operand, not {0}"
        # Don't use isinstance() here because we only want to catch
        # newstr, not Python 2 unicode:
        if type(key) == newstr:
            newkey = key
        elif isinstance(key, unicode) or isinstance(key, bytes) and not isnewbytes(key):
            newkey = newstr(key)
        else:
            raise TypeError(errmsg.format(type(key)))
        return issubset(list(newkey), list(self))

    @no('newbytes')
    def __add__(self, other):
        return newstr(super(newstr, self).__add__(other))

    @no('newbytes')
    def __radd__(self, left):
        " left + self "
        try:
            return newstr(left) + self
        except:
            return NotImplemented

    def __mul__(self, other):
        return newstr(super(newstr, self).__mul__(other))

    def __rmul__(self, other):
        return newstr(super(newstr, self).__rmul__(other))

    def join(self, iterable):
        errmsg = 'sequence item {0}: expected unicode string, found bytes'
        for i, item in enumerate(iterable):
            # Here we use type() rather than isinstance() because
            # __instancecheck__ is being overridden. E.g.
            # isinstance(b'abc', newbytes) is True on Py2.
            if isnewbytes(item):
                raise TypeError(errmsg.format(i))
        # Support use as a staticmethod: str.join('-', ['a', 'b'])
        if type(self) == newstr:
            return newstr(super(newstr, self).join(iterable))
        else:
            return newstr(super(newstr, newstr(self)).join(iterable))

    @no('newbytes')
    def find(self, sub, *args):
        return super(newstr, self).find(sub, *args)

    @no('newbytes')
    def rfind(self, sub, *args):
        return super(newstr, self).rfind(sub, *args)

    @no('newbytes', (1, 2))
    def replace(self, old, new, *args):
        return newstr(super(newstr, self).replace(old, new, *args))

    def decode(self, *args):
        raise AttributeError("decode method has been disabled in newstr")

    def encode(self, encoding='utf-8', errors='strict'):
        """
        Returns bytes

        Encode S using the codec registered for encoding. Default encoding
        is 'utf-8'. errors may be given to set a different error
        handling scheme. Default is 'strict' meaning that encoding errors raise
        a UnicodeEncodeError. Other possible values are 'ignore', 'replace' and
        'xmlcharrefreplace' as well as any other name registered with
        codecs.register_error that can handle UnicodeEncodeErrors.
        """
        from future.types.newbytes import newbytes
        # Py2 unicode.encode() takes encoding and errors as optional parameter,
        # not keyword arguments as in Python 3 str.

        # For the surrogateescape error handling mechanism, the
        # codecs.register_error() function seems to be inadequate for an
        # implementation of it when encoding. (Decoding seems fine, however.)
        # For example, in the case of
        #     u'\udcc3'.encode('ascii', 'surrogateescape_handler')
        # after registering the ``surrogateescape_handler`` function in
        # future.utils.surrogateescape, both Python 2.x and 3.x raise an
        # exception anyway after the function is called because the unicode
        # string it has to return isn't encodable strictly as ASCII.

        if errors == 'surrogateescape':
            if encoding == 'utf-16':
                # Known to fail here. See test_encoding_works_normally()
                raise NotImplementedError('FIXME: surrogateescape handling is '
                                          'not yet implemented properly')
            # Encode char by char, building up list of byte-strings
            mybytes = []
            for c in self:
                code = ord(c)
                if 0xD800 <= code <= 0xDCFF:
                    mybytes.append(newbytes([code - 0xDC00]))
                else:
                    mybytes.append(c.encode(encoding=encoding))
            return newbytes(b'').join(mybytes)
        return newbytes(super(newstr, self).encode(encoding, errors))

    @no('newbytes', 1)
    def startswith(self, prefix, *args):
        if isinstance(prefix, Iterable):
            for thing in prefix:
                if isnewbytes(thing):
                    raise TypeError(self.no_convert_msg.format(type(thing)))
        return super(newstr, self).startswith(prefix, *args)

    @no('newbytes', 1)
    def endswith(self, prefix, *args):
        # Note we need the decorator above as well as the isnewbytes()
        # check because prefix can be either a bytes object or e.g. a
        # tuple of possible prefixes. (If it's a bytes object, each item
        # in it is an int.)
        if isinstance(prefix, Iterable):
            for thing in prefix:
                if isnewbytes(thing):
                    raise TypeError(self.no_convert_msg.format(type(thing)))
        return super(newstr, self).endswith(prefix, *args)

    @no('newbytes', 1)
    def split(self, sep=None, maxsplit=-1):
        # Py2 unicode.split() takes maxsplit as an optional parameter,
        # not as a keyword argument as in Python 3 str.
        parts = super(newstr, self).split(sep, maxsplit)
        return [newstr(part) for part in parts]

    @no('newbytes', 1)
    def rsplit(self, sep=None, maxsplit=-1):
        # Py2 unicode.rsplit() takes maxsplit as an optional parameter,
        # not as a keyword argument as in Python 3 str.
        parts = super(newstr, self).rsplit(sep, maxsplit)
        return [newstr(part) for part in parts]

    @no('newbytes', 1)
    def partition(self, sep):
        parts = super(newstr, self).partition(sep)
        return tuple(newstr(part) for part in parts)

    @no('newbytes', 1)
    def rpartition(self, sep):
        parts = super(newstr, self).rpartition(sep)
        return tuple(newstr(part) for part in parts)

    @no('newbytes', 1)
    def index(self, sub, *args):
        """
        Like newstr.find() but raise ValueError when the substring is not
        found.
        """
        pos = self.find(sub, *args)
        if pos == -1:
            raise ValueError('substring not found')
        return pos

    def splitlines(self, keepends=False):
        """
        S.splitlines(keepends=False) -> list of strings

        Return a list of the lines in S, breaking at line boundaries.
        Line breaks are not included in the resulting list unless keepends
        is given and true.
        """
        # Py2 unicode.splitlines() takes keepends as an optional parameter,
        # not as a keyword argument as in Python 3 str.
        parts = super(newstr, self).splitlines(keepends)
        return [newstr(part) for part in parts]

    def __eq__(self, other):
        if (isinstance(other, unicode) or
            isinstance(other, bytes) and not isnewbytes(other)):
            return super(newstr, self).__eq__(other)
        else:
            return NotImplemented

    def __hash__(self):
        if (isinstance(self, unicode) or
            isinstance(self, bytes) and not isnewbytes(self)):
            return super(newstr, self).__hash__()
        else:
            raise NotImplementedError()

    def __ne__(self, other):
        if (isinstance(other, unicode) or
            isinstance(other, bytes) and not isnewbytes(other)):
            return super(newstr, self).__ne__(other)
        else:
            return True

    unorderable_err = 'unorderable types: str() and {0}'

    def __lt__(self, other):
        if (isinstance(other, unicode) or
            isinstance(other, bytes) and not isnewbytes(other)):
            return super(newstr, self).__lt__(other)
        raise TypeError(self.unorderable_err.format(type(other)))

    def __le__(self, other):
        if (isinstance(other, unicode) or
            isinstance(other, bytes) and not isnewbytes(other)):
            return super(newstr, self).__le__(other)
        raise TypeError(self.unorderable_err.format(type(other)))

    def __gt__(self, other):
        if (isinstance(other, unicode) or
            isinstance(other, bytes) and not isnewbytes(other)):
            return super(newstr, self).__gt__(other)
        raise TypeError(self.unorderable_err.format(type(other)))

    def __ge__(self, other):
        if (isinstance(other, unicode) or
            isinstance(other, bytes) and not isnewbytes(other)):
            return super(newstr, self).__ge__(other)
        raise TypeError(self.unorderable_err.format(type(other)))

    def __getattribute__(self, name):
        """
        A trick to cause the ``hasattr`` builtin-fn to return False for
        the 'decode' method on Py2.
        """
        if name in ['decode', u'decode']:
            raise AttributeError("decode method has been disabled in newstr")
        return super(newstr, self).__getattribute__(name)

    def __native__(self):
        """
        A hook for the future.utils.native() function.
        """
        return unicode(self)

    @staticmethod
    def maketrans(x, y=None, z=None):
        """
        Return a translation table usable for str.translate().

        If there is only one argument, it must be a dictionary mapping Unicode
        ordinals (integers) or characters to Unicode ordinals, strings or None.
        Character keys will be then converted to ordinals.
        If there are two arguments, they must be strings of equal length, and
        in the resulting dictionary, each character in x will be mapped to the
        character at the same position in y. If there is a third argument, it
        must be a string, whose characters will be mapped to None in the result.
        """

        if y is None:
            assert z is None
            if not isinstance(x, dict):
                raise TypeError('if you give only one argument to maketrans it must be a dict')
            result = {}
            for (key, value) in x.items():
                if len(key) > 1:
                    raise ValueError('keys in translate table must be strings or integers')
                result[ord(key)] = value
        else:
            if not (isinstance(x, unicode) and isinstance(y, unicode)):
                raise TypeError('x and y must be unicode strings')
            if not len(x) == len(y):
                raise ValueError('the first two maketrans arguments must have equal length')
            result = {}
            for (xi, yi) in zip(x, y):
                if len(xi) > 1:
                    raise ValueError('keys in translate table must be strings or integers')
                result[ord(xi)] = ord(yi)

        if z is not None:
            for char in z:
                result[ord(char)] = None
        return result

    def translate(self, table):
        """
        S.translate(table) -> str

        Return a copy of the string S, where all characters have been mapped
        through the given translation table, which must be a mapping of
        Unicode ordinals to Unicode ordinals, strings, or None.
        Unmapped characters are left untouched. Characters mapped to None
        are deleted.
        """
        l = []
        for c in self:
            if ord(c) in table:
                val = table[ord(c)]
                if val is None:
                    continue
                elif isinstance(val, unicode):
                    l.append(val)
                else:
                    l.append(chr(val))
            else:
                l.append(c)
        return ''.join(l)

    def isprintable(self):
        raise NotImplementedError('fixme')

    def isidentifier(self):
        raise NotImplementedError('fixme')

    def format_map(self):
        raise NotImplementedError('fixme')


__all__ = ['newstr']
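On Python 3 the native ``str`` already enforces the str/bytes separation that ``newstr`` backports, so the type checks scattered through the methods above correspond to behaviour that can be verified directly (illustrative only):

```python
s = str(u"ABCD")

# Mixing bytes into str operations raises TypeError, as in newstr:
for op in (lambda: s.startswith(b"A"),
           lambda: s.find(b"A"),
           lambda: "-".join([b"a", b"b"])):
    try:
        op()
        raise AssertionError("expected TypeError")
    except TypeError:
        pass

# maketrans/translate, matching the staticmethod implemented above:
table = str.maketrans("AB", "ab", "D")
assert s.translate(table) == "abC"
```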
future/types/__init__.py
"""
This module contains backports the data types that were significantly changed
in the transition from Python 2 to Python 3.

- an implementation of Python 3's bytes object (pure Python subclass of
  Python 2's builtin 8-bit str type)
- an implementation of Python 3's str object (pure Python subclass of
  Python 2's builtin unicode type)
- a backport of the range iterator from Py3 with slicing support

It is used as follows::

    from __future__ import division, absolute_import, print_function
    from builtins import bytes, dict, int, range, str

to bring in the new semantics for these functions from Python 3. And
then, for example::

    b = bytes(b'ABCD')
    assert list(b) == [65, 66, 67, 68]
    assert repr(b) == "b'ABCD'"
    assert [65, 66] in b

    # These raise TypeErrors:
    # b + u'EFGH'
    # b.split(u'B')
    # bytes(b',').join([u'Fred', u'Bill'])


    s = str(u'ABCD')

    # These raise TypeErrors:
    # s.join([b'Fred', b'Bill'])
    # s.startswith(b'A')
    # b'B' in s
    # s.find(b'A')
    # s.replace(u'A', b'a')

    # This raises an AttributeError:
    # s.decode('utf-8')

    assert repr(s) == 'ABCD'      # consistent repr with Py3 (no u prefix)


    for i in range(10**11)[:10]:
        pass

and::

    class VerboseList(list):
        def append(self, item):
            print('Adding an item')
            super().append(item)        # new simpler super() function

For more information:
---------------------

- future.types.newbytes
- future.types.newdict
- future.types.newint
- future.types.newobject
- future.types.newrange
- future.types.newstr


Notes
=====

range()
-------
``range`` is a custom class that backports the slicing behaviour from
Python 3 (based on the ``xrange`` module by Dan Crosta). See the
``newrange`` module docstring for more details.


super()
-------
``super()`` is based on Ryan Kelly's ``magicsuper`` module. See the
``newsuper`` module docstring for more details.


round()
-------
Python 3 modifies the behaviour of ``round()`` to use "Banker's Rounding".
See http://stackoverflow.com/a/10825998. See the ``newround`` module
docstring for more details.

"""

from __future__ import absolute_import, division, print_function

import functools
from numbers import Integral

from future import utils


# Some utility functions to enforce strict type-separation of unicode str and
# bytes:
def disallow_types(argnums, disallowed_types):
    """
    A decorator that raises a TypeError if any of the given numbered
    arguments is of the corresponding given type (e.g. bytes or unicode
    string).

    For example:

        @disallow_types([0, 1], [unicode, bytes])
        def f(a, b):
            pass

    raises a TypeError when f is called if a unicode object is passed as
    `a` or a bytes object is passed as `b`.

    This also skips over keyword arguments, so

        @disallow_types([0, 1], [unicode, bytes])
        def g(a, b=None):
            pass

    doesn't raise an exception if g is called with only one argument a,
    e.g.:

        g(b'Byte string')

    Example use:

    >>> class newbytes(object):
    ...     @disallow_types([1], [unicode])
    ...     def __add__(self, other):
    ...          pass

    >>> newbytes('1234') + u'1234'      #doctest: +IGNORE_EXCEPTION_DETAIL
    Traceback (most recent call last):
      ...
    TypeError: can't concat 'bytes' to (unicode) str
    """

    def decorator(function):

        @functools.wraps(function)
        def wrapper(*args, **kwargs):
            # These imports are just for this decorator, and are defined here
            # to prevent circular imports:
            from .newbytes import newbytes
            from .newint import newint
            from .newstr import newstr

            errmsg = "argument can't be {0}"
            for (argnum, mytype) in zip(argnums, disallowed_types):
                # Handle the case where the type is passed as a string like 'newbytes'.
                if isinstance(mytype, str) or isinstance(mytype, bytes):
                    mytype = locals()[mytype]

                # Only restrict keyword arguments if they are passed:
                if len(args) <= argnum:
                    break

                # Here we use type() rather than isinstance() because
                # __instancecheck__ is being overridden. E.g.
                # isinstance(b'abc', newbytes) is True on Py2.
                if type(args[argnum]) == mytype:
                    raise TypeError(errmsg.format(mytype))

            return function(*args, **kwargs)
        return wrapper
    return decorator


def no(mytype, argnums=(1,)):
    """
    A shortcut for the disallow_types decorator that disallows only one type
    (in any position in argnums).

    Example use:

    >>> class newstr(object):
    ...     @no('bytes')
    ...     def __add__(self, other):
    ...          pass

    >>> newstr(u'1234') + b'1234'     #doctest: +IGNORE_EXCEPTION_DETAIL
    Traceback (most recent call last):
      ...
    TypeError: argument can't be bytes

    The object can also be passed directly, but passing the string helps
    to prevent circular import problems.
    """
    if isinstance(argnums, Integral):
        argnums = (argnums,)
    disallowed_types = [mytype] * len(argnums)
    return disallow_types(argnums, disallowed_types)
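
A minimal standalone sketch of the same positional-argument type check, using plain Py3 `bytes`/`str` in place of the backported types (the names `reject_type` and `concat` are illustrative only):

```python
import functools

def reject_type(argnum, bad_type):
    # Raise TypeError if positional argument `argnum` has exactly type
    # `bad_type`; keyword or missing arguments are skipped, as in
    # disallow_types above.
    def decorator(function):
        @functools.wraps(function)
        def wrapper(*args, **kwargs):
            if len(args) > argnum and type(args[argnum]) == bad_type:
                raise TypeError("argument can't be {0}".format(bad_type))
            return function(*args, **kwargs)
        return wrapper
    return decorator

@reject_type(1, bytes)
def concat(a, b):
    return a + b

assert concat('a', 'b') == 'ab'   # fine
# concat('a', b'b')               # would raise TypeError
```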


def issubset(list1, list2):
    """
    Examples:

    >>> issubset([], [65, 66, 67])
    True
    >>> issubset([65], [65, 66, 67])
    True
    >>> issubset([65, 66], [65, 66, 67])
    True
    >>> issubset([65, 67], [65, 66, 67])
    False
    """
    n = len(list1)
    for startpos in range(len(list2) - n + 1):
        if list2[startpos:startpos+n] == list1:
            return True
    return False
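
Despite its name, `issubset` above tests for a *contiguous* subsequence, not set containment; an equivalent Py3 sketch:

```python
def contains_run(needle, haystack):
    # True if `needle` occurs as a contiguous slice of `haystack`
    n = len(needle)
    return any(haystack[i:i + n] == needle for i in range(len(haystack) - n + 1))

assert contains_run([65, 66], [65, 66, 67])
assert not contains_run([65, 67], [65, 66, 67])   # present, but not contiguous
assert contains_run([], [1, 2, 3])                # the empty run always matches
```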


if utils.PY3:
    import builtins
    bytes = builtins.bytes
    dict = builtins.dict
    int = builtins.int
    list = builtins.list
    object = builtins.object
    range = builtins.range
    str = builtins.str

    # The identity mapping
    newtypes = {bytes: bytes,
                dict: dict,
                int: int,
                list: list,
                object: object,
                range: range,
                str: str}

    __all__ = ['newtypes']

else:

    from .newbytes import newbytes
    from .newdict import newdict
    from .newint import newint
    from .newlist import newlist
    from .newrange import newrange
    from .newobject import newobject
    from .newstr import newstr

    newtypes = {bytes: newbytes,
                dict: newdict,
                int: newint,
                long: newint,
                list: newlist,
                object: newobject,
                range: newrange,
                str: newbytes,
                unicode: newstr}

    __all__ = ['newbytes', 'newdict', 'newint', 'newlist', 'newrange', 'newstr', 'newtypes']
# File: future/types/newrange.py
"""
Nearly identical to xrange.py, by Dan Crosta, from

    https://github.com/dcrosta/xrange.git

This is included here in the ``future`` package rather than pointed to as
a dependency because there is no package for ``xrange`` on PyPI. It is
also tweaked to appear like a regular Python 3 ``range`` object rather
than a Python 2 xrange.

From Dan Crosta's README:

    "A pure-Python implementation of Python 2.7's xrange built-in, with
    some features backported from the Python 3.x range built-in (which
    replaced xrange) in that version."

    Read more at
        https://late.am/post/2012/06/18/what-the-heck-is-an-xrange
"""
from __future__ import absolute_import

from future.utils import PY2

if PY2:
    from collections import Sequence, Iterator
else:
    from collections.abc import Sequence, Iterator
from itertools import islice

from future.backports.misc import count   # with step parameter on Py2.6
# For backward compatibility with python-future versions < 0.14.4:
_count = count


class newrange(Sequence):
    """
    Pure-Python backport of Python 3's range object.  See `the CPython
    documentation for details:
    <http://docs.python.org/py3k/library/functions.html#range>`_
    """

    def __init__(self, *args):
        if len(args) == 1:
            start, stop, step = 0, args[0], 1
        elif len(args) == 2:
            start, stop, step = args[0], args[1], 1
        elif len(args) == 3:
            start, stop, step = args
        else:
            raise TypeError('range() requires 1-3 int arguments')

        try:
            start, stop, step = int(start), int(stop), int(step)
        except ValueError:
            raise TypeError('an integer is required')

        if step == 0:
            raise ValueError('range() arg 3 must not be zero')
        elif step < 0:
            stop = min(stop, start)
        else:
            stop = max(stop, start)

        self._start = start
        self._stop = stop
        self._step = step
        self._len = (stop - start) // step + bool((stop - start) % step)

    @property
    def start(self):
        return self._start

    @property
    def stop(self):
        return self._stop

    @property
    def step(self):
        return self._step

    def __repr__(self):
        if self._step == 1:
            return 'range(%d, %d)' % (self._start, self._stop)
        return 'range(%d, %d, %d)' % (self._start, self._stop, self._step)

    def __eq__(self, other):
        return (isinstance(other, newrange) and
                (self._len == 0 == other._len or
                 (self._start, self._step, self._len) ==
                 (other._start, other._step, other._len)))

    def __len__(self):
        return self._len

    def index(self, value):
        """Return the 0-based position of integer `value` in
        the sequence this range represents."""
        try:
            diff = value - self._start
        except TypeError:
            raise ValueError('%r is not in range' % value)
        quotient, remainder = divmod(diff, self._step)
        if remainder == 0 and 0 <= quotient < self._len:
            return abs(quotient)
        raise ValueError('%r is not in range' % value)

    def count(self, value):
        """Return the number of occurrences of integer `value`
        in the sequence this range represents."""
        # a value can occur exactly zero times or once
        return int(value in self)

    def __contains__(self, value):
        """Return ``True`` if the integer `value` occurs in
        the sequence this range represents."""
        try:
            self.index(value)
            return True
        except ValueError:
            return False

    def __reversed__(self):
        return iter(self[::-1])

    def __getitem__(self, index):
        """Return the element at position ``index`` in the sequence
        this range represents, or raise :class:`IndexError` if the
        position is out of range."""
        if isinstance(index, slice):
            return self.__getitem_slice(index)
        if index < 0:
            # negative indexes access from the end
            index = self._len + index
        if index < 0 or index >= self._len:
            raise IndexError('range object index out of range')
        return self._start + index * self._step

    def __getitem_slice(self, slce):
        """Return a range which represents the requested slce
        of the sequence represented by this range.
        """
        scaled_indices = (self._step * n for n in slce.indices(self._len))
        start_offset, stop_offset, new_step = scaled_indices
        return newrange(self._start + start_offset,
                        self._start + stop_offset,
                        new_step)

    def __iter__(self):
        """Return an iterator which enumerates the elements of the
        sequence this range represents."""
        return range_iterator(self)


class range_iterator(Iterator):
    """An iterator for a :class:`range`.
    """
    def __init__(self, range_):
        self._stepper = islice(count(range_.start, range_.step), len(range_))

    def __iter__(self):
        return self

    def __next__(self):
        return next(self._stepper)

    def next(self):
        return next(self._stepper)


__all__ = ['newrange']
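
The `_len` computed in `newrange.__init__` is a ceiling division of the clamped span by the step; on Python 3 it can be sanity-checked against the builtin `range`:

```python
def range_len(start, stop, step):
    # Mirrors newrange.__init__: clamp stop toward start, then ceiling-divide
    if step < 0:
        stop = min(stop, start)
    else:
        stop = max(stop, start)
    return (stop - start) // step + bool((stop - start) % step)

for args in [(0, 10, 1), (0, 10, 3), (10, 0, -2), (5, 5, 1), (0, 1, 10)]:
    assert range_len(*args) == len(range(*args)), args
```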
# File: future/types/newmemoryview.py
"""
A pretty lame implementation of a memoryview object for Python 2.6.
"""
from numbers import Integral
import string

from future.utils import istext, isbytes, PY2, with_metaclass
from future.types import no, issubset

if PY2:
    from collections import Iterable
else:
    from collections.abc import Iterable

# class BaseNewBytes(type):
#     def __instancecheck__(cls, instance):
#         return isinstance(instance, _builtin_bytes)


class newmemoryview(object):   # with_metaclass(BaseNewBytes, _builtin_bytes)):
    """
    A pretty lame backport of the Python 2.7 and Python 3.x
    memoryview object to Py2.6.
    """
    def __init__(self, obj):
        # __init__ must return None; store the wrapped object instead.
        self.obj = obj


__all__ = ['newmemoryview']
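
For reference, the builtin `memoryview` being (minimally) stubbed here exposes zero-copy indexing and slicing; on Python 3:

```python
mv = memoryview(b'abcdef')
assert mv[1] == ord('b')            # indexing yields the byte value (Py3)
assert bytes(mv[2:4]) == b'cd'      # slicing returns another memoryview
```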
# File: future/types/newlist.py
"""
A list subclass for Python 2 that behaves like Python 3's list.

The primary difference is that lists have a .copy() method in Py3.

Example use:

>>> from builtins import list
>>> l1 = list()    # instead of [] for an empty list
>>> l1.append('hello')
>>> l2 = l1.copy()

"""

import sys
import copy

from future.utils import with_metaclass
from future.types.newobject import newobject


_builtin_list = list
ver = sys.version_info[:2]


class BaseNewList(type):
    def __instancecheck__(cls, instance):
        if cls == newlist:
            return isinstance(instance, _builtin_list)
        else:
            return issubclass(instance.__class__, cls)


class newlist(with_metaclass(BaseNewList, _builtin_list)):
    """
    A backport of the Python 3 list object to Py2
    """
    def copy(self):
        """
        L.copy() -> list -- a shallow copy of L
        """
        return copy.copy(self)

    def clear(self):
        """L.clear() -> None -- remove all items from L"""
        for i in range(len(self)):
            self.pop()

    def __new__(cls, *args, **kwargs):
        """
        list() -> new empty list
        list(iterable) -> new list initialized from iterable's items
        """

        if len(args) == 0:
            return super(newlist, cls).__new__(cls)
        # Both the newlist and generic-iterable cases pass the value through:
        value = args[0]
        return super(newlist, cls).__new__(cls, value)

    def __add__(self, value):
        return newlist(super(newlist, self).__add__(value))

    def __radd__(self, left):
        " left + self "
        try:
            return newlist(left) + self
        except Exception:
            return NotImplemented

    def __getitem__(self, y):
        """
        x.__getitem__(y) <==> x[y]

        Warning: a bug in Python 2.x prevents indexing via a slice from
        returning a newlist object.
        """
        if isinstance(y, slice):
            return newlist(super(newlist, self).__getitem__(y))
        else:
            return super(newlist, self).__getitem__(y)

    def __native__(self):
        """
        Hook for the future.utils.native() function
        """
        return list(self)

    def __nonzero__(self):
        return len(self) > 0


__all__ = ['newlist']
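
On Python 3 the two methods backported here already exist on builtin `list`; the intended semantics are:

```python
l1 = ['hello']
l2 = l1.copy()          # shallow copy, as newlist.copy() provides on Py2
l2.append('world')
assert l1 == ['hello'] and l2 == ['hello', 'world']
l2.clear()              # as newlist.clear() provides on Py2
assert l2 == [] and l1 == ['hello']
```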
# File: future/types/newint.py
"""
Backport of Python 3's int, based on Py2's long.

They are very similar. The most notable difference is:

- representation: trailing L in Python 2 removed in Python 3
"""
from __future__ import division

import struct

from future.types.newbytes import newbytes
from future.types.newobject import newobject
from future.utils import PY3, isint, istext, isbytes, with_metaclass, native


if PY3:
    long = int
    from collections.abc import Iterable
else:
    from collections import Iterable


class BaseNewInt(type):
    def __instancecheck__(cls, instance):
        if cls == newint:
            # Special case for Py2 short or long int
            return isinstance(instance, (int, long))
        else:
            return issubclass(instance.__class__, cls)


class newint(with_metaclass(BaseNewInt, long)):
    """
    A backport of the Python 3 int object to Py2
    """
    def __new__(cls, x=0, base=10):
        """
        From the Py3 int docstring:

        |  int(x=0) -> integer
        |  int(x, base=10) -> integer
        |
        |  Convert a number or string to an integer, or return 0 if no
        |  arguments are given.  If x is a number, return x.__int__().  For
        |  floating point numbers, this truncates towards zero.
        |
        |  If x is not a number or if base is given, then x must be a string,
        |  bytes, or bytearray instance representing an integer literal in the
        |  given base.  The literal can be preceded by '+' or '-' and be
        |  surrounded by whitespace.  The base defaults to 10.  Valid bases are
        |  0 and 2-36. Base 0 means to interpret the base from the string as an
        |  integer literal.
        |  >>> int('0b100', base=0)
        |  4

        """
        try:
            val = x.__int__()
        except AttributeError:
            val = x
        else:
            if not isint(val):
                raise TypeError('__int__ returned non-int ({0})'.format(
                    type(val)))

        if base != 10:
            # Explicit base
            if not (istext(val) or isbytes(val) or isinstance(val, bytearray)):
                raise TypeError(
                    "int() can't convert non-string with explicit base")
            try:
                return super(newint, cls).__new__(cls, val, base)
            except TypeError:
                return super(newint, cls).__new__(cls, newbytes(val), base)
        # After here, base is 10
        try:
            return super(newint, cls).__new__(cls, val)
        except TypeError:
            # Py2 long doesn't handle bytearray input with an explicit base, so
            # handle this here.
            # Py3: int(bytearray(b'10'), 2) == 2
            # Py2: int(bytearray(b'10'), 2) == 2 raises TypeError
            # Py2: long(bytearray(b'10'), 2) == 2 raises TypeError
            try:
                return super(newint, cls).__new__(cls, newbytes(val))
            except Exception:
                raise TypeError("newint argument must be a string or a number, "
                                "not '{0}'".format(type(val)))

    def __repr__(self):
        """
        Without the L suffix
        """
        value = super(newint, self).__repr__()
        assert value[-1] == 'L'
        return value[:-1]

    def __add__(self, other):
        value = super(newint, self).__add__(other)
        if value is NotImplemented:
            return long(self) + other
        return newint(value)

    def __radd__(self, other):
        value = super(newint, self).__radd__(other)
        if value is NotImplemented:
            return other + long(self)
        return newint(value)

    def __sub__(self, other):
        value = super(newint, self).__sub__(other)
        if value is NotImplemented:
            return long(self) - other
        return newint(value)

    def __rsub__(self, other):
        value = super(newint, self).__rsub__(other)
        if value is NotImplemented:
            return other - long(self)
        return newint(value)

    def __mul__(self, other):
        value = super(newint, self).__mul__(other)
        if isint(value):
            return newint(value)
        elif value is NotImplemented:
            return long(self) * other
        return value

    def __rmul__(self, other):
        value = super(newint, self).__rmul__(other)
        if isint(value):
            return newint(value)
        elif value is NotImplemented:
            return other * long(self)
        return value

    def __div__(self, other):
        # We override this rather than e.g. relying on object.__div__ or
        # long.__div__ because we want to wrap the value in a newint()
        # call if other is another int
        value = long(self) / other
        if isinstance(other, (int, long)):
            return newint(value)
        else:
            return value

    def __rdiv__(self, other):
        value = other / long(self)
        if isinstance(other, (int, long)):
            return newint(value)
        else:
            return value

    def __idiv__(self, other):
        # long has no __idiv__ method. Use __itruediv__ and cast back to
        # newint:
        value = self.__itruediv__(other)
        if isinstance(other, (int, long)):
            return newint(value)
        else:
            return value

    def __truediv__(self, other):
        value = super(newint, self).__truediv__(other)
        if value is NotImplemented:
            value = long(self) / other
        return value

    def __rtruediv__(self, other):
        return super(newint, self).__rtruediv__(other)

    def __itruediv__(self, other):
        # long has no __itruediv__ method
        mylong = long(self)
        mylong /= other
        return mylong

    def __floordiv__(self, other):
        return newint(super(newint, self).__floordiv__(other))

    def __rfloordiv__(self, other):
        return newint(super(newint, self).__rfloordiv__(other))

    def __ifloordiv__(self, other):
        # long has no __ifloordiv__ method
        mylong = long(self)
        mylong //= other
        return newint(mylong)

    def __mod__(self, other):
        value = super(newint, self).__mod__(other)
        if value is NotImplemented:
            return long(self) % other
        return newint(value)

    def __rmod__(self, other):
        value = super(newint, self).__rmod__(other)
        if value is NotImplemented:
            return other % long(self)
        return newint(value)

    def __divmod__(self, other):
        value = super(newint, self).__divmod__(other)
        if value is NotImplemented:
            mylong = long(self)
            return (mylong // other, mylong % other)
        return (newint(value[0]), newint(value[1]))

    def __rdivmod__(self, other):
        value = super(newint, self).__rdivmod__(other)
        if value is NotImplemented:
            mylong = long(self)
            return (other // mylong, other % mylong)
        return (newint(value[0]), newint(value[1]))

    def __pow__(self, other):
        value = super(newint, self).__pow__(other)
        if value is NotImplemented:
            return long(self) ** other
        return newint(value)

    def __rpow__(self, other):
        value = super(newint, self).__rpow__(other)
        if isint(value):
            return newint(value)
        elif value is NotImplemented:
            return other ** long(self)
        return value

    def __lshift__(self, other):
        if not isint(other):
            raise TypeError(
                "unsupported operand type(s) for <<: '%s' and '%s'" %
                (type(self).__name__, type(other).__name__))
        return newint(super(newint, self).__lshift__(other))

    def __rshift__(self, other):
        if not isint(other):
            raise TypeError(
                "unsupported operand type(s) for >>: '%s' and '%s'" %
                (type(self).__name__, type(other).__name__))
        return newint(super(newint, self).__rshift__(other))

    def __and__(self, other):
        if not isint(other):
            raise TypeError(
                "unsupported operand type(s) for &: '%s' and '%s'" %
                (type(self).__name__, type(other).__name__))
        return newint(super(newint, self).__and__(other))

    def __or__(self, other):
        if not isint(other):
            raise TypeError(
                "unsupported operand type(s) for |: '%s' and '%s'" %
                (type(self).__name__, type(other).__name__))
        return newint(super(newint, self).__or__(other))

    def __xor__(self, other):
        if not isint(other):
            raise TypeError(
                "unsupported operand type(s) for ^: '%s' and '%s'" %
                (type(self).__name__, type(other).__name__))
        return newint(super(newint, self).__xor__(other))

    def __neg__(self):
        return newint(super(newint, self).__neg__())

    def __pos__(self):
        return newint(super(newint, self).__pos__())

    def __abs__(self):
        return newint(super(newint, self).__abs__())

    def __invert__(self):
        return newint(super(newint, self).__invert__())

    def __int__(self):
        return self

    def __nonzero__(self):
        return self.__bool__()

    def __bool__(self):
        """
        So subclasses can override this, Py3-style
        """
        if PY3:
            return super(newint, self).__bool__()

        return super(newint, self).__nonzero__()

    def __native__(self):
        return long(self)

    def to_bytes(self, length, byteorder='big', signed=False):
        """
        Return an array of bytes representing an integer.

        The integer is represented using length bytes.  An OverflowError is
        raised if the integer is not representable with the given number of
        bytes.

        The byteorder argument determines the byte order used to represent the
        integer.  If byteorder is 'big', the most significant byte is at the
        beginning of the byte array.  If byteorder is 'little', the most
        significant byte is at the end of the byte array.  To request the native
        byte order of the host system, use `sys.byteorder' as the byte order value.

        The signed keyword-only argument determines whether two's complement is
        used to represent the integer.  If signed is False and a negative integer
        is given, an OverflowError is raised.
        """
        if length < 0:
            raise ValueError("length argument must be non-negative")
        if length == 0 and self == 0:
            return newbytes()
        if signed and self < 0:
            bits = length * 8
            num = (2**bits) + self
            if num <= 0:
                raise OverflowError("int too small to convert")
        else:
            if self < 0:
                raise OverflowError("can't convert negative int to unsigned")
            num = self
        if byteorder not in ('little', 'big'):
            raise ValueError("byteorder must be either 'little' or 'big'")
        h = b'%x' % num
        s = newbytes((b'0'*(len(h) % 2) + h).zfill(length*2).decode('hex'))
        if signed:
            high_set = s[0] & 0x80
            if self > 0 and high_set:
                raise OverflowError("int too big to convert")
            if self < 0 and not high_set:
                raise OverflowError("int too small to convert")
        if len(s) > length:
            raise OverflowError("int too big to convert")
        return s if byteorder == 'big' else s[::-1]

    @classmethod
    def from_bytes(cls, mybytes, byteorder='big', signed=False):
        """
        Return the integer represented by the given array of bytes.

        The mybytes argument must either support the buffer protocol or be an
        iterable object producing bytes.  Bytes and bytearray are examples of
        built-in objects that support the buffer protocol.

        The byteorder argument determines the byte order used to represent the
        integer.  If byteorder is 'big', the most significant byte is at the
        beginning of the byte array.  If byteorder is 'little', the most
        significant byte is at the end of the byte array.  To request the native
        byte order of the host system, use `sys.byteorder' as the byte order value.

        The signed keyword-only argument indicates whether two's complement is
        used to represent the integer.
        """
        if byteorder not in ('little', 'big'):
            raise ValueError("byteorder must be either 'little' or 'big'")
        if isinstance(mybytes, unicode):
            raise TypeError("cannot convert unicode objects to bytes")
        # mybytes can also be passed as a sequence of integers on Py3.
        # Test for this:
        elif isinstance(mybytes, Iterable):
            mybytes = newbytes(mybytes)
        b = mybytes if byteorder == 'big' else mybytes[::-1]
        if len(b) == 0:
            b = b'\x00'
        # The encode() method has been disabled by newbytes, but Py2's
        # str has it:
        num = int(native(b).encode('hex'), 16)
        if signed and (b[0] & 0x80):
            num = num - (2 ** (len(b)*8))
        return cls(num)


# def _twos_comp(val, bits):
#     """compute the 2's compliment of int value val"""
#     if( (val&(1<<(bits-1))) != 0 ):
#         val = val - (1<<bits)
#     return val


__all__ = ['newint']
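
The `to_bytes`/`from_bytes` pair above emulates the Python 3 int methods; their contract, including the signed two's-complement case, can be checked on Py3 with the builtins directly:

```python
for n in (0, 1, 255, -1, -128, 4660):
    for order in ('big', 'little'):
        b = n.to_bytes(2, byteorder=order, signed=True)
        assert int.from_bytes(b, byteorder=order, signed=True) == n

# OverflowError when the value does not fit, as in the backport above:
try:
    (-1).to_bytes(2, byteorder='big', signed=False)
except OverflowError:
    pass
else:
    raise AssertionError('expected OverflowError')
```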
# File: future/types/newdict.py
"""
A dict subclass for Python 2 that behaves like Python 3's dict

Example use:

>>> from builtins import dict
>>> d1 = dict()    # instead of {} for an empty dict
>>> d2 = dict(key1='value1', key2='value2')

The keys, values and items methods now return iterators on Python 2.x
(with set-like behaviour on Python 2.7).

>>> for d in (d1, d2):
...     assert not isinstance(d.keys(), list)
...     assert not isinstance(d.values(), list)
...     assert not isinstance(d.items(), list)
"""

import sys

from future.utils import with_metaclass
from future.types.newobject import newobject


_builtin_dict = dict
ver = sys.version_info


class BaseNewDict(type):
    def __instancecheck__(cls, instance):
        if cls == newdict:
            return isinstance(instance, _builtin_dict)
        else:
            return issubclass(instance.__class__, cls)


class newdict(with_metaclass(BaseNewDict, _builtin_dict)):
    """
    A backport of the Python 3 dict object to Py2
    """

    if ver >= (3,):
        # Inherit items, keys and values from `dict` in 3.x
        pass
    elif ver >= (2, 7):
        items = dict.viewitems
        keys = dict.viewkeys
        values = dict.viewvalues
    else:
        items = dict.iteritems
        keys = dict.iterkeys
        values = dict.itervalues

    def __new__(cls, *args, **kwargs):
        """
        dict() -> new empty dictionary
        dict(mapping) -> new dictionary initialized from a mapping object's
            (key, value) pairs
        dict(iterable) -> new dictionary initialized as if via:
            d = {}
            for k, v in iterable:
                d[k] = v
        dict(**kwargs) -> new dictionary initialized with the name=value pairs
            in the keyword argument list.  For example:  dict(one=1, two=2)
        """

        return super(newdict, cls).__new__(cls, *args)

    def __native__(self):
        """
        Hook for the future.utils.native() function
        """
        return dict(self)


__all__ = ['newdict']
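
The version switch above gives Py2 the Py3 view semantics; on Python 3 itself, `keys()`/`values()`/`items()` are live, set-like views rather than lists:

```python
d = dict(one=1, two=2)
assert not isinstance(d.keys(), list)
assert d.keys() & {'one'} == {'one'}    # set-like intersection on key views
d['three'] = 3
ks = d.keys()
del d['three']
assert 'three' not in ks                # views reflect later mutations
```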
    Traceback (most recent call last):
      ...
    TypeError: can't concat 'bytes' to (unicode) str
    cst������fdd��}|S)Ncs�ddlm}ddlm}ddlm}d}t���D]T\}}t|t�sNt|t�rXt�|}t|�|krhq�t	||�|kr2t
|�|���q2�|i|��S)N���newbytes��newint��newstrzargument can't be {0})r	rr
�zip�
isinstance�str�bytes�locals�len�type�	TypeError�format)�args�kwargsr	rr
�errmsgZargnum�mytype)�argnums�disallowed_types�function��?/usr/local/lib/python3.9/site-packages/future/types/__init__.py�wrapper�s
z2disallow_types.<locals>.decorator.<locals>.wrapper)�	functools�wraps)rr �rr)rr�	decorator�sz!disallow_types.<locals>.decoratorr)rrr$rr#r�disallow_typesds'r%�rcCs(t|t�r|f}|gt|�}t||�S)a
    A shortcut for the disallow_types decorator that disallows only one type
    (in any position in argnums).

    Example use:

    >>> class newstr(object):
    ...     @no('bytes')
    ...     def __add__(self, other):
    ...          pass

    >>> newstr(u'1234') + b'1234'     #doctest: +IGNORE_EXCEPTION_DETAIL
    Traceback (most recent call last):
      ...
    TypeError: argument can't be bytes

    The object can also be passed directly, but passing the string helps
    to prevent circular import problems.
    )rrrr%)rrrrrr�no�s
r'cCs@t|�}tt|�|d�D]}||||�|krdSqdS)z�
    Examples:

    >>> issubset([], [65, 66, 67])
    True
    >>> issubset([65], [65, 66, 67])
    True
    >>> issubset([65, 66], [65, 66, 67])
    True
    >>> issubset([65, 67], [65, 66, 67])
    False
    rTF)r�range)Zlist1Zlist2�n�startposrrr�issubset�s

r+�newtypesrr)�newdictr
)�newlist)�newrange)�	newobjectr)r	r-rr.r/r
r,)r&)!�__doc__�
__future__rrrr!Znumbersr�futurerr%r'r+�PY3�builtinsr�dict�int�list�objectr(rr,�__all__r	r-rr.r/r0r
�long�unicoderrrr�<module>sVYF
��
PKBu\��r�))/future/types/__pycache__/newopen.cpython-39.pycnu�[���a

��?h*�@sdZeZGdd�de�ZdS)z�
A substitute for the Python 3 open() function.

Note that io.open() is more complete but maybe slower. Even so, the
completeness may be a better default. TODO: compare these
c@sDeZdZdZddd�Zdd�Zdd	d
�Zdd�Zd
d�Zdd�Z	dS)�newopenztWrapper providing key part of Python 3 open() interface.

    From IPython's py3compat.py module. License: BSD.
    �r�utf-8cCst||�|_||_dS�N)�
_builtin_open�f�enc)�self�fname�mode�encoding�r�>/usr/local/lib/python3.9/site-packages/future/types/newopen.py�__init__sznewopen.__init__cCs|j�|�|j��Sr)r�write�encoder)r�srrr
rsz
newopen.write���cCs|j�|��|j�Sr)r�read�decoder)r�sizerrr
rsznewopen.readcCs
|j��Sr�r�close�rrrr
rsz
newopen.closecCs|Srrrrrr
�	__enter__sznewopen.__enter__cCs|j��dSrr)r�etype�value�	tracebackrrr
�__exit__sznewopen.__exit__N)rr)r)
�__name__�
__module__�__qualname__�__doc__rrrrrrrrrr
r
s

rN)r!�openr�objectrrrrr
�<module>sPKBu\�<��/future/types/__pycache__/newlist.cpython-39.pycnu�[���a

��?h��@sndZddlZddlZddlmZddlmZeZej	dd�Z
Gdd�de�ZGdd	�d	eee��Z
d	gZdS)
a
A list subclass for Python 2 that behaves like Python 3's list.

The primary difference is that lists have a .copy() method in Py3.

Example use:

>>> from builtins import list
>>> l1 = list()    # instead of {} for an empty list
>>> l1.append('hello')
>>> l2 = l1.copy()

�N)�with_metaclass)�	newobject�c@seZdZdd�ZdS)�BaseNewListcCs"|tkrt|t�St|j|�SdS�N)�newlist�
isinstance�
_builtin_list�
issubclass�	__class__)�cls�instance�r�>/usr/local/lib/python3.9/site-packages/future/types/newlist.py�__instancecheck__s
zBaseNewList.__instancecheck__N)�__name__�
__module__�__qualname__rrrrrrsrcs`eZdZdZdd�Zdd�Z�fdd�Z�fdd	�Zd
d�Z�fdd
�Z	dd�Z
dd�Z�ZS)rz7
    A backport of the Python 3 list object to Py2
    cCs
t�|�S)z9
        L.copy() -> list -- a shallow copy of L
        )�copy��selfrrrr&sznewlist.copycCstt|��D]}|��qdS)z,L.clear() -> None -- remove all items from LN)�range�len�pop)r�irrr�clear,sz
newlist.clearcsPt|�dkrtt|��|�St|d�tkr6|d}n|d}tt|��||�S)zo
        list() -> new empty list
        list(iterable) -> new list initialized from iterable's items
        r)r�superr�__new__�type)r�args�kwargs�value�rrrr1s
znewlist.__new__csttt|��|��Sr)rr�__add__)rr!r"rrr#?sznewlist.__add__cCs$zt|�|WStYS0dS)z
 left + self N)r�NotImplemented)r�leftrrr�__radd__Bsznewlist.__radd__cs2t|t�rttt|��|��Stt|��|�SdS)z�
        x.__getitem__(y) <==> x[y]

        Warning: a bug in Python 2.x prevents indexing via a slice from
        returning a newlist object.
        N)r�slicerr�__getitem__)r�yr"rrr(Is
znewlist.__getitem__cCst|�S)z=
        Hook for the future.utils.native() function
        )�listrrrr�
__native__Usznewlist.__native__cCst|�dkS)Nr)rrrrr�__nonzero__[sznewlist.__nonzero__)
rrr�__doc__rrrr#r&r(r+r,�
__classcell__rrr"rr"sr)r-�sysrZfuture.utilsrZfuture.types.newobjectrr*r	�version_info�verrrr�__all__rrrr�<module>s=PK	Bu\^�Ty11.future/types/__pycache__/newint.cpython-39.pycnu�[���a

��?h^4�@s�dZddlmZddlZddlmZddlmZddlm	Z	m
Z
mZmZm
Z
mZe	rfeZddlmZnddlmZGdd	�d	e�ZGd
d�de
ee��ZdgZdS)z�
Backport of Python 3's int, based on Py2's long.

They are very similar. The most notable difference is:

- representation: trailing L in Python 2 removed in Python 3
�)�divisionN)�newbytes)�	newobject)�PY3�isint�istext�isbytes�with_metaclass�native)�Iterablec@seZdZdd�ZdS)�
BaseNewIntcCs&|tkrt|ttf�St|j|�SdS�N)�newint�
isinstance�int�long�
issubclass�	__class__)�cls�instance�r�=/usr/local/lib/python3.9/site-packages/future/types/newint.py�__instancecheck__szBaseNewInt.__instancecheck__N)�__name__�
__module__�__qualname__rrrrrrsrcs�eZdZdZdR�fdd�	Z�fdd�Z�fdd	�Z�fd
d�Z�fdd
�Z�fdd�Z	�fdd�Z
�fdd�Zdd�Zdd�Z
dd�Z�fdd�Z�fdd�Zdd�Z�fd d!�Z�fd"d#�Zd$d%�Z�fd&d'�Z�fd(d)�Z�fd*d+�Z�fd,d-�Z�fd.d/�Z�fd0d1�Z�fd2d3�Z�fd4d5�Z�fd6d7�Z�fd8d9�Z�fd:d;�Z�fd<d=�Z �fd>d?�Z!�fd@dA�Z"�fdBdC�Z#dDdE�Z$dFdG�Z%�fdHdI�Z&dJdK�Z'dSdNdO�Z(e)dTdPdQ��Z*�Z+S)Urz6
    A backport of the Python 3 int object to Py2
    r�
csz|��}Wnty"|}Yn0t|�s>td�t|����|dkr�t|�sht|�sht|t	�shtd��zt
t|��|||�WSty�t
t|��|t
|�|�YS0zt
t|��||�WSt�yzt
t|��|t
|��WYStd�t|����Yn0Yn0dS)a.
        From the Py3 int docstring:

        |  int(x=0) -> integer
        |  int(x, base=10) -> integer
        |
        |  Convert a number or string to an integer, or return 0 if no
        |  arguments are given.  If x is a number, return x.__int__().  For
        |  floating point numbers, this truncates towards zero.
        |
        |  If x is not a number or if base is given, then x must be a string,
        |  bytes, or bytearray instance representing an integer literal in the
        |  given base.  The literal can be preceded by '+' or '-' and be
        |  surrounded by whitespace.  The base defaults to 10.  Valid bases are
        |  0 and 2-36. Base 0 means to interpret the base from the string as an
        |  integer literal.
        |  >>> int('0b100', base=0)
        |  4

        z__int__ returned non-int ({0})rz1int() can't convert non-string with explicit basez6newint argument must be a string or a number,not '{0}'N)�__int__�AttributeErrorr�	TypeError�format�typerrr�	bytearray�superr�__new__r)r�x�base�val�rrrr$%s4
���znewint.__new__cs*tt|���}|ddksJ�|dd�S)z&
        Without the L suffix
        ����LN)r#r�__repr__)�self�valuer(rrr+[sznewint.__repr__cs,tt|��|�}|tur$t|�|St|�Sr
)r#r�__add__�NotImplementedr�r,�otherr-r(rrr.csznewint.__add__cs,tt|��|�}|tur$|t|�St|�Sr
)r#r�__radd__r/rr0r(rrr2isznewint.__radd__cs,tt|��|�}|tur$t|�|St|�Sr
)r#r�__sub__r/rr0r(rrr3osznewint.__sub__cs,tt|��|�}|tur$|t|�St|�Sr
)r#r�__rsub__r/rr0r(rrr4usznewint.__rsub__cs8tt|��|�}t|�r t|�S|tur4t|�|S|Sr
)r#r�__mul__rr/rr0r(rrr5{sznewint.__mul__cs8tt|��|�}t|�r t|�S|tur4|t|�S|Sr
)r#r�__rmul__rr/rr0r(rrr6�sznewint.__rmul__cCs*t|�|}t|ttf�r"t|�S|SdSr
�rrrrr0rrr�__div__�sznewint.__div__cCs*|t|�}t|ttf�r"t|�S|SdSr
r7r0rrr�__rdiv__�sznewint.__rdiv__cCs(|�|�}t|ttf�r t|�S|SdSr
)�__itruediv__rrrrr0rrr�__idiv__�s
znewint.__idiv__cs(tt|��|�}|tur$t|�|}|Sr
)r#r�__truediv__r/rr0r(rrr<�sznewint.__truediv__cstt|��|�Sr
)r#r�__rtruediv__�r,r1r(rrr=�sznewint.__rtruediv__cCst|�}||}|Sr
�r�r,r1�mylongrrrr:�sznewint.__itruediv__csttt|��|��Sr
)rr#�__floordiv__r>r(rrrB�sznewint.__floordiv__csttt|��|��Sr
)rr#�
__rfloordiv__r>r(rrrC�sznewint.__rfloordiv__cCst|�}||}t|�Sr
)rrr@rrr�
__ifloordiv__�sznewint.__ifloordiv__cs,tt|��|�}|tur$t|�|St|�Sr
)r#r�__mod__r/rr0r(rrrE�sznewint.__mod__cs,tt|��|�}|tur$|t|�St|�Sr
)r#r�__rmod__r/rr0r(rrrF�sznewint.__rmod__csHtt|��|�}|tur0t|�}||||fSt|d�t|d�fS�Nr�)r#r�
__divmod__r/r�r,r1r-rAr(rrrI�s
znewint.__divmod__csHtt|��|�}|tur0t|�}||||fSt|d�t|d�fSrG)r#r�__rdivmod__r/rrJr(rrrK�s
znewint.__rdivmod__cs,tt|��|�}|tur$t|�|St|�Sr
)r#r�__pow__r/rr0r(rrrL�sznewint.__pow__cs8tt|��|�}t|�r t|�S|tur4|t|�S|Sr
)r#r�__rpow__rr/rr0r(rrrM�sznewint.__rpow__cs8t|�s$tdt|�jt|�jf��ttt|��|��S)Nz1unsupported operand type(s) for <<: '%s' and '%s')rrr!rrr#�
__lshift__r>r(rrrN�s��znewint.__lshift__cs8t|�s$tdt|�jt|�jf��ttt|��|��S)Nz1unsupported operand type(s) for >>: '%s' and '%s')rrr!rrr#�
__rshift__r>r(rrrO�s��znewint.__rshift__cs8t|�s$tdt|�jt|�jf��ttt|��|��S)Nz0unsupported operand type(s) for &: '%s' and '%s')rrr!rrr#�__and__r>r(rrrP�s��znewint.__and__cs8t|�s$tdt|�jt|�jf��ttt|��|��S)Nz0unsupported operand type(s) for |: '%s' and '%s')rrr!rrr#�__or__r>r(rrrQ�s��z
newint.__or__cs8t|�s$tdt|�jt|�jf��ttt|��|��S)Nz0unsupported operand type(s) for ^: '%s' and '%s')rrr!rrr#�__xor__r>r(rrrRs��znewint.__xor__csttt|����Sr
)rr#�__neg__�r,r(rrrSsznewint.__neg__csttt|����Sr
)rr#�__pos__rTr(rrrUsznewint.__pos__csttt|����Sr
)rr#�__abs__rTr(rrrVsznewint.__abs__csttt|����Sr
)rr#�
__invert__rTr(rrrWsznewint.__invert__cCs|Sr
rrTrrrrsznewint.__int__cCs|��Sr
)�__bool__rTrrr�__nonzero__sznewint.__nonzero__cs trtt|���Stt|���S)z<
        So subclasses can override this, Py3-style
        )rr#rrXrYrTr(rrrXsznewint.__bool__cCst|�Sr
r?rTrrr�
__native__&sznewint.__native__�bigFc	Cs|dkrtd��|dkr&|dkr&t�S|rX|dkrX|d}d||}|dkrltd��n|dkrhtd��|}|dvr|td��d	|}td
t|�d|�|d��d��}|r�|dd@}|dkr�|r�td
��|dkr�|s�td��t|�|kr�td
��|dk�r|S|ddd�S)aG
        Return an array of bytes representing an integer.

        The integer is represented using length bytes.  An OverflowError is
        raised if the integer is not representable with the given number of
        bytes.

        The byteorder argument determines the byte order used to represent the
        integer.  If byteorder is 'big', the most significant byte is at the
        beginning of the byte array.  If byteorder is 'little', the most
        significant byte is at the end of the byte array.  To request the native
        byte order of the host system, use `sys.byteorder' as the byte order value.

        The signed keyword-only argument determines whether two's complement is
        used to represent the integer.  If signed is False and a negative integer
        is given, an OverflowError is raised.
        rz$length argument must be non-negative��zint too small to convertz&can't convert negative int to unsigned��littler[�*byteorder must be either 'little' or 'big's%x�0�hex�zint too big to convertr[Nr))�
ValueErrorr�
OverflowError�len�zfill�decode)	r,�length�	byteorder�signed�bits�num�h�sZhigh_setrrr�to_bytes)s2
(znewint.to_bytescCs�|dvrtd��t|t�r$td��nt|t�r6t|�}|dkrB|n|ddd�}t|�dkr`d}tt|��	d	�d
�}|r�|dd@r�|dt|�d
}||�S)a'
        Return the integer represented by the given array of bytes.

        The mybytes argument must either support the buffer protocol or be an
        iterable object producing bytes.  Bytes and bytearray are examples of
        built-in objects that support the buffer protocol.

        The byteorder argument determines the byte order used to represent the
        integer.  If byteorder is 'big', the most significant byte is at the
        beginning of the byte array.  If byteorder is 'little', the most
        significant byte is at the end of the byte array.  To request the native
        byte order of the host system, use `sys.byteorder' as the byte order value.

        The signed keyword-only argument indicates whether two's complement is
        used to represent the integer.
        r^r`z'cannot convert unicode objects to bytesr[Nr)r�rb�rcr]r\)
rdr�unicoderrrrfrr
�encode)rZmybytesrjrk�brmrrr�
from_bytesVs


znewint.from_bytes)rr)r[F)r[F),rrr�__doc__r$r+r.r2r3r4r5r6r8r9r;r<r=r:rBrCrDrErFrIrKrLrMrNrOrPrQrRrSrUrVrWrrYrXrZrp�classmethodrv�
__classcell__rrr(rr!sP6
		
-r)rw�
__future__r�structZfuture.types.newbytesrZfuture.types.newobjectrZfuture.utilsrrrrr	r
rr�collections.abcr�collectionsr!rr�__all__rrrr�<module>s 	cPKBu\h/�P
P
1future/types/__pycache__/newobject.cpython-39.pycnu�[���a

��?h
�@sdZGdd�de�ZdgZdS)u�
An object subclass for Python 2 that gives new-style classes written in the
style of Python 3 (with ``__next__`` and unicode-returning ``__str__`` methods)
the appropriate Python 2-style ``next`` and ``__unicode__`` methods for compatible.

Example use::

    from builtins import object

    my_unicode_str = u'Unicode string: 孔子'

    class A(object):
        def __str__(self):
            return my_unicode_str

    a = A()
    print(str(a))

    # On Python 2, these relations hold:
    assert unicode(a) == my_unicode_string
    assert str(a) == my_unicode_string.encode('utf-8')


Another example::

    from builtins import object

    class Upper(object):
        def __init__(self, iterable):
            self._iter = iter(iterable)
        def __next__(self):                 # note the Py3 interface
            return next(self._iter).upper()
        def __iter__(self):
            return self

    assert list(Upper('hello')) == list('HELLO')

c@s<eZdZdZdd�Zdd�Zdd�Zdd	�Zd
d�ZgZ	dS)
�	newobjectz�
    A magical object class that provides Python 2 compatibility methods::
        next
        __unicode__
        __nonzero__

    Subclasses of this class can merely define the Python 3 methods (__next__,
    __str__, and __bool__).
    cCs$t|d�rt|��|�Std��dS)N�__next__znewobject is not an iterator)�hasattr�typer�	TypeError��self�r�@/usr/local/lib/python3.9/site-packages/future/types/newobject.py�next3s
znewobject.nextcCs>t|d�rt|��|�}nt|�}t|t�r0|S|�d�SdS)N�__str__zutf-8)rrr�str�
isinstance�unicode�decode)r�srrr	�__unicode__8s

znewobject.__unicode__cCs4t|d�rt|��|�St|d�r0t|��|�SdS)N�__bool__�__len__T)rrrrrrrr	�__nonzero__Ds


znewobject.__nonzero__cCst|d�stS|��S)N�__int__)r�NotImplementedrrrrr	�__long__Ss
znewobject.__long__cCst|�S)z=
        Hook for the future.utils.native() function
        )�objectrrrr	�
__native__msznewobject.__native__N)
�__name__�
__module__�__qualname__�__doc__r
rrrr�	__slots__rrrr	r)s	rN)rrr�__all__rrrr	�<module>s(LPKBu\%����5future/types/__pycache__/newmemoryview.cpython-39.pycnu�[���a

��?h��@sxdZddlmZddlZddlmZmZmZmZddl	m
Z
mZerRddlm
Z
nddlm
Z
Gdd�de�ZdgZdS)	zE
A pretty lame implementation of a memoryview object for Python 2.6.
�)�IntegralN)�istext�isbytes�PY2�with_metaclass)�no�issubset)�Iterablec@seZdZdZdd�ZdS)�
newmemoryviewze
    A pretty lame backport of the Python 2.7 and Python 3.x
    memoryviewview object to Py2.6.
    cCs|S)N�)�self�objrr�D/usr/local/lib/python3.9/site-packages/future/types/newmemoryview.py�__init__sznewmemoryview.__init__N)�__name__�
__module__�__qualname__�__doc__rrrrrr
sr
)rZnumbersr�stringZfuture.utilsrrrrZfuture.typesrr�collectionsr	�collections.abc�objectr
�__all__rrrr�<module>s	PKBu\�7�d�	�	/future/types/__pycache__/newdict.cpython-39.pycnu�[���a

��?h��@s^dZddlZddlmZddlmZeZejZ	Gdd�de
�ZGdd�deee��ZdgZ
dS)	a�
A dict subclass for Python 2 that behaves like Python 3's dict

Example use:

>>> from builtins import dict
>>> d1 = dict()    # instead of {} for an empty dict
>>> d2 = dict(key1='value1', key2='value2')

The keys, values and items methods now return iterators on Python 2.x
(with set-like behaviour on Python 2.7).

>>> for d in (d1, d2):
...     assert not isinstance(d.keys(), list)
...     assert not isinstance(d.values(), list)
...     assert not isinstance(d.items(), list)
�N)�with_metaclass)�	newobjectc@seZdZdd�ZdS)�BaseNewDictcCs"|tkrt|t�St|j|�SdS)N)�newdict�
isinstance�
_builtin_dict�
issubclass�	__class__)�cls�instance�r�>/usr/local/lib/python3.9/site-packages/future/types/newdict.py�__instancecheck__s
zBaseNewDict.__instancecheck__N)�__name__�
__module__�__qualname__rrrrr
rsrcs`eZdZdZedkrn.edkr2ejZejZ	ej
ZnejZej
Z	ejZ�fdd�Zdd�Z�ZS)rz7
    A backport of the Python 3 dict object to Py2
    )�)��cstt|�j|g|�R�S)a�
        dict() -> new empty dictionary
        dict(mapping) -> new dictionary initialized from a mapping object's
            (key, value) pairs
        dict(iterable) -> new dictionary initialized as if via:
            d = {}
            for k, v in iterable:
                d[k] = v
        dict(**kwargs) -> new dictionary initialized with the name=value pairs
            in the keyword argument list.  For example:  dict(one=1, two=2)
        )�superr�__new__)r
�args�kwargs�r	rr
r6s
znewdict.__new__cCst|�S)z=
        Hook for the future.utils.native() function
        )�dict)�selfrrr
�
__native__Esznewdict.__native__)rrr�__doc__�verr�	viewitems�items�viewkeys�keys�
viewvalues�values�	iteritems�iterkeys�
itervaluesrr�
__classcell__rrrr
r%sr)r�sysZfuture.utilsrZfuture.types.newobjectrrr�version_infor�typerr�__all__rrrr
�<module>s'PKBu\�U`��0future/types/__pycache__/newrange.cpython-39.pycnu�[���a

��?h��@s�dZddlmZddlmZer2ddlmZmZnddlmZmZddl	m
Z
ddlmZeZ
Gdd�de�ZGd	d
�d
e�ZdgZdS)aw
Nearly identical to xrange.py, by Dan Crosta, from

    https://github.com/dcrosta/xrange.git

This is included here in the ``future`` package rather than pointed to as
a dependency because there is no package for ``xrange`` on PyPI. It is
also tweaked to appear like a regular Python 3 ``range`` object rather
than a Python 2 xrange.

From Dan Crosta's README:

    "A pure-Python implementation of Python 2.7's xrange built-in, with
    some features backported from the Python 3.x range built-in (which
    replaced xrange) in that version."

    Read more at
        https://late.am/post/2012/06/18/what-the-heck-is-an-xrange
�)�absolute_import)�PY2)�Sequence�Iterator)�islice)�countc@s�eZdZdZdd�Zedd��Zedd��Zedd	��Zd
d�Z	dd
�Z
dd�Zdd�Zdd�Z
dd�Zdd�Zdd�Zdd�Zdd�ZdS)�newrangez�
    Pure-Python backport of Python 3's range object.  See `the CPython
    documentation for details:
    <http://docs.python.org/py3k/library/functions.html#range>`_
    cGst|�dkr"d|dd}}}nFt|�dkrH|d|dd}}}n t|�dkr`|\}}}ntd��z t|�t|�t|�}}}Wnty�td��Yn0|dkr�td��n|dkr�t||�}n
t||�}||_||_||_|||t	|||�|_
dS)N�r��z"range() requires 1-3 int argumentszan integer is requiredzrange() arg 3 must not be zero)�len�	TypeError�int�
ValueError�min�max�_start�_stop�_step�bool�_len)�self�args�start�stop�step�r�?/usr/local/lib/python3.9/site-packages/future/types/newrange.py�__init__*s( 

znewrange.__init__cCs|jS�N)r�rrrrrEsznewrange.startcCs|jSr)rr rrrrIsz
newrange.stopcCs|jSr�rr rrrrMsz
newrange.stepcCs.|jdkrd|j|jfSd|j|j|jfS)Nr	z
range(%d, %d)zrange(%d, %d, %d))rrrr rrr�__repr__Qs
znewrange.__repr__cCsFt|t�oD|jdko |jknpD|j|j|jf|j|j|jfkS)Nr)�
isinstancerrrr)r�otherrrr�__eq__Vs
��znewrange.__eq__cCs|jSr)rr rrr�__len__\sznewrange.__len__cCsxz||j}Wnty,td|��Yn0t||j�\}}|dkrhd|kr\|jkrhnnt|�Std|��dS)z]Return the 0-based position of integer `value` in
        the sequence this range represents.z%r is not in rangerN)rr
r�divmodrr�abs)r�value�diffZquotient�	remainderrrr�index_s"znewrange.indexcCst||v�S)zbReturn the number of occurrences of integer `value`
        in the sequence this range represents.)r�rr)rrrrksznewrange.countcCs*z|�|�WdSty$YdS0dS)z\Return ``True`` if the integer `value` occurs in
        the sequence this range represents.TFN)r,rr-rrr�__contains__qs

znewrange.__contains__cCst|ddd��S)N���)�iterr rrr�__reversed__zsznewrange.__reversed__cCsPt|t�r|�|�S|dkr&|j|}|dks8||jkr@td��|j||jS)z�Return the element at position ``index`` in the sequence
        this range represents, or raise :class:`IndexError` if the
        position is out of range.rzrange object index out of range)r#�slice�_newrange__getitem_slicer�
IndexErrorrr)rr,rrr�__getitem__}s


znewrange.__getitem__cs<�fdd�|��j�D�}|\}}}t�j|�j||�S)znReturn a range which represents the requested slce
        of the sequence represented by this range.
        c3s|]}�j|VqdSrr!)�.0�nr rr�	<genexpr>��z+newrange.__getitem_slice.<locals>.<genexpr>)�indicesrrr)rZslceZscaled_indicesZstart_offsetZstop_offsetZnew_steprr rZ__getitem_slice�s

�znewrange.__getitem_slicecCst|�S)z_Return an iterator which enumerates the elements of the
        sequence this range represents.)�range_iteratorr rrr�__iter__�sznewrange.__iter__N)�__name__�
__module__�__qualname__�__doc__r�propertyrrrr"r%r&r,rr.r1r5r3r<rrrrr#s$


	

rc@s0eZdZdZdd�Zdd�Zdd�Zdd	�Zd
S)r;z&An iterator for a :class:`range`.
    cCstt|j|j�t|��|_dSr)rrrrr�_stepper)rZrange_rrrr�szrange_iterator.__init__cCs|Srrr rrrr<�szrange_iterator.__iter__cCs
t|j�Sr��nextrBr rrr�__next__�szrange_iterator.__next__cCs
t|j�SrrCr rrrrD�szrange_iterator.nextN)r=r>r?r@rr<rErDrrrrr;�s
r;N)r@�
__future__rZfuture.utilsr�collectionsrr�collections.abc�	itertoolsrZfuture.backports.miscr�_countrr;�__all__rrrr�<module>swPKBu\w��/�8�8.future/types/__pycache__/newstr.cpython-39.pycnu�[���a

��?h�=�@s�dZddlmZddlmZmZmZmZddlm	Z	m
Z
ddlmZerZe
ZddlmZnddlmZGdd�de�ZGd	d
�d
eee��Zd
gZdS)a
This module redefines ``str`` on Python 2.x to be a subclass of the Py2
``unicode`` type that behaves like the Python 3.x ``str``.

The main differences between ``newstr`` and Python 2.x's ``unicode`` type are
the stricter type-checking and absence of a `u''` prefix in the representation.

It is designed to be used together with the ``unicode_literals`` import
as follows:

    >>> from __future__ import unicode_literals
    >>> from builtins import str, isinstance

On Python 3.x and normally on Python 2.x, these expressions hold

    >>> str('blah') is 'blah'
    True
    >>> isinstance('blah', str)
    True

However, on Python 2.x, with this import:

    >>> from __future__ import unicode_literals

the same expressions are False:

    >>> str('blah') is 'blah'
    False
    >>> isinstance('blah', str)
    False

This module is designed to be imported together with ``unicode_literals`` on
Python 2 to bring the meaning of ``str`` back into alignment with unprefixed
string literals (i.e. ``unicode`` subclasses).

Note that ``str()`` (and ``print()``) would then normally call the
``__unicode__`` method on objects in Python 2. To define string
representations of your objects portably across Py3 and Py2, use the
:func:`python_2_unicode_compatible` decorator in  :mod:`future.utils`.

�)�Number)�PY3�istext�with_metaclass�
isnewbytes)�no�issubset)�	newobject)�Iterablec@seZdZdd�ZdS)�
BaseNewStrcCs"|tkrt|t�St|j|�SdS�N)�newstr�
isinstance�unicode�
issubclass�	__class__)�cls�instance�r�=/usr/local/lib/python3.9/site-packages/future/types/newstr.py�__instancecheck__;s
zBaseNewStr.__instancecheck__N)�__name__�
__module__�__qualname__rrrrrr:srcs"eZdZdZdZ�fdd�Z�fdd�Z�fdd�Zd	d
�Ze	d��fdd
��Z
e	d�dd��Z�fdd�Z�fdd�Z
�fdd�Ze	d��fdd��Ze	d��fdd��Ze	dd��fdd��Zdd�ZdT�fd!d"�	Ze	dd#��fd$d%��Ze	dd#��fd&d'��Ze	dd#�dU�fd*d+�	�Ze	dd#�dV�fd,d-�	�Ze	dd#��fd.d/��Ze	dd#��fd0d1��Ze	dd#�d2d3��ZdW�fd5d6�	Z�fd7d8�Z�fd9d:�Z�fd;d<�Zd=Z�fd>d?�Z �fd@dA�Z!�fdBdC�Z"�fdDdE�Z#�fdFdG�Z$dHdI�Z%e&dXdJdK��Z'dLdM�Z(dNdO�Z)dPdQ�Z*dRdS�Z+�Z,S)Yr
z6
    A backport of the Python 3 str object to Py2
    z,Can't convert '{0}' object to str implicitlycs�t|�dkrtt|��|�St|d�tkr<|tkr<|dSt|dt�rT|d}nVt|dt�r�d|vsvt|�dkr�|dj|dd�i|��}q�|d�	�}n|d}tt|��||�S)a/
        From the Py3 str docstring:

          str(object='') -> str
          str(bytes_or_buffer[, encoding[, errors]]) -> str

          Create a new string object from the given object. If encoding or
          errors is specified, then the object must expose a data buffer
          that will be decoded using the given encoding and error handler.
          Otherwise, returns the result of object.__str__() (if defined)
          or repr(object).
          encoding defaults to sys.getdefaultencoding().
          errors defaults to 'strict'.

        r�encoding�N)
�len�superr
�__new__�typerr�bytes�decode�__str__)r�args�kwargs�value�rrrrHs
znewstr.__new__cstt|���}|dd�S)z&
        Without the u prefix
        rN)rr
�__repr__)�selfr%r&rrr'jsznewstr.__repr__csttt|��|��S)z�
        Warning: Python <= 2.7.6 has a bug that causes this method never to be called
        when y is a slice object. Therefore the type of newstr()[:2] is wrong
        (unicode instead of newstr).
        )r
r�__getitem__)r(�yr&rrr)ssznewstr.__getitem__cCs`d}t|�tkr|}n8t|t�s2t|t�r<t|�s<t|�}nt|�t|����tt	|�t	|��S)Nz6'in <string>' requires string as left operand, not {0})
rr
rrr r�	TypeError�formatr�list)r(�key�errmsgZnewkeyrrr�__contains__{s
znewstr.__contains__�newbytescsttt|��|��Sr)r
r�__add__�r(�otherr&rrr2�sznewstr.__add__cCs$zt|�|WStYS0dS)z
 left + self N)r
�NotImplemented)r(�leftrrr�__radd__�sznewstr.__radd__csttt|��|��Sr)r
r�__mul__r3r&rrr8�sznewstr.__mul__csttt|��|��Sr)r
r�__rmul__r3r&rrr9�sznewstr.__rmul__cshd}t|�D]\}}t|�rt|�|���qt|�tkrLttt|��|��Stttt|���|��SdS)Nz7sequence item {0}: expected unicode string, found bytes)�	enumeraterr+r,rr
r�join)r(�iterabler/�i�itemr&rrr;�sznewstr.joincstt|�j|g|�R�Sr)rr
�find�r(�subr#r&rrr?�sznewstr.findcstt|�j|g|�R�Sr)rr
�rfindr@r&rrrB�sznewstr.rfind)r�csttt|�j||g|�R��Sr)r
r�replace)r(�old�newr#r&rrrD�sznewstr.replacecGstd��dS)N�)decode method has been disabled in newstr)�AttributeError)r(r#rrrr!�sz
newstr.decode�utf-8�strictcs�ddlm}|dkr�|dkr$td��g}|D]L}t|�}d|krLdkrfnn|�||dg��q,|�|j|d	��q,|d
��|�S|tt|��||��S)a�
        Returns bytes

        Encode S using the codec registered for encoding. Default encoding
        is 'utf-8'. errors may be given to set a different error
        handling scheme. Default is 'strict' meaning that encoding errors raise
        a UnicodeEncodeError. Other possible values are 'ignore', 'replace' and
        'xmlcharrefreplace' as well as any other name registered with
        codecs.register_error that can handle UnicodeEncodeErrors.
        r)r1�surrogateescapezutf-16z?FIXME: surrogateescape handling is not yet implemented properlyi�i��i�)r�)	Zfuture.types.newbytesr1�NotImplementedError�ord�append�encoder;rr
)r(r�errorsr1Zmybytes�c�coder&rrrP�sz
newstr.encodercsHt|t�r0|D] }t|�rt|j�t|����qtt|�j	|g|�R�Sr)
rr
rr+�no_convert_msgr,rrr
�
startswith�r(�prefixr#�thingr&rrrU�s

znewstr.startswithcsHt|t�r0|D] }t|�rt|j�t|����qtt|�j	|g|�R�Sr)
rr
rr+rTr,rrr
�endswithrVr&rrrY�s

znewstr.endswithN���cs tt|��||�}dd�|D�S)NcSsg|]}t|��qSr�r
��.0�partrrr�
<listcomp>�rLz newstr.split.<locals>.<listcomp>)rr
�split�r(�sep�maxsplit�partsr&rrr`�sznewstr.splitcs tt|��||�}dd�|D�S)NcSsg|]}t|��qSrr[r\rrrr_�rLz!newstr.rsplit.<locals>.<listcomp>)rr
�rsplitrar&rrre�sz
newstr.rsplitcs"tt|��|�}tdd�|D��S)Ncss|]}t|�VqdSrr[r\rrr�	<genexpr>rLz#newstr.partition.<locals>.<genexpr>)rr
�	partition�tuple�r(rbrdr&rrrgsznewstr.partitioncs"tt|��|�}tdd�|D��S)Ncss|]}t|�VqdSrr[r\rrrrf	rLz$newstr.rpartition.<locals>.<genexpr>)rr
�
rpartitionrhrir&rrrjsznewstr.rpartitioncGs&|j|g|�R�}|dkr"td��|S)zb
        Like newstr.find() but raise ValueError when the substring is not
        found.
        rZzsubstring not found)r?�
ValueError)r(rAr#�posrrr�indexsznewstr.indexFcstt|��|�}dd�|D�S)z�
        S.splitlines(keepends=False) -> list of strings

        Return a list of the lines in S, breaking at line boundaries.
        Line breaks are not included in the resulting list unless keepends
        is given and true.
        cSsg|]}t|��qSrr[r\rrrr_!rLz%newstr.splitlines.<locals>.<listcomp>)rr
�
splitlines)r(�keependsrdr&rrrns
znewstr.splitlinescs4t|t�st|t�r,t|�s,tt|��|�StSdSr)rrr rrr
�__eq__r5r3r&rrrp#s
��z
newstr.__eq__cs4t|t�st|t�r*t|�s*tt|���St��dSr)rrr rrr
�__hash__rM�r(r&rrrq*s
��znewstr.__hash__cs4t|t�st|t�r,t|�s,tt|��|�SdSdS)NT)rrr rrr
�__ne__r3r&rrrs1s
��z
newstr.__ne__z unorderable types: str() and {0}csDt|t�st|t�r,t|�s,tt|��|�St|j�	t
|����dSr)rrr rrr
�__lt__r+�unorderable_errr,rr3r&rrrt:s
��z
newstr.__lt__csDt|t�st|t�r,t|�s,tt|��|�St|j�	t
# --- future/types/newbytes.py ---
"""
Pure-Python implementation of a Python 3-like bytes object for Python 2.

Why do this? Without it, the Python 2 bytes object is a very, very
different beast to the Python 3 bytes object.
"""

from numbers import Integral
import string
import copy

from future.utils import istext, isbytes, PY2, PY3, with_metaclass
from future.types import no, issubset
from future.types.newobject import newobject

if PY2:
    from collections import Iterable
else:
    from collections.abc import Iterable


_builtin_bytes = bytes

if PY3:
    # We'll probably never use newstr on Py3 anyway...
    unicode = str


class BaseNewBytes(type):
    def __instancecheck__(cls, instance):
        if cls == newbytes:
            return isinstance(instance, _builtin_bytes)
        else:
            return issubclass(instance.__class__, cls)


def _newchr(x):
    if isinstance(x, str):  # this happens on pypy
        return x.encode('ascii')
    else:
        return chr(x)


class newbytes(with_metaclass(BaseNewBytes, _builtin_bytes)):
    """
    A backport of the Python 3 bytes object to Py2
    """
    def __new__(cls, *args, **kwargs):
        """
        From the Py3 bytes docstring:

        bytes(iterable_of_ints) -> bytes
        bytes(string, encoding[, errors]) -> bytes
        bytes(bytes_or_buffer) -> immutable copy of bytes_or_buffer
        bytes(int) -> bytes object of size given by the parameter initialized with null bytes
        bytes() -> empty bytes object

        Construct an immutable array of bytes from:
          - an iterable yielding integers in range(256)
          - a text string encoded using the specified encoding
          - any object implementing the buffer API.
          - an integer
        """

        encoding = None
        errors = None

        if len(args) == 0:
            return super(newbytes, cls).__new__(cls)
        elif len(args) >= 2:
            args = list(args)
            if len(args) == 3:
                errors = args.pop()
            encoding = args.pop()
        # Was: elif isinstance(args[0], newbytes):
        # We use type() instead of the above because we're redefining
        # this to be True for all bytes subclasses. Warning:
        # This may render newbytes un-subclassable.
        if type(args[0]) == newbytes:
            # Special-case: for consistency with Py3.3, we return the same object
            # (with the same id) if a newbytes object is passed into the
            # newbytes constructor.
            return args[0]
        elif isinstance(args[0], _builtin_bytes):
            value = args[0]
        elif isinstance(args[0], unicode):
            try:
                if 'encoding' in kwargs:
                    assert encoding is None
                    encoding = kwargs['encoding']
                if 'errors' in kwargs:
                    assert errors is None
                    errors = kwargs['errors']
            except AssertionError:
                raise TypeError('Argument given by name and position')
            if encoding is None:
                raise TypeError('unicode string argument without an encoding')
            ###
            # Was:   value = args[0].encode(**kwargs)
            # Python 2.6 string encode() method doesn't take kwargs:
            # Use this instead:
            newargs = [encoding]
            if errors is not None:
                newargs.append(errors)
            value = args[0].encode(*newargs)
            ###
        elif hasattr(args[0], '__bytes__'):
            value = args[0].__bytes__()
        elif isinstance(args[0], Iterable):
            if len(args[0]) == 0:
                # This could be an empty list or tuple. Return b'' as on Py3.
                value = b''
            else:
                # Was: elif len(args[0])>0 and isinstance(args[0][0], Integral):
                #      # It's a list of integers
                # But then we can't index into e.g. frozensets. Try to proceed
                # anyway.
                try:
                    value = bytearray([_newchr(x) for x in args[0]])
                except Exception:
                    raise ValueError('bytes must be in range(0, 256)')
        elif isinstance(args[0], Integral):
            if args[0] < 0:
                raise ValueError('negative count')
            value = b'\x00' * args[0]
        else:
            value = args[0]
        if type(value) == newbytes:
            # Above we use type(...) rather than isinstance(...) because the
            # newbytes metaclass overrides __instancecheck__.
            # oldbytes(value) gives the wrong thing on Py2: the same
            # result as str(value) on Py3, e.g. "b'abc'". (Issue #193).
            # So we handle this case separately:
            return copy.copy(value)
        else:
            return super(newbytes, cls).__new__(cls, value)

    def __repr__(self):
        return 'b' + super(newbytes, self).__repr__()

    def __str__(self):
        return 'b' + "'{0}'".format(super(newbytes, self).__str__())

    def __getitem__(self, y):
        value = super(newbytes, self).__getitem__(y)
        if isinstance(y, Integral):
            return ord(value)
        else:
            return newbytes(value)

    def __getslice__(self, *args):
        return self.__getitem__(slice(*args))

    def __contains__(self, key):
        if isinstance(key, int):
            newbyteskey = newbytes([key])
        # Don't use isinstance() here because we only want to catch
        # newbytes, not Python 2 str:
        elif type(key) == newbytes:
            newbyteskey = key
        else:
            newbyteskey = newbytes(key)
        return issubset(list(newbyteskey), list(self))

    @no(unicode)
    def __add__(self, other):
        return newbytes(super(newbytes, self).__add__(other))

    @no(unicode)
    def __radd__(self, left):
        return newbytes(left) + self

    @no(unicode)
    def __mul__(self, other):
        return newbytes(super(newbytes, self).__mul__(other))

    @no(unicode)
    def __rmul__(self, other):
        return newbytes(super(newbytes, self).__rmul__(other))

    def __mod__(self, vals):
        if isinstance(vals, newbytes):
            vals = _builtin_bytes.__str__(vals)

        elif isinstance(vals, tuple):
            newvals = []
            for v in vals:
                if isinstance(v, newbytes):
                    v = _builtin_bytes.__str__(v)
                newvals.append(v)
            vals = tuple(newvals)

        elif (hasattr(vals.__class__, '__getitem__') and
                hasattr(vals.__class__, 'iteritems')):
            for k, v in vals.iteritems():
                if isinstance(v, newbytes):
                    vals[k] = _builtin_bytes.__str__(v)

        return _builtin_bytes.__mod__(self, vals)

    def __imod__(self, other):
        return self.__mod__(other)

    def join(self, iterable_of_bytes):
        errmsg = 'sequence item {0}: expected bytes, {1} found'
        if isbytes(iterable_of_bytes) or istext(iterable_of_bytes):
            raise TypeError(errmsg.format(0, type(iterable_of_bytes)))
        for i, item in enumerate(iterable_of_bytes):
            if istext(item):
                raise TypeError(errmsg.format(i, type(item)))
        return newbytes(super(newbytes, self).join(iterable_of_bytes))

    @classmethod
    def fromhex(cls, string):
        # Only on Py2:
        return cls(string.replace(' ', '').decode('hex'))

    @no(unicode)
    def find(self, sub, *args):
        return super(newbytes, self).find(sub, *args)

    @no(unicode)
    def rfind(self, sub, *args):
        return super(newbytes, self).rfind(sub, *args)

    @no(unicode, (1, 2))
    def replace(self, old, new, *args):
        return newbytes(super(newbytes, self).replace(old, new, *args))

    def encode(self, *args):
        raise AttributeError("encode method has been disabled in newbytes")

    def decode(self, encoding='utf-8', errors='strict'):
        """
        Returns a newstr (i.e. unicode subclass)

        Decode B using the codec registered for encoding. Default encoding
        is 'utf-8'. errors may be given to set a different error
        handling scheme.  Default is 'strict' meaning that encoding errors raise
        a UnicodeDecodeError.  Other possible values are 'ignore' and 'replace'
        as well as any other name registered with codecs.register_error that is
        able to handle UnicodeDecodeErrors.
        """
        # Py2 str.decode() takes encoding and errors as optional positional
        # parameters, not keyword arguments as in Python 3 bytes.decode().

        from future.types.newstr import newstr

        if errors == 'surrogateescape':
            from future.utils.surrogateescape import register_surrogateescape
            register_surrogateescape()

        return newstr(super(newbytes, self).decode(encoding, errors))

        # This is currently broken:
        # # We implement surrogateescape error handling here in addition rather
        # # than relying on the custom error handler from
        # # future.utils.surrogateescape to be registered globally, even though
        # # that is fine in the case of decoding. (But not encoding: see the
        # # comments in newstr.encode()``.)
        #
        # if errors == 'surrogateescape':
        #     # Decode char by char
        #     mybytes = []
        #     for code in self:
        #         # Code is an int
        #         if 0x80 <= code <= 0xFF:
        #             b = 0xDC00 + code
        #         elif code <= 0x7F:
        #             b = _unichr(c).decode(encoding=encoding)
        #         else:
        #             # # It may be a bad byte
        #             # FIXME: What to do in this case? See the Py3 docs / tests.
        #             # # Try swallowing it.
        #             # continue
        #             # print("RAISE!")
        #             raise NotASurrogateError
        #         mybytes.append(b)
        #     return newbytes(mybytes)
        # return newbytes(super(newstr, self).decode(encoding, errors))

    @no(unicode)
    def startswith(self, prefix, *args):
        return super(newbytes, self).startswith(prefix, *args)

    @no(unicode)
    def endswith(self, prefix, *args):
        return super(newbytes, self).endswith(prefix, *args)

    @no(unicode)
    def split(self, sep=None, maxsplit=-1):
        # Py2 str.split() takes maxsplit as an optional parameter, not as a
        # keyword argument as in Python 3 bytes.
        parts = super(newbytes, self).split(sep, maxsplit)
        return [newbytes(part) for part in parts]

    def splitlines(self, keepends=False):
        """
        B.splitlines([keepends]) -> list of lines

        Return a list of the lines in B, breaking at line boundaries.
        Line breaks are not included in the resulting list unless keepends
        is given and true.
        """
        # Py2 str.splitlines() takes keepends as an optional parameter,
        # not as a keyword argument as in Python 3 bytes.
        parts = super(newbytes, self).splitlines(keepends)
        return [newbytes(part) for part in parts]

    @no(unicode)
    def rsplit(self, sep=None, maxsplit=-1):
        # Py2 str.rsplit() takes maxsplit as an optional parameter, not as a
        # keyword argument as in Python 3 bytes.
        parts = super(newbytes, self).rsplit(sep, maxsplit)
        return [newbytes(part) for part in parts]

    @no(unicode)
    def partition(self, sep):
        parts = super(newbytes, self).partition(sep)
        return tuple(newbytes(part) for part in parts)

    @no(unicode)
    def rpartition(self, sep):
        parts = super(newbytes, self).rpartition(sep)
        return tuple(newbytes(part) for part in parts)

    @no(unicode, (1,))
    def rindex(self, sub, *args):
        '''
        S.rindex(sub [,start [,end]]) -> int

        Like S.rfind() but raise ValueError when the substring is not found.
        '''
        pos = self.rfind(sub, *args)
        if pos == -1:
            raise ValueError('substring not found')
        return pos

    @no(unicode)
    def index(self, sub, *args):
        '''
        Returns index of sub in bytes.
        Raises ValueError if byte is not in bytes and TypeError if can't
        be converted bytes or its length is not 1.
        '''
        if isinstance(sub, int):
            if len(args) == 0:
                start, end = 0, len(self)
            elif len(args) == 1:
                start, end = args[0], len(self)
            elif len(args) == 2:
                start, end = args
            else:
                raise TypeError('takes at most 3 arguments')
            return start + list(self)[start:end].index(sub)
        if not isinstance(sub, bytes):
            try:
                sub = self.__class__(sub)
            except (TypeError, ValueError):
                raise TypeError("can't convert sub to bytes")
        try:
            return super(newbytes, self).index(sub, *args)
        except ValueError:
            raise ValueError('substring not found')

    def __eq__(self, other):
        if isinstance(other, (_builtin_bytes, bytearray)):
            return super(newbytes, self).__eq__(other)
        else:
            return False

    def __ne__(self, other):
        if isinstance(other, _builtin_bytes):
            return super(newbytes, self).__ne__(other)
        else:
            return True

    unorderable_err = 'unorderable types: bytes() and {0}'

    def __lt__(self, other):
        if isinstance(other, _builtin_bytes):
            return super(newbytes, self).__lt__(other)
        raise TypeError(self.unorderable_err.format(type(other)))

    def __le__(self, other):
        if isinstance(other, _builtin_bytes):
            return super(newbytes, self).__le__(other)
        raise TypeError(self.unorderable_err.format(type(other)))

    def __gt__(self, other):
        if isinstance(other, _builtin_bytes):
            return super(newbytes, self).__gt__(other)
        raise TypeError(self.unorderable_err.format(type(other)))

    def __ge__(self, other):
        if isinstance(other, _builtin_bytes):
            return super(newbytes, self).__ge__(other)
        raise TypeError(self.unorderable_err.format(type(other)))

    def __native__(self):
        # We can't just feed a newbytes object into str(), because
        # newbytes.__str__() returns e.g. "b'blah'", consistent with Py3 bytes.
        return super(newbytes, self).__str__()

    def __getattribute__(self, name):
        """
        A trick to cause the ``hasattr`` builtin-fn to return False for
        the 'encode' method on Py2.
        """
        if name in ['encode', u'encode']:
            raise AttributeError("encode method has been disabled in newbytes")
        return super(newbytes, self).__getattribute__(name)

    @no(unicode)
    def rstrip(self, bytes_to_strip=None):
        """
        Strip trailing bytes contained in the argument.
        If the argument is omitted, strip trailing ASCII whitespace.
        """
        return newbytes(super(newbytes, self).rstrip(bytes_to_strip))

    @no(unicode)
    def strip(self, bytes_to_strip=None):
        """
        Strip leading and trailing bytes contained in the argument.
        If the argument is omitted, strip trailing ASCII whitespace.
        """
        return newbytes(super(newbytes, self).strip(bytes_to_strip))

    def lower(self):
        """
        b.lower() -> copy of b

        Return a copy of b with all ASCII characters converted to lowercase.
        """
        return newbytes(super(newbytes, self).lower())

    @no(unicode)
    def upper(self):
        """
        b.upper() -> copy of b

        Return a copy of b with all ASCII characters converted to uppercase.
        """
        return newbytes(super(newbytes, self).upper())

    @classmethod
    @no(unicode)
    def maketrans(cls, frm, to):
        """
        B.maketrans(frm, to) -> translation table

        Return a translation table (a bytes object of length 256) suitable
        for use in the bytes or bytearray translate method where each byte
        in frm is mapped to the byte at the same position in to.
        The bytes objects frm and to must be of the same length.
        """
        return newbytes(string.maketrans(frm, to))


__all__ = ['newbytes']
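# As a quick sanity check of the constructor forms the docstring of
# newbytes.__new__ enumerates, the same calls behave as follows with the
# builtin bytes on Python 3 (a minimal sketch; newbytes itself is only
# needed on Py2, so the builtin stands in for it here):

```python
# Demonstration (Python 3 builtins) of the constructor forms newbytes emulates.
data_from_ints = bytes([104, 105])      # iterable of ints in range(256)
data_from_text = bytes('hi', 'utf-8')   # text string + encoding
data_from_count = bytes(3)              # int -> that many null bytes
data_copy = bytes(b'hi')                # immutable copy of a bytes object

assert data_from_ints == b'hi'
assert data_from_text == b'hi'
assert data_from_count == b'\x00\x00\x00'
assert data_copy == b'hi'
```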
# --- future/types/newobject.py ---
"""
An object subclass for Python 2 that gives new-style classes written in the
style of Python 3 (with ``__next__`` and unicode-returning ``__str__`` methods)
the appropriate Python 2-style ``next`` and ``__unicode__`` methods for compatibility.

Example use::

    from builtins import object

    my_unicode_str = u'Unicode string: \u5b54\u5b50'

    class A(object):
        def __str__(self):
            return my_unicode_str

    a = A()
    print(str(a))

    # On Python 2, these relations hold:
    assert unicode(a) == my_unicode_string
    assert str(a) == my_unicode_string.encode('utf-8')


Another example::

    from builtins import object

    class Upper(object):
        def __init__(self, iterable):
            self._iter = iter(iterable)
        def __next__(self):                 # note the Py3 interface
            return next(self._iter).upper()
        def __iter__(self):
            return self

    assert list(Upper('hello')) == list('HELLO')

"""


class newobject(object):
    """
    A magical object class that provides Python 2 compatibility methods::
        next
        __unicode__
        __nonzero__

    Subclasses of this class can merely define the Python 3 methods (__next__,
    __str__, and __bool__).
    """
    def next(self):
        if hasattr(self, '__next__'):
            return type(self).__next__(self)
        raise TypeError('newobject is not an iterator')

    def __unicode__(self):
        # All subclasses of the builtin object should have __str__ defined.
        # Note that old-style classes do not have __str__ defined.
        if hasattr(self, '__str__'):
            s = type(self).__str__(self)
        else:
            s = str(self)
        if isinstance(s, unicode):
            return s
        else:
            return s.decode('utf-8')

    def __nonzero__(self):
        if hasattr(self, '__bool__'):
            return type(self).__bool__(self)
        if hasattr(self, '__len__'):
            return type(self).__len__(self)
        # object has no __nonzero__ method
        return True

    # Are these ever needed?
    # def __div__(self):
    #     return self.__truediv__()

    # def __idiv__(self, other):
    #     return self.__itruediv__(other)

    def __long__(self):
        if not hasattr(self, '__int__'):
            return NotImplemented
        return self.__int__()  # not type(self).__int__(self)

    # def __new__(cls, *args, **kwargs):
    #     """
    #     dict() -> new empty dictionary
    #     dict(mapping) -> new dictionary initialized from a mapping object's
    #         (key, value) pairs
    #     dict(iterable) -> new dictionary initialized as if via:
    #         d = {}
    #         for k, v in iterable:
    #             d[k] = v
    #     dict(**kwargs) -> new dictionary initialized with the name=value pairs
    #         in the keyword argument list.  For example:  dict(one=1, two=2)
    #     """

    #     if len(args) == 0:
    #         return super(newdict, cls).__new__(cls)
    #     elif type(args[0]) == newdict:
    #         return args[0]
    #     else:
    #         value = args[0]
    #     return super(newdict, cls).__new__(cls, value)

    def __native__(self):
        """
        Hook for the future.utils.native() function
        """
        return object(self)

    __slots__ = []

__all__ = ['newobject']
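# The dispatch pattern newobject.next() uses — a Py2-style next() that
# delegates to the Py3-style __next__() via type(self).__next__(self) — can
# be sketched standalone like this (plain Python 3, so the shim is
# illustrative rather than required):

```python
# Sketch of the Py2/Py3 iterator-protocol bridge provided by newobject.
class Upper(object):
    def __init__(self, iterable):
        self._iter = iter(iterable)

    def __next__(self):          # the Py3 iterator protocol
        return next(self._iter).upper()

    def next(self):              # Py2 spelling, delegating as newobject does
        return type(self).__next__(self)

    def __iter__(self):
        return self

assert list(Upper('hello')) == list('HELLO')
```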
# --- future/standard_library/__init__.py ---
"""
Python 3 reorganized the standard library (PEP 3108). This module exposes
several standard library modules to Python 2 under their new Python 3
names.

It is designed to be used as follows::

    from future import standard_library
    standard_library.install_aliases()

And then these normal Py3 imports work on both Py3 and Py2::

    import builtins
    import copyreg
    import queue
    import reprlib
    import socketserver
    import winreg    # on Windows only
    import test.support
    import html, html.parser, html.entities
    import http, http.client, http.server
    import http.cookies, http.cookiejar
    import urllib.parse, urllib.request, urllib.response, urllib.error, urllib.robotparser
    import xmlrpc.client, xmlrpc.server

    import _thread
    import _dummy_thread
    import _markupbase

    from itertools import filterfalse, zip_longest
    from sys import intern
    from collections import UserDict, UserList, UserString
    from collections import OrderedDict, Counter, ChainMap     # even on Py2.6
    from subprocess import getoutput, getstatusoutput
    from subprocess import check_output              # even on Py2.6
    from multiprocessing import SimpleQueue

(The renamed modules and functions are still available under their old
names on Python 2.)

This is a cleaner alternative to this idiom (see
http://docs.pythonsprints.com/python3_porting/py-porting.html)::

    try:
        import queue
    except ImportError:
        import Queue as queue


Limitations
-----------
We don't currently support these modules, but would like to::

    import dbm
    import dbm.dumb
    import dbm.gnu
    import collections.abc  # on Py33
    import pickle     # should (optionally) bring in cPickle on Python 2

"""

from __future__ import absolute_import, division, print_function

import sys
import logging
# imp was deprecated in python 3.6
if sys.version_info >= (3, 6):
    import importlib as imp
else:
    import imp
import contextlib
import copy
import os

# Make a dedicated logger; leave the root logger to be configured
# by the application.
flog = logging.getLogger('future_stdlib')
_formatter = logging.Formatter(logging.BASIC_FORMAT)
_handler = logging.StreamHandler()
_handler.setFormatter(_formatter)
flog.addHandler(_handler)
flog.setLevel(logging.WARN)

from future.utils import PY2, PY3

# The modules that are defined under the same names on Py3 but with
# different contents in a significant way (e.g. submodules) are:
#   pickle (fast one)
#   dbm
#   urllib
#   test
#   email

REPLACED_MODULES = set(['test', 'urllib', 'pickle', 'dbm'])  # add email and dbm when we support it

# The following module names are not present in Python 2.x, so they cause no
# potential clashes between the old and new names:
#   http
#   html
#   tkinter
#   xmlrpc
# Keys: Py2 / real module names
# Values: Py3 / simulated module names
RENAMES = {
           # 'cStringIO': 'io',  # there's a new io module in Python 2.6
                                 # that provides StringIO and BytesIO
           # 'StringIO': 'io',   # ditto
           # 'cPickle': 'pickle',
           '__builtin__': 'builtins',
           'copy_reg': 'copyreg',
           'Queue': 'queue',
           'future.moves.socketserver': 'socketserver',
           'ConfigParser': 'configparser',
           'repr': 'reprlib',
           'multiprocessing.queues': 'multiprocessing',
           # 'FileDialog': 'tkinter.filedialog',
           # 'tkFileDialog': 'tkinter.filedialog',
           # 'SimpleDialog': 'tkinter.simpledialog',
           # 'tkSimpleDialog': 'tkinter.simpledialog',
           # 'tkColorChooser': 'tkinter.colorchooser',
           # 'tkCommonDialog': 'tkinter.commondialog',
           # 'Dialog': 'tkinter.dialog',
           # 'Tkdnd': 'tkinter.dnd',
           # 'tkFont': 'tkinter.font',
           # 'tkMessageBox': 'tkinter.messagebox',
           # 'ScrolledText': 'tkinter.scrolledtext',
           # 'Tkconstants': 'tkinter.constants',
           # 'Tix': 'tkinter.tix',
           # 'ttk': 'tkinter.ttk',
           # 'Tkinter': 'tkinter',
           '_winreg': 'winreg',
           'thread': '_thread',
           'dummy_thread': '_dummy_thread' if sys.version_info < (3, 9) else '_thread',
           # 'anydbm': 'dbm',   # causes infinite import loop
           # 'whichdb': 'dbm',  # causes infinite import loop
           # anydbm and whichdb are handled by fix_imports2
           # 'dbhash': 'dbm.bsd',
           # 'dumbdbm': 'dbm.dumb',
           # 'dbm': 'dbm.ndbm',
           # 'gdbm': 'dbm.gnu',
           'future.moves.xmlrpc': 'xmlrpc',
           # 'future.backports.email': 'email',    # for use by urllib
           # 'DocXMLRPCServer': 'xmlrpc.server',
           # 'SimpleXMLRPCServer': 'xmlrpc.server',
           # 'httplib': 'http.client',
           # 'htmlentitydefs' : 'html.entities',
           # 'HTMLParser' : 'html.parser',
           # 'Cookie': 'http.cookies',
           # 'cookielib': 'http.cookiejar',
           # 'BaseHTTPServer': 'http.server',
           # 'SimpleHTTPServer': 'http.server',
           # 'CGIHTTPServer': 'http.server',
           # 'future.backports.test': 'test',  # primarily for renaming test_support to support
           # 'commands': 'subprocess',
           # 'urlparse' : 'urllib.parse',
           # 'robotparser' : 'urllib.robotparser',
           # 'abc': 'collections.abc',   # for Py33
           # 'future.utils.six.moves.html': 'html',
           # 'future.utils.six.moves.http': 'http',
           'future.moves.html': 'html',
           'future.moves.http': 'http',
           # 'future.backports.urllib': 'urllib',
           # 'future.utils.six.moves.urllib': 'urllib',
           'future.moves._markupbase': '_markupbase',
          }
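The RenameImport hook below inverts this mapping at construction time and asserts that it is one-to-one. A minimal sketch of that inversion, using a two-entry stand-in for RENAMES:

```python
# Stand-in mapping; the real RENAMES dict is defined above.
renames_demo = {'Queue': 'queue', 'ConfigParser': 'configparser'}

# Invert old->new into new->old. This is lossless only because the
# mapping is one-to-one, which RenameImport.__init__ asserts:
new_to_old = dict((new, old) for (old, new) in renames_demo.items())

assert new_to_old == {'queue': 'Queue', 'configparser': 'ConfigParser'}
```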


# It is complicated and apparently brittle to mess around with the
# ``sys.modules`` cache in order to support "import urllib" meaning two
# different things (Py2.7 urllib and backported Py3.3-like urllib) in different
# contexts. So we require explicit imports for these modules.
assert len(set(RENAMES.values()) & set(REPLACED_MODULES)) == 0


# Harmless renames that we can insert.
# These modules need names from elsewhere being added to them:
#   subprocess: should provide getoutput and other fns from commands
#               module but these fns are missing: getstatus, mk2arg,
#               mkarg
#   re:         needs an ASCII constant that works compatibly with Py3

# etc: see lib2to3/fixes/fix_imports.py

# (New module name, new object name, old module name, old object name)
MOVES = [('collections', 'UserList', 'UserList', 'UserList'),
         ('collections', 'UserDict', 'UserDict', 'UserDict'),
         ('collections', 'UserString', 'UserString', 'UserString'),
         ('collections', 'ChainMap', 'future.backports.misc', 'ChainMap'),
         ('itertools', 'filterfalse', 'itertools', 'ifilterfalse'),
         ('itertools', 'zip_longest', 'itertools', 'izip_longest'),
         ('sys', 'intern', '__builtin__', 'intern'),
         ('multiprocessing', 'SimpleQueue', 'multiprocessing.queues', 'SimpleQueue'),
         # The re module has no ASCII flag in Py2, but this is the default.
         # Set re.ASCII to a zero constant. stat.ST_MODE just happens to be one
         # (and it exists on Py2.6+).
         ('re', 'ASCII', 'stat', 'ST_MODE'),
         ('base64', 'encodebytes', 'base64', 'encodestring'),
         ('base64', 'decodebytes', 'base64', 'decodestring'),
         ('subprocess', 'getoutput', 'commands', 'getoutput'),
         ('subprocess', 'getstatusoutput', 'commands', 'getstatusoutput'),
         ('subprocess', 'check_output', 'future.backports.misc', 'check_output'),
         ('math', 'ceil', 'future.backports.misc', 'ceil'),
         ('collections', 'OrderedDict', 'future.backports.misc', 'OrderedDict'),
         ('collections', 'Counter', 'future.backports.misc', 'Counter'),
         ('itertools', 'count', 'future.backports.misc', 'count'),
         ('reprlib', 'recursive_repr', 'future.backports.misc', 'recursive_repr'),
         ('functools', 'cmp_to_key', 'future.backports.misc', 'cmp_to_key'),

# This is of no use, since "import urllib.request" etc. would still fail:
#          ('urllib', 'error', 'future.moves.urllib', 'error'),
#          ('urllib', 'parse', 'future.moves.urllib', 'parse'),
#          ('urllib', 'request', 'future.moves.urllib', 'request'),
#          ('urllib', 'response', 'future.moves.urllib', 'response'),
#          ('urllib', 'robotparser', 'future.moves.urllib', 'robotparser'),
        ]
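install_aliases() below walks this table and applies each tuple with a getattr/setattr pair. A self-contained sketch of that single step, using a throwaway module and a real itertools function purely for illustration (on Py3 the object already exists under its new name, so the copy is a no-op in spirit):

```python
import sys
import types
import itertools

def apply_move(newmod, newobjname, oldmodname, oldobjname):
    # Import the old module and copy the named object onto the new module:
    __import__(oldmodname)
    oldmod = sys.modules[oldmodname]
    setattr(newmod, newobjname, getattr(oldmod, oldobjname))

demo = types.ModuleType('demo')   # hypothetical target module
apply_move(demo, 'filterfalse', 'itertools', 'filterfalse')
assert demo.filterfalse is itertools.filterfalse
```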


# A minimal example of an import hook:
# class WarnOnImport(object):
#     def __init__(self, *args):
#         self.module_names = args
#
#     def find_module(self, fullname, path=None):
#         if fullname in self.module_names:
#             self.path = path
#             return self
#         return None
#
#     def load_module(self, name):
#         if name in sys.modules:
#             return sys.modules[name]
#         module_info = imp.find_module(name, self.path)
#         module = imp.load_module(name, *module_info)
#         sys.modules[name] = module
#         flog.warning("Imported deprecated module %s", name)
#         return module


class RenameImport(object):
    """
    An import hook that maps Py3-style module names to their Py2 equivalents.
    """
    # Different RenameImport classes are created when importing this module from
    # different source files. This causes isinstance(hook, RenameImport) checks
    # to produce inconsistent results. We add this RENAMER attribute here so
    # remove_hooks() and install_hooks() can find instances of these classes
    # easily:
    RENAMER = True

    def __init__(self, old_to_new):
        '''
        Pass in a dictionary-like object mapping from old names to new
        names. E.g. {'ConfigParser': 'configparser', 'cPickle': 'pickle'}
        '''
        self.old_to_new = old_to_new
        both = set(old_to_new.keys()) & set(old_to_new.values())
        assert (len(both) == 0 and
                len(set(old_to_new.values())) == len(old_to_new.values())), \
               'Ambiguity in renaming (handler not implemented)'
        self.new_to_old = dict((new, old) for (old, new) in old_to_new.items())

    def find_module(self, fullname, path=None):
        # Handles hierarchical importing: package.module.module2
        new_base_names = set([s.split('.')[0] for s in self.new_to_old])
        # Before v0.12: Was: if fullname in set(self.old_to_new) | new_base_names:
        if fullname in new_base_names:
            return self
        return None

    def load_module(self, name):
        path = None
        if name in sys.modules:
            return sys.modules[name]
        elif name in self.new_to_old:
            # New name. Look up the corresponding old (Py2) name:
            oldname = self.new_to_old[name]
            module = self._find_and_load_module(oldname)
            # module.__future_module__ = True
        else:
            module = self._find_and_load_module(name)
        # In any case, make it available under the requested (Py3) name
        sys.modules[name] = module
        return module

    def _find_and_load_module(self, name, path=None):
        """
        Finds and loads the module. If the name is dotted, each parent
        package is loaded first and its __path__ is used for the lookup.
        """
        bits = name.split('.')
        while len(bits) > 1:
            # Treat the first bit as a package
            packagename = bits.pop(0)
            package = self._find_and_load_module(packagename, path)
            try:
                path = package.__path__
            except AttributeError:
                # This could be e.g. moves.
                flog.debug('Package {0} has no __path__.'.format(package))
                if name in sys.modules:
                    return sys.modules[name]
                flog.debug('What to do here?')

        name = bits[0]
        module_info = imp.find_module(name, path)
        return imp.load_module(name, *module_info)
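On Py2 the recursion above bottoms out in imp.find_module / imp.load_module. For contrast, the modern importlib API resolves a dotted name in a single call (a sketch for comparison only, not part of the hook):

```python
import importlib

# importlib.import_module handles the package traversal internally:
decoder = importlib.import_module('json.decoder')
assert decoder.__name__ == 'json.decoder'
```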


class hooks(object):
    """
    Acts as a context manager. Saves the state of sys.modules and restores it
    after the 'with' block.

    Use like this:

    >>> from future import standard_library
    >>> with standard_library.hooks():
    ...     import http.client
    >>> import requests

    For this to work, http.client will be scrubbed from sys.modules after the
    'with' block. That way the modules imported in the 'with' block will
    continue to be accessible in the current namespace but not from any
    imported modules (like requests).
    """
    def __enter__(self):
        # flog.debug('Entering hooks context manager')
        self.old_sys_modules = copy.copy(sys.modules)
        self.hooks_were_installed = detect_hooks()
        # self.scrubbed = scrub_py2_sys_modules()
        install_hooks()
        return self

    def __exit__(self, *args):
        # flog.debug('Exiting hooks context manager')
        # restore_sys_modules(self.scrubbed)
        if not self.hooks_were_installed:
            remove_hooks()
        # scrub_future_sys_modules()

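The sys.modules snapshot idea used by the hooks context manager can be shown in isolation. This simplified variant installs no hooks and, unlike hooks itself, actively scrubs anything imported inside the block:

```python
import copy
import sys
from contextlib import contextmanager

@contextmanager
def scoped_imports():
    snapshot = copy.copy(sys.modules)
    try:
        yield
    finally:
        # Drop modules that first appeared inside the block:
        for name in set(sys.modules) - set(snapshot):
            del sys.modules[name]

sys.modules.pop('colorsys', None)      # ensure a clean slate for the demo
with scoped_imports():
    import colorsys
    assert 'colorsys' in sys.modules
assert 'colorsys' not in sys.modules
```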
# Sanity check for is_py2_stdlib_module(): We aren't replacing any
# builtin modules names:
if PY2:
    assert len(set(RENAMES.values()) & set(sys.builtin_module_names)) == 0


def is_py2_stdlib_module(m):
    """
    Tries to infer whether the module m is from the Python 2 standard library.
    This may not be reliable on all systems.
    """
    if PY3:
        return False
    if 'stdlib_path' not in is_py2_stdlib_module.__dict__:
        stdlib_files = [contextlib.__file__, os.__file__, copy.__file__]
        stdlib_paths = [os.path.split(f)[0] for f in stdlib_files]
        if len(set(stdlib_paths)) != 1:
            # This seems to happen on travis-ci.org. Very strange. We'll try to
            # ignore it.
            flog.warning('Multiple locations found for the Python standard '
                         'library: %s' % stdlib_paths)
        # Choose the first one arbitrarily
        is_py2_stdlib_module.stdlib_path = stdlib_paths[0]

    if m.__name__ in sys.builtin_module_names:
        return True

    if hasattr(m, '__file__'):
        modpath = os.path.split(m.__file__)
        if (modpath[0].startswith(is_py2_stdlib_module.stdlib_path) and
            'site-packages' not in modpath[0]):
            return True

    return False


def scrub_py2_sys_modules():
    """
    Removes any Python 2 standard library modules from ``sys.modules`` that
    would interfere with Py3-style imports using import hooks. Examples are
    modules with the same names (like urllib or email).

    (Note that currently import hooks are disabled for modules like these
    with ambiguous names anyway ...)
    """
    if PY3:
        return {}
    scrubbed = {}
    for modulename in REPLACED_MODULES & set(RENAMES.keys()):
        if modulename not in sys.modules:
            continue

        module = sys.modules[modulename]

        if is_py2_stdlib_module(module):
            flog.debug('Deleting (Py2) {} from sys.modules'.format(modulename))
            scrubbed[modulename] = sys.modules[modulename]
            del sys.modules[modulename]
    return scrubbed


def scrub_future_sys_modules():
    """
    Deprecated.
    """
    return {}

class suspend_hooks(object):
    """
    Acts as a context manager. Use like this:

    >>> from future import standard_library
    >>> standard_library.install_hooks()
    >>> import http.client
    >>> # ...
    >>> with standard_library.suspend_hooks():
    ...     import requests     # incompatible with ``future``'s standard library hooks

    If the hooks were disabled before the context, they are not installed when
    the context is left.
    """
    def __enter__(self):
        self.hooks_were_installed = detect_hooks()
        remove_hooks()
        # self.scrubbed = scrub_future_sys_modules()
        return self

    def __exit__(self, *args):
        if self.hooks_were_installed:
            install_hooks()
        # restore_sys_modules(self.scrubbed)


def restore_sys_modules(scrubbed):
    """
    Add any previously scrubbed modules back to the sys.modules cache,
    but only if it's safe to do so.
    """
    clash = set(sys.modules) & set(scrubbed)
    if len(clash) != 0:
        # If several, choose one arbitrarily to raise an exception about
        first = list(clash)[0]
        raise ImportError('future module {} clashes with Py2 module'
                          .format(first))
    sys.modules.update(scrubbed)
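The safety check above is a plain set intersection between the live module cache and the scrubbed snapshot; in miniature (with placeholder objects standing in for real modules):

```python
# Placeholder "modules"; only the key names matter for the check:
scrubbed = {'urllib': object(), 'email': object()}
live_modules = {'email': object(), 'os': object()}

clash = set(live_modules) & set(scrubbed)
assert clash == {'email'}       # restoring would shadow the live 'email'
```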


def install_aliases():
    """
    Monkey-patches the standard library in Py2.6/7 to provide
    aliases for better Py3 compatibility.
    """
    if PY3:
        return
    # if hasattr(install_aliases, 'run_already'):
    #     return
    for (newmodname, newobjname, oldmodname, oldobjname) in MOVES:
        __import__(newmodname)
        # We look up the module in sys.modules because __import__ just returns the
        # top-level package:
        newmod = sys.modules[newmodname]
        # newmod.__future_module__ = True

        __import__(oldmodname)
        oldmod = sys.modules[oldmodname]

        obj = getattr(oldmod, oldobjname)
        setattr(newmod, newobjname, obj)

    # Hack for urllib so it appears to have the same structure on Py2 as on Py3
    import urllib
    from future.backports.urllib import request
    from future.backports.urllib import response
    from future.backports.urllib import parse
    from future.backports.urllib import error
    from future.backports.urllib import robotparser
    urllib.request = request
    urllib.response = response
    urllib.parse = parse
    urllib.error = error
    urllib.robotparser = robotparser
    sys.modules['urllib.request'] = request
    sys.modules['urllib.response'] = response
    sys.modules['urllib.parse'] = parse
    sys.modules['urllib.error'] = error
    sys.modules['urllib.robotparser'] = robotparser

    # Patch the test module so it appears to have the same structure on Py2 as on Py3
    try:
        import test
    except ImportError:
        pass
    try:
        from future.moves.test import support
    except ImportError:
        pass
    else:
        test.support = support
        sys.modules['test.support'] = support

    # Patch the dbm module so it appears to have the same structure on Py2 as on Py3
    try:
        import dbm
    except ImportError:
        pass
    else:
        from future.moves.dbm import dumb
        dbm.dumb = dumb
        sys.modules['dbm.dumb'] = dumb
        try:
            from future.moves.dbm import gnu
        except ImportError:
            pass
        else:
            dbm.gnu = gnu
            sys.modules['dbm.gnu'] = gnu
        try:
            from future.moves.dbm import ndbm
        except ImportError:
            pass
        else:
            dbm.ndbm = ndbm
            sys.modules['dbm.ndbm'] = ndbm

    # install_aliases.run_already = True


def install_hooks():
    """
    This function installs the future.standard_library import hook into
    sys.meta_path.
    """
    if PY3:
        return

    install_aliases()

    flog.debug('sys.meta_path was: {0}'.format(sys.meta_path))
    flog.debug('Installing hooks ...')

    # Add it unless it's there already
    newhook = RenameImport(RENAMES)
    if not detect_hooks():
        sys.meta_path.append(newhook)
    flog.debug('sys.meta_path is now: {0}'.format(sys.meta_path))


def enable_hooks():
    """
    Deprecated. Use install_hooks() instead. This will be removed by
    ``future`` v1.0.
    """
    install_hooks()


def remove_hooks(scrub_sys_modules=False):
    """
    This function removes the import hook from sys.meta_path.
    """
    if PY3:
        return
    flog.debug('Uninstalling hooks ...')
    # Loop backwards, so deleting items keeps the ordering:
    for i, hook in list(enumerate(sys.meta_path))[::-1]:
        if hasattr(hook, 'RENAMER'):
            del sys.meta_path[i]

    # Explicit is better than implicit. In the future the interface should
    # probably change so that scrubbing the import hooks requires a separate
    # function call. Left as is for now for backward compatibility with
    # v0.11.x.
    if scrub_sys_modules:
        scrub_future_sys_modules()


def disable_hooks():
    """
    Deprecated. Use remove_hooks() instead. This will be removed by
    ``future`` v1.0.
    """
    remove_hooks()


def detect_hooks():
    """
    Returns True if the import hooks are installed, False if not.
    """
    flog.debug('Detecting hooks ...')
    present = any([hasattr(hook, 'RENAMER') for hook in sys.meta_path])
    if present:
        flog.debug('Detected.')
    else:
        flog.debug('Not detected.')
    return present
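Detection is pure duck typing: anything on sys.meta_path carrying a RENAMER attribute counts as one of our hooks. Demonstrated here against a plain list standing in for sys.meta_path, so the real hook state is untouched:

```python
class FakeRenamer(object):
    RENAMER = True           # same marker attribute RenameImport carries

fake_meta_path = [object(), FakeRenamer()]
assert any(hasattr(hook, 'RENAMER') for hook in fake_meta_path)
assert not any(hasattr(hook, 'RENAMER') for hook in [object(), object()])
```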


# As of v0.12, this no longer happens implicitly:
# if not PY3:
#     install_hooks()


if not hasattr(sys, 'py2_modules'):
    sys.py2_modules = {}

def cache_py2_modules():
    """
    Currently this function is unneeded, as we are not attempting to provide
    import hooks for modules with ambiguous names: email, urllib, pickle.
    """
    if len(sys.py2_modules) != 0:
        return
    assert not detect_hooks()
    import urllib
    sys.py2_modules['urllib'] = urllib

    import email
    sys.py2_modules['email'] = email

    import pickle
    sys.py2_modules['pickle'] = pickle

    # Not all Python installations have test module. (Anaconda doesn't, for example.)
    # try:
    #     import test
    # except ImportError:
    #     sys.py2_modules['test'] = None
    # sys.py2_modules['test'] = test

    # import dbm
    # sys.py2_modules['dbm'] = dbm


def import_(module_name, backport=False):
    """
    Pass a (potentially dotted) module name of a Python 3 standard library
    module. This function imports the module compatibly on Py2 and Py3 and
    returns the top-level module.

    Example use:
        >>> http = import_('http.client')
        >>> http = import_('http.server')
        >>> urllib = import_('urllib.request')

    Then:
        >>> conn = http.client.HTTPConnection(...)
        >>> response = urllib.request.urlopen('http://mywebsite.com')
        >>> # etc.

    Use as follows:
        >>> package_name = import_(module_name)

    On Py3, equivalent to this:

        >>> import module_name

    On Py2, equivalent to this if backport=False:

        >>> from future.moves import module_name

    or to this if backport=True:

        >>> from future.backports import module_name

    except that it also handles dotted module names such as ``http.client``
    The effect then is like this:

        >>> from future.backports import module
        >>> from future.backports.module import submodule
        >>> module.submodule = submodule

    Note that this would be a SyntaxError in Python:

        >>> from future.backports import http.client

    """
    # Python 2.6 doesn't have importlib in the stdlib, so it requires
    # the backported ``importlib`` package from PyPI as a dependency to use
    # this function:
    import importlib

    if PY3:
        return __import__(module_name)
    else:
        # client.blah = blah
        # Then http.client = client
        # etc.
        if backport:
            prefix = 'future.backports'
        else:
            prefix = 'future.moves'
        parts = prefix.split('.') + module_name.split('.')

        modules = []
        for i, part in enumerate(parts):
            sofar = '.'.join(parts[:i+1])
            modules.append(importlib.import_module(sofar))
        for i, part in reversed(list(enumerate(parts))):
            if i == 0:
                break
            setattr(modules[i-1], part, modules[i])

        # Return the next-most top-level module after future.backports / future.moves:
        return modules[2]
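The linking loop above reproduces what a normal dotted import does: each parent package ends up with its child bound as an attribute. The stdlib behaviour it imitates, shown on a real package:

```python
import importlib

leaf = importlib.import_module('xml.etree.ElementTree')
import xml
# The import machinery bound the submodules onto their parents:
assert xml.etree.ElementTree is leaf
```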


def from_import(module_name, *symbol_names, **kwargs):
    """
    Example use:
        >>> HTTPConnection = from_import('http.client', 'HTTPConnection')
        >>> HTTPServer = from_import('http.server', 'HTTPServer')
        >>> urlopen, urlparse = from_import('urllib.request', 'urlopen', 'urlparse')

    Equivalent to this on Py3:

        >>> from module_name import symbol_names[0], symbol_names[1], ...

    and this on Py2:

        >>> from future.moves.module_name import symbol_names[0], ...

    or:

        >>> from future.backports.module_name import symbol_names[0], ...

    except that it also handles dotted module names such as ``http.client``.
    """

    if PY3:
        return __import__(module_name)
    else:
        # importlib is only in the stdlib on Py2.7+; Py2.6 needs the
        # backported ``importlib`` package from PyPI (as in import_() above):
        import importlib

        if 'backport' in kwargs and bool(kwargs['backport']):
            prefix = 'future.backports'
        else:
            prefix = 'future.moves'
        module = importlib.import_module(prefix + '.' + module_name)
        output = [getattr(module, name) for name in symbol_names]
        if len(output) == 1:
            return output[0]
        else:
            return output


class exclude_local_folder_imports(object):
    """
    A context-manager that prevents standard library modules like configparser
    from being imported from the local python-future source folder on Py3.

    This was needed prior to v0.16.0 because the presence of a configparser
    folder would otherwise have prevented setuptools from running on Py3. It
    may no longer be necessary.
    """
    def __init__(self, *args):
        assert len(args) > 0
        self.module_names = args
        # Disallow dotted module names like http.client:
        if any(['.' in m for m in self.module_names]):
            raise NotImplementedError('Dotted module names are not supported')

    def __enter__(self):
        self.old_sys_path = copy.copy(sys.path)
        self.old_sys_modules = copy.copy(sys.modules)
        if sys.version_info[0] < 3:
            return
        # The presence of all these indicates we've found our source folder,
        # because `builtins` won't have been installed in site-packages by setup.py:
        FUTURE_SOURCE_SUBFOLDERS = ['future', 'past', 'libfuturize', 'libpasteurize', 'builtins']

        # Look for the future source folder:
        for folder in self.old_sys_path:
            if all([os.path.exists(os.path.join(folder, subfolder))
                    for subfolder in FUTURE_SOURCE_SUBFOLDERS]):
                # Found it. Remove it.
                sys.path.remove(folder)

        # Ensure we import the system module:
        for m in self.module_names:
            # Delete the module and any submodules from sys.modules:
            # for key in list(sys.modules):
            #     if key == m or key.startswith(m + '.'):
            #         try:
            #             del sys.modules[key]
            #         except KeyError:
            #             pass
            try:
                __import__(m, level=0)
            except ImportError:
                # There's a problem importing the system module, e.g. the
                # winreg module is not available except on Windows:
                pass

    def __exit__(self, *args):
        # Restore sys.path and sys.modules:
        sys.path = self.old_sys_path
        for m in set(self.old_sys_modules.keys()) - set(sys.modules.keys()):
            sys.modules[m] = self.old_sys_modules[m]

TOP_LEVEL_MODULES = ['builtins',
                     'copyreg',
                     'html',
                     'http',
                     'queue',
                     'reprlib',
                     'socketserver',
                     'test',
                     'tkinter',
                     'winreg',
                     'xmlrpc',
                     '_dummy_thread',
                     '_markupbase',
                     '_thread',
                    ]

def import_top_level_modules():
    with exclude_local_folder_imports(*TOP_LEVEL_MODULES):
        for m in TOP_LEVEL_MODULES:
            try:
                __import__(m)
            except ImportError:     # e.g. winreg
                pass
t�d�|S)zG
    Returns True if the import hooks are installed, False if not.
    zDetecting hooks ...cSsg|]}t|d��qS)rw)r�)rMr�rLrLrPr`NrRz detect_hooks.<locals>.<listcomp>z	Detected.z
Not detected.)rmrn�anyr1r�)ZpresentrLrLrPr{Is

r{�py2_modulescCsVttj�dkrdSt�rJ�ddl}|tjd<ddl}|tjd<ddl}|tjd<dS)z�
    Currently this function is unneeded, as we are not attempting to provide import hooks
    for modules with ambiguous names: email, urllib, pickle.
    rNr
�emailr)rWr1r�r{r
r�r)r
r�rrLrLrP�cache_py2_modules^s


r�c	Cs�ddl}trt|�S|rd}nd}|�d�|�d�}g}t|�D].\}}d�|d|d��}|�|�|��qBtt	t|���D],\}}|dkr�q�t
||d|||�q�|dSdS)a�
    Pass a (potentially dotted) module name of a Python 3 standard library
    module. This function imports the module compatibly on Py2 and Py3 and
    returns the top-level module.

    Example use:
        >>> http = import_('http.client')
        >>> http = import_('http.server')
        >>> urllib = import_('urllib.request')

    Then:
        >>> conn = http.client.HTTPConnection(...)
        >>> response = urllib.request.urlopen('http://mywebsite.com')
        >>> # etc.

    Use as follows:
        >>> package_name = import_(module_name)

    On Py3, equivalent to this:

        >>> import module_name

    On Py2, equivalent to this if backport=False:

        >>> from future.moves import module_name

    or to this if backport=True:

        >>> from future.backports import module_name

    except that it also handles dotted module names such as ``http.client``
    The effect then is like this:

        >>> from future.backports import module
        >>> from future.backports.module import submodule
        >>> module.submodule = submodule

    Note that this would be a SyntaxError in Python:

        >>> from future.backports import http.client

    rN�future.backports�future.movesr]ri�)�	importlibrr�r^r��joinr��
import_module�reversedr�r�)	�module_name�backportr��prefix�partsrdr��partZsofarrLrLrP�import_zs .r�cs~trt|�Sd|vr&t|d�r&d}nd}|�d�|�d�}t�|d|���fdd�|D�}t|�dkrv|dS|Sd	S)
aa
    Example use:
        >>> HTTPConnection = from_import('http.client', 'HTTPConnection')
        >>> HTTPServer = from_import('http.server', 'HTTPServer')
        >>> urlopen, urlparse = from_import('urllib.request', 'urlopen', 'urlparse')

    Equivalent to this on Py3:

        >>> from module_name import symbol_names[0], symbol_names[1], ...

    and this on Py2:

        >>> from future.moves.module_name import symbol_names[0], ...

    or:

        >>> from future.backports.module_name import symbol_names[0], ...

    except that it also handles dotted module names such as ``http.client``.
    r�r�r�r]csg|]}t�|��qSrL)r�)rMrf�rgrLrPr`�rRzfrom_import.<locals>.<listcomp>rirN)rr��boolr^r�r�rW)r�Zsymbol_names�kwargsr�r��outputrLr�rP�from_import�sr�c@s(eZdZdZdd�Zdd�Zdd�ZdS)	�exclude_local_folder_importsaZ
    A context-manager that prevents standard library modules like configparser
    from being imported from the local python-future source folder on Py3.

    (This was need prior to v0.16.0 because the presence of a configparser
    folder would otherwise have prevented setuptools from running on Py3. Maybe
    it's not needed any more?)
    cGs6t|�dksJ�||_tdd�|jD��r2td��dS)NrcSsg|]}d|v�qS)r]rL)rMr�rLrLrPr`�rRz9exclude_local_folder_imports.__init__.<locals>.<listcomp>z%Dotted module names are not supported)rW�module_namesr��NotImplementedErrorr�rLrLrPr\�sz%exclude_local_folder_imports.__init__c	s�t�tj�|_t�tj�|_tjddkr.dSgd�}|jD]&�t�fdd�|D��r<tj���q<|j	D](}zt
|dd�}Wqjty�Yqj0qjdS)Nrr)�futureZpastZlibfuturizeZ
libpasteurizer
cs"g|]}tj�tj��|���qSrL)r�rb�existsr�)rMZ	subfolder��folderrLrPr`s�z:exclude_local_folder_imports.__enter__.<locals>.<listcomp>)�level)ryr1rb�old_sys_pathrdrz�version_info�all�remover�r�r�)r[ZFUTURE_SOURCE_SUBFOLDERSr�rgrLr�rPr�s
�
z&exclude_local_folder_imports.__enter__cGs>|jt_t|j���ttj���D]}|j|tj|<q$dSrK)r�r1rbrTrzrUrd)r[r�r�rLrLrPr�s z%exclude_local_folder_imports.__exit__N)rsrtrurvr\rr�rLrLrLrPr��s r�)r
rrrrrrr	�tkinterrrrrrc
CsVtt��:tD]$}zt|�Wqty0Yq0qWd�n1sH0YdSrK)r��TOP_LEVEL_MODULESr�r�)r�rLrLrP�import_top_level_modules/s
r�)F)F)8rv�
__future__rrrr1�loggingr�r�rpr�ryr��	getLoggerrm�	Formatter�BASIC_FORMATZ
_formatter�
StreamHandlerZ_handler�setFormatter�
addHandler�setLevel�WARNZfuture.utilsrrrTr�r�rWrVr��objectrJrxr�r�r�r�r�r�r�r}r�r�r�r{r�r�r�r�r�r�r�r�rLrLrLrP�<module>sr=





�D"
7F"$P

# ==== future/moves/_markupbase.py ====
from __future__ import absolute_import
from future.utils import PY3

if PY3:
    from _markupbase import *
else:
    __future_module__ = True
    from markupbase import *
# ==== future/moves/http/client.py ====
from future.utils import PY3

if PY3:
    from http.client import *
else:
    from httplib import *
    from httplib import HTTPMessage
    __future_module__ = True
# ==== future/moves/http/cookiejar.py ====
from __future__ import absolute_import
from future.utils import PY3

if PY3:
    from http.cookiejar import *
else:
    __future_module__ = True
    from cookielib import *
# ==== future/moves/http/__init__.py ====
from future.utils import PY3

if not PY3:
    __future_module__ = True
[binary omitted: future/moves/http/__pycache__/*.cpython-39.pyc entries
(cookiejar, __init__, client, cookies, server)]
# ==== future/moves/http/cookies.py ====
from __future__ import absolute_import
from future.utils import PY3

if PY3:
    from http.cookies import *
else:
    __future_module__ = True
    from Cookie import *
    from Cookie import Morsel    # left out of __all__ on Py2.7!
# ==== future/moves/http/server.py ====
from __future__ import absolute_import
from future.utils import PY3

if PY3:
    from http.server import *
else:
    __future_module__ = True
    from BaseHTTPServer import *
    from CGIHTTPServer import *
    from SimpleHTTPServer import *
    try:
        from CGIHTTPServer import _url_collapse_path     # needed for a test
    except ImportError:
        try:
            # Python 2.7.0 to 2.7.3
            from CGIHTTPServer import (
                _url_collapse_path_split as _url_collapse_path)
        except ImportError:
            # Doesn't exist on Python 2.6.x. Ignore it.
            pass
# ==== future/moves/copyreg.py ====
from __future__ import absolute_import
from future.utils import PY3

if PY3:
    import copyreg, sys
    # A "*" import would use Python 3's copyreg.__all__, which does not
    # include all public names in the copyreg API surface; this avoids that
    # problem by making our module _be_ a reference to the actual module.
    sys.modules['future.moves.copyreg'] = copyreg
else:
    __future_module__ = True
    from copy_reg import *
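The comment above describes the trick: rather than re-exporting names through a `*` import (which is limited by `copyreg.__all__`), the shim registers the real stdlib module under its own key in `sys.modules`, so later imports of that key resolve to the full module. A minimal sketch of the same technique; the alias name `copyreg_alias` is made up for this illustration and is not part of `future`:

```python
import importlib
import sys
import copyreg

# Hypothetical alias name, used only for this sketch.
sys.modules['copyreg_alias'] = copyreg

# A later import of the alias resolves straight to the real module,
# so every public name is available, not just those in __all__.
mod = importlib.import_module('copyreg_alias')
assert mod is copyreg
```

This works because the import machinery checks `sys.modules` before searching for the module on disk.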
# ==== future/moves/html/entities.py ====
from __future__ import absolute_import
from future.utils import PY3

if PY3:
    from html.entities import *
else:
    __future_module__ = True
    from htmlentitydefs import *
# ==== future/moves/html/__init__.py ====
from __future__ import absolute_import
from future.utils import PY3
__future_module__ = True

if PY3:
    from html import *
else:
    # cgi.escape isn't good enough for the single Py3.3 html test to pass.
    # Define it inline here instead. From the Py3.4 stdlib. Note that the
    # html.escape() function from the Py3.3 stdlib is not suitable for use on
    # Py2.x.
    """
    General functions for HTML manipulation.
    """

    def escape(s, quote=True):
        """
        Replace special characters "&", "<" and ">" to HTML-safe sequences.
        If the optional flag quote is true (the default), the quotation mark
        characters, both double quote (") and single quote (') characters are also
        translated.
        """
        s = s.replace("&", "&amp;") # Must be done first!
        s = s.replace("<", "&lt;")
        s = s.replace(">", "&gt;")
        if quote:
            s = s.replace('"', "&quot;")
            s = s.replace('\'', "&#x27;")
        return s

    __all__ = ['escape']
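On Py3 the stdlib `html.escape` behaves like the shim above. A quick check of the substitutions; note that `&` must be replaced first, or the ampersands introduced by the other entities would themselves be re-escaped:

```python
from html import escape  # Py3 stdlib; same behavior as the shim above

assert escape('<b>"quoted" & \'single\'</b>') == (
    '&lt;b&gt;&quot;quoted&quot; &amp; &#x27;single&#x27;&lt;/b&gt;')

# With quote=False the two quote characters pass through untouched.
assert escape('a < b', quote=False) == 'a &lt; b'
```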
[binary omitted: future/moves/html/__pycache__/*.cpython-39.pyc entries
(__init__, entities, parser)]
# ==== future/moves/html/parser.py ====
from __future__ import absolute_import
from future.utils import PY3
__future_module__ = True

if PY3:
    from html.parser import *
else:
    from HTMLParser import *
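Either branch exposes the same `HTMLParser` API; a small sketch of the usual subclass-and-feed pattern, written for Py3:

```python
from html.parser import HTMLParser

class TagCollector(HTMLParser):
    """Collects the names of start tags seen in the fed markup."""
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

p = TagCollector()
p.feed('<html><body><p>hi</p></body></html>')
assert p.tags == ['html', 'body', 'p']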
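Either branch exposes the same `HTMLParser` API; a small sketch of the usual subclass-and-feed pattern, written for Py3:

```python
from html.parser import HTMLParser

class TagCollector(HTMLParser):
    """Collects the names of start tags seen in the fed markup."""
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

p = TagCollector()
p.feed('<html><body><p>hi</p></body></html>')
assert p.tags == ['html', 'body', 'p']
```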
# ==== future/moves/tkinter/constants.py ====
from __future__ import absolute_import

from future.utils import PY3

if PY3:
    from tkinter.constants import *
else:
    try:
        from Tkconstants import *
    except ImportError:
        raise ImportError('The Tkconstants module is missing. Does your Py2 '
                          'installation include tkinter?')
# ==== future/moves/tkinter/font.py ====
from __future__ import absolute_import

from future.utils import PY3

if PY3:
    from tkinter.font import *
else:
    try:
        from tkFont import *
    except ImportError:
        raise ImportError('The tkFont module is missing. Does your Py2 '
                          'installation include tkinter?')
# ==== future/moves/tkinter/messagebox.py ====
from __future__ import absolute_import

from future.utils import PY3

if PY3:
    from tkinter.messagebox import *
else:
    try:
        from tkMessageBox import *
    except ImportError:
        raise ImportError('The tkMessageBox module is missing. Does your Py2 '
                          'installation include tkinter?')
# ==== future/moves/tkinter/dnd.py ====
from __future__ import absolute_import

from future.utils import PY3

if PY3:
    from tkinter.dnd import *
else:
    try:
        from Tkdnd import *
    except ImportError:
        raise ImportError('The Tkdnd module is missing. Does your Py2 '
                          'installation include tkinter?')
# ==== future/moves/tkinter/ttk.py ====
from __future__ import absolute_import

from future.utils import PY3

if PY3:
    from tkinter.ttk import *
else:
    try:
        from ttk import *
    except ImportError:
        raise ImportError('The ttk module is missing. Does your Py2 '
                          'installation include tkinter?')
# ==== future/moves/tkinter/__init__.py ====
from __future__ import absolute_import
from future.utils import PY3
__future_module__ = True

if not PY3:
    from Tkinter import *
    from Tkinter import (_cnfmerge, _default_root, _flatten,
                         _support_default_root, _test,
                         _tkinter, _setit)

    try: # >= 2.7.4
        from Tkinter import (_join) 
    except ImportError: 
        pass

    try: # >= 2.7.4
        from Tkinter import (_stringify)
    except ImportError: 
        pass

    try: # >= 2.7.9
        from Tkinter import (_splitdict)
    except ImportError:
        pass

else:
    from tkinter import *
# ==== future/moves/tkinter/dialog.py ====
from __future__ import absolute_import

from future.utils import PY3

if PY3:
    from tkinter.dialog import *
else:
    try:
        from Dialog import *
    except ImportError:
        raise ImportError('The Dialog module is missing. Does your Py2 '
                          'installation include tkinter?')
[binary omitted: future/moves/tkinter/__pycache__/*.cpython-39.pyc entries
(dialog, commondialog, __init__, font, messagebox, colorchooser, tix, ttk,
filedialog, dnd, constants, scrolledtext, simpledialog)]
# ==== future/moves/tkinter/scrolledtext.py ====
from __future__ import absolute_import

from future.utils import PY3

if PY3:
    from tkinter.scrolledtext import *
else:
    try:
        from ScrolledText import *
    except ImportError:
        raise ImportError('The ScrolledText module is missing. Does your Py2 '
                          'installation include tkinter?')
# ==== future/moves/tkinter/simpledialog.py ====
from __future__ import absolute_import

from future.utils import PY3

if PY3:
    from tkinter.simpledialog import *
else:
    try:
        from SimpleDialog import *
    except ImportError:
        raise ImportError('The SimpleDialog module is missing. Does your Py2 '
                          'installation include tkinter?')
# ==== future/moves/tkinter/commondialog.py ====
from __future__ import absolute_import

from future.utils import PY3

if PY3:
    from tkinter.commondialog import *
else:
    try:
        from tkCommonDialog import *
    except ImportError:
        raise ImportError('The tkCommonDialog module is missing. Does your Py2 '
                          'installation include tkinter?')
# ==== future/moves/tkinter/tix.py ====
from __future__ import absolute_import

from future.utils import PY3

if PY3:
    from tkinter.tix import *
else:
    try:
        from Tix import *
    except ImportError:
        raise ImportError('The Tix module is missing. Does your Py2 '
                          'installation include tkinter?')
# ==== future/moves/tkinter/filedialog.py ====
from __future__ import absolute_import

from future.utils import PY3

if PY3:
    from tkinter.filedialog import *
else:
    try:
        from FileDialog import *
    except ImportError:
        raise ImportError('The FileDialog module is missing. Does your Py2 '
                          'installation include tkinter?')
    
    try:
        from tkFileDialog import *
    except ImportError:
        raise ImportError('The tkFileDialog module is missing. Does your Py2 '
                          'installation include tkinter?')
# ==== future/moves/tkinter/colorchooser.py ====
from __future__ import absolute_import

from future.utils import PY3

if PY3:
    from tkinter.colorchooser import *
else:
    try:
        from tkColorChooser import *
    except ImportError:
        raise ImportError('The tkColorChooser module is missing. Does your Py2 '
                          'installation include tkinter?')
# ==== future/moves/queue.py ====
from __future__ import absolute_import
from future.utils import PY3

if PY3:
    from queue import *
else:
    __future_module__ = True
    from Queue import *
# ==== future/moves/subprocess.py ====
from __future__ import absolute_import
from future.utils import PY2, PY26

from subprocess import *

if PY2:
    __future_module__ = True
    from commands import getoutput, getstatusoutput

if PY26:
    from future.backports.misc import check_output
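On Py3, `getoutput` and `getstatusoutput` already live in `subprocess`, so the shim above only needs to pull in the Py2 equivalents from the old `commands` module. A quick demonstration, assuming a POSIX shell is available:

```python
from subprocess import getoutput, getstatusoutput

# Both helpers run the command through the shell and strip the
# trailing newline from the captured output.
out = getoutput('echo hello')
status, text = getstatusoutput('echo hello')

assert out == 'hello'
assert status == 0 and text == 'hello'
```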
# ==== future/moves/builtins.py ====
from __future__ import absolute_import
from future.utils import PY3

if PY3:
    from builtins import *
else:
    __future_module__ = True
    from __builtin__ import *
    # Overwrite any old definitions with the equivalent future.builtins ones:
    from future.builtins import *
# ==== future/moves/__init__.py ====
# future.moves package
from __future__ import absolute_import
import sys
__future_module__ = True
from future.standard_library import import_top_level_modules

if sys.version_info[0] >= 3:
    import_top_level_modules()
# ==== future/moves/dbm/gnu.py ====
from __future__ import absolute_import

from future.utils import PY3

if PY3:
    from dbm.gnu import *
else:
    __future_module__ = True
    from gdbm import *
# ==== future/moves/dbm/ndbm.py ====
from __future__ import absolute_import

from future.utils import PY3

if PY3:
    from dbm.ndbm import *
else:
    __future_module__ = True
    from dbm import *
# ==== future/moves/dbm/__init__.py ====
from __future__ import absolute_import
from future.utils import PY3

if PY3:
    from dbm import *
else:
    __future_module__ = True
    from whichdb import *
    from anydbm import *

# Py3.3's dbm/__init__.py imports ndbm but doesn't expose it via __all__.
# In case some (badly written) code depends on dbm.ndbm after import dbm,
# we simulate this:
if PY3:
    from dbm import ndbm
else:
    try:
        from future.moves.dbm import ndbm
    except ImportError:
        ndbm = None
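Of the aliased backends, `dbm.dumb` is pure Python and present on every platform, which makes it a convenient way to exercise the package. A minimal round trip in a temporary directory (dbm databases have supported the context-manager protocol since Python 3.4):

```python
import os
import tempfile
import dbm.dumb  # pure-Python backend, available everywhere

path = os.path.join(tempfile.mkdtemp(), 'demo')

with dbm.dumb.open(path, 'c') as db:   # 'c' creates if missing
    db[b'greeting'] = b'hello'         # values are bytes

with dbm.dumb.open(path, 'r') as db:
    value = db[b'greeting']

assert value == b'hello'
```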
[binary omitted: future/moves/dbm/__pycache__/*.cpython-39.pyc entries
(__init__, ndbm, gnu, dumb)]

# ==== future/moves/dbm/dumb.py ====
from __future__ import absolute_import

from future.utils import PY3

if PY3:
    from dbm.dumb import *
else:
    __future_module__ = True
    from dumbdbm import *
# ==== future/moves/winreg.py ====
from __future__ import absolute_import
from future.utils import PY3

if PY3:
    from winreg import *
else:
    __future_module__ = True
    from _winreg import *
# ==== future/moves/configparser.py ====
from __future__ import absolute_import

from future.utils import PY2

if PY2:
    from ConfigParser import *
else:
    from configparser import *
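Whichever branch is taken, the module exposes the familiar `ConfigParser` API; a short Py3 sketch of parsing an in-memory config:

```python
from configparser import ConfigParser

cp = ConfigParser()
cp.read_string("""
[server]
host = example.org
port = 8080
""")

assert cp['server']['host'] == 'example.org'
assert cp.getint('server', 'port') == 8080   # typed accessor
```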
# ==== future/moves/pickle.py ====
from __future__ import absolute_import
from future.utils import PY3

if PY3:
    from pickle import *
else:
    __future_module__ = True
    try:
        from cPickle import *
    except ImportError:
        from pickle import *
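Both branches expose the same `dumps`/`loads` interface (on Py2 the faster C implementation `cPickle` is preferred when available); a minimal round trip:

```python
import pickle

data = {'spam': [1, 2, 3], 'eggs': ('a', 'b')}
blob = pickle.dumps(data)           # serialize to bytes
assert pickle.loads(blob) == data   # deserialize back to an equal object
```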
[binary omitted: future/moves/__pycache__/*.cpython-39.pyc entries
(subprocess, _thread, collections, __init__, _dummy_thread, itertools, sys,
queue, socketserver, _markupbase, reprlib, configparser, winreg,
multiprocessing, builtins, pickle, copyreg)]

# ==== future/moves/multiprocessing.py ====
from __future__ import absolute_import
from future.utils import PY3

from multiprocessing import *
if not PY3:
    __future_module__ = True
    from multiprocessing.queues import SimpleQueue

# future/moves/collections.py
from __future__ import absolute_import
import sys

from future.utils import PY2, PY26
__future_module__ = True

from collections import *

if PY2:
    from UserDict import UserDict
    from UserList import UserList
    from UserString import UserString

if PY26:
    from future.backports.misc import OrderedDict, Counter

if sys.version_info < (3, 3):
    from future.backports.misc import ChainMap, _count_elements
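On Python 3 the shim above is a pass-through: `collections` itself already provides every name it re-exports. A minimal sketch of the behavior the shim guarantees on older interpreters, using only the stdlib:

```python
from collections import ChainMap, Counter, OrderedDict

# ChainMap layers several mappings; lookups use the first map that has the key.
cm = ChainMap({'a': 1}, {'a': 2, 'b': 3})
print(cm['a'], cm['b'])   # the first map wins for 'a'; 'b' falls through

# Counter and OrderedDict are the other names backported for Python 2.6.
print(Counter('abracadabra').most_common(1))
```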

# future/moves/_dummy_thread.py
from __future__ import absolute_import
from future.utils import PY3, PY39_PLUS


if PY39_PLUS:
    # _dummy_thread and dummy_threading modules were both deprecated in
    # Python 3.7 and removed in Python 3.9
    from _thread import *
elif PY3:
    from _dummy_thread import *
else:
    __future_module__ = True
    from dummy_thread import *

# future/moves/xmlrpc/client.py
from __future__ import absolute_import
from future.utils import PY3

if PY3:
    from xmlrpc.client import *
else:
    from xmlrpclib import *

# future/moves/xmlrpc/__init__.py  (empty file)

# future/moves/xmlrpc/server.py
from __future__ import absolute_import
from future.utils import PY3

if PY3:
    from xmlrpc.server import *
else:
    from xmlrpclib import *

# future/moves/sys.py
from __future__ import absolute_import

from future.utils import PY2

from sys import *

if PY2:
    from __builtin__ import intern

# future/moves/urllib/response.py
from future import standard_library
from future.utils import PY3

if PY3:
    from urllib.response import *
else:
    __future_module__ = True
    with standard_library.suspend_hooks():
        from urllib import (addbase,
                            addclosehook,
                            addinfo,
                            addinfourl)

# future/moves/urllib/__init__.py
from __future__ import absolute_import
from future.utils import PY3

if not PY3:
    __future_module__ = True

# future/moves/urllib/error.py
from __future__ import absolute_import
from future.standard_library import suspend_hooks

from future.utils import PY3

if PY3:
    from urllib.error import *
else:
    __future_module__ = True

    # We use this method to get at the original Py2 urllib before any renaming magic
    # ContentTooShortError = sys.py2_modules['urllib'].ContentTooShortError

    with suspend_hooks():
        from urllib import ContentTooShortError
        from urllib2 import URLError, HTTPError

# future/moves/urllib/parse.py
from __future__ import absolute_import
from future.standard_library import suspend_hooks

from future.utils import PY3

if PY3:
    from urllib.parse import *
else:
    __future_module__ = True
    from urlparse import (ParseResult, SplitResult, parse_qs, parse_qsl,
                          urldefrag, urljoin, urlparse, urlsplit,
                          urlunparse, urlunsplit)

    # we use this method to get at the original py2 urllib before any renaming
    # quote = sys.py2_modules['urllib'].quote
    # quote_plus = sys.py2_modules['urllib'].quote_plus
    # unquote = sys.py2_modules['urllib'].unquote
    # unquote_plus = sys.py2_modules['urllib'].unquote_plus
    # urlencode = sys.py2_modules['urllib'].urlencode
    # splitquery = sys.py2_modules['urllib'].splitquery

    with suspend_hooks():
        from urllib import (quote,
                            quote_plus,
                            unquote,
                            unquote_plus,
                            urlencode,
                            splitquery)

# future/moves/urllib/request.py
from __future__ import absolute_import

from future.standard_library import suspend_hooks
from future.utils import PY3

if PY3:
    from urllib.request import *
    # These aren't in __all__:
    from urllib.request import (getproxies,
                                pathname2url,
                                proxy_bypass,
                                quote,
                                request_host,
                                thishost,
                                unquote,
                                url2pathname,
                                urlcleanup,
                                urljoin,
                                urlopen,
                                urlparse,
                                urlretrieve,
                                urlsplit,
                                urlunparse)

    from urllib.parse import (splitattr,
                              splithost,
                              splitpasswd,
                              splitport,
                              splitquery,
                              splittag,
                              splittype,
                              splituser,
                              splitvalue,
                              to_bytes,
                              unwrap)
else:
    __future_module__ = True
    with suspend_hooks():
        from urllib import *
        from urllib2 import *
        from urlparse import *

        # Rename:
        from urllib import toBytes    # missing from __all__ on Py2.6
        to_bytes = toBytes

        # from urllib import (pathname2url,
        #                     url2pathname,
        #                     getproxies,
        #                     urlretrieve,
        #                     urlcleanup,
        #                     URLopener,
        #                     FancyURLopener,
        #                     proxy_bypass)

        # from urllib2 import (
        #                  AbstractBasicAuthHandler,
        #                  AbstractDigestAuthHandler,
        #                  BaseHandler,
        #                  CacheFTPHandler,
        #                  FileHandler,
        #                  FTPHandler,
        #                  HTTPBasicAuthHandler,
        #                  HTTPCookieProcessor,
        #                  HTTPDefaultErrorHandler,
        #                  HTTPDigestAuthHandler,
        #                  HTTPErrorProcessor,
        #                  HTTPHandler,
        #                  HTTPPasswordMgr,
        #                  HTTPPasswordMgrWithDefaultRealm,
        #                  HTTPRedirectHandler,
        #                  HTTPSHandler,
        #                  URLError,
        #                  build_opener,
        #                  install_opener,
        #                  OpenerDirector,
        #                  ProxyBasicAuthHandler,
        #                  ProxyDigestAuthHandler,
        #                  ProxyHandler,
        #                  Request,
        #                  UnknownHandler,
        #                  urlopen,
        #                 )

        # from urlparse import (
        #                  urldefrag
        #                  urljoin,
        #                  urlparse,
        #                  urlunparse,
        #                  urlsplit,
        #                  urlunsplit,
        #                  parse_qs,
        #                  parse_qsl,
        #                 )
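Among the names pulled from `urllib.request` above are the path/URL conversion helpers. A small round-trip sketch (the percent-encoding shown assumes a POSIX platform; `pathname2url` behaves differently on Windows):

```python
from urllib.request import pathname2url, url2pathname

path = '/tmp/some file.txt'
url_path = pathname2url(path)   # percent-encodes the space on POSIX
print(url_path)

# url2pathname inverts the transformation.
assert url2pathname(url_path) == path
```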

# future/moves/urllib/robotparser.py
from __future__ import absolute_import
from future.utils import PY3

if PY3:
    from urllib.robotparser import *
else:
    __future_module__ = True
    from robotparser import *

# future/moves/itertools.py
from __future__ import absolute_import

from itertools import *
try:
    zip_longest = izip_longest
    filterfalse = ifilterfalse
except NameError:
    pass
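The `try`/`except NameError` above aliases the Py2 `i*`-prefixed names to their Py3 spellings; on Py3 the names already exist and the block is a no-op. A sketch of the normalized API, using only the stdlib:

```python
from itertools import zip_longest, filterfalse  # the Py3 names the shim guarantees

# zip_longest pads the shorter iterable instead of truncating.
print(list(zip_longest([1, 2, 3], 'ab', fillvalue='-')))

# filterfalse keeps the elements for which the predicate is false.
print(list(filterfalse(lambda n: n % 2, range(6))))
```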

# future/moves/test/__init__.py
from __future__ import absolute_import
from future.utils import PY3

if not PY3:
    __future_module__ = True

# future/moves/test/support.py
from __future__ import absolute_import

import sys

from future.standard_library import suspend_hooks
from future.utils import PY3

if PY3:
    from test.support import *
    if sys.version_info[:2] >= (3, 10):
        from test.support.os_helper import (
            EnvironmentVarGuard,
            TESTFN,
        )
        from test.support.warnings_helper import check_warnings
else:
    __future_module__ = True
    with suspend_hooks():
        from test.test_support import *

# future/moves/socketserver.py
from __future__ import absolute_import
from future.utils import PY3

if PY3:
    from socketserver import *
else:
    __future_module__ = True
    from SocketServer import *

# future/moves/_thread.py
from __future__ import absolute_import
from future.utils import PY3

if PY3:
    from _thread import *
else:
    __future_module__ = True
    from thread import *

# future/moves/reprlib.py
from __future__ import absolute_import
from future.utils import PY3

if PY3:
    from reprlib import *
else:
    __future_module__ = True
    from repr import *

# future/backports/_markupbase.py
"""Shared support for scanning document type declarations in HTML and XHTML.

Backported for python-future from Python 3.3. Reason: ParserBase is an
old-style class in the Python 2.7 source of markupbase.py, which I suspect
might be the cause of sporadic unit-test failures on travis-ci.org with
test_htmlparser.py.  The test failures look like this:

    ======================================================================

ERROR: test_attr_entity_replacement (future.tests.test_htmlparser.AttributesStrictTestCase)

----------------------------------------------------------------------

Traceback (most recent call last):
  File "/home/travis/build/edschofield/python-future/future/tests/test_htmlparser.py", line 661, in test_attr_entity_replacement
    [("starttag", "a", [("b", "&><\"'")])])
  File "/home/travis/build/edschofield/python-future/future/tests/test_htmlparser.py", line 93, in _run_check
    collector = self.get_collector()
  File "/home/travis/build/edschofield/python-future/future/tests/test_htmlparser.py", line 617, in get_collector
    return EventCollector(strict=True)
  File "/home/travis/build/edschofield/python-future/future/tests/test_htmlparser.py", line 27, in __init__
    html.parser.HTMLParser.__init__(self, *args, **kw)
  File "/home/travis/build/edschofield/python-future/future/backports/html/parser.py", line 135, in __init__
    self.reset()
  File "/home/travis/build/edschofield/python-future/future/backports/html/parser.py", line 143, in reset
    _markupbase.ParserBase.reset(self)

TypeError: unbound method reset() must be called with ParserBase instance as first argument (got EventCollector instance instead)

This module is used as a foundation for the html.parser module.  It has no
documented public API and should not be used directly.

"""

import re

_declname_match = re.compile(r'[a-zA-Z][-_.a-zA-Z0-9]*\s*').match
_declstringlit_match = re.compile(r'(\'[^\']*\'|"[^"]*")\s*').match
_commentclose = re.compile(r'--\s*>')
_markedsectionclose = re.compile(r']\s*]\s*>')

# An analysis of the MS-Word extensions is available at
# http://www.planetpublish.com/xmlarena/xap/Thursday/WordtoXML.pdf

_msmarkedsectionclose = re.compile(r']\s*>')

del re


class ParserBase(object):
    """Parser base class which provides some common support methods used
    by the SGML/HTML and XHTML parsers."""

    def __init__(self):
        if self.__class__ is ParserBase:
            raise RuntimeError(
                "_markupbase.ParserBase must be subclassed")

    def error(self, message):
        raise NotImplementedError(
            "subclasses of ParserBase must override error()")

    def reset(self):
        self.lineno = 1
        self.offset = 0

    def getpos(self):
        """Return current line number and offset."""
        return self.lineno, self.offset
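`ParserBase` is abstract (its `__init__` forbids direct instantiation), but the `lineno`/`offset` bookkeeping of `reset`, `getpos`, and `updatepos` is observable through the stdlib `html.parser.HTMLParser`, which subclasses the same machinery. A small sketch:

```python
from html.parser import HTMLParser  # built on the same ParserBase position tracking

class PosCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.positions = []

    def handle_starttag(self, tag, attrs):
        # getpos() returns the (lineno, offset) maintained by updatepos().
        self.positions.append((tag, self.getpos()))

p = PosCollector()
p.feed('<a>\n  <b>')
print(p.positions)   # <b> sits on line 2, after two columns of indentation
```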

    # Internal -- update line number and offset.  This should be
    # called for each piece of data exactly once, in order -- in other
    # words the concatenation of all the input strings to this
    # function should be exactly the entire input.
    def updatepos(self, i, j):
        if i >= j:
            return j
        rawdata = self.rawdata
        nlines = rawdata.count("\n", i, j)
        if nlines:
            self.lineno = self.lineno + nlines
            pos = rawdata.rindex("\n", i, j) # Should not fail
            self.offset = j-(pos+1)
        else:
            self.offset = self.offset + j-i
        return j

    _decl_otherchars = ''

    # Internal -- parse declaration (for use by subclasses).
    def parse_declaration(self, i):
        # This is some sort of declaration; in "HTML as
        # deployed," this should only be the document type
        # declaration ("<!DOCTYPE html...>").
        # ISO 8879:1986, however, has more complex
        # declaration syntax for elements in <!...>, including:
        # --comment--
        # [marked section]
        # name in the following list: ENTITY, DOCTYPE, ELEMENT,
        # ATTLIST, NOTATION, SHORTREF, USEMAP,
        # LINKTYPE, LINK, IDLINK, USELINK, SYSTEM
        rawdata = self.rawdata
        j = i + 2
        assert rawdata[i:j] == "<!", "unexpected call to parse_declaration"
        if rawdata[j:j+1] == ">":
            # the empty comment <!>
            return j + 1
        if rawdata[j:j+1] in ("-", ""):
            # Start of comment followed by buffer boundary,
            # or just a buffer boundary.
            return -1
        # A simple, practical version could look like: ((name|stringlit) S*) + '>'
        n = len(rawdata)
        if rawdata[j:j+2] == '--': #comment
            # Locate --.*-- as the body of the comment
            return self.parse_comment(i)
        elif rawdata[j] == '[': #marked section
            # Locate [statusWord [...arbitrary SGML...]] as the body of the marked section
            # Where statusWord is one of TEMP, CDATA, IGNORE, INCLUDE, RCDATA
            # Note that this is extended by Microsoft Office "Save as Web" function
            # to include [if...] and [endif].
            return self.parse_marked_section(i)
        else: #all other declaration elements
            decltype, j = self._scan_name(j, i)
        if j < 0:
            return j
        if decltype == "doctype":
            self._decl_otherchars = ''
        while j < n:
            c = rawdata[j]
            if c == ">":
                # end of declaration syntax
                data = rawdata[i+2:j]
                if decltype == "doctype":
                    self.handle_decl(data)
                else:
                    # According to the HTML5 specs sections "8.2.4.44 Bogus
                    # comment state" and "8.2.4.45 Markup declaration open
                    # state", a comment token should be emitted.
                    # Calling unknown_decl provides more flexibility though.
                    self.unknown_decl(data)
                return j + 1
            if c in "\"'":
                m = _declstringlit_match(rawdata, j)
                if not m:
                    return -1 # incomplete
                j = m.end()
            elif c in "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ":
                name, j = self._scan_name(j, i)
            elif c in self._decl_otherchars:
                j = j + 1
            elif c == "[":
                # this could be handled in a separate doctype parser
                if decltype == "doctype":
                    j = self._parse_doctype_subset(j + 1, i)
                elif decltype in set(["attlist", "linktype", "link", "element"]):
                    # must tolerate []'d groups in a content model in an element declaration
                    # also in data attribute specifications of attlist declaration
                    # also link type declaration subsets in linktype declarations
                    # also link attribute specification lists in link declarations
                    self.error("unsupported '[' char in %s declaration" % decltype)
                else:
                    self.error("unexpected '[' char in declaration")
            else:
                self.error(
                    "unexpected %r char in declaration" % rawdata[j])
            if j < 0:
                return j
        return -1 # incomplete
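The dispatch above (empty comment, `--` comment body, `[` marked section, or a named declaration such as DOCTYPE) is easiest to see through a concrete subclass. A sketch using the stdlib `html.parser.HTMLParser`, whose `parse_declaration` path ends in the same `handle_decl`/`handle_comment` callbacks:

```python
from html.parser import HTMLParser

class DeclCollector(HTMLParser):
    def handle_decl(self, decl):
        # reached via the "doctype" branch of parse_declaration
        self.decl = decl

    def handle_comment(self, comment):
        # reached via the '--' branch (parse_comment)
        self.comment = comment

c = DeclCollector()
c.feed('<!DOCTYPE html><!-- a comment -->')
print(c.decl)
print(c.comment)
```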

    # Internal -- parse a marked section
    # Override this to handle MS-word extension syntax <![if word]>content<![endif]>
    def parse_marked_section(self, i, report=1):
        rawdata= self.rawdata
        assert rawdata[i:i+3] == '<![', "unexpected call to parse_marked_section()"
        sectName, j = self._scan_name( i+3, i )
        if j < 0:
            return j
        if sectName in set(["temp", "cdata", "ignore", "include", "rcdata"]):
            # look for standard ]]> ending
            match= _markedsectionclose.search(rawdata, i+3)
        elif sectName in set(["if", "else", "endif"]):
            # look for MS Office ]> ending
            match= _msmarkedsectionclose.search(rawdata, i+3)
        else:
            self.error('unknown status keyword %r in marked section' % rawdata[i+3:j])
        if not match:
            return -1
        if report:
            j = match.start(0)
            self.unknown_decl(rawdata[i+3: j])
        return match.end(0)

    # Internal -- parse comment, return length or -1 if not terminated
    def parse_comment(self, i, report=1):
        rawdata = self.rawdata
        if rawdata[i:i+4] != '<!--':
            self.error('unexpected call to parse_comment()')
        match = _commentclose.search(rawdata, i+4)
        if not match:
            return -1
        if report:
            j = match.start(0)
            self.handle_comment(rawdata[i+4: j])
        return match.end(0)

    # Internal -- scan past the internal subset in a <!DOCTYPE declaration,
    # returning the index just past any whitespace following the trailing ']'.
    def _parse_doctype_subset(self, i, declstartpos):
        rawdata = self.rawdata
        n = len(rawdata)
        j = i
        while j < n:
            c = rawdata[j]
            if c == "<":
                s = rawdata[j:j+2]
                if s == "<":
                    # end of buffer; incomplete
                    return -1
                if s != "<!":
                    self.updatepos(declstartpos, j + 1)
                    self.error("unexpected char in internal subset (in %r)" % s)
                if (j + 2) == n:
                    # end of buffer; incomplete
                    return -1
                if (j + 4) > n:
                    # end of buffer; incomplete
                    return -1
                if rawdata[j:j+4] == "<!--":
                    j = self.parse_comment(j, report=0)
                    if j < 0:
                        return j
                    continue
                name, j = self._scan_name(j + 2, declstartpos)
                if j == -1:
                    return -1
                if name not in set(["attlist", "element", "entity", "notation"]):
                    self.updatepos(declstartpos, j + 2)
                    self.error(
                        "unknown declaration %r in internal subset" % name)
                # handle the individual names
                meth = getattr(self, "_parse_doctype_" + name)
                j = meth(j, declstartpos)
                if j < 0:
                    return j
            elif c == "%":
                # parameter entity reference
                if (j + 1) == n:
                    # end of buffer; incomplete
                    return -1
                s, j = self._scan_name(j + 1, declstartpos)
                if j < 0:
                    return j
                if rawdata[j] == ";":
                    j = j + 1
            elif c == "]":
                j = j + 1
                while j < n and rawdata[j].isspace():
                    j = j + 1
                if j < n:
                    if rawdata[j] == ">":
                        return j
                    self.updatepos(declstartpos, j)
                    self.error("unexpected char after internal subset")
                else:
                    return -1
            elif c.isspace():
                j = j + 1
            else:
                self.updatepos(declstartpos, j)
                self.error("unexpected char %r in internal subset" % c)
        # end of buffer reached
        return -1

    # Internal -- scan past <!ELEMENT declarations
    def _parse_doctype_element(self, i, declstartpos):
        name, j = self._scan_name(i, declstartpos)
        if j == -1:
            return -1
        # style content model; just skip until '>'
        rawdata = self.rawdata
        if '>' in rawdata[j:]:
            return rawdata.find(">", j) + 1
        return -1

    # Internal -- scan past <!ATTLIST declarations
    def _parse_doctype_attlist(self, i, declstartpos):
        rawdata = self.rawdata
        name, j = self._scan_name(i, declstartpos)
        c = rawdata[j:j+1]
        if c == "":
            return -1
        if c == ">":
            return j + 1
        while 1:
            # scan a series of attribute descriptions; simplified:
            #   name type [value] [#constraint]
            name, j = self._scan_name(j, declstartpos)
            if j < 0:
                return j
            c = rawdata[j:j+1]
            if c == "":
                return -1
            if c == "(":
                # an enumerated type; look for ')'
                if ")" in rawdata[j:]:
                    j = rawdata.find(")", j) + 1
                else:
                    return -1
                while rawdata[j:j+1].isspace():
                    j = j + 1
                if not rawdata[j:]:
                    # end of buffer, incomplete
                    return -1
            else:
                name, j = self._scan_name(j, declstartpos)
            c = rawdata[j:j+1]
            if not c:
                return -1
            if c in "'\"":
                m = _declstringlit_match(rawdata, j)
                if m:
                    j = m.end()
                else:
                    return -1
                c = rawdata[j:j+1]
                if not c:
                    return -1
            if c == "#":
                if rawdata[j:] == "#":
                    # end of buffer
                    return -1
                name, j = self._scan_name(j + 1, declstartpos)
                if j < 0:
                    return j
                c = rawdata[j:j+1]
                if not c:
                    return -1
            if c == '>':
                # all done
                return j + 1

    # Internal -- scan past <!NOTATION declarations
    def _parse_doctype_notation(self, i, declstartpos):
        name, j = self._scan_name(i, declstartpos)
        if j < 0:
            return j
        rawdata = self.rawdata
        while 1:
            c = rawdata[j:j+1]
            if not c:
                # end of buffer; incomplete
                return -1
            if c == '>':
                return j + 1
            if c in "'\"":
                m = _declstringlit_match(rawdata, j)
                if not m:
                    return -1
                j = m.end()
            else:
                name, j = self._scan_name(j, declstartpos)
                if j < 0:
                    return j

    # Internal -- scan past <!ENTITY declarations
    def _parse_doctype_entity(self, i, declstartpos):
        rawdata = self.rawdata
        if rawdata[i:i+1] == "%":
            j = i + 1
            while 1:
                c = rawdata[j:j+1]
                if not c:
                    return -1
                if c.isspace():
                    j = j + 1
                else:
                    break
        else:
            j = i
        name, j = self._scan_name(j, declstartpos)
        if j < 0:
            return j
        while 1:
            c = self.rawdata[j:j+1]
            if not c:
                return -1
            if c in "'\"":
                m = _declstringlit_match(rawdata, j)
                if m:
                    j = m.end()
                else:
                    return -1    # incomplete
            elif c == ">":
                return j + 1
            else:
                name, j = self._scan_name(j, declstartpos)
                if j < 0:
                    return j

    # Internal -- scan a name token; return (name, new position), or
    # (None, -1) if we've reached the end of the buffer.
    def _scan_name(self, i, declstartpos):
        rawdata = self.rawdata
        n = len(rawdata)
        if i == n:
            return None, -1
        m = _declname_match(rawdata, i)
        if m:
            s = m.group()
            name = s.strip()
            if (i + len(s)) == n:
                return None, -1  # end of buffer
            return name.lower(), m.end()
        else:
            self.updatepos(declstartpos, i)
            self.error("expected name token at %r"
                       % rawdata[declstartpos:declstartpos+20])

    # To be overridden -- handlers for unknown objects
    def unknown_decl(self, data):
        pass

# future/backports/misc.py
"""
Miscellaneous function (re)definitions from the Py3.4+ standard library
for Python 2.6/2.7.

- math.ceil                (for Python 2.7)
- collections.OrderedDict  (for Python 2.6)
- collections.Counter      (for Python 2.6)
- collections.ChainMap     (for all versions prior to Python 3.3)
- itertools.count          (for Python 2.6, with step parameter)
- subprocess.check_output  (for Python 2.6)
- reprlib.recursive_repr   (for Python 2.6+)
- functools.cmp_to_key     (for Python 2.6)
"""

from __future__ import absolute_import

import subprocess
from math import ceil as oldceil

from operator import itemgetter as _itemgetter, eq as _eq
import sys
import heapq as _heapq
from _weakref import proxy as _proxy
from itertools import repeat as _repeat, chain as _chain, starmap as _starmap
from socket import getaddrinfo, SOCK_STREAM, error, socket

from future.utils import iteritems, itervalues, PY2, PY26, PY3

if PY2:
    from collections import Mapping, MutableMapping
else:
    from collections.abc import Mapping, MutableMapping


def ceil(x):
    """
    Return the ceiling of x as an int.
    This is the smallest integral value >= x.
    """
    return int(oldceil(x))
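The wrapper exists because Python 2's `math.ceil` returned a float; coercing through `int()` matches Python 3's behavior. A quick check of the intended semantics against the stdlib:

```python
import math

# int(math.ceil(x)) is the smallest integer >= x, as an int on any version.
assert int(math.ceil(2.1)) == 3
assert int(math.ceil(-2.1)) == -2   # rounds toward positive infinity
print(int(math.ceil(2.1)), int(math.ceil(-2.1)))
```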


########################################################################
###  reprlib.recursive_repr decorator from Py3.4
########################################################################

from itertools import islice

if PY26:
    # itertools.count in Py 2.6 doesn't accept a step parameter
    def count(start=0, step=1):
        while True:
            yield start
            start += step
else:
    from itertools import count


if PY3:
    try:
        from _thread import get_ident
    except ImportError:
        from _dummy_thread import get_ident
else:
    try:
        from thread import get_ident
    except ImportError:
        from dummy_thread import get_ident


def recursive_repr(fillvalue='...'):
    'Decorator to make a repr function return fillvalue for a recursive call'

    def decorating_function(user_function):
        repr_running = set()

        def wrapper(self):
            key = id(self), get_ident()
            if key in repr_running:
                return fillvalue
            repr_running.add(key)
            try:
                result = user_function(self)
            finally:
                repr_running.discard(key)
            return result

        # Can't use functools.wraps() here because of bootstrap issues
        wrapper.__module__ = getattr(user_function, '__module__')
        wrapper.__doc__ = getattr(user_function, '__doc__')
        wrapper.__name__ = getattr(user_function, '__name__')
        wrapper.__annotations__ = getattr(user_function, '__annotations__', {})
        return wrapper

    return decorating_function

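# A hedged usage sketch (not part of the original module): the stdlib
# reprlib.recursive_repr (Py3.2+) is the function this decorator backports.
# Node here is a hypothetical self-referential container used only to
# illustrate how the fillvalue replaces a re-entrant __repr__ call.

```python
from reprlib import recursive_repr as _rr  # stdlib version of the backport

class Node(object):
    def __init__(self):
        self.children = []

    @_rr()
    def __repr__(self):
        # Without the decorator, repr of a cyclic Node would recurse forever.
        return 'Node(%r)' % (self.children,)

n = Node()
n.children.append(n)      # create a cycle
assert repr(n) == 'Node([...])'
```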

# OrderedDict shim from Raymond Hettinger, Python core dev:
# http://code.activestate.com/recipes/576693-ordered-dictionary-for-py24/
# included here to support Python 2.6.

################################################################################
### OrderedDict
################################################################################

class _Link(object):
    __slots__ = 'prev', 'next', 'key', '__weakref__'

class OrderedDict(dict):
    'Dictionary that remembers insertion order'
    # An inherited dict maps keys to values.
    # The inherited dict provides __getitem__, __len__, __contains__, and get.
    # The remaining methods are order-aware.
    # Big-O running times for all methods are the same as regular dictionaries.

    # The internal self.__map dict maps keys to links in a doubly linked list.
    # The circular doubly linked list starts and ends with a sentinel element.
    # The sentinel element never gets deleted (this simplifies the algorithm).
    # The sentinel is in self.__hardroot with a weakref proxy in self.__root.
    # The prev links are weakref proxies (to prevent circular references).
    # Individual links are kept alive by the hard reference in self.__map.
    # Those hard references disappear when a key is deleted from an OrderedDict.

    def __init__(*args, **kwds):
        '''Initialize an ordered dictionary.  The signature is the same as
        regular dictionaries, but keyword arguments are not recommended because
        their insertion order is arbitrary.

        '''
        if not args:
            raise TypeError("descriptor '__init__' of 'OrderedDict' object "
                            "needs an argument")
        self = args[0]
        args = args[1:]
        if len(args) > 1:
            raise TypeError('expected at most 1 arguments, got %d' % len(args))
        try:
            self.__root
        except AttributeError:
            self.__hardroot = _Link()
            self.__root = root = _proxy(self.__hardroot)
            root.prev = root.next = root
            self.__map = {}
        self.__update(*args, **kwds)

    def __setitem__(self, key, value,
                    dict_setitem=dict.__setitem__, proxy=_proxy, Link=_Link):
        'od.__setitem__(i, y) <==> od[i]=y'
        # Setting a new item creates a new link at the end of the linked list,
        # and the inherited dictionary is updated with the new key/value pair.
        if key not in self:
            self.__map[key] = link = Link()
            root = self.__root
            last = root.prev
            link.prev, link.next, link.key = last, root, key
            last.next = link
            root.prev = proxy(link)
        dict_setitem(self, key, value)

    def __delitem__(self, key, dict_delitem=dict.__delitem__):
        'od.__delitem__(y) <==> del od[y]'
        # Deleting an existing item uses self.__map to find the link which gets
        # removed by updating the links in the predecessor and successor nodes.
        dict_delitem(self, key)
        link = self.__map.pop(key)
        link_prev = link.prev
        link_next = link.next
        link_prev.next = link_next
        link_next.prev = link_prev

    def __iter__(self):
        'od.__iter__() <==> iter(od)'
        # Traverse the linked list in order.
        root = self.__root
        curr = root.next
        while curr is not root:
            yield curr.key
            curr = curr.next

    def __reversed__(self):
        'od.__reversed__() <==> reversed(od)'
        # Traverse the linked list in reverse order.
        root = self.__root
        curr = root.prev
        while curr is not root:
            yield curr.key
            curr = curr.prev

    def clear(self):
        'od.clear() -> None.  Remove all items from od.'
        root = self.__root
        root.prev = root.next = root
        self.__map.clear()
        dict.clear(self)

    def popitem(self, last=True):
        '''od.popitem() -> (k, v), return and remove a (key, value) pair.
        Pairs are returned in LIFO order if last is true or FIFO order if false.

        '''
        if not self:
            raise KeyError('dictionary is empty')
        root = self.__root
        if last:
            link = root.prev
            link_prev = link.prev
            link_prev.next = root
            root.prev = link_prev
        else:
            link = root.next
            link_next = link.next
            root.next = link_next
            link_next.prev = root
        key = link.key
        del self.__map[key]
        value = dict.pop(self, key)
        return key, value

    def move_to_end(self, key, last=True):
        '''Move an existing element to the end (or beginning if last==False).

        Raises KeyError if the element does not exist.
        When last=True, acts like a fast version of self[key]=self.pop(key).

        '''
        link = self.__map[key]
        link_prev = link.prev
        link_next = link.next
        link_prev.next = link_next
        link_next.prev = link_prev
        root = self.__root
        if last:
            last = root.prev
            link.prev = last
            link.next = root
            last.next = root.prev = link
        else:
            first = root.next
            link.prev = root
            link.next = first
            root.next = first.prev = link

    def __sizeof__(self):
        sizeof = sys.getsizeof
        n = len(self) + 1                       # number of links including root
        size = sizeof(self.__dict__)            # instance dictionary
        size += sizeof(self.__map) * 2          # internal dict and inherited dict
        size += sizeof(self.__hardroot) * n     # link objects
        size += sizeof(self.__root) * n         # proxy objects
        return size

    update = __update = MutableMapping.update
    keys = MutableMapping.keys
    values = MutableMapping.values
    items = MutableMapping.items
    __ne__ = MutableMapping.__ne__

    __marker = object()

    def pop(self, key, default=__marker):
        '''od.pop(k[,d]) -> v, remove specified key and return the corresponding
        value.  If key is not found, d is returned if given, otherwise KeyError
        is raised.

        '''
        if key in self:
            result = self[key]
            del self[key]
            return result
        if default is self.__marker:
            raise KeyError(key)
        return default

    def setdefault(self, key, default=None):
        'od.setdefault(k[,d]) -> od.get(k,d), also set od[k]=d if k not in od'
        if key in self:
            return self[key]
        self[key] = default
        return default

    @recursive_repr()
    def __repr__(self):
        'od.__repr__() <==> repr(od)'
        if not self:
            return '%s()' % (self.__class__.__name__,)
        return '%s(%r)' % (self.__class__.__name__, list(self.items()))

    def __reduce__(self):
        'Return state information for pickling'
        inst_dict = vars(self).copy()
        for k in vars(OrderedDict()):
            inst_dict.pop(k, None)
        return self.__class__, (), inst_dict or None, None, iter(self.items())

    def copy(self):
        'od.copy() -> a shallow copy of od'
        return self.__class__(self)

    @classmethod
    def fromkeys(cls, iterable, value=None):
        '''OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S.
        If not specified, the value defaults to None.

        '''
        self = cls()
        for key in iterable:
            self[key] = value
        return self

    def __eq__(self, other):
        '''od.__eq__(y) <==> od==y.  Comparison to another OD is order-sensitive
        while comparison to a regular mapping is order-insensitive.

        '''
        if isinstance(other, OrderedDict):
            return dict.__eq__(self, other) and all(map(_eq, self, other))
        return dict.__eq__(self, other)

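# A hedged usage sketch (not part of the original module): the stdlib
# collections.OrderedDict, which the shim above mirrors on Py2.7+, shows the
# order-aware methods -- move_to_end relinks a node in the doubly linked
# list, and popitem pops from either end.

```python
from collections import OrderedDict  # stdlib equivalent of the shim above

od = OrderedDict([('a', 1), ('b', 2), ('c', 3)])
od.move_to_end('a')                        # relink 'a' to the tail
assert list(od) == ['b', 'c', 'a']
od.move_to_end('c', last=False)            # relink 'c' to the head
assert list(od) == ['c', 'b', 'a']
assert od.popitem() == ('a', 1)            # LIFO by default
assert od.popitem(last=False) == ('c', 3)  # FIFO when last=False
```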

# {{{ http://code.activestate.com/recipes/576611/ (r11)

try:
    from operator import itemgetter
    from heapq import nlargest
except ImportError:
    pass

########################################################################
###  Counter
########################################################################

def _count_elements(mapping, iterable):
    'Tally elements from the iterable.'
    mapping_get = mapping.get
    for elem in iterable:
        mapping[elem] = mapping_get(elem, 0) + 1

class Counter(dict):
    '''Dict subclass for counting hashable items.  Sometimes called a bag
    or multiset.  Elements are stored as dictionary keys and their counts
    are stored as dictionary values.

    >>> c = Counter('abcdeabcdabcaba')  # count elements from a string

    >>> c.most_common(3)                # three most common elements
    [('a', 5), ('b', 4), ('c', 3)]
    >>> sorted(c)                       # list all unique elements
    ['a', 'b', 'c', 'd', 'e']
    >>> ''.join(sorted(c.elements()))   # list elements with repetitions
    'aaaaabbbbcccdde'
    >>> sum(c.values())                 # total of all counts
    15

    >>> c['a']                          # count of letter 'a'
    5
    >>> for elem in 'shazam':           # update counts from an iterable
    ...     c[elem] += 1                # by adding 1 to each element's count
    >>> c['a']                          # now there are seven 'a'
    7
    >>> del c['b']                      # remove all 'b'
    >>> c['b']                          # now there are zero 'b'
    0

    >>> d = Counter('simsalabim')       # make another counter
    >>> c.update(d)                     # add in the second counter
    >>> c['a']                          # now there are nine 'a'
    9

    >>> c.clear()                       # empty the counter
    >>> c
    Counter()

    Note:  If a count is set to zero or reduced to zero, it will remain
    in the counter until the entry is deleted or the counter is cleared:

    >>> c = Counter('aaabbc')
    >>> c['b'] -= 2                     # reduce the count of 'b' by two
    >>> c.most_common()                 # 'b' is still in, but its count is zero
    [('a', 3), ('c', 1), ('b', 0)]

    '''
    # References:
    #   http://en.wikipedia.org/wiki/Multiset
    #   http://www.gnu.org/software/smalltalk/manual-base/html_node/Bag.html
    #   http://www.demo2s.com/Tutorial/Cpp/0380__set-multiset/Catalog0380__set-multiset.htm
    #   http://code.activestate.com/recipes/259174/
    #   Knuth, TAOCP Vol. II section 4.6.3

    def __init__(*args, **kwds):
        '''Create a new, empty Counter object.  And if given, count elements
        from an input iterable.  Or, initialize the count from another mapping
        of elements to their counts.

        >>> c = Counter()                           # a new, empty counter
        >>> c = Counter('gallahad')                 # a new counter from an iterable
        >>> c = Counter({'a': 4, 'b': 2})           # a new counter from a mapping
        >>> c = Counter(a=4, b=2)                   # a new counter from keyword args

        '''
        if not args:
            raise TypeError("descriptor '__init__' of 'Counter' object "
                            "needs an argument")
        self = args[0]
        args = args[1:]
        if len(args) > 1:
            raise TypeError('expected at most 1 arguments, got %d' % len(args))
        super(Counter, self).__init__()
        self.update(*args, **kwds)

    def __missing__(self, key):
        'The count of elements not in the Counter is zero.'
        # Needed so that self[missing_item] does not raise KeyError
        return 0

    def most_common(self, n=None):
        '''List the n most common elements and their counts from the most
        common to the least.  If n is None, then list all element counts.

        >>> Counter('abcdeabcdabcaba').most_common(3)
        [('a', 5), ('b', 4), ('c', 3)]

        '''
        # Emulate Bag.sortedByCount from Smalltalk
        if n is None:
            return sorted(self.items(), key=_itemgetter(1), reverse=True)
        return _heapq.nlargest(n, self.items(), key=_itemgetter(1))

    def elements(self):
        '''Iterator over elements repeating each as many times as its count.

        >>> c = Counter('ABCABC')
        >>> sorted(c.elements())
        ['A', 'A', 'B', 'B', 'C', 'C']

        # Knuth's example for prime factors of 1836:  2**2 * 3**3 * 17**1
        >>> prime_factors = Counter({2: 2, 3: 3, 17: 1})
        >>> product = 1
        >>> for factor in prime_factors.elements():     # loop over factors
        ...     product *= factor                       # and multiply them
        >>> product
        1836

        Note, if an element's count has been set to zero or is a negative
        number, elements() will ignore it.

        '''
        # Emulate Bag.do from Smalltalk and Multiset.begin from C++.
        return _chain.from_iterable(_starmap(_repeat, self.items()))

    # Override dict methods where necessary

    @classmethod
    def fromkeys(cls, iterable, v=None):
        # There is no equivalent method for counters because setting v=1
        # means that no element can have a count greater than one.
        raise NotImplementedError(
            'Counter.fromkeys() is undefined.  Use Counter(iterable) instead.')

    def update(*args, **kwds):
        '''Like dict.update() but add counts instead of replacing them.

        Source can be an iterable, a dictionary, or another Counter instance.

        >>> c = Counter('which')
        >>> c.update('witch')           # add elements from another iterable
        >>> d = Counter('watch')
        >>> c.update(d)                 # add elements from another counter
        >>> c['h']                      # four 'h' in which, witch, and watch
        4

        '''
        # The regular dict.update() operation makes no sense here because the
        # replace behavior would mix some of the original untouched counts
        # with all of the other counts, producing a mishmash that doesn't
        # have a straightforward interpretation in most counting contexts.
        # Instead, we implement straight addition.  Both the inputs and
        # outputs are allowed to contain zero and negative counts.

        if not args:
            raise TypeError("descriptor 'update' of 'Counter' object "
                            "needs an argument")
        self = args[0]
        args = args[1:]
        if len(args) > 1:
            raise TypeError('expected at most 1 arguments, got %d' % len(args))
        iterable = args[0] if args else None
        if iterable is not None:
            if isinstance(iterable, Mapping):
                if self:
                    self_get = self.get
                    for elem, count in iterable.items():
                        self[elem] = count + self_get(elem, 0)
                else:
                    super(Counter, self).update(iterable) # fast path when counter is empty
            else:
                _count_elements(self, iterable)
        if kwds:
            self.update(kwds)

    def subtract(*args, **kwds):
        '''Like dict.update() but subtracts counts instead of replacing them.
        Counts can be reduced below zero.  Both the inputs and outputs are
        allowed to contain zero and negative counts.

        Source can be an iterable, a dictionary, or another Counter instance.

        >>> c = Counter('which')
        >>> c.subtract('witch')             # subtract elements from another iterable
        >>> c.subtract(Counter('watch'))    # subtract elements from another counter
        >>> c['h']                          # 2 in which, minus 1 in witch, minus 1 in watch
        0
        >>> c['w']                          # 1 in which, minus 1 in witch, minus 1 in watch
        -1

        '''
        if not args:
            raise TypeError("descriptor 'subtract' of 'Counter' object "
                            "needs an argument")
        self = args[0]
        args = args[1:]
        if len(args) > 1:
            raise TypeError('expected at most 1 arguments, got %d' % len(args))
        iterable = args[0] if args else None
        if iterable is not None:
            self_get = self.get
            if isinstance(iterable, Mapping):
                for elem, count in iterable.items():
                    self[elem] = self_get(elem, 0) - count
            else:
                for elem in iterable:
                    self[elem] = self_get(elem, 0) - 1
        if kwds:
            self.subtract(kwds)

    def copy(self):
        'Return a shallow copy.'
        return self.__class__(self)

    def __reduce__(self):
        return self.__class__, (dict(self),)

    def __delitem__(self, elem):
        'Like dict.__delitem__() but does not raise KeyError for missing values.'
        if elem in self:
            super(Counter, self).__delitem__(elem)

    def __repr__(self):
        if not self:
            return '%s()' % self.__class__.__name__
        try:
            items = ', '.join(map('%r: %r'.__mod__, self.most_common()))
            return '%s({%s})' % (self.__class__.__name__, items)
        except TypeError:
            # handle case where values are not orderable
            return '{0}({1!r})'.format(self.__class__.__name__, dict(self))

    # Multiset-style mathematical operations discussed in:
    #       Knuth TAOCP Volume II section 4.6.3 exercise 19
    #       and at http://en.wikipedia.org/wiki/Multiset
    #
    # Outputs guaranteed to only include positive counts.
    #
    # To strip negative and zero counts, add-in an empty counter:
    #       c += Counter()

    def __add__(self, other):
        '''Add counts from two counters.

        >>> Counter('abbb') + Counter('bcc')
        Counter({'b': 4, 'c': 2, 'a': 1})

        '''
        if not isinstance(other, Counter):
            return NotImplemented
        result = Counter()
        for elem, count in self.items():
            newcount = count + other[elem]
            if newcount > 0:
                result[elem] = newcount
        for elem, count in other.items():
            if elem not in self and count > 0:
                result[elem] = count
        return result

    def __sub__(self, other):
        ''' Subtract count, but keep only results with positive counts.

        >>> Counter('abbbc') - Counter('bccd')
        Counter({'b': 2, 'a': 1})

        '''
        if not isinstance(other, Counter):
            return NotImplemented
        result = Counter()
        for elem, count in self.items():
            newcount = count - other[elem]
            if newcount > 0:
                result[elem] = newcount
        for elem, count in other.items():
            if elem not in self and count < 0:
                result[elem] = 0 - count
        return result

    def __or__(self, other):
        '''Union is the maximum of the corresponding counts from either counter.

        >>> Counter('abbb') | Counter('bcc')
        Counter({'b': 3, 'c': 2, 'a': 1})

        '''
        if not isinstance(other, Counter):
            return NotImplemented
        result = Counter()
        for elem, count in self.items():
            other_count = other[elem]
            newcount = other_count if count < other_count else count
            if newcount > 0:
                result[elem] = newcount
        for elem, count in other.items():
            if elem not in self and count > 0:
                result[elem] = count
        return result

    def __and__(self, other):
        ''' Intersection is the minimum of corresponding counts.

        >>> Counter('abbb') & Counter('bcc')
        Counter({'b': 1})

        '''
        if not isinstance(other, Counter):
            return NotImplemented
        result = Counter()
        for elem, count in self.items():
            other_count = other[elem]
            newcount = count if count < other_count else other_count
            if newcount > 0:
                result[elem] = newcount
        return result

    def __pos__(self):
        'Adds an empty counter, effectively stripping negative and zero counts'
        return self + Counter()

    def __neg__(self):
        '''Subtracts from an empty counter.  Strips positive and zero counts,
        and flips the sign on negative counts.

        '''
        return Counter() - self

    def _keep_positive(self):
        '''Internal method to strip elements with a negative or zero count'''
        nonpositive = [elem for elem, count in self.items() if not count > 0]
        for elem in nonpositive:
            del self[elem]
        return self

    def __iadd__(self, other):
        '''Inplace add from another counter, keeping only positive counts.

        >>> c = Counter('abbb')
        >>> c += Counter('bcc')
        >>> c
        Counter({'b': 4, 'c': 2, 'a': 1})

        '''
        for elem, count in other.items():
            self[elem] += count
        return self._keep_positive()

    def __isub__(self, other):
        '''Inplace subtract counter, but keep only results with positive counts.

        >>> c = Counter('abbbc')
        >>> c -= Counter('bccd')
        >>> c
        Counter({'b': 2, 'a': 1})

        '''
        for elem, count in other.items():
            self[elem] -= count
        return self._keep_positive()

    def __ior__(self, other):
        '''Inplace union is the maximum of the corresponding counts from either counter.

        >>> c = Counter('abbb')
        >>> c |= Counter('bcc')
        >>> c
        Counter({'b': 3, 'c': 2, 'a': 1})

        '''
        for elem, other_count in other.items():
            count = self[elem]
            if other_count > count:
                self[elem] = other_count
        return self._keep_positive()

    def __iand__(self, other):
        '''Inplace intersection is the minimum of corresponding counts.

        >>> c = Counter('abbb')
        >>> c &= Counter('bcc')
        >>> c
        Counter({'b': 1})

        '''
        for elem, count in self.items():
            other_count = other[elem]
            if other_count < count:
                self[elem] = other_count
        return self._keep_positive()

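# A hedged usage sketch (not part of the original module): the stdlib
# collections.Counter, which the class above mirrors on Py2.7+, demonstrates
# the multiset operators defined above; note that the binary operators keep
# only positive counts.

```python
from collections import Counter  # stdlib equivalent of the class above

c = Counter('abbb')              # {'a': 1, 'b': 3}
d = Counter('bcc')               # {'b': 1, 'c': 2}
assert c + d == Counter({'b': 4, 'c': 2, 'a': 1})    # straight addition
assert c - d == Counter({'b': 2, 'a': 1})            # negatives are dropped
assert (c | d) == Counter({'b': 3, 'c': 2, 'a': 1})  # max of counts
assert (c & d) == Counter({'b': 1})                  # min of counts
assert +Counter({'x': -1, 'y': 2}) == Counter({'y': 2})  # unary + strips <= 0
```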

def check_output(*popenargs, **kwargs):
    """
    For Python 2.6 compatibility: see
    http://stackoverflow.com/questions/4814970/
    """

    if 'stdout' in kwargs:
        raise ValueError('stdout argument not allowed, it will be overridden.')
    process = subprocess.Popen(stdout=subprocess.PIPE, *popenargs, **kwargs)
    output, unused_err = process.communicate()
    retcode = process.poll()
    if retcode:
        cmd = kwargs.get("args")
        if cmd is None:
            cmd = popenargs[0]
        raise subprocess.CalledProcessError(retcode, cmd)
    return output

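# A hedged usage sketch (not part of the original module): the stdlib
# subprocess.check_output (Py2.7+) that the function above backports.
# The command runs the interpreter itself so the sketch stays portable.

```python
import sys
from subprocess import check_output, CalledProcessError

out = check_output([sys.executable, '-c', 'print("hi")'])
assert out.strip() == b'hi'       # captured stdout, as bytes

try:
    check_output([sys.executable, '-c', 'import sys; sys.exit(3)'])
except CalledProcessError as e:
    assert e.returncode == 3      # non-zero exit raises, as above
```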

def count(start=0, step=1):
    """
    ``itertools.count`` in Py2.6 doesn't accept a step parameter.
    This is an enhanced version for Py2.6, equivalent to
    ``itertools.count`` in Python 2.7+.
    """
    while True:
        yield start
        start += step

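# A hedged usage sketch (not part of the original module): the stdlib
# itertools.count (2.7+) behaves like the generator above; islice caps the
# infinite stream so a finite prefix can be materialized.

```python
from itertools import count, islice  # stdlib equivalent of the backport

evens = list(islice(count(0, 2), 5))
assert evens == [0, 2, 4, 6, 8]
assert list(islice(count(10, -3), 4)) == [10, 7, 4, 1]  # negative steps work
```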

########################################################################
###  ChainMap (helper for configparser and string.Template)
###  From the Py3.4 source code. See also:
###    https://github.com/kkxue/Py2ChainMap/blob/master/py2chainmap.py
########################################################################

class ChainMap(MutableMapping):
    ''' A ChainMap groups multiple dicts (or other mappings) together
    to create a single, updateable view.

    The underlying mappings are stored in a list.  That list is public and can
    be accessed or updated using the *maps* attribute.  There is no other state.

    Lookups search the underlying mappings successively until a key is found.
    In contrast, writes, updates, and deletions only operate on the first
    mapping.

    '''

    def __init__(self, *maps):
        '''Initialize a ChainMap by setting *maps* to the given mappings.
        If no mappings are provided, a single empty dictionary is used.

        '''
        self.maps = list(maps) or [{}]          # always at least one map

    def __missing__(self, key):
        raise KeyError(key)

    def __getitem__(self, key):
        for mapping in self.maps:
            try:
                return mapping[key]             # can't use 'key in mapping' with defaultdict
            except KeyError:
                pass
        return self.__missing__(key)            # support subclasses that define __missing__

    def get(self, key, default=None):
        return self[key] if key in self else default

    def __len__(self):
        return len(set().union(*self.maps))     # reuses stored hash values if possible

    def __iter__(self):
        return iter(set().union(*self.maps))

    def __contains__(self, key):
        return any(key in m for m in self.maps)

    def __bool__(self):
        return any(self.maps)

    # Py2 compatibility:
    __nonzero__ = __bool__

    @recursive_repr()
    def __repr__(self):
        return '{0.__class__.__name__}({1})'.format(
            self, ', '.join(map(repr, self.maps)))

    @classmethod
    def fromkeys(cls, iterable, *args):
        'Create a ChainMap with a single dict created from the iterable.'
        return cls(dict.fromkeys(iterable, *args))

    def copy(self):
        'New ChainMap or subclass with a new copy of maps[0] and refs to maps[1:]'
        return self.__class__(self.maps[0].copy(), *self.maps[1:])

    __copy__ = copy

    def new_child(self, m=None):                # like Django's Context.push()
        '''
        New ChainMap with a new map followed by all previous maps. If no
        map is provided, an empty dict is used.
        '''
        if m is None:
            m = {}
        return self.__class__(m, *self.maps)

    @property
    def parents(self):                          # like Django's Context.pop()
        'New ChainMap from maps[1:].'
        return self.__class__(*self.maps[1:])

    def __setitem__(self, key, value):
        self.maps[0][key] = value

    def __delitem__(self, key):
        try:
            del self.maps[0][key]
        except KeyError:
            raise KeyError('Key not found in the first mapping: {0!r}'.format(key))

    def popitem(self):
        'Remove and return an item pair from maps[0]. Raise KeyError if maps[0] is empty.'
        try:
            return self.maps[0].popitem()
        except KeyError:
            raise KeyError('No keys found in the first mapping.')

    def pop(self, key, *args):
        'Remove *key* from maps[0] and return its value. Raise KeyError if *key* not in maps[0].'
        try:
            return self.maps[0].pop(key, *args)
        except KeyError:
            raise KeyError('Key not found in the first mapping: {0!r}'.format(key))

    def clear(self):
        'Clear maps[0], leaving maps[1:] intact.'
        self.maps[0].clear()

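# A hedged usage sketch (not part of the original module): the stdlib
# collections.ChainMap (Py3.3+), which the class above backports, shows that
# lookups fall through the chain while writes touch only maps[0], and how
# new_child/parents push and pop scopes.

```python
from collections import ChainMap  # stdlib equivalent of the backport above

defaults = {'color': 'red', 'user': 'guest'}
overrides = {'user': 'admin'}
cm = ChainMap(overrides, defaults)
assert cm['user'] == 'admin'         # first mapping wins on lookup
assert cm['color'] == 'red'          # falls through to later maps
cm['color'] = 'blue'                 # writes go to maps[0] only
assert overrides['color'] == 'blue'
assert defaults['color'] == 'red'    # later maps are untouched
child = cm.new_child({'user': 'root'})
assert child['user'] == 'root'
assert child.parents['user'] == 'admin'
```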

# Re-use the same sentinel as in the Python stdlib socket module:
from socket import _GLOBAL_DEFAULT_TIMEOUT
# Was: _GLOBAL_DEFAULT_TIMEOUT = object()


def create_connection(address, timeout=_GLOBAL_DEFAULT_TIMEOUT,
                      source_address=None):
    """Backport of 3-argument create_connection() for Py2.6.

    Connect to *address* and return the socket object.

    Convenience function.  Connect to *address* (a 2-tuple ``(host,
    port)``) and return the socket object.  Passing the optional
    *timeout* parameter will set the timeout on the socket instance
    before attempting to connect.  If no *timeout* is supplied, the
    global default timeout setting returned by :func:`getdefaulttimeout`
    is used.  If *source_address* is set it must be a tuple of (host, port)
    for the socket to bind as a source address before making the connection.
    A host of '' or port 0 tells the OS to use the default.
    """

    host, port = address
    err = None
    for res in getaddrinfo(host, port, 0, SOCK_STREAM):
        af, socktype, proto, canonname, sa = res
        sock = None
        try:
            sock = socket(af, socktype, proto)
            if timeout is not _GLOBAL_DEFAULT_TIMEOUT:
                sock.settimeout(timeout)
            if source_address:
                sock.bind(source_address)
            sock.connect(sa)
            return sock

        except error as _:
            err = _
            if sock is not None:
                sock.close()

    if err is not None:
        raise err
    else:
        raise error("getaddrinfo returns an empty list")

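# A hedged usage sketch (not part of the original module): the stdlib
# socket.create_connection, which the function above backports, connecting
# to a local ephemeral-port listener so the sketch needs no network access.

```python
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(('127.0.0.1', 0))          # port 0: let the OS pick a free port
srv.listen(1)
addr = srv.getsockname()
conn = socket.create_connection(addr, timeout=5)  # stdlib equivalent
peer, _ = srv.accept()              # the connect above completes the accept
peer.close()
conn.close()
srv.close()
```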
# Backport from Py2.7 for Py2.6:
def cmp_to_key(mycmp):
    """Convert a cmp= function into a key= function"""
    class K(object):
        __slots__ = ['obj']
        def __init__(self, obj, *args):
            self.obj = obj
        def __lt__(self, other):
            return mycmp(self.obj, other.obj) < 0
        def __gt__(self, other):
            return mycmp(self.obj, other.obj) > 0
        def __eq__(self, other):
            return mycmp(self.obj, other.obj) == 0
        def __le__(self, other):
            return mycmp(self.obj, other.obj) <= 0
        def __ge__(self, other):
            return mycmp(self.obj, other.obj) >= 0
        def __ne__(self, other):
            return mycmp(self.obj, other.obj) != 0
        def __hash__(self):
            raise TypeError('hash not implemented')
    return K

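# A hedged usage sketch (not part of the original module): the stdlib
# functools.cmp_to_key (Py2.7+/Py3), which the class above backports,
# wrapping a hypothetical old-style cmp function that orders strings by
# length and then alphabetically.

```python
from functools import cmp_to_key  # stdlib equivalent of the backport above

def by_len_then_alpha(a, b):
    # Old-style cmp function: negative, zero, or positive result.
    if len(a) != len(b):
        return len(a) - len(b)
    return (a > b) - (a < b)

words = sorted(['pear', 'fig', 'apple', 'kiwi'],
               key=cmp_to_key(by_len_then_alpha))
assert words == ['fig', 'kiwi', 'pear', 'apple']
```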
# Back up our definitions above in case they're useful
_OrderedDict = OrderedDict
_Counter = Counter
_check_output = check_output
_count = count
_ceil = ceil
__count_elements = _count_elements
_recursive_repr = recursive_repr
_ChainMap = ChainMap
_create_connection = create_connection
_cmp_to_key = cmp_to_key

# Overwrite the definitions above with the usual ones
# from the standard library:
if sys.version_info >= (2, 7):
    from collections import OrderedDict, Counter
    from itertools import count
    from functools import cmp_to_key
    try:
        from subprocess import check_output
    except ImportError:
        # Not available. This happens with Google App Engine: see issue #231
        pass
    from socket import create_connection

if sys.version_info >= (3, 0):
    from math import ceil
    from collections import _count_elements

if sys.version_info >= (3, 3):
    from reprlib import recursive_repr
    from collections import ChainMap

# File: future/backports/http/client.py
"""HTTP/1.1 client library

A backport of the Python 3.3 http/client.py module for python-future.

<intro stuff goes here>
<other stuff, too>

HTTPConnection goes through a number of "states", which define when a client
may legally make another request or fetch the response for a particular
request. This diagram details these state transitions:

    (null)
      |
      | HTTPConnection()
      v
    Idle
      |
      | putrequest()
      v
    Request-started
      |
      | ( putheader() )*  endheaders()
      v
    Request-sent
      |
      | response = getresponse()
      v
    Unread-response   [Response-headers-read]
      |\____________________
      |                     |
      | response.read()     | putrequest()
      v                     v
    Idle                  Req-started-unread-response
                     ______/|
                   /        |
   response.read() |        | ( putheader() )*  endheaders()
                   v        v
       Request-started    Req-sent-unread-response
                            |
                            | response.read()
                            v
                          Request-sent

This diagram presents the following rules:
  -- a second request may not be started until {response-headers-read}
  -- a response [object] cannot be retrieved until {request-sent}
  -- there is no differentiation between an unread response body and a
     partially read response body

Note: this enforcement is applied by the HTTPConnection class. The
      HTTPResponse class does not enforce this state machine, which
      implies sophisticated clients may accelerate the request/response
      pipeline. Caution should be taken, though: accelerating the states
      beyond the above pattern may imply knowledge of the server's
      connection-close behavior for certain requests. For example, it
      is impossible to tell whether the server will close the connection
      UNTIL the response headers have been read; this means that further
      requests cannot be placed into the pipeline until it is known that
      the server will NOT be closing the connection.

Logical State                  __state            __response
-------------                  -------            ----------
Idle                           _CS_IDLE           None
Request-started                _CS_REQ_STARTED    None
Request-sent                   _CS_REQ_SENT       None
Unread-response                _CS_IDLE           <response_class>
Req-started-unread-response    _CS_REQ_STARTED    <response_class>
Req-sent-unread-response       _CS_REQ_SENT       <response_class>
"""

from __future__ import (absolute_import, division,
                        print_function, unicode_literals)
from future.builtins import bytes, int, str, super
from future.utils import PY2

from future.backports.email import parser as email_parser
from future.backports.email import message as email_message
from future.backports.misc import create_connection as socket_create_connection
import io
import os
import socket
from future.backports.urllib.parse import urlsplit
import warnings
from array import array

if PY2:
    from collections import Iterable
else:
    from collections.abc import Iterable

__all__ = ["HTTPResponse", "HTTPConnection",
           "HTTPException", "NotConnected", "UnknownProtocol",
           "UnknownTransferEncoding", "UnimplementedFileMode",
           "IncompleteRead", "InvalidURL", "ImproperConnectionState",
           "CannotSendRequest", "CannotSendHeader", "ResponseNotReady",
           "BadStatusLine", "error", "responses"]

HTTP_PORT = 80
HTTPS_PORT = 443

_UNKNOWN = 'UNKNOWN'

# connection states
_CS_IDLE = 'Idle'
_CS_REQ_STARTED = 'Request-started'
_CS_REQ_SENT = 'Request-sent'

# status codes
# informational
CONTINUE = 100
SWITCHING_PROTOCOLS = 101
PROCESSING = 102

# successful
OK = 200
CREATED = 201
ACCEPTED = 202
NON_AUTHORITATIVE_INFORMATION = 203
NO_CONTENT = 204
RESET_CONTENT = 205
PARTIAL_CONTENT = 206
MULTI_STATUS = 207
IM_USED = 226

# redirection
MULTIPLE_CHOICES = 300
MOVED_PERMANENTLY = 301
FOUND = 302
SEE_OTHER = 303
NOT_MODIFIED = 304
USE_PROXY = 305
TEMPORARY_REDIRECT = 307

# client error
BAD_REQUEST = 400
UNAUTHORIZED = 401
PAYMENT_REQUIRED = 402
FORBIDDEN = 403
NOT_FOUND = 404
METHOD_NOT_ALLOWED = 405
NOT_ACCEPTABLE = 406
PROXY_AUTHENTICATION_REQUIRED = 407
REQUEST_TIMEOUT = 408
CONFLICT = 409
GONE = 410
LENGTH_REQUIRED = 411
PRECONDITION_FAILED = 412
REQUEST_ENTITY_TOO_LARGE = 413
REQUEST_URI_TOO_LONG = 414
UNSUPPORTED_MEDIA_TYPE = 415
REQUESTED_RANGE_NOT_SATISFIABLE = 416
EXPECTATION_FAILED = 417
UNPROCESSABLE_ENTITY = 422
LOCKED = 423
FAILED_DEPENDENCY = 424
UPGRADE_REQUIRED = 426
PRECONDITION_REQUIRED = 428
TOO_MANY_REQUESTS = 429
REQUEST_HEADER_FIELDS_TOO_LARGE = 431

# server error
INTERNAL_SERVER_ERROR = 500
NOT_IMPLEMENTED = 501
BAD_GATEWAY = 502
SERVICE_UNAVAILABLE = 503
GATEWAY_TIMEOUT = 504
HTTP_VERSION_NOT_SUPPORTED = 505
INSUFFICIENT_STORAGE = 507
NOT_EXTENDED = 510
NETWORK_AUTHENTICATION_REQUIRED = 511

# Mapping status codes to their official reason phrases (RFC 2616)
responses = {
    100: 'Continue',
    101: 'Switching Protocols',

    200: 'OK',
    201: 'Created',
    202: 'Accepted',
    203: 'Non-Authoritative Information',
    204: 'No Content',
    205: 'Reset Content',
    206: 'Partial Content',

    300: 'Multiple Choices',
    301: 'Moved Permanently',
    302: 'Found',
    303: 'See Other',
    304: 'Not Modified',
    305: 'Use Proxy',
    306: '(Unused)',
    307: 'Temporary Redirect',

    400: 'Bad Request',
    401: 'Unauthorized',
    402: 'Payment Required',
    403: 'Forbidden',
    404: 'Not Found',
    405: 'Method Not Allowed',
    406: 'Not Acceptable',
    407: 'Proxy Authentication Required',
    408: 'Request Timeout',
    409: 'Conflict',
    410: 'Gone',
    411: 'Length Required',
    412: 'Precondition Failed',
    413: 'Request Entity Too Large',
    414: 'Request-URI Too Long',
    415: 'Unsupported Media Type',
    416: 'Requested Range Not Satisfiable',
    417: 'Expectation Failed',
    428: 'Precondition Required',
    429: 'Too Many Requests',
    431: 'Request Header Fields Too Large',

    500: 'Internal Server Error',
    501: 'Not Implemented',
    502: 'Bad Gateway',
    503: 'Service Unavailable',
    504: 'Gateway Timeout',
    505: 'HTTP Version Not Supported',
    511: 'Network Authentication Required',
}

# maximal amount of data to read at one time in _safe_read
MAXAMOUNT = 1048576

# maximal line length when calling readline().
_MAXLINE = 65536
_MAXHEADERS = 100


class HTTPMessage(email_message.Message):
    # XXX The only usage of this method is in
    # http.server.CGIHTTPRequestHandler.  Maybe move the code there so
    # that it doesn't need to be part of the public API.  The API has
    # never been defined so this could cause backwards compatibility
    # issues.

    def getallmatchingheaders(self, name):
        """Find all header lines matching a given header name.

        Look through the list of headers and find all lines matching a given
        header name (and their continuation lines).  A list of the lines is
        returned, without interpretation.  If the header does not occur, an
        empty list is returned.  If the header occurs multiple times, all
        occurrences are returned.  Case is not important in the header name.

        """
        name = name.lower() + ':'
        n = len(name)
        lst = []
        hit = 0
        for line in self.keys():
            if line[:n].lower() == name:
                hit = 1
            elif not line[:1].isspace():
                hit = 0
            if hit:
                lst.append(line)
        return lst

def parse_headers(fp, _class=HTTPMessage):
    """Parses only RFC2822 headers from a file pointer.

    email Parser wants to see strings rather than bytes.
    But a TextIOWrapper around self.rfile would buffer too many bytes
    from the stream, bytes which we later need to read as bytes.
    So we read the correct bytes here, as bytes, for email Parser
    to parse.

    """
    headers = []
    while True:
        line = fp.readline(_MAXLINE + 1)
        if len(line) > _MAXLINE:
            raise LineTooLong("header line")
        headers.append(line)
        if len(headers) > _MAXHEADERS:
            raise HTTPException("got more than %d headers" % _MAXHEADERS)
        if line in (b'\r\n', b'\n', b''):
            break
    hstring = bytes(b'').join(headers).decode('iso-8859-1')
    return email_parser.Parser(_class=_class).parsestr(hstring)
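A sketch of what parse_headers produces, demonstrated against the stdlib `http.client.parse_headers`, which this backport mirrors:

```python
# Sketch: feed raw header bytes to the stdlib http.client.parse_headers
# (the equivalent of the backported function above) and inspect the result.
import io
import http.client

raw = (b"Host: example.com\r\n"
       b"Set-Cookie: a=1\r\n"
       b"Set-Cookie: b=2\r\n"
       b"\r\n")
msg = http.client.parse_headers(io.BytesIO(raw))
print(msg["Host"])                # example.com
print(msg.get_all("Set-Cookie"))  # repeated headers are all preserved
```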


_strict_sentinel = object()

class HTTPResponse(io.RawIOBase):

    # See RFC 2616 sec 19.6 and RFC 1945 sec 6 for details.

    # The bytes from the socket object are iso-8859-1 strings.
    # See RFC 2616 sec 2.2 which notes an exception for MIME-encoded
    # text following RFC 2047.  The basic status line parsing only
    # accepts iso-8859-1.

    def __init__(self, sock, debuglevel=0, strict=_strict_sentinel, method=None, url=None):
        # If the response includes a content-length header, we need to
        # make sure that the client doesn't read more than the
        # specified number of bytes.  If it does, it will block until
        # the server times out and closes the connection.  This will
        # happen if a self.fp.read() is done (without a size) whether
        # self.fp is buffered or not.  So, no self.fp.read() by
        # clients unless they know what they are doing.
        self.fp = sock.makefile("rb")
        self.debuglevel = debuglevel
        if strict is not _strict_sentinel:
            warnings.warn("the 'strict' argument isn't supported anymore; "
                "http.client now always assumes HTTP/1.x compliant servers.",
                DeprecationWarning, 2)
        self._method = method

        # The HTTPResponse object is returned via urllib.  The clients
        # of http and urllib expect different attributes for the
        # headers.  headers is used here and supports urllib.  msg is
        # provided as a backwards compatibility layer for http
        # clients.

        self.headers = self.msg = None

        # from the Status-Line of the response
        self.version = _UNKNOWN # HTTP-Version
        self.status = _UNKNOWN  # Status-Code
        self.reason = _UNKNOWN  # Reason-Phrase

        self.chunked = _UNKNOWN         # is "chunked" being used?
        self.chunk_left = _UNKNOWN      # bytes left to read in current chunk
        self.length = _UNKNOWN          # number of bytes left in response
        self.will_close = _UNKNOWN      # conn will close at end of response

    def _read_status(self):
        line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
        if len(line) > _MAXLINE:
            raise LineTooLong("status line")
        if self.debuglevel > 0:
            print("reply:", repr(line))
        if not line:
            # Presumably, the server closed the connection before
            # sending a valid response.
            raise BadStatusLine(line)
        try:
            version, status, reason = line.split(None, 2)
        except ValueError:
            try:
                version, status = line.split(None, 1)
                reason = ""
            except ValueError:
                # empty version will cause next test to fail.
                version = ""
        if not version.startswith("HTTP/"):
            self._close_conn()
            raise BadStatusLine(line)

        # The status code is a three-digit number
        try:
            status = int(status)
            if status < 100 or status > 999:
                raise BadStatusLine(line)
        except ValueError:
            raise BadStatusLine(line)
        return version, status, reason
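The two-stage status-line split performed above can be illustrated standalone (a sketch, not the method itself):

```python
# Sketch of the status-line parsing done by _read_status: split on
# whitespace at most twice, validate the version prefix, and require a
# three-digit numeric status.
line = "HTTP/1.1 404 Not Found\r\n"
version, status, reason = line.split(None, 2)
assert version.startswith("HTTP/")
status = int(status)              # raises ValueError on a garbled line
assert 100 <= status <= 999
print(version, status, reason.strip())   # HTTP/1.1 404 Not Found
```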

    def begin(self):
        if self.headers is not None:
            # we've already started reading the response
            return

        # read until we get a non-100 response
        while True:
            version, status, reason = self._read_status()
            if status != CONTINUE:
                break
            # skip the header from the 100 response
            while True:
                skip = self.fp.readline(_MAXLINE + 1)
                if len(skip) > _MAXLINE:
                    raise LineTooLong("header line")
                skip = skip.strip()
                if not skip:
                    break
                if self.debuglevel > 0:
                    print("header:", skip)

        self.code = self.status = status
        self.reason = reason.strip()
        if version in ("HTTP/1.0", "HTTP/0.9"):
            # Some servers might still return "0.9", treat it as 1.0 anyway
            self.version = 10
        elif version.startswith("HTTP/1."):
            self.version = 11   # use HTTP/1.1 code for HTTP/1.x where x>=1
        else:
            raise UnknownProtocol(version)

        self.headers = self.msg = parse_headers(self.fp)

        if self.debuglevel > 0:
            for hdr in self.headers:
                print("header:", hdr, end=" ")

        # are we using the chunked-style of transfer encoding?
        tr_enc = self.headers.get("transfer-encoding")
        if tr_enc and tr_enc.lower() == "chunked":
            self.chunked = True
            self.chunk_left = None
        else:
            self.chunked = False

        # will the connection close at the end of the response?
        self.will_close = self._check_close()

        # do we have a Content-Length?
        # NOTE: RFC 2616, S4.4, #3 says we ignore this if tr_enc is "chunked"
        self.length = None
        length = self.headers.get("content-length")

        if length and not self.chunked:
            try:
                self.length = int(length)
            except ValueError:
                self.length = None
            else:
                if self.length < 0:  # ignore nonsensical negative lengths
                    self.length = None
        else:
            self.length = None

        # does the body have a fixed length? (of zero)
        if (status == NO_CONTENT or status == NOT_MODIFIED or
            100 <= status < 200 or      # 1xx codes
            self._method == "HEAD"):
            self.length = 0

        # if the connection remains open, and we aren't using chunked, and
        # a content-length was not provided, then assume that the connection
        # WILL close.
        if (not self.will_close and
            not self.chunked and
            self.length is None):
            self.will_close = True

    def _check_close(self):
        conn = self.headers.get("connection")
        if self.version == 11:
            # An HTTP/1.1 proxy is assumed to stay open unless
            # explicitly closed.
            if conn and "close" in conn.lower():
                return True
            return False

        # Some HTTP/1.0 implementations have support for persistent
        # connections, using rules different than HTTP/1.1.

        # For older HTTP, Keep-Alive indicates persistent connection.
        if self.headers.get("keep-alive"):
            return False

        # At least Akamai returns a "Connection: Keep-Alive" header,
        # which was supposed to be sent by the client.
        if conn and "keep-alive" in conn.lower():
            return False

        # Proxy-Connection is a netscape hack.
        pconn = self.headers.get("proxy-connection")
        if pconn and "keep-alive" in pconn.lower():
            return False

        # otherwise, assume it will close
        return True
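The decision rules above can be condensed into a standalone sketch over a plain dict of lower-cased header names (`will_close` is a hypothetical helper, not part of this module; `version` is 11 for HTTP/1.1, 10 otherwise):

```python
# Sketch of the connection-close heuristic implemented by _check_close.
def will_close(version, headers):
    conn = headers.get("connection", "")
    if version == 11:
        # HTTP/1.1 defaults to persistent unless explicitly closed.
        return "close" in conn.lower()
    # Older HTTP: any Keep-Alive signal means the connection persists.
    if headers.get("keep-alive"):
        return False
    if "keep-alive" in conn.lower():
        return False
    if "keep-alive" in headers.get("proxy-connection", "").lower():
        return False
    return True                      # otherwise, assume it will close

print(will_close(11, {"connection": "close"}))       # → True
print(will_close(11, {}))                            # → False
print(will_close(10, {"connection": "Keep-Alive"}))  # → False
print(will_close(10, {}))                            # → True
```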

    def _close_conn(self):
        fp = self.fp
        self.fp = None
        fp.close()

    def close(self):
        super().close() # set "closed" flag
        if self.fp:
            self._close_conn()

    # These implementations are for the benefit of io.BufferedReader.

    # XXX This class should probably be revised to act more like
    # the "raw stream" that BufferedReader expects.

    def flush(self):
        super().flush()
        if self.fp:
            self.fp.flush()

    def readable(self):
        return True

    # End of "raw stream" methods

    def isclosed(self):
        """True if the connection is closed."""
        # NOTE: it is possible that we will not ever call self.close(). This
        #       case occurs when will_close is TRUE, length is None, and we
        #       read up to the last byte, but NOT past it.
        #
        # IMPLIES: if will_close is FALSE, then self.close() will ALWAYS be
        #          called, meaning self.isclosed() is meaningful.
        return self.fp is None

    def read(self, amt=None):
        if self.fp is None:
            return bytes(b"")

        if self._method == "HEAD":
            self._close_conn()
            return bytes(b"")

        if amt is not None:
            # Amount is given, so call base class version
            # (which is implemented in terms of self.readinto)
            return bytes(super(HTTPResponse, self).read(amt))
        else:
            # Amount is not given (unbounded read) so we must check self.length
            # and self.chunked

            if self.chunked:
                return self._readall_chunked()

            if self.length is None:
                s = self.fp.read()
            else:
                try:
                    s = self._safe_read(self.length)
                except IncompleteRead:
                    self._close_conn()
                    raise
                self.length = 0
            self._close_conn()        # we read everything
            return bytes(s)

    def readinto(self, b):
        if self.fp is None:
            return 0

        if self._method == "HEAD":
            self._close_conn()
            return 0

        if self.chunked:
            return self._readinto_chunked(b)

        if self.length is not None:
            if len(b) > self.length:
                # clip the read to the "end of response"
                b = memoryview(b)[0:self.length]

        # we do not use _safe_read() here because this may be a .will_close
        # connection, and the user is reading more bytes than will be provided
        # (for example, reading in 1k chunks)

        if PY2:
            data = self.fp.read(len(b))
            n = len(data)
            b[:n] = data
        else:
            n = self.fp.readinto(b)

        if not n and b:
            # Ideally, we would raise IncompleteRead if the content-length
            # wasn't satisfied, but it might break compatibility.
            self._close_conn()
        elif self.length is not None:
            self.length -= n
            if not self.length:
                self._close_conn()
        return n

    def _read_next_chunk_size(self):
        # Read the next chunk size from the file
        line = self.fp.readline(_MAXLINE + 1)
        if len(line) > _MAXLINE:
            raise LineTooLong("chunk size")
        i = line.find(b";")
        if i >= 0:
            line = line[:i] # strip chunk-extensions
        try:
            return int(line, 16)
        except ValueError:
            # close the connection as protocol synchronisation is
            # probably lost
            self._close_conn()
            raise
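The chunk-size parsing above on a concrete line: the size is hexadecimal, with any `;`-separated chunk extensions stripped first (a sketch, not the method itself):

```python
# Sketch: parse a chunked-encoding size line the way
# _read_next_chunk_size does.
line = b"1a;name=value\r\n"
i = line.find(b";")
if i >= 0:
    line = line[:i]      # strip chunk-extensions
print(int(line, 16))     # → 26
```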

    def _read_and_discard_trailer(self):
        # read and discard trailer up to the CRLF terminator
        ### note: we shouldn't have any trailers!
        while True:
            line = self.fp.readline(_MAXLINE + 1)
            if len(line) > _MAXLINE:
                raise LineTooLong("trailer line")
            if not line:
                # a vanishingly small number of sites EOF without
                # sending the trailer
                break
            if line in (b'\r\n', b'\n', b''):
                break

    def _readall_chunked(self):
        assert self.chunked != _UNKNOWN
        chunk_left = self.chunk_left
        value = []
        while True:
            if chunk_left is None:
                try:
                    chunk_left = self._read_next_chunk_size()
                    if chunk_left == 0:
                        break
                except ValueError:
                    raise IncompleteRead(bytes(b'').join(value))
            value.append(self._safe_read(chunk_left))

            # we read the whole chunk, get another
            self._safe_read(2)      # toss the CRLF at the end of the chunk
            chunk_left = None

        self._read_and_discard_trailer()

        # we read everything; close the "file"
        self._close_conn()

        return bytes(b'').join(value)
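The read loop above can be sketched as a standalone decoder over any binary file object (`decode_chunked` is a hypothetical helper for illustration; the method also handles partial reads via _safe_read, which this sketch omits):

```python
# Sketch: decode a complete chunked body, following the same loop
# structure as _readall_chunked above.
import io

def decode_chunked(fp):
    parts = []
    while True:
        size_line = fp.readline().split(b";")[0]
        size = int(size_line, 16)      # chunk size is hexadecimal
        if size == 0:
            break                      # last-chunk marker
        parts.append(fp.read(size))
        fp.read(2)                     # discard the CRLF after each chunk
    fp.readline()                      # discard the (empty) trailer terminator
    return b"".join(parts)

body = io.BytesIO(b"4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n")
print(decode_chunked(body))            # → b'Wikipedia'
```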

    def _readinto_chunked(self, b):
        assert self.chunked != _UNKNOWN
        chunk_left = self.chunk_left

        total_bytes = 0
        mvb = memoryview(b)
        while True:
            if chunk_left is None:
                try:
                    chunk_left = self._read_next_chunk_size()
                    if chunk_left == 0:
                        break
                except ValueError:
                    raise IncompleteRead(bytes(b[0:total_bytes]))

            if len(mvb) < chunk_left:
                n = self._safe_readinto(mvb)
                self.chunk_left = chunk_left - n
                return total_bytes + n
            elif len(mvb) == chunk_left:
                n = self._safe_readinto(mvb)
                self._safe_read(2)  # toss the CRLF at the end of the chunk
                self.chunk_left = None
                return total_bytes + n
            else:
                temp_mvb = mvb[0:chunk_left]
                n = self._safe_readinto(temp_mvb)
                mvb = mvb[n:]
                total_bytes += n

            # we read the whole chunk, get another
            self._safe_read(2)      # toss the CRLF at the end of the chunk
            chunk_left = None

        self._read_and_discard_trailer()

        # we read everything; close the "file"
        self._close_conn()

        return total_bytes

    def _safe_read(self, amt):
        """Read the number of bytes requested, compensating for partial reads.

        Normally, we have a blocking socket, but a read() can be interrupted
        by a signal (resulting in a partial read).

        Note that we cannot distinguish between EOF and an interrupt when zero
        bytes have been read. IncompleteRead() will be raised in this
        situation.

        This function should be used when <amt> bytes "should" be present for
        reading. If the bytes are truly not available (due to EOF), then the
        IncompleteRead exception can be used to detect the problem.
        """
        s = []
        while amt > 0:
            chunk = self.fp.read(min(amt, MAXAMOUNT))
            if not chunk:
                raise IncompleteRead(bytes(b'').join(s), amt)
            s.append(chunk)
            amt -= len(chunk)
        return bytes(b"").join(s)
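The compensating loop can be sketched standalone (`safe_read` is a hypothetical stand-in that raises ValueError where the method above raises IncompleteRead):

```python
# Sketch of _safe_read's loop: keep reading until exactly `amt` bytes
# arrive, tolerating short reads from the underlying file object.
import io

def safe_read(fp, amt, maxamount=1048576):
    parts = []
    while amt > 0:
        chunk = fp.read(min(amt, maxamount))
        if not chunk:
            # EOF before `amt` bytes arrived; the real method raises
            # IncompleteRead here.
            raise ValueError("incomplete read, %d bytes missing" % amt)
        parts.append(chunk)
        amt -= len(chunk)
    return b"".join(parts)

print(safe_read(io.BytesIO(b"hello world"), 5))   # → b'hello'
```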

    def _safe_readinto(self, b):
        """Same as _safe_read, but for reading into a buffer."""
        total_bytes = 0
        mvb = memoryview(b)
        while total_bytes < len(b):
            if MAXAMOUNT < len(mvb):
                temp_mvb = mvb[0:MAXAMOUNT]
                if PY2:
                    data = self.fp.read(len(temp_mvb))
                    n = len(data)
                    temp_mvb[:n] = data
                else:
                    n = self.fp.readinto(temp_mvb)
            else:
                if PY2:
                    data = self.fp.read(len(mvb))
                    n = len(data)
                    mvb[:n] = data
                else:
                    n = self.fp.readinto(mvb)
            if not n:
                raise IncompleteRead(bytes(mvb[0:total_bytes]), len(b))
            mvb = mvb[n:]
            total_bytes += n
        return total_bytes

    def fileno(self):
        return self.fp.fileno()

    def getheader(self, name, default=None):
        if self.headers is None:
            raise ResponseNotReady()
        headers = self.headers.get_all(name) or default
        if isinstance(headers, str) or not hasattr(headers, '__iter__'):
            return headers
        else:
            return ', '.join(headers)

    def getheaders(self):
        """Return list of (header, value) tuples."""
        if self.headers is None:
            raise ResponseNotReady()
        return list(self.headers.items())

    # We override IOBase.__iter__ so that it doesn't check for closed-ness

    def __iter__(self):
        return self

    # For compatibility with old-style urllib responses.

    def info(self):
        return self.headers

    def geturl(self):
        return self.url

    def getcode(self):
        return self.status

class HTTPConnection(object):

    _http_vsn = 11
    _http_vsn_str = 'HTTP/1.1'

    response_class = HTTPResponse
    default_port = HTTP_PORT
    auto_open = 1
    debuglevel = 0

    def __init__(self, host, port=None, strict=_strict_sentinel,
                 timeout=socket._GLOBAL_DEFAULT_TIMEOUT, source_address=None):
        if strict is not _strict_sentinel:
            warnings.warn("the 'strict' argument isn't supported anymore; "
                "http.client now always assumes HTTP/1.x compliant servers.",
                DeprecationWarning, 2)
        self.timeout = timeout
        self.source_address = source_address
        self.sock = None
        self._buffer = []
        self.__response = None
        self.__state = _CS_IDLE
        self._method = None
        self._tunnel_host = None
        self._tunnel_port = None
        self._tunnel_headers = {}

        self._set_hostport(host, port)

    def set_tunnel(self, host, port=None, headers=None):
        """ Sets up the host and the port for the HTTP CONNECT Tunnelling.

        The headers argument should be a mapping of extra HTTP headers
        to send with the CONNECT request.
        """
        self._tunnel_host = host
        self._tunnel_port = port
        if headers:
            self._tunnel_headers = headers
        else:
            self._tunnel_headers.clear()

    def _set_hostport(self, host, port):
        if port is None:
            i = host.rfind(':')
            j = host.rfind(']')         # ipv6 addresses have [...]
            if i > j:
                try:
                    port = int(host[i+1:])
                except ValueError:
                    if host[i+1:] == "": # http://foo.com:/ == http://foo.com/
                        port = self.default_port
                    else:
                        raise InvalidURL("nonnumeric port: '%s'" % host[i+1:])
                host = host[:i]
            else:
                port = self.default_port
            if host and host[0] == '[' and host[-1] == ']':
                host = host[1:-1]
        self.host = host
        self.port = port
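The host/port parsing above can be sketched as a standalone function (`split_hostport` is hypothetical; `default_port` stands in for the class attribute):

```python
# Sketch of _set_hostport's parsing, including the bracketed-IPv6 and
# empty-port ("http://foo.com:/") cases.
def split_hostport(host, port=None, default_port=80):
    if port is None:
        i = host.rfind(':')
        j = host.rfind(']')        # IPv6 literals are bracketed
        if i > j:
            port = int(host[i+1:]) if host[i+1:] else default_port
            host = host[:i]
        else:
            port = default_port
        if host and host[0] == '[' and host[-1] == ']':
            host = host[1:-1]      # strip the IPv6 brackets
    return host, port

print(split_hostport("example.com:8080"))   # → ('example.com', 8080)
print(split_hostport("[::1]:8443"))         # → ('::1', 8443)
print(split_hostport("example.com"))        # → ('example.com', 80)
```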

    def set_debuglevel(self, level):
        self.debuglevel = level

    def _tunnel(self):
        self._set_hostport(self._tunnel_host, self._tunnel_port)
        connect_str = "CONNECT %s:%d HTTP/1.0\r\n" % (self.host, self.port)
        connect_bytes = connect_str.encode("ascii")
        self.send(connect_bytes)
        for header, value in self._tunnel_headers.items():
            header_str = "%s: %s\r\n" % (header, value)
            header_bytes = header_str.encode("latin-1")
            self.send(header_bytes)
        self.send(bytes(b'\r\n'))

        response = self.response_class(self.sock, method=self._method)
        (version, code, message) = response._read_status()

        if code != 200:
            self.close()
            raise socket.error("Tunnel connection failed: %d %s" % (code,
                                                                    message.strip()))
        while True:
            line = response.fp.readline(_MAXLINE + 1)
            if len(line) > _MAXLINE:
                raise LineTooLong("header line")
            if not line:
                # for sites which EOF without sending a trailer
                break
            if line in (b'\r\n', b'\n', b''):
                break

    def connect(self):
        """Connect to the host and port specified in __init__."""
        self.sock = socket_create_connection((self.host,self.port),
                                             self.timeout, self.source_address)
        if self._tunnel_host:
            self._tunnel()

    def close(self):
        """Close the connection to the HTTP server."""
        if self.sock:
            self.sock.close()   # close it manually... there may be other refs
            self.sock = None
        if self.__response:
            self.__response.close()
            self.__response = None
        self.__state = _CS_IDLE

    def send(self, data):
        """Send ``data`` to the server.

        ``data`` can be a string object, a bytes object, an array object, a
        file-like object that supports a .read() method, or an iterable object.
        """

        if self.sock is None:
            if self.auto_open:
                self.connect()
            else:
                raise NotConnected()

        if self.debuglevel > 0:
            print("send:", repr(data))
        blocksize = 8192
        # Python 2.7 array objects have a read method which is incompatible
        # with the 2-arg calling syntax below.
        if hasattr(data, "read") and not isinstance(data, array):
            if self.debuglevel > 0:
                print("sending a read()able")
            encode = False
            try:
                mode = data.mode
            except AttributeError:
                # io.BytesIO and other file-like objects don't have a `mode`
                # attribute.
                pass
            else:
                if "b" not in mode:
                    encode = True
                    if self.debuglevel > 0:
                        print("encoding file using iso-8859-1")
            while 1:
                datablock = data.read(blocksize)
                if not datablock:
                    break
                if encode:
                    datablock = datablock.encode("iso-8859-1")
                self.sock.sendall(datablock)
            return
        try:
            self.sock.sendall(data)
        except TypeError:
            if isinstance(data, Iterable):
                for d in data:
                    self.sock.sendall(d)
            else:
                raise TypeError("data should be a bytes-like object "
                                "or an iterable, got %r" % type(data))

    def _output(self, s):
        """Add a line of output to the current request buffer.

        Assumes that the line does *not* end with \\r\\n.
        """
        self._buffer.append(s)

    def _send_output(self, message_body=None):
        """Send the currently buffered request and clear the buffer.

        Appends an extra \\r\\n to the buffer.
        A message_body may be specified, to be appended to the request.
        """
        self._buffer.extend((bytes(b""), bytes(b"")))
        msg = bytes(b"\r\n").join(self._buffer)
        del self._buffer[:]
        # If msg and message_body are sent in a single send() call,
        # it will avoid performance problems caused by the interaction
        # between delayed ack and the Nagle algorithm.
        if isinstance(message_body, bytes):
            msg += message_body
            message_body = None
        self.send(msg)
        if message_body is not None:
            # message_body was not a string (i.e. it is a file), and
            # we must run the risk of Nagle.
            self.send(message_body)
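How the buffer becomes wire bytes can be shown directly (a sketch of the join performed above): two empty byte strings are appended so the joined request ends with the blank line that terminates the header block.

```python
# Sketch: assembling buffered request lines into wire bytes, as
# _send_output does.
buffer = [b"GET /index.html HTTP/1.1", b"Host: example.com"]
buffer.extend((b"", b""))          # yields the trailing blank line
msg = b"\r\n".join(buffer)
print(msg)
# → b'GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n'
```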

    def putrequest(self, method, url, skip_host=0, skip_accept_encoding=0):
        """Send a request to the server.

        `method' specifies an HTTP request method, e.g. 'GET'.
        `url' specifies the object being requested, e.g. '/index.html'.
        `skip_host' if True does not add automatically a 'Host:' header
        `skip_accept_encoding' if True does not add automatically an
           'Accept-Encoding:' header
        """

        # if a prior response has been completed, then forget about it.
        if self.__response and self.__response.isclosed():
            self.__response = None


        # in certain cases, we cannot issue another request on this connection.
        # this occurs when:
        #   1) we are in the process of sending a request.   (_CS_REQ_STARTED)
        #   2) a response to a previous request has signalled that it is going
        #      to close the connection upon completion.
        #   3) the headers for the previous response have not been read, thus
        #      we cannot determine whether point (2) is true.   (_CS_REQ_SENT)
        #
        # if there is no prior response, then we can request at will.
        #
        # if point (2) is true, then we will have passed the socket to the
        # response (effectively meaning, "there is no prior response"), and
        # will open a new one when a new request is made.
        #
        # Note: if a prior response exists, then we *can* start a new request.
        #       We are not allowed to begin fetching the response to this new
        #       request, however, until that prior response is complete.
        #
        if self.__state == _CS_IDLE:
            self.__state = _CS_REQ_STARTED
        else:
            raise CannotSendRequest(self.__state)

        # Save the method we use, we need it later in the response phase
        self._method = method
        if not url:
            url = '/'
        request = '%s %s %s' % (method, url, self._http_vsn_str)

        # Non-ASCII characters should have been eliminated earlier
        self._output(request.encode('ascii'))

        if self._http_vsn == 11:
            # Issue some standard headers for better HTTP/1.1 compliance

            if not skip_host:
                # this header is issued *only* for HTTP/1.1
                # connections. more specifically, this means it is
                # only issued when the client uses the new
                # HTTPConnection() class. backwards-compat clients
                # will be using HTTP/1.0 and those clients may be
                # issuing this header themselves. we should NOT issue
                # it twice; some web servers (such as Apache) barf
                # when they see two Host: headers

                # If we need a non-standard port, include it in the
                # header.  If the request is going through a proxy,
                # use the host of the actual URL, not the host of the
                # proxy.

                netloc = ''
                if url.startswith('http'):
                    nil, netloc, nil, nil, nil = urlsplit(url)

                if netloc:
                    try:
                        netloc_enc = netloc.encode("ascii")
                    except UnicodeEncodeError:
                        netloc_enc = netloc.encode("idna")
                    self.putheader('Host', netloc_enc)
                else:
                    try:
                        host_enc = self.host.encode("ascii")
                    except UnicodeEncodeError:
                        host_enc = self.host.encode("idna")

                    # As per RFC 2732, an IPv6 address should be wrapped
                    # in [] when used as a Host header

                    if self.host.find(':') >= 0:
                        host_enc = bytes(b'[' + host_enc + b']')

                    if self.port == self.default_port:
                        self.putheader('Host', host_enc)
                    else:
                        host_enc = host_enc.decode("ascii")
                        self.putheader('Host', "%s:%s" % (host_enc, self.port))

            # note: we are assuming that clients will not attempt to set these
            #       headers since *this* library must deal with the
            #       consequences. this also means that when the supporting
            #       libraries are updated to recognize other forms, then this
            #       code should be changed (removed or updated).

            # we only want a Content-Encoding of "identity" since we don't
            # support encodings such as x-gzip or x-deflate.
            if not skip_accept_encoding:
                self.putheader('Accept-Encoding', 'identity')

            # we can accept "chunked" Transfer-Encodings, but no others
            # NOTE: no TE header implies *only* "chunked"
            #self.putheader('TE', 'chunked')

            # if TE is supplied in the header, then it must appear in a
            # Connection header.
            #self.putheader('Connection', 'TE')

        else:
            # For HTTP/1.0, the server will assume "not chunked"
            pass

    def putheader(self, header, *values):
        """Send a request header line to the server.

        For example: h.putheader('Accept', 'text/html')
        """
        if self.__state != _CS_REQ_STARTED:
            raise CannotSendHeader()

        if hasattr(header, 'encode'):
            header = header.encode('ascii')
        values = list(values)
        for i, one_value in enumerate(values):
            if hasattr(one_value, 'encode'):
                values[i] = one_value.encode('latin-1')
            elif isinstance(one_value, int):
                values[i] = str(one_value).encode('ascii')
        value = bytes(b'\r\n\t').join(values)
        header = header + bytes(b': ') + value
        self._output(header)
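The encoding rules in putheader above can be isolated into a small sketch (`fold_header` is an illustrative name, not part of this module): str names are encoded as ASCII, str values as latin-1, ints as their ASCII digits, and multiple values are folded onto continuation lines with CRLF + TAB.

```python
def fold_header(header, *values):
    """Sketch of the header-encoding rules used by putheader above."""
    if hasattr(header, 'encode'):
        header = header.encode('ascii')       # header names must be ASCII
    encoded = []
    for v in values:
        if hasattr(v, 'encode'):
            encoded.append(v.encode('latin-1'))   # values may carry latin-1 octets
        elif isinstance(v, int):
            encoded.append(str(v).encode('ascii'))
        else:
            encoded.append(v)                 # already bytes
    return header + b': ' + b'\r\n\t'.join(encoded)
```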

    def endheaders(self, message_body=None):
        """Indicate that the last header line has been sent to the server.

        This method sends the request to the server.  The optional message_body
        argument can be used to pass a message body associated with the
        request.  The message body will be sent in the same packet as the
        message headers if it is a string, otherwise it is sent as a separate
        packet.
        """
        if self.__state == _CS_REQ_STARTED:
            self.__state = _CS_REQ_SENT
        else:
            raise CannotSendHeader()
        self._send_output(message_body)

    def request(self, method, url, body=None, headers={}):
        """Send a complete request to the server."""
        self._send_request(method, url, body, headers)

    def _set_content_length(self, body):
        # Set the content-length based on the body.
        thelen = None
        try:
            thelen = str(len(body))
        except TypeError as te:
            # If this is a file-like object, try to
            # fstat its file descriptor
            try:
                thelen = str(os.fstat(body.fileno()).st_size)
            except (AttributeError, OSError):
                # Don't send a length if this failed
                if self.debuglevel > 0: print("Cannot stat!!")

        if thelen is not None:
            self.putheader('Content-Length', thelen)

    def _send_request(self, method, url, body, headers):
        # Honor explicitly requested Host: and Accept-Encoding: headers.
        header_names = dict.fromkeys([k.lower() for k in headers])
        skips = {}
        if 'host' in header_names:
            skips['skip_host'] = 1
        if 'accept-encoding' in header_names:
            skips['skip_accept_encoding'] = 1

        self.putrequest(method, url, **skips)

        if body is not None and ('content-length' not in header_names):
            self._set_content_length(body)
        for hdr, value in headers.items():
            self.putheader(hdr, value)
        if isinstance(body, str):
            # RFC 2616 Section 3.7.1 says that text types have a
            # default charset of iso-8859-1.
            body = body.encode('iso-8859-1')
        self.endheaders(body)

    def getresponse(self):
        """Get the response from the server.

        If the HTTPConnection is in the correct state, returns an
        instance of HTTPResponse or of whatever object is returned by
        the response_class variable.

        If a request has not been sent or if a previous response has
        not been handled, ResponseNotReady is raised.  If the HTTP
        response indicates that the connection should be closed, then
        it will be closed before the response is returned.  When the
        connection is closed, the underlying socket is closed.
        """

        # if a prior response has been completed, then forget about it.
        if self.__response and self.__response.isclosed():
            self.__response = None

        # if a prior response exists, then it must be completed (otherwise, we
        # cannot read this response's header to determine the connection-close
        # behavior)
        #
        # note: if a prior response existed, but was connection-close, then the
        # socket and response were made independent of this HTTPConnection
        # object since a new request requires that we open a whole new
        # connection
        #
        # this means the prior response had one of two states:
        #   1) will_close: this connection was reset and the prior socket and
        #                  response operate independently
        #   2) persistent: the response was retained and we await its
        #                  isclosed() status to become true.
        #
        if self.__state != _CS_REQ_SENT or self.__response:
            raise ResponseNotReady(self.__state)

        if self.debuglevel > 0:
            response = self.response_class(self.sock, self.debuglevel,
                                           method=self._method)
        else:
            response = self.response_class(self.sock, method=self._method)

        response.begin()
        assert response.will_close != _UNKNOWN
        self.__state = _CS_IDLE

        if response.will_close:
            # this effectively passes the connection to the response
            self.close()
        else:
            # remember this, so we can tell when it is complete
            self.__response = response

        return response

try:
    import ssl
    from ssl import SSLContext
except ImportError:
    pass
else:
    class HTTPSConnection(HTTPConnection):
        "This class allows communication via SSL."

        default_port = HTTPS_PORT

        # XXX Should key_file and cert_file be deprecated in favour of context?

        def __init__(self, host, port=None, key_file=None, cert_file=None,
                     strict=_strict_sentinel, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
                     source_address=None, **_3to2kwargs):
            if 'check_hostname' in _3to2kwargs: check_hostname = _3to2kwargs['check_hostname']; del _3to2kwargs['check_hostname']
            else: check_hostname = None
            if 'context' in _3to2kwargs: context = _3to2kwargs['context']; del _3to2kwargs['context']
            else: context = None
            super(HTTPSConnection, self).__init__(host, port, strict, timeout,
                                                  source_address)
            self.key_file = key_file
            self.cert_file = cert_file
            if context is None:
                # Some reasonable defaults
                context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
                context.options |= ssl.OP_NO_SSLv2
            will_verify = context.verify_mode != ssl.CERT_NONE
            if check_hostname is None:
                check_hostname = will_verify
            elif check_hostname and not will_verify:
                raise ValueError("check_hostname needs an SSL context with "
                                 "either CERT_OPTIONAL or CERT_REQUIRED")
            if key_file or cert_file:
                context.load_cert_chain(cert_file, key_file)
            self._context = context
            self._check_hostname = check_hostname

        def connect(self):
            "Connect to a host on a given (SSL) port."

            sock = socket_create_connection((self.host, self.port),
                                            self.timeout, self.source_address)

            if self._tunnel_host:
                self.sock = sock
                self._tunnel()

            server_hostname = self.host if ssl.HAS_SNI else None
            self.sock = self._context.wrap_socket(sock,
                                                  server_hostname=server_hostname)
            try:
                if self._check_hostname:
                    ssl.match_hostname(self.sock.getpeercert(), self.host)
            except Exception:
                self.sock.shutdown(socket.SHUT_RDWR)
                self.sock.close()
                raise

    __all__.append("HTTPSConnection")


    # ######################################
    # # We use the old HTTPSConnection class from Py2.7, because ssl.SSLContext
    # # doesn't exist in the Py2.7 stdlib
    # class HTTPSConnection(HTTPConnection):
    #     "This class allows communication via SSL."

    #     default_port = HTTPS_PORT

    #     def __init__(self, host, port=None, key_file=None, cert_file=None,
    #                  strict=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
    #                  source_address=None):
    #         HTTPConnection.__init__(self, host, port, strict, timeout,
    #                                 source_address)
    #         self.key_file = key_file
    #         self.cert_file = cert_file

    #     def connect(self):
    #         "Connect to a host on a given (SSL) port."

    #         sock = socket_create_connection((self.host, self.port),
    #                                         self.timeout, self.source_address)
    #         if self._tunnel_host:
    #             self.sock = sock
    #             self._tunnel()
    #         self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file)

    # __all__.append("HTTPSConnection")
    # ######################################


class HTTPException(Exception):
    # Subclasses that define an __init__ must call Exception.__init__
    # or define self.args.  Otherwise, str() will fail.
    pass

class NotConnected(HTTPException):
    pass

class InvalidURL(HTTPException):
    pass

class UnknownProtocol(HTTPException):
    def __init__(self, version):
        self.args = version,
        self.version = version

class UnknownTransferEncoding(HTTPException):
    pass

class UnimplementedFileMode(HTTPException):
    pass

class IncompleteRead(HTTPException):
    def __init__(self, partial, expected=None):
        self.args = partial,
        self.partial = partial
        self.expected = expected
    def __repr__(self):
        if self.expected is not None:
            e = ', %i more expected' % self.expected
        else:
            e = ''
        return 'IncompleteRead(%i bytes read%s)' % (len(self.partial), e)
    def __str__(self):
        return repr(self)

class ImproperConnectionState(HTTPException):
    pass

class CannotSendRequest(ImproperConnectionState):
    pass

class CannotSendHeader(ImproperConnectionState):
    pass

class ResponseNotReady(ImproperConnectionState):
    pass

class BadStatusLine(HTTPException):
    def __init__(self, line):
        if not line:
            line = repr(line)
        self.args = line,
        self.line = line

class LineTooLong(HTTPException):
    def __init__(self, line_type):
        HTTPException.__init__(self, "got more than %d bytes when reading %s"
                                     % (_MAXLINE, line_type))

# for backwards compatibility
error = HTTPException
r"""HTTP cookie handling for web clients.

This is a backport of the Py3.3 ``http.cookiejar`` module for
python-future.

This module has (now fairly distant) origins in Gisle Aas' Perl module
HTTP::Cookies, from the libwww-perl library.

Docstrings, comments and debug strings in this code refer to the
attributes of the HTTP cookie system as cookie-attributes, to distinguish
them clearly from Python attributes.

Class diagram (note that BSDDBCookieJar and the MSIE* classes are not
distributed with the Python standard library, but are available from
http://wwwsearch.sf.net/):

                        CookieJar____
                        /     \      \
            FileCookieJar      \      \
             /    |   \         \      \
 MozillaCookieJar | LWPCookieJar \      \
                  |               |      \
                  |   ---MSIEBase |       \
                  |  /      |     |        \
                  | /   MSIEDBCookieJar BSDDBCookieJar
                  |/
               MSIECookieJar

"""

from __future__ import unicode_literals
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from future.builtins import filter, int, map, open, str
from future.utils import as_native_str, PY2

__all__ = ['Cookie', 'CookieJar', 'CookiePolicy', 'DefaultCookiePolicy',
           'FileCookieJar', 'LWPCookieJar', 'LoadError', 'MozillaCookieJar']

import copy
import datetime
import re
if PY2:
    re.ASCII = 0
import time
from future.backports.urllib.parse import urlparse, urlsplit, quote
from future.backports.http.client import HTTP_PORT
try:
    import threading as _threading
except ImportError:
    import dummy_threading as _threading
from calendar import timegm

debug = False   # set to True to enable debugging via the logging module
logger = None

def _debug(*args):
    if not debug:
        return
    global logger
    if not logger:
        import logging
        logger = logging.getLogger("http.cookiejar")
    return logger.debug(*args)


DEFAULT_HTTP_PORT = str(HTTP_PORT)
MISSING_FILENAME_TEXT = ("a filename was not supplied (nor was the CookieJar "
                         "instance initialised with one)")

def _warn_unhandled_exception():
    # There are a few catch-all except: statements in this module, for
    # catching input that's bad in unexpected ways.  Warn if any
    # exceptions are caught there.
    import io, warnings, traceback
    f = io.StringIO()
    traceback.print_exc(None, f)
    msg = f.getvalue()
    warnings.warn("http.cookiejar bug!\n%s" % msg, stacklevel=2)


# Date/time conversion
# -----------------------------------------------------------------------------

EPOCH_YEAR = 1970
def _timegm(tt):
    year, month, mday, hour, min, sec = tt[:6]
    if ((year >= EPOCH_YEAR) and (1 <= month <= 12) and (1 <= mday <= 31) and
        (0 <= hour <= 24) and (0 <= min <= 59) and (0 <= sec <= 61)):
        return timegm(tt)
    else:
        return None

DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
MONTHS_LOWER = []
for month in MONTHS: MONTHS_LOWER.append(month.lower())

def time2isoz(t=None):
    """Return a string representing time in seconds since epoch, t.

    If the function is called without an argument, it will use the current
    time.

    The format of the returned string is like "YYYY-MM-DD hh:mm:ssZ",
    representing Universal Time (UTC, aka GMT).  An example of this format is:

    1994-11-24 08:49:37Z

    """
    if t is None:
        dt = datetime.datetime.utcnow()
    else:
        dt = datetime.datetime.utcfromtimestamp(t)
    return "%04d-%02d-%02d %02d:%02d:%02dZ" % (
        dt.year, dt.month, dt.day, dt.hour, dt.minute, dt.second)

def time2netscape(t=None):
    """Return a string representing time in seconds since epoch, t.

    If the function is called without an argument, it will use the current
    time.

    The format of the returned string is like this:

    Wed, DD-Mon-YYYY HH:MM:SS GMT

    """
    if t is None:
        dt = datetime.datetime.utcnow()
    else:
        dt = datetime.datetime.utcfromtimestamp(t)
    return "%s %02d-%s-%04d %02d:%02d:%02d GMT" % (
        DAYS[dt.weekday()], dt.day, MONTHS[dt.month-1],
        dt.year, dt.hour, dt.minute, dt.second)


UTC_ZONES = {"GMT": None, "UTC": None, "UT": None, "Z": None}

TIMEZONE_RE = re.compile(r"^([-+])?(\d\d?):?(\d\d)?$", re.ASCII)
def offset_from_tz_string(tz):
    offset = None
    if tz in UTC_ZONES:
        offset = 0
    else:
        m = TIMEZONE_RE.search(tz)
        if m:
            offset = 3600 * int(m.group(2))
            if m.group(3):
                offset = offset + 60 * int(m.group(3))
            if m.group(1) == '-':
                offset = -offset
    return offset
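The offset parsing above works like this self-contained sketch (`tz_offset` is an illustrative name): UTC aliases map to zero, numeric offsets like "+0130" or "-05" map to signed seconds, anything else to None.

```python
import re

_TZ_RE = re.compile(r"^([-+])?(\d\d?):?(\d\d)?$")

def tz_offset(tz):
    """Sketch of offset_from_tz_string above."""
    if tz in ("GMT", "UTC", "UT", "Z"):
        return 0
    m = _TZ_RE.search(tz)
    if not m:
        return None                     # unrecognized timezone string
    offset = 3600 * int(m.group(2))     # hours
    if m.group(3):
        offset += 60 * int(m.group(3))  # optional minutes
    return -offset if m.group(1) == '-' else offset
```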

def _str2time(day, mon, yr, hr, min, sec, tz):
    # translate month name to number
    # month numbers start with 1 (January)
    try:
        mon = MONTHS_LOWER.index(mon.lower())+1
    except ValueError:
        # maybe it's already a number
        try:
            imon = int(mon)
        except ValueError:
            return None
        if 1 <= imon <= 12:
            mon = imon
        else:
            return None

    # make sure clock elements are defined
    if hr is None: hr = 0
    if min is None: min = 0
    if sec is None: sec = 0

    yr = int(yr)
    day = int(day)
    hr = int(hr)
    min = int(min)
    sec = int(sec)

    if yr < 1000:
        # find "obvious" year
        cur_yr = time.localtime(time.time())[0]
        m = cur_yr % 100
        tmp = yr
        yr = yr + cur_yr - m
        m = m - tmp
        if abs(m) > 50:
            if m > 0: yr = yr + 100
            else: yr = yr - 100

    # convert UTC time tuple to seconds since epoch (not timezone-adjusted)
    t = _timegm((yr, mon, day, hr, min, sec, tz))

    if t is not None:
        # adjust time using timezone string, to get absolute time since epoch
        if tz is None:
            tz = "UTC"
        tz = tz.upper()
        offset = offset_from_tz_string(tz)
        if offset is None:
            return None
        t = t - offset

    return t
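The two-digit-year windowing inside _str2time above is easier to see in isolation; this sketch parameterizes the current year so it is deterministic (`obvious_year` is a hypothetical name):

```python
def obvious_year(yr, cur_yr):
    """Pick the century that puts a 2-digit year yr within 50 years of
    cur_yr, mirroring the yr < 1000 branch of _str2time above."""
    m = cur_yr % 100
    tmp = yr
    yr = yr + cur_yr - m            # same century as cur_yr
    m = m - tmp
    if abs(m) > 50:                 # more than 50 years away: shift a century
        yr = yr + 100 if m > 0 else yr - 100
    return yr
```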

STRICT_DATE_RE = re.compile(
    r"^[SMTWF][a-z][a-z], (\d\d) ([JFMASOND][a-z][a-z]) "
    r"(\d\d\d\d) (\d\d):(\d\d):(\d\d) GMT$", re.ASCII)
WEEKDAY_RE = re.compile(
    r"^(?:Sun|Mon|Tue|Wed|Thu|Fri|Sat)[a-z]*,?\s*", re.I | re.ASCII)
LOOSE_HTTP_DATE_RE = re.compile(
    r"""^
    (\d\d?)            # day
       (?:\s+|[-\/])
    (\w+)              # month
        (?:\s+|[-\/])
    (\d+)              # year
    (?:
          (?:\s+|:)    # separator before clock
       (\d\d?):(\d\d)  # hour:min
       (?::(\d\d))?    # optional seconds
    )?                 # optional clock
       \s*
    (?:
       ([-+]?\d{2,4}|(?![APap][Mm]\b)[A-Za-z]+) # timezone
       \s*
    )?
    (?:
       \(\w+\)         # ASCII representation of timezone in parens.
       \s*
    )?$""", re.X | re.ASCII)
def http2time(text):
    """Returns time in seconds since epoch of time represented by a string.

    Return value is an integer.

    None is returned if the format of the string is unrecognized, the time is
    outside the representable range, or the timezone string is not recognized.
    If the string contains no timezone, UTC is assumed.

    The timezone in the string may be numerical (like "-0800" or "+0100") or a
    string timezone (like "UTC", "GMT", "BST" or "EST").  Currently, only the
    timezone strings equivalent to UTC (zero offset) are known to the function.

    The function loosely parses the following formats:

    Wed, 09 Feb 1994 22:23:32 GMT       -- HTTP format
    Tuesday, 08-Feb-94 14:15:29 GMT     -- old rfc850 HTTP format
    Tuesday, 08-Feb-1994 14:15:29 GMT   -- broken rfc850 HTTP format
    09 Feb 1994 22:23:32 GMT            -- HTTP format (no weekday)
    08-Feb-94 14:15:29 GMT              -- rfc850 format (no weekday)
    08-Feb-1994 14:15:29 GMT            -- broken rfc850 format (no weekday)

    The parser ignores leading and trailing whitespace.  The time may be
    absent.

    If the year is given with only 2 digits, the function will select the
    century that makes the year closest to the current date.

    """
    # fast exit for strictly conforming string
    m = STRICT_DATE_RE.search(text)
    if m:
        g = m.groups()
        mon = MONTHS_LOWER.index(g[1].lower()) + 1
        tt = (int(g[2]), mon, int(g[0]),
              int(g[3]), int(g[4]), float(g[5]))
        return _timegm(tt)

    # No, we need some messy parsing...

    # clean up
    text = text.lstrip()
    text = WEEKDAY_RE.sub("", text, 1)  # Useless weekday

    # tz is time zone specifier string
    day, mon, yr, hr, min, sec, tz = [None]*7

    # loose regexp parse
    m = LOOSE_HTTP_DATE_RE.search(text)
    if m is not None:
        day, mon, yr, hr, min, sec, tz = m.groups()
    else:
        return None  # bad format

    return _str2time(day, mon, yr, hr, min, sec, tz)
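The strict fast path of http2time above can be reproduced as a self-contained sketch (`strict_http2time` is an illustrative name) for dates that exactly match the RFC 1123 format:

```python
import calendar
import re

MONTHS_L = ["jan", "feb", "mar", "apr", "may", "jun",
            "jul", "aug", "sep", "oct", "nov", "dec"]
STRICT = re.compile(r"^[SMTWF][a-z][a-z], (\d\d) ([JFMASOND][a-z][a-z]) "
                    r"(\d\d\d\d) (\d\d):(\d\d):(\d\d) GMT$")

def strict_http2time(text):
    """Fast path of http2time above for strictly conforming dates;
    None for anything that needs the loose parser."""
    m = STRICT.search(text)
    if not m:
        return None
    g = m.groups()
    mon = MONTHS_L.index(g[1].lower()) + 1
    return calendar.timegm((int(g[2]), mon, int(g[0]),
                            int(g[3]), int(g[4]), int(g[5])))
```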

ISO_DATE_RE = re.compile(
    r"""^
    (\d{4})              # year
       [-\/]?
    (\d\d?)              # numerical month
       [-\/]?
    (\d\d?)              # day
   (?:
         (?:\s+|[-:Tt])  # separator before clock
      (\d\d?):?(\d\d)    # hour:min
      (?::?(\d\d(?:\.\d*)?))?  # optional seconds (and fractional)
   )?                    # optional clock
      \s*
   (?:
      ([-+]?\d\d?:?(:?\d\d)?
       |Z|z)             # timezone  (Z is "zero meridian", i.e. GMT)
      \s*
   )?$""", re.X | re.ASCII)
def iso2time(text):
    """
    As for http2time, but parses the ISO 8601 formats:

    1994-02-03 14:15:29 -0100    -- ISO 8601 format
    1994-02-03 14:15:29          -- zone is optional
    1994-02-03                   -- only date
    1994-02-03T14:15:29          -- Use T as separator
    19940203T141529Z             -- ISO 8601 compact format
    19940203                     -- only date

    """
    # clean up
    text = text.lstrip()

    # tz is time zone specifier string
    day, mon, yr, hr, min, sec, tz = [None]*7

    # loose regexp parse
    m = ISO_DATE_RE.search(text)
    if m is not None:
        # XXX there's an extra bit of the timezone I'm ignoring here: is
        #   this the right thing to do?
        yr, mon, day, hr, min, sec, tz, _ = m.groups()
    else:
        return None  # bad format

    return _str2time(day, mon, yr, hr, min, sec, tz)


# Header parsing
# -----------------------------------------------------------------------------

def unmatched(match):
    """Return unmatched part of re.Match object."""
    start, end = match.span(0)
    return match.string[:start]+match.string[end:]

HEADER_TOKEN_RE =        re.compile(r"^\s*([^=\s;,]+)")
HEADER_QUOTED_VALUE_RE = re.compile(r"^\s*=\s*\"([^\"\\]*(?:\\.[^\"\\]*)*)\"")
HEADER_VALUE_RE =        re.compile(r"^\s*=\s*([^\s;,]*)")
HEADER_ESCAPE_RE = re.compile(r"\\(.)")
def split_header_words(header_values):
    r"""Parse header values into a list of lists containing key,value pairs.

    The function knows how to deal with ",", ";" and "=" as well as quoted
    values after "=".  A list of space separated tokens are parsed as if they
    were separated by ";".

    If the header_values passed as argument contains multiple values, then they
    are treated as if they were a single value separated by comma ",".

    This means that this function is useful for parsing header fields that
    follow this syntax (BNF as from the HTTP/1.1 specification, but we relax
    the requirement for tokens).

      headers           = #header
      header            = (token | parameter) *( [";"] (token | parameter))

      token             = 1*<any CHAR except CTLs or separators>
      separators        = "(" | ")" | "<" | ">" | "@"
                        | "," | ";" | ":" | "\" | <">
                        | "/" | "[" | "]" | "?" | "="
                        | "{" | "}" | SP | HT

      quoted-string     = ( <"> *(qdtext | quoted-pair ) <"> )
      qdtext            = <any TEXT except <">>
      quoted-pair       = "\" CHAR

      parameter         = attribute "=" value
      attribute         = token
      value             = token | quoted-string

    Each header is represented by a list of key/value pairs.  The value for a
    simple token (not part of a parameter) is None.  Syntactically incorrect
    headers will not necessarily be parsed as you would want.

    This is easier to describe with some examples:

    >>> split_header_words(['foo="bar"; port="80,81"; discard, bar=baz'])
    [[('foo', 'bar'), ('port', '80,81'), ('discard', None)], [('bar', 'baz')]]
    >>> split_header_words(['text/html; charset="iso-8859-1"'])
    [[('text/html', None), ('charset', 'iso-8859-1')]]
    >>> split_header_words([r'Basic realm="\"foo\bar\""'])
    [[('Basic', None), ('realm', '"foobar"')]]

    """
    assert not isinstance(header_values, str)
    result = []
    for text in header_values:
        orig_text = text
        pairs = []
        while text:
            m = HEADER_TOKEN_RE.search(text)
            if m:
                text = unmatched(m)
                name = m.group(1)
                m = HEADER_QUOTED_VALUE_RE.search(text)
                if m:  # quoted value
                    text = unmatched(m)
                    value = m.group(1)
                    value = HEADER_ESCAPE_RE.sub(r"\1", value)
                else:
                    m = HEADER_VALUE_RE.search(text)
                    if m:  # unquoted value
                        text = unmatched(m)
                        value = m.group(1)
                        value = value.rstrip()
                    else:
                        # no value, a lone token
                        value = None
                pairs.append((name, value))
            elif text.lstrip().startswith(","):
                # concatenated headers, as per RFC 2616 section 4.2
                text = text.lstrip()[1:]
                if pairs: result.append(pairs)
                pairs = []
            else:
                # skip junk
                non_junk, nr_junk_chars = re.subn(r"^[=\s;]*", "", text)
                assert nr_junk_chars > 0, (
                    "split_header_words bug: '%s', '%s', %s" %
                    (orig_text, text, pairs))
                text = non_junk
        if pairs: result.append(pairs)
    return result

HEADER_JOIN_ESCAPE_RE = re.compile(r"([\"\\])")
def join_header_words(lists):
    """Do the inverse (almost) of the conversion done by split_header_words.

    Takes a list of lists of (key, value) pairs and produces a single header
    value.  Attribute values are quoted if needed.

    >>> join_header_words([[("text/plain", None), ("charset", "iso-8859/1")]])
    'text/plain; charset="iso-8859/1"'
    >>> join_header_words([[("text/plain", None)], [("charset", "iso-8859/1")]])
    'text/plain, charset="iso-8859/1"'

    """
    headers = []
    for pairs in lists:
        attr = []
        for k, v in pairs:
            if v is not None:
                if not re.search(r"^\w+$", v):
                    v = HEADER_JOIN_ESCAPE_RE.sub(r"\\\1", v)  # escape " and \
                    v = '"%s"' % v
                k = "%s=%s" % (k, v)
            attr.append(k)
        if attr: headers.append("; ".join(attr))
    return ", ".join(headers)

def strip_quotes(text):
    if text.startswith('"'):
        text = text[1:]
    if text.endswith('"'):
        text = text[:-1]
    return text

def parse_ns_headers(ns_headers):
    """Ad-hoc parser for Netscape protocol cookie-attributes.

    The old Netscape cookie format for Set-Cookie can for instance contain
    an unquoted "," in the expires field, so we have to use this ad-hoc
    parser instead of split_header_words.

    XXX This may not make the best possible effort to parse all the crap
    that Netscape Cookie headers contain.  Ronald Tschalar's HTTPClient
    parser is probably better, so we could do worse than following that if
    this ever gives any trouble.

    Currently, this is also used for parsing RFC 2109 cookies.

    """
    known_attrs = ("expires", "domain", "path", "secure",
                   # RFC 2109 attrs (may turn up in Netscape cookies, too)
                   "version", "port", "max-age")

    result = []
    for ns_header in ns_headers:
        pairs = []
        version_set = False
        for ii, param in enumerate(re.split(r";\s*", ns_header)):
            param = param.rstrip()
            if param == "": continue
            if "=" not in param:
                k, v = param, None
            else:
                k, v = re.split(r"\s*=\s*", param, 1)
                k = k.lstrip()
            if ii != 0:
                lc = k.lower()
                if lc in known_attrs:
                    k = lc
                if k == "version":
                    # This is an RFC 2109 cookie.
                    v = strip_quotes(v)
                    version_set = True
                if k == "expires":
                    # convert expires date to seconds since epoch
                    v = http2time(strip_quotes(v))  # None if invalid
            pairs.append((k, v))

        if pairs:
            if not version_set:
                pairs.append(("version", "0"))
            result.append(pairs)

    return result


# Crude heuristic: a name ending in ".<digits>" is treated as an IP address.
IPV4_RE = re.compile(r"\.\d+$", re.ASCII)
def is_HDN(text):
    """Return True if text is a host domain name."""
    # XXX
    # This may well be wrong.  Which RFC is HDN defined in, if any (for
    #  the purposes of RFC 2965)?
    # For the current implementation, what about IPv6?  Remember to look
    #  at other uses of IPV4_RE also, if change this.
    if IPV4_RE.search(text):
        return False
    if text == "":
        return False
    if text[0] == "." or text[-1] == ".":
        return False
    return True

def domain_match(A, B):
    """Return True if domain A domain-matches domain B, according to RFC 2965.

    A and B may be host domain names or IP addresses.

    RFC 2965, section 1:

    Host names can be specified either as an IP address or a HDN string.
    Sometimes we compare one host name with another.  (Such comparisons SHALL
    be case-insensitive.)  Host A's name domain-matches host B's if

         *  their host name strings string-compare equal; or

         *  A is a HDN string and has the form NB, where N is a non-empty
            name string, B has the form .B', and B' is a HDN string.  (So,
            x.y.com domain-matches .Y.com but not Y.com.)

    Note that domain-match is not a commutative operation: a.b.c.com
    domain-matches .c.com, but not the reverse.
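
    For example, illustrating the asymmetry:

    >>> domain_match("x.y.com", ".y.com")
    True
    >>> domain_match(".y.com", "x.y.com")
    False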

    """
    # Note that, if A or B are IP addresses, the only relevant part of the
    # definition of the domain-match algorithm is the direct string-compare.
    A = A.lower()
    B = B.lower()
    if A == B:
        return True
    if not is_HDN(A):
        return False
    i = A.rfind(B)
    if i == -1 or i == 0:
        # A does not have form NB, or N is the empty string
        return False
    if not B.startswith("."):
        return False
    if not is_HDN(B[1:]):
        return False
    return True

def liberal_is_HDN(text):
    """Return True if text is a sort-of-like a host domain name.

    For accepting/blocking domains.

    """
    if IPV4_RE.search(text):
        return False
    return True

def user_domain_match(A, B):
    """For blocking/accepting domains.

    A and B may be host domain names or IP addresses.
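
    For example:

    >>> user_domain_match("www.acme.com", ".acme.com")
    True
    >>> user_domain_match("192.168.1.1", "192.168.1.1")
    True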

    """
    A = A.lower()
    B = B.lower()
    if not (liberal_is_HDN(A) and liberal_is_HDN(B)):
        if A == B:
            # equal IP addresses
            return True
        return False
    initial_dot = B.startswith(".")
    if initial_dot and A.endswith(B):
        return True
    if not initial_dot and A == B:
        return True
    return False

cut_port_re = re.compile(r":\d+$", re.ASCII)
def request_host(request):
    """Return request-host, as defined by RFC 2965.

    Variation from RFC: returned value is lowercased, for convenient
    comparison.

    """
    url = request.get_full_url()
    host = urlparse(url)[1]
    if host == "":
        host = request.get_header("Host", "")

    # remove port, if present
    host = cut_port_re.sub("", host, 1)
    return host.lower()

def eff_request_host(request):
    """Return a tuple (request-host, effective request-host name).

    As defined by RFC 2965, except both are lowercased.

    """
    erhn = req_host = request_host(request)
    if req_host.find(".") == -1 and not IPV4_RE.search(req_host):
        erhn = req_host + ".local"
    return req_host, erhn

def request_path(request):
    """Path component of request-URI, as defined by RFC 2965."""
    url = request.get_full_url()
    parts = urlsplit(url)
    path = escape_path(parts.path)
    if not path.startswith("/"):
        # fix bad RFC 2396 absoluteURI
        path = "/" + path
    return path

def request_port(request):
    """Return the port of the request host, as a string.

    Returns DEFAULT_HTTP_PORT if no port is present, or None if a port is
    present but not numeric.
    """
    host = request.host
    i = host.find(':')
    if i >= 0:
        port = host[i+1:]
        try:
            int(port)
        except ValueError:
            _debug("nonnumeric port: '%s'", port)
            return None
    else:
        port = DEFAULT_HTTP_PORT
    return port

# Characters in addition to A-Z, a-z, 0-9, '_', '.', and '-' that don't
# need to be escaped to form a valid HTTP URL (RFCs 2396 and 1738).
HTTP_PATH_SAFE = "%/;:@&=+$,!~*'()"
ESCAPED_CHAR_RE = re.compile(r"%([0-9a-fA-F][0-9a-fA-F])")
def uppercase_escaped_char(match):
    return "%%%s" % match.group(1).upper()
def escape_path(path):
    """Escape any invalid characters in HTTP URL, and uppercase all escapes."""
    # There's no knowing what character encoding was used to create URLs
    # containing %-escapes, but since we have to pick one to escape invalid
    # path characters, we pick UTF-8, as recommended in the HTML 4.0
    # specification:
    # http://www.w3.org/TR/REC-html40/appendix/notes.html#h-B.2.1
    # And here, kind of: draft-fielding-uri-rfc2396bis-03
    # (And in draft IRI specification: draft-duerst-iri-05)
    # (And here, for new URI schemes: RFC 2718)
    path = quote(path, HTTP_PATH_SAFE)
    path = ESCAPED_CHAR_RE.sub(uppercase_escaped_char, path)
    return path

def reach(h):
    """Return reach of host h, as defined by RFC 2965, section 1.

    The reach R of a host name H is defined as follows:

       *  If

          -  H is the host domain name of a host; and,

          -  H has the form A.B; and

          -  A has no embedded (that is, interior) dots; and

          -  B has at least one embedded dot, or B is the string "local".

          then the reach of H is .B.

       *  Otherwise, the reach of H is H.

    >>> reach("www.acme.com")
    '.acme.com'
    >>> reach("acme.com")
    'acme.com'
    >>> reach("acme.local")
    '.local'

    """
    i = h.find(".")
    if i >= 0:
        #a = h[:i]  # this line is only here to show what a is
        b = h[i+1:]
        i = b.find(".")
        if is_HDN(h) and (i >= 0 or b == "local"):
            return "."+b
    return h

def is_third_party(request):
    """

    RFC 2965, section 3.3.6:

        An unverifiable transaction is to a third-party host if its request-
        host U does not domain-match the reach R of the request-host O in the
        origin transaction.

    """
    req_host = request_host(request)
    return not domain_match(req_host, reach(request.get_origin_req_host()))


class Cookie(object):
    """HTTP Cookie.

    This class represents both Netscape and RFC 2965 cookies.

    This is deliberately a very simple class.  It just holds attributes.  It's
    possible to construct Cookie instances that don't comply with the cookie
    standards.  CookieJar.make_cookies is the factory function for Cookie
    objects -- it deals with cookie parsing, supplying defaults, and
    normalising to the representation used in this class.  CookiePolicy is
    responsible for checking them to see whether they should be accepted from
    and returned to the server.

    Note that the port may be present in the headers, but unspecified ("Port"
    rather than "Port=80", for example); if this is the case, port is None.

    """

    def __init__(self, version, name, value,
                 port, port_specified,
                 domain, domain_specified, domain_initial_dot,
                 path, path_specified,
                 secure,
                 expires,
                 discard,
                 comment,
                 comment_url,
                 rest,
                 rfc2109=False,
                 ):

        if version is not None: version = int(version)
        if expires is not None: expires = int(expires)
        if port is None and port_specified is True:
            raise ValueError("if port is None, port_specified must be false")

        self.version = version
        self.name = name
        self.value = value
        self.port = port
        self.port_specified = port_specified
        # normalise case, as per RFC 2965 section 3.3.3
        self.domain = domain.lower()
        self.domain_specified = domain_specified
        # Sigh.  We need to know whether the domain given in the
        # cookie-attribute had an initial dot, in order to follow RFC 2965
        # (as clarified in draft errata).  Needed for the returned $Domain
        # value.
        self.domain_initial_dot = domain_initial_dot
        self.path = path
        self.path_specified = path_specified
        self.secure = secure
        self.expires = expires
        self.discard = discard
        self.comment = comment
        self.comment_url = comment_url
        self.rfc2109 = rfc2109

        self._rest = copy.copy(rest)

    def has_nonstandard_attr(self, name):
        return name in self._rest
    def get_nonstandard_attr(self, name, default=None):
        return self._rest.get(name, default)
    def set_nonstandard_attr(self, name, value):
        self._rest[name] = value

    def is_expired(self, now=None):
        if now is None: now = time.time()
        if (self.expires is not None) and (self.expires <= now):
            return True
        return False

    def __str__(self):
        if self.port is None: p = ""
        else: p = ":"+self.port
        limit = self.domain + p + self.path
        if self.value is not None:
            namevalue = "%s=%s" % (self.name, self.value)
        else:
            namevalue = self.name
        return "<Cookie %s for %s>" % (namevalue, limit)

    @as_native_str()
    def __repr__(self):
        args = []
        for name in ("version", "name", "value",
                     "port", "port_specified",
                     "domain", "domain_specified", "domain_initial_dot",
                     "path", "path_specified",
                     "secure", "expires", "discard", "comment", "comment_url",
                     ):
            attr = getattr(self, name)
            ### Python-Future:
            # Avoid u'...' prefixes for unicode strings:
            if isinstance(attr, str):
                attr = str(attr)
            ###
            args.append(str("%s=%s") % (name, repr(attr)))
        args.append("rest=%s" % repr(self._rest))
        args.append("rfc2109=%s" % repr(self.rfc2109))
        return "Cookie(%s)" % ", ".join(args)


class CookiePolicy(object):
    """Defines which cookies get accepted from and returned to server.

    May also modify cookies, though this is probably a bad idea.

    The subclass DefaultCookiePolicy defines the standard rules for Netscape
    and RFC 2965 cookies -- override that if you want a customised policy.

    """
    def set_ok(self, cookie, request):
        """Return true if (and only if) cookie should be accepted from server.

        Currently, pre-expired cookies never get this far -- the CookieJar
        class deletes such cookies itself.

        """
        raise NotImplementedError()

    def return_ok(self, cookie, request):
        """Return true if (and only if) cookie should be returned to server."""
        raise NotImplementedError()

    def domain_return_ok(self, domain, request):
        """Return false if cookies should not be returned, given cookie domain.
        """
        return True

    def path_return_ok(self, path, request):
        """Return false if cookies should not be returned, given cookie path.
        """
        return True


class DefaultCookiePolicy(CookiePolicy):
    """Implements the standard rules for accepting and returning cookies."""

    DomainStrictNoDots = 1
    DomainStrictNonDomain = 2
    DomainRFC2965Match = 4

    DomainLiberal = 0
    DomainStrict = DomainStrictNoDots|DomainStrictNonDomain

    def __init__(self,
                 blocked_domains=None, allowed_domains=None,
                 netscape=True, rfc2965=False,
                 rfc2109_as_netscape=None,
                 hide_cookie2=False,
                 strict_domain=False,
                 strict_rfc2965_unverifiable=True,
                 strict_ns_unverifiable=False,
                 strict_ns_domain=DomainLiberal,
                 strict_ns_set_initial_dollar=False,
                 strict_ns_set_path=False,
                 ):
        """Constructor arguments should be passed as keyword arguments only."""
        self.netscape = netscape
        self.rfc2965 = rfc2965
        self.rfc2109_as_netscape = rfc2109_as_netscape
        self.hide_cookie2 = hide_cookie2
        self.strict_domain = strict_domain
        self.strict_rfc2965_unverifiable = strict_rfc2965_unverifiable
        self.strict_ns_unverifiable = strict_ns_unverifiable
        self.strict_ns_domain = strict_ns_domain
        self.strict_ns_set_initial_dollar = strict_ns_set_initial_dollar
        self.strict_ns_set_path = strict_ns_set_path

        if blocked_domains is not None:
            self._blocked_domains = tuple(blocked_domains)
        else:
            self._blocked_domains = ()

        if allowed_domains is not None:
            allowed_domains = tuple(allowed_domains)
        self._allowed_domains = allowed_domains

    def blocked_domains(self):
        """Return the sequence of blocked domains (as a tuple)."""
        return self._blocked_domains
    def set_blocked_domains(self, blocked_domains):
        """Set the sequence of blocked domains."""
        self._blocked_domains = tuple(blocked_domains)

    def is_blocked(self, domain):
        for blocked_domain in self._blocked_domains:
            if user_domain_match(domain, blocked_domain):
                return True
        return False

    def allowed_domains(self):
        """Return None, or the sequence of allowed domains (as a tuple)."""
        return self._allowed_domains
    def set_allowed_domains(self, allowed_domains):
        """Set the sequence of allowed domains, or None."""
        if allowed_domains is not None:
            allowed_domains = tuple(allowed_domains)
        self._allowed_domains = allowed_domains

    def is_not_allowed(self, domain):
        if self._allowed_domains is None:
            return False
        for allowed_domain in self._allowed_domains:
            if user_domain_match(domain, allowed_domain):
                return False
        return True

    def set_ok(self, cookie, request):
        """
        If you override .set_ok(), be sure to call this method.  If it returns
        false, so should your subclass (assuming your subclass wants to be more
        strict about which cookies to accept).

        """
        _debug(" - checking cookie %s=%s", cookie.name, cookie.value)

        assert cookie.name is not None

        for n in "version", "verifiability", "name", "path", "domain", "port":
            fn_name = "set_ok_"+n
            fn = getattr(self, fn_name)
            if not fn(cookie, request):
                return False

        return True

    def set_ok_version(self, cookie, request):
        if cookie.version is None:
            # Version is always set to 0 by parse_ns_headers if it's a Netscape
            # cookie, so this must be an invalid RFC 2965 cookie.
            _debug("   Set-Cookie2 without version attribute (%s=%s)",
                   cookie.name, cookie.value)
            return False
        if cookie.version > 0 and not self.rfc2965:
            _debug("   RFC 2965 cookies are switched off")
            return False
        elif cookie.version == 0 and not self.netscape:
            _debug("   Netscape cookies are switched off")
            return False
        return True

    def set_ok_verifiability(self, cookie, request):
        if request.unverifiable and is_third_party(request):
            if cookie.version > 0 and self.strict_rfc2965_unverifiable:
                _debug("   third-party RFC 2965 cookie during "
                             "unverifiable transaction")
                return False
            elif cookie.version == 0 and self.strict_ns_unverifiable:
                _debug("   third-party Netscape cookie during "
                             "unverifiable transaction")
                return False
        return True

    def set_ok_name(self, cookie, request):
        # Try and stop servers setting V0 cookies designed to hack other
        # servers that know both V0 and V1 protocols.
        if (cookie.version == 0 and self.strict_ns_set_initial_dollar and
            cookie.name.startswith("$")):
            _debug("   illegal name (starts with '$'): '%s'", cookie.name)
            return False
        return True

    def set_ok_path(self, cookie, request):
        if cookie.path_specified:
            req_path = request_path(request)
            if ((cookie.version > 0 or
                 (cookie.version == 0 and self.strict_ns_set_path)) and
                not req_path.startswith(cookie.path)):
                _debug("   path attribute %s is not a prefix of request "
                       "path %s", cookie.path, req_path)
                return False
        return True

    def set_ok_domain(self, cookie, request):
        if self.is_blocked(cookie.domain):
            _debug("   domain %s is in user block-list", cookie.domain)
            return False
        if self.is_not_allowed(cookie.domain):
            _debug("   domain %s is not in user allow-list", cookie.domain)
            return False
        if cookie.domain_specified:
            req_host, erhn = eff_request_host(request)
            domain = cookie.domain
            if self.strict_domain and (domain.count(".") >= 2):
                # XXX This should probably be compared with the Konqueror
                # (kcookiejar.cpp) and Mozilla implementations, but it's a
                # losing battle.
                i = domain.rfind(".")
                j = domain.rfind(".", 0, i)
                if j == 0:  # domain like .foo.bar
                    tld = domain[i+1:]
                    sld = domain[j+1:i]
                    if sld.lower() in ("co", "ac", "com", "edu", "org", "net",
                       "gov", "mil", "int", "aero", "biz", "cat", "coop",
                       "info", "jobs", "mobi", "museum", "name", "pro",
                       "travel", "eu") and len(tld) == 2:
                        # domain like .co.uk
                        _debug("   country-code second level domain %s", domain)
                        return False
            if domain.startswith("."):
                undotted_domain = domain[1:]
            else:
                undotted_domain = domain
            embedded_dots = (undotted_domain.find(".") >= 0)
            if not embedded_dots and domain != ".local":
                _debug("   non-local domain %s contains no embedded dot",
                       domain)
                return False
            if cookie.version == 0:
                if (not erhn.endswith(domain) and
                    (not erhn.startswith(".") and
                     not ("."+erhn).endswith(domain))):
                    _debug("   effective request-host %s (even with added "
                           "initial dot) does not end with %s",
                           erhn, domain)
                    return False
            if (cookie.version > 0 or
                (self.strict_ns_domain & self.DomainRFC2965Match)):
                if not domain_match(erhn, domain):
                    _debug("   effective request-host %s does not domain-match "
                           "%s", erhn, domain)
                    return False
            if (cookie.version > 0 or
                (self.strict_ns_domain & self.DomainStrictNoDots)):
                host_prefix = req_host[:-len(domain)]
                if (host_prefix.find(".") >= 0 and
                    not IPV4_RE.search(req_host)):
                    _debug("   host prefix %s for domain %s contains a dot",
                           host_prefix, domain)
                    return False
        return True

    def set_ok_port(self, cookie, request):
        if cookie.port_specified:
            req_port = request_port(request)
            if req_port is None:
                req_port = "80"
            else:
                req_port = str(req_port)
            for p in cookie.port.split(","):
                try:
                    int(p)
                except ValueError:
                    _debug("   bad port %s (not numeric)", p)
                    return False
                if p == req_port:
                    break
            else:
                _debug("   request port (%s) not found in %s",
                       req_port, cookie.port)
                return False
        return True

    def return_ok(self, cookie, request):
        """
        If you override .return_ok(), be sure to call this method.  If it
        returns false, so should your subclass (assuming your subclass wants to
        be more strict about which cookies to return).

        """
        # Path has already been checked by .path_return_ok(), and domain
        # blocking done by .domain_return_ok().
        _debug(" - checking cookie %s=%s", cookie.name, cookie.value)

        for n in "version", "verifiability", "secure", "expires", "port", "domain":
            fn_name = "return_ok_"+n
            fn = getattr(self, fn_name)
            if not fn(cookie, request):
                return False
        return True

    def return_ok_version(self, cookie, request):
        if cookie.version > 0 and not self.rfc2965:
            _debug("   RFC 2965 cookies are switched off")
            return False
        elif cookie.version == 0 and not self.netscape:
            _debug("   Netscape cookies are switched off")
            return False
        return True

    def return_ok_verifiability(self, cookie, request):
        if request.unverifiable and is_third_party(request):
            if cookie.version > 0 and self.strict_rfc2965_unverifiable:
                _debug("   third-party RFC 2965 cookie during unverifiable "
                       "transaction")
                return False
            elif cookie.version == 0 and self.strict_ns_unverifiable:
                _debug("   third-party Netscape cookie during unverifiable "
                       "transaction")
                return False
        return True

    def return_ok_secure(self, cookie, request):
        if cookie.secure and request.type != "https":
            _debug("   secure cookie with non-secure request")
            return False
        return True

    def return_ok_expires(self, cookie, request):
        if cookie.is_expired(self._now):
            _debug("   cookie expired")
            return False
        return True

    def return_ok_port(self, cookie, request):
        if cookie.port:
            req_port = request_port(request)
            if req_port is None:
                req_port = "80"
            for p in cookie.port.split(","):
                if p == req_port:
                    break
            else:
                _debug("   request port %s does not match cookie port %s",
                       req_port, cookie.port)
                return False
        return True

    def return_ok_domain(self, cookie, request):
        req_host, erhn = eff_request_host(request)
        domain = cookie.domain

        # strict check of non-domain cookies: Mozilla does this, MSIE5 doesn't
        if (cookie.version == 0 and
            (self.strict_ns_domain & self.DomainStrictNonDomain) and
            not cookie.domain_specified and domain != erhn):
            _debug("   cookie with unspecified domain does not string-compare "
                   "equal to request domain")
            return False

        if cookie.version > 0 and not domain_match(erhn, domain):
            _debug("   effective request-host name %s does not domain-match "
                   "RFC 2965 cookie domain %s", erhn, domain)
            return False
        if cookie.version == 0 and not ("."+erhn).endswith(domain):
            _debug("   request-host %s does not match Netscape cookie domain "
                   "%s", req_host, domain)
            return False
        return True

    def domain_return_ok(self, domain, request):
        # Liberal check of domain.  This is here as an optimization to avoid
        # having to load lots of MSIE cookie files unless necessary.
        req_host, erhn = eff_request_host(request)
        if not req_host.startswith("."):
            req_host = "."+req_host
        if not erhn.startswith("."):
            erhn = "."+erhn
        if not (req_host.endswith(domain) or erhn.endswith(domain)):
            #_debug("   request domain %s does not match cookie domain %s",
            #       req_host, domain)
            return False

        if self.is_blocked(domain):
            _debug("   domain %s is in user block-list", domain)
            return False
        if self.is_not_allowed(domain):
            _debug("   domain %s is not in user allow-list", domain)
            return False

        return True

    def path_return_ok(self, path, request):
        _debug("- checking cookie path=%s", path)
        req_path = request_path(request)
        if not req_path.startswith(path):
            _debug("  %s does not path-match %s", req_path, path)
            return False
        return True


def vals_sorted_by_key(adict):
    """Return an iterator over adict's values, in sorted-key order."""
    keys = sorted(adict.keys())
    return map(adict.get, keys)

def deepvalues(mapping):
    """Iterates over nested mapping, depth-first, in sorted order by key."""
    values = vals_sorted_by_key(mapping)
    for obj in values:
        mapping = False
        try:
            obj.items
        except AttributeError:
            pass
        else:
            mapping = True
            for subobj in deepvalues(obj):
                yield subobj
        if not mapping:
            yield obj


# Used as second parameter to dict.get() method, to distinguish absent
# dict key from one with a None value.
class Absent(object): pass

class CookieJar(object):
    """Collection of HTTP cookies.

    You may not need to know about this class: try
    urllib.request.build_opener(HTTPCookieProcessor).open(url).
    """

    non_word_re = re.compile(r"\W")
    quote_re = re.compile(r"([\"\\])")
    strict_domain_re = re.compile(r"\.?[^.]*")
    domain_re = re.compile(r"[^.]*")
    dots_re = re.compile(r"^\.+")

    magic_re = re.compile(r"^\#LWP-Cookies-(\d+\.\d+)", re.ASCII)

    def __init__(self, policy=None):
        if policy is None:
            policy = DefaultCookiePolicy()
        self._policy = policy

        self._cookies_lock = _threading.RLock()
        self._cookies = {}

    def set_policy(self, policy):
        self._policy = policy

    def _cookies_for_domain(self, domain, request):
        cookies = []
        if not self._policy.domain_return_ok(domain, request):
            return []
        _debug("Checking %s for cookies to return", domain)
        cookies_by_path = self._cookies[domain]
        for path in cookies_by_path.keys():
            if not self._policy.path_return_ok(path, request):
                continue
            cookies_by_name = cookies_by_path[path]
            for cookie in cookies_by_name.values():
                if not self._policy.return_ok(cookie, request):
                    _debug("   not returning cookie")
                    continue
                _debug("   it's a match")
                cookies.append(cookie)
        return cookies

    def _cookies_for_request(self, request):
        """Return a list of cookies to be returned to server."""
        cookies = []
        for domain in self._cookies.keys():
            cookies.extend(self._cookies_for_domain(domain, request))
        return cookies

    def _cookie_attrs(self, cookies):
        """Return a list of cookie-attributes to be returned to server.

        like ['foo="bar"; $Path="/"', ...]

        The $Version attribute is also added when appropriate (currently only
        once per request).

        """
        # add cookies in order of most specific (i.e. longest) path first
        cookies.sort(key=lambda a: len(a.path), reverse=True)

        version_set = False

        attrs = []
        for cookie in cookies:
            # set version of Cookie header
            # XXX
            # What should it be if multiple matching Set-Cookie headers have
            #  different versions themselves?
            # Answer: there is no answer; was supposed to be settled by
            #  RFC 2965 errata, but that may never appear...
            version = cookie.version
            if not version_set:
                version_set = True
                if version > 0:
                    attrs.append("$Version=%s" % version)

            # quote cookie value if necessary
            # (not for Netscape protocol, which already has any quotes
            #  intact, due to the poorly-specified Netscape Cookie: syntax)
            if ((cookie.value is not None) and
                self.non_word_re.search(cookie.value) and version > 0):
                value = self.quote_re.sub(r"\\\1", cookie.value)
            else:
                value = cookie.value

            # add cookie-attributes to be returned in Cookie header
            if cookie.value is None:
                attrs.append(cookie.name)
            else:
                attrs.append("%s=%s" % (cookie.name, value))
            if version > 0:
                if cookie.path_specified:
                    attrs.append('$Path="%s"' % cookie.path)
                if cookie.domain.startswith("."):
                    domain = cookie.domain
                    if (not cookie.domain_initial_dot and
                        domain.startswith(".")):
                        domain = domain[1:]
                    attrs.append('$Domain="%s"' % domain)
                if cookie.port is not None:
                    p = "$Port"
                    if cookie.port_specified:
                        p = p + ('="%s"' % cookie.port)
                    attrs.append(p)

        return attrs

    def add_cookie_header(self, request):
        """Add correct Cookie: header to request (urllib.request.Request object).

        The Cookie2 header is also added unless policy.hide_cookie2 is true.

        """
        _debug("add_cookie_header")
        self._cookies_lock.acquire()
        try:

            self._policy._now = self._now = int(time.time())

            cookies = self._cookies_for_request(request)

            attrs = self._cookie_attrs(cookies)
            if attrs:
                if not request.has_header("Cookie"):
                    request.add_unredirected_header(
                        "Cookie", "; ".join(attrs))

            # if necessary, advertise that we know RFC 2965
            if (self._policy.rfc2965 and not self._policy.hide_cookie2 and
                not request.has_header("Cookie2")):
                for cookie in cookies:
                    if cookie.version != 1:
                        request.add_unredirected_header("Cookie2", '$Version="1"')
                        break

        finally:
            self._cookies_lock.release()

        self.clear_expired_cookies()

    def _normalized_cookie_tuples(self, attrs_set):
        """Return list of tuples containing normalised cookie information.

        attrs_set is the list of lists of key,value pairs extracted from
        the Set-Cookie or Set-Cookie2 headers.

        Tuples are name, value, standard, rest, where name and value are the
        cookie name and value, standard is a dictionary containing the standard
        cookie-attributes (discard, secure, version, expires or max-age,
        domain, path and port) and rest is a dictionary containing the rest of
        the cookie-attributes.

        """
        cookie_tuples = []

        boolean_attrs = "discard", "secure"
        value_attrs = ("version",
                       "expires", "max-age",
                       "domain", "path", "port",
                       "comment", "commenturl")

        for cookie_attrs in attrs_set:
            name, value = cookie_attrs[0]

            # Build dictionary of standard cookie-attributes (standard) and
            # dictionary of other cookie-attributes (rest).

            # Note: expiry time is normalised to seconds since epoch.  V0
            # cookies should have the Expires cookie-attribute, and V1 cookies
            # should have Max-Age, but since V1 includes RFC 2109 cookies (and
            # since V0 cookies may be a mish-mash of Netscape and RFC 2109), we
            # accept either (but prefer Max-Age).
            max_age_set = False

            bad_cookie = False

            standard = {}
            rest = {}
            for k, v in cookie_attrs[1:]:
                lc = k.lower()
                # don't lose case distinction for unknown fields
                if lc in value_attrs or lc in boolean_attrs:
                    k = lc
                if k in boolean_attrs and v is None:
                    # boolean cookie-attribute is present, but has no value
                    # (like "discard", rather than "port=80")
                    v = True
                if k in standard:
                    # only first value is significant
                    continue
                if k == "domain":
                    if v is None:
                        _debug("   missing value for domain attribute")
                        bad_cookie = True
                        break
                    # RFC 2965 section 3.3.3
                    v = v.lower()
                if k == "expires":
                    if max_age_set:
                        # Prefer max-age to expires (like Mozilla)
                        continue
                    if v is None:
                        _debug("   missing or invalid value for expires "
                              "attribute: treating as session cookie")
                        continue
                if k == "max-age":
                    max_age_set = True
                    try:
                        v = int(v)
                    except ValueError:
                        _debug("   missing or invalid (non-numeric) value for "
                              "max-age attribute")
                        bad_cookie = True
                        break
                    # convert RFC 2965 Max-Age to seconds since epoch
                    # XXX Strictly you're supposed to follow RFC 2616
                    #   age-calculation rules.  Remember that zero Max-Age is a
                    #   request to discard (old and new) cookie, though.
                    k = "expires"
                    v = self._now + v
                if (k in value_attrs) or (k in boolean_attrs):
                    if (v is None and
                        k not in ("port", "comment", "commenturl")):
                        _debug("   missing value for %s attribute" % k)
                        bad_cookie = True
                        break
                    standard[k] = v
                else:
                    rest[k] = v

            if bad_cookie:
                continue

            cookie_tuples.append((name, value, standard, rest))

        return cookie_tuples
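
For illustration, this private helper can be driven directly (stdlib `http.cookiejar`); note how a valueless boolean attribute becomes `True` and known attribute names are lower-cased:

```python
import time
from http.cookiejar import CookieJar

jar = CookieJar()
jar._now = int(time.time())  # normally set by extract_cookies(); used for max-age

attrs_set = [[("sid", "abc"), ("Path", "/"), ("secure", None)]]
tuples = jar._normalized_cookie_tuples(attrs_set)
# -> (name, value, standard cookie-attributes, remaining attributes)
assert tuples == [("sid", "abc", {"path": "/", "secure": True}, {})]
```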

    def _cookie_from_cookie_tuple(self, tup, request):
        # standard is dict of standard cookie-attributes, rest is dict of the
        # rest of them
        name, value, standard, rest = tup

        domain = standard.get("domain", Absent)
        path = standard.get("path", Absent)
        port = standard.get("port", Absent)
        expires = standard.get("expires", Absent)

        # set the easy defaults
        version = standard.get("version", None)
        if version is not None:
            try:
                version = int(version)
            except ValueError:
                return None  # invalid version, ignore cookie
        secure = standard.get("secure", False)
        # (discard is also set if expires is Absent)
        discard = standard.get("discard", False)
        comment = standard.get("comment", None)
        comment_url = standard.get("commenturl", None)

        # set default path
        if path is not Absent and path != "":
            path_specified = True
            path = escape_path(path)
        else:
            path_specified = False
            path = request_path(request)
            i = path.rfind("/")
            if i != -1:
                if version == 0:
                    # Netscape spec parts company from reality here
                    path = path[:i]
                else:
                    path = path[:i+1]
            if len(path) == 0: path = "/"

        # set default domain
        domain_specified = domain is not Absent
        # but first we have to remember whether it starts with a dot
        domain_initial_dot = False
        if domain_specified:
            domain_initial_dot = bool(domain.startswith("."))
        if domain is Absent:
            req_host, erhn = eff_request_host(request)
            domain = erhn
        elif not domain.startswith("."):
            domain = "."+domain

        # set default port
        port_specified = False
        if port is not Absent:
            if port is None:
                # Port attr present, but has no value: default to request port.
                # Cookie should then only be sent back on that port.
                port = request_port(request)
            else:
                port_specified = True
                port = re.sub(r"\s+", "", port)
        else:
            # No port attr present.  Cookie can be sent back on any port.
            port = None

        # set default expires and discard
        if expires is Absent:
            expires = None
            discard = True
        elif expires <= self._now:
            # Expiry date in past is request to delete cookie.  This can't be
            # in DefaultCookiePolicy, because can't delete cookies there.
            try:
                self.clear(domain, path, name)
            except KeyError:
                pass
            _debug("Expiring cookie, domain='%s', path='%s', name='%s'",
                   domain, path, name)
            return None

        return Cookie(version,
                      name, value,
                      port, port_specified,
                      domain, domain_specified, domain_initial_dot,
                      path, path_specified,
                      secure,
                      expires,
                      discard,
                      comment,
                      comment_url,
                      rest)

    def _cookies_from_attrs_set(self, attrs_set, request):
        cookie_tuples = self._normalized_cookie_tuples(attrs_set)

        cookies = []
        for tup in cookie_tuples:
            cookie = self._cookie_from_cookie_tuple(tup, request)
            if cookie: cookies.append(cookie)
        return cookies

    def _process_rfc2109_cookies(self, cookies):
        rfc2109_as_ns = getattr(self._policy, 'rfc2109_as_netscape', None)
        if rfc2109_as_ns is None:
            rfc2109_as_ns = not self._policy.rfc2965
        for cookie in cookies:
            if cookie.version == 1:
                cookie.rfc2109 = True
                if rfc2109_as_ns:
                    # treat 2109 cookies as Netscape cookies rather than
                    # as RFC2965 cookies
                    cookie.version = 0

    def make_cookies(self, response, request):
        """Return sequence of Cookie objects extracted from response object."""
        # get cookie-attributes for RFC 2965 and Netscape protocols
        headers = response.info()
        rfc2965_hdrs = headers.get_all("Set-Cookie2", [])
        ns_hdrs = headers.get_all("Set-Cookie", [])

        rfc2965 = self._policy.rfc2965
        netscape = self._policy.netscape

        if ((not rfc2965_hdrs and not ns_hdrs) or
            (not ns_hdrs and not rfc2965) or
            (not rfc2965_hdrs and not netscape) or
            (not netscape and not rfc2965)):
            return []  # no relevant cookie headers: quick exit

        try:
            cookies = self._cookies_from_attrs_set(
                split_header_words(rfc2965_hdrs), request)
        except Exception:
            _warn_unhandled_exception()
            cookies = []

        if ns_hdrs and netscape:
            try:
                # RFC 2109 and Netscape cookies
                ns_cookies = self._cookies_from_attrs_set(
                    parse_ns_headers(ns_hdrs), request)
            except Exception:
                _warn_unhandled_exception()
                ns_cookies = []
            self._process_rfc2109_cookies(ns_cookies)

            # Look for Netscape cookies (from Set-Cookie headers) that match
            # corresponding RFC 2965 cookies (from Set-Cookie2 headers).
            # For each match, keep the RFC 2965 cookie and ignore the Netscape
            # cookie (RFC 2965 section 9.1).  Actually, RFC 2109 cookies are
            # bundled in with the Netscape cookies for this purpose, which is
            # reasonable behaviour.
            if rfc2965:
                lookup = {}
                for cookie in cookies:
                    lookup[(cookie.domain, cookie.path, cookie.name)] = None

                def no_matching_rfc2965(ns_cookie, lookup=lookup):
                    key = ns_cookie.domain, ns_cookie.path, ns_cookie.name
                    return key not in lookup
                ns_cookies = list(filter(no_matching_rfc2965, ns_cookies))

            if ns_cookies:
                cookies.extend(ns_cookies)

        return cookies

    def set_cookie_if_ok(self, cookie, request):
        """Set a cookie if policy says it's OK to do so."""
        self._cookies_lock.acquire()
        try:
            self._policy._now = self._now = int(time.time())

            if self._policy.set_ok(cookie, request):
                self.set_cookie(cookie)


        finally:
            self._cookies_lock.release()

    def set_cookie(self, cookie):
        """Set a cookie, without checking whether or not it should be set."""
        c = self._cookies
        self._cookies_lock.acquire()
        try:
            if cookie.domain not in c: c[cookie.domain] = {}
            c2 = c[cookie.domain]
            if cookie.path not in c2: c2[cookie.path] = {}
            c3 = c2[cookie.path]
            c3[cookie.name] = cookie
        finally:
            self._cookies_lock.release()

    def extract_cookies(self, response, request):
        """Extract cookies from response, where allowable given the request."""
        _debug("extract_cookies: %s", response.info())
        self._cookies_lock.acquire()
        try:
            self._policy._now = self._now = int(time.time())

            for cookie in self.make_cookies(response, request):
                if self._policy.set_ok(cookie, request):
                    _debug(" setting cookie: %s", cookie)
                    self.set_cookie(cookie)
        finally:
            self._cookies_lock.release()

    def clear(self, domain=None, path=None, name=None):
        """Clear some cookies.

        Invoking this method without arguments will clear all cookies.  If
        given a single argument, only cookies belonging to that domain will be
        removed.  If given two arguments, cookies belonging to the specified
        path within that domain are removed.  If given three arguments, then
        the cookie with the specified name, path and domain is removed.

        Raises KeyError if no matching cookie exists.

        """
        if name is not None:
            if (domain is None) or (path is None):
                raise ValueError(
                    "domain and path must be given to remove a cookie by name")
            del self._cookies[domain][path][name]
        elif path is not None:
            if domain is None:
                raise ValueError(
                    "domain must be given to remove cookies by path")
            del self._cookies[domain][path]
        elif domain is not None:
            del self._cookies[domain]
        else:
            self._cookies = {}
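
The three arities documented above can be sketched as follows (stdlib `http.cookiejar`; `make_cookie` is a hypothetical helper that fills in the long positional `Cookie` constructor):

```python
from http.cookiejar import Cookie, CookieJar

def make_cookie(name, domain, path="/"):
    # A version-0 (Netscape-style) session cookie with mostly default fields.
    return Cookie(0, name, "value", None, False,
                  domain, True, domain.startswith("."),
                  path, True, False, None, False, None, None, {})

jar = CookieJar()
for name, domain in [("a", "example.com"), ("b", "example.com"), ("c", "other.org")]:
    jar.set_cookie(make_cookie(name, domain))

assert len(jar) == 3
jar.clear("example.com", "/", "a")   # one cookie, by domain + path + name
assert len(jar) == 2
jar.clear("other.org")               # everything under a domain
assert len(jar) == 1
jar.clear()                          # all cookies
assert len(jar) == 0
```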

    def clear_session_cookies(self):
        """Discard all session cookies.

        Note that the .save() method won't save session cookies anyway, unless
        you ask otherwise by passing a true ignore_discard argument.

        """
        self._cookies_lock.acquire()
        try:
            for cookie in self:
                if cookie.discard:
                    self.clear(cookie.domain, cookie.path, cookie.name)
        finally:
            self._cookies_lock.release()

    def clear_expired_cookies(self):
        """Discard all expired cookies.

        You probably don't need to call this method: expired cookies are never
        sent back to the server (provided you're using DefaultCookiePolicy),
        this method is called by CookieJar itself every so often, and the
        .save() method won't save expired cookies anyway (unless you ask
        otherwise by passing a true ignore_expires argument).

        """
        self._cookies_lock.acquire()
        try:
            now = time.time()
            for cookie in self:
                if cookie.is_expired(now):
                    self.clear(cookie.domain, cookie.path, cookie.name)
        finally:
            self._cookies_lock.release()

    def __iter__(self):
        return deepvalues(self._cookies)

    def __len__(self):
        """Return number of contained cookies."""
        i = 0
        for cookie in self: i = i + 1
        return i

    @as_native_str()
    def __repr__(self):
        r = []
        for cookie in self: r.append(repr(cookie))
        return "<%s[%s]>" % (self.__class__, ", ".join(r))

    def __str__(self):
        r = []
        for cookie in self: r.append(str(cookie))
        return "<%s[%s]>" % (self.__class__, ", ".join(r))


# derives from IOError for backwards-compatibility with Python 2.4.0
class LoadError(IOError): pass

class FileCookieJar(CookieJar):
    """CookieJar that can be loaded from and saved to a file."""

    def __init__(self, filename=None, delayload=False, policy=None):
        """
        Cookies are NOT loaded from the named file until either the .load() or
        .revert() method is called.

        """
        CookieJar.__init__(self, policy)
        if filename is not None:
            try:
                filename + ""
            except TypeError:
                raise ValueError("filename must be string-like")
        self.filename = filename
        self.delayload = bool(delayload)

    def save(self, filename=None, ignore_discard=False, ignore_expires=False):
        """Save cookies to a file."""
        raise NotImplementedError()

    def load(self, filename=None, ignore_discard=False, ignore_expires=False):
        """Load cookies from a file."""
        if filename is None:
            if self.filename is not None: filename = self.filename
            else: raise ValueError(MISSING_FILENAME_TEXT)

        f = open(filename)
        try:
            self._really_load(f, filename, ignore_discard, ignore_expires)
        finally:
            f.close()

    def revert(self, filename=None,
               ignore_discard=False, ignore_expires=False):
        """Clear all cookies and reload cookies from a saved file.

        Raises LoadError (or IOError) if reversion is not successful; the
        object's state will not be altered if this happens.

        """
        if filename is None:
            if self.filename is not None: filename = self.filename
            else: raise ValueError(MISSING_FILENAME_TEXT)

        self._cookies_lock.acquire()
        try:

            old_state = copy.deepcopy(self._cookies)
            self._cookies = {}
            try:
                self.load(filename, ignore_discard, ignore_expires)
            except (LoadError, IOError):
                self._cookies = old_state
                raise

        finally:
            self._cookies_lock.release()


def lwp_cookie_str(cookie):
    """Return string representation of Cookie in the LWP cookie file format.

    Actually, the format is extended a bit -- see module docstring.

    """
    h = [(cookie.name, cookie.value),
         ("path", cookie.path),
         ("domain", cookie.domain)]
    if cookie.port is not None: h.append(("port", cookie.port))
    if cookie.path_specified: h.append(("path_spec", None))
    if cookie.port_specified: h.append(("port_spec", None))
    if cookie.domain_initial_dot: h.append(("domain_dot", None))
    if cookie.secure: h.append(("secure", None))
    if cookie.expires: h.append(("expires",
                               time2isoz(float(cookie.expires))))
    if cookie.discard: h.append(("discard", None))
    if cookie.comment: h.append(("comment", cookie.comment))
    if cookie.comment_url: h.append(("commenturl", cookie.comment_url))

    keys = sorted(cookie._rest.keys())
    for k in keys:
        h.append((k, str(cookie._rest[k])))

    h.append(("version", str(cookie.version)))

    return join_header_words([h])
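
A quick sketch of the serialization (the `lwp_cookie_str` function is module-level in the stdlib `http.cookiejar`, though undocumented):

```python
from http.cookiejar import Cookie, lwp_cookie_str

c = Cookie(0, "sid", "abc123", None, False,
           "example.com", True, False, "/", True,
           False, 2147483647, False, None, None, {})
s = lwp_cookie_str(c)
# e.g. sid=abc123; path="/"; domain="example.com"; path_spec; expires="..."; version=0
assert s.startswith("sid=abc123")
assert 'domain="example.com"' in s and "path_spec" in s
assert s.endswith("version=0")
```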

class LWPCookieJar(FileCookieJar):
    """
    The LWPCookieJar saves a sequence of "Set-Cookie3" lines.
    "Set-Cookie3" is the format used by the libwww-perl library, not known
    to be compatible with any browser, but it is easy to read and doesn't
    lose information about RFC 2965 cookies.

    Additional methods

    as_lwp_str(ignore_discard=True, ignore_expired=True)

    """

    def as_lwp_str(self, ignore_discard=True, ignore_expires=True):
        """Return cookies as a string of "\\n"-separated "Set-Cookie3" headers.

        ignore_discard and ignore_expires: see docstring for FileCookieJar.save

        """
        now = time.time()
        r = []
        for cookie in self:
            if not ignore_discard and cookie.discard:
                continue
            if not ignore_expires and cookie.is_expired(now):
                continue
            r.append("Set-Cookie3: %s" % lwp_cookie_str(cookie))
        return "\n".join(r+[""])

    def save(self, filename=None, ignore_discard=False, ignore_expires=False):
        if filename is None:
            if self.filename is not None: filename = self.filename
            else: raise ValueError(MISSING_FILENAME_TEXT)

        f = open(filename, "w")
        try:
            # There really isn't an LWP Cookies 2.0 format, but this indicates
            # that there is extra information in here (domain_dot and
            # port_spec) while still being compatible with libwww-perl, I hope.
            f.write("#LWP-Cookies-2.0\n")
            f.write(self.as_lwp_str(ignore_discard, ignore_expires))
        finally:
            f.close()

    def _really_load(self, f, filename, ignore_discard, ignore_expires):
        magic = f.readline()
        if not self.magic_re.search(magic):
            msg = ("%r does not look like a Set-Cookie3 (LWP) format "
                   "file" % filename)
            raise LoadError(msg)

        now = time.time()

        header = "Set-Cookie3:"
        boolean_attrs = ("port_spec", "path_spec", "domain_dot",
                         "secure", "discard")
        value_attrs = ("version",
                       "port", "path", "domain",
                       "expires",
                       "comment", "commenturl")

        try:
            while 1:
                line = f.readline()
                if line == "": break
                if not line.startswith(header):
                    continue
                line = line[len(header):].strip()

                for data in split_header_words([line]):
                    name, value = data[0]
                    standard = {}
                    rest = {}
                    for k in boolean_attrs:
                        standard[k] = False
                    for k, v in data[1:]:
                        if k is not None:
                            lc = k.lower()
                        else:
                            lc = None
                        # don't lose case distinction for unknown fields
                        if (lc in value_attrs) or (lc in boolean_attrs):
                            k = lc
                        if k in boolean_attrs:
                            if v is None: v = True
                            standard[k] = v
                        elif k in value_attrs:
                            standard[k] = v
                        else:
                            rest[k] = v

                    h = standard.get
                    expires = h("expires")
                    discard = h("discard")
                    if expires is not None:
                        expires = iso2time(expires)
                    if expires is None:
                        discard = True
                    domain = h("domain")
                    domain_specified = domain.startswith(".")
                    c = Cookie(h("version"), name, value,
                               h("port"), h("port_spec"),
                               domain, domain_specified, h("domain_dot"),
                               h("path"), h("path_spec"),
                               h("secure"),
                               expires,
                               discard,
                               h("comment"),
                               h("commenturl"),
                               rest)
                    if not ignore_discard and c.discard:
                        continue
                    if not ignore_expires and c.is_expired(now):
                        continue
                    self.set_cookie(c)

        except IOError:
            raise
        except Exception:
            _warn_unhandled_exception()
            raise LoadError("invalid Set-Cookie3 format file %r: %r" %
                            (filename, line))
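
A save/load round trip through the Set-Cookie3 format, as a sketch (stdlib `http.cookiejar`; the far-future `expires` keeps the cookie from being treated as a discardable session cookie on reload):

```python
import os
import tempfile
from http.cookiejar import Cookie, LWPCookieJar

jar = LWPCookieJar()
jar.set_cookie(Cookie(0, "sid", "abc123", None, False,
                      "example.com", True, False, "/", True,
                      False, 2147483647, False, None, None, {}))

fd, path = tempfile.mkstemp()
os.close(fd)
try:
    jar.save(path)                   # writes "#LWP-Cookies-2.0" plus Set-Cookie3 lines
    reloaded = LWPCookieJar()
    reloaded.load(path)
    assert len(reloaded) == 1
    cookie = next(iter(reloaded))
    assert (cookie.name, cookie.value) == ("sid", "abc123")
finally:
    os.remove(path)
```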


class MozillaCookieJar(FileCookieJar):
    """

    WARNING: you may want to back up your browser's cookies file if you use
    this class to save cookies.  I *think* it works, but there have been
    bugs in the past!

    This class differs from CookieJar only in the format it uses to save and
    load cookies to and from a file.  This class uses the Mozilla/Netscape
    `cookies.txt' format.  lynx uses this file format, too.

    Don't expect cookies saved while the browser is running to be noticed by
    the browser (in fact, Mozilla on unix will overwrite your saved cookies if
    you change them on disk while it's running; on Windows, you probably can't
    save at all while the browser is running).

    Note that the Mozilla/Netscape format will downgrade RFC2965 cookies to
    Netscape cookies on saving.

    In particular, the cookie version and port number information is lost,
    together with information about whether or not Path, Port and Discard were
    specified by the Set-Cookie2 (or Set-Cookie) header, and whether or not the
    domain as set in the HTTP header started with a dot (yes, I'm aware some
    domains in Netscape files start with a dot and some don't -- trust me, you
    really don't want to know any more about this).

    Note that though Mozilla and Netscape use the same format, they use
    slightly different headers.  The class saves cookies using the Netscape
    header by default (Mozilla can cope with that).

    """
    magic_re = re.compile("#( Netscape)? HTTP Cookie File")
    header = """\
# Netscape HTTP Cookie File
# http://www.netscape.com/newsref/std/cookie_spec.html
# This is a generated file!  Do not edit.

"""

    def _really_load(self, f, filename, ignore_discard, ignore_expires):
        now = time.time()

        magic = f.readline()
        if not self.magic_re.search(magic):
            f.close()
            raise LoadError(
                "%r does not look like a Netscape format cookies file" %
                filename)

        try:
            while 1:
                line = f.readline()
                if line == "": break

                # last field may be absent, so keep any trailing tab
                if line.endswith("\n"): line = line[:-1]

                # skip comments and blank lines XXX what is $ for?
                if (line.strip().startswith(("#", "$")) or
                    line.strip() == ""):
                    continue

                domain, domain_specified, path, secure, expires, name, value = \
                        line.split("\t")
                secure = (secure == "TRUE")
                domain_specified = (domain_specified == "TRUE")
                if name == "":
                    # cookies.txt regards 'Set-Cookie: foo' as a cookie
                    # with no name, whereas http.cookiejar regards it as a
                    # cookie with no value.
                    name = value
                    value = None

                initial_dot = domain.startswith(".")
                assert domain_specified == initial_dot

                discard = False
                if expires == "":
                    expires = None
                    discard = True

                # assume path_specified is false
                c = Cookie(0, name, value,
                           None, False,
                           domain, domain_specified, initial_dot,
                           path, False,
                           secure,
                           expires,
                           discard,
                           None,
                           None,
                           {})
                if not ignore_discard and c.discard:
                    continue
                if not ignore_expires and c.is_expired(now):
                    continue
                self.set_cookie(c)

        except IOError:
            raise
        except Exception:
            _warn_unhandled_exception()
            raise LoadError("invalid Netscape format cookies file %r: %r" %
                            (filename, line))

    def save(self, filename=None, ignore_discard=False, ignore_expires=False):
        if filename is None:
            if self.filename is not None: filename = self.filename
            else: raise ValueError(MISSING_FILENAME_TEXT)

        f = open(filename, "w")
        try:
            f.write(self.header)
            now = time.time()
            for cookie in self:
                if not ignore_discard and cookie.discard:
                    continue
                if not ignore_expires and cookie.is_expired(now):
                    continue
                if cookie.secure: secure = "TRUE"
                else: secure = "FALSE"
                if cookie.domain.startswith("."): initial_dot = "TRUE"
                else: initial_dot = "FALSE"
                if cookie.expires is not None:
                    expires = str(cookie.expires)
                else:
                    expires = ""
                if cookie.value is None:
                    # cookies.txt regards 'Set-Cookie: foo' as a cookie
                    # with no name, whereas http.cookiejar regards it as a
                    # cookie with no value.
                    name = ""
                    value = cookie.name
                else:
                    name = cookie.name
                    value = cookie.value
                f.write(
                    "\t".join([cookie.domain, initial_dot, cookie.path,
                               secure, expires, name, value])+
                    "\n")
        finally:
            f.close()
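
A sketch of the tab-separated cookies.txt record that save() emits, with a round trip back through load() (stdlib `http.cookiejar`):

```python
import os
import tempfile
from http.cookiejar import Cookie, MozillaCookieJar

jar = MozillaCookieJar()
jar.set_cookie(Cookie(0, "sid", "abc123", None, False,
                      ".example.com", True, True, "/", False,
                      False, 2147483647, False, None, None, {}))

fd, path = tempfile.mkstemp()
os.close(fd)
try:
    jar.save(path)
    with open(path) as f:
        record = f.read().splitlines()[-1]
    # domain, initial_dot, path, secure, expires, name, value
    assert record.split("\t") == [".example.com", "TRUE", "/", "FALSE",
                                  "2147483647", "sid", "abc123"]
    reloaded = MozillaCookieJar()
    reloaded.load(path)
    assert len(reloaded) == 1
finally:
    os.remove(path)
```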
    century that makes the year closest to the current date.

    r/r"rrc���N�)�STRICT_DATE_RErh�groupsrprqrrr�floatr=�lstrip�
WEEKDAY_RE�sub�LOOSE_HTTP_DATE_REr~)�textrl�grxr6rTryrzr;r<rjrrr �	http2time�s 
�
r�a�^
    (\d{4})              # year
       [-\/]?
    (\d\d?)              # numerical month
       [-\/]?
    (\d\d?)              # day
   (?:
         (?:\s+|[-:Tt])  # separator before clock
      (\d\d?):?(\d\d)    # hour:min
      (?::?(\d\d(?:\.\d*)?))?  # optional seconds (and fractional)
   )?                    # optional clock
      \s*
   (?:
      ([-+]?\d\d?:?(:?\d\d)?
       |Z|z)             # timezone  (Z is "zero meridian", i.e. GMT)
      \s*
   )?$c
Csd|��}dgd\}}}}}}}t�|�}|durL|��\}}}}}}}}	ndSt|||||||�S)av
    As for http2time, but parses the ISO 8601 formats:

    1994-02-03 14:15:29 -0100    -- ISO 8601 format
    1994-02-03 14:15:29          -- zone is optional
    1994-02-03                   -- only date
    1994-02-03T14:15:29          -- Use T as separator
    19940203T141529Z             -- ISO 8601 compact format
    19940203                     -- only date

    Nr�)r��ISO_DATE_RErhr�r~)
r�rTrxryrzr;r<rjrl�_rrr �iso2time6s

r�cCs*|�d�\}}|jd|�|j|d�S)z)Return unmatched part of re.Match object.rN)�span�string)�match�start�endrrr �	unmatchedWsr�z^\s*([^=\s;,]+)z&^\s*=\s*\"([^\"\\]*(?:\\.[^\"\\]*)*)\"z^\s*=\s*([^\s;,]*)z\\(.)c
Cs0t|t�rJ�g}|D�]}|}g}|�rt�|�}|r�t|�}|�d�}t�|�}|rxt|�}|�d�}t�d|�}n.t	�|�}|r�t|�}|�d�}|�
�}nd}|�||f�q$|���
d�r�|��dd�}|r�|�|�g}q$t�dd|�\}}	|	dk�sJd|||f��|}q$|r|�|�q|S)	amParse header values into a list of lists containing key,value pairs.

    The function knows how to deal with ",", ";" and "=" as well as quoted
    values after "=".  A list of space separated tokens are parsed as if they
    were separated by ";".

    If the header_values passed as argument contains multiple values, then they
    are treated as if they were a single value separated by comma ",".

    This means that this function is useful for parsing header fields that
    follow this syntax (BNF as from the HTTP/1.1 specification, but we relax
    the requirement for tokens).

      headers           = #header
      header            = (token | parameter) *( [";"] (token | parameter))

      token             = 1*<any CHAR except CTLs or separators>
      separators        = "(" | ")" | "<" | ">" | "@"
                        | "," | ";" | ":" | "\" | <">
                        | "/" | "[" | "]" | "?" | "="
                        | "{" | "}" | SP | HT

      quoted-string     = ( <"> *(qdtext | quoted-pair ) <"> )
      qdtext            = <any TEXT except <">>
      quoted-pair       = "\" CHAR

      parameter         = attribute "=" value
      attribute         = token
      value             = token | quoted-string

    Each header is represented by a list of key/value pairs.  The value for a
    simple token (not part of a parameter) is None.  Syntactically incorrect
    headers will not necessarily be parsed as you would want.

    This is easier to describe with some examples:

    >>> split_header_words(['foo="bar"; port="80,81"; discard, bar=baz'])
    [[('foo', 'bar'), ('port', '80,81'), ('discard', None)], [('bar', 'baz')]]
    >>> split_header_words(['text/html; charset="iso-8859-1"'])
    [[('text/html', None), ('charset', 'iso-8859-1')]]
    >>> split_header_words([r'Basic realm="\"foo\bar\""'])
    [[('Basic', None), ('realm', '"foobar"')]]

    r/z\1N�,z^[=\s;]*r�rz&split_header_words bug: '%s', '%s', %s)�
isinstancer
�HEADER_TOKEN_RErhr�ri�HEADER_QUOTED_VALUE_RE�HEADER_ESCAPE_REr��HEADER_VALUE_RE�rstrip�appendr��
startswith�re�subn)
�
header_values�resultr��	orig_text�pairsrl�name�value�non_junk�
nr_junk_charsrrr �split_header_words`sF-







��r��([\"\\])cCs|g}|D]h}g}|D]F\}}|durPt�d|�sDt�d|�}d|}d||f}|�|�q|r|�d�|��qd�|�S)a�Do the inverse (almost) of the conversion done by split_header_words.

    Takes a list of lists of (key, value) pairs and produces a single header
    value.  Attribute values are quoted if needed.

    >>> join_header_words([[("text/plain", None), ("charset", "iso-8859/1")]])
    'text/plain; charset="iso-8859/1"'
    >>> join_header_words([[("text/plain", None)], [("charset", "iso-8859/1")]])
    'text/plain, charset="iso-8859/1"'

    Nz^\w+$�\\\1z"%s"�%s=%s�; �, )r�rh�HEADER_JOIN_ESCAPE_REr�r��join)�lists�headersr��attr�k�vrrr �join_header_words�sr�cCs0|�d�r|dd�}|�d�r,|dd�}|S)N�"r/���)r��endswith�r�rrr �strip_quotes�s


r�cCs�d}g}|D]�}g}d}tt�d|��D]�\}}|��}|dkrBq(d|vrV|d}}	nt�d|d�\}}	|��}|d	kr�|��}
|
|vr�|
}|d
kr�t|	�}	d}|dkr�tt|	��}	|�||	f�q(|r|s�|�d
�|�|�q|S)a5Ad-hoc parser for Netscape protocol cookie-attributes.

    The old Netscape cookie format for Set-Cookie can for instance contain
    an unquoted "," in the expires field, so we have to use this ad-hoc
    parser instead of split_header_words.

    XXX This may not make the best possible effort to parse all the crap
    that Netscape Cookie headers contain.  Ronald Tschalar's HTTPClient
    parser is probably better, so could do worse than following that if
    this ever gives any trouble.

    Currently, this is also used for parsing RFC 2109 cookies.

    )�expires�domain�path�secure�version�port�max-ageFz;\s*r��=Nz\s*=\s*r/rr�Tr�)r��0)	�	enumerater��splitr�r�rrr�r�r�)�
ns_headers�known_attrsr��	ns_headerr��version_set�ii�paramr�r��lcrrr �parse_ns_headers�s6

r�z\.\d+$cCs:t�|�rdS|dkrdS|ddks2|ddkr6dSdS)z*Return True if text is a host domain name.Fr�r�.r�T��IPV4_RErhr�rrr �is_HDNs
r�cCsl|��}|��}||krdSt|�s(dS|�|�}|dksB|dkrFdS|�d�sTdSt|dd��shdSdS)a�Return True if domain A domain-matches domain B, according to RFC 2965.

    A and B may be host domain names or IP addresses.

    RFC 2965, section 1:

    Host names can be specified either as an IP address or a HDN string.
    Sometimes we compare one host name with another.  (Such comparisons SHALL
    be case-insensitive.)  Host A's name domain-matches host B's if

         *  their host name strings string-compare equal; or

         * A is a HDN string and has the form NB, where N is a non-empty
            name string, B has the form .B', and B' is a HDN string.  (So,
            x.y.com domain-matches .Y.com but not Y.com.)

    Note that domain-match is not a commutative operation: a.b.c.com
    domain-matches .c.com, but not the reverse.

    TFr�rr�r/N)rrr��rfindr�)�A�B�irrr �domain_matchs

r�cCst�|�rdSdS)zdReturn True if text is a sort-of-like a host domain name.

    For accepting/blocking domains.

    FTr�r�rrr �liberal_is_HDNAs
r�cCs`|��}|��}t|�r t|�s0||kr,dSdS|�d�}|rL|�|�rLdS|s\||kr\dSdS)z\For blocking/accepting domains.

    A and B may be host domain names or IP addresses.

    TFr�)rrr�r�r�)r�r��initial_dotrrr �user_domain_matchKs
r�z:\d+$cCs>|��}t|�d}|dkr(|�dd�}t�d|d�}|��S)z�Return request-host, as defined by RFC 2965.

    Variation from RFC: returned value is lowercased, for convenient
    comparison.

    r/r��Host)�get_full_urlr�
get_header�cut_port_rer�rr)�request�url�hostrrr �request_host`sr�cCs4t|�}}|�d�dkr,t�|�s,|d}||fS)zzReturn a tuple (request-host, effective request-host name).

    As defined by RFC 2965, except both are lowercased.

    r�r��.local)r��findr�rh)r��erhn�req_hostrrr �eff_request_hostpsr�cCs0|��}t|�}t|j�}|�d�s,d|}|S)z6Path component of request-URI, as defined by RFC 2965.�/)r�r�escape_pathr�r�)r�r��partsr�rrr �request_path{s

r�cCs^|j}|�d�}|dkrV||dd�}zt|�WqZtyRtd|�YdS0nt}|S)N�:rr/znonnumeric port: '%s')r�r�rrsr!�DEFAULT_HTTP_PORT)r�r�r�r�rrr �request_port�s


r�z%/;:@&=+$,!~*'()z%([0-9a-fA-F][0-9a-fA-F])cCsd|�d���S)Nz%%%sr/)rirw)r�rrr �uppercase_escaped_char�sr�cCst|t�}t�t|�}|S)zEEscape any invalid characters in HTTP URL, and uppercase all escapes.)r�HTTP_PATH_SAFE�ESCAPED_CHAR_REr�r�)r�rrr r��s

r�cCsP|�d�}|dkrL||dd�}|�d�}t|�rL|dksD|dkrLd|S|S)aBReturn reach of host h, as defined by RFC 2965, section 1.

    The reach R of a host name H is defined as follows:

       *  If

          -  H is the host domain name of a host; and,

          -  H has the form A.B; and

          -  A has no embedded (that is, interior) dots; and

          -  B has at least one embedded dot, or B is the string "local".
             then the reach of H is .B.

       *  Otherwise, the reach of H is H.

    >>> reach("www.acme.com")
    '.acme.com'
    >>> reach("acme.com")
    'acme.com'
    >>> reach("acme.local")
    '.local'

    r�rr/N�local)r�r�)�hr��brrr �reach�s

r�cCs&t|�}t|t|����sdSdSdS)z�

    RFC 2965, section 3.3.6:

        An unverifiable transaction is to a third-party host if its request-
        host U does not domain-match the reach R of the request-host O in the
        origin transaction.

    TFN)r�r�r��get_origin_req_host)r�r�rrr �is_third_party�s
r�c@sTeZdZdZddd�Zdd�Zddd	�Zd
d�Zddd
�Zdd�Z	e
�dd��ZdS)r
a�HTTP Cookie.

    This class represents both Netscape and RFC 2965 cookies.

    This is deliberately a very simple class.  It just holds attributes.  It's
    possible to construct Cookie instances that don't comply with the cookie
    standards.  CookieJar.make_cookies is the factory function for Cookie
    objects -- it deals with cookie parsing, supplying defaults, and
    normalising to the representation used in this class.  CookiePolicy is
    responsible for checking them to see whether they should be accepted from
    and returned to the server.

    Note that the port may be present in the headers, but unspecified ("Port"
    rather than"Port=80", for example); if this is the case, port is None.

    FcCs�|durt|�}|dur t|�}|dur8|dur8td��||_||_||_||_||_|��|_||_	||_
|	|_|
|_||_
||_|
|_||_||_||_t�|�|_dS)NTz-if port is None, port_specified must be false)rrsr�r�r�r��port_specifiedrrr��domain_specified�domain_initial_dotr��path_specifiedr�r��discard�comment�comment_url�rfc2109�copy�_rest)�selfr�r�r�r�r�r�r�r�r�r�r�r�r�r�r��restr�rrr �__init__�s*

zCookie.__init__cCs
||jvS�N�r)rr�rrr �has_nonstandard_attrszCookie.has_nonstandard_attrNcCs|j�||�Sr)r�get)rr��defaultrrr �get_nonstandard_attrszCookie.get_nonstandard_attrcCs||j|<dSrr)rr�r�rrr �set_nonstandard_attrszCookie.set_nonstandard_attrcCs,|durt��}|jdur(|j|kr(dSdS�NTF)rtr�)r�nowrrr �
is_expiredszCookie.is_expiredcCsX|jdurd}n
d|j}|j||j}|jdurFd|j|jf}n|j}d||fS)Nr�r�r�z<Cookie %s for %s>)r�r�r�r�r�)r�p�limit�	namevaluerrr �__str__$s

zCookie.__str__cCszg}dD]:}t||�}t|t�r(t|�}|�td�|t|�f�q|�dt|j��|�dt|j��dd�|�S)N)r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�zrest=%sz
rfc2109=%sz
Cookie(%s)r�)�getattrr�r
r��reprrr�r�)rrr�r�rrr �__repr__.s

zCookie.__repr__)F)N)N)�__name__�
__module__�__qualname__�__doc__rrr
rrrrrrrrr r
�s�
*


r
c@s0eZdZdZdd�Zdd�Zdd�Zdd	�Zd
S)ra Defines which cookies get accepted from and returned to server.

    May also modify cookies, though this is probably a bad idea.

    The subclass DefaultCookiePolicy defines the standard rules for Netscape
    and RFC 2965 cookies -- override that if you want a customised policy.

    cCs
t��dS)z�Return true if (and only if) cookie should be accepted from server.

        Currently, pre-expired cookies never get this far -- the CookieJar
        class deletes such cookies itself.

        N��NotImplementedError�r�cookier�rrr �set_okLszCookiePolicy.set_okcCs
t��dS)zAReturn true if (and only if) cookie should be returned to server.Nrrrrr �	return_okUszCookiePolicy.return_okcCsdS)zMReturn false if cookies should not be returned, given cookie domain.
        Tr)rr�r�rrr �domain_return_okYszCookiePolicy.domain_return_okcCsdS)zKReturn false if cookies should not be returned, given cookie path.
        Tr)rr�r�rrr �path_return_ok^szCookiePolicy.path_return_okN)rrrrrrr r!rrrr rCs
	rc@s�eZdZdZdZdZdZdZeeBZdddddddddeddfd	d
�Z	dd�Z
d
d�Zdd�Zdd�Z
dd�Zdd�Zdd�Zdd�Zdd�Zdd�Zdd �Zd!d"�Zd#d$�Zd%d&�Zd'd(�Zd)d*�Zd+d,�Zd-d.�Zd/d0�Zd1d2�Zd3d4�Zd5d6�ZdS)7rzBImplements the standard rules for accepting and returning cookies.r/r"rrNTFc

Csp||_||_||_||_||_||_|	|_|
|_||_||_	|durPt
|�|_nd|_|durft
|�}||_dS)zAConstructor arguments should be passed as keyword arguments only.Nr)
�netscape�rfc2965�rfc2109_as_netscape�hide_cookie2�
strict_domain�strict_rfc2965_unverifiable�strict_ns_unverifiable�strict_ns_domain�strict_ns_set_initial_dollar�strict_ns_set_path�tuple�_blocked_domains�_allowed_domains)
r�blocked_domains�allowed_domainsr"r#r$r%r&r'r(r)r*r+rrr rns 
zDefaultCookiePolicy.__init__cCs|jS)z4Return the sequence of blocked domains (as a tuple).)r-�rrrr r/�sz#DefaultCookiePolicy.blocked_domainscCst|�|_dS)z$Set the sequence of blocked domains.N)r,r-)rr/rrr �set_blocked_domains�sz'DefaultCookiePolicy.set_blocked_domainscCs |jD]}t||�rdSqdSr)r-r�)rr��blocked_domainrrr �
is_blocked�s

zDefaultCookiePolicy.is_blockedcCs|jS)z=Return None, or the sequence of allowed domains (as a tuple).)r.r1rrr r0�sz#DefaultCookiePolicy.allowed_domainscCs|durt|�}||_dS)z-Set the sequence of allowed domains, or None.N)r,r.)rr0rrr �set_allowed_domains�sz'DefaultCookiePolicy.set_allowed_domainscCs.|jdurdS|jD]}t||�rdSqdS)NFT)r.r�)rr��allowed_domainrrr �is_not_allowed�s


z"DefaultCookiePolicy.is_not_allowedcCsNtd|j|j�|jdusJ�dD]&}d|}t||�}|||�s"dSq"dS)z�
        If you override .set_ok(), be sure to call this method.  If it returns
        false, so should your subclass (assuming your subclass wants to be more
        strict about which cookies to accept).

        � - checking cookie %s=%sN)r��
verifiabilityr�r�r�r��set_ok_FT�r!r�r�r�rrr��n�fn_name�fnrrr r�s

zDefaultCookiePolicy.set_okcCsZ|jdurtd|j|j�dS|jdkr:|js:td�dS|jdkrV|jsVtd�dSdS)Nz0   Set-Cookie2 without version attribute (%s=%s)Fr�$   RFC 2965 cookies are switched off�$   Netscape cookies are switched offT)r�r!r�r�r#r"rrrr �set_ok_version�s
�z"DefaultCookiePolicy.set_ok_versioncCsJ|jrFt|�rF|jdkr*|jr*td�dS|jdkrF|jrFtd�dSdS�Nrz>   third-party RFC 2965 cookie during unverifiable transactionFz>   third-party Netscape cookie during unverifiable transactionT��unverifiabler�r�r'r!r(rrrr �set_ok_verifiability�sz(DefaultCookiePolicy.set_ok_verifiabilitycCs0|jdkr,|jr,|j�d�r,td|j�dSdS)Nr�$z'   illegal name (starts with '$'): '%s'FT)r�r*r�r�r!rrrr �set_ok_name�s
�zDefaultCookiePolicy.set_ok_namecCsJ|jrFt|�}|jdks(|jdkrF|jrF|�|j�sFtd|j|�dSdS)Nrz7   path attribute %s is not a prefix of request path %sFT)r�r�r�r+r�r�r!)rrr��req_pathrrr �set_ok_path�s
��
��zDefaultCookiePolicy.set_ok_pathc
Cs�|�|j�rtd|j�dS|�|j�r8td|j�dS|j�r�t|�\}}|j}|jr�|�d�dkr�|�d�}|�dd|�}|dkr�||dd�}||d|�}	|	�	�dvr�t
|�dkr�td	|�dS|�d�r�|dd�}
n|}
|
�d�dk}|�s|d
k�rtd|�dS|j
dk�rX|�|��sX|�d��sXd|�|��sXtd||�dS|j
dk�sr|j|j@�r�t||��s�td
||�dS|j
dk�s�|j|j@�r�|dt
|��}|�d�dk�r�t�|��s�td||�dSdS)N�"   domain %s is in user block-listF�&   domain %s is not in user allow-listr�r"rr/)�co�ac�com�edu�org�net�gov�milr�aero�biz�cat�coop�info�jobs�mobi�museumr��pro�travel�euz&   country-code second level domain %sr�z/   non-local domain %s contains no embedded dotzO   effective request-host %s (even with added initial dot) does not end with %sz5   effective request-host %s does not domain-match %sz.   host prefix %s for domain %s contains a dotT)r4r�r!r7r�r�r&�countr�rr�lenr�r�r�r�r)�DomainRFC2965Matchr��DomainStrictNoDotsr�rh)
rrr�r�r�r�r��j�tld�sld�undotted_domain�
embedded_dots�host_prefixrrr �
set_ok_domain�sv

�

����
��
���z!DefaultCookiePolicy.set_ok_domainc	Cs�|jr�t|�}|durd}nt|�}|j�d�D]>}zt|�Wn ty`td|�YdS0||kr0q�q0td||j�dSdS)N�80r�z   bad port %s (not numeric)Fz$   request port (%s) not found in %sT)r�r�r
r�r�rrsr!�rrr��req_portrrrr �set_ok_port*s$

�zDefaultCookiePolicy.set_ok_portcCs@td|j|j�dD]&}d|}t||�}|||�sdSqdS)z�
        If you override .return_ok(), be sure to call this method.  If it
        returns false, so should your subclass (assuming your subclass wants to
        be more strict about which cookies to return).

        r8)r�r9r�r�r�r��
return_ok_FTr;r<rrr r?s	

zDefaultCookiePolicy.return_okcCs<|jdkr|jstd�dS|jdkr8|js8td�dSdS)Nrr@FrAT)r�r#r!r"rrrr �return_ok_versionQsz%DefaultCookiePolicy.return_ok_versioncCsJ|jrFt|�rF|jdkr*|jr*td�dS|jdkrF|jrFtd�dSdSrCrDrrrr �return_ok_verifiabilityZsz+DefaultCookiePolicy.return_ok_verifiabilitycCs |jr|jdkrtd�dSdS)N�httpsz(   secure cookie with non-secure requestFT)r��typer!rrrr �return_ok_securefsz$DefaultCookiePolicy.return_ok_securecCs|�|j�rtd�dSdS)Nz   cookie expiredFT)r�_nowr!rrrr �return_ok_expireslsz%DefaultCookiePolicy.return_ok_expirescCsN|jrJt|�}|durd}|j�d�D]}||kr&qJq&td||j�dSdS)Nrkr�z0   request port %s does not match cookie port %sFT)r�r�r�r!rlrrr �return_ok_portrs�z"DefaultCookiePolicy.return_ok_portcCs�t|�\}}|j}|jdkrB|j|j@rB|jsB||krBtd�dS|jdkrft||�sftd||�dS|jdkr�d|�|�s�td||�dSdS)NrzQ   cookie with unspecified domain does not string-compare equal to request domainFzQ   effective request-host name %s does not domain-match RFC 2965 cookie domain %sr�z;   request-host %s does not match Netscape cookie domain %sT)	r�r�r�r)�DomainStrictNonDomainr�r!r�r�)rrr�r�r�r�rrr �return_ok_domain�s,

�����z$DefaultCookiePolicy.return_ok_domaincCs|t|�\}}|�d�sd|}|�d�s0d|}|�|�sH|�|�sHdS|�|�r`td|�dS|�|�rxtd|�dSdS)Nr�FrKrLT)r�r�r�r4r!r7)rr�r�r�r�rrr r �s





z$DefaultCookiePolicy.domain_return_okcCs0td|�t|�}|�|�s,td||�dSdS)Nz- checking cookie path=%sz  %s does not path-match %sFT)r!r�r�)rr�r�rIrrr r!�s

z"DefaultCookiePolicy.path_return_ok) rrrrrcrxrb�
DomainLiberal�DomainStrictrr/r2r4r0r5r7rrBrFrHrJrjrnrrprqrtrvrwryr r!rrrr rdsN�
!	;	rcCst|���}t|j|�Sr)�sorted�keysrr)�adictr}rrr �vals_sorted_by_key�src	csZt|�}|D]H}d}z
|jWnty0Yn0d}t|�D]
}|Vq>|s|VqdS)zBIterates over nested mapping, depth-first, in sorted order by key.FTN)r�items�AttributeError�
deepvalues)�mapping�values�objZsubobjrrr r��s
r�c@seZdZdS)�AbsentN�rrrrrrr r���r�c@seZdZdZe�d�Ze�d�Ze�d�Ze�d�Z	e�d�Z
e�dej�Zd3d	d
�Z
dd�Zd
d�Zdd�Zdd�Zdd�Zdd�Zdd�Zdd�Zdd�Zdd�Zdd �Zd!d"�Zd#d$�Zd4d%d&�Zd'd(�Zd)d*�Zd+d,�Zd-d.�Ze �d/d0��Z!d1d2�Z"dS)5rz�Collection of HTTP cookies.

    You may not need to know about this class: try
    urllib.request.build_opener(HTTPCookieProcessor).open(url).
    z\Wr�z\.?[^.]*z[^.]*z^\.+z^\#LWP-Cookies-(\d+\.\d+)NcCs(|durt�}||_t��|_i|_dSr)r�_policy�
_threading�RLock�
_cookies_lock�_cookies�r�policyrrr r�s

zCookieJar.__init__cCs
||_dSr)r�r�rrr �
set_policy�szCookieJar.set_policycCs�g}|j�||�sgStd|�|j|}|��D]T}|j�||�sFq2||}|��D].}|j�||�srtd�qVtd�|�|�qVq2|S)Nz!Checking %s for cookies to returnz   not returning cookiez   it's a match)	r�r r!r�r}r!r�rr�)rr�r��cookies�cookies_by_pathr��cookies_by_namerrrr �_cookies_for_domain�s 

zCookieJar._cookies_for_domaincCs*g}|j��D]}|�|�||��q|S)z2Return a list of cookies to be returned to server.)r�r}�extendr�)rr�r�r�rrr �_cookies_for_request�szCookieJar._cookies_for_requestc	Cs<|jdd�dd�d}g}|D�]}|j}|sHd}|dkrH|�d|�|jdurz|j�|j�rz|dkrz|j�d	|j�}n|j}|jdur�|�|j�n|�d
|j|f�|dkr|j	r�|�d|j
�|j�d��r|j}|j
s�|�d�r�|d
d�}|�d|�|jdurd}|j�r,|d|j}|�|�q|S)z�Return a list of cookie-attributes to be returned to server.

        like ['foo="bar"; $Path="/"', ...]

        The $Version attribute is also added when appropriate (currently only
        once per request).

        cSs
t|j�Sr)rar�)�arrr �<lambda>r�z)CookieJar._cookie_attrs.<locals>.<lambda>T)�key�reverseFrz$Version=%sNr�r�z
$Path="%s"r�r/z$Domain="%s"z$Portz="%s")�sortr�r�r��non_word_rerh�quote_rer�r�r�r�r�r�r�r�r�)	rr�r��attrsrr�r�r�rrrr �
_cookie_attrssF


��
�
zCookieJar._cookie_attrscCs�td�|j��z�tt���|j_|_|�|�}|�|�}|r^|�	d�s^|�
dd�|��|jjr�|jj
s�|�	d�s�|D]}|jdkr||�
dd�q�q|W|j��n|j��0|��dS)z�Add correct Cookie: header to request (urllib.request.Request object).

        The Cookie2 header is also added unless policy.hide_cookie2 is true.

        �add_cookie_headerr
r��Cookie2r/z$Version="1"N)r!r��acquirerrtr�rur�r��
has_header�add_unredirected_headerr�r#r%r��release�clear_expired_cookies)rr�r�r�rrrr r�<s(




��
zCookieJar.add_cookie_headerc
Cs�g}d}d}|D�]x}|d\}}d}d}	i}
i}|dd�D�].\}}
|��}||vs`||vrd|}||vrx|
durxd}
||
vr�q>|dkr�|
dur�td	�d}	�qp|
��}
|d
kr�|r�q>|
dur�td�q>|dk�rd}zt|
�}
Wn(t�ytd
�d}	Y�qpYn0d
}|j|
}
||v�s2||v�rf|
du�r\|dv�r\td|�d}	�qp|
|
|<q>|
||<q>|	�rxq|�|||
|f�q|S)aReturn list of tuples containing normalised cookie information.

        attrs_set is the list of lists of key,value pairs extracted from
        the Set-Cookie or Set-Cookie2 headers.

        Tuples are name, value, standard, rest, where name and value are the
        cookie name and value, standard is a dictionary containing the standard
        cookie-attributes (discard, secure, version, expires or max-age,
        domain, path and port) and rest is a dictionary containing the rest of
        the cookie-attributes.

        )r�r�)r�r�r�r�r�r�r��
commenturlrFr/NTr�z%   missing value for domain attributer�zM   missing or invalid value for expires attribute: treating as session cookier�z?   missing or invalid (non-numeric) value for max-age attribute)r�r�r�z!   missing value for %s attribute)rrr!rrsrur�)r�	attrs_set�
cookie_tuples�
boolean_attrs�value_attrs�cookie_attrsr�r��max_age_set�
bad_cookie�standardrr�r�r�rrr �_normalized_cookie_tuples]sh





�

z#CookieJar._normalized_cookie_tuplescCs"|\}}}}|�dt�}|�dt�}|�dt�}	|�dt�}
|�dd�}|durrzt|�}WntypYdS0|�dd�}|�dd�}
|�d	d�}|�d
d�}|tur�|dkr�d}t|�}nXd}t|�}|�d
�}|dk�r|dkr�|d|�}n|d|d�}t|�dk�rd
}|tu}d}|�r8t|�	d��}|tu�rTt
|�\}}|}n|�	d��shd|}d}|	tu�r�|	du�r�t|�}	nd}t�
dd|	�}	nd}	|
tu�r�d}
d}
nF|
|jk�r�z|�|||�Wnt�y�Yn0td|||�dSt||||	||||||||
|
|||�S)Nr�r�r�r�r�r�Fr�r�r�r�Tr�r�rr/r�z\s+z2Expiring cookie, domain='%s', path='%s', name='%s')rr�rrsr�r�r�ra�boolr�r�r�r�r�ru�clear�KeyErrorr!r
)r�tupr�r�r�r�rr�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�rrr �_cookie_from_cookie_tuple�s�







��z#CookieJar._cookie_from_cookie_tuplecCs6|�|�}g}|D]}|�||�}|r|�|�q|Sr)r�r�r�)rr�r�r�r�r�rrrr �_cookies_from_attrs_sets
z!CookieJar._cookies_from_attrs_setcCsHt|jdd�}|dur |jj}|D]}|jdkr$d|_|r$d|_q$dS)Nr$r/Tr)rr�r#r�r�)rr��
rfc2109_as_nsrrrr �_process_rfc2109_cookies#s

z"CookieJar._process_rfc2109_cookiesc
Cs |��}|�dg�}|�dg�}|jj}|jj}|s8|rP|s@|rP|sH|rP|sT|sTgSz|�t|�|�}Wnty�t�g}Yn0|�r|�rz|�t	|�|�}	Wnty�t�g}	Yn0|�
|	�|�ri}
|D]}d|
|j|j|j
f<q�|
fdd�}t||	�}	|	�r|�|	�|S)zAReturn sequence of Cookie objects extracted from response object.zSet-Cookie2z
Set-CookieNcSs|j|j|jf}||vSr)r�r�r�)�	ns_cookie�lookupr�rrr �no_matching_rfc2965[sz3CookieJar.make_cookies.<locals>.no_matching_rfc2965)rY�get_allr�r#r"r�r��	Exceptionr-r�r�r�r�r�rr�)
r�responser�r��rfc2965_hdrs�ns_hdrsr#r"r��
ns_cookiesr�rr�rrr �make_cookies/sX�������
�



zCookieJar.make_cookiescCsX|j��z<tt���|j_|_|j�||�r:|�|�W|j��n|j��0dS)z-Set a cookie if policy says it's OK to do so.N)	r�r�rrtr�rur�
set_cookier�rrrr �set_cookie_if_okes
zCookieJar.set_cookie_if_okcCsv|j}|j��zT|j|vr&i||j<||j}|j|vrDi||j<||j}|||j<W|j��n|j��0dS)z?Set a cookie, without checking whether or not it should be set.N)r�r�r�r�r�r�r�)rr�c�c2�c3rrr r�rs


zCookieJar.set_cookiecCs�td|���|j��zXtt���|j_|_|�||�D]&}|j�	||�r<td|�|�
|�q<W|j��n|j��0dS)zAExtract cookies from response, where allowable given the request.zextract_cookies: %sz setting cookie: %sN)r!rYr�r�rrtr�rur�rr�r�)rr�r�rrrr �extract_cookiess

zCookieJar.extract_cookiescCst|dur2|dus|dur td��|j|||=n>|durX|durJtd��|j||=n|durj|j|=ni|_dS)a�Clear some cookies.

        Invoking this method without arguments will clear all cookies.  If
        given a single argument, only cookies belonging to that domain will be
        removed.  If given two arguments, cookies belonging to the specified
        path within that domain are removed.  If given three arguments, then
        the cookie with the specified name, path and domain is removed.

        Raises KeyError if no matching cookie exists.

        Nz8domain and path must be given to remove a cookie by namez.domain must be given to remove cookies by path)rsr�)rr�r�r�rrr r��s��
zCookieJar.clearcCsN|j��z2|D]}|jr|�|j|j|j�qW|j��n|j��0dS)z�Discard all session cookies.

        Note that the .save() method won't save session cookies anyway, unless
        you ask otherwise by passing a true ignore_discard argument.

        N)r�r�r�r�r�r�r�r�)rrrrr �clear_session_cookies�s
zCookieJar.clear_session_cookiescCsZ|j��z>t��}|D]"}|�|�r|�|j|j|j�qW|j��n|j��0dS)a�Discard all expired cookies.

        You probably don't need to call this method: expired cookies are never
        sent back to the server (provided you're using DefaultCookiePolicy),
        this method is called by CookieJar itself every so often, and the
        .save() method won't save expired cookies anyway (unless you ask
        otherwise by passing a true ignore_expires argument).

        N)	r�r�rtrr�r�r�r�r�)rr
rrrr r��s


zCookieJar.clear_expired_cookiescCs
t|j�Sr)r�r�r1rrr �__iter__�szCookieJar.__iter__cCsd}|D]}|d}q|S)z#Return number of contained cookies.rr/r)rr�rrrr �__len__�szCookieJar.__len__cCs0g}|D]}|�t|��qd|jd�|�fS�Nz<%s[%s]>r�)r�r�	__class__r��r�rrrrr r�szCookieJar.__repr__cCs0g}|D]}|�t|��qd|jd�|�fSr�)r�r
r�r�r�rrr r�szCookieJar.__str__)N)NNN)#rrrrr��compiler�r��strict_domain_re�	domain_re�dots_re�ASCII�magic_rerr�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�rrrrrrr r�s:





;!a\	6



rc@seZdZdS)rNr�rrrr r�r�rc@s8eZdZdZddd�Zd
dd�Zddd	�Zdd
d�ZdS)rz6CookieJar that can be loaded from and saved to a file.NFcCsJt�||�|dur6z|dWntd��Yn0||_t|�|_dS)z}
        Cookies are NOT loaded from the named file until either the .load() or
        .revert() method is called.

        Nr�zfilename must be string-like)rrrs�filenamer��	delayload)rr�r�r�rrr r�szFileCookieJar.__init__cCs
t��dS)zSave cookies to a file.Nr)rr��ignore_discard�ignore_expiresrrr �save�szFileCookieJar.savecCsV|dur"|jdur|j}ntt��t|�}z|�||||�W|��n
|��0dS)zLoad cookies from a file.N)r�rs�MISSING_FILENAME_TEXTr	�_really_load�close�rr�r�r�r+rrr �load�szFileCookieJar.loadc	Cs�|dur"|jdur|j}ntt��|j��zRt�|j�}i|_z|�|||�Wnt	t
fyp||_�Yn0W|j��n|j��0dS)z�Clear all cookies and reload cookies from a saved file.

        Raises LoadError (or IOError) if reversion is not successful; the
        object's state will not be altered if this happens.

        N)r�rsr�r�r�r�deepcopyr�r�r�IOErrorr�)rr�r�r��	old_staterrr �reverts

zFileCookieJar.revert)NFN)NFF)NFF)NFF)rrrrrr�r�r�rrrr r�s


�rcCs |j|jfd|jfd|jfg}|jdur8|�d|jf�|jrH|�d�|jrX|�d�|jrh|�d�|j	rx|�d�|j
r�|�d	tt|j
��f�|j
r�|�d
�|jr�|�d|jf�|jr�|�d|jf�t|j���}|D]}|�|t|j|�f�q�|�d
t|j�f�t|g�S)z�Return string representation of Cookie in an the LWP cookie file format.

    Actually, the format is extended a bit -- see module docstring.

    r�r�Nr�)�	path_specN)�	port_specN)�
domain_dotN)r�Nr�)r�Nr�r�r�)r�r�r�r�r�r�r�r�r�r�r�rZr�r�r�r�r|rr}r
r�r�)rr�r}r�rrr �lwp_cookie_strs(
��r�c@s,eZdZdZddd�Zddd�Zd	d
�ZdS)
ra[
    The LWPCookieJar saves a sequence of "Set-Cookie3" lines.
    "Set-Cookie3" is the format used by the libwww-perl library, not known
    to be compatible with any browser, but which is easy to read and
    doesn't lose information about RFC 2965 cookies.

    Additional methods

    as_lwp_str(ignore_discard=True, ignore_expired=True)

    TcCsTt��}g}|D]2}|s |jr q|s0|�|�r0q|�dt|��qd�|dg�S)z�Return cookies as a string of "\n"-separated "Set-Cookie3" headers.

        ignore_discard and ignore_expires: see docstring for FileCookieJar.save

        zSet-Cookie3: %s�
r�)rtr�rr�r�r�)rr�r�r
r�rrrr �
as_lwp_strHs
zLWPCookieJar.as_lwp_strNFcCsd|dur"|jdur|j}ntt��t|d�}z(|�d�|�|�||��W|��n
|��0dS)N�wz#LWP-Cookies-2.0
)r�rsr�r	�writer�r�r�rrr r�Xs

zLWPCookieJar.savecCs,|��}|j�|�s$d|}t|��t��}d}d}	d}
�z�|��}|dkrP�q�|�|�s\q<|t|�d���}t|g�D�]f}|d\}
}i}i}|	D]}d||<q�|dd�D]n\}}|dur�|�	�}nd}||
vs�||	vr�|}||	v�r|dur�d	}|||<q�||
v�r|||<q�|||<q�|j
}|d
�}|d�}|du�rJt|�}|du�rXd	}|d�}|�d
�}t|d�|
||d�|d�|||d�|d�|d�|d�|||d�|d�|�}|�s�|j
�r�qz|�s�|�|��r�qz|�|�qzq<Wn>t�y��Yn*t�y&t�td||f��Yn0dS)Nz5%r does not look like a Set-Cookie3 (LWP) format filezSet-Cookie3:)r�r�r�r�r�)r�r�r�r�r�r�r�r�rFr/Tr�r�r�r�r�r�r�r�r�r�r�r�r�z&invalid Set-Cookie3 format file %r: %r)�readliner�rhrrtr�ra�stripr�rrrr�r
r�rr�r�r�r-)rr+r�r�r��magicr,r
�headerr�r��line�datar�r�r�rr�r�r�r�r�r�r�r�r�rrr r�gs��











�
�zLWPCookieJar._really_load)TT)NFF)rrrrr�r�r�rrrr r;s

rc@s0eZdZdZe�d�ZdZdd�Zd
dd	�Z	dS)ra�

    WARNING: you may want to backup your browser's cookies file if you use
    this class to save cookies.  I *think* it works, but there have been
    bugs in the past!

    This class differs from CookieJar only in the format it uses to save and
    load cookies to and from a file.  This class uses the Mozilla/Netscape
    `cookies.txt' format.  lynx uses this file format, too.

    Don't expect cookies saved while the browser is running to be noticed by
    the browser (in fact, Mozilla on unix will overwrite your saved cookies if
    you change them on disk while it's running; on Windows, you probably can't
    save at all while the browser is running).

    Note that the Mozilla/Netscape format will downgrade RFC2965 cookies to
    Netscape cookies on saving.

    In particular, the cookie version and port number information is lost,
    together with information about whether or not Path, Port and Discard were
    specified by the Set-Cookie2 (or Set-Cookie) header, and whether or not the
    domain as set in the HTTP header started with a dot (yes, I'm aware some
    domains in Netscape files start with a dot and some don't -- trust me, you
    really don't want to know any more about this).

    Note that though Mozilla and Netscape use the same format, they use
    slightly different headers.  The class saves cookies using the Netscape
    header by default (Mozilla can cope with that).

    z#( Netscape)? HTTP Cookie Filez~# Netscape HTTP Cookie File
# http://www.netscape.com/newsref/std/cookie_spec.html
# This is a generated file!  Do not edit.

cCsxt��}|��}|j�|�s0|��td|���z|��}|dkrH�q2|�d�r^|dd�}|���d�s4|��dkrzq4|�	d�\}}	}
}}}
}|dk}|	dk}	|
dkr�|}
d}|�d�}|	|ks�J�d	}|dkr�d}d
}t
d|
|dd	||	||
d	|||ddi�}|�s|j�rq4|�s&|�|��r&q4|�
|�q4Wn>t�yJ�Yn*t�yrt�td||f��Yn0dS)
Nz4%r does not look like a Netscape format cookies filer�r�r�)�#rG�	�TRUEr�FTrz+invalid Netscape format cookies file %r: %r)rtr�r�rhr�rr�r�r�r�r
r�rr�r�r�r-)rr+r�r�r�r
r�r�r�r�r�r�r�r�r�r�r�r�rrr r��sj��
��
�
�zMozillaCookieJar._really_loadNFcCs
|dur"|jdur|j}ntt��t|d�}z�|�|j�t��}|D]�}|sV|jrVqF|sf|�|�rfqF|j	rrd}nd}|j
�d�r�d}nd}|jdur�t
|j�}	nd}	|jdur�d}
|j}n|j}
|j}|�d�|j
||j||	|
|g�d�qFW|��n
|��0dS)Nr�r��FALSEr�r�r�r�)r�rsr�r	r�r�rtr�rr�r�r�r�r
r�r�r�r�r�)rr�r�r�r+r
rr�r�r�r�r�rrr r� s@



���zMozillaCookieJar.save)NFF)
rrrrr�r�r�r�r�r�rrrr r�s

Br)N)N)fr�
__future__rrrrZfuture.builtinsrrrr	r
Zfuture.utilsrr�__all__rrQr�r�rtZfuture.backports.urllib.parserrrZfuture.backports.http.clientr�	threadingr��ImportErrorZdummy_threading�calendarrrrr!r�r�r-r5r=r[r]rpr8r�rrrZr^rfr�rgrmr~r��Ir��Xr�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r��objectr
rrrr�r�rr�rrr�rrrrrr �<module>s�


5��
�8
�!



U
4'


#h!S=|PK�Cu\�NTs��9future/backports/http/__pycache__/__init__.cpython-39.pycnu�[���a

��?h�@sdS)N�rrr�H/usr/local/lib/python3.9/site-packages/future/backports/http/__init__.py�<module>�PK�Cu\Ko�
x
x7future/backports/http/__pycache__/client.cpython-39.pycnu�[���a

��?h��.@s�dZddlmZmZmZmZddlmZmZm	Z	m
Z
ddlmZddl
mZddl
mZddlmZddlZddlZddlZdd	lmZddlZdd
lmZer�ddlmZnddlmZgd�Zd
Z dZ!dZ"dZ#dZ$dZ%dZ&dZ'dZ(dZ)dZ*dZ+dZ,dZ-dZ.dZ/dZ0dZ1dZ2d Z3d!Z4d"Z5d#Z6d$Z7d%Z8d&Z9d'Z:d(Z;d)Z<d*Z=d+Z>d,Z?d-Z@d.ZAd/ZBd0ZCd1ZDd2ZEd3ZFd4ZGd5ZHd6ZId7ZJd8ZKd9ZLd:ZMd;ZNd<ZOd=ZPd>ZQd?ZRd@ZSdAZTdBZUdCZVdDZWdEZXdFZYdGZZdHdIdJdKdLdMdNdOdPdQdRdSdTdUdVdWdXdYdZd[d\d]d^d_d`dadbdcdddedfdgdhdidjdkdldmdndodpdqdrdsdtdu�-Z[dvZ\dwZ]dZ^Gdxdy�dyej_�Z`e`fdzd{�Zaeb�ZcGd|d}�d}ejd�ZeGd~d�deb�ZfzddlgZgdd�lgmhZhWnei�y�Yn0Gd�d��d�ef�Zje�kd��Gd�d��d�el�ZmGd�d��d�em�ZnGd�d��d�em�ZoGd�d��d�em�ZpGd�d��d�em�ZqGd�d��d�em�ZrGd�d��d�em�ZsGd�d��d�em�ZtGd�d��d�et�ZuGd�d��d�et�ZvGd�d��d�et�ZwGd�d��d�em�ZxGd�d��d�em�ZyemZzdS)�aD
HTTP/1.1 client library

A backport of the Python 3.3 http/client.py module for python-future.

<intro stuff goes here>
<other stuff, too>

HTTPConnection goes through a number of "states", which define when a client
may legally make another request or fetch the response for a particular
request. This diagram details these state transitions:

    (null)
      |
      | HTTPConnection()
      v
    Idle
      |
      | putrequest()
      v
    Request-started
      |
      | ( putheader() )*  endheaders()
      v
    Request-sent
      |
      | response = getresponse()
      v
    Unread-response   [Response-headers-read]
      |\____________________
      |                     |
      | response.read()     | putrequest()
      v                     v
    Idle                  Req-started-unread-response
                     ______/|
                   /        |
   response.read() |        | ( putheader() )*  endheaders()
                   v        v
       Request-started    Req-sent-unread-response
                            |
                            | response.read()
                            v
                          Request-sent

This diagram presents the following rules:
  -- a second request may not be started until {response-headers-read}
  -- a response [object] cannot be retrieved until {request-sent}
  -- there is no differentiation between an unread response body and a
     partially read response body

Note: this enforcement is applied by the HTTPConnection class. The
      HTTPResponse class does not enforce this state machine, which
      implies sophisticated clients may accelerate the request/response
      pipeline. Caution should be taken, though: accelerating the states
      beyond the above pattern may imply knowledge of the server's
      connection-close behavior for certain requests. For example, it
      is impossible to tell whether the server will close the connection
      UNTIL the response headers have been read; this means that further
      requests cannot be placed into the pipeline until it is known that
      the server will NOT be closing the connection.

Logical State                  __state            __response
-------------                  -------            ----------
Idle                           _CS_IDLE           None
Request-started                _CS_REQ_STARTED    None
Request-sent                   _CS_REQ_SENT       None
Unread-response                _CS_IDLE           <response_class>
Req-started-unread-response    _CS_REQ_STARTED    <response_class>
Req-sent-unread-response       _CS_REQ_SENT       <response_class>
�)�absolute_import�division�print_function�unicode_literals)�bytes�int�str�super)�PY2)�parser)�message)�create_connectionN)�urlsplit)�array)�Iterable)�HTTPResponse�HTTPConnection�
HTTPException�NotConnected�UnknownProtocol�UnknownTransferEncoding�UnimplementedFileMode�IncompleteRead�
InvalidURL�ImproperConnectionState�CannotSendRequest�CannotSendHeader�ResponseNotReady�
BadStatusLine�error�	responses�Pi��UNKNOWN�IdlezRequest-startedzRequest-sent�d�e�f�������������������,�-�.�/�0�1�3������������������i�i�i�i�����������i�i���ContinuezSwitching Protocols�OK�Created�AcceptedzNon-Authoritative Informationz
No Contentz
Reset ContentzPartial ContentzMultiple ChoiceszMoved Permanently�Foundz	See OtherzNot Modifiedz	Use Proxyz(Unused)zTemporary RedirectzBad Request�UnauthorizedzPayment Required�	Forbiddenz	Not FoundzMethod Not AllowedzNot AcceptablezProxy Authentication RequiredzRequest Timeout�Conflict�GonezLength RequiredzPrecondition FailedzRequest Entity Too LargezRequest-URI Too LongzUnsupported Media TypezRequested Range Not SatisfiablezExpectation FailedzPrecondition RequiredzToo Many RequestszRequest Header Fields Too LargezInternal Server ErrorzNot ImplementedzBad GatewayzService UnavailablezGateway TimeoutzHTTP Version Not SupportedzNetwork Authentication Required)-r$r%r'r(r)r*r+r,r-r0r1r2r3r4r5i2r6r7r8r9r:r;r<r=r>r?r@rArBrCrDrErFrGrHrIrJrKrLrMrNrOrPrQrRiic@seZdZdd�ZdS)�HTTPMessagecCsj|��d}t|�}g}d}|��D]@}|d|���|krBd}n|dd���sVd}|r$|�|�q$|S)a�Find all header lines matching a given header name.

        Look through the list of headers and find all lines matching a given
        header name (and their continuation lines).  A list of the lines is
        returned, without interpretation.  If the header does not occur, an
        empty list is returned.  If the header occurs multiple times, all
        occurrences are returned.  Case is not important in the header name.

        �:rN�)�lower�len�keys�isspace�append)�self�name�n�lst�hit�line�rj�F/usr/local/lib/python3.9/site-packages/future/backports/http/client.py�getallmatchingheaders�s
z!HTTPMessage.getallmatchingheadersN)�__name__�
__module__�__qualname__rlrjrjrjrkr\�sr\cCszg}|�td�}t|�tkr&td��|�|�t|�tkrHtdt��|dvrqTqtd��|��	d�}t
j|d��|�S)aGParses only RFC2822 headers from a file pointer.

    email Parser wants to see strings rather than bytes.
    But a TextIOWrapper around self.rfile would buffer too many bytes
    from the stream, bytes which we later need to read as bytes.
    So we read the correct bytes here, as bytes, for email Parser
    to parse.

    r^�header linezgot more than %d headers��
�
�rt�
iso-8859-1)�_class)
�readline�_MAXLINEr`�LineTooLongrc�_MAXHEADERSrr�join�decode�email_parser�Parser�parsestr)�fprv�headersri�hstringrjrjrk�
parse_headerss

r�cs�eZdZdeddfdd�Zdd�Zdd�Zd	d
�Zdd�Z�fd
d�Z	�fdd�Z
dd�Zdd�Zd3�fdd�	Z
dd�Zdd�Zdd�Zdd�Zdd �Zd!d"�Zd#d$�Zd%d&�Zd4d'd(�Zd)d*�Zd+d,�Zd-d.�Zd/d0�Zd1d2�Z�ZS)5rrNcCsh|�d�|_||_|tur(t�dtd�||_d|_|_	t
|_t
|_t
|_
t
|_t
|_t
|_t
|_dS)N�rb�ithe 'strict' argument isn't supported anymore; http.client now always assumes HTTP/1.x compliant servers.�)�makefiler��
debuglevel�_strict_sentinel�warnings�warn�DeprecationWarning�_methodr��msg�_UNKNOWN�version�status�reason�chunked�
chunk_left�length�
will_close)rd�sockr��strict�method�urlrjrjrk�__init__*s�zHTTPResponse.__init__cCst|j�td�d�}t|�tkr*td��|jdkrBtdt|��|sNt	|��z|�
dd�\}}}WnBty�z|�
dd�\}}d}Wnty�d}Yn0Yn0|�d�s�|�
�t	|��z$t|�}|d	ks�|d
kr�t	|��Wnt�yt	|��Yn0|||fS)Nr^ruzstatus linerzreply:r��zHTTP/r$i�)rr�rwrxr`ryr��print�reprr�split�
ValueError�
startswith�_close_connr)rdrir�r�r�rjrjrk�_read_statusLs2

zHTTPResponse._read_statuscCs�|jdurdS|��\}}}|tkr&qp|j�td�}t|�tkrJtd��|��}|sXq|j	dkr&t
d|�q&q||_|_|��|_
|dvr�d|_n|�d�r�d|_nt|��t|j�|_|_|j	dkr�|jD]}t
d|d	d
�q�|j�d�}|�r|��dk�rd
|_d|_nd|_|��|_d|_|j�d�}|j�d�}|�r�|j�s�zt|�|_Wnt�yxd|_Yn0|jdk�r�d|_nd|_|tk�s�|tk�s�d|k�r�dk�s�n|jdk�r�d|_|j�s�|j�s�|jdu�r�d
|_dS)Nr^rprzheader:)zHTTP/1.0zHTTP/0.9�
zHTTP/1.�� )�endztransfer-encodingr�TF�content-lengthr$r'�HEAD)r�r��CONTINUEr�rwrxr`ry�stripr�r��coder�r�r�r�rr�r��getr_r�r��_check_closer�r�rr��
NO_CONTENT�NOT_MODIFIEDr�)rdr�r�r��skip�hdr�tr_encr�rjrjrk�beginlsn






�
�
���zHTTPResponse.begincCs�|j�d�}|jdkr:|j�d�}|r6d|��vr6dSdS|j�d�rJdS|r^d|��vr^dS|j�d�}|r~d|��vr~dSdS)N�
connectionr��closeTFz
keep-alivezproxy-connection)r�r�r�r_)rd�conn�pconnrjrjrkr��s
zHTTPResponse._check_closecCs|j}d|_|��dS�N)r�r�)rdr�rjrjrkr��szHTTPResponse._close_conncst���|jr|��dSr�)r	r�r�r��rd��	__class__rjrkr��s
zHTTPResponse.closecst���|jr|j��dSr�)r	�flushr�r�r�rjrkr��s
zHTTPResponse.flushcCsdS)NTrjr�rjrjrk�readable�szHTTPResponse.readablecCs
|jduS)z!True if the connection is closed.N)r�r�rjrjrk�isclosed�szHTTPResponse.isclosedcs�|jdurtd�S|jdkr,|��td�S|durHttt|��|��S|jrV|��S|j	durl|j��}n4z|�
|j	�}Wnty�|���Yn0d|_	|��t|�SdS)Nrtr�r)r�rr�r�r	r�readr��_readall_chunkedr��
_safe_readr)rd�amt�sr�rjrkr��s&


zHTTPResponse.readcCs�|jdurdS|jdkr$|��dS|jr4|�|�S|jdur^t|�|jkr^t|�d|j�}tr�|j�	t|��}t|�}||d|�<n|j�
|�}|s�|r�|��n&|jdur�|j|8_|js�|��|S)Nrr�)r�r�r�r��_readinto_chunkedr�r`�
memoryviewr
r��readinto)rd�b�datarfrjrjrkr�s,





zHTTPResponse.readintocCsp|j�td�}t|�tkr$td��|�d�}|dkrB|d|�}zt|d�WStyj|���Yn0dS)Nr^z
chunk size�;r�)	r�rwrxr`ry�findrr�r�)rdri�irjrjrk�_read_next_chunk_sizeAs
z"HTTPResponse._read_next_chunk_sizecCs:|j�td�}t|�tkr$td��|s*q6|dvrq6qdS)Nr^ztrailer linerq)r�rwrxr`ry�rdrirjrjrk�_read_and_discard_trailerQsz&HTTPResponse._read_and_discard_trailercCs�|jtksJ�|j}g}|dur^z|��}|dkr6Wq~Wn$ty\ttd��|���Yn0|�|�	|��|�	d�d}q|�
�|��td��|�S)Nrrtr�)r�r�r�r�r�rrr{rcr�r�r�)rdr��valuerjrjrkr�_s 
zHTTPResponse._readall_chunkedcCs|jtksJ�|j}d}t|�}|durhz|��}|dkr>Wq�Wn&tyftt|d|����Yn0t|�|kr�|�	|�}|||_||St|�|kr�|�	|�}|�
d�d|_||S|d|�}|�	|�}||d�}||7}|�
d�d}q |��|��|S)Nrr�)
r�r�r�r�r�r�rrr`�_safe_readintor�r�r�)rdr�r��total_bytes�mvbrf�temp_mvbrjrjrkr�xs:





zHTTPResponse._readinto_chunkedcCs\g}|dkrN|j�t|t��}|s6ttd��|�|��|�|�|t|�8}qtd��|�S)aVRead the number of bytes requested, compensating for partial reads.

        Normally, we have a blocking socket, but a read() can be interrupted
        by a signal (resulting in a partial read).

        Note that we cannot distinguish between EOF and an interrupt when zero
        bytes have been read. IncompleteRead() will be raised in this
        situation.

        This function should be used when <amt> bytes "should" be present for
        reading. If the bytes are truly not available (due to EOF), then the
        IncompleteRead exception can be used to detect the problem.
        rrt)	r�r��min�	MAXAMOUNTrrr{rcr`)rdr�r��chunkrjrjrkr��s
zHTTPResponse._safe_readcCs�d}t|�}|t|�kr�tt|�krh|dt�}trZ|j�t|��}t|�}||d|�<q�|j�|�}n6tr�|j�t|��}t|�}||d|�<n|j�|�}|s�tt|d|��t|���||d�}||7}q|S)z2Same as _safe_read, but for reading into a buffer.rN)	r�r`r�r
r�r�r�rr)rdr�r�r�r�r�rfrjrjrkr��s(
zHTTPResponse._safe_readintocCs
|j��Sr�)r��filenor�rjrjrkr��szHTTPResponse.filenocCsF|jdurt��|j�|�p|}t|t�s4t|d�s8|Sd�|�SdS)N�__iter__z, )r�r�get_all�
isinstancer�hasattrr{)rdre�defaultr�rjrjrk�	getheader�s
zHTTPResponse.getheadercCs|jdurt��t|j���S)z&Return list of (header, value) tuples.N)r�r�list�itemsr�rjrjrk�
getheaders�s
zHTTPResponse.getheaderscCs|Sr�rjr�rjrjrkr��szHTTPResponse.__iter__cCs|jSr�)r�r�rjrjrk�info�szHTTPResponse.infocCs|jSr�)r�r�rjrjrk�geturl�szHTTPResponse.geturlcCs|jSr�)r�r�rjrjrk�getcode�szHTTPResponse.getcode)N)N)rmrnror�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r��
__classcell__rjrjr�rkr!s0	" P

%)
	rc@s�eZdZdZdZeZeZdZ	dZ
deej
dfdd�Zd(dd	�Zd
d�Zdd
�Zdd�Zdd�Zdd�Zdd�Zdd�Zd)dd�Zd*dd�Zdd�Zd+dd�Zdifd d!�Zd"d#�Zd$d%�Zd&d'�ZdS),rr�zHTTP/1.1r^rNcCsb|turt�dtd�||_||_d|_g|_d|_t	|_
d|_d|_d|_
i|_|�||�dS)Nr�r�)r�r�r�r��timeout�source_addressr��_buffer�_HTTPConnection__response�_CS_IDLE�_HTTPConnection__stater��_tunnel_host�_tunnel_port�_tunnel_headers�
_set_hostport)rd�host�portr�r�r�rjrjrkr��s�zHTTPConnection.__init__cCs&||_||_|r||_n
|j��dS)z� Sets up the host and the port for the HTTP CONNECT Tunnelling.

        The headers argument should be a mapping of extra HTTP headers
        to send with the CONNECT request.
        N)r�r�r��clear)rdr�r�r�rjrjrk�
set_tunnels
zHTTPConnection.set_tunnelcCs�|dur�|�d�}|�d�}||kr�zt||dd��}WnFty�||dd�dkrf|j}ntd||dd���Yn0|d|�}n|j}|r�|ddkr�|ddkr�|dd�}||_||_dS)	Nr]�]r^r�znonnumeric port: '%s'r�[���)�rfindrr��default_portrr�r�)rdr�r�r��jrjrjrkr�s 

zHTTPConnection._set_hostportcCs
||_dSr�)r�)rd�levelrjrjrk�set_debuglevel2szHTTPConnection.set_debuglevelcCs�|�|j|j�d|j|jf}|�d�}|�|�|j��D](\}}d||f}|�d�}|�|�q>|�t	d��|j
|j|jd�}|�
�\}}	}
|	dkr�|��t�d|	|
��f��|j�td	�}t|�tkr�td
��|s�q�|dvr�q�q�dS)NzCONNECT %s:%d HTTP/1.0
�asciiz%s: %s
�latin-1rr�r�r'zTunnel connection failed: %d %sr^rprq)r�r�r�r�r��encode�sendr�r�r�response_classr�r�r�r��socketrr�r�rwrxr`ry)rdZconnect_strZ
connect_bytes�headerr�Z
header_str�header_bytes�responser�r�rrirjrjrk�_tunnel5s.


�zHTTPConnection._tunnelcCs,t|j|jf|j|j�|_|jr(|��dS)z3Connect to the host and port specified in __init__.N)�socket_create_connectionr�r�r�r�r�r�rr�rjrjrk�connectQs
�zHTTPConnection.connectcCs6|jr|j��d|_|jr,|j��d|_t|_dS)z(Close the connection to the HTTP server.N)r�r�r�r�r�r�rjrjrkr�Xs

zHTTPConnection.closecCs2|jdur |jr|��nt��|jdkr8tdt|��d}t|d�r�t|t	�s�|jdkrbtd�d}z
|j
}Wnty�Yn 0d|vr�d	}|jdkr�td
�|�|�}|s�q�|r�|�
d�}|j�|�q�dSz|j�|�WnJt�y,t|t��r|D]}|j�|��qntdt|���Yn0dS)
z�Send `data' to the server.
        ``data`` can be a string object, a bytes object, an array object, a
        file-like object that supports a .read() method, or an iterable object.
        Nrzsend:i r�zsendIng a read()ableFr�Tzencoding file using iso-8859-1ruz9data should be a bytes-like object or an iterable, got %r)r��	auto_openr	rr�r�r�r�r�r�mode�AttributeErrorr�r�sendall�	TypeErrorr�type)rdr��	blocksizerr�	datablock�drjrjrkrbsF







�zHTTPConnection.sendcCs|j�|�dS)zuAdd a line of output to the current request buffer.

        Assumes that the line does *not* end with \r\n.
        N)r�rc)rdr�rjrjrk�_output�szHTTPConnection._outputcCsj|j�td�td�f�td��|j�}|jdd�=t|t�rJ||7}d}|�|�|durf|�|�dS)z�Send the currently buffered request and clear the buffer.

        Appends an extra \r\n to the buffer.
        A message_body may be specified, to be appended to the request.
        rtrrN)r��extendrr{r�r)rd�message_bodyr�rjrjrk�_send_output�s

zHTTPConnection._send_outputc
Csv|jr|j��rd|_|jtkr(t|_n
t|j��||_|s@d}d|||jf}|�|�	d��|j
dk�rr|�s^d}|�d�r�t|�\}}}}}|r�z|�	d�}Wnt
y�|�	d�}Yn0|�d	|�n�z|j�	d�}	Wnt
y�|j�	d�}	Yn0|j�d
�dk�r"td|	d
�}	|j|jk�r>|�d	|	�n |	�d�}	|�d	d|	|jf�|�sr|�dd�ndS)a`Send a request to the server.

        `method' specifies an HTTP request method, e.g. 'GET'.
        `url' specifies the object being requested, e.g. '/index.html'.
        `skip_host' if True does not add automatically a 'Host:' header
        `skip_accept_encoding' if True does not add automatically an
           'Accept-Encoding:' header
        N�/z%s %s %sr�r�r��http�idna�Hostr]r�[�]z%s:%szAccept-Encoding�identity)r�r�r�r��_CS_REQ_STARTEDrr��
_http_vsn_strrr�	_http_vsnr�r�UnicodeEncodeError�	putheaderr�r�rr�r�r|)
rdr�r��	skip_host�skip_accept_encoding�request�netloc�nil�
netloc_enc�host_encrjrjrk�
putrequest�sD




zHTTPConnection.putrequestcGs�|jtkrt��t|d�r$|�d�}t|�}t|�D]>\}}t|d�rV|�d�||<q4t|t�r4t	|��d�||<q4t
d��|�}|t
d�|}|�|�dS)zkSend a request header line to the server.

        For example: h.putheader('Accept', 'text/html')
        rr�r�s
	s: N)
r�rrr�rr��	enumerater�rrrr{r)rdr�valuesr��	one_valuer�rjrjrkr"$s




zHTTPConnection.putheadercCs&|jtkrt|_nt��|�|�dS)a�Indicate that the last header line has been sent to the server.

        This method sends the request to the server.  The optional message_body
        argument can be used to pass a message body associated with the
        request.  The message body will be sent in the same packet as the
        message headers if it is a string, otherwise it is sent as a separate
        packet.
        N)r�r�_CS_REQ_SENTrr)rdrrjrjrk�
endheaders8s	
zHTTPConnection.endheaderscCs|�||||�dS)z&Send a complete request to the server.N)�
_send_request)rdr�r��bodyr�rjrjrkr%GszHTTPConnection.requestcCs�d}ztt|��}Wnftyz}zNztt�|���j�}Wn(ttfyd|j	dkr`t
d�Yn0WYd}~n
d}~00|dur�|�d|�dS)Nrz
Cannot stat!!zContent-Length)rr`r�os�fstatr��st_sizer�OSErrorr�r�r")rdr1Zthelen�terjrjrk�_set_content_lengthKs.z"HTTPConnection._set_content_lengthc	Cs�t�dd�|D��}i}d|vr(d|d<d|vr8d|d<|j||fi|��|durfd|vrf|�|�|��D]\}}|�||�qnt|t�r�|�d	�}|�	|�dS)
NcSsg|]}|���qSrj)r_)�.0�krjrjrk�
<listcomp>^rtz0HTTPConnection._send_request.<locals>.<listcomp>r�r^r#zaccept-encodingr$r�ru)
�dict�fromkeysr*r7r�r"r�rrr/)	rdr�r�r1r��header_names�skipsr�r�rjrjrkr0\s


zHTTPConnection._send_requestcCs�|jr|j��rd|_|jtks&|jr0t|j��|jdkrR|j|j|j|jd�}n|j|j|jd�}|�	�|j
tkszJ�t|_|j
r�|�
�n||_|S)a/Get the response from the server.

        If the HTTPConnection is in the correct state, returns an
        instance of HTTPResponse or of whatever object is returned by
        class the response_class variable.

        If a request has not been sent or if a previous response has
        not be handled, ResponseNotReady is raised.  If the HTTP
        response indicates that the connection should be closed, then
        it will be closed before the response is returned.  When the
        connection is closed, the underlying socket is closed.
        Nrr�)r�r�r�r.rr�rr�r�r�r�r�r�r�)rdrrjrjrk�getresponseqs 

�
zHTTPConnection.getresponse)NN)N)rr)N)rmrnror rrr�	HTTP_PORTr�r
r�r�r�_GLOBAL_DEFAULT_TIMEOUTr�r�r�r�rr	r�rrrr*r"r/r%r7r0r?rjrjrjrkr�s2�



2

t
r)�
SSLContextcs<eZdZdZeZdddeejdf�fdd�	Z	dd�Z
�ZS)�HTTPSConnectionz(This class allows communication via SSL.Ncs�d|vr|d}	|d=nd}	d|vr4|d}
|d=nd}
tt|��|||||�||_||_|
dur�t�tj�}
|
jtj	O_|
j
tjk}|	dur�|}	n|	r�|s�td��|s�|r�|
�
||�|
|_|	|_dS)N�check_hostname�contextzMcheck_hostname needs a SSL context with either CERT_OPTIONAL or CERT_REQUIRED)r	rCr��key_file�	cert_file�sslrB�PROTOCOL_SSLv23�options�OP_NO_SSLv2�verify_mode�	CERT_NONEr��load_cert_chain�_context�_check_hostname)rdr�r�rFrGr�r�r�Z_3to2kwargsrDrE�will_verifyr�rjrkr��s*�zHTTPSConnection.__init__cCs�t|j|jf|j|j�}|jr,||_|��tj	r8|jnd}|j
j||d�|_z|jrjt�
|j��|j�Wn,ty�|j�tj�|j���Yn0dS)z(Connect to a host on a given (SSL) port.N)�server_hostname)rr�r�r�r�r�r�rrH�HAS_SNIrO�wrap_socketrP�match_hostname�getpeercert�	Exception�shutdownr�	SHUT_RDWRr�)rdr�rRrjrjrkr	�s"��
zHTTPSConnection.connect)rmrnro�__doc__�
HTTPS_PORTr�r�rrAr�r	r�rjrjr�rkrC�s�rCc@seZdZdS)rN�rmrnrorjrjrjrkrsrc@seZdZdS)rNr\rjrjrjrkr
src@seZdZdS)rNr\rjrjrjrkr
src@seZdZdd�ZdS)rcCs|f|_||_dSr�)�argsr�)rdr�rjrjrkr�szUnknownProtocol.__init__N�rmrnror�rjrjrjrkrsrc@seZdZdS)rNr\rjrjrjrkrsrc@seZdZdS)rNr\rjrjrjrkrsrc@s&eZdZddd�Zdd�Zdd�ZdS)	rNcCs|f|_||_||_dSr�)r]�partial�expected)rdr_r`rjrjrkr�szIncompleteRead.__init__cCs,|jdurd|j}nd}dt|j�|fS)Nz, %i more expectedr�zIncompleteRead(%i bytes read%s))r`r`r_)rd�erjrjrk�__repr__ s
zIncompleteRead.__repr__cCst|�Sr�)r�r�rjrjrk�__str__&szIncompleteRead.__str__)N)rmrnror�rbrcrjrjrjrkrs
rc@seZdZdS)rNr\rjrjrjrkr)src@seZdZdS)rNr\rjrjrjrkr,src@seZdZdS)rNr\rjrjrjrkr/src@seZdZdS)rNr\rjrjrjrkr2src@seZdZdd�ZdS)rcCs|st|�}|f|_||_dSr�)r�r]rir�rjrjrkr�6szBadStatusLine.__init__Nr^rjrjrjrkr5src@seZdZdd�ZdS)rycCst�|dt|f�dS)Nz&got more than %d bytes when reading %s)rr�rx)rd�	line_typerjrjrkr�=s�zLineTooLong.__init__Nr^rjrjrjrkry<sry){rZ�
__future__rrrrZfuture.builtinsrrrr	Zfuture.utilsr
Zfuture.backports.emailrr}rZ
email_messageZfuture.backports.miscr
r�ior2rZfuture.backports.urllib.parserr�r�collectionsr�collections.abc�__all__r@r[r�r�rr.r��SWITCHING_PROTOCOLS�
PROCESSINGrT�CREATED�ACCEPTED�NON_AUTHORITATIVE_INFORMATIONr��
RESET_CONTENT�PARTIAL_CONTENT�MULTI_STATUS�IM_USED�MULTIPLE_CHOICES�MOVED_PERMANENTLY�FOUND�	SEE_OTHERr��	USE_PROXY�TEMPORARY_REDIRECT�BAD_REQUEST�UNAUTHORIZED�PAYMENT_REQUIRED�	FORBIDDEN�	NOT_FOUND�METHOD_NOT_ALLOWED�NOT_ACCEPTABLE�PROXY_AUTHENTICATION_REQUIRED�REQUEST_TIMEOUT�CONFLICT�GONE�LENGTH_REQUIRED�PRECONDITION_FAILED�REQUEST_ENTITY_TOO_LARGE�REQUEST_URI_TOO_LONG�UNSUPPORTED_MEDIA_TYPE�REQUESTED_RANGE_NOT_SATISFIABLE�EXPECTATION_FAILED�UNPROCESSABLE_ENTITY�LOCKED�FAILED_DEPENDENCY�UPGRADE_REQUIRED�PRECONDITION_REQUIRED�TOO_MANY_REQUESTS�REQUEST_HEADER_FIELDS_TOO_LARGE�INTERNAL_SERVER_ERROR�NOT_IMPLEMENTED�BAD_GATEWAY�SERVICE_UNAVAILABLE�GATEWAY_TIMEOUT�HTTP_VERSION_NOT_SUPPORTED�INSUFFICIENT_STORAGE�NOT_EXTENDED�NETWORK_AUTHENTICATION_REQUIREDr r�rxrz�Messager\r��objectr��	RawIOBaserrrHrB�ImportErrorrCrcrWrrrrrrrrrrrrryrrjrjrjrk�<module>s8F�5V76
!PK�Cu\C��.�?�?8future/backports/http/__pycache__/cookies.cpython-39.pycnu�[���a

��?hMT��@s�dZddlmZddlmZddlmZddlmZddlmZmZm	Z	m
Z
ddlmZm
Z
ddlZernde_ddlZgd	�Zd
jZdjZdjZGd
d�de�ZejejdZddddddddddddddddd d!d"d#d$d%d&d'd(d)d*d+d,d-d.d/d0d1d2d3d4d5d6d7d8d9d:d;d<d=d>d?d@dAdBdCdDdEdFdGdHdIdJdKdLdMdNdOdPdQdRdSdTdUdVdWdXdYdZd[d\d]d^d_d`dadbdcdddedfdgdhdidjdkdldmdndodpdqdrdsdtdudvdwdxdydzd{d|d}d~dd�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d�d���Zefd�d��Ze�d��Ze�d��Zd�d��Z gd��Z!gd��Z"de!e"fd�d��Z#Gd�d��d�e�Z$d�Z%e�d�e%d�e%d�ej�Z&Gd�dDŽd�e�Z'Gd�dɄd�e'�Z(dS)�af

http.cookies module ported to python-future from Py3.3

Here's a sample session to show how to use this module.
At the moment, this is the only documentation.

The Basics
----------

Importing is easy...

   >>> from http import cookies

Most of the time you start by creating a cookie.

   >>> C = cookies.SimpleCookie()

Once you've created your Cookie, you can add values just as if it were
a dictionary.

   >>> C = cookies.SimpleCookie()
   >>> C["fig"] = "newton"
   >>> C["sugar"] = "wafer"
   >>> C.output()
   'Set-Cookie: fig=newton\r\nSet-Cookie: sugar=wafer'

Notice that the printable representation of a Cookie is the
appropriate format for a Set-Cookie: header.  This is the
default behavior.  You can change the header and printed
attributes by using the .output() function

   >>> C = cookies.SimpleCookie()
   >>> C["rocky"] = "road"
   >>> C["rocky"]["path"] = "/cookie"
   >>> print(C.output(header="Cookie:"))
   Cookie: rocky=road; Path=/cookie
   >>> print(C.output(attrs=[], header="Cookie:"))
   Cookie: rocky=road

The load() method of a Cookie extracts cookies from a string.  In a
CGI script, you would use this method to extract the cookies from the
HTTP_COOKIE environment variable.

   >>> C = cookies.SimpleCookie()
   >>> C.load("chips=ahoy; vienna=finger")
   >>> C.output()
   'Set-Cookie: chips=ahoy\r\nSet-Cookie: vienna=finger'

The load() method is darn-tootin smart about identifying cookies
within a string.  Escaped quotation marks, nested semicolons, and other
such trickeries do not confuse it.

   >>> C = cookies.SimpleCookie()
   >>> C.load('keebler="E=everybody; L=\\"Loves\\"; fudge=\\012;";')
   >>> print(C)
   Set-Cookie: keebler="E=everybody; L=\"Loves\"; fudge=\012;"

Each element of the Cookie also supports all of the RFC 2109
Cookie attributes.  Here's an example which sets the Path
attribute.

   >>> C = cookies.SimpleCookie()
   >>> C["oreo"] = "doublestuff"
   >>> C["oreo"]["path"] = "/"
   >>> print(C)
   Set-Cookie: oreo=doublestuff; Path=/

Each dictionary element has a 'value' attribute, which gives you
back the value associated with the key.

   >>> C = cookies.SimpleCookie()
   >>> C["twix"] = "none for you"
   >>> C["twix"].value
   'none for you'

The SimpleCookie expects that all values should be standard strings.
Just to be sure, SimpleCookie invokes the str() builtin to convert
the value to a string, when the values are set dictionary-style.

   >>> C = cookies.SimpleCookie()
   >>> C["number"] = 7
   >>> C["string"] = "seven"
   >>> C["number"].value
   '7'
   >>> C["string"].value
   'seven'
   >>> C.output()
   'Set-Cookie: number=7\r\nSet-Cookie: string=seven'

Finis.
�)�unicode_literals)�print_function)�division)�absolute_import)�chr�dict�int�str)�PY2�
as_native_strN)�CookieError�
BaseCookie�SimpleCookie�z; � c@seZdZdS)rN)�__name__�
__module__�__qualname__�rr�G/usr/local/lib/python3.9/site-packages/future/backports/http/cookies.pyr�srz!#$%&'*+-.^_`|~:z\000z\001z\002z\003z\004z\005z\006z\007z\010z\011z\012z\013z\014z\015z\016z\017z\020z\021z\022z\023z\024z\025z\026z\027z\030z\031z\032z\033z\034z\035z\036z\037z\054z\073�\"z\\z\177z\200z\201z\202z\203z\204z\205z\206z\207z\210z\211z\212z\213z\214z\215z\216z\217z\220z\221z\222z\223z\224z\225z\226z\227z\230z\231z\232z\233z\234z\235z\236z\237z\240z\241z\242z\243z\244z\245z\246z\247z\250z\251z\252z\253z\254z\255z\256z\257z\260z\261z\262z\263z\264z\265z\266z\267z\270z\271z\272z\273z\274z\275z\276z\277z\300z\301z\302z\303z\304z\305z\306z\307z\310z\311z\312z\313z\314z\315z\316z\317z\320z\321z\322z\323z\324z\325z\326z\327z\330z\331z\332z\333z\334z\335z\336z\337z\340z\341z\342z\343z\344z\345z\346z\347z\350z\351z\352z\353z\354z\355z\356z\357z\360z\361z\362z\363z\364z\365z\366z\367z\370z\371z\372z\373z\374z\375z\376z\377)�����������	�
���
�������������������,�;�"�\��€��‚�ƒ�„�…�†�‡�ˆ�‰�Š�‹�Œ��Ž���‘�’�“�”�•�–�—�˜�™�š�›�œ��ž�Ÿ� �¡�¢�£�¤�¥�¦�§�¨�©�ª�«�¬�­�®�¯�°�±�²�³�´�µ�¶�·�¸�¹�º�»�¼�½�¾�¿�À�Á�Â�Ã�Ä�Å�Æ�Ç�È�É�Ê�Ë�Ì�Í�Î�Ï�Ð�Ñ�Ò�Ó�Ô�Õ�Ö�×�Ø�Ù�Ú�Û�Ü�Ý�Þ�ß�à�á�â�ã�ä�å�æ�ç�è�é�ê�ë�ì�í�î�ï�ð�ñ�ò�ó�ô�õ�ö�÷�ø�ù�ú�û�ü�ý�þ�ÿcs8t�fdd�|D��r|Sdtdd�|D��dSdS)z�Quote a string for use in a cookie header.

    If the string does not need to be double-quoted, then just return the
    string.  Otherwise, surround the string in doublequotes and quote
    (with a \) special characters.
    c3s|]}|�vVqdS�Nr��.0�c��
LegalCharsrr�	<genexpr>��z_quote.<locals>.<genexpr>r9css|]}t�||�VqdSr�)�_Translator�get)r��srrrr��r�N)�all�	_nulljoin)r	r�rr�r�_quote�sr�z\\[0-3][0-7][0-7]z[\\].cCsBt|�dkr|S|ddks(|ddkr,|S|dd�}d}t|�}g}d|kr^|k�r:nn�t�||�}t�||�}|s�|s�|�||d���q:d}}|r�|�d�}|r�|�d�}|r�|r�||kr�|�|||��|�||d�|d}qH|�|||��|�tt||d|d�d���|d}qHt|�S)N�rr9������)	�len�
_OctalPatt�search�
_QuotePatt�append�startrrr�)�mystr�i�n�resZo_matchZq_match�j�krrr�_unquote�s6


$
r�)�Mon�Tue�Wed�Thu�Fri�Sat�Sun)
N�Jan�Feb�Mar�Apr�May�Jun�Jul�Aug�Sep�Oct�Nov�Decc	CsRddlm}m}|�}|||�\	}}}}	}
}}}
}d|||||||	|
|fS)Nr)�gmtime�timez#%s, %02d %3s %4d %02d:%02d:%02d GMT)r�r�)�future�weekdayname�	monthnamer�r��now�year�month�day�hh�mm�ss�wd�y�zrrr�_getdate3s�r�c	@s�eZdZdZdddddddd	d
�Zeddg�Zdd�Zd
d�Zdd�Z	e
fdd�Zddd�ZeZe
�dd��Zddd�Zddd�ZdS) �Morsela�A class to hold ONE (key, value) pair.

    In a cookie, each such pair may have several attributes, so this class is
    used to keep the attributes associated with the appropriate key,value pair.
    This class also includes a coded_value attribute, which is used to hold
    the network representation of the value.  This is most useful when Python
    objects are pickled for network transit.
    �expires�Path�Comment�DomainzMax-Age�secure�httponly�Version)r�path�comment�domain�max-agerr�versioncCs0d|_|_|_|jD]}t�||d�qdS)Nr)�key�value�coded_value�	_reservedr�__setitem__)�selfrrrr�__init__^s
zMorsel.__init__cCs0|��}||jvrtd|��t�|||�dS)NzInvalid Attribute %s)�lowerrrrr)r�K�Vrrrrfs
zMorsel.__setitem__cCs|��|jvSr�)rr)rrrrr�
isReservedKeylszMorsel.isReservedKeycsR|��|jvrtd|��t�fdd�|D��r<td|��||_||_||_dS)Nz!Attempt to set a reserved key: %sc3s|]}|�vVqdSr�rr�r�rrr�tr�zMorsel.set.<locals>.<genexpr>zIllegal key value: %s)rrr�anyrr
r)rr�val�	coded_valr�rr�r�setosz
Morsel.setN�Set-Cookie:cCsd||�|�fS)Nz%s %s)�OutputString)r�attrs�headerrrr�output|sz
Morsel.outputcCs>trt|jt�rt|j�}n|j}d|jjt|j�t|�fS)Nz<%s: %s=%s>)	r
�
isinstancer
�unicoder	�	__class__rr�repr�rrrrr�__repr__�s�zMorsel.__repr__cCsd|�|��dd�S)Nz�
        <script type="text/javascript">
        <!-- begin hiding
        document.cookie = "%s";
        // end hiding -->
        </script>
        r9r)r�replace)rrrrr�	js_output�s�zMorsel.js_outputcCsg}|j}|d|j|jf�|dur,|j}t|���}|D]�\}}|dkrNq<||vrXq<|dkr�t|t�r�|d|j|t|�f�q<|dkr�t|t�r�|d|j||f�q<|dkr�|t	|j|��q<|dkr�|t	|j|��q<|d|j||f�q<t
|�S)N�%s=%srrr
z%s=%drr)r�rrr�sorted�itemsr rr�r	�_semispacejoin)rr�resultr�r*rr
rrrr�s*zMorsel.OutputString)Nr)N)N)rrr�__doc__rr�_flagsrrr�_LegalCharsr�__str__rr%r'rrrrrr�;s*�



r�z.[\w\d!#%&'~_`><@,:/\$\*\+\-\.\^\|\)\(\?\}\{\=]z~
    (?x)                           # This is a verbose pattern
    (?P<key>                       # Start of group 'key'
    a+?   # Any word of at least one letter
    )                              # End of group 'key'
    (                              # Optional group: there may not be a value.
    \s*=\s*                          # Equal Sign
    (?P<val>                         # Start of group 'val'
    "(?:[^\\"]|\\.)*"                  # Any doublequoted string
    |                                  # or
    \w{3},\s[\w\d\s-]{9,11}\s[\d:]{8}\sGMT  # Special case for "expires" attr
    |                                  # or
    a,*      # Any word or empty string
    )                                # End of group 'val'
    )?                             # End of optional value group
    \s*                            # Any number of spaces.
    (\s+|;|$)                      # Ending either at space, semicolon, or EOS.
    c@steZdZdZdd�Zdd�Zddd�Zd	d
�Zdd�Zddd�Z	e	Z
e�dd��Zddd�Z
dd�Zefdd�ZdS)r
z'A container class for a set of Morsels.cCs||fS)a
real_value, coded_value = value_decode(STRING)
        Called prior to setting a cookie's value from the network
        representation.  The VALUE is the value read from HTTP
        header.
        Override this function to modify the behavior of cookies.
        rr$rrr�value_decode�szBaseCookie.value_decodecCst|�}||fS)z�real_value, coded_value = value_encode(VALUE)
####
# Copyright 2000 by Timothy O'Malley <timo@alum.mit.edu>
#
#                All Rights Reserved
#
# Permission to use, copy, modify, and distribute this software
# and its documentation for any purpose and without fee is hereby
# granted, provided that the above copyright notice appear in all
# copies and that both that copyright notice and this permission
# notice appear in supporting documentation, and that the name of
# Timothy O'Malley  not be used in advertising or publicity
# pertaining to distribution of the software without specific, written
# prior permission.
#
# Timothy O'Malley DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS
# SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
# AND FITNESS, IN NO EVENT SHALL Timothy O'Malley BE LIABLE FOR
# ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
# WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
# ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
# PERFORMANCE OF THIS SOFTWARE.
#
####
#
# Id: Cookie.py,v 2.29 2000/08/23 05:28:49 timo Exp
#   by Timothy O'Malley <timo@alum.mit.edu>
#
#  Cookie.py is a Python module for the handling of HTTP
#  cookies as a Python dictionary.  See RFC 2109 for more
#  information on cookies.
#
#  The original idea to treat Cookies as a dictionary came from
#  Dave Mitchell (davem@magnet.com) in 1995, when he released the
#  first version of nscookie.py.
#
####

r"""
http.cookies module ported to python-future from Py3.3

Here's a sample session to show how to use this module.
At the moment, this is the only documentation.

The Basics
----------

Importing is easy...

   >>> from http import cookies

Most of the time you start by creating a cookie.

   >>> C = cookies.SimpleCookie()

Once you've created your Cookie, you can add values just as if it were
a dictionary.

   >>> C = cookies.SimpleCookie()
   >>> C["fig"] = "newton"
   >>> C["sugar"] = "wafer"
   >>> C.output()
   'Set-Cookie: fig=newton\r\nSet-Cookie: sugar=wafer'

Notice that the printable representation of a Cookie is the
appropriate format for a Set-Cookie: header.  This is the
default behavior.  You can change the header and printed
attributes by using the .output() function

   >>> C = cookies.SimpleCookie()
   >>> C["rocky"] = "road"
   >>> C["rocky"]["path"] = "/cookie"
   >>> print(C.output(header="Cookie:"))
   Cookie: rocky=road; Path=/cookie
   >>> print(C.output(attrs=[], header="Cookie:"))
   Cookie: rocky=road

The load() method of a Cookie extracts cookies from a string.  In a
CGI script, you would use this method to extract the cookies from the
HTTP_COOKIE environment variable.

   >>> C = cookies.SimpleCookie()
   >>> C.load("chips=ahoy; vienna=finger")
   >>> C.output()
   'Set-Cookie: chips=ahoy\r\nSet-Cookie: vienna=finger'

The load() method is darn-tootin smart about identifying cookies
within a string.  Escaped quotation marks, nested semicolons, and other
such trickeries do not confuse it.

   >>> C = cookies.SimpleCookie()
   >>> C.load('keebler="E=everybody; L=\\"Loves\\"; fudge=\\012;";')
   >>> print(C)
   Set-Cookie: keebler="E=everybody; L=\"Loves\"; fudge=\012;"

Each element of the Cookie also supports all of the RFC 2109
Cookie attributes.  Here's an example which sets the Path
attribute.

   >>> C = cookies.SimpleCookie()
   >>> C["oreo"] = "doublestuff"
   >>> C["oreo"]["path"] = "/"
   >>> print(C)
   Set-Cookie: oreo=doublestuff; Path=/

Each dictionary element has a 'value' attribute, which gives you
back the value associated with the key.

   >>> C = cookies.SimpleCookie()
   >>> C["twix"] = "none for you"
   >>> C["twix"].value
   'none for you'

The SimpleCookie expects that all values should be standard strings.
Just to be sure, SimpleCookie invokes the str() builtin to convert
the value to a string, when the values are set dictionary-style.

   >>> C = cookies.SimpleCookie()
   >>> C["number"] = 7
   >>> C["string"] = "seven"
   >>> C["number"].value
   '7'
   >>> C["string"].value
   'seven'
   >>> C.output()
   'Set-Cookie: number=7\r\nSet-Cookie: string=seven'

Finis.
"""
from __future__ import unicode_literals
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from future.builtins import chr, dict, int, str
from future.utils import PY2, as_native_str

#
# Import our required modules
#
import re
if PY2:
    re.ASCII = 0    # for py2 compatibility
import string

__all__ = ["CookieError", "BaseCookie", "SimpleCookie"]

_nulljoin = ''.join
_semispacejoin = '; '.join
_spacejoin = ' '.join

#
# Define an exception visible to External modules
#
class CookieError(Exception):
    pass


# These quoting routines conform to the RFC2109 specification, which in
# turn references the character definitions from RFC2068.  They provide
# a two-way quoting algorithm.  Any non-text character is translated
# into a 4-character sequence: a backslash followed by the
# three-digit octal equivalent of the character.  Any '\' or '"' is
# quoted with a preceding '\' slash.
#
# These are taken from RFC2068 and RFC2109.
#       _LegalChars       is the list of chars which don't require "'s
#       _Translator       hash-table for fast quoting
#
_LegalChars       = string.ascii_letters + string.digits + "!#$%&'*+-.^_`|~:"
_Translator       = {
    '\000' : '\\000',  '\001' : '\\001',  '\002' : '\\002',
    '\003' : '\\003',  '\004' : '\\004',  '\005' : '\\005',
    '\006' : '\\006',  '\007' : '\\007',  '\010' : '\\010',
    '\011' : '\\011',  '\012' : '\\012',  '\013' : '\\013',
    '\014' : '\\014',  '\015' : '\\015',  '\016' : '\\016',
    '\017' : '\\017',  '\020' : '\\020',  '\021' : '\\021',
    '\022' : '\\022',  '\023' : '\\023',  '\024' : '\\024',
    '\025' : '\\025',  '\026' : '\\026',  '\027' : '\\027',
    '\030' : '\\030',  '\031' : '\\031',  '\032' : '\\032',
    '\033' : '\\033',  '\034' : '\\034',  '\035' : '\\035',
    '\036' : '\\036',  '\037' : '\\037',

    # Because of the way browsers really handle cookies (as opposed
    # to what the RFC says) we also encode , and ;

    ',' : '\\054', ';' : '\\073',

    '"' : '\\"',       '\\' : '\\\\',

    '\177' : '\\177',  '\200' : '\\200',  '\201' : '\\201',
    '\202' : '\\202',  '\203' : '\\203',  '\204' : '\\204',
    '\205' : '\\205',  '\206' : '\\206',  '\207' : '\\207',
    '\210' : '\\210',  '\211' : '\\211',  '\212' : '\\212',
    '\213' : '\\213',  '\214' : '\\214',  '\215' : '\\215',
    '\216' : '\\216',  '\217' : '\\217',  '\220' : '\\220',
    '\221' : '\\221',  '\222' : '\\222',  '\223' : '\\223',
    '\224' : '\\224',  '\225' : '\\225',  '\226' : '\\226',
    '\227' : '\\227',  '\230' : '\\230',  '\231' : '\\231',
    '\232' : '\\232',  '\233' : '\\233',  '\234' : '\\234',
    '\235' : '\\235',  '\236' : '\\236',  '\237' : '\\237',
    '\240' : '\\240',  '\241' : '\\241',  '\242' : '\\242',
    '\243' : '\\243',  '\244' : '\\244',  '\245' : '\\245',
    '\246' : '\\246',  '\247' : '\\247',  '\250' : '\\250',
    '\251' : '\\251',  '\252' : '\\252',  '\253' : '\\253',
    '\254' : '\\254',  '\255' : '\\255',  '\256' : '\\256',
    '\257' : '\\257',  '\260' : '\\260',  '\261' : '\\261',
    '\262' : '\\262',  '\263' : '\\263',  '\264' : '\\264',
    '\265' : '\\265',  '\266' : '\\266',  '\267' : '\\267',
    '\270' : '\\270',  '\271' : '\\271',  '\272' : '\\272',
    '\273' : '\\273',  '\274' : '\\274',  '\275' : '\\275',
    '\276' : '\\276',  '\277' : '\\277',  '\300' : '\\300',
    '\301' : '\\301',  '\302' : '\\302',  '\303' : '\\303',
    '\304' : '\\304',  '\305' : '\\305',  '\306' : '\\306',
    '\307' : '\\307',  '\310' : '\\310',  '\311' : '\\311',
    '\312' : '\\312',  '\313' : '\\313',  '\314' : '\\314',
    '\315' : '\\315',  '\316' : '\\316',  '\317' : '\\317',
    '\320' : '\\320',  '\321' : '\\321',  '\322' : '\\322',
    '\323' : '\\323',  '\324' : '\\324',  '\325' : '\\325',
    '\326' : '\\326',  '\327' : '\\327',  '\330' : '\\330',
    '\331' : '\\331',  '\332' : '\\332',  '\333' : '\\333',
    '\334' : '\\334',  '\335' : '\\335',  '\336' : '\\336',
    '\337' : '\\337',  '\340' : '\\340',  '\341' : '\\341',
    '\342' : '\\342',  '\343' : '\\343',  '\344' : '\\344',
    '\345' : '\\345',  '\346' : '\\346',  '\347' : '\\347',
    '\350' : '\\350',  '\351' : '\\351',  '\352' : '\\352',
    '\353' : '\\353',  '\354' : '\\354',  '\355' : '\\355',
    '\356' : '\\356',  '\357' : '\\357',  '\360' : '\\360',
    '\361' : '\\361',  '\362' : '\\362',  '\363' : '\\363',
    '\364' : '\\364',  '\365' : '\\365',  '\366' : '\\366',
    '\367' : '\\367',  '\370' : '\\370',  '\371' : '\\371',
    '\372' : '\\372',  '\373' : '\\373',  '\374' : '\\374',
    '\375' : '\\375',  '\376' : '\\376',  '\377' : '\\377'
    }

def _quote(str, LegalChars=_LegalChars):
    r"""Quote a string for use in a cookie header.

    If the string does not need to be double-quoted, then just return the
    string.  Otherwise, surround the string in doublequotes and quote
    (with a \) special characters.
    """
    if all(c in LegalChars for c in str):
        return str
    else:
        return '"' + _nulljoin(_Translator.get(s, s) for s in str) + '"'


_OctalPatt = re.compile(r"\\[0-3][0-7][0-7]")
_QuotePatt = re.compile(r"[\\].")

def _unquote(mystr):
    # If there aren't any doublequotes,
    # then there can't be any special characters.  See RFC 2109.
    if len(mystr) < 2:
        return mystr
    if mystr[0] != '"' or mystr[-1] != '"':
        return mystr

    # We have to assume that we must decode this string.
    # Down to work.

    # Remove the "s
    mystr = mystr[1:-1]

    # Check for special sequences.  Examples:
    #    \012 --> \n
    #    \"   --> "
    #
    i = 0
    n = len(mystr)
    res = []
    while 0 <= i < n:
        o_match = _OctalPatt.search(mystr, i)
        q_match = _QuotePatt.search(mystr, i)
        if not o_match and not q_match:              # Neither matched
            res.append(mystr[i:])
            break
        # else:
        j = k = -1
        if o_match:
            j = o_match.start(0)
        if q_match:
            k = q_match.start(0)
        if q_match and (not o_match or k < j):     # QuotePatt matched
            res.append(mystr[i:k])
            res.append(mystr[k+1])
            i = k + 2
        else:                                      # OctalPatt matched
            res.append(mystr[i:j])
            res.append(chr(int(mystr[j+1:j+4], 8)))
            i = j + 4
    return _nulljoin(res)
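The two routines above are designed to round-trip: characters outside
_LegalChars become backslash-octal escapes inside double quotes, and
_unquote reverses them.  A quick sketch using the standard library's
http.cookies (from which this backport is derived; the behavior shown
matches both):

```python
from http.cookies import SimpleCookie

# Encoding: ';' is not in _LegalChars, so value_encode (which calls
# _quote) wraps the value in double quotes and emits the \073 escape.
c = SimpleCookie()
c["chip"] = "a;b"
print(c.output())  # Set-Cookie: chip="a\073b"

# Decoding: value_decode (which calls _unquote) restores the original.
d = SimpleCookie()
d.load(c.output(header=""))
print(d["chip"].value)  # a;b
```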

# The _getdate() routine is used to set the expiration time in the cookie's HTTP
# header.  By default, _getdate() returns the current time in the appropriate
# "expires" format for a Set-Cookie header.  The one optional argument is an
# offset from now, in seconds.  For example, an offset of -3600 means "one hour
# ago".  The offset may be a floating point number.
#

_weekdayname = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']

_monthname = [None,
              'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
              'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']

def _getdate(future=0, weekdayname=_weekdayname, monthname=_monthname):
    from time import gmtime, time
    now = time()
    year, month, day, hh, mm, ss, wd, y, z = gmtime(now + future)
    return "%s, %02d %3s %4d %02d:%02d:%02d GMT" % \
           (weekdayname[wd], day, monthname[month], year, hh, mm, ss)
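In practice _getdate is reached by assigning an integer to a morsel's
"expires" attribute; OutputString() then renders the offset through this
routine.  A small sketch via the standard library's http.cookies (the
"sid"/"token" names are illustrative; the rendered date depends on the
current time):

```python
from http.cookies import SimpleCookie

c = SimpleCookie()
c["sid"] = "token"
c["sid"]["expires"] = 3600   # integer offset in seconds -> _getdate(3600)
print(c.output())
# e.g. Set-Cookie: sid=token; expires=Tue, 10 Jun 2025 13:34:56 GMT
```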


class Morsel(dict):
    """A class to hold ONE (key, value) pair.

    In a cookie, each such pair may have several attributes, so this class is
    used to keep the attributes associated with the appropriate key,value pair.
    This class also includes a coded_value attribute, which is used to hold
    the network representation of the value.  This is most useful when Python
    objects are pickled for network transit.
    """
    # RFC 2109 lists these attributes as reserved:
    #   path       comment         domain
    #   max-age    secure      version
    #
    # For historical reasons, these attributes are also reserved:
    #   expires
    #
    # This is an extension from Microsoft:
    #   httponly
    #
    # This dictionary provides a mapping from the lowercase
    # variant on the left to the appropriate traditional
    # formatting on the right.
    _reserved = {
        "expires"  : "expires",
        "path"     : "Path",
        "comment"  : "Comment",
        "domain"   : "Domain",
        "max-age"  : "Max-Age",
        "secure"   : "secure",
        "httponly" : "httponly",
        "version"  : "Version",
    }

    _flags = set(['secure', 'httponly'])

    def __init__(self):
        # Set defaults
        self.key = self.value = self.coded_value = None

        # Set default attributes
        for key in self._reserved:
            dict.__setitem__(self, key, "")

    def __setitem__(self, K, V):
        K = K.lower()
        if not K in self._reserved:
            raise CookieError("Invalid Attribute %s" % K)
        dict.__setitem__(self, K, V)

    def isReservedKey(self, K):
        return K.lower() in self._reserved

    def set(self, key, val, coded_val, LegalChars=_LegalChars):
        # First we verify that the key isn't a reserved word
        # Second we make sure it only contains legal characters
        if key.lower() in self._reserved:
            raise CookieError("Attempt to set a reserved key: %s" % key)
        if any(c not in LegalChars for c in key):
            raise CookieError("Illegal key value: %s" % key)

        # It's a good key, so save it.
        self.key = key
        self.value = val
        self.coded_value = coded_val

    def output(self, attrs=None, header="Set-Cookie:"):
        return "%s %s" % (header, self.OutputString(attrs))

    __str__ = output

    @as_native_str()
    def __repr__(self):
        if PY2 and isinstance(self.value, unicode):
            val = str(self.value)    # make it a newstr to remove the u prefix
        else:
            val = self.value
        return '<%s: %s=%s>' % (self.__class__.__name__,
                                str(self.key), repr(val))

    def js_output(self, attrs=None):
        # Print javascript
        return """
        <script type="text/javascript">
        <!-- begin hiding
        document.cookie = \"%s\";
        // end hiding -->
        </script>
        """ % (self.OutputString(attrs).replace('"', r'\"'))

    def OutputString(self, attrs=None):
        # Build up our result
        #
        result = []
        append = result.append

        # First, the key=value pair
        append("%s=%s" % (self.key, self.coded_value))

        # Now add any defined attributes
        if attrs is None:
            attrs = self._reserved
        items = sorted(self.items())
        for key, value in items:
            if value == "":
                continue
            if key not in attrs:
                continue
            if key == "expires" and isinstance(value, int):
                append("%s=%s" % (self._reserved[key], _getdate(value)))
            elif key == "max-age" and isinstance(value, int):
                append("%s=%d" % (self._reserved[key], value))
            elif key == "secure":
                append(str(self._reserved[key]))
            elif key == "httponly":
                append(str(self._reserved[key]))
            else:
                append("%s=%s" % (self._reserved[key], value))

        # Return the result
        return _semispacejoin(result)
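As a usage sketch of OutputString() (via the standard library's
http.cookies, which renders attributes the same way): attributes with an
empty value are skipped, reserved names get their traditional
capitalization from _reserved, and the pieces are '; '-joined.  The
"session" key here is illustrative:

```python
from http.cookies import SimpleCookie

c = SimpleCookie()
c["session"] = "abc123"
c["session"]["path"] = "/"        # rendered as Path=/
c["session"]["max-age"] = 3600    # int, rendered with %d as Max-Age=3600
print(c["session"].OutputString())
# session=abc123; Max-Age=3600; Path=/
```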


#
# Pattern for finding cookie
#
# This used to be strict parsing based on the RFC2109 and RFC2068
# specifications.  I have since discovered that MSIE 3.0x doesn't
# follow the character rules outlined in those specs.  As a
# result, the parsing rules here are less strict.
#

_LegalCharsPatt  = r"[\w\d!#%&'~_`><@,:/\$\*\+\-\.\^\|\)\(\?\}\{\=]"
_CookiePattern = re.compile(r"""
    (?x)                           # This is a verbose pattern
    (?P<key>                       # Start of group 'key'
    """ + _LegalCharsPatt + r"""+?   # Any word of at least one letter
    )                              # End of group 'key'
    (                              # Optional group: there may not be a value.
    \s*=\s*                          # Equal Sign
    (?P<val>                         # Start of group 'val'
    "(?:[^\\"]|\\.)*"                  # Any doublequoted string
    |                                  # or
    \w{3},\s[\w\d\s-]{9,11}\s[\d:]{8}\sGMT  # Special case for "expires" attr
    |                                  # or
    """ + _LegalCharsPatt + r"""*      # Any word or empty string
    )                                # End of group 'val'
    )?                             # End of optional value group
    \s*                            # Any number of spaces.
    (\s+|;|$)                      # Ending either at space, semicolon, or EOS.
    """, re.ASCII)                 # May be removed if safe.


# At long last, here is the cookie class.  Using this class is almost just like
# using a dictionary.  See this module's docstring for example usage.
#
class BaseCookie(dict):
    """A container class for a set of Morsels."""

    def value_decode(self, val):
        """real_value, coded_value = value_decode(STRING)
        Called prior to setting a cookie's value from the network
        representation.  The VALUE is the value read from HTTP
        header.
        Override this function to modify the behavior of cookies.
        """
        return val, val

    def value_encode(self, val):
        """real_value, coded_value = value_encode(VALUE)
        Called prior to setting a cookie's value from the dictionary
        representation.  The VALUE is the value being assigned.
        Override this function to modify the behavior of cookies.
        """
        strval = str(val)
        return strval, strval

    def __init__(self, input=None):
        if input:
            self.load(input)

    def __set(self, key, real_value, coded_value):
        """Private method for setting a cookie's value"""
        M = self.get(key, Morsel())
        M.set(key, real_value, coded_value)
        dict.__setitem__(self, key, M)

    def __setitem__(self, key, value):
        """Dictionary style assignment."""
        rval, cval = self.value_encode(value)
        self.__set(key, rval, cval)

    def output(self, attrs=None, header="Set-Cookie:", sep="\015\012"):
        """Return a string suitable for HTTP."""
        result = []
        items = sorted(self.items())
        for key, value in items:
            result.append(value.output(attrs, header))
        return sep.join(result)

    __str__ = output

    @as_native_str()
    def __repr__(self):
        l = []
        items = sorted(self.items())
        for key, value in items:
            if PY2 and isinstance(value.value, unicode):
                val = str(value.value)    # make it a newstr to remove the u prefix
            else:
                val = value.value
            l.append('%s=%s' % (str(key), repr(val)))
        return '<%s: %s>' % (self.__class__.__name__, _spacejoin(l))

    def js_output(self, attrs=None):
        """Return a string suitable for JavaScript."""
        result = []
        items = sorted(self.items())
        for key, value in items:
            result.append(value.js_output(attrs))
        return _nulljoin(result)

    def load(self, rawdata):
        """Load cookies from a string (presumably HTTP_COOKIE) or
        from a dictionary.  Loading cookies from a dictionary 'd'
        is equivalent to calling:
            map(Cookie.__setitem__, d.keys(), d.values())
        """
        if isinstance(rawdata, str):
            self.__parse_string(rawdata)
        else:
            # self.update() wouldn't call our custom __setitem__
            for key, value in rawdata.items():
                self[key] = value
        return

    def __parse_string(self, mystr, patt=_CookiePattern):
        i = 0            # Our starting point
        n = len(mystr)     # Length of string
        M = None         # current morsel

        while 0 <= i < n:
            # Start looking for a cookie
            match = patt.search(mystr, i)
            if not match:
                # No more cookies
                break

            key, value = match.group("key"), match.group("val")

            i = match.end(0)

            # Parse the key, value in case it's metainfo
            if key[0] == "$":
                # We ignore attributes which pertain to the cookie
                # mechanism as a whole.  See RFC 2109.
                # (Does anyone care?)
                if M:
                    M[key[1:]] = value
            elif key.lower() in Morsel._reserved:
                if M:
                    if value is None:
                        if key.lower() in Morsel._flags:
                            M[key] = True
                    else:
                        M[key] = _unquote(value)
            elif value is not None:
                rval, cval = self.value_decode(value)
                self.__set(key, rval, cval)
                M = self[key]


class SimpleCookie(BaseCookie):
    """
    SimpleCookie supports strings as cookie values.  When setting
    the value using the dictionary assignment notation, SimpleCookie
    calls the builtin str() to convert the value to a string.  Values
    received from HTTP are kept as strings.
    """
    def value_decode(self, val):
        return _unquote(val), val

    def value_encode(self, val):
        strval = str(val)
        return strval, _quote(strval)
# future/backports/http/server.py
"""HTTP server classes.

From Python 3.3

Note: BaseHTTPRequestHandler doesn't implement any HTTP request; see
SimpleHTTPRequestHandler for simple implementations of GET, HEAD and POST,
and CGIHTTPRequestHandler for CGI scripts.

It does, however, optionally implement HTTP/1.1 persistent connections,
as of version 0.3.

Notes on CGIHTTPRequestHandler
------------------------------

This class implements GET and POST requests to cgi-bin scripts.

If the os.fork() function is not present (e.g. on Windows),
subprocess.Popen() is used as a fallback, with slightly altered semantics.

In all cases, the implementation is intentionally naive -- all
requests are executed synchronously.

SECURITY WARNING: DON'T USE THIS CODE UNLESS YOU ARE INSIDE A FIREWALL
-- it may execute arbitrary Python code or external programs.

Note that status code 200 is sent prior to execution of a CGI script, so
scripts cannot send other status codes such as 302 (redirect).

XXX To do:

- log requests even later (to capture byte count)
- log user-agent header and other interesting goodies
- send error log to separate file
"""

from __future__ import (absolute_import, division,
                        print_function, unicode_literals)
from future import utils
from future.builtins import *


# See also:
#
# HTTP Working Group                                        T. Berners-Lee
# INTERNET-DRAFT                                            R. T. Fielding
# <draft-ietf-http-v10-spec-00.txt>                     H. Frystyk Nielsen
# Expires September 8, 1995                                  March 8, 1995
#
# URL: http://www.ics.uci.edu/pub/ietf/http/draft-ietf-http-v10-spec-00.txt
#
# and
#
# Network Working Group                                      R. Fielding
# Request for Comments: 2616                                       et al
# Obsoletes: 2068                                              June 1999
# Category: Standards Track
#
# URL: http://www.faqs.org/rfcs/rfc2616.html

# Log files
# ---------
#
# Here's a quote from the NCSA httpd docs about log file format.
#
# | The logfile format is as follows. Each line consists of:
# |
# | host rfc931 authuser [DD/Mon/YYYY:hh:mm:ss] "request" ddd bbbb
# |
# |        host: Either the DNS name or the IP number of the remote client
# |        rfc931: Any information returned by identd for this person,
# |                - otherwise.
# |        authuser: If user sent a userid for authentication, the user name,
# |                  - otherwise.
# |        DD: Day
# |        Mon: Month (calendar name)
# |        YYYY: Year
# |        hh: hour (24-hour format, the machine's timezone)
# |        mm: minutes
# |        ss: seconds
# |        request: The first line of the HTTP request as sent by the client.
# |        ddd: the status code returned by the server, - if not available.
# |        bbbb: the total number of bytes sent,
# |              *not including the HTTP/1.0 header*, - if not available
# |
# | You can determine the name of the file accessed through request.
#
# (Actually, the latter is only true if you know the server configuration
# at the time the request was made!)

__version__ = "0.6"

__all__ = ["HTTPServer", "BaseHTTPRequestHandler"]

from future.backports import html
from future.backports.http import client as http_client
from future.backports.urllib import parse as urllib_parse
from future.backports import socketserver

import io
import mimetypes
import os
import posixpath
import select
import shutil
import socket # For gethostbyaddr()
import sys
import time
import copy
import argparse


# Default error message template
DEFAULT_ERROR_MESSAGE = """\
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
        "http://www.w3.org/TR/html4/strict.dtd">
<html>
    <head>
        <meta http-equiv="Content-Type" content="text/html;charset=utf-8">
        <title>Error response</title>
    </head>
    <body>
        <h1>Error response</h1>
        <p>Error code: %(code)d</p>
        <p>Message: %(message)s.</p>
        <p>Error code explanation: %(code)s - %(explain)s.</p>
    </body>
</html>
"""

DEFAULT_ERROR_CONTENT_TYPE = "text/html;charset=utf-8"

def _quote_html(html):
    return html.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;")
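The helper above performs the same minimal escaping as the standard
library's html.escape with quote=False (only '&', '<' and '>', with '&'
replaced first so already-escaped text is not double-mangled):

```python
import html

def quote_html(s):
    # same three replacements as _quote_html above, '&' first
    return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;")

sample = '<script>alert("&")</script>'
print(quote_html(sample))
# &lt;script&gt;alert("&amp;")&lt;/script&gt;
```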

class HTTPServer(socketserver.TCPServer):

    allow_reuse_address = 1    # Seems to make sense in testing environment

    def server_bind(self):
        """Override server_bind to store the server name."""
        socketserver.TCPServer.server_bind(self)
        host, port = self.socket.getsockname()[:2]
        self.server_name = socket.getfqdn(host)
        self.server_port = port


class BaseHTTPRequestHandler(socketserver.StreamRequestHandler):

    """HTTP request handler base class.

    The following explanation of HTTP serves to guide you through the
    code as well as to expose any misunderstandings I may have about
    HTTP (so you don't need to read the code to figure out I'm wrong
    :-).

    HTTP (HyperText Transfer Protocol) is an extensible protocol on
    top of a reliable stream transport (e.g. TCP/IP).  The protocol
    recognizes three parts to a request:

    1. One line identifying the request type and path
    2. An optional set of RFC-822-style headers
    3. An optional data part

    The headers and data are separated by a blank line.

    The first line of the request has the form

    <command> <path> <version>

    where <command> is a (case-sensitive) keyword such as GET or POST,
    <path> is a string containing path information for the request,
    and <version> should be the string "HTTP/1.0" or "HTTP/1.1".
    <path> is encoded using the URL encoding scheme (using %xx to signify
    the ASCII character with hex code xx).

    The specification specifies that lines are separated by CRLF but
    for compatibility with the widest range of clients recommends
    servers also handle LF.  Similarly, whitespace in the request line
    is treated sensibly (allowing multiple spaces between components
    and allowing trailing whitespace).

    Similarly, for output, lines ought to be separated by CRLF pairs
    but most clients grok LF characters just fine.

    If the first line of the request has the form

    <command> <path>

    (i.e. <version> is left out) then this is assumed to be an HTTP
    0.9 request; this form has no optional headers and data part and
    the reply consists of just the data.

    The reply form of the HTTP 1.x protocol again has three parts:

    1. One line giving the response code
    2. An optional set of RFC-822-style headers
    3. The data

    Again, the headers and data are separated by a blank line.

    The response code line has the form

    <version> <responsecode> <responsestring>

    where <version> is the protocol version ("HTTP/1.0" or "HTTP/1.1"),
    <responsecode> is a 3-digit response code indicating success or
    failure of the request, and <responsestring> is an optional
    human-readable string explaining what the response code means.

    This server parses the request and the headers, and then calls a
    function specific to the request type (<command>).  Specifically,
    a request SPAM will be handled by a method do_SPAM().  If no
    such method exists the server sends an error response to the
    client.  If it exists, it is called with no arguments:

    do_SPAM()

    Note that the request name is case sensitive (i.e. SPAM and spam
    are different requests).

    The various request details are stored in instance variables:

    - client_address is the client IP address in the form (host,
    port);

    - command, path and version are the broken-down request line;

    - headers is an instance of email.message.Message (or a derived
    class) containing the header information;

    - rfile is a file object open for reading positioned at the
    start of the optional input data part;

    - wfile is a file object open for writing.

    IT IS IMPORTANT TO ADHERE TO THE PROTOCOL FOR WRITING!

    The first thing to be written must be the response line.  Then
    follow 0 or more header lines, then a blank line, and then the
    actual data (if any).  The meaning of the header lines depends on
    the command executed by the server; in most cases, when data is
    returned, there should be at least one header line of the form

    Content-type: <type>/<subtype>

    where <type> and <subtype> should be registered MIME types,
    e.g. "text/html" or "text/plain".

    """

    # The Python system version, truncated to its first component.
    sys_version = "Python/" + sys.version.split()[0]

    # The server software version.  You may want to override this.
    # The format is multiple whitespace-separated strings,
    # where each string is of the form name[/version].
    server_version = "BaseHTTP/" + __version__

    error_message_format = DEFAULT_ERROR_MESSAGE
    error_content_type = DEFAULT_ERROR_CONTENT_TYPE

    # The default request version.  This only affects responses up until
    # the point where the request line is parsed, so it mainly decides what
    # the client gets back when sending a malformed request line.
    # Most web servers default to HTTP 0.9, i.e. don't send a status line.
    default_request_version = "HTTP/0.9"

    def parse_request(self):
        """Parse a request (internal).

        The request should be stored in self.raw_requestline; the results
        are in self.command, self.path, self.request_version and
        self.headers.

        Return True for success, False for failure; on failure, an
        error is sent back.

        """
        self.command = None  # set in case of error on the first line
        self.request_version = version = self.default_request_version
        self.close_connection = 1
        requestline = str(self.raw_requestline, 'iso-8859-1')
        requestline = requestline.rstrip('\r\n')
        self.requestline = requestline
        words = requestline.split()
        if len(words) == 3:
            command, path, version = words
            if version[:5] != 'HTTP/':
                self.send_error(400, "Bad request version (%r)" % version)
                return False
            try:
                base_version_number = version.split('/', 1)[1]
                version_number = base_version_number.split(".")
                # RFC 2145 section 3.1 says there can be only one "." and
                #   - major and minor numbers MUST be treated as
                #      separate integers;
                #   - HTTP/2.4 is a lower version than HTTP/2.13, which in
                #      turn is lower than HTTP/12.3;
                #   - Leading zeros MUST be ignored by recipients.
                if len(version_number) != 2:
                    raise ValueError
                version_number = int(version_number[0]), int(version_number[1])
            except (ValueError, IndexError):
                self.send_error(400, "Bad request version (%r)" % version)
                return False
            if version_number >= (1, 1) and self.protocol_version >= "HTTP/1.1":
                self.close_connection = 0
            if version_number >= (2, 0):
                self.send_error(505,
                          "Invalid HTTP Version (%s)" % base_version_number)
                return False
        elif len(words) == 2:
            command, path = words
            self.close_connection = 1
            if command != 'GET':
                self.send_error(400,
                                "Bad HTTP/0.9 request type (%r)" % command)
                return False
        elif not words:
            return False
        else:
            self.send_error(400, "Bad request syntax (%r)" % requestline)
            return False
        self.command, self.path, self.request_version = command, path, version

        # Examine the headers and look for a Connection directive.
        try:
            self.headers = http_client.parse_headers(self.rfile,
                                                     _class=self.MessageClass)
        except http_client.LineTooLong:
            self.send_error(400, "Line too long")
            return False

        conntype = self.headers.get('Connection', "")
        if conntype.lower() == 'close':
            self.close_connection = 1
        elif (conntype.lower() == 'keep-alive' and
              self.protocol_version >= "HTTP/1.1"):
            self.close_connection = 0
        # Examine the headers and look for an Expect directive
        expect = self.headers.get('Expect', "")
        if (expect.lower() == "100-continue" and
                self.protocol_version >= "HTTP/1.1" and
                self.request_version >= "HTTP/1.1"):
            if not self.handle_expect_100():
                return False
        return True

    def handle_expect_100(self):
        """Decide what to do with an "Expect: 100-continue" header.

        If the client is expecting a 100 Continue response, we must
        respond with either a 100 Continue or a final response before
        waiting for the request body. The default is to always respond
        with a 100 Continue. You can behave differently (for example,
        reject unauthorized requests) by overriding this method.

        This method should either return True (possibly after sending
        a 100 Continue response) or send an error response and return
        False.

        """
        self.send_response_only(100)
        self.flush_headers()
        return True

    def handle_one_request(self):
        """Handle a single HTTP request.

        You normally don't need to override this method; see the class
        __doc__ string for information on how to handle specific HTTP
        commands such as GET and POST.

        """
        try:
            self.raw_requestline = self.rfile.readline(65537)
            if len(self.raw_requestline) > 65536:
                self.requestline = ''
                self.request_version = ''
                self.command = ''
                self.send_error(414)
                return
            if not self.raw_requestline:
                self.close_connection = 1
                return
            if not self.parse_request():
                # An error code has been sent, just exit
                return
            mname = 'do_' + self.command
            if not hasattr(self, mname):
                self.send_error(501, "Unsupported method (%r)" % self.command)
                return
            method = getattr(self, mname)
            method()
            self.wfile.flush() #actually send the response if not already done.
        except socket.timeout as e:
            #a read or a write timed out.  Discard this connection
            self.log_error("Request timed out: %r", e)
            self.close_connection = 1
            return

    def handle(self):
        """Handle multiple requests if necessary."""
        self.close_connection = 1

        self.handle_one_request()
        while not self.close_connection:
            self.handle_one_request()
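The dispatch performed in handle_one_request() (a GET request is routed
to a do_GET method, if one exists) can be exercised end to end with the
standard library's http.server, which this module backports.  A minimal
sketch; the Hello handler name is illustrative, and port 0 lets the OS
pick a free port:

```python
import threading
from http.server import HTTPServer, BaseHTTPRequestHandler
from urllib.request import urlopen

class Hello(BaseHTTPRequestHandler):
    # handle_one_request() looks up 'do_' + self.command, so a GET
    # request lands here.
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request stderr logging in this demo

server = HTTPServer(("127.0.0.1", 0), Hello)  # port 0: OS picks a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
with urlopen("http://127.0.0.1:%d/" % server.server_port) as resp:
    status, body = resp.status, resp.read()
print(status, body)  # 200 b'hello'
server.shutdown()
```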

    def send_error(self, code, message=None):
        """Send and log an error reply.

        Arguments are the error code, and a detailed message.
        The detailed message defaults to the short entry matching the
        response code.

        This sends an error response (so it must be called before any
        output has been generated), logs the error, and finally sends
        a piece of HTML explaining the error to the user.

        """

        try:
            shortmsg, longmsg = self.responses[code]
        except KeyError:
            shortmsg, longmsg = '???', '???'
        if message is None:
            message = shortmsg
        explain = longmsg
        self.log_error("code %d, message %s", code, message)
        # using _quote_html to prevent Cross Site Scripting attacks (see bug #1100201)
        content = (self.error_message_format %
                   {'code': code, 'message': _quote_html(message), 'explain': explain})
        self.send_response(code, message)
        self.send_header("Content-Type", self.error_content_type)
        self.send_header('Connection', 'close')
        self.end_headers()
        if self.command != 'HEAD' and code >= 200 and code not in (204, 304):
            self.wfile.write(content.encode('UTF-8', 'replace'))

    def send_response(self, code, message=None):
        """Add the response header to the headers buffer and log the
        response code.

        Also send two standard headers with the server software
        version and the current date.

        """
        self.log_request(code)
        self.send_response_only(code, message)
        self.send_header('Server', self.version_string())
        self.send_header('Date', self.date_time_string())

    def send_response_only(self, code, message=None):
        """Send the response header only."""
        if message is None:
            if code in self.responses:
                message = self.responses[code][0]
            else:
                message = ''
        if self.request_version != 'HTTP/0.9':
            if not hasattr(self, '_headers_buffer'):
                self._headers_buffer = []
            self._headers_buffer.append(("%s %d %s\r\n" %
                    (self.protocol_version, code, message)).encode(
                        'latin-1', 'strict'))

    def send_header(self, keyword, value):
        """Send a MIME header to the headers buffer."""
        if self.request_version != 'HTTP/0.9':
            if not hasattr(self, '_headers_buffer'):
                self._headers_buffer = []
            self._headers_buffer.append(
                ("%s: %s\r\n" % (keyword, value)).encode('latin-1', 'strict'))

        if keyword.lower() == 'connection':
            if value.lower() == 'close':
                self.close_connection = 1
            elif value.lower() == 'keep-alive':
                self.close_connection = 0

    def end_headers(self):
        """Send the blank line ending the MIME headers."""
        if self.request_version != 'HTTP/0.9':
            self._headers_buffer.append(b"\r\n")
            self.flush_headers()

    def flush_headers(self):
        if hasattr(self, '_headers_buffer'):
            self.wfile.write(b"".join(self._headers_buffer))
            self._headers_buffer = []
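The three methods above implement deferred header output: `send_response_only` and `send_header` append latin-1-encoded lines to `_headers_buffer`, and `flush_headers` writes the whole block in one chunk. A minimal standalone sketch of that buffering protocol (the `HeaderBuffer` class is illustrative, not part of this module):

```python
class HeaderBuffer(object):
    """Sketch of the _headers_buffer protocol used by the handler above."""

    def __init__(self, protocol_version='HTTP/1.1'):
        self.protocol_version = protocol_version
        self._buf = []

    def send_response_only(self, code, message):
        # Status line, encoded as latin-1 like the real handler.
        self._buf.append(('%s %d %s\r\n' % (
            self.protocol_version, code, message)).encode('latin-1'))

    def send_header(self, keyword, value):
        self._buf.append(('%s: %s\r\n' % (keyword, value)).encode('latin-1'))

    def end_headers(self):
        # A blank line terminates the header block; flush everything at once.
        self._buf.append(b'\r\n')
        out = b''.join(self._buf)
        self._buf = []
        return out

hb = HeaderBuffer()
hb.send_response_only(200, 'OK')
hb.send_header('Content-Type', 'text/html')
print(hb.end_headers())  # b'HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n'
```

Buffering instead of writing each header immediately means the status line and headers reach the socket in a single `write`, which matters for small TCP segments and for HTTP/0.9 clients (where no headers are sent at all).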

    def log_request(self, code='-', size='-'):
        """Log an accepted request.

        This is called by send_response().

        """

        self.log_message('"%s" %s %s',
                         self.requestline, str(code), str(size))

    def log_error(self, format, *args):
        """Log an error.

        This is called when a request cannot be fulfilled.  By
        default it passes the message on to log_message().

        Arguments are the same as for log_message().

        XXX This should go to the separate error log.

        """

        self.log_message(format, *args)

    def log_message(self, format, *args):
        """Log an arbitrary message.

        This is used by all other logging functions.  Override
        it if you have specific logging wishes.

        The first argument, FORMAT, is a format string for the
        message to be logged.  If the format string contains
        any % escapes requiring parameters, they should be
        specified as subsequent arguments (it's just like
        printf!).

        The client ip and current date/time are prefixed to
        every message.

        """

        sys.stderr.write("%s - - [%s] %s\n" %
                         (self.address_string(),
                          self.log_date_time_string(),
                          format%args))

    def version_string(self):
        """Return the server software version string."""
        return self.server_version + ' ' + self.sys_version

    def date_time_string(self, timestamp=None):
        """Return the current date and time formatted for a message header."""
        if timestamp is None:
            timestamp = time.time()
        year, month, day, hh, mm, ss, wd, y, z = time.gmtime(timestamp)
        s = "%s, %02d %3s %4d %02d:%02d:%02d GMT" % (
                self.weekdayname[wd],
                day, self.monthname[month], year,
                hh, mm, ss)
        return s

    def log_date_time_string(self):
        """Return the current time formatted for logging."""
        now = time.time()
        year, month, day, hh, mm, ss, x, y, z = time.localtime(now)
        s = "%02d/%3s/%04d %02d:%02d:%02d" % (
                day, self.monthname[month], year, hh, mm, ss)
        return s

    weekdayname = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']

    monthname = [None,
                 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
                 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
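`date_time_string` combines the two tables above into the fixed-format RFC 1123 date that HTTP headers require: English day and month names regardless of locale, always GMT. A self-contained sketch of the same formatting (`rfc1123_date` is an illustrative name):

```python
import time

WEEKDAYS = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']
MONTHS = [None, 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
          'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']

def rfc1123_date(timestamp):
    # Same format string as date_time_string above; time.gmtime keeps
    # the result independent of the server's local timezone.
    year, month, day, hh, mm, ss, wd, _, _ = time.gmtime(timestamp)
    return '%s, %02d %3s %4d %02d:%02d:%02d GMT' % (
        WEEKDAYS[wd], day, MONTHS[month], year, hh, mm, ss)

print(rfc1123_date(0))  # Thu, 01 Jan 1970 00:00:00 GMT
```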

    def address_string(self):
        """Return the client address."""

        return self.client_address[0]

    # Essentially static class variables

    # The version of the HTTP protocol we support.
    # Set this to HTTP/1.1 to enable automatic keepalive
    protocol_version = "HTTP/1.0"

    # MessageClass used to parse headers
    MessageClass = http_client.HTTPMessage

    # Table mapping response codes to messages; entries have the
    # form {code: (shortmessage, longmessage)}.
    # See RFC 2616 and 6585.
    responses = {
        100: ('Continue', 'Request received, please continue'),
        101: ('Switching Protocols',
              'Switching to new protocol; obey Upgrade header'),

        200: ('OK', 'Request fulfilled, document follows'),
        201: ('Created', 'Document created, URL follows'),
        202: ('Accepted',
              'Request accepted, processing continues off-line'),
        203: ('Non-Authoritative Information', 'Request fulfilled from cache'),
        204: ('No Content', 'Request fulfilled, nothing follows'),
        205: ('Reset Content', 'Clear input form for further input.'),
        206: ('Partial Content', 'Partial content follows.'),

        300: ('Multiple Choices',
              'Object has several resources -- see URI list'),
        301: ('Moved Permanently', 'Object moved permanently -- see URI list'),
        302: ('Found', 'Object moved temporarily -- see URI list'),
        303: ('See Other', 'Object moved -- see Method and URL list'),
        304: ('Not Modified',
              'Document has not changed since given time'),
        305: ('Use Proxy',
              'You must use proxy specified in Location to access this '
              'resource.'),
        307: ('Temporary Redirect',
              'Object moved temporarily -- see URI list'),

        400: ('Bad Request',
              'Bad request syntax or unsupported method'),
        401: ('Unauthorized',
              'No permission -- see authorization schemes'),
        402: ('Payment Required',
              'No payment -- see charging schemes'),
        403: ('Forbidden',
              'Request forbidden -- authorization will not help'),
        404: ('Not Found', 'Nothing matches the given URI'),
        405: ('Method Not Allowed',
              'Specified method is invalid for this resource.'),
        406: ('Not Acceptable', 'URI not available in preferred format.'),
        407: ('Proxy Authentication Required', 'You must authenticate with '
              'this proxy before proceeding.'),
        408: ('Request Timeout', 'Request timed out; try again later.'),
        409: ('Conflict', 'Request conflict.'),
        410: ('Gone',
              'URI no longer exists and has been permanently removed.'),
        411: ('Length Required', 'Client must specify Content-Length.'),
        412: ('Precondition Failed', 'Precondition in headers is false.'),
        413: ('Request Entity Too Large', 'Entity is too large.'),
        414: ('Request-URI Too Long', 'URI is too long.'),
        415: ('Unsupported Media Type', 'Entity body in unsupported format.'),
        416: ('Requested Range Not Satisfiable',
              'Cannot satisfy request range.'),
        417: ('Expectation Failed',
              'Expect condition could not be satisfied.'),
        428: ('Precondition Required',
              'The origin server requires the request to be conditional.'),
        429: ('Too Many Requests', 'The user has sent too many requests '
              'in a given amount of time ("rate limiting").'),
        431: ('Request Header Fields Too Large', 'The server is unwilling to '
              'process the request because its header fields are too large.'),

        500: ('Internal Server Error', 'Server got itself in trouble'),
        501: ('Not Implemented',
              'Server does not support this operation'),
        502: ('Bad Gateway', 'Invalid responses from another server/proxy.'),
        503: ('Service Unavailable',
              'The server cannot process the request due to a high load'),
        504: ('Gateway Timeout',
              'The gateway server did not receive a timely response'),
        505: ('HTTP Version Not Supported', 'Cannot fulfill request.'),
        511: ('Network Authentication Required',
              'The client needs to authenticate to gain network access.'),
        }
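`send_error` and `send_response_only` consult this table for their default texts; unknown codes fall back to `'???'`. A small sketch of that lookup, using a two-entry subset of the table (the `error_strings` helper is illustrative):

```python
# Subset of the responses table above, for demonstration only.
responses = {
    404: ('Not Found', 'Nothing matches the given URI'),
    500: ('Internal Server Error', 'Server got itself in trouble'),
}

def error_strings(code, message=None):
    # Same fallback logic send_error uses: unknown codes become '???',
    # and an explicit message overrides the short default.
    try:
        shortmsg, longmsg = responses[code]
    except KeyError:
        shortmsg, longmsg = '???', '???'
    if message is None:
        message = shortmsg
    return message, longmsg

print(error_strings(404))  # ('Not Found', 'Nothing matches the given URI')
```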


class SimpleHTTPRequestHandler(BaseHTTPRequestHandler):

    """Simple HTTP request handler with GET and HEAD commands.

    This serves files from the current directory and any of its
    subdirectories.  The MIME type for files is determined by
    calling the .guess_type() method.

    The GET and HEAD requests are identical except that the HEAD
    request omits the actual contents of the file.

    """

    server_version = "SimpleHTTP/" + __version__

    def do_GET(self):
        """Serve a GET request."""
        f = self.send_head()
        if f:
            self.copyfile(f, self.wfile)
            f.close()

    def do_HEAD(self):
        """Serve a HEAD request."""
        f = self.send_head()
        if f:
            f.close()

    def send_head(self):
        """Common code for GET and HEAD commands.

        This sends the response code and MIME headers.

        Return value is either a file object (which has to be copied
        to the outputfile by the caller unless the command was HEAD,
        and must be closed by the caller under all circumstances), or
        None, in which case the caller has nothing further to do.

        """
        path = self.translate_path(self.path)
        f = None
        if os.path.isdir(path):
            if not self.path.endswith('/'):
                # redirect browser - doing basically what apache does
                self.send_response(301)
                self.send_header("Location", self.path + "/")
                self.end_headers()
                return None
            for index in "index.html", "index.htm":
                index = os.path.join(path, index)
                if os.path.exists(index):
                    path = index
                    break
            else:
                return self.list_directory(path)
        ctype = self.guess_type(path)
        try:
            f = open(path, 'rb')
        except IOError:
            self.send_error(404, "File not found")
            return None
        self.send_response(200)
        self.send_header("Content-type", ctype)
        fs = os.fstat(f.fileno())
        self.send_header("Content-Length", str(fs[6]))
        self.send_header("Last-Modified", self.date_time_string(fs.st_mtime))
        self.end_headers()
        return f

    def list_directory(self, path):
        """Helper to produce a directory listing (absent index.html).

        Return value is either a file object, or None (indicating an
        error).  In either case, the headers are sent, making the
        interface the same as for send_head().

        """
        try:
            list = os.listdir(path)
        except os.error:
            self.send_error(404, "No permission to list directory")
            return None
        list.sort(key=lambda a: a.lower())
        r = []
        displaypath = html.escape(urllib_parse.unquote(self.path))
        enc = sys.getfilesystemencoding()
        title = 'Directory listing for %s' % displaypath
        r.append('<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" '
                 '"http://www.w3.org/TR/html4/strict.dtd">')
        r.append('<html>\n<head>')
        r.append('<meta http-equiv="Content-Type" '
                 'content="text/html; charset=%s">' % enc)
        r.append('<title>%s</title>\n</head>' % title)
        r.append('<body>\n<h1>%s</h1>' % title)
        r.append('<hr>\n<ul>')
        for name in list:
            fullname = os.path.join(path, name)
            displayname = linkname = name
            # Append / for directories or @ for symbolic links
            if os.path.isdir(fullname):
                displayname = name + "/"
                linkname = name + "/"
            if os.path.islink(fullname):
                displayname = name + "@"
                # Note: a link to a directory displays with @ and links with /
            r.append('<li><a href="%s">%s</a></li>'
                    % (urllib_parse.quote(linkname), html.escape(displayname)))
            # # Use this instead:
            # r.append('<li><a href="%s">%s</a></li>'
            #         % (urllib.quote(linkname), cgi.escape(displayname)))
        r.append('</ul>\n<hr>\n</body>\n</html>\n')
        encoded = '\n'.join(r).encode(enc)
        f = io.BytesIO()
        f.write(encoded)
        f.seek(0)
        self.send_response(200)
        self.send_header("Content-type", "text/html; charset=%s" % enc)
        self.send_header("Content-Length", str(len(encoded)))
        self.end_headers()
        return f

    def translate_path(self, path):
        """Translate a /-separated PATH to the local filename syntax.

        Components that mean special things to the local file system
        (e.g. drive or directory names) are ignored.  (XXX They should
        probably be diagnosed.)

        """
        # abandon query parameters
        path = path.split('?',1)[0]
        path = path.split('#',1)[0]
        path = posixpath.normpath(urllib_parse.unquote(path))
        words = path.split('/')
        words = filter(None, words)
        path = os.getcwd()
        for word in words:
            drive, word = os.path.splitdrive(word)
            head, word = os.path.split(word)
            if word in (os.curdir, os.pardir): continue
            path = os.path.join(path, word)
        return path
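The loop above discards drive letters, directory prefixes, and `.`/`..` components after `posixpath.normpath` has already collapsed the URL, so the joined result cannot climb out of the serving root. A standalone sketch of the same steps with an explicit `root` argument in place of `os.getcwd()` (the name and signature here are illustrative):

```python
import os
import posixpath
try:
    from urllib.parse import unquote   # Python 3
except ImportError:
    from urllib import unquote         # Python 2

def translate_path(path, root):
    # Mirrors translate_path above: drop query/fragment, unquote,
    # normalize, then re-join component by component under root.
    path = path.split('?', 1)[0]
    path = path.split('#', 1)[0]
    path = posixpath.normpath(unquote(path))
    out = root
    for word in filter(None, path.split('/')):
        _, word = os.path.splitdrive(word)
        _, word = os.path.split(word)
        if word in (os.curdir, os.pardir):
            continue
        out = os.path.join(out, word)
    return out

print(translate_path('/docs/../etc/passwd?x=1', '/srv/www'))
# '/srv/www/etc/passwd' on POSIX: '..' is resolved by normpath
# before joining, so the result stays under the root.
```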

    def copyfile(self, source, outputfile):
        """Copy all data between two file objects.

        The SOURCE argument is a file object open for reading
        (or anything with a read() method) and the DESTINATION
        argument is a file object open for writing (or
        anything with a write() method).

        The only reason for overriding this would be to change
        the block size or perhaps to replace newlines by CRLF
        -- note however that the default server uses this
        to copy binary data as well.

        """
        shutil.copyfileobj(source, outputfile)

    def guess_type(self, path):
        """Guess the type of a file.

        Argument is a PATH (a filename).

        Return value is a string of the form type/subtype,
        usable for a MIME Content-type header.

        The default implementation looks the file's extension
        up in the table self.extensions_map, using application/octet-stream
        as a default; however it would be permissible (if
        slow) to look inside the data to make a better guess.

        """

        base, ext = posixpath.splitext(path)
        if ext in self.extensions_map:
            return self.extensions_map[ext]
        ext = ext.lower()
        if ext in self.extensions_map:
            return self.extensions_map[ext]
        else:
            return self.extensions_map['']

    if not mimetypes.inited:
        mimetypes.init() # try to read system mime.types
    extensions_map = mimetypes.types_map.copy()
    extensions_map.update({
        '': 'application/octet-stream', # Default
        '.py': 'text/plain',
        '.c': 'text/plain',
        '.h': 'text/plain',
        })
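`guess_type` above performs at most three lookups in this table: the raw extension, the lowercased extension, then the `''` default. A standalone sketch of that order, built on `mimetypes.types_map` just like the class attribute:

```python
import mimetypes
import posixpath

if not mimetypes.inited:
    mimetypes.init()  # try to read system mime.types, as the class does
extensions_map = mimetypes.types_map.copy()
extensions_map.update({
    '': 'application/octet-stream',  # default for unknown extensions
    '.py': 'text/plain',
})

def guess_type(path):
    # Same lookup order as SimpleHTTPRequestHandler.guess_type above.
    base, ext = posixpath.splitext(path)
    if ext in extensions_map:
        return extensions_map[ext]
    ext = ext.lower()
    return extensions_map.get(ext, extensions_map[''])

print(guess_type('app.PY'))  # text/plain
```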


# Utilities for CGIHTTPRequestHandler

def _url_collapse_path(path):
    """
    Given a URL path, remove extra '/'s and '.' path elements, collapse
    any '..' references, and return the collapsed path.

    Implements something akin to RFC-2396 5.2 step 6 to parse relative paths.
    The utility of this function is limited to the is_cgi method and helps
    prevent some security attacks.

    Returns: The collapsed path, which will always start with a '/'.

    Raises: IndexError if too many '..' occur within the path.

    """
    # Similar to os.path.split(os.path.normpath(path)) but specific to URL
    # path semantics rather than local operating system semantics.
    path_parts = path.split('/')
    head_parts = []
    for part in path_parts[:-1]:
        if part == '..':
            head_parts.pop() # IndexError if more '..' than prior parts
        elif part and part != '.':
            head_parts.append(part)
    if path_parts:
        tail_part = path_parts.pop()
        if tail_part:
            if tail_part == '..':
                head_parts.pop()
                tail_part = ''
            elif tail_part == '.':
                tail_part = ''
    else:
        tail_part = ''

    splitpath = ('/' + '/'.join(head_parts), tail_part)
    collapsed_path = "/".join(splitpath)

    return collapsed_path
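As a quick check of the algorithm above: empty and `.` segments vanish, each `..` pops the previous segment (raising IndexError when the path tries to climb past the root), and the final segment is kept as the tail. A standalone copy for experimentation (`collapse_path` is an illustrative name):

```python
def collapse_path(path):
    # Mirror of _url_collapse_path above: URL semantics, not OS semantics.
    path_parts = path.split('/')
    head_parts = []
    for part in path_parts[:-1]:
        if part == '..':
            head_parts.pop()  # IndexError if more '..' than prior parts
        elif part and part != '.':
            head_parts.append(part)
    tail_part = path_parts[-1]
    if tail_part == '..':
        head_parts.pop()
        tail_part = ''
    elif tail_part == '.':
        tail_part = ''
    return '/'.join(('/' + '/'.join(head_parts), tail_part))

print(collapse_path('/cgi-bin/../cgi-bin/./script.py'))  # /cgi-bin/script.py
```

The IndexError on underflow is deliberate: `is_cgi` callers treat any exception as a rejected path, which is what blocks `/../../etc/passwd`-style escapes.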



nobody = None

def nobody_uid():
    """Internal routine to get nobody's uid"""
    global nobody
    if nobody:
        return nobody
    try:
        import pwd
    except ImportError:
        return -1
    try:
        nobody = pwd.getpwnam('nobody')[2]
    except KeyError:
        nobody = 1 + max(x[2] for x in pwd.getpwall())
    return nobody


def executable(path):
    """Test for executable file."""
    return os.access(path, os.X_OK)


class CGIHTTPRequestHandler(SimpleHTTPRequestHandler):

    """Complete HTTP server with GET, HEAD and POST commands.

    GET and HEAD also support running CGI scripts.

    The POST command is *only* implemented for CGI scripts.

    """

    # Determine platform specifics
    have_fork = hasattr(os, 'fork')

    # Make rfile unbuffered -- we need to read one line and then pass
    # the rest to a subprocess, so we can't use buffered input.
    rbufsize = 0

    def do_POST(self):
        """Serve a POST request.

        This is only implemented for CGI scripts.

        """

        if self.is_cgi():
            self.run_cgi()
        else:
            self.send_error(501, "Can only POST to CGI scripts")

    def send_head(self):
        """Version of send_head that support CGI scripts"""
        if self.is_cgi():
            return self.run_cgi()
        else:
            return SimpleHTTPRequestHandler.send_head(self)

    def is_cgi(self):
        """Test whether self.path corresponds to a CGI script.

        Returns True and updates the cgi_info attribute to the tuple
        (dir, rest) if self.path requires running a CGI script.
        Returns False otherwise.

        If any exception is raised, the caller should assume that
        self.path was rejected as invalid and act accordingly.

        The default implementation tests whether the normalized url
        path begins with one of the strings in self.cgi_directories
        (and the next character is a '/' or the end of the string).

        """
        collapsed_path = _url_collapse_path(self.path)
        dir_sep = collapsed_path.find('/', 1)
        head, tail = collapsed_path[:dir_sep], collapsed_path[dir_sep+1:]
        if head in self.cgi_directories:
            self.cgi_info = head, tail
            return True
        return False


    cgi_directories = ['/cgi-bin', '/htbin']

    def is_executable(self, path):
        """Test whether argument path is an executable file."""
        return executable(path)

    def is_python(self, path):
        """Test whether argument path is a Python script."""
        head, tail = os.path.splitext(path)
        return tail.lower() in (".py", ".pyw")

    def run_cgi(self):
        """Execute a CGI script."""
        path = self.path
        dir, rest = self.cgi_info

        i = path.find('/', len(dir) + 1)
        while i >= 0:
            nextdir = path[:i]
            nextrest = path[i+1:]

            scriptdir = self.translate_path(nextdir)
            if os.path.isdir(scriptdir):
                dir, rest = nextdir, nextrest
                i = path.find('/', len(dir) + 1)
            else:
                break

        # find an explicit query string, if present.
        i = rest.rfind('?')
        if i >= 0:
            rest, query = rest[:i], rest[i+1:]
        else:
            query = ''

        # dissect the part after the directory name into a script name &
        # a possible additional path, to be stored in PATH_INFO.
        i = rest.find('/')
        if i >= 0:
            script, rest = rest[:i], rest[i:]
        else:
            script, rest = rest, ''

        scriptname = dir + '/' + script
        scriptfile = self.translate_path(scriptname)
        if not os.path.exists(scriptfile):
            self.send_error(404, "No such CGI script (%r)" % scriptname)
            return
        if not os.path.isfile(scriptfile):
            self.send_error(403, "CGI script is not a plain file (%r)" %
                            scriptname)
            return
        ispy = self.is_python(scriptname)
        if self.have_fork or not ispy:
            if not self.is_executable(scriptfile):
                self.send_error(403, "CGI script is not executable (%r)" %
                                scriptname)
                return

        # Reference: http://hoohoo.ncsa.uiuc.edu/cgi/env.html
        # XXX Much of the following could be prepared ahead of time!
        env = copy.deepcopy(os.environ)
        env['SERVER_SOFTWARE'] = self.version_string()
        env['SERVER_NAME'] = self.server.server_name
        env['GATEWAY_INTERFACE'] = 'CGI/1.1'
        env['SERVER_PROTOCOL'] = self.protocol_version
        env['SERVER_PORT'] = str(self.server.server_port)
        env['REQUEST_METHOD'] = self.command
        uqrest = urllib_parse.unquote(rest)
        env['PATH_INFO'] = uqrest
        env['PATH_TRANSLATED'] = self.translate_path(uqrest)
        env['SCRIPT_NAME'] = scriptname
        if query:
            env['QUERY_STRING'] = query
        env['REMOTE_ADDR'] = self.client_address[0]
        authorization = self.headers.get("authorization")
        if authorization:
            authorization = authorization.split()
            if len(authorization) == 2:
                import base64, binascii
                env['AUTH_TYPE'] = authorization[0]
                if authorization[0].lower() == "basic":
                    try:
                        authorization = authorization[1].encode('ascii')
                        if utils.PY3:
                            # In Py3.3, was:
                            authorization = base64.decodebytes(authorization).\
                                            decode('ascii')
                        else:
                            # Backport to Py2.7:
                            authorization = base64.decodestring(authorization).\
                                            decode('ascii')
                    except (binascii.Error, UnicodeError):
                        pass
                    else:
                        authorization = authorization.split(':')
                        if len(authorization) == 2:
                            env['REMOTE_USER'] = authorization[0]
        # XXX REMOTE_IDENT
        if self.headers.get('content-type') is None:
            env['CONTENT_TYPE'] = self.headers.get_content_type()
        else:
            env['CONTENT_TYPE'] = self.headers['content-type']
        length = self.headers.get('content-length')
        if length:
            env['CONTENT_LENGTH'] = length
        referer = self.headers.get('referer')
        if referer:
            env['HTTP_REFERER'] = referer
        accept = []
        for line in self.headers.getallmatchingheaders('accept'):
            if line[:1] in "\t\n\r ":
                accept.append(line.strip())
            else:
                accept = accept + line[7:].split(',')
        env['HTTP_ACCEPT'] = ','.join(accept)
        ua = self.headers.get('user-agent')
        if ua:
            env['HTTP_USER_AGENT'] = ua
        co = filter(None, self.headers.get_all('cookie', []))
        cookie_str = ', '.join(co)
        if cookie_str:
            env['HTTP_COOKIE'] = cookie_str
        # XXX Other HTTP_* headers
        # Since we're setting the env in the parent, provide empty
        # values to override previously set values
        for k in ('QUERY_STRING', 'REMOTE_HOST', 'CONTENT_LENGTH',
                  'HTTP_USER_AGENT', 'HTTP_COOKIE', 'HTTP_REFERER'):
            env.setdefault(k, "")

        self.send_response(200, "Script output follows")
        self.flush_headers()

        decoded_query = query.replace('+', ' ')

        if self.have_fork:
            # Unix -- fork as we should
            args = [script]
            if '=' not in decoded_query:
                args.append(decoded_query)
            nobody = nobody_uid()
            self.wfile.flush() # Always flush before forking
            pid = os.fork()
            if pid != 0:
                # Parent
                pid, sts = os.waitpid(pid, 0)
                # throw away additional data [see bug #427345]
                while select.select([self.rfile], [], [], 0)[0]:
                    if not self.rfile.read(1):
                        break
                if sts:
                    self.log_error("CGI script exit status %#x", sts)
                return
            # Child
            try:
                try:
                    os.setuid(nobody)
                except os.error:
                    pass
                os.dup2(self.rfile.fileno(), 0)
                os.dup2(self.wfile.fileno(), 1)
                os.execve(scriptfile, args, env)
            except:
                self.server.handle_error(self.request, self.client_address)
                os._exit(127)

        else:
            # Non-Unix -- use subprocess
            import subprocess
            cmdline = [scriptfile]
            if self.is_python(scriptfile):
                interp = sys.executable
                if interp.lower().endswith("w.exe"):
                    # On Windows, use python.exe, not pythonw.exe
                    interp = interp[:-5] + interp[-4:]
                cmdline = [interp, '-u'] + cmdline
            if '=' not in query:
                cmdline.append(query)
            self.log_message("command: %s", subprocess.list2cmdline(cmdline))
            try:
                nbytes = int(length)
            except (TypeError, ValueError):
                nbytes = 0
            p = subprocess.Popen(cmdline,
                                 stdin=subprocess.PIPE,
                                 stdout=subprocess.PIPE,
                                 stderr=subprocess.PIPE,
                                 env = env
                                 )
            if self.command.lower() == "post" and nbytes > 0:
                data = self.rfile.read(nbytes)
            else:
                data = None
            # throw away additional data [see bug #427345]
            while select.select([self.rfile._sock], [], [], 0)[0]:
                if not self.rfile._sock.recv(1):
                    break
            stdout, stderr = p.communicate(data)
            self.wfile.write(stdout)
            if stderr:
                self.log_error('%s', stderr)
            p.stderr.close()
            p.stdout.close()
            status = p.returncode
            if status:
                self.log_error("CGI script exit status %#x", status)
            else:
                self.log_message("CGI script exited OK")


def test(HandlerClass = BaseHTTPRequestHandler,
         ServerClass = HTTPServer, protocol="HTTP/1.0", port=8000):
    """Test the HTTP request handler class.

    This runs an HTTP server on port 8000 (or the first command line
    argument).

    """
    server_address = ('', port)

    HandlerClass.protocol_version = protocol
    httpd = ServerClass(server_address, HandlerClass)

    sa = httpd.socket.getsockname()
    print("Serving HTTP on", sa[0], "port", sa[1], "...")
    try:
        httpd.serve_forever()
    except KeyboardInterrupt:
        print("\nKeyboard interrupt received, exiting.")
        httpd.server_close()
        sys.exit(0)

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--cgi', action='store_true',
                       help='Run as CGI Server')
    parser.add_argument('port', action='store',
                        default=8000, type=int,
                        nargs='?',
                        help='Specify alternate port [default: 8000]')
    args = parser.parse_args()
    if args.cgi:
        test(HandlerClass=CGIHTTPRequestHandler, port=args.port)
    else:
        test(HandlerClass=SimpleHTTPRequestHandler, port=args.port)
"""HTML character entity references.

Backported for python-future from Python 3.3
"""

from __future__ import (absolute_import, division,
                        print_function, unicode_literals)
from future.builtins import *


# maps the HTML entity name to the Unicode codepoint
name2codepoint = {
    'AElig':    0x00c6, # latin capital letter AE = latin capital ligature AE, U+00C6 ISOlat1
    'Aacute':   0x00c1, # latin capital letter A with acute, U+00C1 ISOlat1
    'Acirc':    0x00c2, # latin capital letter A with circumflex, U+00C2 ISOlat1
    'Agrave':   0x00c0, # latin capital letter A with grave = latin capital letter A grave, U+00C0 ISOlat1
    'Alpha':    0x0391, # greek capital letter alpha, U+0391
    'Aring':    0x00c5, # latin capital letter A with ring above = latin capital letter A ring, U+00C5 ISOlat1
    'Atilde':   0x00c3, # latin capital letter A with tilde, U+00C3 ISOlat1
    'Auml':     0x00c4, # latin capital letter A with diaeresis, U+00C4 ISOlat1
    'Beta':     0x0392, # greek capital letter beta, U+0392
    'Ccedil':   0x00c7, # latin capital letter C with cedilla, U+00C7 ISOlat1
    'Chi':      0x03a7, # greek capital letter chi, U+03A7
    'Dagger':   0x2021, # double dagger, U+2021 ISOpub
    'Delta':    0x0394, # greek capital letter delta, U+0394 ISOgrk3
    'ETH':      0x00d0, # latin capital letter ETH, U+00D0 ISOlat1
    'Eacute':   0x00c9, # latin capital letter E with acute, U+00C9 ISOlat1
    'Ecirc':    0x00ca, # latin capital letter E with circumflex, U+00CA ISOlat1
    'Egrave':   0x00c8, # latin capital letter E with grave, U+00C8 ISOlat1
    'Epsilon':  0x0395, # greek capital letter epsilon, U+0395
    'Eta':      0x0397, # greek capital letter eta, U+0397
    'Euml':     0x00cb, # latin capital letter E with diaeresis, U+00CB ISOlat1
    'Gamma':    0x0393, # greek capital letter gamma, U+0393 ISOgrk3
    'Iacute':   0x00cd, # latin capital letter I with acute, U+00CD ISOlat1
    'Icirc':    0x00ce, # latin capital letter I with circumflex, U+00CE ISOlat1
    'Igrave':   0x00cc, # latin capital letter I with grave, U+00CC ISOlat1
    'Iota':     0x0399, # greek capital letter iota, U+0399
    'Iuml':     0x00cf, # latin capital letter I with diaeresis, U+00CF ISOlat1
    'Kappa':    0x039a, # greek capital letter kappa, U+039A
    'Lambda':   0x039b, # greek capital letter lambda, U+039B ISOgrk3
    'Mu':       0x039c, # greek capital letter mu, U+039C
    'Ntilde':   0x00d1, # latin capital letter N with tilde, U+00D1 ISOlat1
    'Nu':       0x039d, # greek capital letter nu, U+039D
    'OElig':    0x0152, # latin capital ligature OE, U+0152 ISOlat2
    'Oacute':   0x00d3, # latin capital letter O with acute, U+00D3 ISOlat1
    'Ocirc':    0x00d4, # latin capital letter O with circumflex, U+00D4 ISOlat1
    'Ograve':   0x00d2, # latin capital letter O with grave, U+00D2 ISOlat1
    'Omega':    0x03a9, # greek capital letter omega, U+03A9 ISOgrk3
    'Omicron':  0x039f, # greek capital letter omicron, U+039F
    'Oslash':   0x00d8, # latin capital letter O with stroke = latin capital letter O slash, U+00D8 ISOlat1
    'Otilde':   0x00d5, # latin capital letter O with tilde, U+00D5 ISOlat1
    'Ouml':     0x00d6, # latin capital letter O with diaeresis, U+00D6 ISOlat1
    'Phi':      0x03a6, # greek capital letter phi, U+03A6 ISOgrk3
    'Pi':       0x03a0, # greek capital letter pi, U+03A0 ISOgrk3
    'Prime':    0x2033, # double prime = seconds = inches, U+2033 ISOtech
    'Psi':      0x03a8, # greek capital letter psi, U+03A8 ISOgrk3
    'Rho':      0x03a1, # greek capital letter rho, U+03A1
    'Scaron':   0x0160, # latin capital letter S with caron, U+0160 ISOlat2
    'Sigma':    0x03a3, # greek capital letter sigma, U+03A3 ISOgrk3
    'THORN':    0x00de, # latin capital letter THORN, U+00DE ISOlat1
    'Tau':      0x03a4, # greek capital letter tau, U+03A4
    'Theta':    0x0398, # greek capital letter theta, U+0398 ISOgrk3
    'Uacute':   0x00da, # latin capital letter U with acute, U+00DA ISOlat1
    'Ucirc':    0x00db, # latin capital letter U with circumflex, U+00DB ISOlat1
    'Ugrave':   0x00d9, # latin capital letter U with grave, U+00D9 ISOlat1
    'Upsilon':  0x03a5, # greek capital letter upsilon, U+03A5 ISOgrk3
    'Uuml':     0x00dc, # latin capital letter U with diaeresis, U+00DC ISOlat1
    'Xi':       0x039e, # greek capital letter xi, U+039E ISOgrk3
    'Yacute':   0x00dd, # latin capital letter Y with acute, U+00DD ISOlat1
    'Yuml':     0x0178, # latin capital letter Y with diaeresis, U+0178 ISOlat2
    'Zeta':     0x0396, # greek capital letter zeta, U+0396
    'aacute':   0x00e1, # latin small letter a with acute, U+00E1 ISOlat1
    'acirc':    0x00e2, # latin small letter a with circumflex, U+00E2 ISOlat1
    'acute':    0x00b4, # acute accent = spacing acute, U+00B4 ISOdia
    'aelig':    0x00e6, # latin small letter ae = latin small ligature ae, U+00E6 ISOlat1
    'agrave':   0x00e0, # latin small letter a with grave = latin small letter a grave, U+00E0 ISOlat1
    'alefsym':  0x2135, # alef symbol = first transfinite cardinal, U+2135 NEW
    'alpha':    0x03b1, # greek small letter alpha, U+03B1 ISOgrk3
    'amp':      0x0026, # ampersand, U+0026 ISOnum
    'and':      0x2227, # logical and = wedge, U+2227 ISOtech
    'ang':      0x2220, # angle, U+2220 ISOamso
    'aring':    0x00e5, # latin small letter a with ring above = latin small letter a ring, U+00E5 ISOlat1
    'asymp':    0x2248, # almost equal to = asymptotic to, U+2248 ISOamsr
    'atilde':   0x00e3, # latin small letter a with tilde, U+00E3 ISOlat1
    'auml':     0x00e4, # latin small letter a with diaeresis, U+00E4 ISOlat1
    'bdquo':    0x201e, # double low-9 quotation mark, U+201E NEW
    'beta':     0x03b2, # greek small letter beta, U+03B2 ISOgrk3
    'brvbar':   0x00a6, # broken bar = broken vertical bar, U+00A6 ISOnum
    'bull':     0x2022, # bullet = black small circle, U+2022 ISOpub
    'cap':      0x2229, # intersection = cap, U+2229 ISOtech
    'ccedil':   0x00e7, # latin small letter c with cedilla, U+00E7 ISOlat1
    'cedil':    0x00b8, # cedilla = spacing cedilla, U+00B8 ISOdia
    'cent':     0x00a2, # cent sign, U+00A2 ISOnum
    'chi':      0x03c7, # greek small letter chi, U+03C7 ISOgrk3
    'circ':     0x02c6, # modifier letter circumflex accent, U+02C6 ISOpub
    'clubs':    0x2663, # black club suit = shamrock, U+2663 ISOpub
    'cong':     0x2245, # approximately equal to, U+2245 ISOtech
    'copy':     0x00a9, # copyright sign, U+00A9 ISOnum
    'crarr':    0x21b5, # downwards arrow with corner leftwards = carriage return, U+21B5 NEW
    'cup':      0x222a, # union = cup, U+222A ISOtech
    'curren':   0x00a4, # currency sign, U+00A4 ISOnum
    'dArr':     0x21d3, # downwards double arrow, U+21D3 ISOamsa
    'dagger':   0x2020, # dagger, U+2020 ISOpub
    'darr':     0x2193, # downwards arrow, U+2193 ISOnum
    'deg':      0x00b0, # degree sign, U+00B0 ISOnum
    'delta':    0x03b4, # greek small letter delta, U+03B4 ISOgrk3
    'diams':    0x2666, # black diamond suit, U+2666 ISOpub
    'divide':   0x00f7, # division sign, U+00F7 ISOnum
    'eacute':   0x00e9, # latin small letter e with acute, U+00E9 ISOlat1
    'ecirc':    0x00ea, # latin small letter e with circumflex, U+00EA ISOlat1
    'egrave':   0x00e8, # latin small letter e with grave, U+00E8 ISOlat1
    'empty':    0x2205, # empty set = null set = diameter, U+2205 ISOamso
    'emsp':     0x2003, # em space, U+2003 ISOpub
    'ensp':     0x2002, # en space, U+2002 ISOpub
    'epsilon':  0x03b5, # greek small letter epsilon, U+03B5 ISOgrk3
    'equiv':    0x2261, # identical to, U+2261 ISOtech
    'eta':      0x03b7, # greek small letter eta, U+03B7 ISOgrk3
    'eth':      0x00f0, # latin small letter eth, U+00F0 ISOlat1
    'euml':     0x00eb, # latin small letter e with diaeresis, U+00EB ISOlat1
    'euro':     0x20ac, # euro sign, U+20AC NEW
    'exist':    0x2203, # there exists, U+2203 ISOtech
    'fnof':     0x0192, # latin small f with hook = function = florin, U+0192 ISOtech
    'forall':   0x2200, # for all, U+2200 ISOtech
    'frac12':   0x00bd, # vulgar fraction one half = fraction one half, U+00BD ISOnum
    'frac14':   0x00bc, # vulgar fraction one quarter = fraction one quarter, U+00BC ISOnum
    'frac34':   0x00be, # vulgar fraction three quarters = fraction three quarters, U+00BE ISOnum
    'frasl':    0x2044, # fraction slash, U+2044 NEW
    'gamma':    0x03b3, # greek small letter gamma, U+03B3 ISOgrk3
    'ge':       0x2265, # greater-than or equal to, U+2265 ISOtech
    'gt':       0x003e, # greater-than sign, U+003E ISOnum
    'hArr':     0x21d4, # left right double arrow, U+21D4 ISOamsa
    'harr':     0x2194, # left right arrow, U+2194 ISOamsa
    'hearts':   0x2665, # black heart suit = valentine, U+2665 ISOpub
    'hellip':   0x2026, # horizontal ellipsis = three dot leader, U+2026 ISOpub
    'iacute':   0x00ed, # latin small letter i with acute, U+00ED ISOlat1
    'icirc':    0x00ee, # latin small letter i with circumflex, U+00EE ISOlat1
    'iexcl':    0x00a1, # inverted exclamation mark, U+00A1 ISOnum
    'igrave':   0x00ec, # latin small letter i with grave, U+00EC ISOlat1
    'image':    0x2111, # blackletter capital I = imaginary part, U+2111 ISOamso
    'infin':    0x221e, # infinity, U+221E ISOtech
    'int':      0x222b, # integral, U+222B ISOtech
    'iota':     0x03b9, # greek small letter iota, U+03B9 ISOgrk3
    'iquest':   0x00bf, # inverted question mark = turned question mark, U+00BF ISOnum
    'isin':     0x2208, # element of, U+2208 ISOtech
    'iuml':     0x00ef, # latin small letter i with diaeresis, U+00EF ISOlat1
    'kappa':    0x03ba, # greek small letter kappa, U+03BA ISOgrk3
    'lArr':     0x21d0, # leftwards double arrow, U+21D0 ISOtech
    'lambda':   0x03bb, # greek small letter lambda, U+03BB ISOgrk3
    'lang':     0x2329, # left-pointing angle bracket = bra, U+2329 ISOtech
    'laquo':    0x00ab, # left-pointing double angle quotation mark = left pointing guillemet, U+00AB ISOnum
    'larr':     0x2190, # leftwards arrow, U+2190 ISOnum
    'lceil':    0x2308, # left ceiling = apl upstile, U+2308 ISOamsc
    'ldquo':    0x201c, # left double quotation mark, U+201C ISOnum
    'le':       0x2264, # less-than or equal to, U+2264 ISOtech
    'lfloor':   0x230a, # left floor = apl downstile, U+230A ISOamsc
    'lowast':   0x2217, # asterisk operator, U+2217 ISOtech
    'loz':      0x25ca, # lozenge, U+25CA ISOpub
    'lrm':      0x200e, # left-to-right mark, U+200E NEW RFC 2070
    'lsaquo':   0x2039, # single left-pointing angle quotation mark, U+2039 ISO proposed
    'lsquo':    0x2018, # left single quotation mark, U+2018 ISOnum
    'lt':       0x003c, # less-than sign, U+003C ISOnum
    'macr':     0x00af, # macron = spacing macron = overline = APL overbar, U+00AF ISOdia
    'mdash':    0x2014, # em dash, U+2014 ISOpub
    'micro':    0x00b5, # micro sign, U+00B5 ISOnum
    'middot':   0x00b7, # middle dot = Georgian comma = Greek middle dot, U+00B7 ISOnum
    'minus':    0x2212, # minus sign, U+2212 ISOtech
    'mu':       0x03bc, # greek small letter mu, U+03BC ISOgrk3
    'nabla':    0x2207, # nabla = backward difference, U+2207 ISOtech
    'nbsp':     0x00a0, # no-break space = non-breaking space, U+00A0 ISOnum
    'ndash':    0x2013, # en dash, U+2013 ISOpub
    'ne':       0x2260, # not equal to, U+2260 ISOtech
    'ni':       0x220b, # contains as member, U+220B ISOtech
    'not':      0x00ac, # not sign, U+00AC ISOnum
    'notin':    0x2209, # not an element of, U+2209 ISOtech
    'nsub':     0x2284, # not a subset of, U+2284 ISOamsn
    'ntilde':   0x00f1, # latin small letter n with tilde, U+00F1 ISOlat1
    'nu':       0x03bd, # greek small letter nu, U+03BD ISOgrk3
    'oacute':   0x00f3, # latin small letter o with acute, U+00F3 ISOlat1
    'ocirc':    0x00f4, # latin small letter o with circumflex, U+00F4 ISOlat1
    'oelig':    0x0153, # latin small ligature oe, U+0153 ISOlat2
    'ograve':   0x00f2, # latin small letter o with grave, U+00F2 ISOlat1
    'oline':    0x203e, # overline = spacing overscore, U+203E NEW
    'omega':    0x03c9, # greek small letter omega, U+03C9 ISOgrk3
    'omicron':  0x03bf, # greek small letter omicron, U+03BF NEW
    'oplus':    0x2295, # circled plus = direct sum, U+2295 ISOamsb
    'or':       0x2228, # logical or = vee, U+2228 ISOtech
    'ordf':     0x00aa, # feminine ordinal indicator, U+00AA ISOnum
    'ordm':     0x00ba, # masculine ordinal indicator, U+00BA ISOnum
    'oslash':   0x00f8, # latin small letter o with stroke = latin small letter o slash, U+00F8 ISOlat1
    'otilde':   0x00f5, # latin small letter o with tilde, U+00F5 ISOlat1
    'otimes':   0x2297, # circled times = vector product, U+2297 ISOamsb
    'ouml':     0x00f6, # latin small letter o with diaeresis, U+00F6 ISOlat1
    'para':     0x00b6, # pilcrow sign = paragraph sign, U+00B6 ISOnum
    'part':     0x2202, # partial differential, U+2202 ISOtech
    'permil':   0x2030, # per mille sign, U+2030 ISOtech
    'perp':     0x22a5, # up tack = orthogonal to = perpendicular, U+22A5 ISOtech
    'phi':      0x03c6, # greek small letter phi, U+03C6 ISOgrk3
    'pi':       0x03c0, # greek small letter pi, U+03C0 ISOgrk3
    'piv':      0x03d6, # greek pi symbol, U+03D6 ISOgrk3
    'plusmn':   0x00b1, # plus-minus sign = plus-or-minus sign, U+00B1 ISOnum
    'pound':    0x00a3, # pound sign, U+00A3 ISOnum
    'prime':    0x2032, # prime = minutes = feet, U+2032 ISOtech
    'prod':     0x220f, # n-ary product = product sign, U+220F ISOamsb
    'prop':     0x221d, # proportional to, U+221D ISOtech
    'psi':      0x03c8, # greek small letter psi, U+03C8 ISOgrk3
    'quot':     0x0022, # quotation mark = APL quote, U+0022 ISOnum
    'rArr':     0x21d2, # rightwards double arrow, U+21D2 ISOtech
    'radic':    0x221a, # square root = radical sign, U+221A ISOtech
    'rang':     0x232a, # right-pointing angle bracket = ket, U+232A ISOtech
    'raquo':    0x00bb, # right-pointing double angle quotation mark = right pointing guillemet, U+00BB ISOnum
    'rarr':     0x2192, # rightwards arrow, U+2192 ISOnum
    'rceil':    0x2309, # right ceiling, U+2309 ISOamsc
    'rdquo':    0x201d, # right double quotation mark, U+201D ISOnum
    'real':     0x211c, # blackletter capital R = real part symbol, U+211C ISOamso
    'reg':      0x00ae, # registered sign = registered trade mark sign, U+00AE ISOnum
    'rfloor':   0x230b, # right floor, U+230B ISOamsc
    'rho':      0x03c1, # greek small letter rho, U+03C1 ISOgrk3
    'rlm':      0x200f, # right-to-left mark, U+200F NEW RFC 2070
    'rsaquo':   0x203a, # single right-pointing angle quotation mark, U+203A ISO proposed
    'rsquo':    0x2019, # right single quotation mark, U+2019 ISOnum
    'sbquo':    0x201a, # single low-9 quotation mark, U+201A NEW
    'scaron':   0x0161, # latin small letter s with caron, U+0161 ISOlat2
    'sdot':     0x22c5, # dot operator, U+22C5 ISOamsb
    'sect':     0x00a7, # section sign, U+00A7 ISOnum
    'shy':      0x00ad, # soft hyphen = discretionary hyphen, U+00AD ISOnum
    'sigma':    0x03c3, # greek small letter sigma, U+03C3 ISOgrk3
    'sigmaf':   0x03c2, # greek small letter final sigma, U+03C2 ISOgrk3
    'sim':      0x223c, # tilde operator = varies with = similar to, U+223C ISOtech
    'spades':   0x2660, # black spade suit, U+2660 ISOpub
    'sub':      0x2282, # subset of, U+2282 ISOtech
    'sube':     0x2286, # subset of or equal to, U+2286 ISOtech
    'sum':      0x2211, # n-ary summation, U+2211 ISOamsb
    'sup':      0x2283, # superset of, U+2283 ISOtech
    'sup1':     0x00b9, # superscript one = superscript digit one, U+00B9 ISOnum
    'sup2':     0x00b2, # superscript two = superscript digit two = squared, U+00B2 ISOnum
    'sup3':     0x00b3, # superscript three = superscript digit three = cubed, U+00B3 ISOnum
    'supe':     0x2287, # superset of or equal to, U+2287 ISOtech
    'szlig':    0x00df, # latin small letter sharp s = ess-zed, U+00DF ISOlat1
    'tau':      0x03c4, # greek small letter tau, U+03C4 ISOgrk3
    'there4':   0x2234, # therefore, U+2234 ISOtech
    'theta':    0x03b8, # greek small letter theta, U+03B8 ISOgrk3
    'thetasym': 0x03d1, # greek small letter theta symbol, U+03D1 NEW
    'thinsp':   0x2009, # thin space, U+2009 ISOpub
    'thorn':    0x00fe, # latin small letter thorn, U+00FE ISOlat1
    'tilde':    0x02dc, # small tilde, U+02DC ISOdia
    'times':    0x00d7, # multiplication sign, U+00D7 ISOnum
    'trade':    0x2122, # trade mark sign, U+2122 ISOnum
    'uArr':     0x21d1, # upwards double arrow, U+21D1 ISOamsa
    'uacute':   0x00fa, # latin small letter u with acute, U+00FA ISOlat1
    'uarr':     0x2191, # upwards arrow, U+2191 ISOnum
    'ucirc':    0x00fb, # latin small letter u with circumflex, U+00FB ISOlat1
    'ugrave':   0x00f9, # latin small letter u with grave, U+00F9 ISOlat1
    'uml':      0x00a8, # diaeresis = spacing diaeresis, U+00A8 ISOdia
    'upsih':    0x03d2, # greek upsilon with hook symbol, U+03D2 NEW
    'upsilon':  0x03c5, # greek small letter upsilon, U+03C5 ISOgrk3
    'uuml':     0x00fc, # latin small letter u with diaeresis, U+00FC ISOlat1
    'weierp':   0x2118, # script capital P = power set = Weierstrass p, U+2118 ISOamso
    'xi':       0x03be, # greek small letter xi, U+03BE ISOgrk3
    'yacute':   0x00fd, # latin small letter y with acute, U+00FD ISOlat1
    'yen':      0x00a5, # yen sign = yuan sign, U+00A5 ISOnum
    'yuml':     0x00ff, # latin small letter y with diaeresis, U+00FF ISOlat1
    'zeta':     0x03b6, # greek small letter zeta, U+03B6 ISOgrk3
    'zwj':      0x200d, # zero width joiner, U+200D NEW RFC 2070
    'zwnj':     0x200c, # zero width non-joiner, U+200C NEW RFC 2070
}
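
A quick sketch of how this table is typically consumed: look up the entity name (without the surrounding `&`/`;`) and pass the codepoint through `chr()`. The excerpt dict and `entity_to_char` helper below are illustrative only; the real module defines the full table above.

```python
# Illustrative excerpt of the name2codepoint table defined above.
name2codepoint = {
    'amp':  0x0026,  # ampersand
    'copy': 0x00a9,  # copyright sign
    'euro': 0x20ac,  # euro sign
}

def entity_to_char(name):
    """Resolve an HTML entity name (no '&' or ';') to its character."""
    return chr(name2codepoint[name])

# The inverse mapping can be rebuilt by swapping keys and values.
codepoint2name = {cp: name for name, cp in name2codepoint.items()}
```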


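The html5 table below differs from name2codepoint in two ways: many names appear both with and without a trailing ';' (the semicolon-less entries are legacy references that browsers accept unterminated), and some references expand to more than one character. A minimal sketch of a lookup that prefers the terminated form, using a hypothetical `resolve` helper over an excerpt of the table:

```python
# Illustrative excerpt of the html5 table (the full dict follows below).
html5 = {
    'amp': '&', 'amp;': '&',   # legacy form accepted without ';'
    'acE;': '\u223e\u0333',    # expands to two characters
    'fjlig;': 'fj',            # expands to a plain ASCII sequence
}

def resolve(ref):
    """Resolve a named character reference, preferring the ';' form."""
    return html5.get(ref + ';', html5.get(ref))
```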
# maps the HTML5 named character references to the equivalent Unicode character(s)
html5 = {
    'Aacute': '\xc1',
    'aacute': '\xe1',
    'Aacute;': '\xc1',
    'aacute;': '\xe1',
    'Abreve;': '\u0102',
    'abreve;': '\u0103',
    'ac;': '\u223e',
    'acd;': '\u223f',
    'acE;': '\u223e\u0333',
    'Acirc': '\xc2',
    'acirc': '\xe2',
    'Acirc;': '\xc2',
    'acirc;': '\xe2',
    'acute': '\xb4',
    'acute;': '\xb4',
    'Acy;': '\u0410',
    'acy;': '\u0430',
    'AElig': '\xc6',
    'aelig': '\xe6',
    'AElig;': '\xc6',
    'aelig;': '\xe6',
    'af;': '\u2061',
    'Afr;': '\U0001d504',
    'afr;': '\U0001d51e',
    'Agrave': '\xc0',
    'agrave': '\xe0',
    'Agrave;': '\xc0',
    'agrave;': '\xe0',
    'alefsym;': '\u2135',
    'aleph;': '\u2135',
    'Alpha;': '\u0391',
    'alpha;': '\u03b1',
    'Amacr;': '\u0100',
    'amacr;': '\u0101',
    'amalg;': '\u2a3f',
    'AMP': '&',
    'amp': '&',
    'AMP;': '&',
    'amp;': '&',
    'And;': '\u2a53',
    'and;': '\u2227',
    'andand;': '\u2a55',
    'andd;': '\u2a5c',
    'andslope;': '\u2a58',
    'andv;': '\u2a5a',
    'ang;': '\u2220',
    'ange;': '\u29a4',
    'angle;': '\u2220',
    'angmsd;': '\u2221',
    'angmsdaa;': '\u29a8',
    'angmsdab;': '\u29a9',
    'angmsdac;': '\u29aa',
    'angmsdad;': '\u29ab',
    'angmsdae;': '\u29ac',
    'angmsdaf;': '\u29ad',
    'angmsdag;': '\u29ae',
    'angmsdah;': '\u29af',
    'angrt;': '\u221f',
    'angrtvb;': '\u22be',
    'angrtvbd;': '\u299d',
    'angsph;': '\u2222',
    'angst;': '\xc5',
    'angzarr;': '\u237c',
    'Aogon;': '\u0104',
    'aogon;': '\u0105',
    'Aopf;': '\U0001d538',
    'aopf;': '\U0001d552',
    'ap;': '\u2248',
    'apacir;': '\u2a6f',
    'apE;': '\u2a70',
    'ape;': '\u224a',
    'apid;': '\u224b',
    'apos;': "'",
    'ApplyFunction;': '\u2061',
    'approx;': '\u2248',
    'approxeq;': '\u224a',
    'Aring': '\xc5',
    'aring': '\xe5',
    'Aring;': '\xc5',
    'aring;': '\xe5',
    'Ascr;': '\U0001d49c',
    'ascr;': '\U0001d4b6',
    'Assign;': '\u2254',
    'ast;': '*',
    'asymp;': '\u2248',
    'asympeq;': '\u224d',
    'Atilde': '\xc3',
    'atilde': '\xe3',
    'Atilde;': '\xc3',
    'atilde;': '\xe3',
    'Auml': '\xc4',
    'auml': '\xe4',
    'Auml;': '\xc4',
    'auml;': '\xe4',
    'awconint;': '\u2233',
    'awint;': '\u2a11',
    'backcong;': '\u224c',
    'backepsilon;': '\u03f6',
    'backprime;': '\u2035',
    'backsim;': '\u223d',
    'backsimeq;': '\u22cd',
    'Backslash;': '\u2216',
    'Barv;': '\u2ae7',
    'barvee;': '\u22bd',
    'Barwed;': '\u2306',
    'barwed;': '\u2305',
    'barwedge;': '\u2305',
    'bbrk;': '\u23b5',
    'bbrktbrk;': '\u23b6',
    'bcong;': '\u224c',
    'Bcy;': '\u0411',
    'bcy;': '\u0431',
    'bdquo;': '\u201e',
    'becaus;': '\u2235',
    'Because;': '\u2235',
    'because;': '\u2235',
    'bemptyv;': '\u29b0',
    'bepsi;': '\u03f6',
    'bernou;': '\u212c',
    'Bernoullis;': '\u212c',
    'Beta;': '\u0392',
    'beta;': '\u03b2',
    'beth;': '\u2136',
    'between;': '\u226c',
    'Bfr;': '\U0001d505',
    'bfr;': '\U0001d51f',
    'bigcap;': '\u22c2',
    'bigcirc;': '\u25ef',
    'bigcup;': '\u22c3',
    'bigodot;': '\u2a00',
    'bigoplus;': '\u2a01',
    'bigotimes;': '\u2a02',
    'bigsqcup;': '\u2a06',
    'bigstar;': '\u2605',
    'bigtriangledown;': '\u25bd',
    'bigtriangleup;': '\u25b3',
    'biguplus;': '\u2a04',
    'bigvee;': '\u22c1',
    'bigwedge;': '\u22c0',
    'bkarow;': '\u290d',
    'blacklozenge;': '\u29eb',
    'blacksquare;': '\u25aa',
    'blacktriangle;': '\u25b4',
    'blacktriangledown;': '\u25be',
    'blacktriangleleft;': '\u25c2',
    'blacktriangleright;': '\u25b8',
    'blank;': '\u2423',
    'blk12;': '\u2592',
    'blk14;': '\u2591',
    'blk34;': '\u2593',
    'block;': '\u2588',
    'bne;': '=\u20e5',
    'bnequiv;': '\u2261\u20e5',
    'bNot;': '\u2aed',
    'bnot;': '\u2310',
    'Bopf;': '\U0001d539',
    'bopf;': '\U0001d553',
    'bot;': '\u22a5',
    'bottom;': '\u22a5',
    'bowtie;': '\u22c8',
    'boxbox;': '\u29c9',
    'boxDL;': '\u2557',
    'boxDl;': '\u2556',
    'boxdL;': '\u2555',
    'boxdl;': '\u2510',
    'boxDR;': '\u2554',
    'boxDr;': '\u2553',
    'boxdR;': '\u2552',
    'boxdr;': '\u250c',
    'boxH;': '\u2550',
    'boxh;': '\u2500',
    'boxHD;': '\u2566',
    'boxHd;': '\u2564',
    'boxhD;': '\u2565',
    'boxhd;': '\u252c',
    'boxHU;': '\u2569',
    'boxHu;': '\u2567',
    'boxhU;': '\u2568',
    'boxhu;': '\u2534',
    'boxminus;': '\u229f',
    'boxplus;': '\u229e',
    'boxtimes;': '\u22a0',
    'boxUL;': '\u255d',
    'boxUl;': '\u255c',
    'boxuL;': '\u255b',
    'boxul;': '\u2518',
    'boxUR;': '\u255a',
    'boxUr;': '\u2559',
    'boxuR;': '\u2558',
    'boxur;': '\u2514',
    'boxV;': '\u2551',
    'boxv;': '\u2502',
    'boxVH;': '\u256c',
    'boxVh;': '\u256b',
    'boxvH;': '\u256a',
    'boxvh;': '\u253c',
    'boxVL;': '\u2563',
    'boxVl;': '\u2562',
    'boxvL;': '\u2561',
    'boxvl;': '\u2524',
    'boxVR;': '\u2560',
    'boxVr;': '\u255f',
    'boxvR;': '\u255e',
    'boxvr;': '\u251c',
    'bprime;': '\u2035',
    'Breve;': '\u02d8',
    'breve;': '\u02d8',
    'brvbar': '\xa6',
    'brvbar;': '\xa6',
    'Bscr;': '\u212c',
    'bscr;': '\U0001d4b7',
    'bsemi;': '\u204f',
    'bsim;': '\u223d',
    'bsime;': '\u22cd',
    'bsol;': '\\',
    'bsolb;': '\u29c5',
    'bsolhsub;': '\u27c8',
    'bull;': '\u2022',
    'bullet;': '\u2022',
    'bump;': '\u224e',
    'bumpE;': '\u2aae',
    'bumpe;': '\u224f',
    'Bumpeq;': '\u224e',
    'bumpeq;': '\u224f',
    'Cacute;': '\u0106',
    'cacute;': '\u0107',
    'Cap;': '\u22d2',
    'cap;': '\u2229',
    'capand;': '\u2a44',
    'capbrcup;': '\u2a49',
    'capcap;': '\u2a4b',
    'capcup;': '\u2a47',
    'capdot;': '\u2a40',
    'CapitalDifferentialD;': '\u2145',
    'caps;': '\u2229\ufe00',
    'caret;': '\u2041',
    'caron;': '\u02c7',
    'Cayleys;': '\u212d',
    'ccaps;': '\u2a4d',
    'Ccaron;': '\u010c',
    'ccaron;': '\u010d',
    'Ccedil': '\xc7',
    'ccedil': '\xe7',
    'Ccedil;': '\xc7',
    'ccedil;': '\xe7',
    'Ccirc;': '\u0108',
    'ccirc;': '\u0109',
    'Cconint;': '\u2230',
    'ccups;': '\u2a4c',
    'ccupssm;': '\u2a50',
    'Cdot;': '\u010a',
    'cdot;': '\u010b',
    'cedil': '\xb8',
    'cedil;': '\xb8',
    'Cedilla;': '\xb8',
    'cemptyv;': '\u29b2',
    'cent': '\xa2',
    'cent;': '\xa2',
    'CenterDot;': '\xb7',
    'centerdot;': '\xb7',
    'Cfr;': '\u212d',
    'cfr;': '\U0001d520',
    'CHcy;': '\u0427',
    'chcy;': '\u0447',
    'check;': '\u2713',
    'checkmark;': '\u2713',
    'Chi;': '\u03a7',
    'chi;': '\u03c7',
    'cir;': '\u25cb',
    'circ;': '\u02c6',
    'circeq;': '\u2257',
    'circlearrowleft;': '\u21ba',
    'circlearrowright;': '\u21bb',
    'circledast;': '\u229b',
    'circledcirc;': '\u229a',
    'circleddash;': '\u229d',
    'CircleDot;': '\u2299',
    'circledR;': '\xae',
    'circledS;': '\u24c8',
    'CircleMinus;': '\u2296',
    'CirclePlus;': '\u2295',
    'CircleTimes;': '\u2297',
    'cirE;': '\u29c3',
    'cire;': '\u2257',
    'cirfnint;': '\u2a10',
    'cirmid;': '\u2aef',
    'cirscir;': '\u29c2',
    'ClockwiseContourIntegral;': '\u2232',
    'CloseCurlyDoubleQuote;': '\u201d',
    'CloseCurlyQuote;': '\u2019',
    'clubs;': '\u2663',
    'clubsuit;': '\u2663',
    'Colon;': '\u2237',
    'colon;': ':',
    'Colone;': '\u2a74',
    'colone;': '\u2254',
    'coloneq;': '\u2254',
    'comma;': ',',
    'commat;': '@',
    'comp;': '\u2201',
    'compfn;': '\u2218',
    'complement;': '\u2201',
    'complexes;': '\u2102',
    'cong;': '\u2245',
    'congdot;': '\u2a6d',
    'Congruent;': '\u2261',
    'Conint;': '\u222f',
    'conint;': '\u222e',
    'ContourIntegral;': '\u222e',
    'Copf;': '\u2102',
    'copf;': '\U0001d554',
    'coprod;': '\u2210',
    'Coproduct;': '\u2210',
    'COPY': '\xa9',
    'copy': '\xa9',
    'COPY;': '\xa9',
    'copy;': '\xa9',
    'copysr;': '\u2117',
    'CounterClockwiseContourIntegral;': '\u2233',
    'crarr;': '\u21b5',
    'Cross;': '\u2a2f',
    'cross;': '\u2717',
    'Cscr;': '\U0001d49e',
    'cscr;': '\U0001d4b8',
    'csub;': '\u2acf',
    'csube;': '\u2ad1',
    'csup;': '\u2ad0',
    'csupe;': '\u2ad2',
    'ctdot;': '\u22ef',
    'cudarrl;': '\u2938',
    'cudarrr;': '\u2935',
    'cuepr;': '\u22de',
    'cuesc;': '\u22df',
    'cularr;': '\u21b6',
    'cularrp;': '\u293d',
    'Cup;': '\u22d3',
    'cup;': '\u222a',
    'cupbrcap;': '\u2a48',
    'CupCap;': '\u224d',
    'cupcap;': '\u2a46',
    'cupcup;': '\u2a4a',
    'cupdot;': '\u228d',
    'cupor;': '\u2a45',
    'cups;': '\u222a\ufe00',
    'curarr;': '\u21b7',
    'curarrm;': '\u293c',
    'curlyeqprec;': '\u22de',
    'curlyeqsucc;': '\u22df',
    'curlyvee;': '\u22ce',
    'curlywedge;': '\u22cf',
    'curren': '\xa4',
    'curren;': '\xa4',
    'curvearrowleft;': '\u21b6',
    'curvearrowright;': '\u21b7',
    'cuvee;': '\u22ce',
    'cuwed;': '\u22cf',
    'cwconint;': '\u2232',
    'cwint;': '\u2231',
    'cylcty;': '\u232d',
    'Dagger;': '\u2021',
    'dagger;': '\u2020',
    'daleth;': '\u2138',
    'Darr;': '\u21a1',
    'dArr;': '\u21d3',
    'darr;': '\u2193',
    'dash;': '\u2010',
    'Dashv;': '\u2ae4',
    'dashv;': '\u22a3',
    'dbkarow;': '\u290f',
    'dblac;': '\u02dd',
    'Dcaron;': '\u010e',
    'dcaron;': '\u010f',
    'Dcy;': '\u0414',
    'dcy;': '\u0434',
    'DD;': '\u2145',
    'dd;': '\u2146',
    'ddagger;': '\u2021',
    'ddarr;': '\u21ca',
    'DDotrahd;': '\u2911',
    'ddotseq;': '\u2a77',
    'deg': '\xb0',
    'deg;': '\xb0',
    'Del;': '\u2207',
    'Delta;': '\u0394',
    'delta;': '\u03b4',
    'demptyv;': '\u29b1',
    'dfisht;': '\u297f',
    'Dfr;': '\U0001d507',
    'dfr;': '\U0001d521',
    'dHar;': '\u2965',
    'dharl;': '\u21c3',
    'dharr;': '\u21c2',
    'DiacriticalAcute;': '\xb4',
    'DiacriticalDot;': '\u02d9',
    'DiacriticalDoubleAcute;': '\u02dd',
    'DiacriticalGrave;': '`',
    'DiacriticalTilde;': '\u02dc',
    'diam;': '\u22c4',
    'Diamond;': '\u22c4',
    'diamond;': '\u22c4',
    'diamondsuit;': '\u2666',
    'diams;': '\u2666',
    'die;': '\xa8',
    'DifferentialD;': '\u2146',
    'digamma;': '\u03dd',
    'disin;': '\u22f2',
    'div;': '\xf7',
    'divide': '\xf7',
    'divide;': '\xf7',
    'divideontimes;': '\u22c7',
    'divonx;': '\u22c7',
    'DJcy;': '\u0402',
    'djcy;': '\u0452',
    'dlcorn;': '\u231e',
    'dlcrop;': '\u230d',
    'dollar;': '$',
    'Dopf;': '\U0001d53b',
    'dopf;': '\U0001d555',
    'Dot;': '\xa8',
    'dot;': '\u02d9',
    'DotDot;': '\u20dc',
    'doteq;': '\u2250',
    'doteqdot;': '\u2251',
    'DotEqual;': '\u2250',
    'dotminus;': '\u2238',
    'dotplus;': '\u2214',
    'dotsquare;': '\u22a1',
    'doublebarwedge;': '\u2306',
    'DoubleContourIntegral;': '\u222f',
    'DoubleDot;': '\xa8',
    'DoubleDownArrow;': '\u21d3',
    'DoubleLeftArrow;': '\u21d0',
    'DoubleLeftRightArrow;': '\u21d4',
    'DoubleLeftTee;': '\u2ae4',
    'DoubleLongLeftArrow;': '\u27f8',
    'DoubleLongLeftRightArrow;': '\u27fa',
    'DoubleLongRightArrow;': '\u27f9',
    'DoubleRightArrow;': '\u21d2',
    'DoubleRightTee;': '\u22a8',
    'DoubleUpArrow;': '\u21d1',
    'DoubleUpDownArrow;': '\u21d5',
    'DoubleVerticalBar;': '\u2225',
    'DownArrow;': '\u2193',
    'Downarrow;': '\u21d3',
    'downarrow;': '\u2193',
    'DownArrowBar;': '\u2913',
    'DownArrowUpArrow;': '\u21f5',
    'DownBreve;': '\u0311',
    'downdownarrows;': '\u21ca',
    'downharpoonleft;': '\u21c3',
    'downharpoonright;': '\u21c2',
    'DownLeftRightVector;': '\u2950',
    'DownLeftTeeVector;': '\u295e',
    'DownLeftVector;': '\u21bd',
    'DownLeftVectorBar;': '\u2956',
    'DownRightTeeVector;': '\u295f',
    'DownRightVector;': '\u21c1',
    'DownRightVectorBar;': '\u2957',
    'DownTee;': '\u22a4',
    'DownTeeArrow;': '\u21a7',
    'drbkarow;': '\u2910',
    'drcorn;': '\u231f',
    'drcrop;': '\u230c',
    'Dscr;': '\U0001d49f',
    'dscr;': '\U0001d4b9',
    'DScy;': '\u0405',
    'dscy;': '\u0455',
    'dsol;': '\u29f6',
    'Dstrok;': '\u0110',
    'dstrok;': '\u0111',
    'dtdot;': '\u22f1',
    'dtri;': '\u25bf',
    'dtrif;': '\u25be',
    'duarr;': '\u21f5',
    'duhar;': '\u296f',
    'dwangle;': '\u29a6',
    'DZcy;': '\u040f',
    'dzcy;': '\u045f',
    'dzigrarr;': '\u27ff',
    'Eacute': '\xc9',
    'eacute': '\xe9',
    'Eacute;': '\xc9',
    'eacute;': '\xe9',
    'easter;': '\u2a6e',
    'Ecaron;': '\u011a',
    'ecaron;': '\u011b',
    'ecir;': '\u2256',
    'Ecirc': '\xca',
    'ecirc': '\xea',
    'Ecirc;': '\xca',
    'ecirc;': '\xea',
    'ecolon;': '\u2255',
    'Ecy;': '\u042d',
    'ecy;': '\u044d',
    'eDDot;': '\u2a77',
    'Edot;': '\u0116',
    'eDot;': '\u2251',
    'edot;': '\u0117',
    'ee;': '\u2147',
    'efDot;': '\u2252',
    'Efr;': '\U0001d508',
    'efr;': '\U0001d522',
    'eg;': '\u2a9a',
    'Egrave': '\xc8',
    'egrave': '\xe8',
    'Egrave;': '\xc8',
    'egrave;': '\xe8',
    'egs;': '\u2a96',
    'egsdot;': '\u2a98',
    'el;': '\u2a99',
    'Element;': '\u2208',
    'elinters;': '\u23e7',
    'ell;': '\u2113',
    'els;': '\u2a95',
    'elsdot;': '\u2a97',
    'Emacr;': '\u0112',
    'emacr;': '\u0113',
    'empty;': '\u2205',
    'emptyset;': '\u2205',
    'EmptySmallSquare;': '\u25fb',
    'emptyv;': '\u2205',
    'EmptyVerySmallSquare;': '\u25ab',
    'emsp13;': '\u2004',
    'emsp14;': '\u2005',
    'emsp;': '\u2003',
    'ENG;': '\u014a',
    'eng;': '\u014b',
    'ensp;': '\u2002',
    'Eogon;': '\u0118',
    'eogon;': '\u0119',
    'Eopf;': '\U0001d53c',
    'eopf;': '\U0001d556',
    'epar;': '\u22d5',
    'eparsl;': '\u29e3',
    'eplus;': '\u2a71',
    'epsi;': '\u03b5',
    'Epsilon;': '\u0395',
    'epsilon;': '\u03b5',
    'epsiv;': '\u03f5',
    'eqcirc;': '\u2256',
    'eqcolon;': '\u2255',
    'eqsim;': '\u2242',
    'eqslantgtr;': '\u2a96',
    'eqslantless;': '\u2a95',
    'Equal;': '\u2a75',
    'equals;': '=',
    'EqualTilde;': '\u2242',
    'equest;': '\u225f',
    'Equilibrium;': '\u21cc',
    'equiv;': '\u2261',
    'equivDD;': '\u2a78',
    'eqvparsl;': '\u29e5',
    'erarr;': '\u2971',
    'erDot;': '\u2253',
    'Escr;': '\u2130',
    'escr;': '\u212f',
    'esdot;': '\u2250',
    'Esim;': '\u2a73',
    'esim;': '\u2242',
    'Eta;': '\u0397',
    'eta;': '\u03b7',
    'ETH': '\xd0',
    'eth': '\xf0',
    'ETH;': '\xd0',
    'eth;': '\xf0',
    'Euml': '\xcb',
    'euml': '\xeb',
    'Euml;': '\xcb',
    'euml;': '\xeb',
    'euro;': '\u20ac',
    'excl;': '!',
    'exist;': '\u2203',
    'Exists;': '\u2203',
    'expectation;': '\u2130',
    'ExponentialE;': '\u2147',
    'exponentiale;': '\u2147',
    'fallingdotseq;': '\u2252',
    'Fcy;': '\u0424',
    'fcy;': '\u0444',
    'female;': '\u2640',
    'ffilig;': '\ufb03',
    'fflig;': '\ufb00',
    'ffllig;': '\ufb04',
    'Ffr;': '\U0001d509',
    'ffr;': '\U0001d523',
    'filig;': '\ufb01',
    'FilledSmallSquare;': '\u25fc',
    'FilledVerySmallSquare;': '\u25aa',
    'fjlig;': 'fj',
    'flat;': '\u266d',
    'fllig;': '\ufb02',
    'fltns;': '\u25b1',
    'fnof;': '\u0192',
    'Fopf;': '\U0001d53d',
    'fopf;': '\U0001d557',
    'ForAll;': '\u2200',
    'forall;': '\u2200',
    'fork;': '\u22d4',
    'forkv;': '\u2ad9',
    'Fouriertrf;': '\u2131',
    'fpartint;': '\u2a0d',
    'frac12': '\xbd',
    'frac12;': '\xbd',
    'frac13;': '\u2153',
    'frac14': '\xbc',
    'frac14;': '\xbc',
    'frac15;': '\u2155',
    'frac16;': '\u2159',
    'frac18;': '\u215b',
    'frac23;': '\u2154',
    'frac25;': '\u2156',
    'frac34': '\xbe',
    'frac34;': '\xbe',
    'frac35;': '\u2157',
    'frac38;': '\u215c',
    'frac45;': '\u2158',
    'frac56;': '\u215a',
    'frac58;': '\u215d',
    'frac78;': '\u215e',
    'frasl;': '\u2044',
    'frown;': '\u2322',
    'Fscr;': '\u2131',
    'fscr;': '\U0001d4bb',
    'gacute;': '\u01f5',
    'Gamma;': '\u0393',
    'gamma;': '\u03b3',
    'Gammad;': '\u03dc',
    'gammad;': '\u03dd',
    'gap;': '\u2a86',
    'Gbreve;': '\u011e',
    'gbreve;': '\u011f',
    'Gcedil;': '\u0122',
    'Gcirc;': '\u011c',
    'gcirc;': '\u011d',
    'Gcy;': '\u0413',
    'gcy;': '\u0433',
    'Gdot;': '\u0120',
    'gdot;': '\u0121',
    'gE;': '\u2267',
    'ge;': '\u2265',
    'gEl;': '\u2a8c',
    'gel;': '\u22db',
    'geq;': '\u2265',
    'geqq;': '\u2267',
    'geqslant;': '\u2a7e',
    'ges;': '\u2a7e',
    'gescc;': '\u2aa9',
    'gesdot;': '\u2a80',
    'gesdoto;': '\u2a82',
    'gesdotol;': '\u2a84',
    'gesl;': '\u22db\ufe00',
    'gesles;': '\u2a94',
    'Gfr;': '\U0001d50a',
    'gfr;': '\U0001d524',
    'Gg;': '\u22d9',
    'gg;': '\u226b',
    'ggg;': '\u22d9',
    'gimel;': '\u2137',
    'GJcy;': '\u0403',
    'gjcy;': '\u0453',
    'gl;': '\u2277',
    'gla;': '\u2aa5',
    'glE;': '\u2a92',
    'glj;': '\u2aa4',
    'gnap;': '\u2a8a',
    'gnapprox;': '\u2a8a',
    'gnE;': '\u2269',
    'gne;': '\u2a88',
    'gneq;': '\u2a88',
    'gneqq;': '\u2269',
    'gnsim;': '\u22e7',
    'Gopf;': '\U0001d53e',
    'gopf;': '\U0001d558',
    'grave;': '`',
    'GreaterEqual;': '\u2265',
    'GreaterEqualLess;': '\u22db',
    'GreaterFullEqual;': '\u2267',
    'GreaterGreater;': '\u2aa2',
    'GreaterLess;': '\u2277',
    'GreaterSlantEqual;': '\u2a7e',
    'GreaterTilde;': '\u2273',
    'Gscr;': '\U0001d4a2',
    'gscr;': '\u210a',
    'gsim;': '\u2273',
    'gsime;': '\u2a8e',
    'gsiml;': '\u2a90',
    'GT': '>',
    'gt': '>',
    'GT;': '>',
    'Gt;': '\u226b',
    'gt;': '>',
    'gtcc;': '\u2aa7',
    'gtcir;': '\u2a7a',
    'gtdot;': '\u22d7',
    'gtlPar;': '\u2995',
    'gtquest;': '\u2a7c',
    'gtrapprox;': '\u2a86',
    'gtrarr;': '\u2978',
    'gtrdot;': '\u22d7',
    'gtreqless;': '\u22db',
    'gtreqqless;': '\u2a8c',
    'gtrless;': '\u2277',
    'gtrsim;': '\u2273',
    'gvertneqq;': '\u2269\ufe00',
    'gvnE;': '\u2269\ufe00',
    'Hacek;': '\u02c7',
    'hairsp;': '\u200a',
    'half;': '\xbd',
    'hamilt;': '\u210b',
    'HARDcy;': '\u042a',
    'hardcy;': '\u044a',
    'hArr;': '\u21d4',
    'harr;': '\u2194',
    'harrcir;': '\u2948',
    'harrw;': '\u21ad',
    'Hat;': '^',
    'hbar;': '\u210f',
    'Hcirc;': '\u0124',
    'hcirc;': '\u0125',
    'hearts;': '\u2665',
    'heartsuit;': '\u2665',
    'hellip;': '\u2026',
    'hercon;': '\u22b9',
    'Hfr;': '\u210c',
    'hfr;': '\U0001d525',
    'HilbertSpace;': '\u210b',
    'hksearow;': '\u2925',
    'hkswarow;': '\u2926',
    'hoarr;': '\u21ff',
    'homtht;': '\u223b',
    'hookleftarrow;': '\u21a9',
    'hookrightarrow;': '\u21aa',
    'Hopf;': '\u210d',
    'hopf;': '\U0001d559',
    'horbar;': '\u2015',
    'HorizontalLine;': '\u2500',
    'Hscr;': '\u210b',
    'hscr;': '\U0001d4bd',
    'hslash;': '\u210f',
    'Hstrok;': '\u0126',
    'hstrok;': '\u0127',
    'HumpDownHump;': '\u224e',
    'HumpEqual;': '\u224f',
    'hybull;': '\u2043',
    'hyphen;': '\u2010',
    'Iacute': '\xcd',
    'iacute': '\xed',
    'Iacute;': '\xcd',
    'iacute;': '\xed',
    'ic;': '\u2063',
    'Icirc': '\xce',
    'icirc': '\xee',
    'Icirc;': '\xce',
    'icirc;': '\xee',
    'Icy;': '\u0418',
    'icy;': '\u0438',
    'Idot;': '\u0130',
    'IEcy;': '\u0415',
    'iecy;': '\u0435',
    'iexcl': '\xa1',
    'iexcl;': '\xa1',
    'iff;': '\u21d4',
    'Ifr;': '\u2111',
    'ifr;': '\U0001d526',
    'Igrave': '\xcc',
    'igrave': '\xec',
    'Igrave;': '\xcc',
    'igrave;': '\xec',
    'ii;': '\u2148',
    'iiiint;': '\u2a0c',
    'iiint;': '\u222d',
    'iinfin;': '\u29dc',
    'iiota;': '\u2129',
    'IJlig;': '\u0132',
    'ijlig;': '\u0133',
    'Im;': '\u2111',
    'Imacr;': '\u012a',
    'imacr;': '\u012b',
    'image;': '\u2111',
    'ImaginaryI;': '\u2148',
    'imagline;': '\u2110',
    'imagpart;': '\u2111',
    'imath;': '\u0131',
    'imof;': '\u22b7',
    'imped;': '\u01b5',
    'Implies;': '\u21d2',
    'in;': '\u2208',
    'incare;': '\u2105',
    'infin;': '\u221e',
    'infintie;': '\u29dd',
    'inodot;': '\u0131',
    'Int;': '\u222c',
    'int;': '\u222b',
    'intcal;': '\u22ba',
    'integers;': '\u2124',
    'Integral;': '\u222b',
    'intercal;': '\u22ba',
    'Intersection;': '\u22c2',
    'intlarhk;': '\u2a17',
    'intprod;': '\u2a3c',
    'InvisibleComma;': '\u2063',
    'InvisibleTimes;': '\u2062',
    'IOcy;': '\u0401',
    'iocy;': '\u0451',
    'Iogon;': '\u012e',
    'iogon;': '\u012f',
    'Iopf;': '\U0001d540',
    'iopf;': '\U0001d55a',
    'Iota;': '\u0399',
    'iota;': '\u03b9',
    'iprod;': '\u2a3c',
    'iquest': '\xbf',
    'iquest;': '\xbf',
    'Iscr;': '\u2110',
    'iscr;': '\U0001d4be',
    'isin;': '\u2208',
    'isindot;': '\u22f5',
    'isinE;': '\u22f9',
    'isins;': '\u22f4',
    'isinsv;': '\u22f3',
    'isinv;': '\u2208',
    'it;': '\u2062',
    'Itilde;': '\u0128',
    'itilde;': '\u0129',
    'Iukcy;': '\u0406',
    'iukcy;': '\u0456',
    'Iuml': '\xcf',
    'iuml': '\xef',
    'Iuml;': '\xcf',
    'iuml;': '\xef',
    'Jcirc;': '\u0134',
    'jcirc;': '\u0135',
    'Jcy;': '\u0419',
    'jcy;': '\u0439',
    'Jfr;': '\U0001d50d',
    'jfr;': '\U0001d527',
    'jmath;': '\u0237',
    'Jopf;': '\U0001d541',
    'jopf;': '\U0001d55b',
    'Jscr;': '\U0001d4a5',
    'jscr;': '\U0001d4bf',
    'Jsercy;': '\u0408',
    'jsercy;': '\u0458',
    'Jukcy;': '\u0404',
    'jukcy;': '\u0454',
    'Kappa;': '\u039a',
    'kappa;': '\u03ba',
    'kappav;': '\u03f0',
    'Kcedil;': '\u0136',
    'kcedil;': '\u0137',
    'Kcy;': '\u041a',
    'kcy;': '\u043a',
    'Kfr;': '\U0001d50e',
    'kfr;': '\U0001d528',
    'kgreen;': '\u0138',
    'KHcy;': '\u0425',
    'khcy;': '\u0445',
    'KJcy;': '\u040c',
    'kjcy;': '\u045c',
    'Kopf;': '\U0001d542',
    'kopf;': '\U0001d55c',
    'Kscr;': '\U0001d4a6',
    'kscr;': '\U0001d4c0',
    'lAarr;': '\u21da',
    'Lacute;': '\u0139',
    'lacute;': '\u013a',
    'laemptyv;': '\u29b4',
    'lagran;': '\u2112',
    'Lambda;': '\u039b',
    'lambda;': '\u03bb',
    'Lang;': '\u27ea',
    'lang;': '\u27e8',
    'langd;': '\u2991',
    'langle;': '\u27e8',
    'lap;': '\u2a85',
    'Laplacetrf;': '\u2112',
    'laquo': '\xab',
    'laquo;': '\xab',
    'Larr;': '\u219e',
    'lArr;': '\u21d0',
    'larr;': '\u2190',
    'larrb;': '\u21e4',
    'larrbfs;': '\u291f',
    'larrfs;': '\u291d',
    'larrhk;': '\u21a9',
    'larrlp;': '\u21ab',
    'larrpl;': '\u2939',
    'larrsim;': '\u2973',
    'larrtl;': '\u21a2',
    'lat;': '\u2aab',
    'lAtail;': '\u291b',
    'latail;': '\u2919',
    'late;': '\u2aad',
    'lates;': '\u2aad\ufe00',
    'lBarr;': '\u290e',
    'lbarr;': '\u290c',
    'lbbrk;': '\u2772',
    'lbrace;': '{',
    'lbrack;': '[',
    'lbrke;': '\u298b',
    'lbrksld;': '\u298f',
    'lbrkslu;': '\u298d',
    'Lcaron;': '\u013d',
    'lcaron;': '\u013e',
    'Lcedil;': '\u013b',
    'lcedil;': '\u013c',
    'lceil;': '\u2308',
    'lcub;': '{',
    'Lcy;': '\u041b',
    'lcy;': '\u043b',
    'ldca;': '\u2936',
    'ldquo;': '\u201c',
    'ldquor;': '\u201e',
    'ldrdhar;': '\u2967',
    'ldrushar;': '\u294b',
    'ldsh;': '\u21b2',
    'lE;': '\u2266',
    'le;': '\u2264',
    'LeftAngleBracket;': '\u27e8',
    'LeftArrow;': '\u2190',
    'Leftarrow;': '\u21d0',
    'leftarrow;': '\u2190',
    'LeftArrowBar;': '\u21e4',
    'LeftArrowRightArrow;': '\u21c6',
    'leftarrowtail;': '\u21a2',
    'LeftCeiling;': '\u2308',
    'LeftDoubleBracket;': '\u27e6',
    'LeftDownTeeVector;': '\u2961',
    'LeftDownVector;': '\u21c3',
    'LeftDownVectorBar;': '\u2959',
    'LeftFloor;': '\u230a',
    'leftharpoondown;': '\u21bd',
    'leftharpoonup;': '\u21bc',
    'leftleftarrows;': '\u21c7',
    'LeftRightArrow;': '\u2194',
    'Leftrightarrow;': '\u21d4',
    'leftrightarrow;': '\u2194',
    'leftrightarrows;': '\u21c6',
    'leftrightharpoons;': '\u21cb',
    'leftrightsquigarrow;': '\u21ad',
    'LeftRightVector;': '\u294e',
    'LeftTee;': '\u22a3',
    'LeftTeeArrow;': '\u21a4',
    'LeftTeeVector;': '\u295a',
    'leftthreetimes;': '\u22cb',
    'LeftTriangle;': '\u22b2',
    'LeftTriangleBar;': '\u29cf',
    'LeftTriangleEqual;': '\u22b4',
    'LeftUpDownVector;': '\u2951',
    'LeftUpTeeVector;': '\u2960',
    'LeftUpVector;': '\u21bf',
    'LeftUpVectorBar;': '\u2958',
    'LeftVector;': '\u21bc',
    'LeftVectorBar;': '\u2952',
    'lEg;': '\u2a8b',
    'leg;': '\u22da',
    'leq;': '\u2264',
    'leqq;': '\u2266',
    'leqslant;': '\u2a7d',
    'les;': '\u2a7d',
    'lescc;': '\u2aa8',
    'lesdot;': '\u2a7f',
    'lesdoto;': '\u2a81',
    'lesdotor;': '\u2a83',
    'lesg;': '\u22da\ufe00',
    'lesges;': '\u2a93',
    'lessapprox;': '\u2a85',
    'lessdot;': '\u22d6',
    'lesseqgtr;': '\u22da',
    'lesseqqgtr;': '\u2a8b',
    'LessEqualGreater;': '\u22da',
    'LessFullEqual;': '\u2266',
    'LessGreater;': '\u2276',
    'lessgtr;': '\u2276',
    'LessLess;': '\u2aa1',
    'lesssim;': '\u2272',
    'LessSlantEqual;': '\u2a7d',
    'LessTilde;': '\u2272',
    'lfisht;': '\u297c',
    'lfloor;': '\u230a',
    'Lfr;': '\U0001d50f',
    'lfr;': '\U0001d529',
    'lg;': '\u2276',
    'lgE;': '\u2a91',
    'lHar;': '\u2962',
    'lhard;': '\u21bd',
    'lharu;': '\u21bc',
    'lharul;': '\u296a',
    'lhblk;': '\u2584',
    'LJcy;': '\u0409',
    'ljcy;': '\u0459',
    'Ll;': '\u22d8',
    'll;': '\u226a',
    'llarr;': '\u21c7',
    'llcorner;': '\u231e',
    'Lleftarrow;': '\u21da',
    'llhard;': '\u296b',
    'lltri;': '\u25fa',
    'Lmidot;': '\u013f',
    'lmidot;': '\u0140',
    'lmoust;': '\u23b0',
    'lmoustache;': '\u23b0',
    'lnap;': '\u2a89',
    'lnapprox;': '\u2a89',
    'lnE;': '\u2268',
    'lne;': '\u2a87',
    'lneq;': '\u2a87',
    'lneqq;': '\u2268',
    'lnsim;': '\u22e6',
    'loang;': '\u27ec',
    'loarr;': '\u21fd',
    'lobrk;': '\u27e6',
    'LongLeftArrow;': '\u27f5',
    'Longleftarrow;': '\u27f8',
    'longleftarrow;': '\u27f5',
    'LongLeftRightArrow;': '\u27f7',
    'Longleftrightarrow;': '\u27fa',
    'longleftrightarrow;': '\u27f7',
    'longmapsto;': '\u27fc',
    'LongRightArrow;': '\u27f6',
    'Longrightarrow;': '\u27f9',
    'longrightarrow;': '\u27f6',
    'looparrowleft;': '\u21ab',
    'looparrowright;': '\u21ac',
    'lopar;': '\u2985',
    'Lopf;': '\U0001d543',
    'lopf;': '\U0001d55d',
    'loplus;': '\u2a2d',
    'lotimes;': '\u2a34',
    'lowast;': '\u2217',
    'lowbar;': '_',
    'LowerLeftArrow;': '\u2199',
    'LowerRightArrow;': '\u2198',
    'loz;': '\u25ca',
    'lozenge;': '\u25ca',
    'lozf;': '\u29eb',
    'lpar;': '(',
    'lparlt;': '\u2993',
    'lrarr;': '\u21c6',
    'lrcorner;': '\u231f',
    'lrhar;': '\u21cb',
    'lrhard;': '\u296d',
    'lrm;': '\u200e',
    'lrtri;': '\u22bf',
    'lsaquo;': '\u2039',
    'Lscr;': '\u2112',
    'lscr;': '\U0001d4c1',
    'Lsh;': '\u21b0',
    'lsh;': '\u21b0',
    'lsim;': '\u2272',
    'lsime;': '\u2a8d',
    'lsimg;': '\u2a8f',
    'lsqb;': '[',
    'lsquo;': '\u2018',
    'lsquor;': '\u201a',
    'Lstrok;': '\u0141',
    'lstrok;': '\u0142',
    'LT': '<',
    'lt': '<',
    'LT;': '<',
    'Lt;': '\u226a',
    'lt;': '<',
    'ltcc;': '\u2aa6',
    'ltcir;': '\u2a79',
    'ltdot;': '\u22d6',
    'lthree;': '\u22cb',
    'ltimes;': '\u22c9',
    'ltlarr;': '\u2976',
    'ltquest;': '\u2a7b',
    'ltri;': '\u25c3',
    'ltrie;': '\u22b4',
    'ltrif;': '\u25c2',
    'ltrPar;': '\u2996',
    'lurdshar;': '\u294a',
    'luruhar;': '\u2966',
    'lvertneqq;': '\u2268\ufe00',
    'lvnE;': '\u2268\ufe00',
    'macr': '\xaf',
    'macr;': '\xaf',
    'male;': '\u2642',
    'malt;': '\u2720',
    'maltese;': '\u2720',
    'Map;': '\u2905',
    'map;': '\u21a6',
    'mapsto;': '\u21a6',
    'mapstodown;': '\u21a7',
    'mapstoleft;': '\u21a4',
    'mapstoup;': '\u21a5',
    'marker;': '\u25ae',
    'mcomma;': '\u2a29',
    'Mcy;': '\u041c',
    'mcy;': '\u043c',
    'mdash;': '\u2014',
    'mDDot;': '\u223a',
    'measuredangle;': '\u2221',
    'MediumSpace;': '\u205f',
    'Mellintrf;': '\u2133',
    'Mfr;': '\U0001d510',
    'mfr;': '\U0001d52a',
    'mho;': '\u2127',
    'micro': '\xb5',
    'micro;': '\xb5',
    'mid;': '\u2223',
    'midast;': '*',
    'midcir;': '\u2af0',
    'middot': '\xb7',
    'middot;': '\xb7',
    'minus;': '\u2212',
    'minusb;': '\u229f',
    'minusd;': '\u2238',
    'minusdu;': '\u2a2a',
    'MinusPlus;': '\u2213',
    'mlcp;': '\u2adb',
    'mldr;': '\u2026',
    'mnplus;': '\u2213',
    'models;': '\u22a7',
    'Mopf;': '\U0001d544',
    'mopf;': '\U0001d55e',
    'mp;': '\u2213',
    'Mscr;': '\u2133',
    'mscr;': '\U0001d4c2',
    'mstpos;': '\u223e',
    'Mu;': '\u039c',
    'mu;': '\u03bc',
    'multimap;': '\u22b8',
    'mumap;': '\u22b8',
    'nabla;': '\u2207',
    'Nacute;': '\u0143',
    'nacute;': '\u0144',
    'nang;': '\u2220\u20d2',
    'nap;': '\u2249',
    'napE;': '\u2a70\u0338',
    'napid;': '\u224b\u0338',
    'napos;': '\u0149',
    'napprox;': '\u2249',
    'natur;': '\u266e',
    'natural;': '\u266e',
    'naturals;': '\u2115',
    'nbsp': '\xa0',
    'nbsp;': '\xa0',
    'nbump;': '\u224e\u0338',
    'nbumpe;': '\u224f\u0338',
    'ncap;': '\u2a43',
    'Ncaron;': '\u0147',
    'ncaron;': '\u0148',
    'Ncedil;': '\u0145',
    'ncedil;': '\u0146',
    'ncong;': '\u2247',
    'ncongdot;': '\u2a6d\u0338',
    'ncup;': '\u2a42',
    'Ncy;': '\u041d',
    'ncy;': '\u043d',
    'ndash;': '\u2013',
    'ne;': '\u2260',
    'nearhk;': '\u2924',
    'neArr;': '\u21d7',
    'nearr;': '\u2197',
    'nearrow;': '\u2197',
    'nedot;': '\u2250\u0338',
    'NegativeMediumSpace;': '\u200b',
    'NegativeThickSpace;': '\u200b',
    'NegativeThinSpace;': '\u200b',
    'NegativeVeryThinSpace;': '\u200b',
    'nequiv;': '\u2262',
    'nesear;': '\u2928',
    'nesim;': '\u2242\u0338',
    'NestedGreaterGreater;': '\u226b',
    'NestedLessLess;': '\u226a',
    'NewLine;': '\n',
    'nexist;': '\u2204',
    'nexists;': '\u2204',
    'Nfr;': '\U0001d511',
    'nfr;': '\U0001d52b',
    'ngE;': '\u2267\u0338',
    'nge;': '\u2271',
    'ngeq;': '\u2271',
    'ngeqq;': '\u2267\u0338',
    'ngeqslant;': '\u2a7e\u0338',
    'nges;': '\u2a7e\u0338',
    'nGg;': '\u22d9\u0338',
    'ngsim;': '\u2275',
    'nGt;': '\u226b\u20d2',
    'ngt;': '\u226f',
    'ngtr;': '\u226f',
    'nGtv;': '\u226b\u0338',
    'nhArr;': '\u21ce',
    'nharr;': '\u21ae',
    'nhpar;': '\u2af2',
    'ni;': '\u220b',
    'nis;': '\u22fc',
    'nisd;': '\u22fa',
    'niv;': '\u220b',
    'NJcy;': '\u040a',
    'njcy;': '\u045a',
    'nlArr;': '\u21cd',
    'nlarr;': '\u219a',
    'nldr;': '\u2025',
    'nlE;': '\u2266\u0338',
    'nle;': '\u2270',
    'nLeftarrow;': '\u21cd',
    'nleftarrow;': '\u219a',
    'nLeftrightarrow;': '\u21ce',
    'nleftrightarrow;': '\u21ae',
    'nleq;': '\u2270',
    'nleqq;': '\u2266\u0338',
    'nleqslant;': '\u2a7d\u0338',
    'nles;': '\u2a7d\u0338',
    'nless;': '\u226e',
    'nLl;': '\u22d8\u0338',
    'nlsim;': '\u2274',
    'nLt;': '\u226a\u20d2',
    'nlt;': '\u226e',
    'nltri;': '\u22ea',
    'nltrie;': '\u22ec',
    'nLtv;': '\u226a\u0338',
    'nmid;': '\u2224',
    'NoBreak;': '\u2060',
    'NonBreakingSpace;': '\xa0',
    'Nopf;': '\u2115',
    'nopf;': '\U0001d55f',
    'not': '\xac',
    'Not;': '\u2aec',
    'not;': '\xac',
    'NotCongruent;': '\u2262',
    'NotCupCap;': '\u226d',
    'NotDoubleVerticalBar;': '\u2226',
    'NotElement;': '\u2209',
    'NotEqual;': '\u2260',
    'NotEqualTilde;': '\u2242\u0338',
    'NotExists;': '\u2204',
    'NotGreater;': '\u226f',
    'NotGreaterEqual;': '\u2271',
    'NotGreaterFullEqual;': '\u2267\u0338',
    'NotGreaterGreater;': '\u226b\u0338',
    'NotGreaterLess;': '\u2279',
    'NotGreaterSlantEqual;': '\u2a7e\u0338',
    'NotGreaterTilde;': '\u2275',
    'NotHumpDownHump;': '\u224e\u0338',
    'NotHumpEqual;': '\u224f\u0338',
    'notin;': '\u2209',
    'notindot;': '\u22f5\u0338',
    'notinE;': '\u22f9\u0338',
    'notinva;': '\u2209',
    'notinvb;': '\u22f7',
    'notinvc;': '\u22f6',
    'NotLeftTriangle;': '\u22ea',
    'NotLeftTriangleBar;': '\u29cf\u0338',
    'NotLeftTriangleEqual;': '\u22ec',
    'NotLess;': '\u226e',
    'NotLessEqual;': '\u2270',
    'NotLessGreater;': '\u2278',
    'NotLessLess;': '\u226a\u0338',
    'NotLessSlantEqual;': '\u2a7d\u0338',
    'NotLessTilde;': '\u2274',
    'NotNestedGreaterGreater;': '\u2aa2\u0338',
    'NotNestedLessLess;': '\u2aa1\u0338',
    'notni;': '\u220c',
    'notniva;': '\u220c',
    'notnivb;': '\u22fe',
    'notnivc;': '\u22fd',
    'NotPrecedes;': '\u2280',
    'NotPrecedesEqual;': '\u2aaf\u0338',
    'NotPrecedesSlantEqual;': '\u22e0',
    'NotReverseElement;': '\u220c',
    'NotRightTriangle;': '\u22eb',
    'NotRightTriangleBar;': '\u29d0\u0338',
    'NotRightTriangleEqual;': '\u22ed',
    'NotSquareSubset;': '\u228f\u0338',
    'NotSquareSubsetEqual;': '\u22e2',
    'NotSquareSuperset;': '\u2290\u0338',
    'NotSquareSupersetEqual;': '\u22e3',
    'NotSubset;': '\u2282\u20d2',
    'NotSubsetEqual;': '\u2288',
    'NotSucceeds;': '\u2281',
    'NotSucceedsEqual;': '\u2ab0\u0338',
    'NotSucceedsSlantEqual;': '\u22e1',
    'NotSucceedsTilde;': '\u227f\u0338',
    'NotSuperset;': '\u2283\u20d2',
    'NotSupersetEqual;': '\u2289',
    'NotTilde;': '\u2241',
    'NotTildeEqual;': '\u2244',
    'NotTildeFullEqual;': '\u2247',
    'NotTildeTilde;': '\u2249',
    'NotVerticalBar;': '\u2224',
    'npar;': '\u2226',
    'nparallel;': '\u2226',
    'nparsl;': '\u2afd\u20e5',
    'npart;': '\u2202\u0338',
    'npolint;': '\u2a14',
    'npr;': '\u2280',
    'nprcue;': '\u22e0',
    'npre;': '\u2aaf\u0338',
    'nprec;': '\u2280',
    'npreceq;': '\u2aaf\u0338',
    'nrArr;': '\u21cf',
    'nrarr;': '\u219b',
    'nrarrc;': '\u2933\u0338',
    'nrarrw;': '\u219d\u0338',
    'nRightarrow;': '\u21cf',
    'nrightarrow;': '\u219b',
    'nrtri;': '\u22eb',
    'nrtrie;': '\u22ed',
    'nsc;': '\u2281',
    'nsccue;': '\u22e1',
    'nsce;': '\u2ab0\u0338',
    'Nscr;': '\U0001d4a9',
    'nscr;': '\U0001d4c3',
    'nshortmid;': '\u2224',
    'nshortparallel;': '\u2226',
    'nsim;': '\u2241',
    'nsime;': '\u2244',
    'nsimeq;': '\u2244',
    'nsmid;': '\u2224',
    'nspar;': '\u2226',
    'nsqsube;': '\u22e2',
    'nsqsupe;': '\u22e3',
    'nsub;': '\u2284',
    'nsubE;': '\u2ac5\u0338',
    'nsube;': '\u2288',
    'nsubset;': '\u2282\u20d2',
    'nsubseteq;': '\u2288',
    'nsubseteqq;': '\u2ac5\u0338',
    'nsucc;': '\u2281',
    'nsucceq;': '\u2ab0\u0338',
    'nsup;': '\u2285',
    'nsupE;': '\u2ac6\u0338',
    'nsupe;': '\u2289',
    'nsupset;': '\u2283\u20d2',
    'nsupseteq;': '\u2289',
    'nsupseteqq;': '\u2ac6\u0338',
    'ntgl;': '\u2279',
    'Ntilde': '\xd1',
    'ntilde': '\xf1',
    'Ntilde;': '\xd1',
    'ntilde;': '\xf1',
    'ntlg;': '\u2278',
    'ntriangleleft;': '\u22ea',
    'ntrianglelefteq;': '\u22ec',
    'ntriangleright;': '\u22eb',
    'ntrianglerighteq;': '\u22ed',
    'Nu;': '\u039d',
    'nu;': '\u03bd',
    'num;': '#',
    'numero;': '\u2116',
    'numsp;': '\u2007',
    'nvap;': '\u224d\u20d2',
    'nVDash;': '\u22af',
    'nVdash;': '\u22ae',
    'nvDash;': '\u22ad',
    'nvdash;': '\u22ac',
    'nvge;': '\u2265\u20d2',
    'nvgt;': '>\u20d2',
    'nvHarr;': '\u2904',
    'nvinfin;': '\u29de',
    'nvlArr;': '\u2902',
    'nvle;': '\u2264\u20d2',
    'nvlt;': '<\u20d2',
    'nvltrie;': '\u22b4\u20d2',
    'nvrArr;': '\u2903',
    'nvrtrie;': '\u22b5\u20d2',
    'nvsim;': '\u223c\u20d2',
    'nwarhk;': '\u2923',
    'nwArr;': '\u21d6',
    'nwarr;': '\u2196',
    'nwarrow;': '\u2196',
    'nwnear;': '\u2927',
    'Oacute': '\xd3',
    'oacute': '\xf3',
    'Oacute;': '\xd3',
    'oacute;': '\xf3',
    'oast;': '\u229b',
    'ocir;': '\u229a',
    'Ocirc': '\xd4',
    'ocirc': '\xf4',
    'Ocirc;': '\xd4',
    'ocirc;': '\xf4',
    'Ocy;': '\u041e',
    'ocy;': '\u043e',
    'odash;': '\u229d',
    'Odblac;': '\u0150',
    'odblac;': '\u0151',
    'odiv;': '\u2a38',
    'odot;': '\u2299',
    'odsold;': '\u29bc',
    'OElig;': '\u0152',
    'oelig;': '\u0153',
    'ofcir;': '\u29bf',
    'Ofr;': '\U0001d512',
    'ofr;': '\U0001d52c',
    'ogon;': '\u02db',
    'Ograve': '\xd2',
    'ograve': '\xf2',
    'Ograve;': '\xd2',
    'ograve;': '\xf2',
    'ogt;': '\u29c1',
    'ohbar;': '\u29b5',
    'ohm;': '\u03a9',
    'oint;': '\u222e',
    'olarr;': '\u21ba',
    'olcir;': '\u29be',
    'olcross;': '\u29bb',
    'oline;': '\u203e',
    'olt;': '\u29c0',
    'Omacr;': '\u014c',
    'omacr;': '\u014d',
    'Omega;': '\u03a9',
    'omega;': '\u03c9',
    'Omicron;': '\u039f',
    'omicron;': '\u03bf',
    'omid;': '\u29b6',
    'ominus;': '\u2296',
    'Oopf;': '\U0001d546',
    'oopf;': '\U0001d560',
    'opar;': '\u29b7',
    'OpenCurlyDoubleQuote;': '\u201c',
    'OpenCurlyQuote;': '\u2018',
    'operp;': '\u29b9',
    'oplus;': '\u2295',
    'Or;': '\u2a54',
    'or;': '\u2228',
    'orarr;': '\u21bb',
    'ord;': '\u2a5d',
    'order;': '\u2134',
    'orderof;': '\u2134',
    'ordf': '\xaa',
    'ordf;': '\xaa',
    'ordm': '\xba',
    'ordm;': '\xba',
    'origof;': '\u22b6',
    'oror;': '\u2a56',
    'orslope;': '\u2a57',
    'orv;': '\u2a5b',
    'oS;': '\u24c8',
    'Oscr;': '\U0001d4aa',
    'oscr;': '\u2134',
    'Oslash': '\xd8',
    'oslash': '\xf8',
    'Oslash;': '\xd8',
    'oslash;': '\xf8',
    'osol;': '\u2298',
    'Otilde': '\xd5',
    'otilde': '\xf5',
    'Otilde;': '\xd5',
    'otilde;': '\xf5',
    'Otimes;': '\u2a37',
    'otimes;': '\u2297',
    'otimesas;': '\u2a36',
    'Ouml': '\xd6',
    'ouml': '\xf6',
    'Ouml;': '\xd6',
    'ouml;': '\xf6',
    'ovbar;': '\u233d',
    'OverBar;': '\u203e',
    'OverBrace;': '\u23de',
    'OverBracket;': '\u23b4',
    'OverParenthesis;': '\u23dc',
    'par;': '\u2225',
    'para': '\xb6',
    'para;': '\xb6',
    'parallel;': '\u2225',
    'parsim;': '\u2af3',
    'parsl;': '\u2afd',
    'part;': '\u2202',
    'PartialD;': '\u2202',
    'Pcy;': '\u041f',
    'pcy;': '\u043f',
    'percnt;': '%',
    'period;': '.',
    'permil;': '\u2030',
    'perp;': '\u22a5',
    'pertenk;': '\u2031',
    'Pfr;': '\U0001d513',
    'pfr;': '\U0001d52d',
    'Phi;': '\u03a6',
    'phi;': '\u03c6',
    'phiv;': '\u03d5',
    'phmmat;': '\u2133',
    'phone;': '\u260e',
    'Pi;': '\u03a0',
    'pi;': '\u03c0',
    'pitchfork;': '\u22d4',
    'piv;': '\u03d6',
    'planck;': '\u210f',
    'planckh;': '\u210e',
    'plankv;': '\u210f',
    'plus;': '+',
    'plusacir;': '\u2a23',
    'plusb;': '\u229e',
    'pluscir;': '\u2a22',
    'plusdo;': '\u2214',
    'plusdu;': '\u2a25',
    'pluse;': '\u2a72',
    'PlusMinus;': '\xb1',
    'plusmn': '\xb1',
    'plusmn;': '\xb1',
    'plussim;': '\u2a26',
    'plustwo;': '\u2a27',
    'pm;': '\xb1',
    'Poincareplane;': '\u210c',
    'pointint;': '\u2a15',
    'Popf;': '\u2119',
    'popf;': '\U0001d561',
    'pound': '\xa3',
    'pound;': '\xa3',
    'Pr;': '\u2abb',
    'pr;': '\u227a',
    'prap;': '\u2ab7',
    'prcue;': '\u227c',
    'prE;': '\u2ab3',
    'pre;': '\u2aaf',
    'prec;': '\u227a',
    'precapprox;': '\u2ab7',
    'preccurlyeq;': '\u227c',
    'Precedes;': '\u227a',
    'PrecedesEqual;': '\u2aaf',
    'PrecedesSlantEqual;': '\u227c',
    'PrecedesTilde;': '\u227e',
    'preceq;': '\u2aaf',
    'precnapprox;': '\u2ab9',
    'precneqq;': '\u2ab5',
    'precnsim;': '\u22e8',
    'precsim;': '\u227e',
    'Prime;': '\u2033',
    'prime;': '\u2032',
    'primes;': '\u2119',
    'prnap;': '\u2ab9',
    'prnE;': '\u2ab5',
    'prnsim;': '\u22e8',
    'prod;': '\u220f',
    'Product;': '\u220f',
    'profalar;': '\u232e',
    'profline;': '\u2312',
    'profsurf;': '\u2313',
    'prop;': '\u221d',
    'Proportion;': '\u2237',
    'Proportional;': '\u221d',
    'propto;': '\u221d',
    'prsim;': '\u227e',
    'prurel;': '\u22b0',
    'Pscr;': '\U0001d4ab',
    'pscr;': '\U0001d4c5',
    'Psi;': '\u03a8',
    'psi;': '\u03c8',
    'puncsp;': '\u2008',
    'Qfr;': '\U0001d514',
    'qfr;': '\U0001d52e',
    'qint;': '\u2a0c',
    'Qopf;': '\u211a',
    'qopf;': '\U0001d562',
    'qprime;': '\u2057',
    'Qscr;': '\U0001d4ac',
    'qscr;': '\U0001d4c6',
    'quaternions;': '\u210d',
    'quatint;': '\u2a16',
    'quest;': '?',
    'questeq;': '\u225f',
    'QUOT': '"',
    'quot': '"',
    'QUOT;': '"',
    'quot;': '"',
    'rAarr;': '\u21db',
    'race;': '\u223d\u0331',
    'Racute;': '\u0154',
    'racute;': '\u0155',
    'radic;': '\u221a',
    'raemptyv;': '\u29b3',
    'Rang;': '\u27eb',
    'rang;': '\u27e9',
    'rangd;': '\u2992',
    'range;': '\u29a5',
    'rangle;': '\u27e9',
    'raquo': '\xbb',
    'raquo;': '\xbb',
    'Rarr;': '\u21a0',
    'rArr;': '\u21d2',
    'rarr;': '\u2192',
    'rarrap;': '\u2975',
    'rarrb;': '\u21e5',
    'rarrbfs;': '\u2920',
    'rarrc;': '\u2933',
    'rarrfs;': '\u291e',
    'rarrhk;': '\u21aa',
    'rarrlp;': '\u21ac',
    'rarrpl;': '\u2945',
    'rarrsim;': '\u2974',
    'Rarrtl;': '\u2916',
    'rarrtl;': '\u21a3',
    'rarrw;': '\u219d',
    'rAtail;': '\u291c',
    'ratail;': '\u291a',
    'ratio;': '\u2236',
    'rationals;': '\u211a',
    'RBarr;': '\u2910',
    'rBarr;': '\u290f',
    'rbarr;': '\u290d',
    'rbbrk;': '\u2773',
    'rbrace;': '}',
    'rbrack;': ']',
    'rbrke;': '\u298c',
    'rbrksld;': '\u298e',
    'rbrkslu;': '\u2990',
    'Rcaron;': '\u0158',
    'rcaron;': '\u0159',
    'Rcedil;': '\u0156',
    'rcedil;': '\u0157',
    'rceil;': '\u2309',
    'rcub;': '}',
    'Rcy;': '\u0420',
    'rcy;': '\u0440',
    'rdca;': '\u2937',
    'rdldhar;': '\u2969',
    'rdquo;': '\u201d',
    'rdquor;': '\u201d',
    'rdsh;': '\u21b3',
    'Re;': '\u211c',
    'real;': '\u211c',
    'realine;': '\u211b',
    'realpart;': '\u211c',
    'reals;': '\u211d',
    'rect;': '\u25ad',
    'REG': '\xae',
    'reg': '\xae',
    'REG;': '\xae',
    'reg;': '\xae',
    'ReverseElement;': '\u220b',
    'ReverseEquilibrium;': '\u21cb',
    'ReverseUpEquilibrium;': '\u296f',
    'rfisht;': '\u297d',
    'rfloor;': '\u230b',
    'Rfr;': '\u211c',
    'rfr;': '\U0001d52f',
    'rHar;': '\u2964',
    'rhard;': '\u21c1',
    'rharu;': '\u21c0',
    'rharul;': '\u296c',
    'Rho;': '\u03a1',
    'rho;': '\u03c1',
    'rhov;': '\u03f1',
    'RightAngleBracket;': '\u27e9',
    'RightArrow;': '\u2192',
    'Rightarrow;': '\u21d2',
    'rightarrow;': '\u2192',
    'RightArrowBar;': '\u21e5',
    'RightArrowLeftArrow;': '\u21c4',
    'rightarrowtail;': '\u21a3',
    'RightCeiling;': '\u2309',
    'RightDoubleBracket;': '\u27e7',
    'RightDownTeeVector;': '\u295d',
    'RightDownVector;': '\u21c2',
    'RightDownVectorBar;': '\u2955',
    'RightFloor;': '\u230b',
    'rightharpoondown;': '\u21c1',
    'rightharpoonup;': '\u21c0',
    'rightleftarrows;': '\u21c4',
    'rightleftharpoons;': '\u21cc',
    'rightrightarrows;': '\u21c9',
    'rightsquigarrow;': '\u219d',
    'RightTee;': '\u22a2',
    'RightTeeArrow;': '\u21a6',
    'RightTeeVector;': '\u295b',
    'rightthreetimes;': '\u22cc',
    'RightTriangle;': '\u22b3',
    'RightTriangleBar;': '\u29d0',
    'RightTriangleEqual;': '\u22b5',
    'RightUpDownVector;': '\u294f',
    'RightUpTeeVector;': '\u295c',
    'RightUpVector;': '\u21be',
    'RightUpVectorBar;': '\u2954',
    'RightVector;': '\u21c0',
    'RightVectorBar;': '\u2953',
    'ring;': '\u02da',
    'risingdotseq;': '\u2253',
    'rlarr;': '\u21c4',
    'rlhar;': '\u21cc',
    'rlm;': '\u200f',
    'rmoust;': '\u23b1',
    'rmoustache;': '\u23b1',
    'rnmid;': '\u2aee',
    'roang;': '\u27ed',
    'roarr;': '\u21fe',
    'robrk;': '\u27e7',
    'ropar;': '\u2986',
    'Ropf;': '\u211d',
    'ropf;': '\U0001d563',
    'roplus;': '\u2a2e',
    'rotimes;': '\u2a35',
    'RoundImplies;': '\u2970',
    'rpar;': ')',
    'rpargt;': '\u2994',
    'rppolint;': '\u2a12',
    'rrarr;': '\u21c9',
    'Rrightarrow;': '\u21db',
    'rsaquo;': '\u203a',
    'Rscr;': '\u211b',
    'rscr;': '\U0001d4c7',
    'Rsh;': '\u21b1',
    'rsh;': '\u21b1',
    'rsqb;': ']',
    'rsquo;': '\u2019',
    'rsquor;': '\u2019',
    'rthree;': '\u22cc',
    'rtimes;': '\u22ca',
    'rtri;': '\u25b9',
    'rtrie;': '\u22b5',
    'rtrif;': '\u25b8',
    'rtriltri;': '\u29ce',
    'RuleDelayed;': '\u29f4',
    'ruluhar;': '\u2968',
    'rx;': '\u211e',
    'Sacute;': '\u015a',
    'sacute;': '\u015b',
    'sbquo;': '\u201a',
    'Sc;': '\u2abc',
    'sc;': '\u227b',
    'scap;': '\u2ab8',
    'Scaron;': '\u0160',
    'scaron;': '\u0161',
    'sccue;': '\u227d',
    'scE;': '\u2ab4',
    'sce;': '\u2ab0',
    'Scedil;': '\u015e',
    'scedil;': '\u015f',
    'Scirc;': '\u015c',
    'scirc;': '\u015d',
    'scnap;': '\u2aba',
    'scnE;': '\u2ab6',
    'scnsim;': '\u22e9',
    'scpolint;': '\u2a13',
    'scsim;': '\u227f',
    'Scy;': '\u0421',
    'scy;': '\u0441',
    'sdot;': '\u22c5',
    'sdotb;': '\u22a1',
    'sdote;': '\u2a66',
    'searhk;': '\u2925',
    'seArr;': '\u21d8',
    'searr;': '\u2198',
    'searrow;': '\u2198',
    'sect': '\xa7',
    'sect;': '\xa7',
    'semi;': ';',
    'seswar;': '\u2929',
    'setminus;': '\u2216',
    'setmn;': '\u2216',
    'sext;': '\u2736',
    'Sfr;': '\U0001d516',
    'sfr;': '\U0001d530',
    'sfrown;': '\u2322',
    'sharp;': '\u266f',
    'SHCHcy;': '\u0429',
    'shchcy;': '\u0449',
    'SHcy;': '\u0428',
    'shcy;': '\u0448',
    'ShortDownArrow;': '\u2193',
    'ShortLeftArrow;': '\u2190',
    'shortmid;': '\u2223',
    'shortparallel;': '\u2225',
    'ShortRightArrow;': '\u2192',
    'ShortUpArrow;': '\u2191',
    'shy': '\xad',
    'shy;': '\xad',
    'Sigma;': '\u03a3',
    'sigma;': '\u03c3',
    'sigmaf;': '\u03c2',
    'sigmav;': '\u03c2',
    'sim;': '\u223c',
    'simdot;': '\u2a6a',
    'sime;': '\u2243',
    'simeq;': '\u2243',
    'simg;': '\u2a9e',
    'simgE;': '\u2aa0',
    'siml;': '\u2a9d',
    'simlE;': '\u2a9f',
    'simne;': '\u2246',
    'simplus;': '\u2a24',
    'simrarr;': '\u2972',
    'slarr;': '\u2190',
    'SmallCircle;': '\u2218',
    'smallsetminus;': '\u2216',
    'smashp;': '\u2a33',
    'smeparsl;': '\u29e4',
    'smid;': '\u2223',
    'smile;': '\u2323',
    'smt;': '\u2aaa',
    'smte;': '\u2aac',
    'smtes;': '\u2aac\ufe00',
    'SOFTcy;': '\u042c',
    'softcy;': '\u044c',
    'sol;': '/',
    'solb;': '\u29c4',
    'solbar;': '\u233f',
    'Sopf;': '\U0001d54a',
    'sopf;': '\U0001d564',
    'spades;': '\u2660',
    'spadesuit;': '\u2660',
    'spar;': '\u2225',
    'sqcap;': '\u2293',
    'sqcaps;': '\u2293\ufe00',
    'sqcup;': '\u2294',
    'sqcups;': '\u2294\ufe00',
    'Sqrt;': '\u221a',
    'sqsub;': '\u228f',
    'sqsube;': '\u2291',
    'sqsubset;': '\u228f',
    'sqsubseteq;': '\u2291',
    'sqsup;': '\u2290',
    'sqsupe;': '\u2292',
    'sqsupset;': '\u2290',
    'sqsupseteq;': '\u2292',
    'squ;': '\u25a1',
    'Square;': '\u25a1',
    'square;': '\u25a1',
    'SquareIntersection;': '\u2293',
    'SquareSubset;': '\u228f',
    'SquareSubsetEqual;': '\u2291',
    'SquareSuperset;': '\u2290',
    'SquareSupersetEqual;': '\u2292',
    'SquareUnion;': '\u2294',
    'squarf;': '\u25aa',
    'squf;': '\u25aa',
    'srarr;': '\u2192',
    'Sscr;': '\U0001d4ae',
    'sscr;': '\U0001d4c8',
    'ssetmn;': '\u2216',
    'ssmile;': '\u2323',
    'sstarf;': '\u22c6',
    'Star;': '\u22c6',
    'star;': '\u2606',
    'starf;': '\u2605',
    'straightepsilon;': '\u03f5',
    'straightphi;': '\u03d5',
    'strns;': '\xaf',
    'Sub;': '\u22d0',
    'sub;': '\u2282',
    'subdot;': '\u2abd',
    'subE;': '\u2ac5',
    'sube;': '\u2286',
    'subedot;': '\u2ac3',
    'submult;': '\u2ac1',
    'subnE;': '\u2acb',
    'subne;': '\u228a',
    'subplus;': '\u2abf',
    'subrarr;': '\u2979',
    'Subset;': '\u22d0',
    'subset;': '\u2282',
    'subseteq;': '\u2286',
    'subseteqq;': '\u2ac5',
    'SubsetEqual;': '\u2286',
    'subsetneq;': '\u228a',
    'subsetneqq;': '\u2acb',
    'subsim;': '\u2ac7',
    'subsub;': '\u2ad5',
    'subsup;': '\u2ad3',
    'succ;': '\u227b',
    'succapprox;': '\u2ab8',
    'succcurlyeq;': '\u227d',
    'Succeeds;': '\u227b',
    'SucceedsEqual;': '\u2ab0',
    'SucceedsSlantEqual;': '\u227d',
    'SucceedsTilde;': '\u227f',
    'succeq;': '\u2ab0',
    'succnapprox;': '\u2aba',
    'succneqq;': '\u2ab6',
    'succnsim;': '\u22e9',
    'succsim;': '\u227f',
    'SuchThat;': '\u220b',
    'Sum;': '\u2211',
    'sum;': '\u2211',
    'sung;': '\u266a',
    'sup1': '\xb9',
    'sup1;': '\xb9',
    'sup2': '\xb2',
    'sup2;': '\xb2',
    'sup3': '\xb3',
    'sup3;': '\xb3',
    'Sup;': '\u22d1',
    'sup;': '\u2283',
    'supdot;': '\u2abe',
    'supdsub;': '\u2ad8',
    'supE;': '\u2ac6',
    'supe;': '\u2287',
    'supedot;': '\u2ac4',
    'Superset;': '\u2283',
    'SupersetEqual;': '\u2287',
    'suphsol;': '\u27c9',
    'suphsub;': '\u2ad7',
    'suplarr;': '\u297b',
    'supmult;': '\u2ac2',
    'supnE;': '\u2acc',
    'supne;': '\u228b',
    'supplus;': '\u2ac0',
    'Supset;': '\u22d1',
    'supset;': '\u2283',
    'supseteq;': '\u2287',
    'supseteqq;': '\u2ac6',
    'supsetneq;': '\u228b',
    'supsetneqq;': '\u2acc',
    'supsim;': '\u2ac8',
    'supsub;': '\u2ad4',
    'supsup;': '\u2ad6',
    'swarhk;': '\u2926',
    'swArr;': '\u21d9',
    'swarr;': '\u2199',
    'swarrow;': '\u2199',
    'swnwar;': '\u292a',
    'szlig': '\xdf',
    'szlig;': '\xdf',
    'Tab;': '\t',
    'target;': '\u2316',
    'Tau;': '\u03a4',
    'tau;': '\u03c4',
    'tbrk;': '\u23b4',
    'Tcaron;': '\u0164',
    'tcaron;': '\u0165',
    'Tcedil;': '\u0162',
    'tcedil;': '\u0163',
    'Tcy;': '\u0422',
    'tcy;': '\u0442',
    'tdot;': '\u20db',
    'telrec;': '\u2315',
    'Tfr;': '\U0001d517',
    'tfr;': '\U0001d531',
    'there4;': '\u2234',
    'Therefore;': '\u2234',
    'therefore;': '\u2234',
    'Theta;': '\u0398',
    'theta;': '\u03b8',
    'thetasym;': '\u03d1',
    'thetav;': '\u03d1',
    'thickapprox;': '\u2248',
    'thicksim;': '\u223c',
    'ThickSpace;': '\u205f\u200a',
    'thinsp;': '\u2009',
    'ThinSpace;': '\u2009',
    'thkap;': '\u2248',
    'thksim;': '\u223c',
    'THORN': '\xde',
    'thorn': '\xfe',
    'THORN;': '\xde',
    'thorn;': '\xfe',
    'Tilde;': '\u223c',
    'tilde;': '\u02dc',
    'TildeEqual;': '\u2243',
    'TildeFullEqual;': '\u2245',
    'TildeTilde;': '\u2248',
    'times': '\xd7',
    'times;': '\xd7',
    'timesb;': '\u22a0',
    'timesbar;': '\u2a31',
    'timesd;': '\u2a30',
    'tint;': '\u222d',
    'toea;': '\u2928',
    'top;': '\u22a4',
    'topbot;': '\u2336',
    'topcir;': '\u2af1',
    'Topf;': '\U0001d54b',
    'topf;': '\U0001d565',
    'topfork;': '\u2ada',
    'tosa;': '\u2929',
    'tprime;': '\u2034',
    'TRADE;': '\u2122',
    'trade;': '\u2122',
    'triangle;': '\u25b5',
    'triangledown;': '\u25bf',
    'triangleleft;': '\u25c3',
    'trianglelefteq;': '\u22b4',
    'triangleq;': '\u225c',
    'triangleright;': '\u25b9',
    'trianglerighteq;': '\u22b5',
    'tridot;': '\u25ec',
    'trie;': '\u225c',
    'triminus;': '\u2a3a',
    'TripleDot;': '\u20db',
    'triplus;': '\u2a39',
    'trisb;': '\u29cd',
    'tritime;': '\u2a3b',
    'trpezium;': '\u23e2',
    'Tscr;': '\U0001d4af',
    'tscr;': '\U0001d4c9',
    'TScy;': '\u0426',
    'tscy;': '\u0446',
    'TSHcy;': '\u040b',
    'tshcy;': '\u045b',
    'Tstrok;': '\u0166',
    'tstrok;': '\u0167',
    'twixt;': '\u226c',
    'twoheadleftarrow;': '\u219e',
    'twoheadrightarrow;': '\u21a0',
    'Uacute': '\xda',
    'uacute': '\xfa',
    'Uacute;': '\xda',
    'uacute;': '\xfa',
    'Uarr;': '\u219f',
    'uArr;': '\u21d1',
    'uarr;': '\u2191',
    'Uarrocir;': '\u2949',
    'Ubrcy;': '\u040e',
    'ubrcy;': '\u045e',
    'Ubreve;': '\u016c',
    'ubreve;': '\u016d',
    'Ucirc': '\xdb',
    'ucirc': '\xfb',
    'Ucirc;': '\xdb',
    'ucirc;': '\xfb',
    'Ucy;': '\u0423',
    'ucy;': '\u0443',
    'udarr;': '\u21c5',
    'Udblac;': '\u0170',
    'udblac;': '\u0171',
    'udhar;': '\u296e',
    'ufisht;': '\u297e',
    'Ufr;': '\U0001d518',
    'ufr;': '\U0001d532',
    'Ugrave': '\xd9',
    'ugrave': '\xf9',
    'Ugrave;': '\xd9',
    'ugrave;': '\xf9',
    'uHar;': '\u2963',
    'uharl;': '\u21bf',
    'uharr;': '\u21be',
    'uhblk;': '\u2580',
    'ulcorn;': '\u231c',
    'ulcorner;': '\u231c',
    'ulcrop;': '\u230f',
    'ultri;': '\u25f8',
    'Umacr;': '\u016a',
    'umacr;': '\u016b',
    'uml': '\xa8',
    'uml;': '\xa8',
    'UnderBar;': '_',
    'UnderBrace;': '\u23df',
    'UnderBracket;': '\u23b5',
    'UnderParenthesis;': '\u23dd',
    'Union;': '\u22c3',
    'UnionPlus;': '\u228e',
    'Uogon;': '\u0172',
    'uogon;': '\u0173',
    'Uopf;': '\U0001d54c',
    'uopf;': '\U0001d566',
    'UpArrow;': '\u2191',
    'Uparrow;': '\u21d1',
    'uparrow;': '\u2191',
    'UpArrowBar;': '\u2912',
    'UpArrowDownArrow;': '\u21c5',
    'UpDownArrow;': '\u2195',
    'Updownarrow;': '\u21d5',
    'updownarrow;': '\u2195',
    'UpEquilibrium;': '\u296e',
    'upharpoonleft;': '\u21bf',
    'upharpoonright;': '\u21be',
    'uplus;': '\u228e',
    'UpperLeftArrow;': '\u2196',
    'UpperRightArrow;': '\u2197',
    'Upsi;': '\u03d2',
    'upsi;': '\u03c5',
    'upsih;': '\u03d2',
    'Upsilon;': '\u03a5',
    'upsilon;': '\u03c5',
    'UpTee;': '\u22a5',
    'UpTeeArrow;': '\u21a5',
    'upuparrows;': '\u21c8',
    'urcorn;': '\u231d',
    'urcorner;': '\u231d',
    'urcrop;': '\u230e',
    'Uring;': '\u016e',
    'uring;': '\u016f',
    'urtri;': '\u25f9',
    'Uscr;': '\U0001d4b0',
    'uscr;': '\U0001d4ca',
    'utdot;': '\u22f0',
    'Utilde;': '\u0168',
    'utilde;': '\u0169',
    'utri;': '\u25b5',
    'utrif;': '\u25b4',
    'uuarr;': '\u21c8',
    'Uuml': '\xdc',
    'uuml': '\xfc',
    'Uuml;': '\xdc',
    'uuml;': '\xfc',
    'uwangle;': '\u29a7',
    'vangrt;': '\u299c',
    'varepsilon;': '\u03f5',
    'varkappa;': '\u03f0',
    'varnothing;': '\u2205',
    'varphi;': '\u03d5',
    'varpi;': '\u03d6',
    'varpropto;': '\u221d',
    'vArr;': '\u21d5',
    'varr;': '\u2195',
    'varrho;': '\u03f1',
    'varsigma;': '\u03c2',
    'varsubsetneq;': '\u228a\ufe00',
    'varsubsetneqq;': '\u2acb\ufe00',
    'varsupsetneq;': '\u228b\ufe00',
    'varsupsetneqq;': '\u2acc\ufe00',
    'vartheta;': '\u03d1',
    'vartriangleleft;': '\u22b2',
    'vartriangleright;': '\u22b3',
    'Vbar;': '\u2aeb',
    'vBar;': '\u2ae8',
    'vBarv;': '\u2ae9',
    'Vcy;': '\u0412',
    'vcy;': '\u0432',
    'VDash;': '\u22ab',
    'Vdash;': '\u22a9',
    'vDash;': '\u22a8',
    'vdash;': '\u22a2',
    'Vdashl;': '\u2ae6',
    'Vee;': '\u22c1',
    'vee;': '\u2228',
    'veebar;': '\u22bb',
    'veeeq;': '\u225a',
    'vellip;': '\u22ee',
    'Verbar;': '\u2016',
    'verbar;': '|',
    'Vert;': '\u2016',
    'vert;': '|',
    'VerticalBar;': '\u2223',
    'VerticalLine;': '|',
    'VerticalSeparator;': '\u2758',
    'VerticalTilde;': '\u2240',
    'VeryThinSpace;': '\u200a',
    'Vfr;': '\U0001d519',
    'vfr;': '\U0001d533',
    'vltri;': '\u22b2',
    'vnsub;': '\u2282\u20d2',
    'vnsup;': '\u2283\u20d2',
    'Vopf;': '\U0001d54d',
    'vopf;': '\U0001d567',
    'vprop;': '\u221d',
    'vrtri;': '\u22b3',
    'Vscr;': '\U0001d4b1',
    'vscr;': '\U0001d4cb',
    'vsubnE;': '\u2acb\ufe00',
    'vsubne;': '\u228a\ufe00',
    'vsupnE;': '\u2acc\ufe00',
    'vsupne;': '\u228b\ufe00',
    'Vvdash;': '\u22aa',
    'vzigzag;': '\u299a',
    'Wcirc;': '\u0174',
    'wcirc;': '\u0175',
    'wedbar;': '\u2a5f',
    'Wedge;': '\u22c0',
    'wedge;': '\u2227',
    'wedgeq;': '\u2259',
    'weierp;': '\u2118',
    'Wfr;': '\U0001d51a',
    'wfr;': '\U0001d534',
    'Wopf;': '\U0001d54e',
    'wopf;': '\U0001d568',
    'wp;': '\u2118',
    'wr;': '\u2240',
    'wreath;': '\u2240',
    'Wscr;': '\U0001d4b2',
    'wscr;': '\U0001d4cc',
    'xcap;': '\u22c2',
    'xcirc;': '\u25ef',
    'xcup;': '\u22c3',
    'xdtri;': '\u25bd',
    'Xfr;': '\U0001d51b',
    'xfr;': '\U0001d535',
    'xhArr;': '\u27fa',
    'xharr;': '\u27f7',
    'Xi;': '\u039e',
    'xi;': '\u03be',
    'xlArr;': '\u27f8',
    'xlarr;': '\u27f5',
    'xmap;': '\u27fc',
    'xnis;': '\u22fb',
    'xodot;': '\u2a00',
    'Xopf;': '\U0001d54f',
    'xopf;': '\U0001d569',
    'xoplus;': '\u2a01',
    'xotime;': '\u2a02',
    'xrArr;': '\u27f9',
    'xrarr;': '\u27f6',
    'Xscr;': '\U0001d4b3',
    'xscr;': '\U0001d4cd',
    'xsqcup;': '\u2a06',
    'xuplus;': '\u2a04',
    'xutri;': '\u25b3',
    'xvee;': '\u22c1',
    'xwedge;': '\u22c0',
    'Yacute': '\xdd',
    'yacute': '\xfd',
    'Yacute;': '\xdd',
    'yacute;': '\xfd',
    'YAcy;': '\u042f',
    'yacy;': '\u044f',
    'Ycirc;': '\u0176',
    'ycirc;': '\u0177',
    'Ycy;': '\u042b',
    'ycy;': '\u044b',
    'yen': '\xa5',
    'yen;': '\xa5',
    'Yfr;': '\U0001d51c',
    'yfr;': '\U0001d536',
    'YIcy;': '\u0407',
    'yicy;': '\u0457',
    'Yopf;': '\U0001d550',
    'yopf;': '\U0001d56a',
    'Yscr;': '\U0001d4b4',
    'yscr;': '\U0001d4ce',
    'YUcy;': '\u042e',
    'yucy;': '\u044e',
    'yuml': '\xff',
    'Yuml;': '\u0178',
    'yuml;': '\xff',
    'Zacute;': '\u0179',
    'zacute;': '\u017a',
    'Zcaron;': '\u017d',
    'zcaron;': '\u017e',
    'Zcy;': '\u0417',
    'zcy;': '\u0437',
    'Zdot;': '\u017b',
    'zdot;': '\u017c',
    'zeetrf;': '\u2128',
    'ZeroWidthSpace;': '\u200b',
    'Zeta;': '\u0396',
    'zeta;': '\u03b6',
    'Zfr;': '\u2128',
    'zfr;': '\U0001d537',
    'ZHcy;': '\u0416',
    'zhcy;': '\u0436',
    'zigrarr;': '\u21dd',
    'Zopf;': '\u2124',
    'zopf;': '\U0001d56b',
    'Zscr;': '\U0001d4b5',
    'zscr;': '\U0001d4cf',
    'zwj;': '\u200d',
    'zwnj;': '\u200c',
}

# maps the Unicode codepoint to the HTML entity name
codepoint2name = {}

# maps the HTML entity name to the character itself
# (always a one-character string here, since chr() is applied below,
# even for characters outside the Latin-1 range)
entitydefs = {}

for (name, codepoint) in name2codepoint.items():
    codepoint2name[codepoint] = name
    entitydefs[name] = chr(codepoint)

del name, codepoint
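# The loop above derives both reverse mappings from name2codepoint. As a
# quick sketch of how the three tables relate, the stdlib html.entities
# module (which builds them the same way) can be exercised directly:

```python
from html.entities import name2codepoint, codepoint2name, entitydefs

# name -> codepoint for the classic HTML 4 entities (no trailing ';')
assert name2codepoint['amp'] == 0x26

# the reverse table built by the loop: codepoint -> name
assert codepoint2name[0x26] == 'amp'

# entitydefs resolves the name straight to the character via chr()
assert entitydefs['gt'] == '>'
assert entitydefs['copy'] == '\xa9'
```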
# ---- future/backports/html/__init__.py ----
"""
General functions for HTML manipulation, backported from Py3.

Note that this uses Python 2.7 code with the corresponding Python 3
module names and locations.
"""

from __future__ import unicode_literals


_escape_map = {ord('&'): '&amp;', ord('<'): '&lt;', ord('>'): '&gt;'}
_escape_map_full = {ord('&'): '&amp;', ord('<'): '&lt;', ord('>'): '&gt;',
                    ord('"'): '&quot;', ord('\''): '&#x27;'}

# NB: this is a candidate for a bytes/string polymorphic interface

def escape(s, quote=True):
    """
    Replace the special characters "&", "<" and ">" with HTML-safe sequences.
    If the optional flag quote is true (the default), the quotation mark
    characters, both double quote (") and single quote ('), are also
    translated.
    """
    assert not isinstance(s, bytes), 'Pass a unicode string'
    if quote:
        return s.translate(_escape_map_full)
    return s.translate(_escape_map)
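# For reference, the translate-based escape above matches the behavior of
# the stdlib html.escape (which is the non-backported equivalent); a small
# usage sketch:

```python
from html import escape  # stdlib equivalent of the backported function above

# '&', '<' and '>' are always replaced
assert escape('<b>&</b>') == '&lt;b&gt;&amp;&lt;/b&gt;'

# quotes are translated only when quote=True (the default)
assert escape('say "hi"') == 'say &quot;hi&quot;'
assert escape("it's", quote=False) == "it's"
```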
# (binary data omitted: the archive's compiled-bytecode entries,
# future/backports/html/__pycache__/__init__.cpython-39.pyc and
# future/backports/html/__pycache__/entities.cpython-39.pyc,
# duplicate the source modules above)
nshortmid;znshortparallel;znsim;znsime;znsimeq;znsmid;znspar;znsqsube;znsqsupe;znsub;znsubE;znsube;znsubset;z
nsubseteq;znsubseteqq;znsucc;znsucceq;znsup;znsupE;znsupe;znsupset;z
nsupseteq;znsupseteqq;zntgl;r�rzNtilde;zntilde;zntlg;zntriangleleft;zntrianglelefteq;zntriangleright;zntrianglerighteq;zNu;znu;znum;znumero;znumsp;znvap;znVDash;znVdash;znvDash;znvdash;znvge;znvgt;znvHarr;znvinfin;znvlArr;znvle;znvlt;znvltrie;znvrArr;znvrtrie;znvsim;znwarhk;znwArr;znwarr;znwarrow;znwnear;r�rzOacute;zoacute;zoast;zocir;r�rzOcirc;zocirc;zOcy;zocy;zodash;zOdblac;zodblac;zodiv;zodot;zodsold;zOElig;zoelig;zofcir;zOfr;zofr;zogon;r�rzOgrave;zograve;zogt;zohbar;zohm;zoint;zolarr;zolcir;zolcross;zoline;zolt;zOmacr;zomacr;zOmega;zomega;zOmicron;zomicron;zomid;zominus;zOopf;zoopf;zopar;zOpenCurlyDoubleQuote;zOpenCurlyQuote;zoperp;zoplus;zOr;zor;zorarr;zord;zorder;zorderof;rzordf;rzordm;zorigof;zoror;zorslope;zorv;zoS;zOscr;zoscr;r�rzOslash;zoslash;zosol;r�rzOtilde;zotilde;zOtimes;zotimes;z	otimesas;r�rzOuml;zouml;zovbar;zOverBar;z
OverBrace;zOverBracket;zOverParenthesis;zpar;rzpara;z	parallel;zparsim;zparsl;zpart;z	PartialD;zPcy;zpcy;zpercnt;zperiod;zpermil;zperp;zpertenk;zPfr;zpfr;zPhi;zphi;zphiv;zphmmat;zphone;zPi;zpi;z
pitchfork;zpiv;zplanck;zplanckh;zplankv;zplus;z	plusacir;zplusb;zpluscir;zplusdo;zplusdu;zpluse;z
PlusMinus;r&zplusmn;zplussim;zplustwo;zpm;zPoincareplane;z	pointint;zPopf;zpopf;r'zpound;zPr;zpr;zprap;zprcue;zprE;zpre;zprec;zprecapprox;zpreccurlyeq;z	Precedes;zPrecedesEqual;zPrecedesSlantEqual;zPrecedesTilde;zpreceq;zprecnapprox;z	precneqq;z	precnsim;zprecsim;zPrime;zprime;zprimes;zprnap;zprnE;zprnsim;zprod;zProduct;z	profalar;z	profline;z	profsurf;zprop;zProportion;z
Proportional;zpropto;zprsim;zprurel;zPscr;zpscr;zPsi;zpsi;zpuncsp;zQfr;zqfr;zqint;zQopf;zqopf;zqprime;zQscr;zqscr;zquaternions;zquatint;zquest;zquesteq;�QUOTr,zQUOT;zquot;zrAarr;zrace;zRacute;zracute;zradic;z	raemptyv;zRang;zrang;zrangd;zrange;zrangle;r0zraquo;zRarr;zrArr;zrarr;zrarrap;zrarrb;zrarrbfs;zrarrc;zrarrfs;zrarrhk;zrarrlp;zrarrpl;zrarrsim;zRarrtl;zrarrtl;zrarrw;zrAtail;zratail;zratio;z
rationals;zRBarr;zrBarr;zrbarr;zrbbrk;zrbrace;zrbrack;zrbrke;zrbrksld;zrbrkslu;zRcaron;zrcaron;zRcedil;zrcedil;zrceil;zrcub;zRcy;zrcy;zrdca;zrdldhar;zrdquo;zrdquor;zrdsh;zRe;zreal;zrealine;z	realpart;zreals;zrect;�REGr5zREG;zreg;zReverseElement;zReverseEquilibrium;zReverseUpEquilibrium;zrfisht;zrfloor;zRfr;zrfr;zrHar;zrhard;zrharu;zrharul;zRho;zrho;zrhov;zRightAngleBracket;zRightArrow;zRightarrow;zrightarrow;zRightArrowBar;zRightArrowLeftArrow;zrightarrowtail;z
RightCeiling;zRightDoubleBracket;zRightDownTeeVector;zRightDownVector;zRightDownVectorBar;zRightFloor;zrightharpoondown;zrightharpoonup;zrightleftarrows;zrightleftharpoons;zrightrightarrows;zrightsquigarrow;z	RightTee;zRightTeeArrow;zRightTeeVector;zrightthreetimes;zRightTriangle;zRightTriangleBar;zRightTriangleEqual;zRightUpDownVector;zRightUpTeeVector;zRightUpVector;zRightUpVectorBar;zRightVector;zRightVectorBar;zring;z
risingdotseq;zrlarr;zrlhar;zrlm;zrmoust;zrmoustache;zrnmid;zroang;zroarr;zrobrk;zropar;zRopf;zropf;zroplus;zrotimes;z
RoundImplies;zrpar;zrpargt;z	rppolint;zrrarr;zRrightarrow;zrsaquo;zRscr;zrscr;zRsh;zrsh;zrsqb;zrsquo;zrsquor;zrthree;zrtimes;zrtri;zrtrie;zrtrif;z	rtriltri;zRuleDelayed;zruluhar;zrx;zSacute;zsacute;zsbquo;zSc;zsc;zscap;zScaron;zscaron;zsccue;zscE;zsce;zScedil;zscedil;zScirc;zscirc;zscnap;zscnE;zscnsim;z	scpolint;zscsim;zScy;zscy;zsdot;zsdotb;zsdote;zsearhk;zseArr;zsearr;zsearrow;r>zsect;zsemi;zseswar;z	setminus;zsetmn;zsext;zSfr;zsfr;zsfrown;zsharp;zSHCHcy;zshchcy;zSHcy;zshcy;zShortDownArrow;zShortLeftArrow;z	shortmid;zshortparallel;zShortRightArrow;z
ShortUpArrow;r?zshy;zSigma;zsigma;zsigmaf;zsigmav;zsim;zsimdot;zsime;zsimeq;zsimg;zsimgE;zsiml;zsimlE;zsimne;zsimplus;zsimrarr;zslarr;zSmallCircle;zsmallsetminus;zsmashp;z	smeparsl;zsmid;zsmile;zsmt;zsmte;zsmtes;zSOFTcy;zsoftcy;zsol;zsolb;zsolbar;zSopf;zsopf;zspades;z
spadesuit;zspar;zsqcap;zsqcaps;zsqcup;zsqcups;zSqrt;zsqsub;zsqsube;z	sqsubset;zsqsubseteq;zsqsup;zsqsupe;z	sqsupset;zsqsupseteq;zsqu;zSquare;zsquare;zSquareIntersection;z
SquareSubset;zSquareSubsetEqual;zSquareSuperset;zSquareSupersetEqual;zSquareUnion;zsquarf;zsquf;zsrarr;zSscr;zsscr;zssetmn;zssmile;zsstarf;zStar;zstar;zstarf;zstraightepsilon;zstraightphi;zstrns;zSub;zsub;zsubdot;zsubE;zsube;zsubedot;zsubmult;zsubnE;zsubne;zsubplus;zsubrarr;zSubset;zsubset;z	subseteq;z
subseteqq;zSubsetEqual;z
subsetneq;zsubsetneqq;zsubsim;zsubsub;zsubsup;zsucc;zsuccapprox;zsucccurlyeq;z	Succeeds;zSucceedsEqual;zSucceedsSlantEqual;zSucceedsTilde;zsucceq;zsuccnapprox;z	succneqq;z	succnsim;zsuccsim;z	SuchThat;zSum;zsum;zsung;rHzsup1;rIzsup2;rJzsup3;zSup;zsup;zsupdot;zsupdsub;zsupE;zsupe;zsupedot;z	Superset;zSupersetEqual;zsuphsol;zsuphsub;zsuplarr;zsupmult;zsupnE;zsupne;zsupplus;zSupset;zsupset;z	supseteq;z
supseteqq;z
supsetneq;zsupsetneqq;zsupsim;zsupsub;zsupsup;zswarhk;zswArr;zswarr;zswarrow;zswnwar;rLzszlig;zTab;ztarget;zTau;ztau;ztbrk;zTcaron;ztcaron;zTcedil;ztcedil;zTcy;ztcy;ztdot;ztelrec;zTfr;ztfr;zthere4;z
Therefore;z
therefore;zTheta;ztheta;z	thetasym;zthetav;zthickapprox;z	thicksim;zThickSpace;zthinsp;z
ThinSpace;zthkap;zthksim;r�rRzTHORN;zthorn;zTilde;ztilde;zTildeEqual;zTildeFullEqual;zTildeTilde;rTztimes;ztimesb;z	timesbar;ztimesd;ztint;ztoea;ztop;ztopbot;ztopcir;zTopf;ztopf;ztopfork;ztosa;ztprime;zTRADE;ztrade;z	triangle;z
triangledown;z
triangleleft;ztrianglelefteq;z
triangleq;ztriangleright;ztrianglerighteq;ztridot;ztrie;z	triminus;z
TripleDot;ztriplus;ztrisb;ztritime;z	trpezium;zTscr;ztscr;zTScy;ztscy;zTSHcy;ztshcy;zTstrok;ztstrok;ztwixt;ztwoheadleftarrow;ztwoheadrightarrow;r�rWzUacute;zuacute;zUarr;zuArr;zuarr;z	Uarrocir;zUbrcy;zubrcy;zUbreve;zubreve;r�rYzUcirc;zucirc;zUcy;zucy;zudarr;zUdblac;zudblac;zudhar;zufisht;zUfr;zufr;r�rZzUgrave;zugrave;zuHar;zuharl;zuharr;zuhblk;zulcorn;z	ulcorner;zulcrop;zultri;zUmacr;zumacr;r[zuml;z	UnderBar;zUnderBrace;z
UnderBracket;zUnderParenthesis;zUnion;z
UnionPlus;zUogon;zuogon;zUopf;zuopf;zUpArrow;zUparrow;zuparrow;zUpArrowBar;zUpArrowDownArrow;zUpDownArrow;zUpdownarrow;zupdownarrow;zUpEquilibrium;zupharpoonleft;zupharpoonright;zuplus;zUpperLeftArrow;zUpperRightArrow;zUpsi;zupsi;zupsih;zUpsilon;zupsilon;zUpTee;zUpTeeArrow;zupuparrows;zurcorn;z	urcorner;zurcrop;zUring;zuring;zurtri;zUscr;zuscr;zutdot;zUtilde;zutilde;zutri;zutrif;zuuarr;r�r^zUuml;zuuml;zuwangle;zvangrt;zvarepsilon;z	varkappa;zvarnothing;zvarphi;zvarpi;z
varpropto;zvArr;zvarr;zvarrho;z	varsigma;z
varsubsetneq;zvarsubsetneqq;z
varsupsetneq;zvarsupsetneqq;z	vartheta;zvartriangleleft;zvartriangleright;zVbar;zvBar;zvBarv;zVcy;zvcy;zVDash;zVdash;zvDash;zvdash;zVdashl;zVee;zvee;zveebar;zveeeq;zvellip;zVerbar;zverbar;zVert;zvert;zVerticalBar;z
VerticalLine;zVerticalSeparator;zVerticalTilde;zVeryThinSpace;zVfr;zvfr;zvltri;zvnsub;zvnsup;zVopf;zvopf;zvprop;zvrtri;zVscr;zvscr;zvsubnE;zvsubne;zvsupnE;zvsupne;zVvdash;zvzigzag;zWcirc;zwcirc;zwedbar;zWedge;zwedge;zwedgeq;zweierp;zWfr;zwfr;zWopf;zwopf;zwp;zwr;zwreath;zWscr;zwscr;zxcap;zxcirc;zxcup;zxdtri;zXfr;zxfr;zxhArr;zxharr;zXi;zxi;zxlArr;zxlarr;zxmap;zxnis;zxodot;zXopf;zxopf;zxoplus;zxotime;zxrArr;zxrarr;zXscr;zxscr;zxsqcup;zxuplus;zxutri;zxvee;zxwedge;r�razYacute;zyacute;zYAcy;zyacy;zYcirc;zycirc;zYcy;zycy;rbzyen;zYfr;zyfr;zYIcy;zyicy;zYopf;zyopf;zYscr;zyscr;zYUcy;zyucy;rczYuml;zyuml;zZacute;zzacute;zZcaron;zzcaron;zZcy;zzcy;zZdot;zzdot;zzeetrf;zZeroWidthSpace;zZeta;zzeta;zZfr;zzfr;zZHcy;zzhcy;zzigrarr;zZopf;zzopf;zZscr;zzscr;zzwj;zzwnj;N)�__doc__�
__future__rrrrZfuture.builtins�name2codepoint�html5�codepoint2name�
entitydefs�items�name�	codepoint�chr�r�r��H/usr/local/lib/python3.9/site-packages/future/backports/html/entities.py�<module>s���������������������
LPK�Cu\Q	*5*57future/backports/html/__pycache__/parser.cpython-39.pycnu�[���a

��?h:M�@sdZddlmZmZmZmZddlTddlmZddl	Z	ddl
Z
e	�d�Ze	�d�Z
e	�d�Ze	�d	�Ze	�d
�Ze	�d�Ze	�d�Ze	�d
�Ze	�d�Ze	�d�Ze	�d�Ze	�de	j�Ze	�de	j�Ze	�d�Ze	�d�ZGdd�de�ZGdd�dej�ZdS)zLA parser for HTML and XHTML.

Backported for python-future from Python 3.3.
�)�absolute_import�division�print_function�unicode_literals)�*)�_markupbaseNz[&<]z
&[a-zA-Z#]z%&([a-zA-Z][-.a-zA-Z0-9]*)[^a-zA-Z0-9]z)&#(?:[0-9]+|[xX][0-9a-fA-F]+)[^0-9a-fA-F]z	<[a-zA-Z]�>z--\s*>z(([a-zA-Z][-.a-zA-Z0-9:_]*)(?:\s|/(?!>))*z[a-zA-Z][^	

 />]*zJ\s*([a-zA-Z_][-.:a-zA-Z_0-9]*)(\s*=\s*(\'[^\']*\'|"[^"]*"|[^\s"\'=<>`]*))?z]((?<=[\'"\s/])[^\s/>][^\s/=>]*)(\s*=+\s*(\'[^\']*\'|"[^"]*"|(?![\'"])[^>\s]*))?(?:\s|/(?!>))*a�
  <[a-zA-Z][-.a-zA-Z0-9:_]*          # tag name
  (?:\s+                             # whitespace before attribute name
    (?:[a-zA-Z_][-.:a-zA-Z0-9_]*     # attribute name
      (?:\s*=\s*                     # value indicator
        (?:'[^']*'                   # LITA-enclosed value
          |\"[^\"]*\"                # LIT-enclosed value
          |[^'\">\s]+                # bare value
         )
       )?
     )
   )*
  \s*                                # trailing whitespace
aF
  <[a-zA-Z][-.a-zA-Z0-9:_]*          # tag name
  (?:[\s/]*                          # optional whitespace before attribute name
    (?:(?<=['"\s/])[^\s/>][^\s/=>]*  # attribute name
      (?:\s*=+\s*                    # value indicator
        (?:'[^']*'                   # LITA-enclosed value
          |"[^"]*"                   # LIT-enclosed value
          |(?!['"])[^>\s]*           # bare value
         )
         (?:\s*,)*                   # possibly followed by a comma
       )?(?:\s|/(?!>))*
     )*
   )?
  \s*                                # trailing whitespace
z#</\s*([a-zA-Z][-.a-zA-Z0-9:_]*)\s*>c@s"eZdZdZddd�Zdd�ZdS)	�HTMLParseErrorz&Exception raised for all parse errors.�NNcCs&|sJ�||_|d|_|d|_dS)Nr���msg�lineno�offset)�selfr
�position�r�F/usr/local/lib/python3.9/site-packages/future/backports/html/parser.py�__init__Us
zHTMLParseError.__init__cCs>|j}|jdur|d|j}|jdur:|d|jd}|S)Nz, at line %dz, column %drr)r�resultrrr�__str__[s

zHTMLParseError.__str__N)r
)�__name__�
__module__�__qualname__�__doc__rrrrrrr	Rs
r	c@s�eZdZdZdZd:dd�Zdd�Zdd	�Zd
d�Zdd
�Z	dZ
dd�Zdd�Zdd�Z
dd�Zdd�Zd;dd�Zdd�Zdd�Zd d!�Zd"d#�Zd$d%�Zd&d'�Zd(d)�Zd*d+�Zd,d-�Zd.d/�Zd0d1�Zd2d3�Zd4d5�Zd6d7�Zd8d9�ZdS)<�
HTMLParsera�Find tags and other markup and call handler functions.

    Usage:
        p = HTMLParser()
        p.feed(data)
        ...
        p.close()

    Start tags are handled by calling self.handle_starttag() or
    self.handle_startendtag(); end tags by self.handle_endtag().  The
    data between tags is passed from the parser to the derived class
    by calling self.handle_data() with the data as argument (the data
    may be split up in arbitrary chunks).  Entity references are
    passed by calling self.handle_entityref() with the entity
    reference as the argument.  Numeric character references are
    passed to self.handle_charref() with the string containing the
    reference as the argument.
    )�script�styleFcCs&|rtjdtdd�||_|��dS)z�Initialize and reset this instance.

        If strict is set to False (the default) the parser will parse invalid
        markup, otherwise it will raise an error.  Note that the strict mode
        is deprecated.
        zThe strict mode is deprecated.�)�
stacklevelN)�warnings�warn�DeprecationWarning�strict�reset)rr#rrrrzs�zHTMLParser.__init__cCs(d|_d|_t|_d|_tj�|�dS)z1Reset this instance.  Loses all unprocessed data.�z???N)�rawdata�lasttag�interesting_normal�interesting�
cdata_elemr�
ParserBaser$�rrrrr$�s
zHTMLParser.resetcCs|j||_|�d�dS)z�Feed data to the parser.

        Call this as often as you want, with as little or as much text
        as you want (may include '\n').
        rN)r&�goahead�r�datarrr�feed�szHTMLParser.feedcCs|�d�dS)zHandle any buffered data.rN)r-r,rrr�close�szHTMLParser.closecCst||����dS�N)r	�getpos)r�messagerrr�error�szHTMLParser.errorNcCs|jS)z)Return full source of start tag: '<...>'.)�_HTMLParser__starttag_textr,rrr�get_starttag_text�szHTMLParser.get_starttag_textcCs$|��|_t�d|jtj�|_dS)Nz</\s*%s\s*>)�lowerr*�re�compile�Ir))r�elemrrr�set_cdata_mode�s
zHTMLParser.set_cdata_modecCst|_d|_dSr2)r(r)r*r,rrr�clear_cdata_mode�szHTMLParser.clear_cdata_modec
Cs�|j}d}t|�}||k�rd|j�||�}|r8|��}n|jrB�qd|}||kr`|�|||��|�||�}||krx�qd|j}|d|��r�t	�
||�r�|�|�}n�|d|�r�|�|�}n~|d|�r�|�
|�}nh|d|�r�|�|�}nR|d|��r|j�r|�|�}n
|�|�}n&|d|k�rd|�d�|d}n�qd|dk�r�|�sJ�qd|j�r\|�d�|�d	|d�}|dk�r�|�d|d�}|dk�r�|d}n|d7}|�|||��|�||�}q|d
|��r`t�
||�}|�r*|��dd�}	|�|	�|��}|d
|d��s|d}|�||�}qn4d
||d�v�rd|�|dd��|�|d�}�qdq|d|��rVt�
||�}|�r�|�d�}	|�|	�|��}|d
|d��s�|d}|�||�}qt�
||�}|�r&|�rd|��||d�k�rd|j�r|�d�n||k�r|}|�||d�}�qdn.|d|k�rd|�d�|�||d�}n�qdqdsJd��q|�r�||k�r�|j�s�|�|||��|�||�}||d�|_dS)Nr�<�</�<!--�<?�<!rzEOF in middle of constructr�&#r����;�&z#EOF in middle of entity or char refzinteresting.search() lied)r&�lenr)�search�startr*�handle_data�	updatepos�
startswith�starttagopen�match�parse_starttag�parse_endtag�
parse_comment�parse_pir#�parse_declaration�parse_html_declarationr5�find�charref�group�handle_charref�end�	entityref�handle_entityref�
incomplete)
rrZr&�i�nrO�jrM�k�namerrrr-�s�
















zHTMLParser.goaheadcCs�|j}|||d�dks"Jd��|||d�dkr@|�|�S|||d�dkr^|�|�S|||d���d	kr�|�d
|d�}|dkr�dS|�||d|��|dS|�|�SdS)
NrrCz+unexpected call to parse_html_declaration()�rA�z<![�	z	<!doctyperrEr)r&rR�parse_marked_sectionr8rV�handle_decl�parse_bogus_comment)rr^r&�gtposrrrrUs

z!HTMLParser.parse_html_declarationrcCs`|j}|||d�dvs"Jd��|�d|d�}|dkr>dS|rX|�||d|��|dS)Nr)rCr@z"unexpected call to parse_comment()rrEr)r&rV�handle_comment)rr^�reportr&�posrrrrh-szHTMLParser.parse_bogus_commentcCsd|j}|||d�dks"Jd��t�||d�}|s:dS|��}|�||d|��|��}|S)NrrBzunexpected call to parse_pi()rE)r&�picloserIrJ�	handle_pirZ)rr^r&rOr`rrrrS9szHTMLParser.parse_picCs.d|_|�|�}|dkr|S|j}|||�|_g}t�||d�}|sPJd��|��}|�d���|_}||k�rH|j	r�t
�||�}nt�||�}|s��qH|�ddd�\}	}
}|
s�d}n`|dd�dkr�|dd�k�sn|dd�dk�r|dd�k�rnn|dd�}|�r,|�|�}|�
|	��|f�|��}ql|||���}|d	v�r�|��\}
}d
|jv�r�|
|j�d
�}
t|j�|j�d
�}n|t|j�}|j	�r�|�d|||�dd�f�|�|||��|S|�d
��r|�||�n"|�||�||jv�r*|�|�|S)Nrrz#unexpected call to parse_starttag()rrd�'rE�")r�/>�
z junk characters in start tag: %r�rq)r6�check_for_whole_start_tagr&�tagfindrOrZrXr8r'r#�attrfind�attrfind_tolerant�unescape�append�stripr3�countrH�rfindr5rK�endswith�handle_startendtag�handle_starttag�CDATA_CONTENT_ELEMENTSr=)rr^�endposr&�attrsrOra�tag�m�attrname�rest�	attrvaluerZrrrrrrPEsf

(�

�



��
zHTMLParser.parse_starttagcCs|j}|jrt�||�}nt�||�}|r�|��}|||d�}|dkrR|dS|dkr�|�d|�rn|dS|�d|�r~dS|jr�|�||d�|�d�||kr�|S|dS|dkr�dS|d	vr�dS|jr�|�||�|�d
�||kr�|S|dSt	d��dS)Nrr�/rqrrEzmalformed empty start tagr%z6abcdefghijklmnopqrstuvwxyz=/ABCDEFGHIJKLMNOPQRSTUVWXYZzmalformed start tagzwe should not get here!)
r&r#�locatestarttagendrO�locatestarttagend_tolerantrZrMrLr5�AssertionError)rr^r&r�r`�nextrrrrt~s>

z$HTMLParser.check_for_whole_start_tagcCsN|j}|||d�dks"Jd��t�||d�}|s:dS|��}t�||�}|s�|jdurr|�|||��|S|jr�|�	d|||�f�t
�||d�}|s�|||d�dkr�|dS|�|�S|���
�}|�d	|���}|�|�|dS|�d��
�}|jdu�r4||jk�r4|�|||��|S|�|�
��|��|S)
Nrr@zunexpected call to parse_endtagrrEzbad end tag: %rrdz</>r)r&�	endendtagrIrZ�
endtagfindrOr*rKr#r5�tagfind_tolerantrhrXr8rV�
handle_endtagr>)rr^r&rOri�	namematch�tagnamer<rrrrQ�s<


zHTMLParser.parse_endtagcCs|�||�|�|�dSr2)rr��rr�r�rrrr~�szHTMLParser.handle_startendtagcCsdSr2rr�rrrr�szHTMLParser.handle_starttagcCsdSr2r)rr�rrrr��szHTMLParser.handle_endtagcCsdSr2r�rrbrrrrY�szHTMLParser.handle_charrefcCsdSr2rr�rrrr\�szHTMLParser.handle_entityrefcCsdSr2rr.rrrrK�szHTMLParser.handle_datacCsdSr2rr.rrrrj�szHTMLParser.handle_commentcCsdSr2r)r�declrrrrg�szHTMLParser.handle_declcCsdSr2rr.rrrrn�szHTMLParser.handle_picCs|jr|�d|f�dS)Nzunknown declaration: %r)r#r5r.rrr�unknown_decl�szHTMLParser.unknown_declcCs"d|vr|Sdd�}t�d||�S)NrGcSs|��d}zZ|ddkrd|dd�}|ddvrLt|dd��d�d�}nt|�d��}t|�WSWnty�d|YS0ddlm}||vr�||S|�d�r�d	|Std
t	|��D]4}|d|�|vr�||d|�||d�Sq�d	|SdS)Nr�#r)�x�XrF�rD)�html5rGr)
�groups�int�rstrip�chr�
ValueErrorZfuture.backports.html.entitiesr�r}�rangerH)�s�cr�r�rrr�replaceEntities�s&
"z,HTMLParser.unescape.<locals>.replaceEntitiesz&&(#?[xX]?(?:[0-9a-fA-F]+;|\w{1,32};?)))r9�sub)rr�r�rrrrx�s�zHTMLParser.unescape)F)r) rrrrr�rr$r0r1r5r6r7r=r>r-rUrhrSrPrtrQr~rr�rYr\rKrjrgrnr�rxrrrrrds:

	h
9+*r) r�
__future__rrrrZfuture.builtinsZfuture.backportsrr9r r:r(r]r[rWrNrm�commentcloserur�rvrw�VERBOSEr�r�r�r��	Exceptionr	r+rrrrr�<module>s<








��
��

"""A parser for HTML and XHTML.

Backported for python-future from Python 3.3.
"""

# This file is based on sgmllib.py, but the API is slightly different.

# XXX There should be a way to distinguish between PCDATA (parsed
# character data -- the normal case), RCDATA (replaceable character
# data -- only char and entity references and end tags are special)
# and CDATA (character data -- only end tags are special).

from __future__ import (absolute_import, division,
                        print_function, unicode_literals)
from future.builtins import *
from future.backports import _markupbase
import re
import warnings

# Regular expressions used for parsing

interesting_normal = re.compile('[&<]')
incomplete = re.compile('&[a-zA-Z#]')

entityref = re.compile('&([a-zA-Z][-.a-zA-Z0-9]*)[^a-zA-Z0-9]')
charref = re.compile('&#(?:[0-9]+|[xX][0-9a-fA-F]+)[^0-9a-fA-F]')

starttagopen = re.compile('<[a-zA-Z]')
piclose = re.compile('>')
commentclose = re.compile(r'--\s*>')
tagfind = re.compile(r'([a-zA-Z][-.a-zA-Z0-9:_]*)(?:\s|/(?!>))*')
# see http://www.w3.org/TR/html5/tokenization.html#tag-open-state
# and http://www.w3.org/TR/html5/tokenization.html#tag-name-state
tagfind_tolerant = re.compile('[a-zA-Z][^\t\n\r\f />\x00]*')
# Note:
#  1) the strict attrfind isn't really strict, but we can't make it
#     correctly strict without breaking backward compatibility;
#  2) if you change attrfind remember to update locatestarttagend too;
#  3) if you change attrfind and/or locatestarttagend the parser will
#     explode, so don't do it.
attrfind = re.compile(
    r'\s*([a-zA-Z_][-.:a-zA-Z_0-9]*)(\s*=\s*'
    r'(\'[^\']*\'|"[^"]*"|[^\s"\'=<>`]*))?')
attrfind_tolerant = re.compile(
    r'((?<=[\'"\s/])[^\s/>][^\s/=>]*)(\s*=+\s*'
    r'(\'[^\']*\'|"[^"]*"|(?![\'"])[^>\s]*))?(?:\s|/(?!>))*')
locatestarttagend = re.compile(r"""
  <[a-zA-Z][-.a-zA-Z0-9:_]*          # tag name
  (?:\s+                             # whitespace before attribute name
    (?:[a-zA-Z_][-.:a-zA-Z0-9_]*     # attribute name
      (?:\s*=\s*                     # value indicator
        (?:'[^']*'                   # LITA-enclosed value
          |\"[^\"]*\"                # LIT-enclosed value
          |[^'\">\s]+                # bare value
         )
       )?
     )
   )*
  \s*                                # trailing whitespace
""", re.VERBOSE)
locatestarttagend_tolerant = re.compile(r"""
  <[a-zA-Z][-.a-zA-Z0-9:_]*          # tag name
  (?:[\s/]*                          # optional whitespace before attribute name
    (?:(?<=['"\s/])[^\s/>][^\s/=>]*  # attribute name
      (?:\s*=+\s*                    # value indicator
        (?:'[^']*'                   # LITA-enclosed value
          |"[^"]*"                   # LIT-enclosed value
          |(?!['"])[^>\s]*           # bare value
         )
         (?:\s*,)*                   # possibly followed by a comma
       )?(?:\s|/(?!>))*
     )*
   )?
  \s*                                # trailing whitespace
""", re.VERBOSE)
endendtag = re.compile('>')
# the HTML 5 spec, section 8.1.2.2, doesn't allow spaces between
# </ and the tag name, so maybe this should be fixed
endtagfind = re.compile(r'</\s*([a-zA-Z][-.a-zA-Z0-9:_]*)\s*>')
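The entity and character-reference patterns above deliberately consume one character *past* the reference itself, which is why goahead() later checks whether that trailing character was the ';'. A standalone sketch of that behaviour (the patterns are copied verbatim from the module):

```python
import re

# Copies of the module's reference patterns, for illustration only.
entityref = re.compile('&([a-zA-Z][-.a-zA-Z0-9]*)[^a-zA-Z0-9]')
charref = re.compile('&#(?:[0-9]+|[xX][0-9a-fA-F]+)[^0-9a-fA-F]')

# The terminator character is required, so a bare '&amp' does not match
# yet -- the parser waits for more input instead.
m = entityref.match('&amp; more')
print(m.group(1))               # -> 'amp' (name without '&' or ';')

m = charref.match('&#x27;tail')
print(m.group())                # -> '&#x27;' (terminator included)

print(entityref.match('&amp'))  # -> None (no terminator yet)
```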


class HTMLParseError(Exception):
    """Exception raised for all parse errors."""

    def __init__(self, msg, position=(None, None)):
        assert msg
        self.msg = msg
        self.lineno = position[0]
        self.offset = position[1]

    def __str__(self):
        result = self.msg
        if self.lineno is not None:
            result = result + ", at line %d" % self.lineno
        if self.offset is not None:
            result = result + ", column %d" % (self.offset + 1)
        return result


class HTMLParser(_markupbase.ParserBase):
    """Find tags and other markup and call handler functions.

    Usage:
        p = HTMLParser()
        p.feed(data)
        ...
        p.close()

    Start tags are handled by calling self.handle_starttag() or
    self.handle_startendtag(); end tags by self.handle_endtag().  The
    data between tags is passed from the parser to the derived class
    by calling self.handle_data() with the data as argument (the data
    may be split up in arbitrary chunks).  Entity references are
    passed by calling self.handle_entityref() with the entity
    reference as the argument.  Numeric character references are
    passed to self.handle_charref() with the string containing the
    reference as the argument.
    """

    CDATA_CONTENT_ELEMENTS = ("script", "style")

    def __init__(self, strict=False):
        """Initialize and reset this instance.

        If strict is set to False (the default) the parser will parse invalid
        markup, otherwise it will raise an error.  Note that the strict mode
        is deprecated.
        """
        if strict:
            warnings.warn("The strict mode is deprecated.",
                          DeprecationWarning, stacklevel=2)
        self.strict = strict
        self.reset()

    def reset(self):
        """Reset this instance.  Loses all unprocessed data."""
        self.rawdata = ''
        self.lasttag = '???'
        self.interesting = interesting_normal
        self.cdata_elem = None
        _markupbase.ParserBase.reset(self)

    def feed(self, data):
        r"""Feed data to the parser.

        Call this as often as you want, with as little or as much text
        as you want (may include '\n').
        """
        self.rawdata = self.rawdata + data
        self.goahead(0)

    def close(self):
        """Handle any buffered data."""
        self.goahead(1)

    def error(self, message):
        raise HTMLParseError(message, self.getpos())

    __starttag_text = None

    def get_starttag_text(self):
        """Return full source of start tag: '<...>'."""
        return self.__starttag_text

    def set_cdata_mode(self, elem):
        self.cdata_elem = elem.lower()
        self.interesting = re.compile(r'</\s*%s\s*>' % self.cdata_elem, re.I)

    def clear_cdata_mode(self):
        self.interesting = interesting_normal
        self.cdata_elem = None

    # Internal -- handle data as far as reasonable.  May leave state
    # and data to be processed by a subsequent call.  If 'end' is
    # true, force handling all data as if followed by EOF marker.
    def goahead(self, end):
        rawdata = self.rawdata
        i = 0
        n = len(rawdata)
        while i < n:
            match = self.interesting.search(rawdata, i) # < or &
            if match:
                j = match.start()
            else:
                if self.cdata_elem:
                    break
                j = n
            if i < j: self.handle_data(rawdata[i:j])
            i = self.updatepos(i, j)
            if i == n: break
            startswith = rawdata.startswith
            if startswith('<', i):
                if starttagopen.match(rawdata, i): # < + letter
                    k = self.parse_starttag(i)
                elif startswith("</", i):
                    k = self.parse_endtag(i)
                elif startswith("<!--", i):
                    k = self.parse_comment(i)
                elif startswith("<?", i):
                    k = self.parse_pi(i)
                elif startswith("<!", i):
                    if self.strict:
                        k = self.parse_declaration(i)
                    else:
                        k = self.parse_html_declaration(i)
                elif (i + 1) < n:
                    self.handle_data("<")
                    k = i + 1
                else:
                    break
                if k < 0:
                    if not end:
                        break
                    if self.strict:
                        self.error("EOF in middle of construct")
                    k = rawdata.find('>', i + 1)
                    if k < 0:
                        k = rawdata.find('<', i + 1)
                        if k < 0:
                            k = i + 1
                    else:
                        k += 1
                    self.handle_data(rawdata[i:k])
                i = self.updatepos(i, k)
            elif startswith("&#", i):
                match = charref.match(rawdata, i)
                if match:
                    name = match.group()[2:-1]
                    self.handle_charref(name)
                    k = match.end()
                    if not startswith(';', k-1):
                        k = k - 1
                    i = self.updatepos(i, k)
                    continue
                else:
                    if ";" in rawdata[i:]: #bail by consuming &#
                        self.handle_data(rawdata[0:2])
                        i = self.updatepos(i, 2)
                    break
            elif startswith('&', i):
                match = entityref.match(rawdata, i)
                if match:
                    name = match.group(1)
                    self.handle_entityref(name)
                    k = match.end()
                    if not startswith(';', k-1):
                        k = k - 1
                    i = self.updatepos(i, k)
                    continue
                match = incomplete.match(rawdata, i)
                if match:
                    # match.group() will contain at least 2 chars
                    if end and match.group() == rawdata[i:]:
                        if self.strict:
                            self.error("EOF in middle of entity or char ref")
                        else:
                            if k <= i:
                                k = n
                            i = self.updatepos(i, i + 1)
                    # incomplete
                    break
                elif (i + 1) < n:
                    # not the end of the buffer, and can't be confused
                    # with some other construct
                    self.handle_data("&")
                    i = self.updatepos(i, i + 1)
                else:
                    break
            else:
                assert 0, "interesting.search() lied"
        # end while
        if end and i < n and not self.cdata_elem:
            self.handle_data(rawdata[i:n])
            i = self.updatepos(i, n)
        self.rawdata = rawdata[i:]
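goahead() dispatches whatever is complete and leaves any partial construct in self.rawdata for the next feed() call. A minimal sketch of that chunked-feed behaviour, using the stdlib html.parser (same API as this backport):

```python
from html.parser import HTMLParser

class Collector(HTMLParser):
    """Record every handler call so we can inspect dispatch order."""
    def __init__(self):
        super().__init__()
        self.events = []
    def handle_starttag(self, tag, attrs):
        self.events.append(('start', tag))
    def handle_data(self, data):
        self.events.append(('data', data))
    def handle_endtag(self, tag):
        self.events.append(('end', tag))

p = Collector()
p.feed('<p>hel')   # '<p>' is complete; 'hel' is plain data
p.feed('lo</p')    # '</p' is incomplete -- buffered, not dispatched
p.feed('>')        # completes the buffered end tag
p.close()
print(p.events)
# -> [('start', 'p'), ('data', 'hel'), ('data', 'lo'), ('end', 'p')]
```

Note that the text arrives in two handle_data() calls, one per feed(): the parser makes no attempt to merge data split across chunks.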

    # Internal -- parse html declarations, return length or -1 if not terminated
    # See w3.org/TR/html5/tokenization.html#markup-declaration-open-state
    # See also parse_declaration in _markupbase
    def parse_html_declaration(self, i):
        rawdata = self.rawdata
        assert rawdata[i:i+2] == '<!', ('unexpected call to '
                                        'parse_html_declaration()')
        if rawdata[i:i+4] == '<!--':
            # this case is actually already handled in goahead()
            return self.parse_comment(i)
        elif rawdata[i:i+3] == '<![':
            return self.parse_marked_section(i)
        elif rawdata[i:i+9].lower() == '<!doctype':
            # find the closing >
            gtpos = rawdata.find('>', i+9)
            if gtpos == -1:
                return -1
            self.handle_decl(rawdata[i+2:gtpos])
            return gtpos+1
        else:
            return self.parse_bogus_comment(i)

    # Internal -- parse bogus comment, return length or -1 if not terminated
    # see http://www.w3.org/TR/html5/tokenization.html#bogus-comment-state
    def parse_bogus_comment(self, i, report=1):
        rawdata = self.rawdata
        assert rawdata[i:i+2] in ('<!', '</'), ('unexpected call to '
                                                'parse_bogus_comment()')
        pos = rawdata.find('>', i+2)
        if pos == -1:
            return -1
        if report:
            self.handle_comment(rawdata[i+2:pos])
        return pos + 1
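A sketch of the bogus-comment path, using the stdlib html.parser (same API as this backport): a '<!' that opens neither a comment, a doctype, nor a marked section is reported to handle_comment() with everything up to the next '>':

```python
from html.parser import HTMLParser

class CommentDump(HTMLParser):
    """Collect comment text, including 'bogus comment' fallbacks."""
    def __init__(self):
        super().__init__()
        self.comments = []
    def handle_comment(self, data):
        self.comments.append(data)

p = CommentDump()
p.feed('<!not a real comment>')  # no '--', '[', or 'doctype' after '<!'
p.close()
print(p.comments)                # -> ['not a real comment']
```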

    # Internal -- parse processing instr, return end or -1 if not terminated
    def parse_pi(self, i):
        rawdata = self.rawdata
        assert rawdata[i:i+2] == '<?', 'unexpected call to parse_pi()'
        match = piclose.search(rawdata, i+2) # >
        if not match:
            return -1
        j = match.start()
        self.handle_pi(rawdata[i+2: j])
        j = match.end()
        return j
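A sketch of what parse_pi() reports, using the stdlib html.parser (same API as this backport): everything between '<?' and the first '>' reaches handle_pi() verbatim, so an XML-style '?>' terminator leaves a trailing '?' in the data:

```python
from html.parser import HTMLParser

class PIDump(HTMLParser):
    """Collect processing-instruction payloads."""
    def __init__(self):
        super().__init__()
        self.pis = []
    def handle_pi(self, data):
        self.pis.append(data)

p = PIDump()
p.feed('<?xml-stylesheet href="s.css"?>')
p.close()
# The parser stops at the first '>', so the '?' of '?>' is kept.
print(p.pis)  # -> ['xml-stylesheet href="s.css"?']
```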

    # Internal -- handle starttag, return end or -1 if not terminated
    def parse_starttag(self, i):
        self.__starttag_text = None
        endpos = self.check_for_whole_start_tag(i)
        if endpos < 0:
            return endpos
        rawdata = self.rawdata
        self.__starttag_text = rawdata[i:endpos]

        # Now parse the data between i+1 and j into a tag and attrs
        attrs = []
        match = tagfind.match(rawdata, i+1)
        assert match, 'unexpected call to parse_starttag()'
        k = match.end()
        self.lasttag = tag = match.group(1).lower()
        while k < endpos:
            if self.strict:
                m = attrfind.match(rawdata, k)
            else:
                m = attrfind_tolerant.match(rawdata, k)
            if not m:
                break
            attrname, rest, attrvalue = m.group(1, 2, 3)
            if not rest:
                attrvalue = None
            elif attrvalue[:1] == '\'' == attrvalue[-1:] or \
                 attrvalue[:1] == '"' == attrvalue[-1:]:
                attrvalue = attrvalue[1:-1]
            if attrvalue:
                attrvalue = self.unescape(attrvalue)
            attrs.append((attrname.lower(), attrvalue))
            k = m.end()

        end = rawdata[k:endpos].strip()
        if end not in (">", "/>"):
            lineno, offset = self.getpos()
            if "\n" in self.__starttag_text:
                lineno = lineno + self.__starttag_text.count("\n")
                offset = len(self.__starttag_text) \
                         - self.__starttag_text.rfind("\n")
            else:
                offset = offset + len(self.__starttag_text)
            if self.strict:
                self.error("junk characters in start tag: %r"
                           % (rawdata[k:endpos][:20],))
            self.handle_data(rawdata[i:endpos])
            return endpos
        if end.endswith('/>'):
            # XHTML-style empty tag: <span attr="value" />
            self.handle_startendtag(tag, attrs)
        else:
            self.handle_starttag(tag, attrs)
            if tag in self.CDATA_CONTENT_ELEMENTS:
                self.set_cdata_mode(tag)
        return endpos

    # Internal -- check to see if we have a complete starttag; return end
    # or -1 if incomplete.
    def check_for_whole_start_tag(self, i):
        rawdata = self.rawdata
        if self.strict:
            m = locatestarttagend.match(rawdata, i)
        else:
            m = locatestarttagend_tolerant.match(rawdata, i)
        if m:
            j = m.end()
            next = rawdata[j:j+1]
            if next == ">":
                return j + 1
            if next == "/":
                if rawdata.startswith("/>", j):
                    return j + 2
                if rawdata.startswith("/", j):
                    # buffer boundary
                    return -1
                # else bogus input
                if self.strict:
                    self.updatepos(i, j + 1)
                    self.error("malformed empty start tag")
                if j > i:
                    return j
                else:
                    return i + 1
            if next == "":
                # end of input
                return -1
            if next in ("abcdefghijklmnopqrstuvwxyz=/"
                        "ABCDEFGHIJKLMNOPQRSTUVWXYZ"):
                # end of input in or before attribute value, or we have the
                # '/' from a '/>' ending
                return -1
            if self.strict:
                self.updatepos(i, j)
                self.error("malformed start tag")
            if j > i:
                return j
            else:
                return i + 1
        raise AssertionError("we should not get here!")

    # Internal -- parse endtag, return end or -1 if incomplete
    def parse_endtag(self, i):
        rawdata = self.rawdata
        assert rawdata[i:i+2] == "</", "unexpected call to parse_endtag"
        match = endendtag.search(rawdata, i+1) # >
        if not match:
            return -1
        gtpos = match.end()
        match = endtagfind.match(rawdata, i) # </ + tag + >
        if not match:
            if self.cdata_elem is not None:
                self.handle_data(rawdata[i:gtpos])
                return gtpos
            if self.strict:
                self.error("bad end tag: %r" % (rawdata[i:gtpos],))
            # find the name: w3.org/TR/html5/tokenization.html#tag-name-state
            namematch = tagfind_tolerant.match(rawdata, i+2)
            if not namematch:
                # w3.org/TR/html5/tokenization.html#end-tag-open-state
                if rawdata[i:i+3] == '</>':
                    return i+3
                else:
                    return self.parse_bogus_comment(i)
            tagname = namematch.group().lower()
            # consume and ignore other stuff between the name and the >
            # Note: this is not 100% correct, since we might have things like
            # </tag attr=">">, but looking for > after tha name should cover
            # most of the cases and is much simpler
            gtpos = rawdata.find('>', namematch.end())
            self.handle_endtag(tagname)
            return gtpos+1

        elem = match.group(1).lower() # script or style
        if self.cdata_elem is not None:
            if elem != self.cdata_elem:
                self.handle_data(rawdata[i:gtpos])
                return gtpos

        self.handle_endtag(elem.lower())
        self.clear_cdata_mode()
        return gtpos

    # Overridable -- finish processing of start+end tag: <tag.../>
    def handle_startendtag(self, tag, attrs):
        self.handle_starttag(tag, attrs)
        self.handle_endtag(tag)

    # Overridable -- handle start tag
    def handle_starttag(self, tag, attrs):
        pass

    # Overridable -- handle end tag
    def handle_endtag(self, tag):
        pass

    # Overridable -- handle character reference
    def handle_charref(self, name):
        pass

    # Overridable -- handle entity reference
    def handle_entityref(self, name):
        pass

    # Overridable -- handle data
    def handle_data(self, data):
        pass

    # Overridable -- handle comment
    def handle_comment(self, data):
        pass

    # Overridable -- handle declaration
    def handle_decl(self, decl):
        pass

    # Overridable -- handle processing instruction
    def handle_pi(self, data):
        pass

    def unknown_decl(self, data):
        if self.strict:
            self.error("unknown declaration: %r" % (data,))

    # Internal -- helper to remove special character quoting
    def unescape(self, s):
        if '&' not in s:
            return s
        def replaceEntities(s):
            s = s.groups()[0]
            try:
                if s[0] == "#":
                    s = s[1:]
                    if s[0] in ['x','X']:
                        c = int(s[1:].rstrip(';'), 16)
                    else:
                        c = int(s.rstrip(';'))
                    return chr(c)
            except ValueError:
                return '&#' + s
            else:
                from future.backports.html.entities import html5
                if s in html5:
                    return html5[s]
                elif s.endswith(';'):
                    return '&' + s
                for x in range(2, len(s)):
                    if s[:x] in html5:
                        return html5[s[:x]] + s[x:]
                else:
                    return '&' + s

        return re.sub(r"&(#?[xX]?(?:[0-9a-fA-F]+;|\w{1,32};?))",
                      replaceEntities, s)
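The tolerant entity handling in `unescape()` above is easy to misread inside the nested try/else. Below is a minimal standalone sketch of the same logic, using the stdlib `html.entities.html5` table as a stand-in for `future.backports.html.entities` (the lookup behaviour is assumed equivalent):

```python
import re
from html.entities import html5  # stdlib analogue of future.backports.html.entities


def unescape(s):
    # Mirror of HTMLParser.unescape() above: resolve numeric character
    # references and named entities, tolerating missing semicolons.
    if '&' not in s:
        return s

    def replace(m):
        ref = m.group(1)
        try:
            if ref[0] == '#':
                num = ref[1:]
                if num[0] in 'xX':
                    return chr(int(num[1:].rstrip(';'), 16))
                return chr(int(num.rstrip(';')))
        except ValueError:
            # Unparseable numeric reference: restore the original text.
            return '&' + ref
        else:
            if ref in html5:
                return html5[ref]
            if ref.endswith(';'):
                return '&' + ref
            # Longest-prefix match for entities written without a semicolon.
            for x in range(2, len(ref)):
                if ref[:x] in html5:
                    return html5[ref[:x]] + ref[x:]
            return '&' + ref

    return re.sub(r"&(#?[xX]?(?:[0-9a-fA-F]+;|\w{1,32};?))", replace, s)
```

Note the except branch: returning `'&' + ref` reproduces the original input for numeric references that fail to parse, which is what the method above does after it has stripped the leading `#`.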
# ----- future/backports/__init__.py -----
"""
future.backports package
"""

from __future__ import absolute_import

import sys

__future_module__ = True
from future.standard_library import import_top_level_modules


if sys.version_info[0] >= 3:
    import_top_level_modules()


from .misc import (ceil,
                   OrderedDict,
                   Counter,
                   ChainMap,
                   check_output,
                   count,
                   recursive_repr,
                   _count_elements,
                   cmp_to_key
                  )
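Among the names re-exported above is `cmp_to_key`, a backport of the `functools` helper. A quick sketch of what it does, using the stdlib `functools.cmp_to_key` rather than the backport (behaviour assumed identical); the `cmp` function and word list here are illustrative only:

```python
from functools import cmp_to_key  # stdlib equivalent of the re-exported backport


def cmp(a, b):
    # Old-style three-way comparison: shorter strings first,
    # then alphabetical order as the tie-breaker.
    return (len(a) > len(b)) - (len(a) < len(b)) or (a > b) - (a < b)


words = ['pear', 'fig', 'apple', 'kiwi']
ordered = sorted(words, key=cmp_to_key(cmp))
```

`cmp_to_key` wraps the three-way function in a key object so that code written for Python 2's `sorted(..., cmp=...)` keeps working on Python 3, which only accepts key functions.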
# ----- future/backports/socket.py -----
# Wrapper module for _socket, providing some additional facilities
# implemented in Python.

"""\
This module provides socket operations and some related functions.
On Unix, it supports IP (Internet Protocol) and Unix domain sockets.
On other systems, it only supports IP. Functions specific for a
socket are available as methods of the socket object.

Functions:

socket() -- create a new socket object
socketpair() -- create a pair of new socket objects [*]
fromfd() -- create a socket object from an open file descriptor [*]
fromshare() -- create a socket object from data received from socket.share() [*]
gethostname() -- return the current hostname
gethostbyname() -- map a hostname to its IP number
gethostbyaddr() -- map an IP number or hostname to DNS info
getservbyname() -- map a service name and a protocol name to a port number
getprotobyname() -- map a protocol name (e.g. 'tcp') to a number
ntohs(), ntohl() -- convert 16, 32 bit int from network to host byte order
htons(), htonl() -- convert 16, 32 bit int from host to network byte order
inet_aton() -- convert IP addr string (123.45.67.89) to 32-bit packed format
inet_ntoa() -- convert 32-bit packed format IP to string (123.45.67.89)
socket.getdefaulttimeout() -- get the default timeout value
socket.setdefaulttimeout() -- set the default timeout value
create_connection() -- connects to an address, with an optional timeout and
                       optional source address.

 [*] not available on all platforms!

Special objects:

SocketType -- type object for socket objects
error -- exception raised for I/O errors
has_ipv6 -- boolean value indicating if IPv6 is supported

Integer constants:

AF_INET, AF_UNIX -- socket domains (first argument to socket() call)
SOCK_STREAM, SOCK_DGRAM, SOCK_RAW -- socket types (second argument)

Many other constants may be defined; these may be used in calls to
the setsockopt() and getsockopt() methods.
"""

from __future__ import unicode_literals
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from future.builtins import super

import _socket
from _socket import *

import os, sys, io

try:
    import errno
except ImportError:
    errno = None
EBADF = getattr(errno, 'EBADF', 9)
EAGAIN = getattr(errno, 'EAGAIN', 11)
EWOULDBLOCK = getattr(errno, 'EWOULDBLOCK', 11)

__all__ = ["getfqdn", "create_connection"]
__all__.extend(os._get_exports_list(_socket))


_realsocket = socket

# WSA error codes
if sys.platform.lower().startswith("win"):
    errorTab = {}
    errorTab[10004] = "The operation was interrupted."
    errorTab[10009] = "A bad file handle was passed."
    errorTab[10013] = "Permission denied."
    errorTab[10014] = "A fault occurred on the network." # WSAEFAULT
    errorTab[10022] = "An invalid operation was attempted."
    errorTab[10035] = "The socket operation would block"
    errorTab[10036] = "A blocking operation is already in progress."
    errorTab[10048] = "The network address is in use."
    errorTab[10054] = "The connection has been reset."
    errorTab[10058] = "The network has been shut down."
    errorTab[10060] = "The operation timed out."
    errorTab[10061] = "Connection refused."
    errorTab[10063] = "The name is too long."
    errorTab[10064] = "The host is down."
    errorTab[10065] = "The host is unreachable."
    __all__.append("errorTab")


class socket(_socket.socket):

    """A subclass of _socket.socket adding the makefile() method."""

    __slots__ = ["__weakref__", "_io_refs", "_closed"]

    def __init__(self, family=AF_INET, type=SOCK_STREAM, proto=0, fileno=None):
        if fileno is None:
            _socket.socket.__init__(self, family, type, proto)
        else:
            _socket.socket.__init__(self, family, type, proto, fileno)
        self._io_refs = 0
        self._closed = False

    def __enter__(self):
        return self

    def __exit__(self, *args):
        if not self._closed:
            self.close()

    def __repr__(self):
        """Wrap __repr__() to reveal the real class name."""
        s = _socket.socket.__repr__(self)
        if s.startswith("<socket object"):
            s = "<%s.%s%s%s" % (self.__class__.__module__,
                                self.__class__.__name__,
                                getattr(self, '_closed', False) and " [closed] " or "",
                                s[7:])
        return s

    def __getstate__(self):
        raise TypeError("Cannot serialize socket object")

    def dup(self):
        """dup() -> socket object

        Return a new socket object connected to the same system resource.
        """
        fd = dup(self.fileno())
        sock = self.__class__(self.family, self.type, self.proto, fileno=fd)
        sock.settimeout(self.gettimeout())
        return sock

    def accept(self):
        """accept() -> (socket object, address info)

        Wait for an incoming connection.  Return a new socket
        representing the connection, and the address of the client.
        For IP sockets, the address info is a pair (hostaddr, port).
        """
        fd, addr = self._accept()
        sock = socket(self.family, self.type, self.proto, fileno=fd)
        # Issue #7995: if no default timeout is set and the listening
        # socket had a (non-zero) timeout, force the new socket in blocking
        # mode to override platform-specific socket flags inheritance.
        if getdefaulttimeout() is None and self.gettimeout():
            sock.setblocking(True)
        return sock, addr

    def makefile(self, mode="r", buffering=None, **_3to2kwargs):
        """makefile(...) -> an I/O stream connected to the socket

        The arguments are as for io.open() after the filename,
        except the only mode characters supported are 'r', 'w' and 'b'.
        The semantics are similar too.  (XXX refactor to share code?)
        """
        # 3to2 artifact: pull keyword-only arguments out of **_3to2kwargs.
        newline = _3to2kwargs.pop('newline', None)
        errors = _3to2kwargs.pop('errors', None)
        encoding = _3to2kwargs.pop('encoding', None)
        for c in mode:
            if c not in ("r", "w", "b"):
                raise ValueError("invalid mode %r (only r, w, b allowed)")
        writing = "w" in mode
        reading = "r" in mode or not writing
        assert reading or writing
        binary = "b" in mode
        rawmode = ""
        if reading:
            rawmode += "r"
        if writing:
            rawmode += "w"
        raw = SocketIO(self, rawmode)
        self._io_refs += 1
        if buffering is None:
            buffering = -1
        if buffering < 0:
            buffering = io.DEFAULT_BUFFER_SIZE
        if buffering == 0:
            if not binary:
                raise ValueError("unbuffered streams must be binary")
            return raw
        if reading and writing:
            buffer = io.BufferedRWPair(raw, raw, buffering)
        elif reading:
            buffer = io.BufferedReader(raw, buffering)
        else:
            assert writing
            buffer = io.BufferedWriter(raw, buffering)
        if binary:
            return buffer
        text = io.TextIOWrapper(buffer, encoding, errors, newline)
        text.mode = mode
        return text

    def _decref_socketios(self):
        if self._io_refs > 0:
            self._io_refs -= 1
        if self._closed:
            self.close()

    def _real_close(self, _ss=_socket.socket):
        # This function should not reference any globals. See issue #808164.
        _ss.close(self)

    def close(self):
        # This function should not reference any globals. See issue #808164.
        self._closed = True
        if self._io_refs <= 0:
            self._real_close()

    def detach(self):
        """detach() -> file descriptor

        Close the socket object without closing the underlying file descriptor.
        The object cannot be used after this call, but the file descriptor
        can be reused for other purposes.  The file descriptor is returned.
        """
        self._closed = True
        return super().detach()

def fromfd(fd, family, type, proto=0):
    """ fromfd(fd, family, type[, proto]) -> socket object

    Create a socket object from a duplicate of the given file
    descriptor.  The remaining arguments are the same as for socket().
    """
    nfd = dup(fd)
    return socket(family, type, proto, nfd)

if hasattr(_socket.socket, "share"):
    def fromshare(info):
        """ fromshare(info) -> socket object

        Create a socket object from the bytes object returned by
        socket.share(pid).
        """
        return socket(0, 0, 0, info)

if hasattr(_socket, "socketpair"):

    def socketpair(family=None, type=SOCK_STREAM, proto=0):
        """socketpair([family[, type[, proto]]]) -> (socket object, socket object)

        Create a pair of socket objects from the sockets returned by the platform
        socketpair() function.
        The arguments are the same as for socket() except the default family is
        AF_UNIX if defined on the platform; otherwise, the default is AF_INET.
        """
        if family is None:
            try:
                family = AF_UNIX
            except NameError:
                family = AF_INET
        a, b = _socket.socketpair(family, type, proto)
        a = socket(family, type, proto, a.detach())
        b = socket(family, type, proto, b.detach())
        return a, b


_blocking_errnos = set([EAGAIN, EWOULDBLOCK])

class SocketIO(io.RawIOBase):

    """Raw I/O implementation for stream sockets.

    This class supports the makefile() method on sockets.  It provides
    the raw I/O interface on top of a socket object.
    """

    # One might wonder why not let FileIO do the job instead.  There are two
    # main reasons why FileIO is not adapted:
    # - it wouldn't work under Windows (where you can't use read() and
    #   write() on a socket handle)
    # - it wouldn't work with socket timeouts (FileIO would ignore the
    #   timeout and consider the socket non-blocking)

    # XXX More docs

    def __init__(self, sock, mode):
        if mode not in ("r", "w", "rw", "rb", "wb", "rwb"):
            raise ValueError("invalid mode: %r" % mode)
        io.RawIOBase.__init__(self)
        self._sock = sock
        if "b" not in mode:
            mode += "b"
        self._mode = mode
        self._reading = "r" in mode
        self._writing = "w" in mode
        self._timeout_occurred = False

    def readinto(self, b):
        """Read up to len(b) bytes into the writable buffer *b* and return
        the number of bytes read.  If the socket is non-blocking and no bytes
        are available, None is returned.

        If *b* is non-empty, a 0 return value indicates that the connection
        was shut down at the other end.
        """
        self._checkClosed()
        self._checkReadable()
        if self._timeout_occurred:
            raise IOError("cannot read from timed out object")
        while True:
            try:
                return self._sock.recv_into(b)
            except timeout:
                self._timeout_occurred = True
                raise
            # except InterruptedError:
            #     continue
            except error as e:
                if e.args[0] in _blocking_errnos:
                    return None
                raise

    def write(self, b):
        """Write the given bytes or bytearray object *b* to the socket
        and return the number of bytes written.  This can be less than
        len(b) if not all data could be written.  If the socket is
        non-blocking and no bytes could be written, None is returned.
        """
        self._checkClosed()
        self._checkWritable()
        try:
            return self._sock.send(b)
        except error as e:
            # XXX what about EINTR?
            if e.args[0] in _blocking_errnos:
                return None
            raise

    def readable(self):
        """True if the SocketIO is open for reading.
        """
        if self.closed:
            raise ValueError("I/O operation on closed socket.")
        return self._reading

    def writable(self):
        """True if the SocketIO is open for writing.
        """
        if self.closed:
            raise ValueError("I/O operation on closed socket.")
        return self._writing

    def seekable(self):
        """True if the SocketIO is open for seeking.
        """
        if self.closed:
            raise ValueError("I/O operation on closed socket.")
        return super().seekable()

    def fileno(self):
        """Return the file descriptor of the underlying socket.
        """
        self._checkClosed()
        return self._sock.fileno()

    @property
    def name(self):
        if not self.closed:
            return self.fileno()
        else:
            return -1

    @property
    def mode(self):
        return self._mode

    def close(self):
        """Close the SocketIO object.  This doesn't close the underlying
        socket, except if all references to it have disappeared.
        """
        if self.closed:
            return
        io.RawIOBase.close(self)
        self._sock._decref_socketios()
        self._sock = None


def getfqdn(name=''):
    """Get fully qualified domain name from name.

    An empty argument is interpreted as meaning the local host.

    First the hostname returned by gethostbyaddr() is checked, then
    possibly existing aliases. In case no FQDN is available, hostname
    from gethostname() is returned.
    """
    name = name.strip()
    if not name or name == '0.0.0.0':
        name = gethostname()
    try:
        hostname, aliases, ipaddrs = gethostbyaddr(name)
    except error:
        pass
    else:
        aliases.insert(0, hostname)
        for name in aliases:
            if '.' in name:
                break
        else:
            name = hostname
    return name


# Re-use the same sentinel as in the Python stdlib socket module:
from socket import _GLOBAL_DEFAULT_TIMEOUT
# Was: _GLOBAL_DEFAULT_TIMEOUT = object()


def create_connection(address, timeout=_GLOBAL_DEFAULT_TIMEOUT,
                      source_address=None):
    """Connect to *address* and return the socket object.

    Convenience function.  Connect to *address* (a 2-tuple ``(host,
    port)``) and return the socket object.  Passing the optional
    *timeout* parameter will set the timeout on the socket instance
    before attempting to connect.  If no *timeout* is supplied, the
    global default timeout setting returned by :func:`getdefaulttimeout`
    is used.  If *source_address* is set it must be a tuple of (host, port)
    for the socket to bind as a source address before making the connection.
    A host of '' or port 0 tells the OS to use the default.
    """

    host, port = address
    err = None
    for res in getaddrinfo(host, port, 0, SOCK_STREAM):
        af, socktype, proto, canonname, sa = res
        sock = None
        try:
            sock = socket(af, socktype, proto)
            if timeout is not _GLOBAL_DEFAULT_TIMEOUT:
                sock.settimeout(timeout)
            if source_address:
                sock.bind(source_address)
            sock.connect(sa)
            return sock

        except error as _:
            err = _
            if sock is not None:
                sock.close()

    if err is not None:
        raise err
    else:
        raise error("getaddrinfo returns an empty list")
# ----- future/backports/__pycache__/__init__.cpython-39.pyc -----
# [CPython 3.9 bytecode compiled from
#  /usr/local/lib/python3.9/site-packages/future/backports/__init__.py;
#  binary content omitted]
# ----- future/backports/__pycache__/datetime.cpython-39.pyc -----
# [CPython 3.9 bytecode compiled from
#  /usr/local/lib/python3.9/site-packages/future/backports/datetime.py;
#  binary content omitted. Embedded module docstring: "Concrete date/time and
#  related types. See http://www.iana.org/time-zones/repository/tz-link.html
#  for time zone and DST data sources."]
        )r.r�r�r�r�rrrr�szdate.toordinalcCsB|dur|j}|dur|j}|dur*|j}t|||�t|||�S)z;Return a new date with new values for the specified fields.N)r�r�r�r�r�)r�rr%r,rrrrqszdate.replacecCst|t�r|�|�dkStSr��r�r�rr�r�rrrr�s
zdate.__eq__cCst|t�r|�|�dkStSr�rr�rrrr�"s
zdate.__ne__cCst|t�r|�|�dkStSr�rr�rrrr�'s
zdate.__le__cCst|t�r|�|�dkStSr�rr�rrrr�,s
zdate.__lt__cCst|t�r|�|�dkStSr�rr�rrrr�1s
zdate.__ge__cCst|t�r|�|�dkStSr�rr�rrrr�6s
zdate.__gt__cCsPt|t�sJ�|j|j|j}}}|j|j|j}}}t|||f|||f�Sr~)r�r�r�r�r�r)r�r�rrRrS�y2�m2Zd2rrrr;sz	date._cmpcCst|���S)�Hash.r�r�rrrr�Asz
date.__hash__cCsFt|t�rB|��|j}d|kr,tkr:nn
t�|�Std��tS)zAdd a date to a timedelta.r�result out of range)	r�rpr�ro�_MAXORDINALr�r�r�r�)r�r��orrrr�Gs

zdate.__add__cCsDt|t�r|t|j�St|t�r@|��}|��}t||�StS)z.Subtract two dates, or a date and a timedelta.)r�rpror�r�r�)r�r��days1�days2rrrr�Rs

zdate.__sub__cCs|��ddS)z:Return day of the week, where Monday == 0 ... Sunday == 6.rNrO�r�r�rrrr�\szdate.weekdaycCs|��dpdS)z:Return day of the week, where Monday == 1 ... Sunday == 7.rOrr�rrr�
isoweekdaybszdate.isoweekdaycCs�|j}t|�}t|j|j|j�}t||d�\}}|dkr^|d8}t|�}t||d�\}}n$|dkr�|t|d�kr�|d7}d}||d|dfS)a�Return a 3-tuple containing ISO year, week number, and weekday.

        The first ISO week of the year is the (Mon-Sun) week
        containing the year's first Thursday; everything else derives
        from that.

        The first week is 1; Monday is 1 ... Sunday is 7.

        ISO calendar algorithm taken from
        http://www.phys.uu.nl/~vgent/calendar/isocalendar.htm
        rOrr�4)r��_isoweek1mondayr.r�r�r5)r�r�week1mondayr��weekr,rrr�isocalendargszdate.isocalendarcCs&t|jd�\}}t|||j|jg�fS�N�)r5r�rr�r�)r��yhi�ylorrrr��szdate._getstatecCsPt|�dks&d|dkr$dks.ntd��|\}}|_|_|d||_dS)Nrrr!r znot enough argumentsr)rlr�r�r�r�)r��stringrrrrr�
__setstate�s&zdate.__setstatecCs|j|��fSr~r�r�rrrr��szdate.__reduce__)NN)NNN)(r�r�r�r�r�r��classmethodr�r�r�r�r�rsrrr�r�rr%r,rur�rqr�r�r�r�r�r�rr�r�r�r�r�r
rr�r�r�rrrrr�rsR








	
r�r rr�c@s<eZdZdZdZdd�Zdd�Zdd�Zd	d
�Zdd�Z	d
S)rz}Abstract base class for time zone info classes.

    Subclasses must override the name(), utcoffset() and dst() methods.
    rcCstd��dS)z%datetime -> string name of time zone.z&tzinfo subclass must override tzname()N��NotImplementedError�r��dtrrrrj�sz
tzinfo.tznamecCstd��dS)z:datetime -> minutes east of UTC (negative for west of UTC)z)tzinfo subclass must override utcoffset()Nrrrrrra�sztzinfo.utcoffsetcCstd��dS)z�datetime -> DST offset in minutes east of UTC.

        Return 0 if DST not in effect.  utcoffset() must include the DST
        offset.
        z#tzinfo subclass must override dst()Nrrrrrr��sz
tzinfo.dstcCs�t|t�std��|j|ur$td��|��}|dur<td��|��}|durTtd��||}|r�||7}|��}|dur�td��||S)z*datetime in UTC -> datetime in local time.z&fromutc() requires a datetime argumentzdt.tzinfo is not selfNz0fromutc() requires a non-None utcoffset() resultz*fromutc() requires a non-None dst() resultz;fromutc(): dt.dst gave inconsistent results; cannot convert)r��datetimer�rr�rar�)r�rZdtoffZdtdst�deltarrr�fromutc�s"

ztzinfo.fromutccCsft|dd�}|r|�}nd}t|dd�}|r4|�}nt|dd�pBd}|durV|j|fS|j||fSdS)N�__getinitargs__r�__getstate__�__dict__)rmr�)r�Zgetinitargs�args�getstate�staterrrr��s
ztzinfo.__reduce__N)
r�r�r�r�r�rjrar�r r�rrrrr�src@seZdZdZd=dd�Zedd��Zedd	��Zed
d��Zedd
��Z	edd��Z
dd�Zdd�Zdd�Z
dd�Zdd�Zdd�Zd>dd�Zdd �Zd?d"d#�Zd$d%�Zd&d'�ZeZd(d)�Zd*d+�Zd,d-�Zd.d/�Zd0d1�Zd@d3d4�Zd5d6�Zd7d8�Zd9d:�Zd;d<�Z dS)Ar�aTime with time zone.

    Constructors:

    __new__()

    Operators:

    __repr__, __str__
    __cmp__, __hash__

    Methods:

    strftime()
    isoformat()
    utcoffset()
    tzname()
    dst()

    Properties (readonly):
    hour, minute, second, microsecond, tzinfo
    rNcCslt�|�}t|t�r4t|�dkr4|�||p,d�|St|�t||||�||_||_	||_
||_||_|S)z�Constructor.

        Arguments:

        hour, minute (required)
        second, microsecond (default to zero)
        tzinfo (default to None)
        rNN)
rr�r�rrl�_time__setstater�r��_hour�_minute�_second�_microsecond�_tzinfo)r�r�r�r�r^rr�rrrr��s	
ztime.__new__cCs|jS�zhour (0-23)�r(r�rrrr�sz	time.hourcCs|jS�z
minute (0-59)�r)r�rrrr�sztime.minutecCs|jS�z
second (0-59)�r*r�rrrr�sztime.secondcCs|jS�zmicrosecond (0-999999)�r+r�rrrr^sztime.microsecondcCs|jS�ztimezone info object�r,r�rrrr sztime.tzinfocCs$t|t�r|j|dd�dkSdSdS�NT��allow_mixedrF�r�r�rr�rrrr�)s
ztime.__eq__cCs$t|t�r|j|dd�dkSdSdS�NTr8rr:r�rrrr�/s
ztime.__ne__cCs&t|t�r|�|�dkSt||�dSr��r�r�rr�r�rrrr�5s
ztime.__le__cCs&t|t�r|�|�dkSt||�dSr�r<r�rrrr�;s
ztime.__lt__cCs&t|t�r|�|�dkSt||�dSr�r<r�rrrr�As
ztime.__ge__cCs&t|t�r|�|�dkSt||�dSr�r<r�rrrr�Gs
ztime.__gt__Fc
Cs�t|t�sJ�|j}|j}d}}||ur0d}n|��}|��}||k}|rvt|j|j|j|jf|j|j|j|jf�S|dus�|dur�|r�dSt	d��|jd|j|t
dd�}|jd|j|t
dd�}	t||j|jf|	|j|jf�S)NTr!z$cannot compare naive and aware timesr�rrf)r�r�r,rarr(r)r*r+r�rp)
r�r�r9�mytz�ottz�myoff�otoff�base_compareZmyhhmmZothhmmrrrrMs4����z	time._cmpcCs�|��}|st|��d�Stt|j|jd�|tdd��\}}|tdd�rVJd��|tdd�}d|krxdkr�nntt|||j|j	��St|||j|j	f�S)rr�rergrrdrfrhr3)
rar�r�r5rpr�r�r�r�r^)r��tzoffr{rRrrrr�is�z
time.__hash__�:cCs�|��}|dur�|jdkr&d}|}nd}t|tdd��\}}|tdd�rTJd��|tdd�}d|krvd	ks|nJ�d
||||f}|S)z2Return formatted timezone offset (+xx:xx) or None.Nrrcrbrrdrfrhr3z%s%02d%s%02d)raror5rp)r��sep�offrzrTrUrrr�_tzstrxs
ztime._tzstrcCs�|jdkrd|j|jf}n|jdkr2d|j}nd}dd|jj|j|j|f}|jdur�|dd�d	kspJ�|dd�d
|jd	}|S)�%Convert to formal string, for repr().rz, %d, %dz, %dr`z%s(%d, %d%s)r�Nr�)�, tzinfo=%r)r+r*r�r�r(r)r,)r�r|rrrr��s


�
z
time.__repr__cCs.t|j|j|j|j�}|��}|r*||7}|S)z�Return the time formatted according to ISO.

        This is 'HH:MM:SS.mmmmmm+zz:zz', or 'HH:MM:SS+zz:zz' if
        self.microsecond == 0.
        )r[r(r)r*r+rG)r�r|r�rrrr�s�ztime.isoformatc	Cs(ddd|j|j|jdddf	}t|||�S)z{Format using strftime().  The date part of the timestamp passed
        to underlying strftime should not be used.
        ilrrr)r(r)r*r})r�r�rurrrrs�s
�z
time.strftimecCst|�dkr|�|�St|�Sr�r�r�rrrr�s
ztime.__format__cCs(|jdurdS|j�d�}td|�|S�zQReturn the timezone offset in minutes east of UTC (negative west of
        UTC).Nra�r,rar��r�ryrrrra�s


ztime.utcoffsetcCs&|jdurdS|j�d�}t|�|S)�Return the timezone name.

        Note that the name is 100% informational -- there's no requirement that
        it mean anything in particular. For example, "GMT", "UTC", "-500",
        "-5:00", "EDT", "US/Eastern", "America/New York" are all valid replies.
        N)r,rjr��r�r�rrrrj�s

ztime.tznamecCs(|jdurdS|j�d�}td|�|S�afReturn 0 if DST is not in effect, or the DST offset (in minutes
        eastward) if DST is in effect.

        This is purely informational; the DST offset has already been added to
        the UTC offset returned by utcoffset() if applicable, so there's no
        need to consult dst() unless you're interested in displaying the DST
        info.
        Nr��r,r�r�rMrrrr��s
	

ztime.dstTcCsl|dur|j}|dur|j}|dur*|j}|dur8|j}|durF|j}t||||�t|�t|||||�S)z;Return a new time with new values for the specified fields.NT)r�r�r�r^rr�r�r�)r�r�r�r�r^rrrrrq�sztime.replacecCs4|js|jrdS|��ptd�}t|j|jd�|kS)NTrrB)r�r^rarpr�r�rMrrrr��sz
time.__bool__cCsVt|jd�\}}t|d�\}}t|j|j|j|||g�}|jdurH|fS||jfSdSr)r5r+rr(r)r*r,)r��us2�us3�us1�	basestaterrrr��s�
ztime._getstatecCsvt|�dks|ddkr td��|\|_|_|_}}}|d>|Bd>|B|_|dus^t|t�rf||_ntd|��dS)NrNrr3zan integer is required��bad tzinfo state arg %r)	rlr�r(r)r*r+r��
_tzinfo_classr,)r�rrrTrRrSrrrrs�ztime.__setstatecCst|��fSr~)r�r�r�rrrr�sztime.__reduce__)rrrrN)F)rD)NNNNT)!r�r�r�r�r�r�r�r�r�r^rr�r�r�r�r�r�rr�rGr�rr�rsrrarjr�rqr�r�r'r�rrrrr��sH








	
�

r�c@s�eZdZdZejdZdXdd�Zedd��Zed	d
��Z	edd��Z
ed
d��Zedd��Ze
dYdd��Ze
dd��Ze
dZdd��Ze
dd��Ze
dd��Zdd�Zdd�Zdd �Zd!d"�Zd#d$�Zd%d&�Zd[d(d)�Zd\d*d+�Zd,d-�Zd]d/d0�Zd1d2�Zd3d4�Ze
d5d6��Zd7d8�Zd9d:�Z d;d<�Z!d=d>�Z"d?d@�Z#dAdB�Z$dCdD�Z%dEdF�Z&dGdH�Z'd^dJdK�Z(dLdM�Z)e)Z*dNdO�Z+dPdQ�Z,dRdS�Z-dTdU�Z.dVdW�Z/dS)_rz�datetime(year, month, day[, hour[, minute[, second[, microsecond[,tzinfo]]]]])

    The year, month and day arguments are required. tzinfo may be None, or an
    instance of a tzinfo subclass. The remaining arguments may be ints.
    )r(r)r*r+r,Nrc	
Cs�t|t�r:t|�dkr:t�||dd��}	|	�||�|	St|�t||||�t�||||�}	||	_||	_	||	_
||	_||	_|	S)N�
r)
r�rrlr�r��_datetime__setstater�r�r(r)r*r+r,)
r�rr%r,r�r�r�r^rr�rrrr�szdatetime.__new__cCs|jSr-r.r�rrrr�0sz
datetime.hourcCs|jSr/r0r�rrrr�5szdatetime.minutecCs|jSr1r2r�rrrr�:szdatetime.secondcCs|jSr3r4r�rrrr^?szdatetime.microsecondcCs|jSr5r6r�rrrrDszdatetime.tzinfoc	Cs�t|�|durtjntj}t|d�\}}t|d�}|dkrJ|d7}d}||�\	}}}}	}
}}}
}t|d�}|||||	|
|||�}|dur�|�|�}|S)z�Construct a datetime from a POSIX timestamp (like time.time()).

        A timezone info object may be passed in as well.
        Nr�r�r�rrr�)r�rPr��gmtimer5r
�minr )r�r�r��	converter�fracrYrrRrSrTrUrVr�r�r�rZrrrr�Is

zdatetime.fromtimestampc
	Csht|d�\}}t|d�}|dkr.|d7}d}t�|�\	}}}}}}	}
}}t|	d�}	|||||||	|�S)zCConstruct a UTC datetime from a POSIX timestamp (like time.time()).r�r�r�rrr�)r5r
rPr[r\)
r�r�r^rYrrRrSrTrUrVr�r�r�rrr�utcfromtimestampes
zdatetime.utcfromtimestampcCst��}|�||�S)zBConstruct a datetime from time.time() and optional time zone info.r�)r�r�r�rrr�now{szdatetime.nowcCst��}|�|�S)z*Construct a UTC datetime from time.time().)rPr�r_r�rrr�utcnow�szdatetime.utcnowc	CsJt|t�std��t|t�s$td��||j|j|j|j|j|j	|j
|j�S)z8Construct a datetime from a given date and a given time.z%date argument must be a date instancez%time argument must be a time instance)r��_date_classr��_time_classrr%r,r�r�r�r^r)r�r�r�rrr�combine�s

�zdatetime.combinecCsD|��}|durd}n|r d}nd}t|j|j|j|j|j|j|�S)rNrrr)r�rWrr%r,r�r�r�)r�r�rrrru�s�zdatetime.timetuplecCsL|jdur<t�|j|j|j|j|j|jdddf	�|j	dS|t
��SdS)zReturn POSIX timestamp as floatNrr�)r,rP�mktimerr%r,r�r�r�r^�_EPOCHr�r�rrr�	timestamp�s
��zdatetime.timestampcCsT|��}|r||8}|j|j|j}}}|j|j|j}}}t||||||d�S)z4Return UTC time tuple compatible with time.gmtime().r)rarr%r,r�r�r�rW)r�ryrrRrSrTrUrVrrr�utctimetuple�szdatetime.utctimetuplecCst|j|j|j�S)zReturn the date part.)r�r�r�r�r�rrrr��sz
datetime.datecCst|j|j|j|j�S)z'Return the time part, with tzinfo None.)r�r�r�r�r^r�rrrr��sz
datetime.timecCst|j|j|j|j|j�S)z'Return the time part, with same tzinfo.)r�r�r�r�r^r,r�rrr�timetz�s�zdatetime.timetzTc			Cs�|dur|j}|dur|j}|dur*|j}|dur8|j}|durF|j}|durT|j}|durb|j}|durp|j}t|||�t	||||�t
|�t||||||||�S)z?Return a new datetime with new values for the specified fields.NT)rr%r,r�r�r�r^rr�r�r�r)	r�rr%r,r�r�r�r^rrrrrq�s,�zdatetime.replacecCsL|dur�|jdurtd��|ttdd�}t�|�}t|dd��}z|j}|j}Wnvt	y�|tt�
|�dd��}tjo�|jdk}|r�tj
ntj}|t|d�kr�t|tj|�}nt|�}Yq�0tt|d�|�}nt|t�s�td��|j}	|	du�r
td��||	u�r|S|��}
|
du�r2td��||
j|d�}|�|�S)Nz'astimezone() requires an aware datetimerr�rNrz)tz argument must be an instance of tzinfo�r)rr�rfrprPr�r�	tm_gmtoff�tm_zone�AttributeErrorr[�daylight�tm_isdst�altzone�timezonerjr�r�rarqr )r�r��ts�localtm�local�gmtoff�zonerr�r=Zmyoffset�utcrrr�
astimezone�s:






zdatetime.astimezonecCs:|��dpd}dt|t|j|j|j|j|j|jfS)r�rOz%s %s %2d %02d:%02d:%02d %04d)	r�r�r�r�r�r(r)r*r�r�rrrr�s�zdatetime.ctime�TcCs�d|j|j|j|ft|j|j|j|j�}|��}|dur�|j	dkrRd}|}nd}t
|tdd��\}}|tdd�r�Jd	��|tdd�}|d
|||f7}|S)a�Return the time formatted according to ISO.

        This is 'YYYY-MM-DD HH:MM:SS.mmmmmm', or 'YYYY-MM-DD HH:MM:SS' if
        self.microsecond == 0.

        If self.tzinfo is not None, the UTC offset is also attached, giving
        'YYYY-MM-DD HH:MM:SS.mmmmmm+HH:MM' or 'YYYY-MM-DD HH:MM:SS+HH:MM'.

        Optional argument sep specifies the separator between date and
        time, default 'T'.
        z%04d-%02d-%02d%cNrrcrbrrdrfrhz%s%02d:%02d)r�r�r�r[r(r)r*r+raror5rp)r�rEr|rFrzrTrUrrrrs$���
zdatetime.isoformatcCs�|j|j|j|j|j|j|jg}|ddkr2|d=|ddkrD|d=d�tt	|��}dd|j
j|f}|jdur�|dd�dks�J�|dd�d|jd}|S)	rHrrz, z%s(%s)r�NrIrJ)
r�r�r�r(r)r*r+rrrrr�r�r,)r��Lr|rrrr�,s�
zdatetime.__repr__cCs|jdd�S)zConvert to string, for str().� )rE)rr�rrrr�;szdatetime.__str__cCsddl}|�|||�S)zKstring, format -> new datetime parsed from a string (like time.strptime()).rN)�	_strptimeZ_strptime_datetime)r�Zdate_stringrtr|rrr�strptime?szdatetime.strptimecCs(|jdurdS|j�|�}td|�|SrKrLrMrrrraEs


zdatetime.utcoffsetcCst|jd|�}t|�|S)rNrj)r�r,r�rOrrrrjNszdatetime.tznamecCs(|jdurdS|j�|�}td|�|SrPrQrMrrrr�Ys
	

zdatetime.dstcCs2t|t�r|j|dd�dkSt|t�s*tSdSdSr7�r�rrr�r�r�rrrr�js


zdatetime.__eq__cCs2t|t�r|j|dd�dkSt|t�s*tSdSdSr;r~r�rrrr�rs


zdatetime.__ne__cCs4t|t�r|�|�dkSt|t�s&tSt||�dSr��r�rrr�r�r�r�rrrr�zs


zdatetime.__le__cCs4t|t�r|�|�dkSt|t�s&tSt||�dSr�rr�rrrr��s


zdatetime.__lt__cCs4t|t�r|�|�dkSt|t�s&tSt||�dSr�rr�rrrr��s


zdatetime.__ge__cCs4t|t�r|�|�dkSt|t�s&tSt||�dSr�rr�rrrr��s


zdatetime.__gt__Fc		Cs�t|t�sJ�|j}|j}d}}||ur0d}n|��}|��}||k}|r�t|j|j|j|j|j	|j
|jf|j|j|j|j|j	|j
|jf�S|dus�|dur�|r�dStd��||}|j
dkr�dS|r�dp�dS)NTr!z(cannot compare naive and aware datetimesrrr)r�rr,rarr�r�r�r(r)r*r+r�ro)	r�r�r9r=r>r?r@rA�diffrrrr�s6���
z
datetime._cmpc
Cs�t|t�stSt|��|j|j|j|jd�}||7}t|j	d�\}}t|d�\}}d|j
krhtkr�nn&t�
t�|j
�t||||j|jd��Std��dS)zAdd a datetime and a timedelta.)rergr�r�r�r�rrjrN)r�rpr�r�r(r)r*r+r5r�rorrrdr�r�r�r�r,r�)r�r�rr��remr�r�rrrr��s&
���zdatetime.__add__c	Cs�t|t�s"t|t�r||StS|��}|��}|j|jd|jd}|j|jd|jd}t|||||j|j�}|j	|j	ur�|S|�
�}|�
�}||kr�|S|dus�|dur�td��|||S)z6Subtract two datetimes, or a datetime and a timedelta.r�r�Nz(cannot mix naive and timezone-aware time)r�rrpr�r�r*r)r(r+r,rar�)	r�r�r
rZsecs1Zsecs2�baser?r@rrrr��s*



�zdatetime.__sub__cCsb|��}|dur t|��d�St|j|j|j�}|jd|jd|j	}tt
|||j�|�S)Nrr�r�)rar�r�r.rr%r,r�r�r�rpr^)r�rCror�rrrr��szdatetime.__hash__cCsrt|jd�\}}t|jd�\}}t|d�\}}t|||j|j|j|j|j|||g
�}|j	durd|fS||j	fSdSr)
r5r�r+rr�r�r(r)r*r,)r�rrrRrSrTrUrrrr��s�
zdatetime._getstatec
Csp|\
}}|_|_|_|_|_}}}|d||_|d>|Bd>|B|_|dusXt|t�r`||_	nt
d|��dS)NrrVrW)r�r�r(r)r*r�r+r�rXr,r�)r�rrrrrTrRrSrrrr�s�zdatetime.__setstatecCs|j|��fSr~r�r�rrrr�szdatetime.__reduce__)NNrrrrN)N)N)NNNNNNNT)N)ry)F)0r�r�r�r�r�r�r�r�r�r�r�r^rrr�r_r`rardrurgrhr�rirqrxr�rr�r�r}rarjr�r�r�r�r�r�r�rr�r�r�r�r�rZr�rrrrrst
�










		�

.


	


rcCs8d}t|dd�}|dd}||}||kr4|d7}|S)Nr2rrNrO)r.)r�THURSDAYZfirstday�firstweekdayrrrrrsrc@s�eZdZdZe�Zefdd�Zeddd��Zdd�Z	d	d
�Z
dd�Zd
d�Zdd�Z
dd�Zdd�Zdd�Zdd�Zeddd�ZeZedd��ZdS)rq)�_offset�_namecCs�t|t�std��||jur,|s&|jSd}n*t|t�sVtrNt|t�rN|��}ntd��|j	|krn|j
ksxntd��|jdks�|j
ddkr�td��|�||�S)Nzoffset must be a timedeltazname must be a stringzYoffset must be a timedelta strictly between -timedelta(hours=24) and timedelta(hours=24).rr�zAoffset must be a timedelta representing a whole number of minutes)r�rpr��_Omittedrwrr
r�decode�
_minoffset�
_maxoffsetr�r�r��_create)r�ryr�rrrr�s"




�ztimezone.__new__NcCst�|�}||_||_|Sr~)rr�r�r�)r�ryr�r�rrrr�8s
ztimezone._createcCs|jdur|jfS|j|jfS)zpickle supportN)r�r�r�rrrr!?s
ztimezone.__getinitargs__cCst|�tkrdS|j|jkS)NF)r�rqr�r�rrrr�Esztimezone.__eq__cCs
t|j�Sr~)r�r�r�rrrr�Jsztimezone.__hash__cCsH||jurdS|jdur.dd|jj|jfSdd|jj|j|jfS)aConvert to formal string, for repr().

        >>> tz = timezone.utc
        >>> repr(tz)
        'datetime.timezone.utc'
        >>> tz = timezone(timedelta(hours=-5), 'EST')
        >>> repr(tz)
        "datetime.timezone(datetime.timedelta(-1, 68400), 'EST')"
        zdatetime.timezone.utcNz%s(%r)r�z
%s(%r, %r))rwr�r�r�r�r�rrrr�Ms


��ztimezone.__repr__cCs
|�d�Sr~)rjr�rrrr�_sztimezone.__str__cCs$t|t�s|dur|jStd��dS)Nz8utcoffset() argument must be a datetime instance or None)r�rr�r�rrrrrabsztimezone.utcoffsetcCs:t|t�s|dur.|jdur(|�|j�S|jStd��dS)Nz5tzname() argument must be a datetime instance or None)r�rr��_name_from_offsetr�r�rrrrrjhs

ztimezone.tznamecCs"t|t�s|durdStd��dS)Nz2dst() argument must be a datetime instance or None)r�rr�rrrrr�psztimezone.dstcCs2t|t�r&|j|urtd��||jStd��dS)Nzfromutc: dt.tzinfo is not selfz6fromutc() argument must be a datetime instance or None)r�rrr�r�r�rrrrr vs



ztimezone.fromutcr�r�rBcCsL|td�krd}|}nd}t|tdd��\}}|tdd�}d�|||�S)NrrcrbrrdrfzUTC{}{:02d}:{:02d})rpr5rt)rrzre�restrgrrrr��sztimezone._name_from_offset)N)r�r�r�r�rr�r�rr�r!r�r�r�r�rarjr�r rpr�r��staticmethodr�rrrrrqs$	rqi�rj)�*)r�)Er��
__future__rrrrZfuture.builtinsrrrr	r
rZfuture.utilsrr
r�rP�mathr�rr�r�rr#r(�dbmr-rkrrr&r)r.r6r7r8r:r�r�rWr[r}r�r�r�r�r�r�r�rpr\�max�
resolutionr�rbrrXrcrrrqr�rwr�r�rf�	_datetime�ImportErrorrrrr�<module>s�

	?9J

�!C4ysG

PK�Cu\�{�7�72future/backports/__pycache__/socket.cpython-39.pycnu�[���a

��?h/=�@s�dZddlmZddlmZddlmZddlmZddlmZddlZddlTddl	Z	ddl
Z
ddlZzddlZWne
y�dZYn0eed	d
�Zeedd�Zeed
d�ZddgZe�e	�e��eZe
j���d��rdiZded<ded<ded<ded<ded<ded<ded<ded <d!ed"<d#ed$<d%ed&<d'ed(<d)ed*<d+ed,<d-ed.<e�d/�Gd0d1�d1ej�Zd?d2d3�Zeejd4��r�d5d6�Zeed7��r�dedfd8d7�Z e!eeg�Z"Gd9d:�d:ej#�Z$d@d<d�Z%dd=lm&Z&e&dfd>d�Z'dS)AaThis module provides socket operations and some related functions.
On Unix, it supports IP (Internet Protocol) and Unix domain sockets.
On other systems, it only supports IP. Functions specific for a
socket are available as methods of the socket object.

Functions:

socket() -- create a new socket object
socketpair() -- create a pair of new socket objects [*]
fromfd() -- create a socket object from an open file descriptor [*]
fromshare() -- create a socket object from data received from socket.share() [*]
gethostname() -- return the current hostname
gethostbyname() -- map a hostname to its IP number
gethostbyaddr() -- map an IP number or hostname to DNS info
getservbyname() -- map a service name and a protocol name to a port number
getprotobyname() -- map a protocol name (e.g. 'tcp') to a number
ntohs(), ntohl() -- convert 16, 32 bit int from network to host byte order
htons(), htonl() -- convert 16, 32 bit int from host to network byte order
inet_aton() -- convert IP addr string (123.45.67.89) to 32-bit packed format
inet_ntoa() -- convert 32-bit packed format IP to string (123.45.67.89)
socket.getdefaulttimeout() -- get the default timeout value
socket.setdefaulttimeout() -- set the default timeout value
create_connection() -- connects to an address, with an optional timeout and
                       optional source address.

 [*] not available on all platforms!

Special objects:

SocketType -- type object for socket objects
error -- exception raised for I/O errors
has_ipv6 -- boolean value indicating if IPv6 is supported

Integer constants:

AF_INET, AF_UNIX -- socket domains (first argument to socket() call)
SOCK_STREAM, SOCK_DGRAM, SOCK_RAW -- socket types (second argument)

Many other constants may be defined; these may be used in calls to
the setsockopt() and getsockopt() methods.
�)�unicode_literals)�print_function)�division)�absolute_import)�superN)�*�EBADF�	�EAGAIN��EWOULDBLOCK�getfqdn�create_connection�winzThe operation was interrupted.i'zA bad file handle was passed.i'zPermission denied.i'z!A fault occurred on the network??i'z#An invalid operation was attempted.i&'z The socket operation would blocki3'z,A blocking operation is already in progress.i4'zThe network address is in use.i@'zThe connection has been reset.iF'zThe network has been shut down.iJ'zThe operation timed out.iL'zConnection refused.iM'zThe name is too long.iO'zThe host is down.iP'zThe host is unreachable.iQ'�errorTabcs�eZdZdZgd�Zeeddfdd�Zdd�Zd	d
�Z	dd�Z
d
d�Zdd�Zdd�Z
ddd�Zdd�Zejfdd�Zdd�Z�fdd�Z�ZS)�socketz:A subclass of _socket.socket adding the makefile() method.)�__weakref__�_io_refs�_closedrNcCs@|durtj�||||�ntj�|||||�d|_d|_dS)NrF)�_socketr�__init__rr)�self�family�type�proto�fileno�r�A/usr/local/lib/python3.9/site-packages/future/backports/socket.pyrcs
zsocket.__init__cCs|S�Nr�rrrr�	__enter__kszsocket.__enter__cGs|js|��dSr)r�close)r�argsrrr�__exit__nszsocket.__exit__cCsJtj�|�}|�d�rFd|jj|jjt|dd�r4dp6d|dd�f}|S)	z.Wrap __repr__() to reveal the real class name.z<socket objectz
<%s.%s%s%srFz
 [closed] ��N)rr�__repr__�
startswith�	__class__�
__module__�__name__�getattr)r�srrrr&rs

�zsocket.__repr__cCstd��dS)NzCannot serialize socket object)�	TypeErrorrrrr�__getstate__|szsocket.__getstate__cCs6t|���}|j|j|j|j|d�}|�|���|S)zjdup() -> socket object

        Return a new socket object connected to the same system resource.
        �r)�duprr(rrr�
settimeout�
gettimeout)r�fd�sockrrrr0sz
socket.dupcCsF|��\}}t|j|j|j|d�}t�dur>|��r>|�d�||fS)z�accept() -> (socket object, address info)

        Wait for an incoming connection.  Return a new socket
        representing the connection, and the address of the client.
        For IP sockets, the address info is a pair (hostaddr, port).
        r/NT)�_acceptrrrr�getdefaulttimeoutr2�setblocking)rr3�addrr4rrr�accept�s

z
socket.accept�rcKshd|vr|d}|d=nd}d|vr4|d}|d=nd}d|vrP|d}|d=nd}|D]}|dvrXtd��qXd|v}d|vp�|}	|	s�|s�J�d	|v}
d
}|	r�|d7}|r�|d7}t||�}|jd7_|dur�d}|d
kr�tj}|d
kr�|
s�td��|S|	�r|�rt�|||�}
n*|	�r.t�||�}
n|�s8J�t�||�}
|
�rN|
St�|
|||�}||_	|S)a
makefile(...) -> an I/O stream connected to the socket

        The arguments are as for io.open() after the filename,
        except the only mode characters supported are 'r', 'w' and 'b'.
        The semantics are similar too.  (XXX refactor to share code?)
        �newlineN�errors�encoding)r:�w�bz&invalid mode %r (only r, w, b allowed)r>r:r?r$����rz!unbuffered streams must be binary)
�
ValueError�SocketIOr�io�DEFAULT_BUFFER_SIZE�BufferedRWPair�BufferedReader�BufferedWriter�
TextIOWrapper�mode)rrJ�	bufferingZ_3to2kwargsr;r<r=�c�writing�reading�binary�rawmode�raw�buffer�textrrr�makefile�sN


zsocket.makefilecCs*|jdkr|jd8_|jr&|��dS)Nrr@)rrr!rrrr�_decref_socketios�s
zsocket._decref_socketioscCs|�|�dSr)r!)r�_ssrrr�_real_close�szsocket._real_closecCsd|_|jdkr|��dS)NTr)rrrWrrrrr!�s
zsocket.closecsd|_t���S)adetach() -> file descriptor

        Close the socket object without closing the underlying file descriptor.
        The object cannot be used after this call, but the file descriptor
        can be reused for other purposes.  The file descriptor is returned.
        T)rr�detachr�r(rrrX�sz
socket.detach)r:N)r*r)�__qualname__�__doc__�	__slots__�AF_INET�SOCK_STREAMrr r#r&r.r0r9rTrUrrrWr!rX�
__classcell__rrrYrr]s


0rcCst|�}t||||�S)z� fromfd(fd, family, type[, proto]) -> socket object

    Create a socket object from a duplicate of the given file
    descriptor.  The remaining arguments are the same as for socket().
    )r0r)r3rrr�nfdrrr�fromfd�sra�sharecCstddd|�S)z� fromshare(info) -> socket object

        Create a socket object from a the bytes object returned by
        socket.share(pid).
        r)r)�inforrr�	fromshare�srd�
socketpaircCsf|dur(zt}Wnty&t}Yn0t�|||�\}}t||||���}t||||���}||fS)aasocketpair([family[, type[, proto]]]) -> (socket object, socket object)

        Create a pair of socket objects from the sockets returned by the platform
        socketpair() function.
        The arguments are the same as for socket() except the default family is
        AF_UNIX if defined on the platform; otherwise, the default is AF_INET.
        N)�AF_UNIX�	NameErrorr]rrerrX)rrr�ar?rrrre�s
cspeZdZdZdd�Zdd�Zdd�Zdd	�Zd
d�Z�fdd
�Z	dd�Z
edd��Zedd��Z
dd�Z�ZS)rCz�Raw I/O implementation for stream sockets.

    This class supports the makefile() method on sockets.  It provides
    the raw I/O interface on top of a socket object.
    cCsZ|dvrtd|��tj�|�||_d|vr6|d7}||_d|v|_d|v|_d|_dS)N)r:r>�rw�rb�wb�rwbzinvalid mode: %rr?r:r>F)	rBrD�	RawIOBaser�_sock�_mode�_reading�_writing�_timeout_occurred)rr4rJrrrrs

zSocketIO.__init__c
Cs�|��|��|jrtd��z|j�|�WStyFd|_�Yqty�}z*|jdt	vrpWYd}~dS�WYd}~qd}~00qdS)a3Read up to len(b) bytes into the writable buffer *b* and return
        the number of bytes read.  If the socket is non-blocking and no bytes
        are available, None is returned.

        If *b* is non-empty, a 0 return value indicates that the connection
        was shutdown at the other end.
        z!cannot read from timed out objectTrN)
�_checkClosed�_checkReadablerr�IOErrorrn�	recv_into�timeout�errorr"�_blocking_errnos�rr?�errr�readinto)szSocketIO.readintoc
Csf|��|��z|j�|�WSty`}z*|jdtvrJWYd}~dS�WYd}~n
d}~00dS)aWrite the given bytes or bytearray object *b* to the socket
        and return the number of bytes written.  This can be less than
        len(b) if not all data could be written.  If the socket is
        non-blocking and no bytes could be written None is returned.
        rN)rs�_checkWritablern�sendrxr"ryrzrrr�writeBszSocketIO.writecCs|jrtd��|jS)z2True if the SocketIO is open for reading.
        �I/O operation on closed socket.)�closedrBrprrrr�readableRszSocketIO.readablecCs|jrtd��|jS)z2True if the SocketIO is open for writing.
        r�)r�rBrqrrrr�writableYszSocketIO.writablecs|jrtd��t���S)z2True if the SocketIO is open for seeking.
        r�)r�rBr�seekablerrYrrr�`szSocketIO.seekablecCs|��|j��S)z=Return the file descriptor of the underlying socket.
        )rsrnrrrrrrgszSocketIO.filenocCs|js|��SdSdS)NrA)r�rrrrr�namemsz
SocketIO.namecCs|jSr)rorrrrrJtsz
SocketIO.modecCs*|jr
dStj�|�|j��d|_dS)z�Close the SocketIO object.  This doesn't close the underlying
        socket, except if all references to it have disappeared.
        N)r�rDrmr!rnrUrrrrr!xs

zSocketIO.close)r*r)rZr[rr|rr�r�r�r�propertyr�rJr!r_rrrYrrCs

rCr$cCsj|��}|r|dkrt�}zt|�\}}}Wnty>Yn(0|�d|�|D]}d|vrPqfqP|}|S)aGet fully qualified domain name from name.

    An empty argument is interpreted as meaning the local host.

    First the hostname returned by gethostbyaddr() is checked, then
    possibly existing aliases. In case no FQDN is available, hostname
    from gethostname() is returned.
    z0.0.0.0r�.)�strip�gethostname�
gethostbyaddrrx�insert)r��hostname�aliases�ipaddrsrrrr
�s	)�_GLOBAL_DEFAULT_TIMEOUTcCs�|\}}d}t||dt�D]�}|\}}}	}
}d}z@t|||	�}|turP|�|�|r^|�|�|�|�|WSty�}
z |
}|dur�|��WYd}
~
qd}
~
00q|dur�|�ntd��dS)adConnect to *address* and return the socket object.

    Convenience function.  Connect to *address* (a 2-tuple ``(host,
    port)``) and return the socket object.  Passing the optional
    *timeout* parameter will set the timeout on the socket instance
    before attempting to connect.  If no *timeout* is supplied, the
    global default timeout setting returned by :func:`getdefaulttimeout`
    is used.  If *source_address* is set it must be a tuple of (host, port)
    for the socket to bind as a source address before making the connection.
    An host of '' or port 0 tells the OS to use the default.
    Nrz!getaddrinfo returns an empty list)	�getaddrinfor^rr�r1�bind�connectrxr!)�addressrw�source_address�host�port�err�res�af�socktyper�	canonname�sar4�_rrrr�s(



 )r)r$)(r[�
__future__rrrrZfuture.builtinsrr�os�sysrD�errno�ImportErrorr+rr
r�__all__�extend�_get_exports_listrZ_realsocket�platform�lowerr'r�appendra�hasattrrdr^re�setryrmrCr
r�rrrrr�<module>sd+


	w
�PK�Cu\�]��p�p0future/backports/__pycache__/misc.cpython-39.pycnu�[���a

��?h'��@s�dZddlmZddlZddlmZddlmZ	m
ZddlZddl
ZddlmZddlmZmZmZddlmZmZmZmZdd	lmZmZm Z m!Z!m"Z"e r�dd
l#m$Z$m%Z%ndd
l&m$Z$m%Z%dd�Zdd
lm'Z'e!r�d4dd�Z(nddlm(Z(e"�r(zddl)m*Z*Wn e+�y$ddl,m*Z*Yn0n2zddl-m*Z*Wn e+�yXddl.m*Z*Yn0d5dd�Z/Gdd�de0�Z1Gdd�de2�Z3zddlmZddl
m4Z4Wne+�y�Yn0dd�Z5Gdd�de2�Z6d d!�Z7d6d"d�Z(Gd#d$�d$e%�Z8dd%lm9Z9e9dfd&d'�Z:d(d)�Z;e3Z<e6Z=e7Z>e(Z?eZ@e5ZAe/ZBe8ZCe:ZDe;ZEejFd*k�r�dd+l#m3Z3m6Z6ddlm(Z(dd,lGm;Z;zdd-lm7Z7Wne+�y�Yn0dd.lm:Z:ejFd/k�r�ddlmZdd0l#m5Z5ejFd1k�r�dd2lHm/Z/dd3l#m8Z8dS)7a�
Miscellaneous function (re)definitions from the Py3.4+ standard library
for Python 2.6/2.7.

- math.ceil                (for Python 2.7)
- collections.OrderedDict  (for Python 2.6)
- collections.Counter      (for Python 2.6)
- collections.ChainMap     (for all versions prior to Python 3.3)
- itertools.count          (for Python 2.6, with step parameter)
- subprocess.check_output  (for Python 2.6)
- reprlib.recursive_repr   (for Python 2.6+)
- functools.cmp_to_key     (for Python 2.6)
�)�absolute_importN)�ceil)�
itemgetter�eq)�proxy)�repeat�chain�starmap)�getaddrinfo�SOCK_STREAM�error�socket)�	iteritems�
itervalues�PY2�PY26�PY3)�Mapping�MutableMappingcCstt|��S)zZ
    Return the ceiling of x as an int.
    This is the smallest integral value >= x.
    )�int�oldceil)�x�r�?/usr/local/lib/python3.9/site-packages/future/backports/misc.pyr#sr)�islice�ccs|V||7}qdS�Nr��start�steprrr�count3sr )r )�	get_ident�...cs�fdd�}|S)zGDecorator to make a repr function return fillvalue for a recursive callcsLt�����fdd�}t�d�|_t�d�|_t�d�|_t�di�|_|S)Nc	sLt|�t�f}|�vr�S��|�z�|�}W��|�n��|�0|Sr)�idr!�add�discard)�self�key�result)�	fillvalue�repr_running�
user_functionrr�wrapperMs

z<recursive_repr.<locals>.decorating_function.<locals>.wrapper�
__module__�__doc__�__name__�__annotations__)�set�getattrr-r.r/r0)r+r,�r))r*r+r�decorating_functionJsz+recursive_repr.<locals>.decorating_functionr)r)r4rr3r�recursive_reprGsr5c@seZdZdZdS)�_Link)�prev�nextr'�__weakref__N)r/r-�__qualname__�	__slots__rrrrr6jsr6c@s�eZdZdZdd�Zejeefdd�Zej	fdd�Z	dd	�Z
d
d�Zdd
�Zd$dd�Z
d%dd�Zdd�ZejZZejZejZejZejZe�Zefdd�Zd&dd�Ze�dd��Zdd�Zdd�Zed'd d!��Z d"d#�Z!dS)(�OrderedDictz)Dictionary that remembers insertion ordercOs�|std��|d}|dd�}t|�dkr<tdt|���z
|jWn<ty�t�|_t|j�|_}||_|_i|_	Yn0|j
|i|��dS)z�Initialize an ordered dictionary.  The signature is the same as
        regular dictionaries, but keyword arguments are not recommended because
        their insertion order is arbitrary.

        z?descriptor '__init__' of 'OrderedDict' object needs an argumentrrN�$expected at most 1 arguments, got %d)�	TypeError�len�_OrderedDict__root�AttributeErrorr6�_OrderedDict__hardroot�_proxyr7r8�_OrderedDict__map�_OrderedDict__update)�args�kwdsr&�rootrrr�__init__|s
zOrderedDict.__init__c	CsZ||vrJ|�|j|<}|j}|j}||||_|_|_||_||�|_||||�dS)z!od.__setitem__(i, y) <==> od[i]=yN)rDr@r7r8r')	r&r'�valueZdict_setitemr�Link�linkrH�lastrrr�__setitem__�s
zOrderedDict.__setitem__cCs2|||�|j�|�}|j}|j}||_||_dS)z od.__delitem__(y) <==> del od[y]N)rD�popr7r8)r&r'Zdict_delitemrL�	link_prev�	link_nextrrr�__delitem__�s
zOrderedDict.__delitem__ccs(|j}|j}||ur$|jV|j}qdS)zod.__iter__() <==> iter(od)N)r@r8r'�r&rH�currrrr�__iter__�s
zOrderedDict.__iter__ccs(|j}|j}||ur$|jV|j}qdS)z#od.__reversed__() <==> reversed(od)N)r@r7r'rSrrr�__reversed__�s
zOrderedDict.__reversed__cCs*|j}||_|_|j��t�|�dS)z.od.clear() -> None.  Remove all items from od.N)r@r7r8rD�clear�dict)r&rHrrrrW�s
zOrderedDict.clearTcCsj|std��|j}|r0|j}|j}||_||_n|j}|j}||_||_|j}|j|=t�||�}||fS)z�od.popitem() -> (k, v), return and remove a (key, value) pair.
        Pairs are returned in LIFO order if last is true or FIFO order if false.

        zdictionary is empty)�KeyErrorr@r7r8r'rDrXrO)r&rMrHrLrPrQr'rJrrr�popitem�s zOrderedDict.popitemcCsn|j|}|j}|j}||_||_|j}|rL|j}||_||_||_|_n|j}||_||_||_|_dS)z�Move an existing element to the end (or beginning if last==False).

        Raises KeyError if the element does not exist.
        When last=True, acts like a fast version of self[key]=self.pop(key).

        N)rDr7r8r@)r&r'rMrLrPrQrH�firstrrr�move_to_end�s
zOrderedDict.move_to_endcCsVtj}t|�d}||j�}|||j�d7}|||j�|7}|||j�|7}|S)Nr�)�sys�	getsizeofr?�__dict__rDrBr@)r&�sizeof�n�sizerrr�
__sizeof__�s
zOrderedDict.__sizeof__cCs0||vr||}||=|S||jur,t|��|S)z�od.pop(k[,d]) -> v, remove specified key and return the corresponding
        value.  If key is not found, d is returned if given, otherwise KeyError
        is raised.

        )�_OrderedDict__markerrY)r&r'�defaultr(rrrrOs
zOrderedDict.popNcCs||vr||S|||<|S)zDod.setdefault(k[,d]) -> od.get(k,d), also set od[k]=d if k not in odr�r&r'rfrrr�
setdefaultszOrderedDict.setdefaultcCs*|sd|jjfSd|jjt|���fS)zod.__repr__() <==> repr(od)�%s()z%s(%r))�	__class__r/�list�items�r&rrr�__repr__szOrderedDict.__repr__cCsDt|���}tt��D]}|�|d�q|jd|p4ddt|���fS)z%Return state information for picklingNr)�vars�copyr<rOrj�iterrl)r&�	inst_dict�krrr�
__reduce__ szOrderedDict.__reduce__cCs
|�|�S)z!od.copy() -> a shallow copy of od�rjrmrrrrp'szOrderedDict.copycCs|�}|D]}|||<q
|S)zOD.fromkeys(S[, v]) -> New ordered dictionary with keys from S.
        If not specified, the value defaults to None.

        r)�cls�iterablerJr&r'rrr�fromkeys+s
zOrderedDict.fromkeyscCs2t|t�r&t�||�o$ttt||��St�||�S)z�od.__eq__(y) <==> od==y.  Comparison to another OD is order-sensitive
        while comparison to a regular mapping is order-insensitive.

        )�
isinstancer<rX�__eq__�all�map�_eq�r&�otherrrrrz6s
zOrderedDict.__eq__)T)T)N)N)"r/r-r:r.rIrXrNrCr6rRrUrVrWrZr\rdr�updaterE�keys�valuesrl�__ne__�objectrerOrhr5rnrtrp�classmethodrxrzrrrrr<ms4�
		

	



r<)r)�nlargestcCs&|j}|D]}||d�d||<q
dS)z!Tally elements from the iterable.rrN)�get)�mappingrwZmapping_get�elemrrr�_count_elementsLsr�cs�eZdZdZ�fdd�Zdd�Zd/dd�Zd	d
�Zed0dd��Z	�fd
d�Z
dd�Zdd�Zdd�Z
�fdd�Zdd�Zdd�Zdd�Zdd�Zdd �Zd!d"�Zd#d$�Zd%d&�Zd'd(�Zd)d*�Zd+d,�Zd-d.�Z�ZS)1�Countera�Dict subclass for counting hashable items.  Sometimes called a bag
    or multiset.  Elements are stored as dictionary keys and their counts
    are stored as dictionary values.

    >>> c = Counter('abcdeabcdabcaba')  # count elements from a string

    >>> c.most_common(3)                # three most common elements
    [('a', 5), ('b', 4), ('c', 3)]
    >>> sorted(c)                       # list all unique elements
    ['a', 'b', 'c', 'd', 'e']
    >>> ''.join(sorted(c.elements()))   # list elements with repetitions
    'aaaaabbbbcccdde'
    >>> sum(c.values())                 # total of all counts
    15

    >>> c['a']                          # count of letter 'a'
    5
    >>> for elem in 'shazam':           # update counts from an iterable
    ...     c[elem] += 1                # by adding 1 to each element's count
    >>> c['a']                          # now there are seven 'a'
    7
    >>> del c['b']                      # remove all 'b'
    >>> c['b']                          # now there are zero 'b'
    0

    >>> d = Counter('simsalabim')       # make another counter
    >>> c.update(d)                     # add in the second counter
    >>> c['a']                          # now there are nine 'a'
    9

    >>> c.clear()                       # empty the counter
    >>> c
    Counter()

    Note:  If a count is set to zero or reduced to zero, it will remain
    in the counter until the entry is deleted or the counter is cleared:

    >>> c = Counter('aaabbc')
    >>> c['b'] -= 2                     # reduce the count of 'b' by two
    >>> c.most_common()                 # 'b' is still in, but its count is zero
    [('a', 3), ('c', 1), ('b', 0)]

    cs^|std��|d}|dd�}t|�dkr<tdt|���tt|���|j|i|��dS)a	Create a new, empty Counter object.  And if given, count elements
        from an input iterable.  Or, initialize the count from another mapping
        of elements to their counts.

        >>> c = Counter()                           # a new, empty counter
        >>> c = Counter('gallahad')                 # a new counter from an iterable
        >>> c = Counter({'a': 4, 'b': 2})           # a new counter from a mapping
        >>> c = Counter(a=4, b=2)                   # a new counter from keyword args

        z;descriptor '__init__' of 'Counter' object needs an argumentrrNr=)r>r?�superr�rIr�)rFrGr&rurrrI�szCounter.__init__cCsdS)z1The count of elements not in the Counter is zero.rr�r&r'rrr�__missing__�szCounter.__missing__NcCs6|durt|��td�dd�Stj||��td�d�S)z�List the n most common elements and their counts from the most
        common to the least.  If n is None, then list all element counts.

        >>> Counter('abcdeabcdabcaba').most_common(3)
        [('a', 5), ('b', 4), ('c', 3)]

        NrT)r'�reverse�r')�sortedrl�_itemgetter�_heapqr�)r&rbrrr�most_common�s	zCounter.most_commoncCst�tt|����S)a�Iterator over elements repeating each as many times as its count.

        >>> c = Counter('ABCABC')
        >>> sorted(c.elements())
        ['A', 'A', 'B', 'B', 'C', 'C']

        # Knuth's example for prime factors of 1836:  2**2 * 3**3 * 17**1
        >>> prime_factors = Counter({2: 2, 3: 3, 17: 1})
        >>> product = 1
        >>> for factor in prime_factors.elements():     # loop over factors
        ...     product *= factor                       # and multiply them
        >>> product
        1836

        Note, if an element's count has been set to zero or is a negative
        number, elements() will ignore it.

        )�_chain�
from_iterable�_starmap�_repeatrlrmrrr�elements�szCounter.elementscCstd��dS)Nz@Counter.fromkeys() is undefined.  Use Counter(iterable) instead.)�NotImplementedError)rvrw�vrrrrx�s�zCounter.fromkeyscs�|std��|d}|dd�}t|�dkr<tdt|���|rH|dnd}|dur�t|t�r�|r�|j}|��D]\}}|||d�||<qpq�tt|��|�n
t	||�|r�|�|�dS)a�Like dict.update() but add counts instead of replacing them.

        Source can be an iterable, a dictionary, or another Counter instance.

        >>> c = Counter('which')
        >>> c.update('witch')           # add elements from another iterable
        >>> d = Counter('watch')
        >>> c.update(d)                 # add elements from another counter
        >>> c['h']                      # four 'h' in which, witch, and watch
        4

        z9descriptor 'update' of 'Counter' object needs an argumentrrNr=)
r>r?ryrr�rlr�r�r�r��rFrGr&rw�self_getr�r rurrr��s"

zCounter.updatecOs�|std��|d}|dd�}t|�dkr<tdt|���|rH|dnd}|dur�|j}t|t�r�|��D]\}}||d�|||<qln|D]}||d�d||<q�|r�|�|�dS)a�Like dict.update() but subtracts counts instead of replacing them.
        Counts can be reduced below zero.  Both the inputs and outputs are
        allowed to contain zero and negative counts.

        Source can be an iterable, a dictionary, or another Counter instance.

        >>> c = Counter('which')
        >>> c.subtract('witch')             # subtract elements from another iterable
        >>> c.subtract(Counter('watch'))    # subtract elements from another counter
        >>> c['h']                          # 2 in which, minus 1 in witch, minus 1 in watch
        0
        >>> c['w']                          # 1 in which, minus 1 in witch, minus 1 in watch
        -1

        z;descriptor 'subtract' of 'Counter' object needs an argumentrrNr=)r>r?r�ryrrl�subtractr�rrrr��s 
zCounter.subtractcCs
|�|�S)zReturn a shallow copy.rurmrrrrpszCounter.copycCs|jt|�ffSr)rjrXrmrrrrtszCounter.__reduce__cs||vrtt|��|�dS)zGLike dict.__delitem__() but does not raise KeyError for missing values.N)r�r�rR)r&r�rurrrRszCounter.__delitem__cCsd|sd|jjSz(d�tdj|����}d|jj|fWSty^d�|jjt|��YS0dS)Nri�, z%r: %rz%s({%s})z
{0}({1!r}))	rjr/�joinr|�__mod__r�r>�formatrX)r&rlrrrrn#szCounter.__repr__cCspt|t�stSt�}|��D]$\}}|||}|dkr|||<q|��D] \}}||vrJ|dkrJ|||<qJ|S)zAdd counts from two counters.

        >>> Counter('abbb') + Counter('bcc')
        Counter({'b': 4, 'c': 2, 'a': 1})

        r�ryr��NotImplementedrl�r&rr(r�r �newcountrrr�__add__6s


zCounter.__add__cCstt|t�stSt�}|��D]$\}}|||}|dkr|||<q|��D]$\}}||vrJ|dkrJd|||<qJ|S)z� Subtract count, but keep only results with positive counts.

        >>> Counter('abbbc') - Counter('bccd')
        Counter({'b': 2, 'a': 1})

        rr�r�rrr�__sub__Is

zCounter.__sub__cCs|t|t�stSt�}|��D]0\}}||}||kr8|n|}|dkr|||<q|��D] \}}||vrV|dkrV|||<qV|S)z�Union is the maximum of value in either of the input counters.

        >>> Counter('abbb') | Counter('bcc')
        Counter({'b': 3, 'c': 2, 'a': 1})

        rr��r&rr(r�r �other_countr�rrr�__or__\s


zCounter.__or__cCsRt|t�stSt�}|��D]0\}}||}||kr8|n|}|dkr|||<q|S)z� Intersection is the minimum of corresponding counts.

        >>> Counter('abbb') & Counter('bcc')
        Counter({'b': 1})

        rr�r�rrr�__and__ps

zCounter.__and__cCs
|t�S)zEAdds an empty counter, effectively stripping negative and zero counts�r�rmrrr�__pos__�szCounter.__pos__cCs
t�|S)z{Subtracts from an empty counter.  Strips positive and zero counts,
        and flips the sign on negative counts.

        r�rmrrr�__neg__�szCounter.__neg__cCs&dd�|��D�}|D]
}||=q|S)z?Internal method to strip elements with a negative or zero countcSsg|]\}}|dks|�qS)rr)�.0r�r rrr�
<listcomp>��z*Counter._keep_positive.<locals>.<listcomp>)rl)r&�nonpositiver�rrr�_keep_positive�szCounter._keep_positivecCs*|��D]\}}|||7<q|��S)z�Inplace add from another counter, keeping only positive counts.

        >>> c = Counter('abbb')
        >>> c += Counter('bcc')
        >>> c
        Counter({'b': 4, 'c': 2, 'a': 1})

        �rlr��r&rr�r rrr�__iadd__�s	zCounter.__iadd__cCs*|��D]\}}|||8<q|��S)z�Inplace subtract counter, but keep only results with positive counts.

        >>> c = Counter('abbbc')
        >>> c -= Counter('bccd')
        >>> c
        Counter({'b': 2, 'a': 1})

        r�r�rrr�__isub__�s	zCounter.__isub__cCs2|��D] \}}||}||kr|||<q|��S)z�Inplace union is the maximum of value from either counter.

        >>> c = Counter('abbb')
        >>> c |= Counter('bcc')
        >>> c
        Counter({'b': 3, 'c': 2, 'a': 1})

        r�)r&rr�r�r rrr�__ior__�s
	
zCounter.__ior__cCs2|��D] \}}||}||kr|||<q|��S)z�Inplace intersection is the minimum of corresponding counts.

        >>> c = Counter('abbb')
        >>> c &= Counter('bcc')
        >>> c
        Counter({'b': 1})

        r�)r&rr�r r�rrr�__iand__�s
	
zCounter.__iand__)N)N)r/r-r:r.rIr�r�r�r�rxr�r�rprtrRrnr�r�r�r�r�r�r�r�r�r�r��
__classcell__rrrurr�Rs02

)#

r�cOshd|vrtd��tj|dtji|��}|��\}}|��}|rd|�d�}|durX|d}t�||��|S)z[
    For Python 2.6 compatibility: see
    http://stackoverflow.com/questions/4814970/
    �stdoutz3stdout argument not allowed, it will be overridden.rFNr)�
ValueError�
subprocess�Popen�PIPE�communicate�pollr��CalledProcessError)�	popenargs�kwargs�process�outputZ
unused_err�retcode�cmdrrr�check_output�s
r�ccs|V||7}qdS)z�
    ``itertools.count`` in Py 2.6 doesn't accept a step
    parameter. This is an enhanced version of ``itertools.count``
    for Py2.6 equivalent to ``itertools.count`` in Python 2.7+.
    Nrrrrrr �sc@s�eZdZdZdd�Zdd�Zdd�Zd'd	d
�Zdd�Zd
d�Z	dd�Z
dd�ZeZe
�dd��Zedd��Zdd�ZeZd(dd�Zedd��Zdd�Zdd �Zd!d"�Zd#d$�Zd%d&�ZdS))�ChainMapa� A ChainMap groups multiple dicts (or other mappings) together
    to create a single, updateable view.

    The underlying mappings are stored in a list.  That list is public and can
    accessed or updated using the *maps* attribute.  There is no other state.

    Lookups search the underlying mappings successively until a key is found.
    In contrast, writes, updates, and deletions only operate on the first
    mapping.

    cGst|�pig|_dS)z�Initialize a ChainMap by setting *maps* to the given mappings.
        If no mappings are provided, a single empty dictionary is used.

        N)rk�maps)r&r�rrrrI�szChainMap.__init__cCst|��dSr)rYr�rrrr�szChainMap.__missing__c	Cs8|jD]&}z||WSty*Yq0q|�|�Sr)r�rYr�)r&r'r�rrr�__getitem__s
zChainMap.__getitem__NcCs||vr||S|Srrrgrrrr�szChainMap.getcCstt�j|j��Sr)r?r1�unionr�rmrrr�__len__szChainMap.__len__cCstt�j|j��Sr)rqr1r�r�rmrrrrUszChainMap.__iter__cst�fdd�|jD��S)Nc3s|]}�|vVqdSrr)r��mr�rr�	<genexpr>r�z(ChainMap.__contains__.<locals>.<genexpr>��anyr�r�rr�r�__contains__szChainMap.__contains__cCs
t|j�Srr�rmrrr�__bool__szChainMap.__bool__cCsd�|d�tt|j���S)Nz{0.__class__.__name__}({1})r�)r�r�r|�reprr�rmrrrrn!s�zChainMap.__repr__cGs|tj|g|�R��S)z?Create a ChainMap with a single dict created from the iterable.)rXrx)rvrwrFrrrrx&szChainMap.fromkeyscCs&|j|jd��g|jdd��R�S)zHNew ChainMap or subclass with a new copy of maps[0] and refs to maps[1:]rrN)rjr�rprmrrrrp+sz
ChainMap.copycCs |duri}|j|g|j�R�S)z�
        New ChainMap with a new map followed by all previous maps. If no
        map is provided, an empty dict is used.
        N�rjr�)r&r�rrr�	new_child1szChainMap.new_childcCs|j|jdd��S)zNew ChainMap from maps[1:].rNr�rmrrr�parents:szChainMap.parentscCs||jd|<dS�Nr)r�)r&r'rJrrrrN?szChainMap.__setitem__cCs6z|jd|=Wn ty0td�|���Yn0dS)Nr�)Key not found in the first mapping: {0!r})r�rYr�r�rrrrRBszChainMap.__delitem__cCs0z|jd��WSty*td��Yn0dS)zPRemove and return an item pair from maps[0]. Raise KeyError is maps[0] is empty.rz#No keys found in the first mapping.N)r�rZrYrmrrrrZHszChainMap.popitemcGs@z|jdj|g|�R�WSty:td�|���Yn0dS)zWRemove *key* from maps[0] and return its value. Raise KeyError if *key* not in maps[0].rr�N)r�rOrYr�)r&r'rFrrrrOOszChainMap.popcCs|jd��dS)z'Clear maps[0], leaving maps[1:] intact.rN)r�rWrmrrrrWVszChainMap.clear)N)N)r/r-r:r.rIr�r�r�r�rUr�r��__nonzero__r5rnr�rxrp�__copy__r��propertyr�rNrRrZrOrWrrrrr��s0



	
r�)�_GLOBAL_DEFAULT_TIMEOUTcCs�|\}}d}t||dt�D]�}|\}}}	}
}d}z@t|||	�}|turP|�|�|r^|�|�|�|�|WSty�}
z |
}|dur�|��WYd}
~
qd}
~
00q|dur�|�ntd��dS)a�Backport of 3-argument create_connection() for Py2.6.

    Connect to *address* and return the socket object.

    Convenience function.  Connect to *address* (a 2-tuple ``(host,
    port)``) and return the socket object.  Passing the optional
    *timeout* parameter will set the timeout on the socket instance
    before attempting to connect.  If no *timeout* is supplied, the
    global default timeout setting returned by :func:`getdefaulttimeout`
    is used.  If *source_address* is set it must be a tuple of (host, port)
    for the socket to bind as a source address before making the connection.
    An host of '' or port 0 tells the OS to use the default.
    Nrz!getaddrinfo returns an empty list)	r
rr
r��
settimeout�bind�connectr�close)�address�timeout�source_address�host�port�err�res�af�socktype�proto�	canonname�sa�sock�_rrr�create_connection`s(



 r�csG�fdd�dt�}|S)z,Convert a cmp= function into a key= functioncsjeZdZdgZdd�Z�fdd�Z�fdd�Z�fdd	�Z�fd
d�Z�fdd
�Z	�fdd�Z
dd�ZdS)zcmp_to_key.<locals>.K�objcWs
||_dSr�r�)r&r�rFrrrrI�szcmp_to_key.<locals>.K.__init__cs�|j|j�dkSr�r�r~��mycmprr�__lt__�szcmp_to_key.<locals>.K.__lt__cs�|j|j�dkSr�r�r~r�rr�__gt__�szcmp_to_key.<locals>.K.__gt__cs�|j|j�dkSr�r�r~r�rrrz�szcmp_to_key.<locals>.K.__eq__cs�|j|j�dkSr�r�r~r�rr�__le__�szcmp_to_key.<locals>.K.__le__cs�|j|j�dkSr�r�r~r�rr�__ge__�szcmp_to_key.<locals>.K.__ge__cs�|j|j�dkSr�r�r~r�rrr��szcmp_to_key.<locals>.K.__ne__cSstd��dS)Nzhash not implemented)r>rmrrr�__hash__�szcmp_to_key.<locals>.K.__hash__N)r/r-r:r;rIr�r�rzr�r�r�r�rr�rr�K�sr�)r�)r�r�rr�r�
cmp_to_key�sr�)r]�)r<r�)r�)r�)r�)�r)r�)r�r�)r5)r�)rr)r")rr)Ir.�
__future__rr��mathrr�operatorrr�rr}r^�heapqr��_weakrefrrC�	itertoolsrr�rr�r	r�r
r
rrZfuture.utilsrrrrr�collectionsrr�collections.abcrr �_threadr!�ImportError�
_dummy_thread�thread�dummy_threadr5r�r6rXr<r�r�r�r�r�r�r�r��_OrderedDictZ_CounterZ
_check_output�_count�_ceilZ__count_elements�_recursive_repr�	_ChainMap�_create_connectionZ_cmp_to_key�version_info�	functools�reprlibrrrr�<module>s�
#V|
l�
)PK�Cu\��lr�V�V8future/backports/__pycache__/socketserver.cpython-39.pycnu�[���a

��?h�^�@s�dZddlmZmZdZddlZddlZddlZddlZddl	Z	zddl
Z
WneyfddlZ
Yn0gd�Z
eed�r�e
�gd��dd	�ZGd
d�de�ZGdd
�d
e�ZGdd�de�ZGdd�de�ZGdd�de�ZGdd�dee�ZGdd�dee�ZGdd�dee�ZGdd�dee�Zeed��rxGdd�de�ZGdd�de�ZGd d!�d!ee�ZGd"d#�d#ee�ZGd$d%�d%e�ZGd&d'�d'e�Z Gd(d)�d)e�Z!dS)*a�Generic socket server classes.

This module tries to capture the various aspects of defining a server:

For socket-based servers:

- address family:
        - AF_INET{,6}: IP (Internet Protocol) sockets (default)
        - AF_UNIX: Unix domain sockets
        - others, e.g. AF_DECNET are conceivable (see <socket.h>
- socket type:
        - SOCK_STREAM (reliable stream, e.g. TCP)
        - SOCK_DGRAM (datagrams, e.g. UDP)

For request-based servers (including socket-based):

- client address verification before further looking at the request
        (This is actually a hook for any processing that needs to look
         at the request before anything else, e.g. logging)
- how to handle multiple requests:
        - synchronous (one request is handled at a time)
        - forking (each request is handled by a new process)
        - threading (each request is handled by a new thread)

The classes in this module favor the server type that is simplest to
write: a synchronous TCP/IP server.  This is bad class design, but
save some typing.  (There's also the issue that a deep class hierarchy
slows down method lookups.)

There are five classes in an inheritance diagram, four of which represent
synchronous servers of four types:

        +------------+
        | BaseServer |
        +------------+
              |
              v
        +-----------+        +------------------+
        | TCPServer |------->| UnixStreamServer |
        +-----------+        +------------------+
              |
              v
        +-----------+        +--------------------+
        | UDPServer |------->| UnixDatagramServer |
        +-----------+        +--------------------+

Note that UnixDatagramServer derives from UDPServer, not from
UnixStreamServer -- the only difference between an IP and a Unix
stream server is the address family, which is simply repeated in both
unix server classes.

Forking and threading versions of each type of server can be created
using the ForkingMixIn and ThreadingMixIn mix-in classes.  For
instance, a threading UDP server class is created as follows:

        class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass

The Mix-in class must come first, since it overrides a method defined
in UDPServer! Setting the various member variables also changes
the behavior of the underlying server mechanism.

To implement a service, you must derive a class from
BaseRequestHandler and redefine its handle() method.  You can then run
various versions of the service by combining one of the server classes
with your request handler class.

The request handler class must be different for datagram or stream
services.  This can be hidden by using the request handler
subclasses StreamRequestHandler or DatagramRequestHandler.

Of course, you still have to use your head!

For instance, it makes no sense to use a forking server if the service
contains state in memory that can be modified by requests (since the
modifications in the child process would never reach the initial state
kept in the parent process and passed to each child).  In this case,
you can use a threading server, but you will probably have to use
locks to avoid two requests that come in nearly simultaneous to apply
conflicting changes to the server state.

On the other hand, if you are building e.g. an HTTP server, where all
data is stored externally (e.g. in the file system), a synchronous
class will essentially render the service "deaf" while one request is
being handled -- which may be for a very long time if a client is slow
to read all the data it has requested.  Here a threading or forking
server is appropriate.

In some cases, it may be appropriate to process part of a request
synchronously, but to finish processing in a forked child depending on
the request data.  This can be implemented by using a synchronous
server and doing an explicit fork in the request handler class
handle() method.

Another approach to handling multiple simultaneous requests in an
environment that supports neither threads nor fork (or where these are
too expensive or inappropriate for the service) is to maintain an
explicit table of partially finished requests and to use select() to
decide which request to work on next (or whether to handle a new
incoming request).  This is particularly important for stream services
where each client can potentially be connected for a long time (if
threads or subprocesses cannot be used).

Future work:
- Standard classes for Sun RPC (which uses either UDP or TCP)
- Standard mix-in classes to implement various authentication
  and encryption schemes
- Standard framework for select-based multiplexing

XXX Open problems:
- What to do with out-of-band data?

BaseServer:
- split generic "request" functionality out into BaseServer class.
  Copyright (C) 2000  Luke Kenneth Casson Leighton <lkcl@samba.org>

  example: read entries from a SQL database (requires overriding
  get_request() to return a table entry from the database).
  entry is processed by a RequestHandlerClass.

�)�absolute_import�print_functionz0.4N)�	TCPServer�	UDPServer�ForkingUDPServer�ForkingTCPServer�ThreadingUDPServer�ThreadingTCPServer�BaseRequestHandler�StreamRequestHandler�DatagramRequestHandler�ThreadingMixIn�ForkingMixIn�AF_UNIX)�UnixStreamServer�UnixDatagramServer�ThreadingUnixStreamServer�ThreadingUnixDatagramServerc
GsDz
||�WSty<}z|jtjkr(�WYd}~qd}~00qdS)z*restart a system call interrupted by EINTRN)�OSError�errnoZEINTR)�func�args�e�r�G/usr/local/lib/python3.9/site-packages/future/backports/socketserver.py�_eintr_retry�s

rc@s�eZdZdZdZdd�Zdd�Zd"dd	�Zd
d�Zdd
�Z	dd�Z
dd�Zdd�Zdd�Z
dd�Zdd�Zdd�Zdd�Zdd�Zd d!�ZdS)#�
BaseServera�Base class for server classes.

    Methods for the caller:

    - __init__(server_address, RequestHandlerClass)
    - serve_forever(poll_interval=0.5)
    - shutdown()
    - handle_request()  # if you do not use serve_forever()
    - fileno() -> int   # for select()

    Methods that may be overridden:

    - server_bind()
    - server_activate()
    - get_request() -> request, client_address
    - handle_timeout()
    - verify_request(request, client_address)
    - server_close()
    - process_request(request, client_address)
    - shutdown_request(request)
    - close_request(request)
    - service_actions()
    - handle_error()

    Methods for derived classes:

    - finish_request(request, client_address)

    Class variables that may be overridden by derived classes or
    instances:

    - timeout
    - address_family
    - socket_type
    - allow_reuse_address

    Instance variables:

    - RequestHandlerClass
    - socket

    NcCs ||_||_t��|_d|_dS)�/Constructor.  May be extended, do not override.FN)�server_address�RequestHandlerClass�	threading�Event�_BaseServer__is_shut_down�_BaseServer__shutdown_request)�selfrrrrr�__init__�s
zBaseServer.__init__cCsdS�zSCalled by constructor to activate the server.

        May be overridden.

        Nr�r$rrr�server_activate�szBaseServer.server_activate��?cCsp|j��zN|jsFttj|ggg|�\}}}||vr<|��|��qWd|_|j��nd|_|j��0dS)z�Handle one request at a time until shutdown.

        Polls for shutdown every poll_interval seconds. Ignores
        self.timeout. If you need to do periodic tasks, do them in
        another thread.
        FN)r"�clearr#r�select�_handle_request_noblock�service_actions�set)r$�
poll_interval�r�wrrrr�
serve_forever�s
�
�zBaseServer.serve_forevercCsd|_|j��dS)z�Stops the serve_forever loop.

        Blocks until the loop has finished. This must be called while
        serve_forever() is running in another thread, or it will
        deadlock.
        TN)r#r"�waitr'rrr�shutdown�szBaseServer.shutdowncCsdS)z�Called by the serve_forever() loop.

        May be overridden by a subclass / Mixin to implement any code that
        needs to be run during the loop.
        Nrr'rrrr-szBaseServer.service_actionscCsd|j��}|dur|j}n|jdur0t||j�}ttj|ggg|�}|dsX|��dS|��dS)zOHandle one request, possibly blocking.

        Respects self.timeout.
        Nr)�socket�
gettimeout�timeout�minrr+�handle_timeoutr,)r$r7Zfd_setsrrr�handle_requests

zBaseServer.handle_requestcCslz|��\}}Wntjy&YdS0|�||�rhz|�||�Wn"|�||�|�|�Yn0dS)z�Handle one request, without blocking.

        I assume that select.select has returned that the socket is
        readable before this function was called, so there should be
        no risk of blocking in get_request().
        N)�get_requestr5�error�verify_request�process_request�handle_error�shutdown_request�r$�request�client_addressrrrr,'sz"BaseServer._handle_request_noblockcCsdS)zcCalled if no new request arrives within self.timeout.

        Overridden by ForkingMixIn.
        Nrr'rrrr99szBaseServer.handle_timeoutcCsdS)znVerify the request.  May be overridden.

        Return True if we should proceed with this request.

        TrrArrrr=@szBaseServer.verify_requestcCs|�||�|�|�dS)zVCall finish_request.

        Overridden by ForkingMixIn and ThreadingMixIn.

        N)�finish_requestr@rArrrr>HszBaseServer.process_requestcCsdS�zDCalled to clean-up the server.

        May be overridden.

        Nrr'rrr�server_closeQszBaseServer.server_closecCs|�|||�dS)z8Finish one request by instantiating RequestHandlerClass.N)rrArrrrDYszBaseServer.finish_requestcCs|�|�dS�z3Called to shutdown and close an individual request.N��
close_request�r$rBrrrr@]szBaseServer.shutdown_requestcCsdS�z)Called to clean up an individual request.NrrJrrrrIaszBaseServer.close_requestcCs8td�tddd�t|�ddl}|��td�dS)ztHandle an error gracefully.  May be overridden.

        The default is to print a traceback and continue.

        z(----------------------------------------z4Exception happened during processing of request from� )�endrN)�print�	traceback�	print_exc)r$rBrCrOrrrr?eszBaseServer.handle_error)r))�__name__�
__module__�__qualname__�__doc__r7r%r(r2r4r-r:r,r9r=r>rFrDr@rIr?rrrrr�s"+

	rc@sfeZdZdZejZejZdZ	dZ
ddd�Zdd�Zd	d
�Z
dd�Zd
d�Zdd�Zdd�Zdd�ZdS)ra3Base class for various socket-based server classes.

    Defaults to synchronous IP stream (i.e., TCP).

    Methods for the caller:

    - __init__(server_address, RequestHandlerClass, bind_and_activate=True)
    - serve_forever(poll_interval=0.5)
    - shutdown()
    - handle_request()  # if you don't use serve_forever()
    - fileno() -> int   # for select()

    Methods that may be overridden:

    - server_bind()
    - server_activate()
    - get_request() -> request, client_address
    - handle_timeout()
    - verify_request(request, client_address)
    - process_request(request, client_address)
    - shutdown_request(request)
    - close_request(request)
    - handle_error()

    Methods for derived classes:

    - finish_request(request, client_address)

    Class variables that may be overridden by derived classes or
    instances:

    - timeout
    - address_family
    - socket_type
    - request_queue_size (only for stream sockets)
    - allow_reuse_address

    Instance variables:

    - server_address
    - RequestHandlerClass
    - socket

    �FTcCs8t�|||�t�|j|j�|_|r4|��|��dS)rN)rr%r5�address_family�socket_type�server_bindr()r$rr�bind_and_activaterrrr%�s�zTCPServer.__init__cCs8|jr|j�tjtjd�|j�|j�|j��|_dS)zOCalled by constructor to bind the socket.

        May be overridden.

        �N)�allow_reuse_addressr5�
setsockopt�
SOL_SOCKET�SO_REUSEADDR�bindr�getsocknamer'rrrrX�szTCPServer.server_bindcCs|j�|j�dSr&)r5�listen�request_queue_sizer'rrrr(�szTCPServer.server_activatecCs|j��dSrE)r5�closer'rrrrF�szTCPServer.server_closecCs
|j��S)zMReturn socket file number.

        Interface required by select().

        )r5�filenor'rrrrd�szTCPServer.filenocCs
|j��S)zYGet the request and client address from the socket.

        May be overridden.

        )r5�acceptr'rrrr;�szTCPServer.get_requestcCs4z|�tj�Wntjy$Yn0|�|�dSrG)r4r5�SHUT_WRr<rIrJrrrr@�s
zTCPServer.shutdown_requestcCs|��dSrK)rcrJrrrrI�szTCPServer.close_requestN)T)rQrRrSrTr5�AF_INETrV�SOCK_STREAMrWrbr[r%rXr(rFrdr;r@rIrrrrrss-
	
rc@s>eZdZdZdZejZdZdd�Z	dd�Z
dd	�Zd
d�ZdS)
rzUDP server class.Fi cCs |j�|j�\}}||jf|fS�N)r5�recvfrom�max_packet_size)r$�data�client_addrrrrr;�szUDPServer.get_requestcCsdSrirr'rrrr(�szUDPServer.server_activatecCs|�|�dSrirHrJrrrr@�szUDPServer.shutdown_requestcCsdSrirrJrrrrIszUDPServer.close_requestN)
rQrRrSrTr[r5�
SOCK_DGRAMrWrkr;r(r@rIrrrrr�src@s<eZdZdZdZdZdZdd�Zdd�Zd	d
�Z	dd�Z
dS)
rz5Mix-in class to handle each request in a new process.i,N�(cCs�|jdurdSt|j�|jkrfzt�dd�\}}WntjyJd}Yn0||jvrXq|j�|�q|jD]�}zt�|tj�\}}Wntjy�d}Yn0|s�qlz|j�|�Wqlty�}z"td|j	||jf��WYd}~qld}~00qldS)z7Internal routine to wait for children that have exited.Nrz%s. x=%d and list=%r)
�active_children�len�max_children�os�waitpidr<�remove�WNOHANG�
ValueError�message)r$�pid�status�childrrrr�collect_childrens(



�zForkingMixIn.collect_childrencCs|��dS)znWait for zombies after self.timeout seconds of inactivity.

        May be extended, do not override.
        N�r|r'rrrr9/szForkingMixIn.handle_timeoutcCs|��dS)z�Collect the zombie child processes regularly in the ForkingMixIn.

        service_actions is called in the BaseServer's serve_forver loop.
        Nr}r'rrrr-6szForkingMixIn.service_actionscCs�t��}|r6|jdurg|_|j�|�|�|�dSz$|�||�|�|�t�d�Wn>z$|�||�|�|�Wt�d�nt�d�0Yn0dS)z-Fork a new subprocess to process the request.NrrZ)	rs�forkrp�appendrIrDr@�_exitr?)r$rBrCryrrrr>=s 


zForkingMixIn.process_request)rQrRrSrTr7rprrr|r9r-r>rrrrrs rc@s$eZdZdZdZdd�Zdd�ZdS)r
z4Mix-in class to handle each request in a new thread.FcCsBz|�||�|�|�Wn"|�||�|�|�Yn0dS)zgSame as in BaseServer but as a thread.

        In addition, exception handling is done here.

"""
For Python < 2.7.2. total_ordering in versions prior to 2.7.2 is buggy.
See http://bugs.python.org/issue10042 for details. For these versions use
code borrowed from Python 2.7.3.

From django.utils.
"""

import sys
if sys.version_info >= (2, 7, 2):
    from functools import total_ordering
else:
    def total_ordering(cls):
        """Class decorator that fills in missing ordering methods"""
        convert = {
            '__lt__': [('__gt__', lambda self, other: not (self < other or self == other)),
                       ('__le__', lambda self, other: self < other or self == other),
                       ('__ge__', lambda self, other: not self < other)],
            '__le__': [('__ge__', lambda self, other: not self <= other or self == other),
                       ('__lt__', lambda self, other: self <= other and not self == other),
                       ('__gt__', lambda self, other: not self <= other)],
            '__gt__': [('__lt__', lambda self, other: not (self > other or self == other)),
                       ('__ge__', lambda self, other: self > other or self == other),
                       ('__le__', lambda self, other: not self > other)],
            '__ge__': [('__le__', lambda self, other: (not self >= other) or self == other),
                       ('__gt__', lambda self, other: self >= other and not self == other),
                       ('__lt__', lambda self, other: not self >= other)]
        }
        roots = set(dir(cls)) & set(convert)
        if not roots:
            raise ValueError('must define at least one ordering operation: < > <= >=')
        root = max(roots)       # prefer __lt__ to __le__ to __gt__ to __ge__
        for opname, opfunc in convert[root]:
            if opname not in roots:
                opfunc.__name__ = opname
                opfunc.__doc__ = getattr(int, opname).__doc__
                setattr(cls, opname, opfunc)
        return cls
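
# A minimal usage sketch of the decorator defined above. `_Version` is a
# hypothetical toy class (not part of the library); the block is guarded so it
# only runs when this module is executed directly, never on import.
if __name__ == "__main__":
    @total_ordering
    class _Version(object):
        """Toy class that defines only __eq__ and __lt__."""
        def __init__(self, n):
            self.n = n
        def __eq__(self, other):
            return self.n == other.n
        def __lt__(self, other):
            return self.n < other.n

    # total_ordering derives __le__, __gt__ and __ge__ from the two methods.
    assert _Version(1) < _Version(2)
    assert _Version(1) <= _Version(1)
    assert _Version(2) > _Version(1)
    assert _Version(2) >= _Version(2)
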
#
# XML-RPC CLIENT LIBRARY
# $Id$
#
# an XML-RPC client interface for Python.
#
# the marshalling and response parser code can also be used to
# implement XML-RPC servers.
#
# Notes:
# this version is designed to work with Python 2.1 or newer.
#
# History:
# 1999-01-14 fl  Created
# 1999-01-15 fl  Changed dateTime to use localtime
# 1999-01-16 fl  Added Binary/base64 element, default to RPC2 service
# 1999-01-19 fl  Fixed array data element (from Skip Montanaro)
# 1999-01-21 fl  Fixed dateTime constructor, etc.
# 1999-02-02 fl  Added fault handling, handle empty sequences, etc.
# 1999-02-10 fl  Fixed problem with empty responses (from Skip Montanaro)
# 1999-06-20 fl  Speed improvements, pluggable parsers/transports (0.9.8)
# 2000-11-28 fl  Changed boolean to check the truth value of its argument
# 2001-02-24 fl  Added encoding/Unicode/SafeTransport patches
# 2001-02-26 fl  Added compare support to wrappers (0.9.9/1.0b1)
# 2001-03-28 fl  Make sure response tuple is a singleton
# 2001-03-29 fl  Don't require empty params element (from Nicholas Riley)
# 2001-06-10 fl  Folded in _xmlrpclib accelerator support (1.0b2)
# 2001-08-20 fl  Base xmlrpclib.Error on built-in Exception (from Paul Prescod)
# 2001-09-03 fl  Allow Transport subclass to override getparser
# 2001-09-10 fl  Lazy import of urllib, cgi, xmllib (20x import speedup)
# 2001-10-01 fl  Remove containers from memo cache when done with them
# 2001-10-01 fl  Use faster escape method (80% dumps speedup)
# 2001-10-02 fl  More dumps microtuning
# 2001-10-04 fl  Make sure import expat gets a parser (from Guido van Rossum)
# 2001-10-10 sm  Allow long ints to be passed as ints if they don't overflow
# 2001-10-17 sm  Test for int and long overflow (allows use on 64-bit systems)
# 2001-11-12 fl  Use repr() to marshal doubles (from Paul Felix)
# 2002-03-17 fl  Avoid buffered read when possible (from James Rucker)
# 2002-04-07 fl  Added pythondoc comments
# 2002-04-16 fl  Added __str__ methods to datetime/binary wrappers
# 2002-05-15 fl  Added error constants (from Andrew Kuchling)
# 2002-06-27 fl  Merged with Python CVS version
# 2002-10-22 fl  Added basic authentication (based on code from Phillip Eby)
# 2003-01-22 sm  Add support for the bool type
# 2003-02-27 gvr Remove apply calls
# 2003-04-24 sm  Use cStringIO if available
# 2003-04-25 ak  Add support for nil
# 2003-06-15 gn  Add support for time.struct_time
# 2003-07-12 gp  Correct marshalling of Faults
# 2003-10-31 mvl Add multicall support
# 2004-08-20 mvl Bump minimum supported Python version to 2.1
#
# Copyright (c) 1999-2002 by Secret Labs AB.
# Copyright (c) 1999-2002 by Fredrik Lundh.
#
# info@pythonware.com
# http://www.pythonware.com
#
# --------------------------------------------------------------------
# The XML-RPC client interface is
#
# Copyright (c) 1999-2002 by Secret Labs AB
# Copyright (c) 1999-2002 by Fredrik Lundh
#
# By obtaining, using, and/or copying this software and/or its
# associated documentation, you agree that you have read, understood,
# and will comply with the following terms and conditions:
#
# Permission to use, copy, modify, and distribute this software and
# its associated documentation for any purpose and without fee is
# hereby granted, provided that the above copyright notice appears in
# all copies, and that both that copyright notice and this permission
# notice appear in supporting documentation, and that the name of
# Secret Labs AB or the author not be used in advertising or publicity
# pertaining to distribution of the software without specific, written
# prior permission.
#
# SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD
# TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT-
# ABILITY AND FITNESS.  IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR
# BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY
# DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
# WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
# ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
# OF THIS SOFTWARE.
# --------------------------------------------------------------------

"""
Ported using Python-Future from the Python 3.3 standard library.

An XML-RPC client interface for Python.

The marshalling and response parser code can also be used to
implement XML-RPC servers.

Exported exceptions:

  Error          Base class for client errors
  ProtocolError  Indicates an HTTP protocol error
  ResponseError  Indicates a broken response package
  Fault          Indicates an XML-RPC fault package

Exported classes:

  ServerProxy    Represents a logical connection to an XML-RPC server

  MultiCall      Executor of boxcared xmlrpc requests
  DateTime       dateTime wrapper for an ISO 8601 string or time tuple or
                 localtime integer value to generate a "dateTime.iso8601"
                 XML-RPC value
  Binary         binary data wrapper

  Marshaller     Generate an XML-RPC params chunk from a Python data structure
  Unmarshaller   Unmarshal an XML-RPC response from incoming XML event message
  Transport      Handles an HTTP transaction to an XML-RPC server
  SafeTransport  Handles an HTTPS transaction to an XML-RPC server

Exported constants:

  (none)

Exported functions:

  getparser      Create instance of the fastest available parser & attach
                 to an unmarshalling object
  dumps          Convert an argument tuple or a Fault instance to an XML-RPC
                 request (or response, if the methodresponse option is used).
  loads          Convert an XML-RPC packet to unmarshalled data plus a method
                 name (None if not present).
"""

from __future__ import (absolute_import, division, print_function,
                        unicode_literals)
from future.builtins import bytes, dict, int, range, str

import base64
import sys
if sys.version_info < (3, 9):
    # Py2.7 compatibility hack
    base64.encodebytes = base64.encodestring
    base64.decodebytes = base64.decodestring
import time
from datetime import datetime
from future.backports.http import client as http_client
from future.backports.urllib import parse as urllib_parse
from future.utils import ensure_new_type
from xml.parsers import expat
import socket
import errno
from io import BytesIO
try:
    import gzip
except ImportError:
    gzip = None #python can be built without zlib/gzip support

# --------------------------------------------------------------------
# Internal stuff

def escape(s):
    s = s.replace("&", "&amp;")
    s = s.replace("<", "&lt;")
    return s.replace(">", "&gt;",)
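
# For example, the ampersand is replaced first so the entities just added are
# not themselves escaped:
#
#     escape('a < b & "c"')  ->  'a &lt; b &amp; "c"'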

# used in User-Agent header sent
__version__ = sys.version[:3]

# xmlrpc integer limits
MAXINT =  2**31-1
MININT = -2**31

# --------------------------------------------------------------------
# Error constants (from Dan Libby's specification at
# http://xmlrpc-epi.sourceforge.net/specs/rfc.fault_codes.php)

# Ranges of errors
PARSE_ERROR       = -32700
SERVER_ERROR      = -32600
APPLICATION_ERROR = -32500
SYSTEM_ERROR      = -32400
TRANSPORT_ERROR   = -32300

# Specific errors
NOT_WELLFORMED_ERROR  = -32700
UNSUPPORTED_ENCODING  = -32701
INVALID_ENCODING_CHAR = -32702
INVALID_XMLRPC        = -32600
METHOD_NOT_FOUND      = -32601
INVALID_METHOD_PARAMS = -32602
INTERNAL_ERROR        = -32603

# --------------------------------------------------------------------
# Exceptions

##
# Base class for all kinds of client-side errors.

class Error(Exception):
    """Base class for client errors."""
    def __str__(self):
        return repr(self)

##
# Indicates an HTTP-level protocol error.  This is raised by the HTTP
# transport layer, if the server returns an error code other than 200
# (OK).
#
# @param url The target URL.
# @param errcode The HTTP error code.
# @param errmsg The HTTP error message.
# @param headers The HTTP header dictionary.

class ProtocolError(Error):
    """Indicates an HTTP protocol error."""
    def __init__(self, url, errcode, errmsg, headers):
        Error.__init__(self)
        self.url = url
        self.errcode = errcode
        self.errmsg = errmsg
        self.headers = headers
    def __repr__(self):
        return (
            "<ProtocolError for %s: %s %s>" %
            (self.url, self.errcode, self.errmsg)
            )

##
# Indicates a broken XML-RPC response package.  This exception is
# raised by the unmarshalling layer, if the XML-RPC response is
# malformed.

class ResponseError(Error):
    """Indicates a broken response package."""
    pass

##
# Indicates an XML-RPC fault response package.  This exception is
# raised by the unmarshalling layer, if the XML-RPC response contains
# a fault string.  This exception can also be used as a class, to
# generate a fault XML-RPC message.
#
# @param faultCode The XML-RPC fault code.
# @param faultString The XML-RPC fault string.

class Fault(Error):
    """Indicates an XML-RPC fault package."""
    def __init__(self, faultCode, faultString, **extra):
        Error.__init__(self)
        self.faultCode = faultCode
        self.faultString = faultString
    def __repr__(self):
        return "<Fault %s: %r>" % (ensure_new_type(self.faultCode),
                                   ensure_new_type(self.faultString))

# --------------------------------------------------------------------
# Special values

##
# Backwards compatibility

boolean = Boolean = bool

##
# Wrapper for XML-RPC DateTime values.  This converts a time value to
# the format used by XML-RPC.
# <p>
# The value can be given as a datetime object, as a string in the
# format "yyyymmddThh:mm:ss", as a 9-item time tuple (as returned by
# time.localtime()), or an integer value (as returned by time.time()).
# The wrapper uses time.localtime() to convert an integer to a time
# tuple.
#
# @param value The time, given as a datetime object, an ISO 8601 string,
#              a time tuple, or an integer time value.


### For Python-Future:
def _iso8601_format(value):
    return "%04d%02d%02dT%02d:%02d:%02d" % (
                value.year, value.month, value.day,
                value.hour, value.minute, value.second)
###
# Issue #13305: different format codes across platforms
# _day0 = datetime(1, 1, 1)
# if _day0.strftime('%Y') == '0001':      # Mac OS X
#     def _iso8601_format(value):
#         return value.strftime("%Y%m%dT%H:%M:%S")
# elif _day0.strftime('%4Y') == '0001':   # Linux
#     def _iso8601_format(value):
#         return value.strftime("%4Y%m%dT%H:%M:%S")
# else:
#     def _iso8601_format(value):
#         return value.strftime("%Y%m%dT%H:%M:%S").zfill(17)
# del _day0


def _strftime(value):
    if isinstance(value, datetime):
        return _iso8601_format(value)

    if not isinstance(value, (tuple, time.struct_time)):
        if value == 0:
            value = time.time()
        value = time.localtime(value)

    return "%04d%02d%02dT%02d:%02d:%02d" % value[:6]

class DateTime(object):
    """DateTime wrapper for an ISO 8601 string or time tuple or
    localtime integer value to generate 'dateTime.iso8601' XML-RPC
    value.
    """

    def __init__(self, value=0):
        if isinstance(value, str):
            self.value = value
        else:
            self.value = _strftime(value)

    def make_comparable(self, other):
        if isinstance(other, DateTime):
            s = self.value
            o = other.value
        elif isinstance(other, datetime):
            s = self.value
            o = _iso8601_format(other)
        elif isinstance(other, str):
            s = self.value
            o = other
        elif hasattr(other, "timetuple"):
            s = self.timetuple()
            o = other.timetuple()
        else:
            otype = (hasattr(other, "__class__")
                     and other.__class__.__name__
                     or type(other))
            raise TypeError("Can't compare %s and %s" %
                            (self.__class__.__name__, otype))
        return s, o

    def __lt__(self, other):
        s, o = self.make_comparable(other)
        return s < o

    def __le__(self, other):
        s, o = self.make_comparable(other)
        return s <= o

    def __gt__(self, other):
        s, o = self.make_comparable(other)
        return s > o

    def __ge__(self, other):
        s, o = self.make_comparable(other)
        return s >= o

    def __eq__(self, other):
        s, o = self.make_comparable(other)
        return s == o

    def __ne__(self, other):
        s, o = self.make_comparable(other)
        return s != o

    def timetuple(self):
        return time.strptime(self.value, "%Y%m%dT%H:%M:%S")

    ##
    # Get date/time value.
    #
    # @return Date/time value, as an ISO 8601 string.

    def __str__(self):
        return self.value

    def __repr__(self):
        return "<DateTime %r at %x>" % (ensure_new_type(self.value), id(self))

    def decode(self, data):
        self.value = str(data).strip()

    def encode(self, out):
        out.write("<value><dateTime.iso8601>")
        out.write(self.value)
        out.write("</dateTime.iso8601></value>\n")

def _datetime(data):
    # decode xml element contents into a DateTime structure.
    value = DateTime()
    value.decode(data)
    return value

def _datetime_type(data):
    return datetime.strptime(data, "%Y%m%dT%H:%M:%S")

##
# Wrapper for binary data.  This can be used to transport any kind
# of binary data over XML-RPC, using BASE64 encoding.
#
# @param data An 8-bit string containing arbitrary data.

class Binary(object):
    """Wrapper for binary data."""

    def __init__(self, data=None):
        if data is None:
            data = b""
        else:
            if not isinstance(data, (bytes, bytearray)):
                raise TypeError("expected bytes or bytearray, not %s" %
                                data.__class__.__name__)
            data = bytes(data)  # Make a copy of the bytes!
        self.data = data

    ##
    # Get buffer contents.
    #
    # @return Buffer contents, as an 8-bit string.

    def __str__(self):
        return str(self.data, "latin-1")  # XXX encoding?!

    def __eq__(self, other):
        if isinstance(other, Binary):
            other = other.data
        return self.data == other

    def __ne__(self, other):
        if isinstance(other, Binary):
            other = other.data
        return self.data != other

    def decode(self, data):
        self.data = base64.decodebytes(data)

    def encode(self, out):
        out.write("<value><base64>\n")
        encoded = base64.encodebytes(self.data)
        out.write(encoded.decode('ascii'))
        out.write("</base64></value>\n")

def _binary(data):
    # decode xml element contents into a Binary structure
    value = Binary()
    value.decode(data)
    return value

WRAPPERS = (DateTime, Binary)

# --------------------------------------------------------------------
# XML parsers

class ExpatParser(object):
    # fast expat parser for Python 2.0 and later.
    def __init__(self, target):
        self._parser = parser = expat.ParserCreate(None, None)
        self._target = target
        parser.StartElementHandler = target.start
        parser.EndElementHandler = target.end
        parser.CharacterDataHandler = target.data
        encoding = None
        target.xml(encoding, None)

    def feed(self, data):
        self._parser.Parse(data, 0)

    def close(self):
        self._parser.Parse("", 1) # end of data
        del self._target, self._parser # get rid of circular references

# --------------------------------------------------------------------
# XML-RPC marshalling and unmarshalling code

##
# XML-RPC marshaller.
#
# @param encoding Default encoding for 8-bit strings.  The default
#     value is None (interpreted as UTF-8).
# @see dumps

class Marshaller(object):
    """Generate an XML-RPC params chunk from a Python data structure.

    Create a Marshaller instance for each set of parameters, and use
    the "dumps" method to convert your data (represented as a tuple)
    to an XML-RPC params chunk.  To write a fault response, pass a
    Fault instance instead.  You may prefer to use the "dumps" module
    function for this purpose.
    """

    # by the way, if you don't understand what's going on in here,
    # that's perfectly ok.

    def __init__(self, encoding=None, allow_none=False):
        self.memo = {}
        self.data = None
        self.encoding = encoding
        self.allow_none = allow_none

    dispatch = {}

    def dumps(self, values):
        out = []
        write = out.append
        dump = self.__dump
        if isinstance(values, Fault):
            # fault instance
            write("<fault>\n")
            dump({'faultCode': values.faultCode,
                  'faultString': values.faultString},
                 write)
            write("</fault>\n")
        else:
            # parameter block
            # FIXME: the xml-rpc specification allows us to leave out
            # the entire <params> block if there are no parameters.
            # however, changing this may break older code (including
            # old versions of xmlrpclib.py), so this is better left as
            # is for now.  See @XMLRPC3 for more information. /F
            write("<params>\n")
            for v in values:
                write("<param>\n")
                dump(v, write)
                write("</param>\n")
            write("</params>\n")
        result = "".join(out)
        return str(result)

    def __dump(self, value, write):
        try:
            f = self.dispatch[type(ensure_new_type(value))]
        except KeyError:
            # check if this object can be marshalled as a structure
            if not hasattr(value, '__dict__'):
                raise TypeError("cannot marshal %s objects" % type(value))
            # check if this class is a sub-class of a basic type,
            # because we don't know how to marshal these types
            # (e.g. a string sub-class)
            for type_ in type(value).__mro__:
                if type_ in self.dispatch.keys():
                    raise TypeError("cannot marshal %s objects" % type(value))
            # XXX(twouters): using "_arbitrary_instance" as key as a quick-fix
            # for the p3yk merge, this should probably be fixed more neatly.
            f = self.dispatch["_arbitrary_instance"]
        f(self, value, write)

    def dump_nil (self, value, write):
        if not self.allow_none:
            raise TypeError("cannot marshal None unless allow_none is enabled")
        write("<value><nil/></value>")
    dispatch[type(None)] = dump_nil

    def dump_bool(self, value, write):
        write("<value><boolean>")
        write(value and "1" or "0")
        write("</boolean></value>\n")
    dispatch[bool] = dump_bool

    def dump_long(self, value, write):
        if value > MAXINT or value < MININT:
            raise OverflowError("long int exceeds XML-RPC limits")
        write("<value><int>")
        write(str(int(value)))
        write("</int></value>\n")
    dispatch[int] = dump_long

    # backward compatible
    dump_int = dump_long

    def dump_double(self, value, write):
        write("<value><double>")
        write(repr(ensure_new_type(value)))
        write("</double></value>\n")
    dispatch[float] = dump_double

    def dump_unicode(self, value, write, escape=escape):
        write("<value><string>")
        write(escape(value))
        write("</string></value>\n")
    dispatch[str] = dump_unicode

    def dump_bytes(self, value, write):
        write("<value><base64>\n")
        encoded = base64.encodebytes(value)
        write(encoded.decode('ascii'))
        write("</base64></value>\n")
    dispatch[bytes] = dump_bytes
    dispatch[bytearray] = dump_bytes

    def dump_array(self, value, write):
        i = id(value)
        if i in self.memo:
            raise TypeError("cannot marshal recursive sequences")
        self.memo[i] = None
        dump = self.__dump
        write("<value><array><data>\n")
        for v in value:
            dump(v, write)
        write("</data></array></value>\n")
        del self.memo[i]
    dispatch[tuple] = dump_array
    dispatch[list] = dump_array

    def dump_struct(self, value, write, escape=escape):
        i = id(value)
        if i in self.memo:
            raise TypeError("cannot marshal recursive dictionaries")
        self.memo[i] = None
        dump = self.__dump
        write("<value><struct>\n")
        for k, v in value.items():
            write("<member>\n")
            if not isinstance(k, str):
                raise TypeError("dictionary key must be string")
            write("<name>%s</name>\n" % escape(k))
            dump(v, write)
            write("</member>\n")
        write("</struct></value>\n")
        del self.memo[i]
    dispatch[dict] = dump_struct

    def dump_datetime(self, value, write):
        write("<value><dateTime.iso8601>")
        write(_strftime(value))
        write("</dateTime.iso8601></value>\n")
    dispatch[datetime] = dump_datetime

    def dump_instance(self, value, write):
        # check for special wrappers
        if value.__class__ in WRAPPERS:
            self.write = write
            value.encode(self)
            del self.write
        else:
            # store instance attributes as a struct (really?)
            self.dump_struct(value.__dict__, write)
    dispatch[DateTime] = dump_instance
    dispatch[Binary] = dump_instance
    # XXX(twouters): using "_arbitrary_instance" as key as a quick-fix
    # for the p3yk merge, this should probably be fixed more neatly.
    dispatch["_arbitrary_instance"] = dump_instance

##
# XML-RPC unmarshaller.
#
# @see loads

class Unmarshaller(object):
    """Unmarshal an XML-RPC response, based on incoming XML event
    messages (start, data, end).  Call close() to get the resulting
    data structure.

    Note that this reader is fairly tolerant, and gladly accepts bogus
    XML-RPC data without complaining (but not bogus XML).
    """

    # and again, if you don't understand what's going on in here,
    # that's perfectly ok.

    def __init__(self, use_datetime=False, use_builtin_types=False):
        self._type = None
        self._stack = []
        self._marks = []
        self._data = []
        self._methodname = None
        self._encoding = "utf-8"
        self.append = self._stack.append
        self._use_datetime = use_builtin_types or use_datetime
        self._use_bytes = use_builtin_types

    def close(self):
        # return response tuple and target method
        if self._type is None or self._marks:
            raise ResponseError()
        if self._type == "fault":
            raise Fault(**self._stack[0])
        return tuple(self._stack)

    def getmethodname(self):
        return self._methodname

    #
    # event handlers

    def xml(self, encoding, standalone):
        self._encoding = encoding
        # FIXME: assert standalone == 1 ???

    def start(self, tag, attrs):
        # prepare to handle this element
        if tag == "array" or tag == "struct":
            self._marks.append(len(self._stack))
        self._data = []
        self._value = (tag == "value")

    def data(self, text):
        self._data.append(text)

    def end(self, tag):
        # call the appropriate end tag handler
        try:
            f = self.dispatch[tag]
        except KeyError:
            pass # unknown tag ?
        else:
            return f(self, "".join(self._data))

    #
    # accelerator support

    def end_dispatch(self, tag, data):
        # dispatch data
        try:
            f = self.dispatch[tag]
        except KeyError:
            pass # unknown tag ?
        else:
            return f(self, data)

    #
    # element decoders

    dispatch = {}

    def end_nil(self, data):
        self.append(None)
        self._value = 0
    dispatch["nil"] = end_nil

    def end_boolean(self, data):
        if data == "0":
            self.append(False)
        elif data == "1":
            self.append(True)
        else:
            raise TypeError("bad boolean value")
        self._value = 0
    dispatch["boolean"] = end_boolean

    def end_int(self, data):
        self.append(int(data))
        self._value = 0
    dispatch["i4"] = end_int
    dispatch["i8"] = end_int
    dispatch["int"] = end_int

    def end_double(self, data):
        self.append(float(data))
        self._value = 0
    dispatch["double"] = end_double

    def end_string(self, data):
        # under Python 3, expat delivers text as str; only decode raw bytes
        if self._encoding and isinstance(data, bytes):
            data = data.decode(self._encoding)
        self.append(data)
        self._value = 0
    dispatch["string"] = end_string
    dispatch["name"] = end_string # struct keys are always strings

    def end_array(self, data):
        mark = self._marks.pop()
        # map arrays to Python lists
        self._stack[mark:] = [self._stack[mark:]]
        self._value = 0
    dispatch["array"] = end_array

    def end_struct(self, data):
        mark = self._marks.pop()
        # map structs to Python dictionaries
        dict = {}
        items = self._stack[mark:]
        for i in range(0, len(items), 2):
            dict[items[i]] = items[i+1]
        self._stack[mark:] = [dict]
        self._value = 0
    dispatch["struct"] = end_struct
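The struct decoder above relies on names and values having been pushed onto the stack in alternating order; a standalone sketch of that pairing step (sample data is illustrative):

```python
# Pair alternating name/value entries, as end_struct does with the
# portion of the stack above its mark.
items = ["a", 1, "b", 2]
d = {}
for i in range(0, len(items), 2):
    d[items[i]] = items[i + 1]
print(d)  # -> {'a': 1, 'b': 2}
```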

    def end_base64(self, data):
        value = Binary()
        value.decode(data.encode("ascii"))
        if self._use_bytes:
            value = value.data
        self.append(value)
        self._value = 0
    dispatch["base64"] = end_base64

    def end_dateTime(self, data):
        value = DateTime()
        value.decode(data)
        if self._use_datetime:
            value = _datetime_type(data)
        self.append(value)
    dispatch["dateTime.iso8601"] = end_dateTime

    def end_value(self, data):
        # if we stumble upon a value element with no internal
        # elements, treat it as a string element
        if self._value:
            self.end_string(data)
    dispatch["value"] = end_value

    def end_params(self, data):
        self._type = "params"
    dispatch["params"] = end_params

    def end_fault(self, data):
        self._type = "fault"
    dispatch["fault"] = end_fault

    def end_methodName(self, data):
        if self._encoding and isinstance(data, bytes):
            data = data.decode(self._encoding)
        self._methodname = data
        self._type = "methodName" # no params
    dispatch["methodName"] = end_methodName
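A brief sketch of driving an unmarshaller through the start/data/end event interface, using the stdlib `xmlrpc.client` equivalents of the classes in this module:

```python
# Feed a small methodResponse packet through a parser/unmarshaller pair
# and collect the result with close(), as described in the docstring above.
import xmlrpc.client

p, u = xmlrpc.client.getparser()
p.feed("<?xml version='1.0'?><methodResponse><params>"
       "<param><value><int>42</int></value></param>"
       "</params></methodResponse>")
p.close()
print(u.close())  # -> (42,)
```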

## Multicall support
#

class _MultiCallMethod(object):
    # some lesser magic to store calls made to a MultiCall object
    # for batch execution
    def __init__(self, call_list, name):
        self.__call_list = call_list
        self.__name = name
    def __getattr__(self, name):
        return _MultiCallMethod(self.__call_list, "%s.%s" % (self.__name, name))
    def __call__(self, *args):
        self.__call_list.append((self.__name, args))

class MultiCallIterator(object):
    """Iterates over the results of a multicall. Exceptions are
    raised in response to xmlrpc faults."""

    def __init__(self, results):
        self.results = results

    def __getitem__(self, i):
        item = self.results[i]
        if isinstance(item, dict):
            raise Fault(item['faultCode'], item['faultString'])
        elif isinstance(item, list):
            return item[0]
        else:
            raise ValueError("unexpected type in multicall result")

class MultiCall(object):
    """server -> an object used to boxcar method calls

    server should be a ServerProxy object.

    Methods can be added to the MultiCall using normal
    method call syntax e.g.:

    multicall = MultiCall(server_proxy)
    multicall.add(2,3)
    multicall.get_address("Guido")

    To execute the multicall, call the MultiCall object e.g.:

    add_result, address = multicall()
    """

    def __init__(self, server):
        self.__server = server
        self.__call_list = []

    def __repr__(self):
        return "<MultiCall at %x>" % id(self)

    __str__ = __repr__

    def __getattr__(self, name):
        return _MultiCallMethod(self.__call_list, name)

    def __call__(self):
        marshalled_list = []
        for name, args in self.__call_list:
            marshalled_list.append({'methodName' : name, 'params' : args})

        return MultiCallIterator(self.__server.system.multicall(marshalled_list))
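The `__call__` above reshapes each queued `(name, args)` pair into the `{'methodName', 'params'}` dicts that `system.multicall` expects. A minimal, self-contained sketch of that recording mechanism (hypothetical `_Recorder` name, no server involved):

```python
# Each attribute access returns a new recorder carrying the dotted name;
# calling it appends a (name, args) pair, later reshaped into the
# multicall request structure.
calls = []

class _Recorder:
    def __init__(self, name):
        self._name = name
    def __getattr__(self, name):
        return _Recorder("%s.%s" % (self._name, name))
    def __call__(self, *args):
        calls.append((self._name, args))

_Recorder("math").pow(2, 9)
_Recorder("add")(1, 2)

marshalled = [{'methodName': n, 'params': a} for n, a in calls]
print(marshalled[0])  # -> {'methodName': 'math.pow', 'params': (2, 9)}
```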

# --------------------------------------------------------------------
# convenience functions

FastMarshaller = FastParser = FastUnmarshaller = None

##
# Create a parser object, and connect it to an unmarshalling instance.
# This function picks the fastest available XML parser.
#
# return A (parser, unmarshaller) tuple.

def getparser(use_datetime=False, use_builtin_types=False):
    """getparser() -> parser, unmarshaller

    Create an instance of the fastest available parser, and attach it
    to an unmarshalling object.  Return both objects.
    """
    if FastParser and FastUnmarshaller:
        if use_builtin_types:
            mkdatetime = _datetime_type
            mkbytes = base64.decodebytes
        elif use_datetime:
            mkdatetime = _datetime_type
            mkbytes = _binary
        else:
            mkdatetime = _datetime
            mkbytes = _binary
        target = FastUnmarshaller(True, False, mkbytes, mkdatetime, Fault)
        parser = FastParser(target)
    else:
        target = Unmarshaller(use_datetime=use_datetime, use_builtin_types=use_builtin_types)
        if FastParser:
            parser = FastParser(target)
        else:
            parser = ExpatParser(target)
    return parser, target

##
# Convert a Python tuple or a Fault instance to an XML-RPC packet.
#
# @def dumps(params, **options)
# @param params A tuple or Fault instance.
# @keyparam methodname If given, create a methodCall request for
#     this method name.
# @keyparam methodresponse If given, create a methodResponse packet.
#     If used with a tuple, the tuple must be a singleton (that is,
#     it must contain exactly one element).
# @keyparam encoding The packet encoding.
# @return A string containing marshalled data.

def dumps(params, methodname=None, methodresponse=None, encoding=None,
          allow_none=False):
    """data [,options] -> marshalled data

    Convert an argument tuple or a Fault instance to an XML-RPC
    request (or response, if the methodresponse option is used).

    In addition to the data object, the following options can be given
    as keyword arguments:

        methodname: the method name for a methodCall packet

        methodresponse: true to create a methodResponse packet.
        If this option is used with a tuple, the tuple must be
        a singleton (i.e. it can contain only one element).

        encoding: the packet encoding (default is UTF-8)

    All byte strings in the data structure are assumed to use the
    packet encoding.  Unicode strings are automatically converted,
    where necessary.
    """

    assert isinstance(params, (tuple, Fault)), "argument must be tuple or Fault instance"
    if isinstance(params, Fault):
        methodresponse = 1
    elif methodresponse and isinstance(params, tuple):
        assert len(params) == 1, "response tuple must be a singleton"

    if not encoding:
        encoding = "utf-8"

    if FastMarshaller:
        m = FastMarshaller(encoding)
    else:
        m = Marshaller(encoding, allow_none)

    data = m.dumps(params)

    if encoding != "utf-8":
        xmlheader = "<?xml version='1.0' encoding='%s'?>\n" % str(encoding)
    else:
        xmlheader = "<?xml version='1.0'?>\n" # utf-8 is default

    # standard XML-RPC wrappings
    if methodname:
        # a method call
        if not isinstance(methodname, str):
            methodname = methodname.encode(encoding)
        data = (
            xmlheader,
            "<methodCall>\n"
            "<methodName>", methodname, "</methodName>\n",
            data,
            "</methodCall>\n"
            )
    elif methodresponse:
        # a method response, or a fault structure
        data = (
            xmlheader,
            "<methodResponse>\n",
            data,
            "</methodResponse>\n"
            )
    else:
        return data # return as is
    return str("").join(data)

##
# Convert an XML-RPC packet to a Python object.  If the XML-RPC packet
# represents a fault condition, this function raises a Fault exception.
#
# @param data An XML-RPC packet, given as an 8-bit string.
# @return A tuple containing the unpacked data, and the method name
#     (None if not present).
# @see Fault

def loads(data, use_datetime=False, use_builtin_types=False):
    """data -> unmarshalled data, method name

    Convert an XML-RPC packet to unmarshalled data plus a method
    name (None if not present).

    If the XML-RPC packet represents a fault condition, this function
    raises a Fault exception.
    """
    p, u = getparser(use_datetime=use_datetime, use_builtin_types=use_builtin_types)
    p.feed(data)
    p.close()
    return u.close(), u.getmethodname()
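A round-trip sketch of `dumps` and `loads`, using the stdlib `xmlrpc.client` of which this module is a backport: `dumps` builds a methodCall packet and `loads` unpacks it back into the params tuple plus the method name.

```python
# dumps() -> XML string; loads() -> (params, methodname).
import xmlrpc.client

packet = xmlrpc.client.dumps((2, 3), methodname="math.add")
params, method = xmlrpc.client.loads(packet)
print(params, method)  # -> (2, 3) math.add
```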

##
# Encode a string using the gzip content encoding, as specified by the
# Content-Encoding: gzip HTTP header and described in RFC 1952.
#
# @param data the unencoded data
# @return the encoded data

def gzip_encode(data):
    """data -> gzip encoded data

    Encode data using the gzip content encoding as described in RFC 1952
    """
    if not gzip:
        raise NotImplementedError
    f = BytesIO()
    gzf = gzip.GzipFile(mode="wb", fileobj=f, compresslevel=1)
    gzf.write(data)
    gzf.close()
    encoded = f.getvalue()
    f.close()
    return encoded

##
# Decode a string using the gzip content encoding, as specified by the
# Content-Encoding: gzip HTTP header and described in RFC 1952.
#
# @param data The encoded data
# @return the unencoded data
# @raises ValueError if data is not correctly encoded.

def gzip_decode(data):
    """gzip encoded data -> unencoded data

    Decode data using the gzip content encoding as described in RFC 1952
    """
    if not gzip:
        raise NotImplementedError
    f = BytesIO(data)
    gzf = gzip.GzipFile(mode="rb", fileobj=f)
    try:
        decoded = gzf.read()
    except IOError:
        raise ValueError("invalid data")
    f.close()
    gzf.close()
    return decoded
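A round-trip sketch of the two gzip helpers above, using the stdlib `gzip` and `io` modules directly (same RFC 1952 framing):

```python
# Compress with GzipFile in "wb" mode, then decompress the result in
# "rb" mode, mirroring gzip_encode()/gzip_decode().
import gzip
import io

raw = b"<methodResponse/>"
buf = io.BytesIO()
with gzip.GzipFile(mode="wb", fileobj=buf, compresslevel=1) as gzf:
    gzf.write(raw)
encoded = buf.getvalue()

with gzip.GzipFile(mode="rb", fileobj=io.BytesIO(encoded)) as gzf:
    print(gzf.read() == raw)  # -> True
```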

##
# Return a decoded file-like object for the gzip encoding
# as described in RFC 1952.
#
# @param response A stream supporting a read() method
# @return a file-like object that the decoded data can be read() from

class GzipDecodedResponse(gzip.GzipFile if gzip else object):
    """a file-like object to decode a response encoded with the gzip
    method, as described in RFC 1952.
    """
    def __init__(self, response):
        #response doesn't support tell() and read(), required by
        #GzipFile
        if not gzip:
            raise NotImplementedError
        self.io = BytesIO(response.read())
        gzip.GzipFile.__init__(self, mode="rb", fileobj=self.io)

    def close(self):
        gzip.GzipFile.close(self)
        self.io.close()


# --------------------------------------------------------------------
# request dispatcher

class _Method(object):
    # some magic to bind an XML-RPC method to an RPC server.
    # supports "nested" methods (e.g. examples.getStateName)
    def __init__(self, send, name):
        self.__send = send
        self.__name = name
    def __getattr__(self, name):
        return _Method(self.__send, "%s.%s" % (self.__name, name))
    def __call__(self, *args):
        return self.__send(self.__name, args)

##
# Standard transport class for XML-RPC over HTTP.
# <p>
# You can create custom transports by subclassing this class, and
# overriding selected methods.

class Transport(object):
    """Handles an HTTP transaction to an XML-RPC server."""

    # client identifier (may be overridden)
    user_agent = "Python-xmlrpc/%s" % __version__

    #if true, we'll request gzip encoding
    accept_gzip_encoding = True

    # if positive, encode request using gzip if it exceeds this threshold
    # note that many servers will get confused, so only use it if you know
    # that they can decode such a request
    encode_threshold = None #None = don't encode

    def __init__(self, use_datetime=False, use_builtin_types=False):
        self._use_datetime = use_datetime
        self._use_builtin_types = use_builtin_types
        self._connection = (None, None)
        self._extra_headers = []

    ##
    # Send a complete request, and parse the response.
    # Retry request if a cached connection has disconnected.
    #
    # @param host Target host.
    # @param handler Target RPC handler.
    # @param request_body XML-RPC request body.
    # @param verbose Debugging flag.
    # @return Parsed response.

    def request(self, host, handler, request_body, verbose=False):
        #retry request once if cached connection has gone cold
        for i in (0, 1):
            try:
                return self.single_request(host, handler, request_body, verbose)
            except socket.error as e:
                if i or e.errno not in (errno.ECONNRESET, errno.ECONNABORTED, errno.EPIPE):
                    raise
            except http_client.BadStatusLine: #close after we sent request
                if i:
                    raise

    def single_request(self, host, handler, request_body, verbose=False):
        # issue XML-RPC request
        try:
            http_conn = self.send_request(host, handler, request_body, verbose)
            resp = http_conn.getresponse()
            if resp.status == 200:
                self.verbose = verbose
                return self.parse_response(resp)

        except Fault:
            raise
        except Exception:
            #All unexpected errors leave connection in
            # a strange state, so we clear it.
            self.close()
            raise

        #We got an error response.
        #Discard any response data and raise exception
        if resp.getheader("content-length", ""):
            resp.read()
        raise ProtocolError(
            host + handler,
            resp.status, resp.reason,
            dict(resp.getheaders())
            )


    ##
    # Create parser.
    #
    # @return A 2-tuple containing a parser and an unmarshaller.

    def getparser(self):
        # get parser and unmarshaller
        return getparser(use_datetime=self._use_datetime,
                         use_builtin_types=self._use_builtin_types)

    ##
    # Get authorization info from host parameter
    # Host may be a string, or a (host, x509-dict) tuple; if a string,
    # it is checked for a "user:pw@host" format, and a "Basic
    # Authentication" header is added if appropriate.
    #
    # @param host Host descriptor (URL or (URL, x509 info) tuple).
    # @return A 3-tuple containing (actual host, extra headers,
    #     x509 info).  The header and x509 fields may be None.

    def get_host_info(self, host):

        x509 = {}
        if isinstance(host, tuple):
            host, x509 = host

        auth, host = urllib_parse.splituser(host)

        if auth:
            auth = urllib_parse.unquote_to_bytes(auth)
            auth = base64.encodebytes(auth).decode("utf-8")
            auth = "".join(auth.split()) # get rid of whitespace
            extra_headers = [
                ("Authorization", "Basic " + auth)
                ]
        else:
            extra_headers = []

        return host, extra_headers, x509
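The Basic-auth branch above base64-encodes the `user:pw` part of the URL and strips the whitespace that `encodebytes` inserts. A standalone sketch of that encoding step (the `user:secret` credential is illustrative):

```python
# Build the Authorization header value the way get_host_info does:
# base64-encode the credentials, then drop the trailing newline.
import base64

auth = base64.encodebytes(b"user:secret").decode("utf-8")
auth = "".join(auth.split())  # get rid of whitespace
header = ("Authorization", "Basic " + auth)
print(header)  # -> ('Authorization', 'Basic dXNlcjpzZWNyZXQ=')
```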

    ##
    # Connect to server.
    #
    # @param host Target host.
    # @return An HTTPConnection object

    def make_connection(self, host):
        #return an existing connection if possible.  This allows
        #HTTP/1.1 keep-alive.
        if self._connection and host == self._connection[0]:
            return self._connection[1]
        # create an HTTP connection object from a host descriptor
        chost, self._extra_headers, x509 = self.get_host_info(host)
        self._connection = host, http_client.HTTPConnection(chost)
        return self._connection[1]

    ##
    # Clear any cached connection object.
    # Used in the event of socket errors.
    #
    def close(self):
        if self._connection[1]:
            self._connection[1].close()
            self._connection = (None, None)

    ##
    # Send HTTP request.
    #
    # @param host Host descriptor (URL or (URL, x509 info) tuple).
    # @param handler Target RPC handler (a path relative to host)
    # @param request_body The XML-RPC request body
    # @param debug Enable debugging if debug is true.
    # @return An HTTPConnection.

    def send_request(self, host, handler, request_body, debug):
        connection = self.make_connection(host)
        headers = self._extra_headers[:]
        if debug:
            connection.set_debuglevel(1)
        if self.accept_gzip_encoding and gzip:
            connection.putrequest("POST", handler, skip_accept_encoding=True)
            headers.append(("Accept-Encoding", "gzip"))
        else:
            connection.putrequest("POST", handler)
        headers.append(("Content-Type", "text/xml"))
        headers.append(("User-Agent", self.user_agent))
        self.send_headers(connection, headers)
        self.send_content(connection, request_body)
        return connection

    ##
    # Send request headers.
    # This function provides a useful hook for subclassing
    #
    # @param connection HTTPConnection object.
    # @param headers list of (key, value) pairs for HTTP headers.

    def send_headers(self, connection, headers):
        for key, val in headers:
            connection.putheader(key, val)

    ##
    # Send request body.
    # This function provides a useful hook for subclassing
    #
    # @param connection HTTPConnection object.
    # @param request_body XML-RPC request body.

    def send_content(self, connection, request_body):
        #optionally encode the request
        if (self.encode_threshold is not None and
            self.encode_threshold < len(request_body) and
            gzip):
            connection.putheader("Content-Encoding", "gzip")
            request_body = gzip_encode(request_body)

        connection.putheader("Content-Length", str(len(request_body)))
        connection.endheaders(request_body)

    ##
    # Parse response.
    #
    # @param file Stream.
    # @return Response tuple and target method.

    def parse_response(self, response):
        # read response data from httpresponse, and parse it
        # Check for new http response object, otherwise it is a file object.
        if hasattr(response, 'getheader'):
            if response.getheader("Content-Encoding", "") == "gzip":
                stream = GzipDecodedResponse(response)
            else:
                stream = response
        else:
            stream = response

        p, u = self.getparser()

        while 1:
            data = stream.read(1024)
            if not data:
                break
            if self.verbose:
                print("body:", repr(data))
            p.feed(data)

        if stream is not response:
            stream.close()
        p.close()

        return u.close()

##
# Standard transport class for XML-RPC over HTTPS.

class SafeTransport(Transport):
    """Handles an HTTPS transaction to an XML-RPC server."""

    # FIXME: mostly untested

    def make_connection(self, host):
        if self._connection and host == self._connection[0]:
            return self._connection[1]

        if not hasattr(http_client, "HTTPSConnection"):
            raise NotImplementedError(
            "your version of http.client doesn't support HTTPS")
        # create an HTTPS connection object from a host descriptor
        # host may be a string, or a (host, x509-dict) tuple
        chost, self._extra_headers, x509 = self.get_host_info(host)
        self._connection = host, http_client.HTTPSConnection(chost,
            None, **(x509 or {}))
        return self._connection[1]

##
# Standard server proxy.  This class establishes a virtual connection
# to an XML-RPC server.
# <p>
# This class is available as ServerProxy and Server.  New code should
# use ServerProxy, to avoid confusion.
#
# @def ServerProxy(uri, **options)
# @param uri The connection point on the server.
# @keyparam transport A transport factory, compatible with the
#    standard transport class.
# @keyparam encoding The default encoding used for 8-bit strings
#    (default is UTF-8).
# @keyparam verbose Use a true value to enable debugging output.
#    (printed to standard output).
# @see Transport

class ServerProxy(object):
    """uri [,options] -> a logical connection to an XML-RPC server

    uri is the connection point on the server, given as
    scheme://host/target.

    The standard implementation always supports the "http" scheme.  If
    SSL socket support is available (Python 2.0), it also supports
    "https".

    If the target part and the slash preceding it are both omitted,
    "/RPC2" is assumed.

    The following options can be given as keyword arguments:

        transport: a transport factory
        encoding: the request encoding (default is UTF-8)

    All 8-bit strings passed to the server proxy are assumed to use
    the given encoding.
    """

    def __init__(self, uri, transport=None, encoding=None, verbose=False,
                 allow_none=False, use_datetime=False, use_builtin_types=False):
        # establish a "logical" server connection

        # get the url
        type, uri = urllib_parse.splittype(uri)
        if type not in ("http", "https"):
            raise IOError("unsupported XML-RPC protocol")
        self.__host, self.__handler = urllib_parse.splithost(uri)
        if not self.__handler:
            self.__handler = "/RPC2"

        if transport is None:
            if type == "https":
                handler = SafeTransport
            else:
                handler = Transport
            transport = handler(use_datetime=use_datetime,
                                use_builtin_types=use_builtin_types)
        self.__transport = transport

        self.__encoding = encoding or 'utf-8'
        self.__verbose = verbose
        self.__allow_none = allow_none

    def __close(self):
        self.__transport.close()

    def __request(self, methodname, params):
        # call a method on the remote server

        request = dumps(params, methodname, encoding=self.__encoding,
                        allow_none=self.__allow_none).encode(self.__encoding)

        response = self.__transport.request(
            self.__host,
            self.__handler,
            request,
            verbose=self.__verbose
            )

        if len(response) == 1:
            response = response[0]

        return response

    def __repr__(self):
        return (
            "<ServerProxy for %s%s>" %
            (self.__host, self.__handler)
            )

    __str__ = __repr__

    def __getattr__(self, name):
        # magic method dispatcher
        return _Method(self.__request, name)

    # note: to call a remote object with a non-standard name, use
    #   result = getattr(server, "strange-python-name")(args)

    def __call__(self, attr):
        """A workaround to get special attributes on the ServerProxy
           without interfering with the magic __getattr__
        """
        if attr == "close":
            return self.__close
        elif attr == "transport":
            return self.__transport
        raise AttributeError("Attribute %r not found" % (attr,))

# compatibility

Server = ServerProxy

# --------------------------------------------------------------------
# test code

if __name__ == "__main__":

    # simple test program (from the XML-RPC specification)

    # local server, available from Lib/xmlrpc/server.py
    server = ServerProxy("http://localhost:8000")

    try:
        print(server.currentTime.getCurrentTime())
    except Error as v:
        print("ERROR", v)

    multi = MultiCall(server)
    multi.getData()
    multi.pow(2,9)
    multi.add(1,2)
    try:
        for response in multi():
            print(response)
    except Error as v:
        print("ERROR", v)
dump_instancess


zMarshaller.dump_instancer�)NF)!r!r"r#r$r&r�r�r�r�rGr��boolr�r�dump_intr��floatrr�r
r�rrdr�r>�listr�rr�r
r�rCrbrrrrr��s<
	r�c@s>eZdZdZd>dd�Zdd�Zdd�Zd	d
�Zdd�Zd
d�Z	dd�Z
dd�ZiZdd�Z
e
ed<dd�Zeed<dd�Zeed<eed<eed<dd�Zeed <d!d"�Zeed#<eed$<d%d&�Zeed'<d(d)�Zeed*<d+d,�Zeed-<d.d/�Zeed0<d1d2�Zeed3<d4d5�Zeed6<d7d8�Zeed9<d:d;�Zeed<<d=S)?�UnmarshalleraUnmarshal an XML-RPC response, based on incoming XML event
    messages (start, data, end).  Call close() to get the resulting
    data structure.

    Note that this reader is fairly tolerant, and gladly accepts bogus
    XML-RPC data without complaining (but not bogus XML).
    FcCsBd|_g|_g|_g|_d|_d|_|jj|_|p4||_||_dS)N�utf-8)	�_type�_stack�_marks�_data�_methodname�	_encodingr��
_use_datetime�
_use_bytes�r�use_datetime�use_builtin_typesrrrr&�s

zUnmarshaller.__init__cCs>|jdus|jrt��|jdkr4tfi|jd���t|j�S)N�faultr)r�r�r-r.r�r>rrrrr��s

zUnmarshaller.closecCs|jSr)r�rrrr�
getmethodname�szUnmarshaller.getmethodnamecCs
||_dSr)r�)rrz�
standalonerrrrw�szUnmarshaller.xmlcCs6|dks|dkr"|j�t|j��g|_|dk|_dS)N�array�structr:)r�r��lenr�r��_value)r�tag�attrsrrrrr�szUnmarshaller.startcCs|j�|�dSr)r�r�)r�textrrrrY�szUnmarshaller.datacCs8z|j|}Wnty Yn0||d�|j��SdS)Nr~)r�r�r�r�)rr�r�rrrrt�s
zUnmarshaller.endcCs0z|j|}Wnty Yn0|||�SdSr)r�r�)rr�rYr�rrr�end_dispatch�s
zUnmarshaller.end_dispatchcCs|�d�d|_dSr{)r�r�rXrrr�end_nil�s
zUnmarshaller.end_nil�nilcCs:|dkr|�d�n|dkr(|�d�ntd��d|_dS)Nr�Fr�Tzbad boolean valuer)r�rHr�rXrrr�end_boolean�szUnmarshaller.end_boolean�booleancCs|�t|��d|_dSr{)r�rr�rXrrr�end_int�szUnmarshaller.end_int�i4�i8rcCs|�t|��d|_dSr{)r�r�r�rXrrr�
end_double�szUnmarshaller.end_double�doublecCs&|jr|�|j�}|�|�d|_dSr{)r�rZr�r�rXrrr�
end_string�s
zUnmarshaller.end_string�string�namecCs.|j��}|j|d�g|j|d�<d|_dSr{)r��popr�r�)rrY�markrrr�	end_array�s
zUnmarshaller.end_arrayr�cCs`|j��}i}|j|d�}tdt|�d�D]}||d|||<q,|g|j|d�<d|_dS)Nr�r)r�r�r�r	r�r�)rrYr�rr�r�rrr�
end_struct�s
zUnmarshaller.end_structr�cCs6t�}|�|�d��|jr"|j}|�|�d|_dS)Nrjr)rbrZr^r�rYr�r��rrYr:rrr�
end_base64	s
zUnmarshaller.end_base64rgcCs,t�}|�|�|jrt|�}|�|�dSr)rCrZr�rar�r�rrr�end_dateTimes

zUnmarshaller.end_dateTimezdateTime.iso8601cCs|jr|�|�dSr)r�r�rXrrr�	end_valueszUnmarshaller.end_valuer:cCs
d|_dS)N�params�r�rXrrr�
end_params!szUnmarshaller.end_paramsr�cCs
d|_dS)Nr�r�rXrrr�	end_fault%szUnmarshaller.end_faultr�cCs"|jr|�|j�}||_d|_dS)N�
methodName)r�rZr�r�rXrrr�end_methodName)szUnmarshaller.end_methodNamer�N)FF)r!r"r#r$r&r�r�rwrrrYrtr�r�r�r�r�r�r�r�r�r�r�r�r�r�r�rrrrr��sN
	r�c@s$eZdZdd�Zdd�Zdd�ZdS)�_MultiCallMethodcCs||_||_dSr)�_MultiCallMethod__call_list�_MultiCallMethod__name)r�	call_listr�rrrr&6sz_MultiCallMethod.__init__cCst|jd|j|f�S�Nz%s.%s)r�r�r��rr�rrr�__getattr__9sz_MultiCallMethod.__getattr__cGs|j�|j|f�dSr)r�r�r��r�argsrrr�__call__;sz_MultiCallMethod.__call__N�r!r"r#r&r�r�rrrrr�3sr�c@s eZdZdZdd�Zdd�ZdS)�MultiCallIteratorzaIterates over the results of a multicall. Exceptions are
    raised in response to xmlrpc faults.cCs
||_dSr)�results)rr�rrrr&BszMultiCallIterator.__init__cCsP|j|}tt|�t�r,t|d|d��n t|�tg�krD|dStd��dS)Nr/r0rz#unexpected type in multicall result)r�r=rGrr.�
ValueError)rr��itemrrr�__getitem__Es
zMultiCallIterator.__getitem__N)r!r"r#r$r&r�rrrrr�>sr�c@s4eZdZdZdd�Zdd�ZeZdd�Zdd	�Zd
S)�	MultiCalla}server -> a object used to boxcar method calls

    server should be a ServerProxy object.

    Methods can be added to the MultiCall using normal
    method call syntax e.g.:

    multicall = MultiCall(server_proxy)
    multicall.add(2,3)
    multicall.get_address("Guido")

    To execute the multicall, call the MultiCall object e.g.:

    add_result, address = multicall()
    cCs||_g|_dSr)�_MultiCall__server�_MultiCall__call_list)r�serverrrrr&_szMultiCall.__init__cCsdt|�S)Nz<MultiCall at %x>)rVrrrrr+cszMultiCall.__repr__cCst|j|�Sr)r�r�r�rrrr�hszMultiCall.__getattr__cCs6g}|jD]\}}|�||d��q
t|jj�|��S)N)r�r�)r�r�r�r��system�	multicall)r�marshalled_listr�r�rrrr�kszMultiCall.__call__N)	r!r"r#r$r&r+r r�r�rrrrr�Nsr�FcCsrtrHtrH|rt}tj}n|r&t}t}nt}t}tdd||t�}t|�}n"t||d�}trbt|�}nt	|�}||fS)z�getparser() -> parser, unmarshaller

    Create an instance of the fastest available parser, and attach it
    to an unmarshalling object.  Return both objects.
    TF�r�r�)
�
FastParser�FastUnmarshallerrargrhrmr`r.r�rn)r�r��
mkdatetime�mkbytesrxryrrr�	getparser}s 

r�cCs�t|ttf�sJd��t|t�r&d}n"|rHt|t�rHt|�dksHJd��|sPd}tr^t|�}n
t||�}|�|�}|dkr�dt|�}nd}|r�t|t�s�|�|�}|d|d|d	f}n|r�|d
|df}n|Std��	|�S)
a�data [,options] -> marshalled data

    Convert an argument tuple or a Fault instance to an XML-RPC
    request (or response, if the methodresponse option is used).

    In addition to the data object, the following options can be given
    as keyword arguments:

        methodname: the method name for a methodCall packet

        methodresponse: true to create a methodResponse packet.
        If this option is used with a tuple, the tuple must be
        a singleton (i.e. it can contain only one element).

        encoding: the packet encoding (default is UTF-8)

    All byte strings in the data structure are assumed to use the
    packet encoding.  Unicode strings are automatically converted,
    where necessary.
    z(argument must be tuple or Fault instancerz"response tuple must be a singletonr�z$<?xml version='1.0' encoding='%s'?>
z<?xml version='1.0'?>
z<methodCall>
<methodName>z</methodName>
z</methodCall>
z<methodResponse>
z</methodResponse>
r~)
r=r>r.r��FastMarshallerr�r�r
r^r�)r��
methodname�methodresponserzr��mrY�	xmlheaderrrrr��s>





��r�cCs2t||d�\}}|�|�|��|��|��fS)z�data -> unmarshalled data, method name

    Convert an XML-RPC packet to unmarshalled data plus a method
    name (None if not present).

    If the XML-RPC packet represents a fault condition, this function
    raises a Fault exception.
    r�)r�r}r�r�)rYr�r��p�urrr�loads�s	
rcCsDtst�t�}tjd|dd�}|�|�|��|��}|��|S)zhdata -> gzip encoded data

    Encode data using the gzip content encoding as described in RFC 1952
    �wbr)�mode�fileobj�
compresslevel)�gzip�NotImplementedErrorr�GzipFiler\r��getvalue)rYr��gzfrlrrr�gzip_encodes
rcCsZtst�t|�}tjd|d�}z|��}WntyDtd��Yn0|��|��|S)zrgzip encoded data -> unencoded data

    Decode data using the gzip content encoding as described in RFC 1952
    �rb�rrzinvalid data)r	r
rr�read�IOErrorr�r�)rYr�r
�decodedrrr�gzip_decodesrc@s eZdZdZdd�Zdd�ZdS)�GzipDecodedResponsezha file-like object to decode a response encoded with the gzip
    method, as described in RFC 1952.
    cCs.tst�t|���|_tjj|d|jd�dS)Nrr)r	r
rr�iorr&)r�responserrrr&;szGzipDecodedResponse.__init__cCstj�|�|j��dSr)r	rr�rrrrrr�CszGzipDecodedResponse.closeN)r!r"r#r$r&r�rrrrr7src@s$eZdZdd�Zdd�Zdd�ZdS)�_MethodcCs||_||_dSr��
_Method__send�
_Method__name)r�sendr�rrrr&Nsz_Method.__init__cCst|jd|j|f�Sr�)rrrr�rrrr�Qsz_Method.__getattr__cGs|�|j|�Srrr�rrrr�Ssz_Method.__call__Nr�rrrrrKsrc@s~eZdZdZdeZdZdZddd�Zddd	�Z	dd
d�Z
dd
�Zdd�Zdd�Z
dd�Zdd�Zdd�Zdd�Zdd�ZdS)�	Transportz1Handles an HTTP transaction to an XML-RPC server.zPython-xmlrpc/%sTNFcCs||_||_d|_g|_dS)N�NN)r��_use_builtin_types�_connection�_extra_headersr�rrrr&jszTransport.__init__cCs�dD]v}z|�||||�WStjy`}z(|sJ|jtjtjtjfvrL�WYd}~qd}~0tjyx|rt�Yq0qdS)N)rr)	�single_request�socket�error�errno�
ECONNRESET�ECONNABORTED�EPIPE�http_client�
BadStatusLine)r�host�handler�request_body�verboser��errr�requestzszTransport.requestcCs�z8|�||||�}|��}|jdkr6||_|�|�WSWn.tyL�Yntyf|���Yn0|�dd�r||�	�t
|||j|jt|�
����dS)N��zcontent-lengthr~)�send_request�getresponse�statusr.�parse_responser.�	Exceptionr��	getheaderrr%�reasonr�
getheaders)rr+r,r-r.�	http_conn�resprrrr"�s$

�zTransport.single_requestcCst|j|jd�S)Nr�)r�r�rrrrrr��s�zTransport.getparsercCsni}t|t�r|\}}t�|�\}}|r`t�|�}t�|��d�}d�|�	��}dd|fg}ng}|||fS)Nr�r~�
AuthorizationzBasic )
r=r>�urllib_parse�	splituser�unquote_to_bytesrgrkrZr��split)rr+�x509�auth�
extra_headersrrr�
get_host_info�s


�zTransport.get_host_infocCsJ|jr||jdkr|jdS|�|�\}|_}|t�|�f|_|jdS)Nrr)r rDr!r)�HTTPConnection�rr+�chostrArrr�make_connection�s

zTransport.make_connectioncCs"|jdr|jd��d|_dS)Nrr)r r�rrrrr��s
zTransport.closecCs�|�|�}|jdd�}|r&|�d�|jrLtrL|jd|dd�|�d�n|�d|�|�d�|�d|jf�|�||�|�	||�|S)Nr�POSTT)�skip_accept_encoding)zAccept-Encodingr	)zContent-Typeztext/xmlz
User-Agent)
rHr!�set_debuglevel�accept_gzip_encodingr	�
putrequestr��
user_agent�send_headers�send_content)rr+r,r-�debug�
connectionr*rrrr2�s



zTransport.send_requestcCs|D]\}}|�||�qdSr)�	putheader)rrRr*�key�valrrrrOszTransport.send_headerscCsR|jdur0|jt|�kr0tr0|�dd�t|�}|�dtt|���|�|�dS)N�Content-Encodingr	zContent-Length)�encode_thresholdr�r	rSrr
�
endheaders)rrRr-rrrrPs
��zTransport.send_contentcCs�t|d�r*|�dd�dkr$t|�}q.|}n|}|��\}}|�d�}|sJqj|jr^tdt|��|�|�q:||urz|�	�|�	�|�	�S)Nr7rVr~r	izbody:)
rFr7rr�rr.�printrr}r�)rr�streamrrrYrrrr5s 


zTransport.parse_response)FF)F)F)r!r"r#r$�__version__rNrLrWr&r0r"r�rDrHr�r2rOrPr5rrrrr\s


!rc@seZdZdZdd�ZdS)�
SafeTransportz2Handles an HTTPS transaction to an XML-RPC server.cCsj|jr||jdkr|jdSttd�s0td��|�|�\}|_}|tj|dfi|pVi��f|_|jdS)Nrr�HTTPSConnectionz1your version of http.client doesn't support HTTPS)r rFr)r
rDr!r]rFrrrrHBs

���
zSafeTransport.make_connectionN)r!r"r#r$rHrrrrr\=sr\c@sFeZdZdZddd�Zdd�Zdd	�Zd
d�ZeZdd
�Z	dd�Z
dS)�ServerProxya�uri [,options] -> a logical connection to an XML-RPC server

    uri is the connection point on the server, given as
    scheme://host/target.

    The standard implementation always supports the "http" scheme.  If
    SSL socket support is available (Python 2.0), it also supports
    "https".

    If the target part and the slash preceding it are both omitted,
    "/RPC2" is assumed.

    The following options can be given as keyword arguments:

        transport: a transport factory
        encoding: the request encoding (default is UTF-8)

    All 8-bit strings passed to the server proxy are assumed to use
    the given encoding.
    NFc
Cs�t�|�\}}|dvrtd��t�|�\|_|_|js<d|_|durb|dkrRt}	nt}	|	||d�}||_|pnd|_	||_
||_dS)N)�http�httpszunsupported XML-RPC protocolz/RPC2r`r�r�)r=�	splittyper�	splithost�_ServerProxy__host�_ServerProxy__handlerr\r�_ServerProxy__transport�_ServerProxy__encoding�_ServerProxy__verbose�_ServerProxy__allow_none)
r�uri�	transportrzr.r�r�r�rGr,rrrr&ws"�
zServerProxy.__init__cCs|j��dSr)rer�rrrr�__close�szServerProxy.__closecCsNt|||j|jd��|j�}|jj|j|j||jd�}t	|�dkrJ|d}|S)N)rzr�)r.rr)
r�rfrhr^rer0rcrdrgr�)rr�r�r0rrrr�	__request�s
���zServerProxy.__requestcCsd|j|jfS)Nz<ServerProxy for %s%s>)rcrdrrrrr+�s
��zServerProxy.__repr__cCst|j|�Sr)r�_ServerProxy__requestr�rrrr��szServerProxy.__getattr__cCs.|dkr|jS|dkr|jStd|f��dS)z|A workaround to get special attributes on the ServerProxy
           without interfering with the magic __getattr__
        r�rjzAttribute %r not foundN)�_ServerProxy__closere�AttributeError)r�attrrrrr��s
zServerProxy.__call__)NNFFFF)r!r"r#r$r&rnrmr+r r�r�rrrrr^as�
r^�__main__zhttp://localhost:8000�ERRORr�rr)FF)NNNF)FF)gr$�
__future__rrrrZfuture.builtinsrrrr	r
rg�sys�version_info�encodestringrk�decodestringrhr?r
Zfuture.backports.httprr)Zfuture.backports.urllibrr=Zfuture.utilsrZxml.parsersrr#r%rrr	�ImportErrorr�versionr[r�r��PARSE_ERROR�SERVER_ERROR�APPLICATION_ERROR�SYSTEM_ERROR�TRANSPORT_ERROR�NOT_WELLFORMED_ERROR�UNSUPPORTED_ENCODING�INVALID_ENCODING_CHAR�INVALID_XMLRPC�METHOD_NOT_FOUND�INVALID_METHOD_PARAMS�INTERNAL_ERRORr6rr%r-r.r�r��Booleanr;rB�objectrCr`rarbrmr�rnr�r�r�r�r�r�r�r�r�r�rrrrrrrr\r^�Serverr!r�rY�currentTimeZgetCurrentTimer�ZmultiZgetData�pow�addrrrrr�<module>Xs�,


O	((-'
'�
M
b$_
 
PK�Cu\���t�t9future/backports/xmlrpc/__pycache__/server.cpython-39.pycnu�[���a

��?h���@s2dZddlmZmZmZmZddlmZmZddl	m
Z
mZmZm
Z
mZddlmZddlmmmZddlmZddlZddlZddlZddlZddlZddlZzddlZWney�dZYn0d,d	d
�Zdd�Z Gd
d�de!�Z"Gdd�de�Z#Gdd�dej$e"�Z%Gdd�de%�Z&Gdd�de"�Z'Gdd�dej(�Z)Gdd�de!�Z*Gdd�de#�Z+Gdd�de%e*�Z,Gdd �d e'e*�Z-e.d!k�r.ddl/Z/Gd"d#�d#�Z0e%d$�Ze�1e2�e�1d%d&�d'�ej3e0�dd(�e�4�e5d)�e5d*�ze�6�Wn.e7�y,e5d+�e�8�e�9d�Yn0dS)-aK
Ported using Python-Future from the Python 3.3 standard library.

XML-RPC Servers.

This module can be used to create simple XML-RPC servers
by creating a server and either installing functions, a
class instance, or by extending the SimpleXMLRPCServer
class.

It can also be used to handle XML-RPC requests in a CGI
environment using CGIXMLRPCRequestHandler.

The Doc* classes can be used to create XML-RPC servers that
serve pydoc-style documentation in response to HTTP
GET requests. This documentation is dynamically generated
based on the functions and methods registered with the
server.

A list of possible usage patterns follows:

1. Install functions:

server = SimpleXMLRPCServer(("localhost", 8000))
server.register_function(pow)
server.register_function(lambda x,y: x+y, 'add')
server.serve_forever()

2. Install an instance:

class MyFuncs:
    def __init__(self):
        # make all of the sys functions available through sys.func_name
        import sys
        self.sys = sys
    def _listMethods(self):
        # implement this method so that system.listMethods
        # knows to advertise the sys methods
        return list_public_methods(self) + \
                ['sys.' + method for method in list_public_methods(self.sys)]
    def pow(self, x, y): return pow(x, y)
    def add(self, x, y) : return x + y

server = SimpleXMLRPCServer(("localhost", 8000))
server.register_introspection_functions()
server.register_instance(MyFuncs())
server.serve_forever()

3. Install an instance with custom dispatch method:

class Math:
    def _listMethods(self):
        # this method must be present for system.listMethods
        # to work
        return ['add', 'pow']
    def _methodHelp(self, method):
        # this method must be present for system.methodHelp
        # to work
        if method == 'add':
            return "add(2,3) => 5"
        elif method == 'pow':
            return "pow(x, y[, z]) => number"
        else:
            # By convention, return empty
            # string if no help is available
            return ""
    def _dispatch(self, method, params):
        if method == 'pow':
            return pow(*params)
        elif method == 'add':
            return params[0] + params[1]
        else:
            raise ValueError('bad method')

server = SimpleXMLRPCServer(("localhost", 8000))
server.register_introspection_functions()
server.register_instance(Math())
server.serve_forever()

4. Subclass SimpleXMLRPCServer:

class MathServer(SimpleXMLRPCServer):
    def _dispatch(self, method, params):
        try:
            # We are forcing the 'export_' prefix on methods that are
            # callable through XML-RPC to prevent potential security
            # problems
            func = getattr(self, 'export_' + method)
        except AttributeError:
            raise Exception('method "%s" is not supported' % method)
        else:
            return func(*params)

    def export_add(self, x, y):
        return x + y

server = MathServer(("localhost", 8000))
server.serve_forever()

5. CGI script:

server = CGIXMLRPCRequestHandler()
server.register_function(pow)
server.handle_request()
�)�absolute_import�division�print_function�unicode_literals)�int�str)�Fault�dumps�loads�gzip_encode�gzip_decode)�BaseHTTPRequestHandlerN)�socketserverTcCsF|r|�d�}n|g}|D]&}|�d�r6td|��qt||�}q|S)aGresolve_dotted_attribute(a, 'b.c.d') => a.b.c.d

    Resolves a dotted attribute name to an object.  Raises
    an AttributeError if any attribute in the chain starts with a '_'.

    If the optional allow_dotted_names argument is false, dots are not
    supported and this function operates similar to getattr(obj, attr).
    �.�_z(attempt to access private attribute "%s")�split�
startswith�AttributeError�getattr)�obj�attr�allow_dotted_names�attrs�i�r�H/usr/local/lib/python3.9/site-packages/future/backports/xmlrpc/server.py�resolve_dotted_attribute�s

�rcs�fdd�t��D�S)zkReturns a list of attribute strings, found in the specified
    object, which represent callable attributescs(g|] }|�d�stt�|��r|�qS)r)r�callabler)�.0�member�rrr�
<listcomp>�s
�z'list_public_methods.<locals>.<listcomp>)�dirr rr r�list_public_methods�sr#c@speZdZdZddd�Zddd�Zddd	�Zd
d�Zdd
�Zddd�Z	dd�Z
dd�Zdd�Zdd�Z
dd�ZdS)�SimpleXMLRPCDispatchera&Mix-in class that dispatches XML-RPC requests.

    This class is used to register XML-RPC method handlers
    and then to dispatch them. This class doesn't need to be
    instanced directly when used by SimpleXMLRPCServer but it
    can be instanced when used by the MultiPathXMLRPCServer
    FNcCs&i|_d|_||_|pd|_||_dS�N�utf-8)�funcs�instance�
allow_none�encoding�use_builtin_types��selfr)r*r+rrr�__init__�s

zSimpleXMLRPCDispatcher.__init__cCs||_||_dS)aRegisters an instance to respond to XML-RPC requests.

        Only one instance can be installed at a time.

        If the registered instance has a _dispatch method then that
        method will be called with the name of the XML-RPC method and
        its parameters as a tuple
        e.g. instance._dispatch('add',(2,3))

        If the registered instance does not have a _dispatch method
        then the instance will be searched to find a matching method
        and, if found, will be called. Methods beginning with an '_'
        are considered private and will not be called by
        SimpleXMLRPCServer.

        If a registered function matches a XML-RPC request, then it
        will be called instead of the registered instance.

        If the optional allow_dotted_names argument is true and the
        instance does not have a _dispatch method, method names
        containing dots are supported and resolved, as long as none of
        the name segments start with an '_'.

            *** SECURITY WARNING: ***

            Enabling the allow_dotted_names options allows intruders
            to access your module's global variables and may allow
            intruders to execute arbitrary code on your machine.  Only
            use this option on a secure, closed network.

        N)r(r)r-r(rrrr�register_instance�s!z(SimpleXMLRPCDispatcher.register_instancecCs|dur|j}||j|<dS)z�Registers a function to respond to XML-RPC requests.

        The optional name argument can be used to set a Unicode name
        for the function.
        N)�__name__r')r-�function�namerrr�register_function�sz(SimpleXMLRPCDispatcher.register_functioncCs|j�|j|j|jd��dS)z�Registers the XML-RPC introspection methods in the system
        namespace.

        see http://xmlrpc.usefulinc.com/doc/reserved.html
        )zsystem.listMethodszsystem.methodSignaturezsystem.methodHelpN)r'�update�system_listMethods�system_methodSignature�system_methodHelp�r-rrr� register_introspection_functions�s
�z7SimpleXMLRPCDispatcher.register_introspection_functionscCs|j�d|ji�dS)z�Registers the XML-RPC multicall method in the system
        namespace.

        see http://www.xmlrpc.com/discuss/msgReader$1208zsystem.multicallN)r'r4�system_multicallr8rrr�register_multicall_functions�sz3SimpleXMLRPCDispatcher.register_multicall_functionsc
Cs�zPt||jd�\}}|dur(|||�}n|�||�}|f}t|d|j|jd�}Wnnty�}zt||j|jd�}WYd}~nBd}~0t��\}}	}
ttdd||	f�|j|jd�}Yn0|�	|j�S)a�Dispatches an XML-RPC method from marshalled (XML) data.

        XML-RPC methods are dispatched from the marshalled (XML) data
        using the _dispatch method and the result is returned as
        marshalled data. For backwards compatibility, a dispatch
        function can be provided as an argument (see comment in
        SimpleXMLRPCRequestHandler.do_POST) but overriding the
        existing method through subclassing is the preferred means
        of changing method dispatch behavior.
        )r+N�)�methodresponser)r*)r)r*�%s:%s�r*r))
r
r+�	_dispatchr	r)r*r�sys�exc_info�encode)r-�data�dispatch_method�path�params�method�response�fault�exc_type�	exc_value�exc_tbrrr�_marshaled_dispatch�s(�
��z*SimpleXMLRPCDispatcher._marshaled_dispatchcCs^t|j���}|jdurVt|jd�r8|t|j���O}nt|jd�sV|tt|j��O}t|�S)zwsystem.listMethods() => ['add', 'subtract', 'multiple']

        Returns a list of the methods supported by the server.N�_listMethodsr@)�setr'�keysr(�hasattrrOr#�sorted)r-�methodsrrrr5s
z)SimpleXMLRPCDispatcher.system_listMethodscCsdS)a#system.methodSignature('add') => [double, int, int]

        Returns a list describing the signature of the method. In the
        above example, the add method takes two integers as arguments
        and returns a double result.

        This server does NOT support system.methodSignature.zsignatures not supportedr)r-�method_namerrrr6*sz-SimpleXMLRPCDispatcher.system_methodSignaturecCs�d}||jvr|j|}nV|jdurpt|jd�r<|j�|�St|jd�spzt|j||j�}WntynYn0|dur|dSt�|�SdS)z�system.methodHelp('add') => "Adds two integers together"

        Returns a string containing documentation for the specified method.N�_methodHelpr@�)	r'r(rRrVrrr�pydoc�getdoc)r-rUrHrrrr77s$

�z(SimpleXMLRPCDispatcher.system_methodHelpc
Cs�g}|D]�}|d}|d}z|�|�||�g�Wqtyl}z |�|j|jd��WYd}~qd}~0t��\}}}	|�dd||fd��Yq0q|S)z�system.multicall([{'methodName': 'add', 'params': [2, 2]}, ...]) => [[4], ...]

        Allows the caller to package multiple XML-RPC calls into a single
        request.

        See http://www.xmlrpc.com/discuss/msgReader$1208
        �
methodNamerG)�	faultCode�faultStringNr<r>)�appendr@rr[r\rArB)
r-�	call_list�results�callrUrGrJrKrLrMrrrr:Vs(
��
��z'SimpleXMLRPCDispatcher.system_multicallcCs�d}z|j|}Wnbtyt|jdurpt|jd�rH|j�||�YSzt|j||j�}WntynYn0Yn0|dur�||�Std|��dS)a�Dispatches the XML-RPC method.

        XML-RPC calls are forwarded to a registered function that
        matches the called XML-RPC method name. If no such function
        exists then the call is forwarded to the registered instance,
        if available.

        If the registered instance has a _dispatch method then that
        method will be called with the name of the XML-RPC method and
        its parameters as a tuple
        e.g. instance._dispatch('add',(2,3))

        If the registered instance does not have a _dispatch method
        then the instance will be searched to find a matching method
        and, if found, will be called.

        Methods beginning with an '_' are considered private and will
        not be called.
        Nr@zmethod "%s" is not supported)	r'�KeyErrorr(rRr@rrr�	Exception)r-rHrG�funcrrrr@vs$
�z SimpleXMLRPCDispatcher._dispatch)FNF)F)N)NN)r0�
__module__�__qualname__�__doc__r.r/r3r9r;rNr5r6r7r:r@rrrrr$�s�

$

%
 r$c@sfeZdZdZdZdZdZdZe�	dej
ejB�Zdd�Z
d	d
�Zdd�Zd
d�Zdd�Zddd�ZdS)�SimpleXMLRPCRequestHandlerz�Simple XML-RPC request handler class.

    Handles all HTTP POST requests and attempts to decode them as
    XML-RPC requests.
    )�/z/RPC2ix���Tz�
                            \s* ([^\s;]+) \s*            #content-coding
                            (;\s* q \s*=\s* ([0-9\.]+))? #q
                            cCs^i}|j�dd�}|�d�D]<}|j�|�}|r|�d�}|rFt|�nd}|||�d�<q|S)NzAccept-EncodingrW�,�g�?r<)�headers�getr�	aepattern�match�group�float)r-�rZae�ero�vrrr�accept_encodings�s
z+SimpleXMLRPCRequestHandler.accept_encodingscCs|jr|j|jvSdSdS)NT)�	rpc_pathsrFr8rrr�is_rpc_path_valid�sz,SimpleXMLRPCRequestHandler.is_rpc_path_validc
Cs�|��s|��dSz�d}t|jd�}g}|rht||�}|j�|�}|sLqh|�|�|t|d�8}q,d�	|�}|�
|�}|dur�WdS|j�|t
|dd�|j�}Wn�t�y8}zx|�d�t|jd��r|jj�r|�d	t|��t��}	t|	�d
d�d
�}	|�d|	�|�d
d�|��WYd}~n�d}~00|�d�|�dd�|jdu�r�t|�|jk�r�|���dd�}
|
�r�zt|�}|�dd�Wnt�y�Yn0|�d
tt|���|��|j�|�dS)z�Handles the HTTP POST request.

        Attempts to interpret all HTTP POST requests as XML-RPC calls,
        which are forwarded to the server's _dispatch method for handling.
        Ni�zcontent-lengthri�r@i��_send_traceback_headerzX-exception�ASCII�backslashreplacezX-traceback�Content-length�0���Content-typeztext/xml�gziprzContent-Encoding) rw�
report_404rrl�min�rfile�readr]�len�join�decode_request_content�serverrNrrFrb�
send_responserRry�send_headerr�	traceback�
format_excrC�end_headers�encode_thresholdrurmr�NotImplementedError�wfile�write)r-Zmax_chunk_sizeZsize_remaining�L�
chunk_size�chunkrDrIrs�trace�qrrr�do_POST�s\



�
�
z"SimpleXMLRPCRequestHandler.do_POSTcCs�|j�dd���}|dkr|S|dkrrz
t|�WStyR|�dd|�Yq�tyn|�dd�Yq�0n|�dd|�|�dd	�|��dS)
Nzcontent-encoding�identityr�i�zencoding %r not supported�zerror decoding gzip contentr|r})	rlrm�lowerrr�r��
ValueErrorr�r�)r-rDr*rrrr�s
z1SimpleXMLRPCRequestHandler.decode_request_contentcCsF|�d�d}|�dd�|�dtt|���|��|j�|�dS)Ni�sNo such pagerz
text/plainr|)r�r�rr�r�r�r��r-rIrrrr�'s
z%SimpleXMLRPCRequestHandler.report_404�-cCs|jjrt�|||�dS)z$Selectively log an accepted request.N)r��logRequestsr
�log_request)r-�code�sizerrrr�0sz&SimpleXMLRPCRequestHandler.log_requestN)r�r�)r0rdrerfrvr��wbufsize�disable_nagle_algorithm�re�compile�VERBOSE�
IGNORECASErnrurwr�r�r�r�rrrrrg�s
�G	rgc@s.eZdZdZdZdZedddddfdd�ZdS)�SimpleXMLRPCServeragSimple XML-RPC server.

    Simple XML-RPC server that allows functions and a single instance
    to be installed to handle requests. The default implementation
    attempts to dispatch XML-RPC calls to the functions or instance
    installed in the server. Override the _dispatch method inherited
    from SimpleXMLRPCDispatcher to change this behavior.
    TFNc	Csn||_t�||||�tj�||||�tdurjttd�rjt�|��tj�}|tj	O}t�|��tj
|�dS)N�
FD_CLOEXEC)r�r$r.r�	TCPServer�fcntlrR�filenoZF_GETFDr�ZF_SETFD)	r-�addr�requestHandlerr�r)r*�bind_and_activater+�flagsrrrr.Is
zSimpleXMLRPCServer.__init__)r0rdrerf�allow_reuse_addressryrgr.rrrrr�6s	�r�c@s@eZdZdZedddddfdd�Zdd�Zd	d
�Zd
dd�ZdS)�MultiPathXMLRPCServera\Multipath XML-RPC Server
    This specialization of SimpleXMLRPCServer allows the user to create
    multiple Dispatcher instances and assign them to different
    HTTP request paths.  This makes it possible to run two or more
    'virtual XML-RPC servers' at the same port.
    Make sure that the requestHandler accepts the paths in question.
    TFNc
Cs2t�||||||||�i|_||_|p*d|_dSr%)r�r.�dispatchersr)r*�r-r�r�r�r)r*r�r+rrrr.as�zMultiPathXMLRPCServer.__init__cCs||j|<|S�N�r�)r-rF�
dispatcherrrr�add_dispatcherks
z$MultiPathXMLRPCServer.add_dispatchercCs
|j|Sr�r�)r-rFrrr�get_dispatcherosz$MultiPathXMLRPCServer.get_dispatcherc	Csjz|j|�|||�}WnLt��dd�\}}ttdd||f�|j|jd�}|�|j�}Yn0|S)N�r<r>r?)	r�rNrArBr	rr*r)rC)r-rDrErFrIrKrLrrrrNrs
��z)MultiPathXMLRPCServer._marshaled_dispatch)NN)	r0rdrerfrgr.r�r�rNrrrrr�Ys�

r�c@s4eZdZdZddd�Zdd�Zdd	�Zd
d
d�ZdS)�CGIXMLRPCRequestHandlerz3Simple handler for XML-RPC data passed through CGI.FNcCst�||||�dSr�)r$r.r,rrrr.�sz CGIXMLRPCRequestHandler.__init__cCsP|�|�}td�tdt|��t�tj��tjj�|�tjj��dS)zHandle a single XML-RPC requestzContent-Type: text/xml�Content-Length: %dN)rN�printr�rA�stdout�flush�bufferr�)r-�request_textrIrrr�
handle_xmlrpc�s

z%CGIXMLRPCRequestHandler.handle_xmlrpccCs�d}tj|\}}tj|||d�}|�d�}td||f�tdtj�tdt|��t�tj	�
�tj	j�|�tj	j�
�dS)z�Handle a single HTTP GET request.

        Default implementation indicates an error because
        XML-RPC uses the POST method.
        r�)r��message�explainr&z
Status: %d %szContent-Type: %sr�N)
r
�	responses�http_serverZDEFAULT_ERROR_MESSAGErCr�ZDEFAULT_ERROR_CONTENT_TYPEr�rAr�r�r�r�)r-r�r�r�rIrrr�
handle_get�s ��

z"CGIXMLRPCRequestHandler.handle_getc	Csx|dur$tj�dd�dkr$|��nPzttj�dd��}WnttfyTd}Yn0|durjtj�	|�}|�
|�dS)z�Handle a single XML-RPC request passed through a CGI post method.

        If no XML data is given then it is read from stdin. The resulting
        XML-RPC response is printed to stdout along with the correct HTTP
        headers.
        N�REQUEST_METHOD�GET�CONTENT_LENGTHri)�os�environrmr�rr��	TypeErrorrA�stdinr�r�)r-r��lengthrrr�handle_request�s�

z&CGIXMLRPCRequestHandler.handle_request)FNF)N)r0rdrerfr.r�r�r�rrrrr��s

r�c@s>eZdZdZdiiifdd�Zdiiidfdd�Zdd�ZdS)	�
ServerHTMLDocz7Class used to generate pydoc HTML document for a serverNcCsZ|p|j}g}d}t�d�}|�||�}	|	s0�q:|	��\}
}|�||||
���|	��\}}
}}}}|
r�||��dd�}|�d||f�n�|r�dt|�}|�d|||�f�n~|r�dt|�}|�d|||�f�nV|||d�d	k�r|�|�	||||��n(|�r"|�d
|�n|�|�	||��|}q|�|||d���d�
|�S)
z�Mark up some plain text, given a context of symbols to look for.
r"""
Ported using Python-Future from the Python 3.3 standard library.

XML-RPC Servers.

This module can be used to create simple XML-RPC servers
by creating a server and either installing functions, a
class instance, or by extending the SimpleXMLRPCServer
class.

It can also be used to handle XML-RPC requests in a CGI
environment using CGIXMLRPCRequestHandler.

The Doc* classes can be used to create XML-RPC servers that
serve pydoc-style documentation in response to HTTP
GET requests. This documentation is dynamically generated
based on the functions and methods registered with the
server.

A list of possible usage patterns follows:

1. Install functions:

server = SimpleXMLRPCServer(("localhost", 8000))
server.register_function(pow)
server.register_function(lambda x,y: x+y, 'add')
server.serve_forever()

2. Install an instance:

class MyFuncs:
    def __init__(self):
        # make all of the sys functions available through sys.func_name
        import sys
        self.sys = sys
    def _listMethods(self):
        # implement this method so that system.listMethods
        # knows to advertise the sys methods
        return list_public_methods(self) + \
                ['sys.' + method for method in list_public_methods(self.sys)]
    def pow(self, x, y): return pow(x, y)
    def add(self, x, y) : return x + y

server = SimpleXMLRPCServer(("localhost", 8000))
server.register_introspection_functions()
server.register_instance(MyFuncs())
server.serve_forever()

3. Install an instance with custom dispatch method:

class Math:
    def _listMethods(self):
        # this method must be present for system.listMethods
        # to work
        return ['add', 'pow']
    def _methodHelp(self, method):
        # this method must be present for system.methodHelp
        # to work
        if method == 'add':
            return "add(2,3) => 5"
        elif method == 'pow':
            return "pow(x, y[, z]) => number"
        else:
            # By convention, return empty
            # string if no help is available
            return ""
    def _dispatch(self, method, params):
        if method == 'pow':
            return pow(*params)
        elif method == 'add':
            return params[0] + params[1]
        else:
            raise ValueError('bad method')

server = SimpleXMLRPCServer(("localhost", 8000))
server.register_introspection_functions()
server.register_instance(Math())
server.serve_forever()

4. Subclass SimpleXMLRPCServer:

class MathServer(SimpleXMLRPCServer):
    def _dispatch(self, method, params):
        try:
            # We are forcing the 'export_' prefix on methods that are
            # callable through XML-RPC to prevent potential security
            # problems
            func = getattr(self, 'export_' + method)
        except AttributeError:
            raise Exception('method "%s" is not supported' % method)
        else:
            return func(*params)

    def export_add(self, x, y):
        return x + y

server = MathServer(("localhost", 8000))
server.serve_forever()

5. CGI script:

server = CGIXMLRPCRequestHandler()
server.register_function(pow)
server.handle_request()
"""

from __future__ import absolute_import, division, print_function, unicode_literals
from future.builtins import int, str

# Written by Brian Quinlan (brian@sweetapp.com).
# Based on code written by Fredrik Lundh.

from future.backports.xmlrpc.client import Fault, dumps, loads, gzip_encode, gzip_decode
from future.backports.http.server import BaseHTTPRequestHandler
import future.backports.http.server as http_server
from future.backports import socketserver
import sys
import os
import re
import pydoc
import inspect
import traceback
try:
    import fcntl
except ImportError:
    fcntl = None

def resolve_dotted_attribute(obj, attr, allow_dotted_names=True):
    """resolve_dotted_attribute(a, 'b.c.d') => a.b.c.d

    Resolves a dotted attribute name to an object.  Raises
    an AttributeError if any attribute in the chain starts with a '_'.

    If the optional allow_dotted_names argument is false, dots are not
    supported and this function operates similarly to getattr(obj, attr).
    """

    if allow_dotted_names:
        attrs = attr.split('.')
    else:
        attrs = [attr]

    for i in attrs:
        if i.startswith('_'):
            raise AttributeError(
                'attempt to access private attribute "%s"' % i
                )
        else:
            obj = getattr(obj,i)
    return obj
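A standalone sketch of the resolution rule. The `Outer`/`Inner` classes are invented for illustration, and `resolve_dotted` restates the logic of the function above:

```python
class Inner(object):
    def add(self, x, y):
        return x + y
    def _secret(self):
        return "hidden"

class Outer(object):
    def __init__(self):
        self.inner = Inner()

def resolve_dotted(obj, attr, allow_dotted_names=True):
    # Same logic as resolve_dotted_attribute above: walk each dot-separated
    # segment, refusing any that starts with '_'.
    for i in (attr.split('.') if allow_dotted_names else [attr]):
        if i.startswith('_'):
            raise AttributeError('attempt to access private attribute "%s"' % i)
        obj = getattr(obj, i)
    return obj

o = Outer()
print(resolve_dotted(o, 'inner.add')(2, 3))        # 5
try:
    resolve_dotted(o, 'inner._secret')
except AttributeError as exc:
    print('blocked:', exc)
```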

def list_public_methods(obj):
    """Returns a list of attribute strings, found in the specified
    object, which represent callable attributes"""

    return [member for member in dir(obj)
                if not member.startswith('_') and
                    callable(getattr(obj, member))]
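A quick standalone check of the filtering rule: private names and non-callable attributes are excluded. `Calc` is a hypothetical class used only for illustration, and `list_public` repeats the filter above:

```python
def list_public(obj):
    # Same filter as list_public_methods above.
    return [member for member in dir(obj)
            if not member.startswith('_') and callable(getattr(obj, member))]

class Calc(object):
    version = '1.0'              # public but not callable: excluded
    def add(self, x, y):
        return x + y
    def _helper(self):           # private: excluded
        pass

print(list_public(Calc()))       # ['add']
```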

class SimpleXMLRPCDispatcher(object):
    """Mix-in class that dispatches XML-RPC requests.

    This class is used to register XML-RPC method handlers
    and then to dispatch them. This class does not need to be
    instantiated directly when used by SimpleXMLRPCServer, but it
    can be instantiated directly for use with MultiPathXMLRPCServer.
    """

    def __init__(self, allow_none=False, encoding=None,
                 use_builtin_types=False):
        self.funcs = {}
        self.instance = None
        self.allow_none = allow_none
        self.encoding = encoding or 'utf-8'
        self.use_builtin_types = use_builtin_types

    def register_instance(self, instance, allow_dotted_names=False):
        """Registers an instance to respond to XML-RPC requests.

        Only one instance can be installed at a time.

        If the registered instance has a _dispatch method then that
        method will be called with the name of the XML-RPC method and
        its parameters as a tuple
        e.g. instance._dispatch('add',(2,3))

        If the registered instance does not have a _dispatch method
        then the instance will be searched to find a matching method
        and, if found, will be called. Methods beginning with an '_'
        are considered private and will not be called by
        SimpleXMLRPCServer.

        If a registered function matches a XML-RPC request, then it
        will be called instead of the registered instance.

        If the optional allow_dotted_names argument is true and the
        instance does not have a _dispatch method, method names
        containing dots are supported and resolved, as long as none of
        the name segments start with an '_'.

            *** SECURITY WARNING: ***

            Enabling the allow_dotted_names options allows intruders
            to access your module's global variables and may allow
            intruders to execute arbitrary code on your machine.  Only
            use this option on a secure, closed network.

        """

        self.instance = instance
        self.allow_dotted_names = allow_dotted_names

    def register_function(self, function, name=None):
        """Registers a function to respond to XML-RPC requests.

        The optional name argument can be used to set a Unicode name
        for the function.
        """

        if name is None:
            name = function.__name__
        self.funcs[name] = function

    def register_introspection_functions(self):
        """Registers the XML-RPC introspection methods in the system
        namespace.

        see http://xmlrpc.usefulinc.com/doc/reserved.html
        """

        self.funcs.update({'system.listMethods' : self.system_listMethods,
                      'system.methodSignature' : self.system_methodSignature,
                      'system.methodHelp' : self.system_methodHelp})

    def register_multicall_functions(self):
        """Registers the XML-RPC multicall method in the system
        namespace.

        see http://www.xmlrpc.com/discuss/msgReader$1208"""

        self.funcs.update({'system.multicall' : self.system_multicall})

    def _marshaled_dispatch(self, data, dispatch_method = None, path = None):
        """Dispatches an XML-RPC method from marshalled (XML) data.

        XML-RPC methods are dispatched from the marshalled (XML) data
        using the _dispatch method and the result is returned as
        marshalled data. For backwards compatibility, a dispatch
        function can be provided as an argument (see comment in
        SimpleXMLRPCRequestHandler.do_POST) but overriding the
        existing method through subclassing is the preferred means
        of changing method dispatch behavior.
        """

        try:
            params, method = loads(data, use_builtin_types=self.use_builtin_types)

            # generate response
            if dispatch_method is not None:
                response = dispatch_method(method, params)
            else:
                response = self._dispatch(method, params)
            # wrap response in a singleton tuple
            response = (response,)
            response = dumps(response, methodresponse=1,
                             allow_none=self.allow_none, encoding=self.encoding)
        except Fault as fault:
            response = dumps(fault, allow_none=self.allow_none,
                             encoding=self.encoding)
        except:
            # report exception back to server
            exc_type, exc_value, exc_tb = sys.exc_info()
            response = dumps(
                Fault(1, "%s:%s" % (exc_type, exc_value)),
                encoding=self.encoding, allow_none=self.allow_none,
                )

        return response.encode(self.encoding)
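The marshalling round trip performed here can be sketched with the standard-library `xmlrpc.client` `dumps`/`loads` helpers (the backport's client module mirrors them):

```python
from xmlrpc.client import dumps, loads

# Client side: marshal a call to 'add' with params (2, 3).
request = dumps((2, 3), methodname='add')

# Server side: unmarshal, dispatch, marshal the single-result tuple back.
params, method = loads(request)
result = sum(params)                       # stand-in for self._dispatch(...)
response = dumps((result,), methodresponse=1)

# Client side again: the response unmarshals to a one-tuple of results.
print(loads(response)[0])                  # (5,)
```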

    def system_listMethods(self):
        """system.listMethods() => ['add', 'subtract', 'multiple']

        Returns a list of the methods supported by the server."""

        methods = set(self.funcs.keys())
        if self.instance is not None:
            # Instance can implement _listMethods to return a list of
            # methods
            if hasattr(self.instance, '_listMethods'):
                methods |= set(self.instance._listMethods())
            # if the instance has a _dispatch method then we
            # don't have enough information to provide a list
            # of methods
            elif not hasattr(self.instance, '_dispatch'):
                methods |= set(list_public_methods(self.instance))
        return sorted(methods)
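A minimal sketch of the resulting method list, using the standard-library `SimpleXMLRPCDispatcher` (same class name and behavior as the one defined here):

```python
from xmlrpc.server import SimpleXMLRPCDispatcher

d = SimpleXMLRPCDispatcher()
d.register_introspection_functions()
d.register_function(pow)
print(d.system_listMethods())
# ['pow', 'system.listMethods', 'system.methodHelp', 'system.methodSignature']
```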

    def system_methodSignature(self, method_name):
        """system.methodSignature('add') => [double, int, int]

        Returns a list describing the signature of the method. In the
        above example, the add method takes two integers as arguments
        and returns a double result.

        This server does NOT support system.methodSignature."""

        # See http://xmlrpc.usefulinc.com/doc/sysmethodsig.html

        return 'signatures not supported'

    def system_methodHelp(self, method_name):
        """system.methodHelp('add') => "Adds two integers together"

        Returns a string containing documentation for the specified method."""

        method = None
        if method_name in self.funcs:
            method = self.funcs[method_name]
        elif self.instance is not None:
            # Instance can implement _methodHelp to return help for a method
            if hasattr(self.instance, '_methodHelp'):
                return self.instance._methodHelp(method_name)
            # if the instance has a _dispatch method then we
            # don't have enough information to provide help
            elif not hasattr(self.instance, '_dispatch'):
                try:
                    method = resolve_dotted_attribute(
                                self.instance,
                                method_name,
                                self.allow_dotted_names
                                )
                except AttributeError:
                    pass

        # Note that we aren't checking that the method actually
        # be a callable object of some kind
        if method is None:
            return ""
        else:
            return pydoc.getdoc(method)

    def system_multicall(self, call_list):
        """system.multicall([{'methodName': 'add', 'params': [2, 2]}, ...]) => \
[[4], ...]

        Allows the caller to package multiple XML-RPC calls into a single
        request.

        See http://www.xmlrpc.com/discuss/msgReader$1208
        """

        results = []
        for call in call_list:
            method_name = call['methodName']
            params = call['params']

            try:
                # XXX A marshalling error in any response will fail the entire
                # multicall. If someone cares they should fix this.
                results.append([self._dispatch(method_name, params)])
            except Fault as fault:
                results.append(
                    {'faultCode' : fault.faultCode,
                     'faultString' : fault.faultString}
                    )
            except:
                exc_type, exc_value, exc_tb = sys.exc_info()
                results.append(
                    {'faultCode' : 1,
                     'faultString' : "%s:%s" % (exc_type, exc_value)}
                    )
        return results
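The multicall contract can be checked standalone. The `multicall` function below restates the loop above against a hypothetical dispatch table: successes come back as one-element lists, failures as fault dicts:

```python
def multicall(call_list, dispatch):
    # Same shape as system_multicall above: each call dict carries
    # 'methodName' and 'params'; one-element list on success, fault dict
    # on error.
    results = []
    for call in call_list:
        try:
            results.append([dispatch(call['methodName'], call['params'])])
        except Exception as exc:
            results.append({'faultCode': 1, 'faultString': str(exc)})
    return results

funcs = {'add': lambda x, y: x + y}
out = multicall([{'methodName': 'add', 'params': [2, 2]},
                 {'methodName': 'missing', 'params': []}],
                lambda name, params: funcs[name](*params))
print(out[0])                 # [4]
print(out[1]['faultCode'])    # 1
```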

    def _dispatch(self, method, params):
        """Dispatches the XML-RPC method.

        XML-RPC calls are forwarded to a registered function that
        matches the called XML-RPC method name. If no such function
        exists then the call is forwarded to the registered instance,
        if available.

        If the registered instance has a _dispatch method then that
        method will be called with the name of the XML-RPC method and
        its parameters as a tuple
        e.g. instance._dispatch('add',(2,3))

        If the registered instance does not have a _dispatch method
        then the instance will be searched to find a matching method
        and, if found, will be called.

        Methods beginning with an '_' are considered private and will
        not be called.
        """

        func = None
        try:
            # check to see if a matching function has been registered
            func = self.funcs[method]
        except KeyError:
            if self.instance is not None:
                # check for a _dispatch method
                if hasattr(self.instance, '_dispatch'):
                    return self.instance._dispatch(method, params)
                else:
                    # call instance method directly
                    try:
                        func = resolve_dotted_attribute(
                            self.instance,
                            method,
                            self.allow_dotted_names
                            )
                    except AttributeError:
                        pass

        if func is not None:
            return func(*params)
        else:
            raise Exception('method "%s" is not supported' % method)
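The dispatch precedence described in the docstring (a registered function wins over an instance method of the same name) can be demonstrated with the standard-library dispatcher; `Service` is a hypothetical class for illustration:

```python
from xmlrpc.server import SimpleXMLRPCDispatcher

class Service(object):
    def greet(self):
        return 'instance method'

d = SimpleXMLRPCDispatcher()
d.register_instance(Service())
d.register_function(lambda: 'registered function', 'greet')
# funcs is consulted before the instance, so the lambda wins.
print(d._dispatch('greet', ()))   # 'registered function'
```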

class SimpleXMLRPCRequestHandler(BaseHTTPRequestHandler):
    """Simple XML-RPC request handler class.

    Handles all HTTP POST requests and attempts to decode them as
    XML-RPC requests.
    """

    # Class attribute listing the accessible path components;
    # paths not on this list will result in a 404 error.
    rpc_paths = ('/', '/RPC2')

    # If not None, encode responses larger than this, if possible
    encode_threshold = 1400  # a common MTU

    # Override from StreamRequestHandler: full buffering of output
    # and no Nagle.
    wbufsize = -1
    disable_nagle_algorithm = True

    # a re to match a gzip Accept-Encoding
    aepattern = re.compile(r"""
                            \s* ([^\s;]+) \s*            #content-coding
                            (;\s* q \s*=\s* ([0-9\.]+))? #q
                            """, re.VERBOSE | re.IGNORECASE)

    def accept_encodings(self):
        r = {}
        ae = self.headers.get("Accept-Encoding", "")
        for e in ae.split(","):
            match = self.aepattern.match(e)
            if match:
                v = match.group(3)
                v = float(v) if v else 1.0
                r[match.group(1)] = v
        return r
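A standalone check of the header parsing above, reusing the same regex and loop (`accept_encodings` here takes the header string directly instead of reading `self.headers`):

```python
import re

aepattern = re.compile(r"""
                        \s* ([^\s;]+) \s*            #content-coding
                        (;\s* q \s*=\s* ([0-9\.]+))? #q
                        """, re.VERBOSE | re.IGNORECASE)

def accept_encodings(header):
    # Same loop as above: q defaults to 1.0 when absent.
    r = {}
    for e in header.split(","):
        match = aepattern.match(e)
        if match:
            v = match.group(3)
            r[match.group(1)] = float(v) if v else 1.0
    return r

print(accept_encodings("gzip;q=0.8, identity"))   # {'gzip': 0.8, 'identity': 1.0}
```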

    def is_rpc_path_valid(self):
        if self.rpc_paths:
            return self.path in self.rpc_paths
        else:
            # If .rpc_paths is empty, just assume all paths are legal
            return True

    def do_POST(self):
        """Handles the HTTP POST request.

        Attempts to interpret all HTTP POST requests as XML-RPC calls,
        which are forwarded to the server's _dispatch method for handling.
        """

        # Check that the path is legal
        if not self.is_rpc_path_valid():
            self.report_404()
            return

        try:
            # Get arguments by reading body of request.
            # We read this in chunks to avoid straining
            # socket.read(); around the 10 or 15 MB mark, some platforms
            # begin to have problems (bug #792570).
            max_chunk_size = 10*1024*1024
            size_remaining = int(self.headers["content-length"])
            L = []
            while size_remaining:
                chunk_size = min(size_remaining, max_chunk_size)
                chunk = self.rfile.read(chunk_size)
                if not chunk:
                    break
                L.append(chunk)
                size_remaining -= len(L[-1])
            data = b''.join(L)

            data = self.decode_request_content(data)
            if data is None:
                return #response has been sent

            # In previous versions of SimpleXMLRPCServer, _dispatch
            # could be overridden in this class, instead of in
            # SimpleXMLRPCDispatcher. To maintain backwards compatibility,
            # check to see if a subclass implements _dispatch and dispatch
            # using that method if present.
            response = self.server._marshaled_dispatch(
                    data, getattr(self, '_dispatch', None), self.path
                )
        except Exception as e: # This should only happen if the module is buggy
            # internal error, report as HTTP server error
            self.send_response(500)

            # Send information about the exception if requested
            if hasattr(self.server, '_send_traceback_header') and \
                    self.server._send_traceback_header:
                self.send_header("X-exception", str(e))
                trace = traceback.format_exc()
                trace = str(trace.encode('ASCII', 'backslashreplace'), 'ASCII')
                self.send_header("X-traceback", trace)

            self.send_header("Content-length", "0")
            self.end_headers()
        else:
            self.send_response(200)
            self.send_header("Content-type", "text/xml")
            if self.encode_threshold is not None:
                if len(response) > self.encode_threshold:
                    q = self.accept_encodings().get("gzip", 0)
                    if q:
                        try:
                            response = gzip_encode(response)
                            self.send_header("Content-Encoding", "gzip")
                        except NotImplementedError:
                            pass
            self.send_header("Content-length", str(len(response)))
            self.end_headers()
            self.wfile.write(response)

    def decode_request_content(self, data):
        #support gzip encoding of request
        encoding = self.headers.get("content-encoding", "identity").lower()
        if encoding == "identity":
            return data
        if encoding == "gzip":
            try:
                return gzip_decode(data)
            except NotImplementedError:
                self.send_response(501, "encoding %r not supported" % encoding)
            except ValueError:
                self.send_response(400, "error decoding gzip content")
        else:
            self.send_response(501, "encoding %r not supported" % encoding)
        self.send_header("Content-length", "0")
        self.end_headers()
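The gzip path above relies on the `gzip_encode`/`gzip_decode` helpers imported at the top of this module; the standard-library `xmlrpc.client` exposes helpers of the same names, which this round-trip sketch uses:

```python
from xmlrpc.client import gzip_encode, gzip_decode

# Compress a request body and recover it, as decode_request_content would.
payload = b'<?xml version="1.0"?><methodCall>...</methodCall>'
compressed = gzip_encode(payload)
print(gzip_decode(compressed) == payload)   # True
```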

    def report_404(self):
        # Report a 404 error
        self.send_response(404)
        response = b'No such page'
        self.send_header("Content-type", "text/plain")
        self.send_header("Content-length", str(len(response)))
        self.end_headers()
        self.wfile.write(response)

    def log_request(self, code='-', size='-'):
        """Selectively log an accepted request."""

        if self.server.logRequests:
            BaseHTTPRequestHandler.log_request(self, code, size)

class SimpleXMLRPCServer(socketserver.TCPServer,
                         SimpleXMLRPCDispatcher):
    """Simple XML-RPC server.

    Simple XML-RPC server that allows functions and a single instance
    to be installed to handle requests. The default implementation
    attempts to dispatch XML-RPC calls to the functions or instance
    installed in the server. Override the _dispatch method inherited
    from SimpleXMLRPCDispatcher to change this behavior.
    """

    allow_reuse_address = True

    # Warning: this is for debugging purposes only! Never set this to True in
    # production code, as it will send out sensitive information (exception
    # and stack trace details) when exceptions are raised inside
    # SimpleXMLRPCRequestHandler.do_POST
    _send_traceback_header = False

    def __init__(self, addr, requestHandler=SimpleXMLRPCRequestHandler,
                 logRequests=True, allow_none=False, encoding=None,
                 bind_and_activate=True, use_builtin_types=False):
        self.logRequests = logRequests

        SimpleXMLRPCDispatcher.__init__(self, allow_none, encoding, use_builtin_types)
        socketserver.TCPServer.__init__(self, addr, requestHandler, bind_and_activate)

        # [Bug #1222790] If possible, set close-on-exec flag; if a
        # method spawns a subprocess, the subprocess shouldn't have
        # the listening socket open.
        if fcntl is not None and hasattr(fcntl, 'FD_CLOEXEC'):
            flags = fcntl.fcntl(self.fileno(), fcntl.F_GETFD)
            flags |= fcntl.FD_CLOEXEC
            fcntl.fcntl(self.fileno(), fcntl.F_SETFD, flags)

class MultiPathXMLRPCServer(SimpleXMLRPCServer):
    """Multipath XML-RPC Server
    This specialization of SimpleXMLRPCServer allows the user to create
    multiple Dispatcher instances and assign them to different
    HTTP request paths.  This makes it possible to run two or more
    'virtual XML-RPC servers' at the same port.
    Make sure that the requestHandler accepts the paths in question.
    """
    def __init__(self, addr, requestHandler=SimpleXMLRPCRequestHandler,
                 logRequests=True, allow_none=False, encoding=None,
                 bind_and_activate=True, use_builtin_types=False):

        SimpleXMLRPCServer.__init__(self, addr, requestHandler, logRequests, allow_none,
                                    encoding, bind_and_activate, use_builtin_types)
        self.dispatchers = {}
        self.allow_none = allow_none
        self.encoding = encoding or 'utf-8'

    def add_dispatcher(self, path, dispatcher):
        self.dispatchers[path] = dispatcher
        return dispatcher

    def get_dispatcher(self, path):
        return self.dispatchers[path]

    def _marshaled_dispatch(self, data, dispatch_method = None, path = None):
        try:
            response = self.dispatchers[path]._marshaled_dispatch(
               data, dispatch_method, path)
        except:
            # report low level exception back to server
            # (each dispatcher should have handled their own
            # exceptions)
            exc_type, exc_value = sys.exc_info()[:2]
            response = dumps(
                Fault(1, "%s:%s" % (exc_type, exc_value)),
                encoding=self.encoding, allow_none=self.allow_none)
            response = response.encode(self.encoding)
        return response
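A sketch of the "virtual servers on one port" idea, using the standard-library `MultiPathXMLRPCServer` (same class name and API as the one defined here). The paths, method names, and port-0 binding are choices made for this demo:

```python
import threading
import xmlrpc.client
from xmlrpc.server import (MultiPathXMLRPCServer, SimpleXMLRPCDispatcher,
                           SimpleXMLRPCRequestHandler)

class Handler(SimpleXMLRPCRequestHandler):
    rpc_paths = ('/math', '/text')   # the handler must accept both paths

server = MultiPathXMLRPCServer(("localhost", 0), requestHandler=Handler,
                               logRequests=False)
port = server.server_address[1]

math_d = SimpleXMLRPCDispatcher()
math_d.register_function(lambda x, y: x + y, 'add')
text_d = SimpleXMLRPCDispatcher()
text_d.register_function(lambda s: s.upper(), 'shout')
server.add_dispatcher('/math', math_d)
server.add_dispatcher('/text', text_d)

threading.Thread(target=server.serve_forever, daemon=True).start()
r1 = xmlrpc.client.ServerProxy("http://localhost:%d/math" % port).add(2, 3)
r2 = xmlrpc.client.ServerProxy("http://localhost:%d/text" % port).shout("hi")
print(r1, r2)   # 5 HI
server.shutdown()
```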

class CGIXMLRPCRequestHandler(SimpleXMLRPCDispatcher):
    """Simple handler for XML-RPC data passed through CGI."""

    def __init__(self, allow_none=False, encoding=None, use_builtin_types=False):
        SimpleXMLRPCDispatcher.__init__(self, allow_none, encoding, use_builtin_types)

    def handle_xmlrpc(self, request_text):
        """Handle a single XML-RPC request"""

        response = self._marshaled_dispatch(request_text)

        print('Content-Type: text/xml')
        print('Content-Length: %d' % len(response))
        print()
        sys.stdout.flush()
        sys.stdout.buffer.write(response)
        sys.stdout.buffer.flush()

    def handle_get(self):
        """Handle a single HTTP GET request.

        Default implementation indicates an error because
        XML-RPC uses the POST method.
        """

        code = 400
        message, explain = BaseHTTPRequestHandler.responses[code]

        response = http_server.DEFAULT_ERROR_MESSAGE % \
            {
             'code' : code,
             'message' : message,
             'explain' : explain
            }
        response = response.encode('utf-8')
        print('Status: %d %s' % (code, message))
        print('Content-Type: %s' % http_server.DEFAULT_ERROR_CONTENT_TYPE)
        print('Content-Length: %d' % len(response))
        print()
        sys.stdout.flush()
        sys.stdout.buffer.write(response)
        sys.stdout.buffer.flush()

    def handle_request(self, request_text=None):
        """Handle a single XML-RPC request passed through a CGI post method.

        If no XML data is given then it is read from stdin. The resulting
        XML-RPC response is printed to stdout along with the correct HTTP
        headers.
        """

        if request_text is None and \
            os.environ.get('REQUEST_METHOD', None) == 'GET':
            self.handle_get()
        else:
            # POST data is normally available through stdin
            try:
                length = int(os.environ.get('CONTENT_LENGTH', None))
            except (ValueError, TypeError):
                length = -1
            if request_text is None:
                request_text = sys.stdin.read(length)

            self.handle_xmlrpc(request_text)
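The CGI handler can be driven without a web server by feeding it a pre-marshalled request; the sketch below uses the standard-library `CGIXMLRPCRequestHandler` and inspects `_marshaled_dispatch` directly rather than capturing the printed CGI output:

```python
from xmlrpc.server import CGIXMLRPCRequestHandler
from xmlrpc.client import dumps, loads

handler = CGIXMLRPCRequestHandler()
handler.register_function(pow)

request = dumps((2, 8), methodname='pow')
response = handler._marshaled_dispatch(request)   # bytes of the XML reply
print(loads(response)[0][0])                      # 256
```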


# -----------------------------------------------------------------------------
# Self documenting XML-RPC Server.

class ServerHTMLDoc(pydoc.HTMLDoc):
    """Class used to generate pydoc HTML document for a server"""

    def markup(self, text, escape=None, funcs={}, classes={}, methods={}):
        """Mark up some plain text, given a context of symbols to look for.
        Each context dictionary maps object names to anchor names."""
        escape = escape or self.escape
        results = []
        here = 0

        # XXX Note that this regular expression does not allow for the
        # hyperlinking of arbitrary strings being used as method
        # names. Only methods with names consisting of word characters
        # and '.'s are hyperlinked.
        pattern = re.compile(r'\b((http|ftp)://\S+[\w/]|'
                                r'RFC[- ]?(\d+)|'
                                r'PEP[- ]?(\d+)|'
                                r'(self\.)?((?:\w|\.)+))\b')
        while 1:
            match = pattern.search(text, here)
            if not match: break
            start, end = match.span()
            results.append(escape(text[here:start]))

            all, scheme, rfc, pep, selfdot, name = match.groups()
            if scheme:
                url = escape(all).replace('"', '&quot;')
                results.append('<a href="%s">%s</a>' % (url, url))
            elif rfc:
                url = 'http://www.rfc-editor.org/rfc/rfc%d.txt' % int(rfc)
                results.append('<a href="%s">%s</a>' % (url, escape(all)))
            elif pep:
                url = 'http://www.python.org/dev/peps/pep-%04d/' % int(pep)
                results.append('<a href="%s">%s</a>' % (url, escape(all)))
            elif text[end:end+1] == '(':
                results.append(self.namelink(name, methods, funcs, classes))
            elif selfdot:
                results.append('self.<strong>%s</strong>' % name)
            else:
                results.append(self.namelink(name, classes))
            here = end
        results.append(escape(text[here:]))
        return ''.join(results)
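The hyperlinking regex above can be probed standalone; for an `RFC` reference only the `rfc` group is populated:

```python
import re

# Same pattern as in markup() above.
pattern = re.compile(r'\b((http|ftp)://\S+[\w/]|'
                     r'RFC[- ]?(\d+)|'
                     r'PEP[- ]?(\d+)|'
                     r'(self\.)?((?:\w|\.)+))\b')

m = pattern.search('RFC 2616 defines HTTP/1.1.')
whole, scheme, rfc, pep, selfdot, name = m.groups()
print(whole, rfc)    # RFC 2616 2616
```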

    def docroutine(self, object, name, mod=None,
                   funcs={}, classes={}, methods={}, cl=None):
        """Produce HTML documentation for a function or method object."""

        anchor = (cl and cl.__name__ or '') + '-' + name
        note = ''

        title = '<a name="%s"><strong>%s</strong></a>' % (
            self.escape(anchor), self.escape(name))

        if inspect.ismethod(object):
            args = inspect.getfullargspec(object)
            # exclude the argument bound to the instance, it will be
            # confusing to the non-Python user
            argspec = inspect.formatargspec(
                    args.args[1:],
                    args.varargs,
                    args.varkw,
                    args.defaults,
                    annotations=args.annotations,
                    formatvalue=self.formatvalue
                )
        elif inspect.isfunction(object):
            args = inspect.getfullargspec(object)
            argspec = inspect.formatargspec(
                args.args, args.varargs, args.varkw, args.defaults,
                annotations=args.annotations,
                formatvalue=self.formatvalue)
        else:
            argspec = '(...)'

        if isinstance(object, tuple):
            argspec = object[0] or argspec
            docstring = object[1] or ""
        else:
            docstring = pydoc.getdoc(object)

        decl = title + argspec + (note and self.grey(
               '<font face="helvetica, arial">%s</font>' % note))

        doc = self.markup(
            docstring, self.preformat, funcs, classes, methods)
        doc = doc and '<dd><tt>%s</tt></dd>' % doc
        return '<dl><dt>%s</dt>%s</dl>\n' % (decl, doc)

    def docserver(self, server_name, package_documentation, methods):
        """Produce HTML documentation for an XML-RPC server."""

        fdict = {}
        for key, value in methods.items():
            fdict[key] = '#-' + key
            fdict[value] = fdict[key]

        server_name = self.escape(server_name)
        head = '<big><big><strong>%s</strong></big></big>' % server_name
        result = self.heading(head, '#ffffff', '#7799ee')

        doc = self.markup(package_documentation, self.preformat, fdict)
        doc = doc and '<tt>%s</tt>' % doc
        result = result + '<p>%s</p>\n' % doc

        contents = []
        method_items = sorted(methods.items())
        for key, value in method_items:
            contents.append(self.docroutine(value, key, funcs=fdict))
        result = result + self.bigsection(
            'Methods', '#ffffff', '#eeaa77', ''.join(contents))

        return result

class XMLRPCDocGenerator(object):
    """Generates documentation for an XML-RPC server.

    This class is designed as a mix-in and should not
    be constructed directly.
    """

    def __init__(self):
        # setup variables used for HTML documentation
        self.server_name = 'XML-RPC Server Documentation'
        self.server_documentation = \
            "This server exports the following methods through the XML-RPC "\
            "protocol."
        self.server_title = 'XML-RPC Server Documentation'

    def set_server_title(self, server_title):
        """Set the HTML title of the generated server documentation"""

        self.server_title = server_title

    def set_server_name(self, server_name):
        """Set the name of the generated HTML server documentation"""

        self.server_name = server_name

    def set_server_documentation(self, server_documentation):
        """Set the documentation string for the entire server."""

        self.server_documentation = server_documentation

    def generate_html_documentation(self):
        """generate_html_documentation() => html documentation for the server

        Generates HTML documentation for the server using introspection for
        installed functions and instances that do not implement the
        _dispatch method. Alternatively, instances can choose to implement
        the _get_method_argstring(method_name) method to provide the
        argument string used in the documentation and the
        _methodHelp(method_name) method to provide the help text used
        in the documentation."""

        methods = {}

        for method_name in self.system_listMethods():
            if method_name in self.funcs:
                method = self.funcs[method_name]
            elif self.instance is not None:
                method_info = [None, None] # argspec, documentation
                if hasattr(self.instance, '_get_method_argstring'):
                    method_info[0] = self.instance._get_method_argstring(method_name)
                if hasattr(self.instance, '_methodHelp'):
                    method_info[1] = self.instance._methodHelp(method_name)

                method_info = tuple(method_info)
                if method_info != (None, None):
                    method = method_info
                elif not hasattr(self.instance, '_dispatch'):
                    try:
                        method = resolve_dotted_attribute(
                                    self.instance,
                                    method_name
                                    )
                    except AttributeError:
                        method = method_info
                else:
                    method = method_info
            else:
                assert 0, "Could not find method in self.funcs and no "\
                          "instance installed"

            methods[method_name] = method

        documenter = ServerHTMLDoc()
        documentation = documenter.docserver(
                                self.server_name,
                                self.server_documentation,
                                methods
                            )

        return documenter.page(self.server_title, documentation)

class DocXMLRPCRequestHandler(SimpleXMLRPCRequestHandler):
    """XML-RPC and documentation request handler class.

    Handles all HTTP POST requests and attempts to decode them as
    XML-RPC requests.

    Handles all HTTP GET requests and interprets them as requests
    for documentation.
    """

    def do_GET(self):
        """Handles the HTTP GET request.

        Interpret all HTTP GET requests as requests for server
        documentation.
        """
        # Check that the path is legal
        if not self.is_rpc_path_valid():
            self.report_404()
            return

        response = self.server.generate_html_documentation().encode('utf-8')
        self.send_response(200)
        self.send_header("Content-type", "text/html")
        self.send_header("Content-length", str(len(response)))
        self.end_headers()
        self.wfile.write(response)

class DocXMLRPCServer(SimpleXMLRPCServer,
                      XMLRPCDocGenerator):
    """XML-RPC and HTML documentation server.

    Adds the ability to serve server documentation to the capabilities
    of SimpleXMLRPCServer.
    """

    def __init__(self, addr, requestHandler=DocXMLRPCRequestHandler,
                 logRequests=True, allow_none=False, encoding=None,
                 bind_and_activate=True, use_builtin_types=False):
        SimpleXMLRPCServer.__init__(self, addr, requestHandler, logRequests,
                                    allow_none, encoding, bind_and_activate,
                                    use_builtin_types)
        XMLRPCDocGenerator.__init__(self)

class DocCGIXMLRPCRequestHandler(CGIXMLRPCRequestHandler,
                                 XMLRPCDocGenerator):
    """Handler for XML-RPC data and documentation requests passed through
    CGI"""

    def handle_get(self):
        """Handles the HTTP GET request.

        Interpret all HTTP GET requests as requests for server
        documentation.
        """

        response = self.generate_html_documentation().encode('utf-8')

        print('Content-Type: text/html')
        print('Content-Length: %d' % len(response))
        print()
        sys.stdout.flush()
        sys.stdout.buffer.write(response)
        sys.stdout.buffer.flush()

    def __init__(self):
        CGIXMLRPCRequestHandler.__init__(self)
        XMLRPCDocGenerator.__init__(self)


if __name__ == '__main__':
    import datetime

    class ExampleService:
        def getData(self):
            return '42'

        class currentTime:
            @staticmethod
            def getCurrentTime():
                return datetime.datetime.now()

    server = SimpleXMLRPCServer(("localhost", 8000))
    server.register_function(pow)
    server.register_function(lambda x, y: x + y, 'add')
    server.register_instance(ExampleService(), allow_dotted_names=True)
    server.register_multicall_functions()
    print('Serving XML-RPC on localhost port 8000')
    print('It is advisable to run this example server within a secure, closed network.')
    try:
        server.serve_forever()
    except KeyboardInterrupt:
        print("\nKeyboard interrupt received, exiting.")
        server.server_close()
        sys.exit(0)
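The `do_GET` path above can be checked end to end by serving a single request in a background thread and fetching the documentation page over HTTP. This is a minimal sketch; the titles, the `add` function, and the single-request pattern are illustrative choices, not part of the module:

```python
import threading
import urllib.request
from xmlrpc.server import DocXMLRPCServer

def add(x, y):
    """Return the sum of x and y."""
    return x + y

server = DocXMLRPCServer(("localhost", 0), logRequests=False)
server.set_server_title("Demo docs")
server.set_server_name("Demo API")
server.register_function(add)

host, port = server.socket.getsockname()
# Serve exactly one request (the GET below), then let the thread exit.
thread = threading.Thread(target=server.handle_request)
thread.start()
page = urllib.request.urlopen("http://%s:%d/" % (host, port)).read().decode("utf-8")
thread.join()
server.server_close()
```

The fetched page is the same HTML that `generate_html_documentation()` produces, wrapped in the configured server title.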
# future/backports/urllib/response.py
"""Response classes used by urllib.

The base class, addbase, defines a minimal file-like interface,
including read() and readline().  The typical response object is an
addinfourl instance, which defines an info() method that returns
headers and a geturl() method that returns the url.
"""
from __future__ import absolute_import, division, unicode_literals
from future.builtins import object

class addbase(object):
    """Base class for addinfo and addclosehook."""

    # XXX Add a method to expose the timeout on the underlying socket?

    def __init__(self, fp):
        # TODO(jhylton): Is there a better way to delegate using io?
        self.fp = fp
        self.read = self.fp.read
        self.readline = self.fp.readline
        # TODO(jhylton): Make sure an object with readlines() is also iterable
        if hasattr(self.fp, "readlines"):
            self.readlines = self.fp.readlines
        if hasattr(self.fp, "fileno"):
            self.fileno = self.fp.fileno
        else:
            self.fileno = lambda: None

    def __iter__(self):
        # Assigning `__iter__` to the instance doesn't work as intended
        # because the iter builtin does something like `cls.__iter__(obj)`
        # and thus fails to find the _bound_ method `obj.__iter__`.
        # Returning just `self.fp` works for built-in file objects but
        # might not work for general file-like objects.
        return iter(self.fp)

    def __repr__(self):
        return '<%s at %r whose fp = %r>' % (self.__class__.__name__,
                                             id(self), self.fp)

    def close(self):
        if self.fp:
            self.fp.close()
        self.fp = None
        self.read = None
        self.readline = None
        self.readlines = None
        self.fileno = None
        self.__iter__ = None
        self.__next__ = None

    def __enter__(self):
        if self.fp is None:
            raise ValueError("I/O operation on closed file")
        return self

    def __exit__(self, type, value, traceback):
        self.close()

class addclosehook(addbase):
    """Class to add a close hook to an open file."""

    def __init__(self, fp, closehook, *hookargs):
        addbase.__init__(self, fp)
        self.closehook = closehook
        self.hookargs = hookargs

    def close(self):
        if self.closehook:
            self.closehook(*self.hookargs)
            self.closehook = None
            self.hookargs = None
        addbase.close(self)

class addinfo(addbase):
    """class to add an info() method to an open file."""

    def __init__(self, fp, headers):
        addbase.__init__(self, fp)
        self.headers = headers

    def info(self):
        return self.headers

class addinfourl(addbase):
    """class to add info() and geturl() methods to an open file."""

    def __init__(self, fp, headers, url, code=None):
        addbase.__init__(self, fp)
        self.headers = headers
        self.url = url
        self.code = code

    def info(self):
        return self.headers

    def getcode(self):
        return self.code

    def geturl(self):
        return self.url

del absolute_import, division, unicode_literals, object
# future/backports/urllib/__init__.py (empty file)
# future/backports/urllib/__pycache__/request.cpython-39.pyc (compiled bytecode omitted)
|��|j|j
��Wn6tjy�}z|��t|��WYd
}~n*d
}~00|��}	|j�r|j��d
|_|��|	_|	j|	_|	S)z�Return an HTTPResponse object for the request, using http_class.

        http_class must implement the HTTPConnection API from http.client.
        r�r`c3s"|]\}}|�vr||fVqdSrercr*�r�rcrdr0�s
�z.AbstractHTTPHandler.do_open.<locals>.<genexpr>r��
Connectioncss|]\}}|��|fVqdSre)�title)r+rvr�rcrcrdr0�rkr�r�N)r�rr`rr�r�r�r�r��
set_tunnelr�r�r�r_rr�r��getresponse�sockr�r^�reasonr�)
r��
http_classr��http_conn_argsr�r�tunnel_headers�proxy_auth_hdr�errrTrcr�rdr��s6
"

zAbstractHTTPHandler.do_openN)r)r�r�r�r�r�r�r�rcrcrcrdr��s
3r�c@seZdZdd�ZejZdS)r;cCs|�tj|�Sre)r�r�HTTPConnection�r�r�rcrcrd�	http_open$szHTTPHandler.http_openN)r�r�r�r�r�r��http_requestrcrcrcrdr;"sr;r
c@s$eZdZddd�Zdd�ZejZdS)r\rNcCst�||�||_||_dSre)r�r��_context�_check_hostname)r�r�rPrQrcrcrdr�-szHTTPSHandler.__init__cCs|jtj||j|jd�S)NrO)r�rr
r�r�r�rcrcrd�
https_open2s
�zHTTPSHandler.https_open)rNN)r�r�r�r�r�r�r��
https_requestrcrcrcrdr\+s
r\c@s.eZdZddd�Zdd�Zdd�ZeZeZdS)	r1NcCs2ddlmmm}|dur(|��}||_dSr�)Zfuture.backports.http.cookiejar�	backportsr�	cookiejar�	CookieJar)r�r��http_cookiejarrcrcrdr�;szHTTPCookieProcessor.__init__cCs|j�|�|Sre)r��add_cookie_header)r�r�rcrcrdr�Asz HTTPCookieProcessor.http_requestcCs|j�||�|Sre)r��extract_cookies)r�r�r�rcrcrdr Esz!HTTPCookieProcessor.http_response)N)r�r�r�r�r�r r�r"rcrcrcrdr1:s

r1c@seZdZdd�ZdS)r?cCs|j}td|��dS)Nzunknown url type: %s)r�r)r�r�r�rcrcrdr�MszUnknownHandler.unknown_openN)r�r�r�r�rcrcrcrdr?Lsr?cCsNi}|D]@}|�dd�\}}|ddkr@|ddkr@|dd�}|||<q|S)z>Parse list of key=value strings where keys are not duplicated.�=rrr�rj)r�)�l�parsed�eltr,r-rcrcrdr�Qs
r�cCs�g}d}d}}|D]l}|r*||7}d}q|rT|dkr>d}qn|dkrJd}||7}q|dkrl|�|�d}q|dkrxd}||7}q|r�|�|�dd�|D�S)	apParse lists as described by RFC 2068 Section 2.

    In particular, parse comma-separated lists where the elements of
    the list may include quoted-strings.  A quoted-string could
    contain a comma.  A non-quoted string could have quotes in the
    middle.  Neither commas nor quotes count if they are escaped.
    Only double-quotes count, not single-quotes.
    r�F�\Tr��,cSsg|]}|���qSrc��strip)r+�partrcrcrdrj�rkz#parse_http_list.<locals>.<listcomp>)rx)r��resr�escaper�currcrcrdr�[s4	


r�c@s(eZdZdd�ZdZdd�Zdd�ZdS)r<cCs\|j}|dd�dkrN|dd�dkrN|jrN|jdkrN|j|��urXtd��n
|�|�SdS)NrrGrKr9�	localhost�-file:// scheme is supported only on localhost)r�r��	get_namesr�open_local_file)r�r�r^rcrcrd�	file_open�s&�
zFileHandler.file_openNcCs^tjdurXz*tt�d�dt�t���d�t_Wn"tjyVt�d�ft_Yn0tjS)Nrr)r<�namesrlr�gethostbyname_ex�gethostname�gaierror�
gethostbynamer�rcrcrdr�s
��
zFileHandler.get_namesc
Csddlmmm}ddl}|j}|j}t|�}z�t�	|�}|j
}|j|jdd�}	|�
|�d}
t�d|
ppd||	f�}|r�t|�\}}|r�|s�t|�|��vr�|r�d||}
nd|}
tt|d�||
�WSWn.t�y}zt|��WYd}~n
d}~00td��dS)	NrT��usegmtz6Content-type: %s
Content-length: %d
Last-modified: %s
�
text/plain�file://�rbzfile not on local host)�future.backports.email.utilsr�r�utils�	mimetypesr�r�rErq�stat�st_size�
formatdate�st_mtime�
guess_type�message_from_stringr�_safe_gethostbynamerr(r�OSErrorr)r�r��email_utilsrr�r|�	localfile�statsr��modified�mtyper�rz�origurl�exprcrcrdr�s:
����zFileHandler.open_local_file)r�r�r�rrrrrcrcrcrdr<�s
r<cCs(zt�|�WStjy"YdS0dSre)rrr�r�rcrcrdr$�sr$c@seZdZdd�Zdd�ZdS)r=c
Cs&ddl}ddl}|j}|s"td��t|�\}}|dur>|j}nt|�}t|�\}}|rdt|�\}}nd}t	|�}|pvd}|p~d}zt
�|�}Wn.t
jy�}zt|��WYd}~n
d}~00t
|j�\}	}
|	�d�}ttt	|��}|dd�|d}}|�r|d�s|dd�}z�|�||||||j�}
|�r:d�p<d}|
D]2}t|�\}}|��d	k�rB|d
v�rB|��}�qB|
�||�\}}d}|�|j�d}|�r�|d|7}|du�r�|dk�r�|d|7}t�|�}t|||j�WS|j�y }z td
|�}t|�WYd}~n
d}~00dS)Nr�ftp error: no host givenr�r9rjrr��Dr���a�Ar�r�r�r/zContent-type: %s
zContent-length: %d
�
ftp error: %r)�ftplibrr�rr�FTP_PORTr
r r!rrrr�r"r�r�r�r�connect_ftpr`r$r��upper�retrfiler"r�rr#r(�
all_errorsr)r�r�r4rr�rzrNrdr�rr�attrs�dirsrf�fwr��attrr�r�retrlenr�r*r,�excrcrcrd�ftp_open�s^
�
zFTPHandler.ftp_openc	Cst||||||dd�S)NF)�
persistent)�
ftpwrapper)r�rNrdr�rzr;r`rcrcrdr6�s�zFTPHandler.connect_ftpN)r�r�r�r@r6rcrcrcrdr=�s5r=c@s<eZdZdd�Zdd�Zdd�Zdd�Zd	d
�Zdd�Zd
S)r>cCs"i|_i|_d|_d|_d|_dS)Nr�<r�)�cacher`�soonest�delay�	max_connsr�rcrcrdr��s
zCacheFTPHandler.__init__cCs
||_dSre)rF)r��trcrcrd�
setTimeoutszCacheFTPHandler.setTimeoutcCs
||_dSre)rG)r�r3rcrcrd�setMaxConnsszCacheFTPHandler.setMaxConnscCsr|||d�|�|f}||jvr4t��|j|j|<n,t||||||�|j|<t��|j|j|<|��|j|S)Nr9)�joinrDr�rFr`rB�check_cache)r�rNrdr�rzr;r`r�rcrcrdr6
s

�
zCacheFTPHandler.connect_ftpcCs�t��}|j|krPt|j���D].\}}||kr |j|��|j|=|j|=q tt|j����|_t	|j�|j
kr�t|j���D]&\}}||jkr�|j|=|j|=q�q�tt|j����|_dSre)r�rEr�r`r�rDr��min�valuesrzrG)r�rHr,r-rcrcrdrLs


zCacheFTPHandler.check_cachecCs0|j��D]}|��q
|j��|j��dSre)rDrNr��clearr`)r��connrcrcrd�clear_cache)s

zCacheFTPHandler.clear_cacheN)	r�r�r�r�rIrJr6rLrQrcrcrcrdr>�sr>r$�nt)rErDcCst|�S)zOS-specific conversion from a relative URL of the 'file' scheme
        to a file system path; not recommended for general use.)r��pathnamercrcrdrE8srEcCst|�S)zOS-specific conversion from a file system path to a relative URL
        of the 'file' scheme; not recommended for general use.)rrSrcrcrdrD=srDc@s�eZdZdZdZdeZd*dd�Zdd�Zdd	�Z	d
d�Z
dd
�Zd+dd�Zd,dd�Z
d-dd�Zd.dd�Zdd�Zd/dd�Zd0dd�Zdd�Zer�dd�Zd1d d!�Zd"d#�Zd$d%�Zd&d'�Zd2d(d)�ZdS)3rIa,Class to open URLs.
    This is a class rather than just a subroutine because we may need
    more than one set of global protocol-specific options.
    Note -- this is a base class for those who don't want the
    automatic handling of errors type 302 (relocated) and 401
    (authorization needed).Nr�cKs�dd|jji}tj|tdd�|dur.t�}t|d�s@Jd��||_|�d�|_	|�d�|_
d	|jfg|_g|_
tj|_d|_t|_dS)
NzW%(class)s style of invoking requests is deprecated. Use newer urlopen functions/methods�classrKr�rRrS�key_file�	cert_filez
User-Agent)�	__class__r�r�r�r�rFr�rVr�rVrW�versionr��_URLopener__tempfilesrqr��_URLopener__unlink�	tempcache�ftpcache)r�rV�x509r�rcrcrdr�Ws
�zURLopener.__init__cCs|��dSre)r�r�rcrcrd�__del__qszURLopener.__del__cCs|��dSre)�cleanupr�rcrcrdr�tszURLopener.closec	CsT|jr@|jD]&}z|�|�Wqty0Yq0q|jdd�=|jrP|j��dSre)rZr[r%r\rO)r�rfrcrcrdr`ws
zURLopener.cleanupcGs|j�|�dS)zdAdd a header to be used by the HTTP interface only
        e.g. u.addheader('Accept', 'sound/basic')N)r�rx)r�r�rcrcrd�	addheader�szURLopener.addheaderc
CsZtt|��}t|dd�}|jrL||jvrL|j|\}}t|d�}t|||�St|�\}}|s`d}||jvr�|j|}t|�\}}	t|	�\}
}|
|f}nd}d|}||_	|�
dd�}t||�s�|r�|�|||�S|�
||�Sz.|dur�t||�|�WSt||�||�WSWnJt�y �Yn6tj�yT}
zttd	|
��WYd}
~
n
d}
~
00dS)
z6Use URLopener().open(file) instead of open(file, 'r').z%/:=&?~#+!$,;'@()*[]|��saferrfN�open_�-r�zsocket error)rr&rr\rr(rrVrr�r1r��open_unknown_proxy�open_unknownr�rrr�r�IOError)r�r�r_r|r�r�urltyper^rH�	proxyhostr�r�rvr�rcrcrdr�s<




zURLopener.opencCst|�\}}tdd|��dS)�/Overridable interface to open unknown URL type.�	url errorzunknown url typeN�rrh)r�r�r_r�r^rcrcrdrg�szURLopener.open_unknowncCs t|�\}}tdd||��dS)rkrlzinvalid proxy for %sNrm)r�rHr�r_r�r^rcrcrdrf�szURLopener.open_unknown_proxyc
Cs6tt|��}|jr&||jvr&|j|St|�\}}|dur�|rF|dkr�z0|�|�}|��}|��tt|�d�|fWSt	y�}	zWYd}	~	n
d}	~	00|�
||�}�zV|��}
|r�t
|d�}n|ddl}t|�\}
}t|p�d�\}
}t|p�d�\}}
t
|�pd�\}}
tj�|�d}|�|�\}}|j�|�t�|d�}z�||
f}|jdu�r`||j|<d}d}d}d}d	|
v�r�t|
d
�}|�r�||||�|�|�}|�s��q�|t|�7}|�|�|d7}|�r�||||��q�W|��n
|��0W|��n
|��0|dk�r2||k�r2td||f|��|S)ztretrieve(url) returns (filename, headers) for a local object
        or (tempfilename, headers) for a remote object.Nrfrrgrr�rirjrkrlrm)rr&r\rrrpr�rErrhrrtr#r"rqrr�splitext�mkstemprZrx�fdopenr
ryrzr{r)r�r^r|r}r_r��url1rr�r�r�r�rt�garbagerr�suffix�fdr�r�r�ryr�r�rcrcrd�retrieve�sp





��zURLopener.retrievecCs d}d}t|t�r<t|�\}}|r6t|�\}}t|�}|}nt|\}}t|�\}}t|�\}	}
|
}d}|	��dkrvd}n:t|
�\}}
|r�t|�\}}|r�d|	||
f}t|�r�|}|s�tdd��|r�t|�}t	�
|����d�}nd}|�rt|�}t	�
|����d�}nd}||�}
i}|�r*d||d<|�r<d||d	<|�rJ||d
<d|d<|j
D]\}}|||<�qX|du�r�d
|d<|
�d|||�n|
jd||d�z|
��}Wntj�y�td��Yn0d|jk�r�dk�rnnt||jd||j�S|�||j|j|j|j|�SdS)a�Make an HTTP connection using connection_class.

        This is an internal method that should be called from
        open_http() or open_https().

        Arguments:
        - connection_factory should take a host name and return an
          HTTPConnection instance.
        - url is the url to retrieval or a host, relative-path pair.
        - data is payload for a POST request or None.
        Nrz	%s://%s%sz
http errorr�rYzBasic %sr�r�r�r�r�r�zContent-Typer�r�r�z$http protocol error: bad status linerr�http:)r�r
rr rrr�r\rhr]r^r_r�r�r�r�r�
BadStatusLiner�statusr(r��
http_errorrr�)r��connection_factoryr^r_�user_passwd�proxy_passwdr�r��realhostrir��
proxy_authr��	http_connr��headerr�r�rcrcrd�_open_generic_http�st


��zURLopener._open_generic_httpcCs|�tj||�S)zUse HTTP protocol.)r�rr��r�r^r_rcrcrd�	open_httpXszURLopener.open_httpc
Csbd|}t||�rPt||�}|dur6||||||�}	n|||||||�}	|	rP|	S|�|||||�S)z�Handle http errors.

        Derived class can override this, or provide specific handlers
        named http_error_DDD where DDD is the 3-digit error code.z
http_error_%dN)r�r�r)
r�r^r�errcode�errmsgr�r_rvr�r�rcrcrdry\s

zURLopener.http_errorcCs|��t||||d��dS)z>Default error handler: close the connection and raise IOError.N)r�r�r�r^rr�r�r�rcrcrdrlszURLopener.http_error_defaultcCstj||j|jd�S)N)rVrW)rr
rVrW)r�r�rcrcrd�_https_connectionrs�zURLopener._https_connectioncCs|�|j||�S)zUse HTTPS protocol.)r�r�r�rcrcrd�
open_httpswszURLopener.open_httpscCs^t|t�std��|dd�dkrP|dd�dkrP|dd���dkrPtd	��n
|�|�SdS)
z/Use local file or FTP depending on form of URL.zEfile error: proxy support for file protocol currently not implementedNrrGrKr9�z
localhost/r
)r�r
rr�rSr)r�r^rcrcrd�	open_file{s

4
zURLopener.open_filec
Cslddlmmm}ddl}t|�\}}t|�}zt�|�}Wn2t	yt}zt
|j|j��WYd}~n
d}~00|j
}	|j|jdd�}
|�|�d}t�d|p�d|	|
f�}|s�|}
|dd�dkr�d	|}
tt|d
�||
�St|�\}}|�s`t�|�t�ft�v�r`|}
|dd�dk�r0d	|}
n|dd�dk�rNtd
|��tt|d
�||
�St
d��dS)zUse local file.rNTrz6Content-Type: %s
Content-Length: %d
Last-modified: %s
rrr9rrrz./zAlocal file url may start with / or file:. Unknown url of type: %sz#local file error: not on local host)rr�rrrrrErqrr%r�strerrorr|rr r!r"r#r(rrrrr�thishostrS)r�r^r&rr�rf�	localnamer(�er�r)r*r��urlfilerzrcrcrdr�s@$���
zURLopener.open_local_filec
Cs�t|t�std��ddl}t|�\}}|s2td��t|�\}}t|�\}}|r\t|�\}}nd}t|�}t|ppd�}t|p|d�}t	�
|�}|s�ddl}|j}nt
|�}t|�\}}	t|�}|�d�}
|
dd�|
d}
}|
r�|
ds�|
dd�}
|
�r
|
d�s
d|
d<|||d�|
�f}t|j�tk�rb|j��D]*}
|
|k�r6|j|
}|j|
=|���q6z�||jv�r�t|||||
�|j|<|�s�d	}nd
}|	D]2}t|�\}}|��dk�r�|dv�r�|��}�q�|j|�||�\}}|�d
|�d}d}|�r|d|7}|du�r,|dk�r,|d|7}t�|�}t||d
|�WSt��y~}zt td|��WYd}~n
d}~00dS)zUse FTP protocol.zCftp error: proxy support for ftp protocol currently not implementedrNr.r�r9rjrr/r�r�r0zftp:zContent-Type: %s
zContent-Length: %d
zftp error %r)!r�r
rrrrr r!rrrr4r5r
r"r�rKrzr]�MAXFTPCACHErRr�rBr$r�r7r8r"rr#r(�	ftperrorsr)r�r^rr�rrrzrNrdr4r:r;rfr�r,r-r�r=r�rr>r*r�r,rcrcrd�open_ftp�sj




��
zURLopener.open_ftpc	
Cs:t|t�std��z|�dd�\}}WntyBtdd��Yn0|sLd}|�d�}|dkr�d	||d
�vr�||dd
�}|d
|�}nd}g}|�dt�	d
t�
t�����|�d|�|dkr�t�|�
d���d�}nt|�}|�dt|��|�d�|�|�d�|�}t�|�}t�|�}t|||�S)zUse "data" URL.zEdata error: proxy support for data protocol currently not implementedrrz
data errorzbad data URLztext/plain;charset=US-ASCII�;rrNr�zDate: %sz%a, %d %b %Y %H:%M:%S GMTzContent-type: %sr]rYzlatin-1zContent-Length: %d�
)r�r
rr�rSrh�rfindrxr��strftime�gmtimer]�decodebytesr_r�rrzrKrr#�io�StringIOr()	r�r^r_r��semi�encodingr�r��frcrcrd�	open_data�s8

�




zURLopener.open_data)N)N)N)N)NNN)N)N)N)N)r�r�r�r!rZr�rYr�r_r�r`rarrgrfrur�r�ryrrRr�r�r�rr�r�rcrcrcrdrIJs.

$


B\


	 :rIc@s�eZdZdZdd�Zdd�Zd#dd�Zd	d
�Zd$dd�Zd%d
d�Z	d&dd�Z
d'dd�Zd(dd�Zd)dd�Z
d*dd�Zd+dd�Zd,dd�Zd-dd �Zd!d"�ZdS).rJz?Derived class with handlers for errors we can handle (perhaps).cOs.tj|g|�Ri|��i|_d|_d|_dS)Nrr$)rIr��
auth_cache�tries�maxtries)r�r��kwargsrcrcrdr�szFancyURLopener.__init__cCst||d||�S)z3Default error handling -- don't raise an exception.rv)r(r�rcrcrdrsz!FancyURLopener.http_error_defaultNc	Csl|jd7_|jrN|j|jkrNt|d�r2|j}n|j}d|_|||dd|�S|�||||||�}d|_|S)z%Error 302 -- relocated (temporarily).r�http_error_500rrz)Internal Server Error: Redirect Recursion)r�r�r�r�r�redirect_internal)	r�r^rr�r�r�r_r�r�rcrcrdrCs
��zFancyURLopener.http_error_302c	Csxd|vr|d}nd|vr$|d}ndS|��t|jd||�}t|�}|jdvrnt|||d|||��|�|�S)Nr5r6�:r7z( Redirection to url '%s' is not allowed.)r�rr�rr<rr)	r�r^rr�r�r�r_r2r@rcrcrdr�%s 


��z FancyURLopener.redirect_internalcCs|�||||||�S)z*Error 301 -- also relocated (permanently).�rC�r�r^rr�r�r�r_rcrcrdrDAszFancyURLopener.http_error_301cCs|�||||||�S)z;Error 303 -- also relocated (essentially identical to 302).r�r�rcrcrdrEEszFancyURLopener.http_error_303cCs2|dur|�||||||�S|�|||||�SdS)z1Error 307 -- relocated, but turn POST into error.N)rCrr�rcrcrdrFIszFancyURLopener.http_error_307Fc
Cs�d|vrt�||||||�|d}t�d|�}	|	sHt�||||||�|	��\}
}|
��dkrtt�||||||�|s�t�||||||�d|jd}|dur�t||�||�St||�|||�SdS)z_Error 401 -- authentication required.
        This function supports Basic authentication only.r��![ 	]*([^ 	]+)[ 	]+realm="([^"]*)"r��retry_�_basic_authN�rIrr��matchr�r�r�r��
r�r^rr�r�r�r_r��stuffr�r<rmrvrcrcrdr�Ps.
�
�
��zFancyURLopener.http_error_401c
Cs�d|vrt�||||||�|d}t�d|�}	|	sHt�||||||�|	��\}
}|
��dkrtt�||||||�|s�t�||||||�d|jd}|dur�t||�||�St||�|||�SdS)zeError 407 -- proxy authentication required.
        This function supports Basic authentication only.r�r�r��retry_proxy_r�Nr�r�rcrcrdr�is.
�
�
��zFancyURLopener.http_error_407cCs�t|�\}}d||}|jd}t|�\}}	t|	�\}	}
|	�d�d}|	|d�}	|�|	||�\}}
|sr|
srdSdt|dd�t|
dd�|	f}	d|	|
|jd<|dur�|�|�S|�||�SdS)N�http://r�@r�%s:%s@%sr�rb�rrVrr��get_user_passwdrr�r�r^rmr_r�r�r2rHrirj�
proxyselectorr�rNrdrcrcrd�retry_proxy_http_basic_auth�s 
�
z*FancyURLopener.retry_proxy_http_basic_authcCs�t|�\}}d||}|jd}t|�\}}	t|	�\}	}
|	�d�d}|	|d�}	|�|	||�\}}
|sr|
srdSdt|dd�t|
dd�|	f}	d|	|
|jd<|dur�|�|�S|�||�SdS)N�https://r�r�rr�r�rbr�r�rcrcrd�retry_proxy_https_basic_auth�s 
�
z+FancyURLopener.retry_proxy_https_basic_authc
Cs�t|�\}}|�d�d}||d�}|�|||�\}}|sD|sDdSdt|dd�t|dd�|f}d||}	|dur�|�|	�S|�|	|�SdS)Nr�rr�r�rbr��rr�r�rr�
r�r^rmr_r�r�r�rNrdr2rcrcrdr��s�
z$FancyURLopener.retry_http_basic_authc
Cs�t|�\}}|�d�d}||d�}|�|||�\}}|sD|sDdSdt|dd�t|dd�|f}d||}	|dur�|�|	�S|�|	|�SdS)Nr�rr�r�rbr�r�r�rcrcrd�retry_https_basic_auth�s�
z%FancyURLopener.retry_https_basic_authrcCs`|d|��}||jvr2|r(|j|=n
|j|S|�||�\}}|sJ|rX||f|j|<||fS)Nr�)r�r��prompt_user_passwd)r�r�rmrQr�rNrdrcrcrdr��s


zFancyURLopener.get_user_passwdcCsVddl}z.td||f�}|�d|||f�}||fWStyPt�YdS0dS)z#Override this in a GUI environment!rNzEnter username for %s at %s: z#Enter password for %s in %s at %s: rp)�getpassr	�KeyboardInterrupt�print)r�r�rmr�rNrdrcrcrdr��s�
z!FancyURLopener.prompt_user_passwd)N)N)N)N)NF)NF)N)N)N)N)r)r�r�r�r!r�rrCr�rDrErFr�r�r�r�r�r�r�r�rcrcrcrdrJs(



�
�





rJcCstdurt�d�atS)z8Return the IP address of the magic hostname 'localhost'.Nr)�
_localhostrrrcrcrcrdr�s
rcCsNtdurJztt�t���d�aWn&tjyHtt�d�d�aYn0tS)z,Return the IP addresses of the current host.Nrr)�	_thishostrlrrrrrcrcrcrdr��sr�cCstdurddl}|jatS)z1Return the set of errors raised by the FTP class.Nr)�
_ftperrorsr4r9)r4rcrcrdr��sr�cCstdurt�d�atS)z%Return an empty email Message object.Nr�)�
_noheadersrr#rcrcrcrd�	noheaders�s
r�c@sJeZdZdZddd�Zdd�Zdd	�Zd
d�Zdd
�Zdd�Z	dd�Z
dS)rBz;Class used by open_ftp() for cache of open FTP connections.NTcCs<||_||_||_||_||_||_d|_||_|��dSr�)	rNrdr�rzr;r`�refcount�	keepalive�init)r�rNrdr�rzr;r`rArcrcrdr�	szftpwrapper.__init__cCs\ddl}d|_|��|_|j�|j|j|j�|j�|j	|j
�d�|j�}|j�
|�dS)Nrr9)r4�busy�FTPr8�connectr�rzr`�loginrNrdrKr;�cwd)r�r4�_targetrcrcrdr�	s
zftpwrapper.initc
Cs�ddl}|��|dvr"d}d}nd|}d}z|j�|�Wn(|jyf|��|j�|�Yn0d}|r�|s�zd|}|j�|�\}}WnJ|jy�}z0t|�dd�dkr�t	t
d	|��WYd}~n
d}~00|�s�|j�d�|�rx|j��}	z`z|j�|�Wn>|j�yN}z"t
d	|�}
||
_
|
�WYd}~n
d}~00W|j�|	�n|j�|	�0d
|}nd}|j�|�\}}d|_t|�d�|j�}|jd7_|��||fS)
Nr)r�r/zTYPE ArzTYPE zRETR rK�550r3zLIST �LISTr)r4�endtransferr8�voidcmdr9r��ntransfercmd�
error_permr
rr�pwdr��	__cause__r�r)�makefile�
file_closer�r�)r�rfr�r4�cmd�isdirrPr>r�r�r?�ftpobjrcrcrdr8	sJ&

zftpwrapper.retrfilecCs
d|_dSr�)r�r�rcrcrdr�K	szftpwrapper.endtransfercCsd|_|jdkr|��dS)NFr)r�r��
real_closer�rcrcrdr�N	s
zftpwrapper.closecCs2|��|jd8_|jdkr.|js.|��dS)Nrr)r�r�r�r�r�rcrcrdr�S	szftpwrapper.file_closecCs0|��z|j��Wnt�y*Yn0dSre)r�r8r�r�r�rcrcrdr�Y	s
zftpwrapper.real_close)NT)r�r�r�r!r�r�r8r�r�r�r�rcrcrcrdrB	s�
	0rBcCsHi}tj��D]4\}}|��}|r|dd�dkr|||dd�<q|S)aReturn a dictionary of scheme -> proxy server URL mappings.

    Scan the environment for variables named <scheme>_proxy;
    this seems to be the standard convention.  If you need a
    different way, you can pass a proxies dictionary to the
    [Fancy]URLopener constructor.

    i����N�_proxy)rq�environr�r�)rVrvr�rcrcrd�getproxies_environmenta	s	r�cCsttj�dd�ptj�dd�}|dkr(dSt|�\}}dd�|�d�D�}|D]"}|rL|�|�sh|�|�rLdSqLd	S)
z�Test if proxies should not be used for a particular host.

    Checks the environment for a variable named no_proxy, which should
    be a list of DNS suffixes separated by commas, or '*' for all hosts.
    �no_proxyr�ZNO_PROXY�*rcSsg|]}|���qSrcr)r+rHrcrcrdrj~	rkz,proxy_bypass_environment.<locals>.<listcomp>rr)rqr�r�rr��endswith)r�r��hostonlyrzZ
no_proxy_listrvrcrcrd�proxy_bypass_environmentq	sr�c	Csddlm}t|�\}}dd�}d|vr4|dr4dSd}|�d	d
�D]�}|sNqDt�d|�}|du�r|dur�zt�|�}||�}Wntjy�YqDYn0||�d��}	|�d
�}
|
dur�d|�d��	d�d}
nt
|
dd��}
d|
}
||
?|	|
?k�rdSqD|||�rDdSqDdS)aj
    Return True iff this host shouldn't be accessed using a proxy

    This function uses the MacOSX framework SystemConfiguration
    to fetch the proxy information.

    proxy_settings come from _scproxy._get_proxy_settings or get mocked ie:
    { 'exclude_simple': bool,
      'exceptions': ['foo.bar', '*.bar.com', '127.0.0.1', '10.1', '10.0/16']
    }
    r)�fnmatchcSsd|�d�}ttt|��}t|�dkr8|gd�dd�}|dd>|dd>B|dd	>B|d
BS)N�.r#)rrrrr�rr�rr�rK)r�r�rr
rz)�ipAddrryrcrcrd�ip2num�	s

z,_proxy_bypass_macosx_sysconf.<locals>.ip2numr��exclude_simpleTN�
exceptionsrcz(\d+(?:\.\d+)*)(/\d+)?rrr�� F)r�rr�r�r�rrr��group�countr
)r��proxy_settingsr�r�rzr��hostIPr�r3r~�maskrcrcrd�_proxy_bypass_macosx_sysconf�	s8




r��darwin)�_get_proxy_settings�_get_proxiescCst�}t||�Sre)r�r�)r�r�rcrcrd�proxy_bypass_macosx_sysconf�	sr�cCst�S)z�Return a dictionary of scheme -> proxy server URL mappings.

        This function uses the MacOSX framework SystemConfiguration
        to fetch the proxy information.
        )r�rcrcrcrd�getproxies_macosx_sysconf�	sr�cCst�rt|�St|�SdSre)r�r�r�r-rcrcrdr\�	sr\cCst�p
t�Sre)r�r�rcrcrcrdrF�	srFc
Csi}zddl}Wnty&|YS0z�|�|jd�}|�|d�d}|r�t|�|d�d�}d|vr�|�d�D]4}|�dd�\}}t�d	|�s�d
||f}|||<qrn>|dd�dkr�||d
<n$d||d
<d||d<d||d<|�	�Wnt
ttf�yYn0|S)zxReturn a dictionary of scheme -> proxy server URL mappings.

        Win32 uses the registry to store proxies.

        rN�;Software\Microsoft\Windows\CurrentVersion\Internet Settings�ProxyEnableZProxyServerrr�rz^([^/:]+)://z%s://%sr�rvrz	http://%sz
https://%sr�zftp://%sr8)
�winreg�ImportError�OpenKey�HKEY_CURRENT_USER�QueryValueExr
r�r�r��Close�WindowsErrorrSr�)rVr��internetSettings�proxyEnableZproxyServer�pr��addressrcrcrd�getproxies_registry�	sF
�����
rcCst�p
t�S)��Return a dictionary of scheme -> proxy server URL mappings.

        Returns settings gathered from the environment, if specified,
        or the registry.

        )r�rrcrcrcrdrF
scCsvzddl}Wnty YdS0z6|�|jd�}|�|d�d}t|�|d�d�}WntylYdS0|rv|szdSt|�\}}|g}z t�	|�}||kr�|�
|�Wntjy�Yn0z t�|�}||kr�|�
|�Wntjy�Yn0|�
d�}|D]j}	|	dk�r$d|v�r$dS|	�dd	�}	|	�d
d�}	|	�dd�}	|D] }
t�|	|
tj��rLdS�qL�qdS)
Nrr�r�Z
ProxyOverrider�z<local>r�rz\.r�z.*�?)r�r�r�r�r�r
rrrrrxr��getfqdnr�r1r�r�r�)r�r�rrZ
proxyOverrideZrawHostrz�addrZfqdnrr�rcrcrd�proxy_bypass_registry
s`�����





r
cCst�rt|�St|�SdS)rN)r�r�r
r-rcrcrdr\H
s)NNN)�r!�
__future__rrrrZfuture.builtinsrrrr	r
rrr
Zfuture.utilsrrrr]r�r�r�Zfuture.backportsrZfuture.backports.httprrr�rrr�parserrrrrrrrrr r!r"r#r$r%r&r'r�r(r)r�rqr|r�r�sysr�rtrnr��collectionsr*�collections.abcrTr+r�rR�__all__rYr�r]r	rArBrwrGrHr��ASCIIr�r��objectr,r-rCr.r@r/r0rPr2r3r4r5r6r7�urandomr�r8r9r:r�r;r�r\rxr1r?r�r�r<r$r=r>r�rvZ
nturl2pathrErDr]rIrJr�rr�r�r�r�r�r�rBr�r�r��platformZ_scproxyr�r�r�r�r\rFrr
rcrcrcrd�<module>s�V(L

?
y&hH*@
Ez

+4:8AU

^<

-	2
PKDu\_S}�e
e
8future/backports/urllib/__pycache__/error.cpython-39.pycnu�[���a

��?h�
�@spdZddlmZmZmZddlmZddlmZ	gd�Z
Gdd�de�ZGdd	�d	ee	j
�ZGd
d�de�ZdS)
a�Exception classes raised by urllib.

The base exception class is URLError, which inherits from IOError.  It
doesn't define any behavior of its own, but is the base class for all
exceptions defined in this package.

HTTPError is an exception class that is also a valid HTTP response
instance.  It behaves this way because HTTP protocol errors are valid
responses, with a status code, headers, and a body.  In some contexts,
an application may want to handle an exception like a regular
response.
�)�absolute_import�division�unicode_literals)�standard_library)�response)�URLError�	HTTPError�ContentTooShortErrorc@seZdZddd�Zdd�ZdS)rNcCs |f|_||_|dur||_dS�N)�args�reason�filename)�selfrr
�r�G/usr/local/lib/python3.9/site-packages/future/backports/urllib/error.py�__init__ szURLError.__init__cCs
d|jS)Nz<urlopen error %s>)r�rrrr�__str__&szURLError.__str__)N)�__name__�
__module__�__qualname__rrrrrrrs
rc@s<eZdZdZejjZdd�Zdd�Ze	dd��Z
dd	�Zd
S)rzBRaised when HTTP error occurs, but also acts like non-error returncCs:||_||_||_||_||_|dur6|�||||�dSr
)�code�msg�hdrs�fpr
�_HTTPError__super_init)r�urlrrrrrrrr-szHTTPError.__init__cCsd|j|jfS)NzHTTP Error %s: %s)rrrrrrr:szHTTPError.__str__cCs|jSr
)rrrrrr?szHTTPError.reasoncCs|jSr
)rrrrr�infoCszHTTPError.infoN)rrr�__doc__�urllib_response�
addinfourlrrr�propertyrrrrrrr)s

rc@seZdZdd�ZdS)r	cCst�||�||_dSr
)rr�content)r�messager"rrrrIszContentTooShortError.__init__N)rrrrrrrrr	Hsr	N)r�
__future__rrr�futurerZfuture.backports.urllibrr�__all__�IOErrorrr rr	rrrr�<module>sPKDu\^����>future/backports/urllib/__pycache__/robotparser.cpython-39.pycnu�[���a

��?h��@s�ddlmZmZmZddlmZddlmZddlm	Z
mZe
e_	ee_dgZ
Gdd�de�ZGdd�de�ZGd	d
�d
e�ZdS)�)�absolute_import�division�unicode_literals��str)�urllib)�parse�request�RobotFileParserc@sZeZdZdZddd�Zdd�Zdd�Zd	d
�Zdd�Zd
d�Z	dd�Z
dd�Zdd�ZdS)r
zs This class provides a set of methods to read, parse and answer
    questions about a single robots.txt file.

    �cCs,g|_d|_d|_d|_|�|�d|_dS)NFr)�entries�
default_entry�disallow_all�	allow_all�set_url�last_checked��self�url�r�M/usr/local/lib/python3.9/site-packages/future/backports/urllib/robotparser.py�__init__s
zRobotFileParser.__init__cCs|jS)z�Returns the time the robots.txt file was last fetched.

        This is useful for long-running web spiders that need to
        check for new robots.txt files periodically.

        )r�rrrr�mtime&szRobotFileParser.mtimecCsddl}|��|_dS)zYSets the time the robots.txt file was last fetched to the
        current time.

        rN)�timer)rrrrr�modified/szRobotFileParser.modifiedcCs&||_tj�|�dd�\|_|_dS)z,Sets the URL referring to a robots.txt file.��N)rrr�urlparse�host�pathrrrrr7szRobotFileParser.set_urlc
Cs~ztj�|j�}WnJtjjy\}z.|jdvr8d|_n|jdkrHd|_WYd}~n&d}~00|�	�}|�
|�d����dS)z4Reads the robots.txt URL and feeds it to the parser.)i�i�Ti�Nzutf-8)
rr	�urlopenr�error�	HTTPError�coderr�readr�decode�
splitlines)r�f�err�rawrrrr%<s

zRobotFileParser.readcCs,d|jvr|jdur(||_n|j�|�dS�N�*)�
useragentsr
r�append)r�entryrrr�
_add_entryIs

zRobotFileParser._add_entrycCsnd}t�}|D�]D}|sH|dkr,t�}d}n|dkrH|�|�t�}d}|�d�}|dkrf|d|�}|��}|stq|�dd�}t|�dkr|d����|d<tj�	|d���|d<|ddkr�|dkr�|�|�t�}|j
�|d�d}q|ddk�r&|dk�rT|j�t
|dd	��d}q|dd
kr|dkr|j�t
|dd��d}q|dk�rj|�|�dS)z�Parse the input lines from a robots.txt file.

        We allow that a user-agent: line is not preceded by
        one or more blank lines.
        rr��#N�:z
user-agentZdisallowFZallowT)�Entryr0�find�strip�split�len�lowerrr�unquoter-r.�	rulelines�RuleLine)r�lines�stater/�line�irrrrRsJ






zRobotFileParser.parsecCs�|jr
dS|jrdStj�tj�|��}tj�dd|j|j|j	|j
f�}tj�|�}|s\d}|jD]}|�
|�rb|�|�Sqb|jr�|j�|�SdS)z=using the parsed robots.txt decide if useragent can fetch urlFTr�/)rrrrrr:�
urlunparser �params�query�fragment�quoter�
applies_to�	allowancer
)r�	useragentr�
parsed_urlr/rrr�	can_fetch�s"�

zRobotFileParser.can_fetchcCsd�dd�|jD��S)NrcSsg|]}t|�d�qS)�
r)�.0r/rrr�
<listcomp>��z+RobotFileParser.__str__.<locals>.<listcomp>)�joinrrrrr�__str__�szRobotFileParser.__str__N)r)
�__name__�
__module__�__qualname__�__doc__rrrrr%r0rrKrQrrrrr
s
	
	3c@s(eZdZdZdd�Zdd�Zdd�ZdS)	r<zoA rule line is a single "Allow:" (allowance==True) or "Disallow:"
       (allowance==False) followed by a path.cCs(|dkr|sd}tj�|�|_||_dS)NrT)rrrFr rH)rr rHrrrr�szRuleLine.__init__cCs|jdkp|�|j�Sr+)r �
startswith)r�filenamerrrrG�szRuleLine.applies_tocCs|jr
dpdd|jS)NZAllowZDisallowz: )rHr rrrrrQ�szRuleLine.__str__N)rRrSrTrUrrGrQrrrrr<�sr<c@s0eZdZdZdd�Zdd�Zdd�Zdd	�Zd
S)r4z?An entry has one or more user-agents and zero or more rulelinescCsg|_g|_dS)N)r-r;rrrrr�szEntry.__init__cCsHg}|jD]}|�d|dg�q
|jD]}|�t|�dg�q&d�|�S)NzUser-agent: rLr)r-�extendr;rrP)r�ret�agentr?rrrrQ�s

z
Entry.__str__cCsF|�d�d��}|jD](}|dkr*dS|��}||vrdSqdS)z2check if this entry applies to the specified agentrArr,TF)r7r9r-)rrIrZrrrrG�s
zEntry.applies_tocCs$|jD]}|�|�r|jSqdS)zZPreconditions:
        - our agent applies to this entry
        - filename is URL decodedT)r;rGrH)rrWr?rrrrH�s

zEntry.allowanceN)rRrSrTrUrrQrGrHrrrrr4�s

r4N)�
__future__rrrZfuture.builtinsrZfuture.backportsrZfuture.backports.urllibr�_parser	�_request�__all__�objectr
r<r4rrrr�<module>s	PKDu\���ڥ�;future/backports/urllib/__pycache__/__init__.cpython-39.pycnu�[���a

��?h�@sdS)N�rrr�J/usr/local/lib/python3.9/site-packages/future/backports/urllib/__init__.py�<module>�PK
Du\���!�p�p8future/backports/urllib/__pycache__/parse.cpython-39.pycnu�[���a

��?hЋ�@sNdZddlmZmZmZddlmZmZmZm	Z	m
Z
mZddlm
Z
ddlZddlZddlZgd�Zgd�Zgd�Zgd	�Zgd
�Zgd�Zgd�Zd
ZdZiZdd�ZdZdZdd�Zeefdd�Zeefdd�Z dd�Z!Gdd�de"�Z#Gdd�de"�Z$Gdd �d e"�Z%Gd!d"�d"e%e#�Z&Gd#d$�d$e%e$�Z'dd%lm(Z(e(d&d'�Z)e(d(d)�Z*e(d*d+�Z+e&Z,Gd,d&�d&e)e#�Z-Gd-d(�d(e*e&�Z.Gd.d*�d*e+e&�Z/Gd/d0�d0e)e$�Z0Gd1d2�d2e*e'�Z1Gd3d4�d4e+e'�Z2d5d6�Z3e3�[3dd9d:�Z4d;d<�Z5d�d=d>�Z6d�d?d@�Z7dAdB�Z8dCdD�Z9d�dEdF�Z:dGdH�Z;dIZ<edJdK�e<D��Z=dLdM�Z>e�?dN�Z@d�dQdR�ZAd�dTdU�ZBd�dVdW�ZCd�dXdY�ZDeEedZ��ZFeeF�ZGiZHGd[d\�d\ejI�ZJd�d^d_�ZKd�d`da�ZLd�dbdc�ZMd�ddde�ZNdfdg�ZOdhdi�ZPdaQdjdk�ZRdaSdldm�ZTdaUdndo�ZVdaWdpdq�ZXdaYdrds�ZZda[d�dudv�Z\da]dwdx�Z^da_dydz�Z`d{d|�Zadabd}d~�ZcdS)�a�
Ported using Python-Future from the Python 3.3 standard library.

Parse (absolute and relative) URLs.

urlparse module is based upon the following RFC specifications.

RFC 3986 (STD66): "Uniform Resource Identifiers" by T. Berners-Lee, R. Fielding
and L.  Masinter, January 2005.

RFC 2732 : "Format for Literal IPv6 Addresses in URL's by R.Hinden, B.Carpenter
and L.Masinter, December 1999.

RFC 2396:  "Uniform Resource Identifiers (URI)": Generic Syntax by T.
Berners-Lee, R. Fielding, and L. Masinter, August 1998.

RFC 2368: "The mailto URL scheme", by P.Hoffman , L Masinter, J. Zawinski, July 1998.

RFC 1808: "Relative Uniform Resource Locators", by R. Fielding, UC Irvine, June
1995.

RFC 1738: "Uniform Resource Locators (URL)" by T. Berners-Lee, L. Masinter, M.
McCahill, December 1994

RFC 3986 is considered the current standard and any future changes to
urlparse module should conform with it.  The urlparse module is
currently not entirely compliant with this RFC due to defacto
scenarios for parsing, and for backward compatibility purposes, some
parsing quirks from older RFCs are retained. The testcases in
test_urlparse.py provides a good indicator of parsing behavior.
�)�absolute_import�division�unicode_literals)�bytes�chr�dict�int�range�str)�raise_with_tracebackN)�urlparse�
urlunparse�urljoin�	urldefrag�urlsplit�
urlunsplit�	urlencode�parse_qs�	parse_qsl�quote�
[binary zip data omitted — site-packages.zip entries:
 future/backports/urllib/__pycache__/parse.cpython-39.pyc (compiled bytecode of
 future/backports/urllib/parse.py, whose source appears below) and
 future/backports/urllib/__pycache__/response.cpython-39.pyc (compiled bytecode
 of future/backports/urllib/response.py, the file-like response wrappers
 addbase, addclosehook, addinfo and addinfourl referenced by error.py below)]
future/backports/urllib/error.py

"""Exception classes raised by urllib.

The base exception class is URLError, which inherits from IOError.  It
doesn't define any behavior of its own, but is the base class for all
exceptions defined in this package.

HTTPError is an exception class that is also a valid HTTP response
instance.  It behaves this way because HTTP protocol errors are valid
responses, with a status code, headers, and a body.  In some contexts,
an application may want to handle an exception like a regular
response.
"""
from __future__ import absolute_import, division, unicode_literals
from future import standard_library

from future.backports.urllib import response as urllib_response


__all__ = ['URLError', 'HTTPError', 'ContentTooShortError']


# do these error classes make sense?
# make sure all of the IOError stuff is overridden.  we just want to be
# subtypes.

class URLError(IOError):
    # URLError is a sub-type of IOError, but it doesn't share any of
    # the implementation.  need to override __init__ and __str__.
    # It sets self.args for compatibility with other EnvironmentError
    # subclasses, but args doesn't have the typical format with errno in
    # slot 0 and strerror in slot 1.  This may be better than nothing.
    def __init__(self, reason, filename=None):
        self.args = reason,
        self.reason = reason
        if filename is not None:
            self.filename = filename

    def __str__(self):
        return '<urlopen error %s>' % self.reason

class HTTPError(URLError, urllib_response.addinfourl):
    """Raised when HTTP error occurs, but also acts like non-error return"""
    __super_init = urllib_response.addinfourl.__init__

    def __init__(self, url, code, msg, hdrs, fp):
        self.code = code
        self.msg = msg
        self.hdrs = hdrs
        self.fp = fp
        self.filename = url
        # The addinfourl classes depend on fp being a valid file
        # object.  In some cases, the HTTPError may not have a valid
        # file object.  If this happens, the simplest workaround is to
        # not initialize the base classes.
        if fp is not None:
            self.__super_init(fp, hdrs, url, code)

    def __str__(self):
        return 'HTTP Error %s: %s' % (self.code, self.msg)

    # since URLError specifies a .reason attribute, HTTPError should also
    #  provide this attribute. See issue13211 for discussion.
    @property
    def reason(self):
        return self.msg

    def info(self):
        return self.hdrs


# exception raised when downloaded size does not match content-length
class ContentTooShortError(URLError):
    def __init__(self, message, content):
        URLError.__init__(self, message)
        self.content = content
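A minimal sketch of the dual error/response behaviour described above. It is demonstrated here with the stdlib `urllib.error` that this module backports; the assumption is that the backported classes behave identically, since this file is a port of the Python 3.3 module:

```python
import io
import urllib.error

# HTTPError is an exception, but also behaves like a response object.
err = urllib.error.HTTPError(
    url="http://example.com/missing", code=404, msg="Not Found",
    hdrs={}, fp=io.BytesIO(b"gone"))

body = err.read()        # the body is readable, like any response
print(err)               # 'HTTP Error 404: Not Found'
print(err.reason)        # the .reason property mirrors .msg
print(err.geturl(), body)
```

This is why handlers can treat an `HTTPError` either as a failure to raise or as a response to inspect.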
future/backports/urllib/parse.py

"""
Ported using Python-Future from the Python 3.3 standard library.

Parse (absolute and relative) URLs.

urlparse module is based upon the following RFC specifications.

RFC 3986 (STD66): "Uniform Resource Identifiers" by T. Berners-Lee, R. Fielding
and L.  Masinter, January 2005.

RFC 2732 : "Format for Literal IPv6 Addresses in URL's by R.Hinden, B.Carpenter
and L.Masinter, December 1999.

RFC 2396:  "Uniform Resource Identifiers (URI)": Generic Syntax by T.
Berners-Lee, R. Fielding, and L. Masinter, August 1998.

RFC 2368: "The mailto URL scheme", by P. Hoffman, L. Masinter, J. Zawinski, July 1998.

RFC 1808: "Relative Uniform Resource Locators", by R. Fielding, UC Irvine, June
1995.

RFC 1738: "Uniform Resource Locators (URL)" by T. Berners-Lee, L. Masinter, M.
McCahill, December 1994

RFC 3986 is considered the current standard and any future changes to
urlparse module should conform with it.  The urlparse module is
currently not entirely compliant with this RFC due to defacto
scenarios for parsing, and for backward compatibility purposes, some
parsing quirks from older RFCs are retained. The testcases in
test_urlparse.py provide a good indicator of parsing behavior.
"""
from __future__ import absolute_import, division, unicode_literals
from future.builtins import bytes, chr, dict, int, range, str
from future.utils import raise_with_traceback

import re
import sys
import collections

__all__ = ["urlparse", "urlunparse", "urljoin", "urldefrag",
           "urlsplit", "urlunsplit", "urlencode", "parse_qs",
           "parse_qsl", "quote", "quote_plus", "quote_from_bytes",
           "unquote", "unquote_plus", "unquote_to_bytes"]

# A classification of schemes ('' means apply by default)
uses_relative = ['ftp', 'http', 'gopher', 'nntp', 'imap',
                 'wais', 'file', 'https', 'shttp', 'mms',
                 'prospero', 'rtsp', 'rtspu', '', 'sftp',
                 'svn', 'svn+ssh']
uses_netloc = ['ftp', 'http', 'gopher', 'nntp', 'telnet',
               'imap', 'wais', 'file', 'mms', 'https', 'shttp',
               'snews', 'prospero', 'rtsp', 'rtspu', 'rsync', '',
               'svn', 'svn+ssh', 'sftp', 'nfs', 'git', 'git+ssh']
uses_params = ['ftp', 'hdl', 'prospero', 'http', 'imap',
               'https', 'shttp', 'rtsp', 'rtspu', 'sip', 'sips',
               'mms', '', 'sftp', 'tel']

# These are not actually used anymore, but should stay for backwards
# compatibility.  (They are undocumented, but have a public-looking name.)
non_hierarchical = ['gopher', 'hdl', 'mailto', 'news',
                    'telnet', 'wais', 'imap', 'snews', 'sip', 'sips']
uses_query = ['http', 'wais', 'imap', 'https', 'shttp', 'mms',
              'gopher', 'rtsp', 'rtspu', 'sip', 'sips', '']
uses_fragment = ['ftp', 'hdl', 'http', 'gopher', 'news',
                 'nntp', 'wais', 'https', 'shttp', 'snews',
                 'file', 'prospero', '']

# Characters valid in scheme names
scheme_chars = ('abcdefghijklmnopqrstuvwxyz'
                'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
                '0123456789'
                '+-.')

# XXX: Consider replacing with functools.lru_cache
MAX_CACHE_SIZE = 20
_parse_cache = {}

def clear_cache():
    """Clear the parse cache and the quoters cache."""
    _parse_cache.clear()
    _safe_quoters.clear()


# Helpers for bytes handling
# For 3.2, we deliberately require applications that
# handle improperly quoted URLs to do their own
# decoding and encoding. If valid use cases are
# presented, we may relax this by using latin-1
# decoding internally for 3.3
_implicit_encoding = 'ascii'
_implicit_errors = 'strict'

def _noop(obj):
    return obj

def _encode_result(obj, encoding=_implicit_encoding,
                        errors=_implicit_errors):
    return obj.encode(encoding, errors)

def _decode_args(args, encoding=_implicit_encoding,
                       errors=_implicit_errors):
    return tuple(x.decode(encoding, errors) if x else '' for x in args)

def _coerce_args(*args):
    # Invokes decode if necessary to create str args
    # and returns the coerced inputs along with
    # an appropriate result coercion function
    #   - noop for str inputs
    #   - encoding function otherwise
    str_input = isinstance(args[0], str)
    for arg in args[1:]:
        # We special-case the empty string to support the
        # "scheme=''" default argument to some functions
        if arg and isinstance(arg, str) != str_input:
            raise TypeError("Cannot mix str and non-str arguments")
    if str_input:
        return args + (_noop,)
    return _decode_args(args) + (_encode_result,)
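The coercion machinery above is what lets the parsing functions accept either `str` or `bytes` and hand back the same type. A short sketch, using the stdlib `urllib.parse` on the assumption that it matches this backport:

```python
from urllib.parse import urlparse

# str input yields str components; bytes input yields bytes components.
str_netloc = urlparse("http://a/b").netloc      # 'a'
bytes_netloc = urlparse(b"http://a/b").netloc   # b'a'

# Mixing str and bytes arguments is rejected, as _coerce_args enforces:
try:
    urlparse(b"http://a/b", scheme="http")
    mixed_ok = True
except TypeError:
    mixed_ok = False
print(str_netloc, bytes_netloc, mixed_ok)
```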

# Result objects are more helpful than simple tuples
class _ResultMixinStr(object):
    """Standard approach to encoding parsed results from str to bytes"""
    __slots__ = ()

    def encode(self, encoding='ascii', errors='strict'):
        return self._encoded_counterpart(*(x.encode(encoding, errors) for x in self))


class _ResultMixinBytes(object):
    """Standard approach to decoding parsed results from bytes to str"""
    __slots__ = ()

    def decode(self, encoding='ascii', errors='strict'):
        return self._decoded_counterpart(*(x.decode(encoding, errors) for x in self))


class _NetlocResultMixinBase(object):
    """Shared methods for the parsed result objects containing a netloc element"""
    __slots__ = ()

    @property
    def username(self):
        return self._userinfo[0]

    @property
    def password(self):
        return self._userinfo[1]

    @property
    def hostname(self):
        hostname = self._hostinfo[0]
        if not hostname:
            hostname = None
        elif hostname is not None:
            hostname = hostname.lower()
        return hostname

    @property
    def port(self):
        port = self._hostinfo[1]
        if port is not None:
            port = int(port, 10)
            # Return None on an illegal port
            if not ( 0 <= port <= 65535):
                return None
        return port
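The `username`, `password`, `hostname` and `port` properties defined above can be exercised through the stdlib equivalent of this backport (assumption: behaviour matches, as this file is a port of the Python 3.3 module):

```python
from urllib.parse import urlsplit

parts = urlsplit("http://User:Pa55@Example.COM:8042/path")
print(parts.username)  # 'User'         (userinfo case is preserved)
print(parts.password)  # 'Pa55'
print(parts.hostname)  # 'example.com'  (hostname is lowercased)
print(parts.port)      # 8042, as an int
```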


class _NetlocResultMixinStr(_NetlocResultMixinBase, _ResultMixinStr):
    __slots__ = ()

    @property
    def _userinfo(self):
        netloc = self.netloc
        userinfo, have_info, hostinfo = netloc.rpartition('@')
        if have_info:
            username, have_password, password = userinfo.partition(':')
            if not have_password:
                password = None
        else:
            username = password = None
        return username, password

    @property
    def _hostinfo(self):
        netloc = self.netloc
        _, _, hostinfo = netloc.rpartition('@')
        _, have_open_br, bracketed = hostinfo.partition('[')
        if have_open_br:
            hostname, _, port = bracketed.partition(']')
            _, have_port, port = port.partition(':')
        else:
            hostname, have_port, port = hostinfo.partition(':')
        if not have_port:
            port = None
        return hostname, port


class _NetlocResultMixinBytes(_NetlocResultMixinBase, _ResultMixinBytes):
    __slots__ = ()

    @property
    def _userinfo(self):
        netloc = self.netloc
        userinfo, have_info, hostinfo = netloc.rpartition(b'@')
        if have_info:
            username, have_password, password = userinfo.partition(b':')
            if not have_password:
                password = None
        else:
            username = password = None
        return username, password

    @property
    def _hostinfo(self):
        netloc = self.netloc
        _, _, hostinfo = netloc.rpartition(b'@')
        _, have_open_br, bracketed = hostinfo.partition(b'[')
        if have_open_br:
            hostname, _, port = bracketed.partition(b']')
            _, have_port, port = port.partition(b':')
        else:
            hostname, have_port, port = hostinfo.partition(b':')
        if not have_port:
            port = None
        return hostname, port


from collections import namedtuple

_DefragResultBase = namedtuple('DefragResult', 'url fragment')
_SplitResultBase = namedtuple('SplitResult', 'scheme netloc path query fragment')
_ParseResultBase = namedtuple('ParseResult', 'scheme netloc path params query fragment')

# For backwards compatibility, alias _NetlocResultMixinStr
# ResultBase is no longer part of the documented API, but it is
# retained since deprecating it isn't worth the hassle
ResultBase = _NetlocResultMixinStr

# Structured result objects for string data
class DefragResult(_DefragResultBase, _ResultMixinStr):
    __slots__ = ()
    def geturl(self):
        if self.fragment:
            return self.url + '#' + self.fragment
        else:
            return self.url

class SplitResult(_SplitResultBase, _NetlocResultMixinStr):
    __slots__ = ()
    def geturl(self):
        return urlunsplit(self)

class ParseResult(_ParseResultBase, _NetlocResultMixinStr):
    __slots__ = ()
    def geturl(self):
        return urlunparse(self)

# Structured result objects for bytes data
class DefragResultBytes(_DefragResultBase, _ResultMixinBytes):
    __slots__ = ()
    def geturl(self):
        if self.fragment:
            return self.url + b'#' + self.fragment
        else:
            return self.url

class SplitResultBytes(_SplitResultBase, _NetlocResultMixinBytes):
    __slots__ = ()
    def geturl(self):
        return urlunsplit(self)

class ParseResultBytes(_ParseResultBase, _NetlocResultMixinBytes):
    __slots__ = ()
    def geturl(self):
        return urlunparse(self)

# Set up the encode/decode result pairs
def _fix_result_transcoding():
    _result_pairs = (
        (DefragResult, DefragResultBytes),
        (SplitResult, SplitResultBytes),
        (ParseResult, ParseResultBytes),
    )
    for _decoded, _encoded in _result_pairs:
        _decoded._encoded_counterpart = _encoded
        _encoded._decoded_counterpart = _decoded

_fix_result_transcoding()
del _fix_result_transcoding

def urlparse(url, scheme='', allow_fragments=True):
    """Parse a URL into 6 components:
    <scheme>://<netloc>/<path>;<params>?<query>#<fragment>
    Return a 6-tuple: (scheme, netloc, path, params, query, fragment).
    Note that we don't break the components up in smaller bits
    (e.g. netloc is a single string) and we don't expand % escapes."""
    url, scheme, _coerce_result = _coerce_args(url, scheme)
    splitresult = urlsplit(url, scheme, allow_fragments)
    scheme, netloc, url, query, fragment = splitresult
    if scheme in uses_params and ';' in url:
        url, params = _splitparams(url)
    else:
        params = ''
    result = ParseResult(scheme, netloc, url, params, query, fragment)
    return _coerce_result(result)
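The six components named in the docstring map onto the fields of the returned `ParseResult`; sketched here with the stdlib `urllib.parse`, which this function backports:

```python
from urllib.parse import urlparse

r = urlparse("http://netloc/path;param?query=arg#frag")
# ParseResult(scheme='http', netloc='netloc', path='/path',
#             params='param', query='query=arg', fragment='frag')
print(r)
print(r.geturl())  # round-trips back to the original URL
```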

def _splitparams(url):
    if '/'  in url:
        i = url.find(';', url.rfind('/'))
        if i < 0:
            return url, ''
    else:
        i = url.find(';')
    return url[:i], url[i+1:]

def _splitnetloc(url, start=0):
    delim = len(url)   # position of end of domain part of url, default is end
    for c in '/?#':    # look for delimiters; the order is NOT important
        wdelim = url.find(c, start)        # find first of this delim
        if wdelim >= 0:                    # if found
            delim = min(delim, wdelim)     # use earliest delim position
    return url[start:delim], url[delim:]   # return (domain, rest)

def urlsplit(url, scheme='', allow_fragments=True):
    """Parse a URL into 5 components:
    <scheme>://<netloc>/<path>?<query>#<fragment>
    Return a 5-tuple: (scheme, netloc, path, query, fragment).
    Note that we don't break the components up in smaller bits
    (e.g. netloc is a single string) and we don't expand % escapes."""
    url, scheme, _coerce_result = _coerce_args(url, scheme)
    allow_fragments = bool(allow_fragments)
    key = url, scheme, allow_fragments, type(url), type(scheme)
    cached = _parse_cache.get(key, None)
    if cached:
        return _coerce_result(cached)
    if len(_parse_cache) >= MAX_CACHE_SIZE: # avoid runaway growth
        clear_cache()
    netloc = query = fragment = ''
    i = url.find(':')
    if i > 0:
        if url[:i] == 'http': # optimize the common case
            scheme = url[:i].lower()
            url = url[i+1:]
            if url[:2] == '//':
                netloc, url = _splitnetloc(url, 2)
                if (('[' in netloc and ']' not in netloc) or
                        (']' in netloc and '[' not in netloc)):
                    raise ValueError("Invalid IPv6 URL")
            if allow_fragments and '#' in url:
                url, fragment = url.split('#', 1)
            if '?' in url:
                url, query = url.split('?', 1)
            v = SplitResult(scheme, netloc, url, query, fragment)
            _parse_cache[key] = v
            return _coerce_result(v)
        for c in url[:i]:
            if c not in scheme_chars:
                break
        else:
            # make sure "url" is not actually a port number (in which case
            # "scheme" is really part of the path)
            rest = url[i+1:]
            if not rest or any(c not in '0123456789' for c in rest):
                # not a port number
                scheme, url = url[:i].lower(), rest

    if url[:2] == '//':
        netloc, url = _splitnetloc(url, 2)
        if (('[' in netloc and ']' not in netloc) or
                (']' in netloc and '[' not in netloc)):
            raise ValueError("Invalid IPv6 URL")
    if allow_fragments and '#' in url:
        url, fragment = url.split('#', 1)
    if '?' in url:
        url, query = url.split('?', 1)
    v = SplitResult(scheme, netloc, url, query, fragment)
    _parse_cache[key] = v
    return _coerce_result(v)
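
A minimal usage sketch of `urlsplit`, shown via the stdlib `urllib.parse` (assumed equivalent to this backport): the result is a 5-tuple and `;params` stay attached to the path.

```python
# urlsplit breaks a URL into (scheme, netloc, path, query, fragment);
# unlike urlparse it does not separate ;params from the path.
from urllib.parse import urlsplit

parts = urlsplit('http://example.com/a/b;p?x=1#frag')
```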

def urlunparse(components):
    """Put a parsed URL back together again.  This may result in a
    slightly different, but equivalent URL, if the URL that was parsed
    originally had redundant delimiters, e.g. a ? with an empty query
    (the draft states that these are equivalent)."""
    scheme, netloc, url, params, query, fragment, _coerce_result = (
                                                  _coerce_args(*components))
    if params:
        url = "%s;%s" % (url, params)
    return _coerce_result(urlunsplit((scheme, netloc, url, query, fragment)))

def urlunsplit(components):
    """Combine the elements of a tuple as returned by urlsplit() into a
    complete URL as a string. The data argument can be any five-item iterable.
    This may result in a slightly different, but equivalent URL, if the URL that
    was parsed originally had unnecessary delimiters (for example, a ? with an
    empty query; the RFC states that these are equivalent)."""
    scheme, netloc, url, query, fragment, _coerce_result = (
                                          _coerce_args(*components))
    if netloc or (scheme and scheme in uses_netloc and url[:2] != '//'):
        if url and url[:1] != '/': url = '/' + url
        url = '//' + (netloc or '') + url
    if scheme:
        url = scheme + ':' + url
    if query:
        url = url + '?' + query
    if fragment:
        url = url + '#' + fragment
    return _coerce_result(url)
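
As the docstring notes, reassembly is only guaranteed up to redundant delimiters; for a URL without them, a split/unsplit round trip is exact. A small sketch using the stdlib equivalents:

```python
# A urlsplit -> urlunsplit round trip reproduces the URL exactly when
# the original has no redundant delimiters (e.g. no empty '?').
from urllib.parse import urlsplit, urlunsplit

url = 'https://host:8080/p/q?x=1#f'
roundtrip = urlunsplit(urlsplit(url))
```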

def urljoin(base, url, allow_fragments=True):
    """Join a base URL and a possibly relative URL to form an absolute
    interpretation of the latter."""
    if not base:
        return url
    if not url:
        return base
    base, url, _coerce_result = _coerce_args(base, url)
    bscheme, bnetloc, bpath, bparams, bquery, bfragment = \
            urlparse(base, '', allow_fragments)
    scheme, netloc, path, params, query, fragment = \
            urlparse(url, bscheme, allow_fragments)
    if scheme != bscheme or scheme not in uses_relative:
        return _coerce_result(url)
    if scheme in uses_netloc:
        if netloc:
            return _coerce_result(urlunparse((scheme, netloc, path,
                                              params, query, fragment)))
        netloc = bnetloc
    if path[:1] == '/':
        return _coerce_result(urlunparse((scheme, netloc, path,
                                          params, query, fragment)))
    if not path and not params:
        path = bpath
        params = bparams
        if not query:
            query = bquery
        return _coerce_result(urlunparse((scheme, netloc, path,
                                          params, query, fragment)))
    segments = bpath.split('/')[:-1] + path.split('/')
    # XXX The stuff below is bogus in various ways...
    if segments[-1] == '.':
        segments[-1] = ''
    while '.' in segments:
        segments.remove('.')
    while 1:
        i = 1
        n = len(segments) - 1
        while i < n:
            if (segments[i] == '..'
                and segments[i-1] not in ('', '..')):
                del segments[i-1:i+1]
                break
            i = i+1
        else:
            break
    if segments == ['', '..']:
        segments[-1] = ''
    elif len(segments) >= 2 and segments[-1] == '..':
        segments[-2:] = ['']
    return _coerce_result(urlunparse((scheme, netloc, '/'.join(segments),
                                      params, query, fragment)))
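
The segment-merging logic above implements relative reference resolution; a sketch of the common cases, via the stdlib `urljoin` (assumed equivalent):

```python
# Relative reference resolution against a base URL.
from urllib.parse import urljoin

sibling = urljoin('http://a/b/c/d', 'g')     # replace last segment
parent = urljoin('http://a/b/c/d', '../g')   # climb one level
rooted = urljoin('http://a/b/c/d', '/g')     # absolute path
```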

def urldefrag(url):
    """Removes any existing fragment from URL.

    Returns a tuple of the defragmented URL and the fragment.  If
    the URL contained no fragments, the second element is the
    empty string.
    """
    url, _coerce_result = _coerce_args(url)
    if '#' in url:
        s, n, p, a, q, frag = urlparse(url)
        defrag = urlunparse((s, n, p, a, q, ''))
    else:
        frag = ''
        defrag = url
    return _coerce_result(DefragResult(defrag, frag))
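
A quick sketch of `urldefrag`'s contract (stdlib equivalent assumed): the fragment slot is the empty string when no `#` is present.

```python
# urldefrag strips the fragment; the second element is '' when absent.
from urllib.parse import urldefrag

url, frag = urldefrag('http://h/p#sec')
plain, nofrag = urldefrag('http://h/p')
```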

_hexdig = '0123456789ABCDEFabcdef'
_hextobyte = dict(((a + b).encode(), bytes([int(a + b, 16)]))
                  for a in _hexdig for b in _hexdig)

def unquote_to_bytes(string):
    """unquote_to_bytes('abc%20def') -> b'abc def'."""
    # Note: strings are encoded as UTF-8. This is only an issue if it contains
    # unescaped non-ASCII characters, which URIs should not.
    if not string:
        # Is it a string-like object?
        string.split
        return bytes(b'')
    if isinstance(string, str):
        string = string.encode('utf-8')
    ### For Python-Future:
    # It is already a byte-string object, but force it to be newbytes here on
    # Py2:
    string = bytes(string)
    ###
    bits = string.split(b'%')
    if len(bits) == 1:
        return string
    res = [bits[0]]
    append = res.append
    for item in bits[1:]:
        try:
            append(_hextobyte[item[:2]])
            append(item[2:])
        except KeyError:
            append(b'%')
            append(item)
    return bytes(b'').join(res)
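
The `KeyError` fallback above means malformed escapes pass through untouched; a sketch via the stdlib `unquote_to_bytes` (assumed equivalent):

```python
# Percent escapes decode to raw bytes; malformed escapes pass through.
from urllib.parse import unquote_to_bytes

raw = unquote_to_bytes('abc%20def%ff')
bad = unquote_to_bytes('abc%zzdef')
```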

_asciire = re.compile('([\x00-\x7f]+)')

def unquote(string, encoding='utf-8', errors='replace'):
    """Replace %xx escapes by their single-character equivalent. The optional
    encoding and errors parameters specify how to decode percent-encoded
    sequences into Unicode characters, as accepted by the bytes.decode()
    method.
    By default, percent-encoded sequences are decoded with UTF-8, and invalid
    sequences are replaced by a placeholder character.

    unquote('abc%20def') -> 'abc def'.
    """
    if '%' not in string:
        string.split
        return string
    if encoding is None:
        encoding = 'utf-8'
    if errors is None:
        errors = 'replace'
    bits = _asciire.split(string)
    res = [bits[0]]
    append = res.append
    for i in range(1, len(bits), 2):
        append(unquote_to_bytes(bits[i]).decode(encoding, errors))
        append(bits[i + 1])
    return ''.join(res)
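
A sketch of the decoding behavior described in the docstring, using the stdlib `unquote` (assumed equivalent): with the default `errors='replace'`, bytes that are not valid UTF-8 become U+FFFD instead of raising.

```python
# unquote decodes percent-escapes as UTF-8; undecodable bytes are
# replaced with U+FFFD under the default errors='replace'.
from urllib.parse import unquote

ok = unquote('abc%20def')
bad = unquote('abc%ffdef')
```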

def parse_qs(qs, keep_blank_values=False, strict_parsing=False,
             encoding='utf-8', errors='replace'):
    """Parse a query given as a string argument.

        Arguments:

        qs: percent-encoded query string to be parsed

        keep_blank_values: flag indicating whether blank values in
            percent-encoded queries should be treated as blank strings.
            A true value indicates that blanks should be retained as
            blank strings.  The default false value indicates that
            blank values are to be ignored and treated as if they were
            not included.

        strict_parsing: flag indicating what to do with parsing errors.
            If false (the default), errors are silently ignored.
            If true, errors raise a ValueError exception.

        encoding and errors: specify how to decode percent-encoded sequences
            into Unicode characters, as accepted by the bytes.decode() method.
    """
    parsed_result = {}
    pairs = parse_qsl(qs, keep_blank_values, strict_parsing,
                      encoding=encoding, errors=errors)
    for name, value in pairs:
        if name in parsed_result:
            parsed_result[name].append(value)
        else:
            parsed_result[name] = [value]
    return parsed_result
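
A sketch of `parse_qs` grouping repeated names into lists, via the stdlib equivalent. Note this backport also splits pairs on ';' (see `parse_qsl` below); newer stdlib versions split only on '&', so the example sticks to '&'.

```python
# parse_qs collects repeated names into lists; keep_blank_values
# controls whether empty values survive.
from urllib.parse import parse_qs

grouped = parse_qs('a=1&a=2&b=3')
kept = parse_qs('a=&b=1', keep_blank_values=True)
```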

def parse_qsl(qs, keep_blank_values=False, strict_parsing=False,
              encoding='utf-8', errors='replace'):
    """Parse a query given as a string argument.

    Arguments:

    qs: percent-encoded query string to be parsed

    keep_blank_values: flag indicating whether blank values in
        percent-encoded queries should be treated as blank strings.  A
        true value indicates that blanks should be retained as blank
        strings.  The default false value indicates that blank values
        are to be ignored and treated as if they were  not included.

    strict_parsing: flag indicating what to do with parsing errors. If
        false (the default), errors are silently ignored. If true,
        errors raise a ValueError exception.

    encoding and errors: specify how to decode percent-encoded sequences
        into Unicode characters, as accepted by the bytes.decode() method.

    Returns a list of name, value pairs.
    """
    qs, _coerce_result = _coerce_args(qs)
    pairs = [s2 for s1 in qs.split('&') for s2 in s1.split(';')]
    r = []
    for name_value in pairs:
        if not name_value and not strict_parsing:
            continue
        nv = name_value.split('=', 1)
        if len(nv) != 2:
            if strict_parsing:
                raise ValueError("bad query field: %r" % (name_value,))
            # Handle case of a control-name with no equal sign
            if keep_blank_values:
                nv.append('')
            else:
                continue
        if len(nv[1]) or keep_blank_values:
            name = nv[0].replace('+', ' ')
            name = unquote(name, encoding=encoding, errors=errors)
            name = _coerce_result(name)
            value = nv[1].replace('+', ' ')
            value = unquote(value, encoding=encoding, errors=errors)
            value = _coerce_result(value)
            r.append((name, value))
    return r

def unquote_plus(string, encoding='utf-8', errors='replace'):
    """Like unquote(), but also replace plus signs by spaces, as required for
    unquoting HTML form values.

    unquote_plus('%7e/abc+def') -> '~/abc def'
    """
    string = string.replace('+', ' ')
    return unquote(string, encoding, errors)

_ALWAYS_SAFE = frozenset(bytes(b'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
                               b'abcdefghijklmnopqrstuvwxyz'
                               b'0123456789'
                               b'_.-'))
_ALWAYS_SAFE_BYTES = bytes(_ALWAYS_SAFE)
_safe_quoters = {}

class Quoter(collections.defaultdict):
    """A mapping from bytes (in range(0,256)) to strings.

    String values are percent-encoded byte values, unless the key < 128, and
    in the "safe" set (either the specified safe set, or default set).
    """
    # Keeps a cache internally, using defaultdict, for efficiency (lookups
    # of cached keys don't call Python code at all).
    def __init__(self, safe):
        """safe: bytes object."""
        self.safe = _ALWAYS_SAFE.union(bytes(safe))

    def __repr__(self):
        # Without this, will just display as a defaultdict
        return "<Quoter %r>" % dict(self)

    def __missing__(self, b):
        # Handle a cache miss. Store quoted string in cache and return.
        res = chr(b) if b in self.safe else '%{0:02X}'.format(b)
        self[b] = res
        return res

def quote(string, safe='/', encoding=None, errors=None):
    """quote('abc def') -> 'abc%20def'

    Each part of a URL, e.g. the path info, the query, etc., has a
    different set of reserved characters that must be quoted.

    RFC 2396 Uniform Resource Identifiers (URI): Generic Syntax lists
    the following reserved characters.

    reserved    = ";" | "/" | "?" | ":" | "@" | "&" | "=" | "+" |
                  "$" | ","

    Each of these characters is reserved in some component of a URL,
    but not necessarily in all of them.

    By default, the quote function is intended for quoting the path
    section of a URL.  Thus, it will not encode '/'.  This character
    is reserved, but in typical usage the quote function is being
    called on a path where the existing slash characters are used as
    reserved characters.

    string and safe may be either str or bytes objects. encoding must
    not be specified if string is a str.

    The optional encoding and errors parameters specify how to deal with
    non-ASCII characters, as accepted by the str.encode method.
    By default, encoding='utf-8' (characters are encoded with UTF-8), and
    errors='strict' (unsupported characters raise a UnicodeEncodeError).
    """
    if isinstance(string, str):
        if not string:
            return string
        if encoding is None:
            encoding = 'utf-8'
        if errors is None:
            errors = 'strict'
        string = string.encode(encoding, errors)
    else:
        if encoding is not None:
            raise TypeError("quote() doesn't support 'encoding' for bytes")
        if errors is not None:
            raise TypeError("quote() doesn't support 'errors' for bytes")
    return quote_from_bytes(string, safe)

def quote_plus(string, safe='', encoding=None, errors=None):
    """Like quote(), but also replace ' ' with '+', as required for quoting
    HTML form values. Plus signs in the original string are escaped unless
    they are included in safe. Unlike quote(), safe defaults to ''.
    """
    # Check if ' ' in string, where string may either be a str or bytes.  If
    # there are no spaces, the regular quote will produce the right answer.
    if ((isinstance(string, str) and ' ' not in string) or
        (isinstance(string, bytes) and b' ' not in string)):
        return quote(string, safe, encoding, errors)
    if isinstance(safe, str):
        space = str(' ')
    else:
        space = bytes(b' ')
    string = quote(string, safe + space, encoding, errors)
    return string.replace(' ', '+')
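
A sketch of the difference the docstrings describe, via the stdlib `quote` and `quote_plus` (assumed equivalent to this backport):

```python
# quote keeps '/' safe by default (path quoting); quote_plus does not,
# and encodes spaces as '+' as required for form data.
from urllib.parse import quote, quote_plus

path = quote('/a b/c')
form = quote_plus('a b&c')
```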

def quote_from_bytes(bs, safe='/'):
    """Like quote(), but accepts a bytes object rather than a str, and does
    not perform string-to-bytes encoding.  It always returns an ASCII string.
    quote_from_bytes(b'abc def\x3f') -> 'abc%20def%3f'
    """
    if not isinstance(bs, (bytes, bytearray)):
        raise TypeError("quote_from_bytes() expected bytes")
    if not bs:
        return str('')
    ### For Python-Future:
    bs = bytes(bs)
    ###
    if isinstance(safe, str):
        # Normalize 'safe' by converting to bytes and removing non-ASCII chars
        safe = str(safe).encode('ascii', 'ignore')
    else:
        ### For Python-Future:
        safe = bytes(safe)
        ###
        safe = bytes([c for c in safe if c < 128])
    if not bs.rstrip(_ALWAYS_SAFE_BYTES + safe):
        return bs.decode()
    try:
        quoter = _safe_quoters[safe]
    except KeyError:
        _safe_quoters[safe] = quoter = Quoter(safe).__getitem__
    return str('').join([quoter(char) for char in bs])

def urlencode(query, doseq=False, safe='', encoding=None, errors=None):
    """Encode a sequence of two-element tuples or dictionary into a URL query string.

    If any values in the query arg are sequences and doseq is true, each
    sequence element is converted to a separate parameter.

    If the query arg is a sequence of two-element tuples, the order of the
    parameters in the output will match the order of parameters in the
    input.

    The query arg may be either a string or a bytes type. When the query arg
    is a string, the safe, encoding and errors parameters are passed to
    quote_plus for encoding.
    """

    if hasattr(query, "items"):
        query = query.items()
    else:
        # It's a bother at times that strings and string-like objects are
        # sequences.
        try:
            # non-sequence items should not work with len()
            # non-empty strings will fail this
            if len(query) and not isinstance(query[0], tuple):
                raise TypeError
            # Zero-length sequences of all types will get here and succeed,
            # but that's a minor nit.  Since the original implementation
            # allowed empty dicts that type of behavior probably should be
            # preserved for consistency
        except TypeError:
            ty, va, tb = sys.exc_info()
            raise_with_traceback(TypeError("not a valid non-string sequence "
                                           "or mapping object"), tb)

    l = []
    if not doseq:
        for k, v in query:
            if isinstance(k, bytes):
                k = quote_plus(k, safe)
            else:
                k = quote_plus(str(k), safe, encoding, errors)

            if isinstance(v, bytes):
                v = quote_plus(v, safe)
            else:
                v = quote_plus(str(v), safe, encoding, errors)
            l.append(k + '=' + v)
    else:
        for k, v in query:
            if isinstance(k, bytes):
                k = quote_plus(k, safe)
            else:
                k = quote_plus(str(k), safe, encoding, errors)

            if isinstance(v, bytes):
                v = quote_plus(v, safe)
                l.append(k + '=' + v)
            elif isinstance(v, str):
                v = quote_plus(v, safe, encoding, errors)
                l.append(k + '=' + v)
            else:
                try:
                    # Is this a sufficient test for sequence-ness?
                    x = len(v)
                except TypeError:
                    # not a sequence
                    v = quote_plus(str(v), safe, encoding, errors)
                    l.append(k + '=' + v)
                else:
                    # loop over the sequence
                    for elt in v:
                        if isinstance(elt, bytes):
                            elt = quote_plus(elt, safe)
                        else:
                            elt = quote_plus(str(elt), safe, encoding, errors)
                        l.append(k + '=' + elt)
    return str('&').join(l)
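
A sketch of the two branches above, via the stdlib `urlencode` (assumed equivalent): tuple input preserves order, and `doseq=True` expands sequence values into one pair per element.

```python
# doseq=False stringifies sequence values whole; doseq=True emits one
# name=value pair per sequence element.
from urllib.parse import urlencode

flat = urlencode([('a', 'b c'), ('x', 1)])
expanded = urlencode({'k': [1, 2]}, doseq=True)
```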

# Utilities to parse URLs (most of these return None for missing parts):
# unwrap('<URL:type://host/path>') --> 'type://host/path'
# splittype('type:opaquestring') --> 'type', 'opaquestring'
# splithost('//host[:port]/path') --> 'host[:port]', '/path'
# splituser('user[:passwd]@host[:port]') --> 'user[:passwd]', 'host[:port]'
# splitpasswd('user:passwd') -> 'user', 'passwd'
# splitport('host:port') --> 'host', 'port'
# splitquery('/path?query') --> '/path', 'query'
# splittag('/path#tag') --> '/path', 'tag'
# splitattr('/path;attr1=value1;attr2=value2;...') ->
#   '/path', ['attr1=value1', 'attr2=value2', ...]
# splitvalue('attr=value') --> 'attr', 'value'
# urllib.parse.unquote('abc%20def') -> 'abc def'
# quote('abc def') -> 'abc%20def'

def to_bytes(url):
    """to_bytes(u"URL") --> 'URL'."""
    # Most URL schemes require ASCII. If that changes, the conversion
    # can be relaxed.
    # XXX get rid of to_bytes()
    if isinstance(url, str):
        try:
            url = url.encode("ASCII").decode()
        except UnicodeError:
            raise UnicodeError("URL " + repr(url) +
                               " contains non-ASCII characters")
    return url

def unwrap(url):
    """unwrap('<URL:type://host/path>') --> 'type://host/path'."""
    url = str(url).strip()
    if url[:1] == '<' and url[-1:] == '>':
        url = url[1:-1].strip()
    if url[:4] == 'URL:': url = url[4:].strip()
    return url

_typeprog = None
def splittype(url):
    """splittype('type:opaquestring') --> 'type', 'opaquestring'."""
    global _typeprog
    if _typeprog is None:
        import re
        _typeprog = re.compile('^([^/:]+):')

    match = _typeprog.match(url)
    if match:
        scheme = match.group(1)
        return scheme.lower(), url[len(scheme) + 1:]
    return None, url

_hostprog = None
def splithost(url):
    """splithost('//host[:port]/path') --> 'host[:port]', '/path'."""
    global _hostprog
    if _hostprog is None:
        import re
        _hostprog = re.compile('^//([^/?]*)(.*)$')

    match = _hostprog.match(url)
    if match:
        host_port = match.group(1)
        path = match.group(2)
        if path and not path.startswith('/'):
            path = '/' + path
        return host_port, path
    return None, url

_userprog = None
def splituser(host):
    """splituser('user[:passwd]@host[:port]') --> 'user[:passwd]', 'host[:port]'."""
    global _userprog
    if _userprog is None:
        import re
        _userprog = re.compile('^(.*)@(.*)$')

    match = _userprog.match(host)
    if match: return match.group(1, 2)
    return None, host

_passwdprog = None
def splitpasswd(user):
    """splitpasswd('user:passwd') -> 'user', 'passwd'."""
    global _passwdprog
    if _passwdprog is None:
        import re
        _passwdprog = re.compile('^([^:]*):(.*)$',re.S)

    match = _passwdprog.match(user)
    if match: return match.group(1, 2)
    return user, None

# splittag('/path#tag') --> '/path', 'tag'
_portprog = None
def splitport(host):
    """splitport('host:port') --> 'host', 'port'."""
    global _portprog
    if _portprog is None:
        import re
        _portprog = re.compile('^(.*):([0-9]+)$')

    match = _portprog.match(host)
    if match: return match.group(1, 2)
    return host, None
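
These `split*` helpers are deprecated in the modern stdlib, so the sketch below re-implements `splitport`'s pattern standalone; `split_port` is a name chosen here for illustration, not a stdlib function.

```python
import re

# Same pattern splitport compiles lazily; split_port is a hypothetical
# standalone equivalent of the helper above.
_portprog = re.compile(r'^(.*):([0-9]+)$')

def split_port(host):
    m = _portprog.match(host)
    return m.group(1, 2) if m else (host, None)
```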

_nportprog = None
def splitnport(host, defport=-1):
    """Split host and port, returning numeric port.
    Return given default port if no ':' found; defaults to -1.
    Return numerical port if a valid number is found after ':'.
    Return None if ':' but not a valid number."""
    global _nportprog
    if _nportprog is None:
        import re
        _nportprog = re.compile('^(.*):(.*)$')

    match = _nportprog.match(host)
    if match:
        host, port = match.group(1, 2)
        try:
            if not port: raise ValueError("no digits")
            nport = int(port)
        except ValueError:
            nport = None
        return host, nport
    return host, defport

_queryprog = None
def splitquery(url):
    """splitquery('/path?query') --> '/path', 'query'."""
    global _queryprog
    if _queryprog is None:
        import re
        _queryprog = re.compile(r'^(.*)\?([^?]*)$')

    match = _queryprog.match(url)
    if match: return match.group(1, 2)
    return url, None

_tagprog = None
def splittag(url):
    """splittag('/path#tag') --> '/path', 'tag'."""
    global _tagprog
    if _tagprog is None:
        import re
        _tagprog = re.compile('^(.*)#([^#]*)$')

    match = _tagprog.match(url)
    if match: return match.group(1, 2)
    return url, None

def splitattr(url):
    """splitattr('/path;attr1=value1;attr2=value2;...') ->
        '/path', ['attr1=value1', 'attr2=value2', ...]."""
    words = url.split(';')
    return words[0], words[1:]

_valueprog = None
def splitvalue(attr):
    """splitvalue('attr=value') --> 'attr', 'value'."""
    global _valueprog
    if _valueprog is None:
        import re
        _valueprog = re.compile('^([^=]*)=(.*)$')

    match = _valueprog.match(attr)
    if match: return match.group(1, 2)
    return attr, None
# ===== future/backports/urllib/request.py =====
"""
Ported using Python-Future from the Python 3.3 standard library.

An extensible library for opening URLs using a variety of protocols

The simplest way to use this module is to call the urlopen function,
which accepts a string containing a URL or a Request object (described
below).  It opens the URL and returns the results as file-like
object; the returned object has some extra methods described below.

The OpenerDirector manages a collection of Handler objects that do
all the actual work.  Each Handler implements a particular protocol or
option.  The OpenerDirector is a composite object that invokes the
Handlers needed to open the requested URL.  For example, the
HTTPHandler performs HTTP GET and POST requests and deals with
non-error returns.  The HTTPRedirectHandler automatically deals with
HTTP 301, 302, 303 and 307 redirect errors, and the HTTPDigestAuthHandler
deals with digest authentication.

urlopen(url, data=None) -- Basic usage is the same as original
urllib.  pass the url and optionally data to post to an HTTP URL, and
get a file-like object back.  One difference is that you can also pass
a Request instance instead of URL.  Raises a URLError (subclass of
IOError); for HTTP errors, raises an HTTPError, which can also be
treated as a valid response.

build_opener -- Function that creates a new OpenerDirector instance.
Will install the default handlers.  Accepts one or more Handlers as
arguments, either instances or Handler classes that it will
instantiate.  If one of the arguments is a subclass of the default
handler, the argument will be installed instead of the default.

install_opener -- Installs a new opener as the default opener.

objects of interest:

OpenerDirector -- Sets up the User Agent as the Python-urllib client and manages
the Handler classes, while dealing with requests and responses.

Request -- An object that encapsulates the state of a request.  The
state can be as simple as the URL.  It can also include extra HTTP
headers, e.g. a User-Agent.

BaseHandler --

internals:
BaseHandler and parent
_call_chain conventions

Example usage:

import urllib.request

# set up authentication info
authinfo = urllib.request.HTTPBasicAuthHandler()
authinfo.add_password(realm='PDQ Application',
                      uri='https://mahler:8092/site-updates.py',
                      user='klem',
                      passwd='geheim$parole')

proxy_support = urllib.request.ProxyHandler({"http" : "http://ahad-haam:3128"})

# build a new opener that adds authentication and caching FTP handlers
opener = urllib.request.build_opener(proxy_support, authinfo,
                                     urllib.request.CacheFTPHandler)

# install it
urllib.request.install_opener(opener)

f = urllib.request.urlopen('http://www.python.org/')
"""

# XXX issues:
# If an authentication error handler that tries to perform
# authentication for some reason but fails, how should the error be
# signalled?  The client needs to know the HTTP error code.  But if
# the handler knows what the problem was, e.g., that it didn't know
# the hash algorithm requested in the challenge, it would be good to
# pass that information along to the client, too.
# ftp errors aren't handled cleanly
# check digest against correct (i.e. non-apache) implementation

# Possible extensions:
# complex proxies  XXX not sure what exactly was meant by this
# abstract factory for opener

from __future__ import absolute_import, division, print_function, unicode_literals
from future.builtins import bytes, dict, filter, input, int, map, open, str
from future.utils import PY2, PY3, raise_with_traceback

import base64
import bisect
import hashlib
import array

from future.backports import email
from future.backports.http import client as http_client
from .error import URLError, HTTPError, ContentTooShortError
from .parse import (
    urlparse, urlsplit, urljoin, unwrap, quote, unquote,
    splittype, splithost, splitport, splituser, splitpasswd,
    splitattr, splitquery, splitvalue, splittag, to_bytes, urlunparse)
from .response import addinfourl, addclosehook

import io
import os
import posixpath
import re
import socket
import sys
import time
import tempfile
import contextlib
import warnings


if PY2:
    from collections import Iterable
else:
    from collections.abc import Iterable

# check for SSL
try:
    import ssl
    # Not available in the SSL module in Py2:
    from ssl import SSLContext
except ImportError:
    _have_ssl = False
else:
    _have_ssl = True

__all__ = [
    # Classes
    'Request', 'OpenerDirector', 'BaseHandler', 'HTTPDefaultErrorHandler',
    'HTTPRedirectHandler', 'HTTPCookieProcessor', 'ProxyHandler',
    'HTTPPasswordMgr', 'HTTPPasswordMgrWithDefaultRealm',
    'AbstractBasicAuthHandler', 'HTTPBasicAuthHandler', 'ProxyBasicAuthHandler',
    'AbstractDigestAuthHandler', 'HTTPDigestAuthHandler', 'ProxyDigestAuthHandler',
    'HTTPHandler', 'FileHandler', 'FTPHandler', 'CacheFTPHandler',
    'UnknownHandler', 'HTTPErrorProcessor',
    # Functions
    'urlopen', 'install_opener', 'build_opener',
    'pathname2url', 'url2pathname', 'getproxies',
    # Legacy interface
    'urlretrieve', 'urlcleanup', 'URLopener', 'FancyURLopener',
]

# used in User-Agent header sent
__version__ = sys.version[:3]

_opener = None
def urlopen(url, data=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, **_3to2kwargs):
    if 'cadefault' in _3to2kwargs: cadefault = _3to2kwargs['cadefault']; del _3to2kwargs['cadefault']
    else: cadefault = False
    if 'capath' in _3to2kwargs: capath = _3to2kwargs['capath']; del _3to2kwargs['capath']
    else: capath = None
    if 'cafile' in _3to2kwargs: cafile = _3to2kwargs['cafile']; del _3to2kwargs['cafile']
    else: cafile = None
    global _opener
    if cafile or capath or cadefault:
        if not _have_ssl:
            raise ValueError('SSL support not available')
        context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
        context.options |= ssl.OP_NO_SSLv2
        context.verify_mode = ssl.CERT_REQUIRED
        if cafile or capath:
            context.load_verify_locations(cafile, capath)
        else:
            context.set_default_verify_paths()
        https_handler = HTTPSHandler(context=context, check_hostname=True)
        opener = build_opener(https_handler)
    elif _opener is None:
        _opener = opener = build_opener()
    else:
        opener = _opener
    return opener.open(url, data, timeout)

def install_opener(opener):
    global _opener
    _opener = opener

_url_tempfiles = []
def urlretrieve(url, filename=None, reporthook=None, data=None):
    """
    Retrieve a URL into a temporary location on disk.

    Requires a URL argument. If a filename is passed, it is used as
    the temporary file location. The reporthook argument should be
    a callable that accepts a block number, a read size, and the
    total file size of the URL target. The data argument should be
    valid URL encoded data.

    If a filename is passed and the URL points to a local resource,
    the result is a copy from local file to new file.

    Returns a tuple containing the path to the newly created
    data file as well as the resulting HTTPMessage object.
    """
    url_type, path = splittype(url)

    with contextlib.closing(urlopen(url, data)) as fp:
        headers = fp.info()

        # Just return the local path and the "headers" for file://
        # URLs. No sense in performing a copy unless requested.
        if url_type == "file" and not filename:
            return os.path.normpath(path), headers

        # Handle temporary file setup.
        if filename:
            tfp = open(filename, 'wb')
        else:
            tfp = tempfile.NamedTemporaryFile(delete=False)
            filename = tfp.name
            _url_tempfiles.append(filename)

        with tfp:
            result = filename, headers
            bs = 1024*8
            size = -1
            read = 0
            blocknum = 0
            if "content-length" in headers:
                size = int(headers["Content-Length"])

            if reporthook:
                reporthook(blocknum, bs, size)

            while True:
                block = fp.read(bs)
                if not block:
                    break
                read += len(block)
                tfp.write(block)
                blocknum += 1
                if reporthook:
                    reporthook(blocknum, bs, size)

    if size >= 0 and read < size:
        raise ContentTooShortError(
            "retrieval incomplete: got only %i out of %i bytes"
            % (read, size), result)

    return result
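A minimal offline sketch of the reporthook contract urlretrieve() honors: the hook receives (block_number, block_size, total_size) once before the first read and once per block thereafter. The loop below mimics the function's read loop against an in-memory stream instead of a real network response; copy_with_hook is a hypothetical helper, not part of this module.

```python
import io

def copy_with_hook(fp, bs, size, reporthook):
    # Mirror of urlretrieve()'s inner loop: count bytes and blocks,
    # invoking the hook with (blocknum, blocksize, totalsize).
    read = 0
    blocknum = 0
    reporthook(blocknum, bs, size)       # initial call, before any data
    while True:
        block = fp.read(bs)
        if not block:
            break
        read += len(block)
        blocknum += 1
        reporthook(blocknum, bs, size)   # one call per block received
    return read

calls = []
data = b"x" * 20
read = copy_with_hook(io.BytesIO(data), 8, len(data),
                      lambda n, bs, total: calls.append((n, bs, total)))
# 20 bytes in 8-byte blocks -> reads of 8, 8, 4 -> hook called 1 + 3 times
```

The same callable can be passed as `reporthook=` to urlretrieve() to drive a progress bar; `total_size` is -1 when the server sends no Content-Length.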

def urlcleanup():
    for temp_file in _url_tempfiles:
        try:
            os.unlink(temp_file)
        except EnvironmentError:
            pass

    del _url_tempfiles[:]
    global _opener
    if _opener:
        _opener = None

if PY3:
    _cut_port_re = re.compile(r":\d+$", re.ASCII)
else:
    _cut_port_re = re.compile(r":\d+$")

def request_host(request):
    """Return request-host, as defined by RFC 2965.

    Variation from RFC: returned value is lowercased, for convenient
    comparison.
    """
    url = request.full_url
    host = urlparse(url)[1]
    if host == "":
        host = request.get_header("Host", "")

    # remove port, if present
    host = _cut_port_re.sub("", host, 1)
    return host.lower()
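An offline sketch of what request_host() produces: the host is taken from the URL (or the Host header as a fallback), the port is stripped, and the result is lowercased. The standard library's urllib.request, from which this backport derives, behaves identically here.

```python
from urllib.request import Request, request_host

# Mixed-case host with an explicit port: both are normalized away.
req = Request("http://Example.COM:8080/path?q=1")
host = request_host(req)   # port removed, case folded
```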

class Request(object):

    def __init__(self, url, data=None, headers={},
                 origin_req_host=None, unverifiable=False,
                 method=None):
        # unwrap('<URL:type://host/path>') --> 'type://host/path'
        self.full_url = unwrap(url)
        self.full_url, self.fragment = splittag(self.full_url)
        self.data = data
        self.headers = {}
        self._tunnel_host = None
        for key, value in headers.items():
            self.add_header(key, value)
        self.unredirected_hdrs = {}
        if origin_req_host is None:
            origin_req_host = request_host(self)
        self.origin_req_host = origin_req_host
        self.unverifiable = unverifiable
        self.method = method
        self._parse()

    def _parse(self):
        self.type, rest = splittype(self.full_url)
        if self.type is None:
            raise ValueError("unknown url type: %r" % self.full_url)
        self.host, self.selector = splithost(rest)
        if self.host:
            self.host = unquote(self.host)

    def get_method(self):
        """Return a string indicating the HTTP request method."""
        if self.method is not None:
            return self.method
        elif self.data is not None:
            return "POST"
        else:
            return "GET"

    def get_full_url(self):
        if self.fragment:
            return '%s#%s' % (self.full_url, self.fragment)
        else:
            return self.full_url

    # Begin deprecated methods

    def add_data(self, data):
        msg = "Request.add_data method is deprecated."
        warnings.warn(msg, DeprecationWarning, stacklevel=1)
        self.data = data

    def has_data(self):
        msg = "Request.has_data method is deprecated."
        warnings.warn(msg, DeprecationWarning, stacklevel=1)
        return self.data is not None

    def get_data(self):
        msg = "Request.get_data method is deprecated."
        warnings.warn(msg, DeprecationWarning, stacklevel=1)
        return self.data

    def get_type(self):
        msg = "Request.get_type method is deprecated."
        warnings.warn(msg, DeprecationWarning, stacklevel=1)
        return self.type

    def get_host(self):
        msg = "Request.get_host method is deprecated."
        warnings.warn(msg, DeprecationWarning, stacklevel=1)
        return self.host

    def get_selector(self):
        msg = "Request.get_selector method is deprecated."
        warnings.warn(msg, DeprecationWarning, stacklevel=1)
        return self.selector

    def is_unverifiable(self):
        msg = "Request.is_unverifiable method is deprecated."
        warnings.warn(msg, DeprecationWarning, stacklevel=1)
        return self.unverifiable

    def get_origin_req_host(self):
        msg = "Request.get_origin_req_host method is deprecated."
        warnings.warn(msg, DeprecationWarning, stacklevel=1)
        return self.origin_req_host

    # End deprecated methods

    def set_proxy(self, host, type):
        if self.type == 'https' and not self._tunnel_host:
            self._tunnel_host = self.host
        else:
            self.type = type
            self.selector = self.full_url
        self.host = host

    def has_proxy(self):
        return self.selector == self.full_url

    def add_header(self, key, val):
        # useful for something like authentication
        self.headers[key.capitalize()] = val

    def add_unredirected_header(self, key, val):
        # will not be added to a redirected request
        self.unredirected_hdrs[key.capitalize()] = val

    def has_header(self, header_name):
        return (header_name in self.headers or
                header_name in self.unredirected_hdrs)

    def get_header(self, header_name, default=None):
        return self.headers.get(
            header_name,
            self.unredirected_hdrs.get(header_name, default))

    def header_items(self):
        hdrs = self.unredirected_hdrs.copy()
        hdrs.update(self.headers)
        return list(hdrs.items())
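An offline sketch of Request's header bookkeeping, shown with the standard library's equivalent class: add_header() stores keys through str.capitalize() ("User-Agent" becomes "User-agent"), unredirected headers live in a separate dict that header_items() merges in, and get_method() infers POST whenever data is set.

```python
from urllib.request import Request

req = Request("http://example.com/", data=b"a=1")
req.add_header("User-Agent", "demo/1.0")
req.add_unredirected_header("Authorization", "Basic xyz")

method = req.get_method()              # data present, so POST
stored = req.has_header("User-agent")  # note the capitalize()d key
items = dict(req.header_items())       # merged view of both dicts
```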

class OpenerDirector(object):
    def __init__(self):
        client_version = "Python-urllib/%s" % __version__
        self.addheaders = [('User-agent', client_version)]
        # self.handlers is retained only for backward compatibility
        self.handlers = []
        # manage the individual handlers
        self.handle_open = {}
        self.handle_error = {}
        self.process_response = {}
        self.process_request = {}

    def add_handler(self, handler):
        if not hasattr(handler, "add_parent"):
            raise TypeError("expected BaseHandler instance, got %r" %
                            type(handler))

        added = False
        for meth in dir(handler):
            if meth in ["redirect_request", "do_open", "proxy_open"]:
                # oops, coincidental match
                continue

            i = meth.find("_")
            protocol = meth[:i]
            condition = meth[i+1:]

            if condition.startswith("error"):
                j = condition.find("_") + i + 1
                kind = meth[j+1:]
                try:
                    kind = int(kind)
                except ValueError:
                    pass
                lookup = self.handle_error.get(protocol, {})
                self.handle_error[protocol] = lookup
            elif condition == "open":
                kind = protocol
                lookup = self.handle_open
            elif condition == "response":
                kind = protocol
                lookup = self.process_response
            elif condition == "request":
                kind = protocol
                lookup = self.process_request
            else:
                continue

            handlers = lookup.setdefault(kind, [])
            if handlers:
                bisect.insort(handlers, handler)
            else:
                handlers.append(handler)
            added = True

        if added:
            bisect.insort(self.handlers, handler)
            handler.add_parent(self)

    def close(self):
        # Only exists for backwards compatibility.
        pass

    def _call_chain(self, chain, kind, meth_name, *args):
        # Handlers raise an exception if no one else should try to handle
        # the request, or return None if they can't but another handler
        # could.  Otherwise, they return the response.
        handlers = chain.get(kind, ())
        for handler in handlers:
            func = getattr(handler, meth_name)
            result = func(*args)
            if result is not None:
                return result

    def open(self, fullurl, data=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT):
        """
        Accept a URL or a Request object

        Python-Future: if the URL is passed as a byte-string, decode it first.
        """
        if isinstance(fullurl, bytes):
            fullurl = fullurl.decode()
        if isinstance(fullurl, str):
            req = Request(fullurl, data)
        else:
            req = fullurl
            if data is not None:
                req.data = data

        req.timeout = timeout
        protocol = req.type

        # pre-process request
        meth_name = protocol+"_request"
        for processor in self.process_request.get(protocol, []):
            meth = getattr(processor, meth_name)
            req = meth(req)

        response = self._open(req, data)

        # post-process response
        meth_name = protocol+"_response"
        for processor in self.process_response.get(protocol, []):
            meth = getattr(processor, meth_name)
            response = meth(req, response)

        return response

    def _open(self, req, data=None):
        result = self._call_chain(self.handle_open, 'default',
                                  'default_open', req)
        if result:
            return result

        protocol = req.type
        result = self._call_chain(self.handle_open, protocol, protocol +
                                  '_open', req)
        if result:
            return result

        return self._call_chain(self.handle_open, 'unknown',
                                'unknown_open', req)

    def error(self, proto, *args):
        if proto in ('http', 'https'):
            # XXX http[s] protocols are special-cased:
            # https errors go through the same table as http
            err_handlers = self.handle_error['http']
            proto = args[2]  # YUCK!
            meth_name = 'http_error_%s' % proto
            http_err = 1
            orig_args = args
        else:
            err_handlers = self.handle_error
            meth_name = proto + '_error'
            http_err = 0
        args = (err_handlers, proto, meth_name) + args
        result = self._call_chain(*args)
        if result:
            return result

        if http_err:
            args = (err_handlers, 'default', 'http_error_default') + orig_args
            return self._call_chain(*args)
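A standalone sketch of the name-dispatch rule add_handler() applies: a handler method named "<protocol>_<condition>" is indexed into one of the four tables, and "error" conditions additionally carry a kind (such as an HTTP status code, converted to int when possible). The classify helper below is hypothetical, written only to make the parsing rule concrete.

```python
def classify(meth):
    # Same slicing logic as OpenerDirector.add_handler().
    i = meth.find("_")
    protocol, condition = meth[:i], meth[i+1:]
    if condition.startswith("error"):
        j = condition.find("_") + i + 1
        kind = meth[j+1:]
        try:
            kind = int(kind)          # e.g. "302" -> 302
        except ValueError:
            pass
        return ("handle_error", protocol, kind)
    if condition in ("open", "response", "request"):
        table = {"open": "handle_open",
                 "response": "process_response",
                 "request": "process_request"}[condition]
        return (table, protocol, protocol)
    return None                       # not a dispatchable method name

a = classify("http_open")
b = classify("http_error_302")
c = classify("https_request")
```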

# XXX probably also want an abstract factory that knows when it makes
# sense to skip a superclass in favor of a subclass and when it might
# make sense to include both

def build_opener(*handlers):
    """Create an opener object from a list of handlers.

    The opener will use several default handlers, including support
    for HTTP, FTP and when applicable HTTPS.

    If any of the handlers passed as arguments are subclasses of the
    default handlers, the default handlers will not be used.
    """
    def isclass(obj):
        return isinstance(obj, type) or hasattr(obj, "__bases__")

    opener = OpenerDirector()
    default_classes = [ProxyHandler, UnknownHandler, HTTPHandler,
                       HTTPDefaultErrorHandler, HTTPRedirectHandler,
                       FTPHandler, FileHandler, HTTPErrorProcessor]
    if hasattr(http_client, "HTTPSConnection"):
        default_classes.append(HTTPSHandler)
    skip = set()
    for klass in default_classes:
        for check in handlers:
            if isclass(check):
                if issubclass(check, klass):
                    skip.add(klass)
            elif isinstance(check, klass):
                skip.add(klass)
    for klass in skip:
        default_classes.remove(klass)

    for klass in default_classes:
        opener.add_handler(klass())

    for h in handlers:
        if isclass(h):
            h = h()
        opener.add_handler(h)
    return opener
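An offline sketch of build_opener()'s skip rule, using the standard library's version: passing a subclass (or an instance of a subclass) of a default handler suppresses the default. No request is actually opened.

```python
import urllib.request

class QuietRedirectHandler(urllib.request.HTTPRedirectHandler):
    # Hypothetical subclass: tighter loop limit than the default 10.
    max_redirections = 3

opener = urllib.request.build_opener(QuietRedirectHandler)
installed = [type(h).__name__ for h in opener.handlers]
has_sub = "QuietRedirectHandler" in installed
has_default = "HTTPRedirectHandler" in installed   # skipped in favor of subclass
```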

class BaseHandler(object):
    handler_order = 500

    def add_parent(self, parent):
        self.parent = parent

    def close(self):
        # Only exists for backwards compatibility
        pass

    def __lt__(self, other):
        if not hasattr(other, "handler_order"):
            # Try to preserve the old behavior of having custom classes
            # inserted after default ones (works only for custom user
            # classes which are not aware of handler_order).
            return True
        return self.handler_order < other.handler_order


class HTTPErrorProcessor(BaseHandler):
    """Process HTTP error responses."""
    handler_order = 1000  # after all other processing

    def http_response(self, request, response):
        code, msg, hdrs = response.code, response.msg, response.info()

        # According to RFC 2616, "2xx" code indicates that the client's
        # request was successfully received, understood, and accepted.
        if not (200 <= code < 300):
            response = self.parent.error(
                'http', request, response, code, msg, hdrs)

        return response

    https_response = http_response

class HTTPDefaultErrorHandler(BaseHandler):
    def http_error_default(self, req, fp, code, msg, hdrs):
        raise HTTPError(req.full_url, code, msg, hdrs, fp)

class HTTPRedirectHandler(BaseHandler):
    # maximum number of redirections to any single URL
    # this is needed because of the state that cookies introduce
    max_repeats = 4
    # maximum total number of redirections (regardless of URL) before
    # assuming we're in a loop
    max_redirections = 10

    def redirect_request(self, req, fp, code, msg, headers, newurl):
        """Return a Request or None in response to a redirect.

        This is called by the http_error_30x methods when a
        redirection response is received.  If a redirection should
        take place, return a new Request to allow http_error_30x to
        perform the redirect.  Otherwise, raise HTTPError if no-one
        else should try to handle this url.  Return None if you can't
        but another Handler might.
        """
        m = req.get_method()
        if (not (code in (301, 302, 303, 307) and m in ("GET", "HEAD")
            or code in (301, 302, 303) and m == "POST")):
            raise HTTPError(req.full_url, code, msg, headers, fp)

        # Strictly (according to RFC 2616), 301 or 302 in response to
        # a POST MUST NOT cause a redirection without confirmation
        # from the user (of urllib.request, in this case).  In practice,
        # essentially all clients do redirect in this case, so we do
        # the same.
        # be lenient with URIs containing a space
        newurl = newurl.replace(' ', '%20')
        CONTENT_HEADERS = ("content-length", "content-type")
        newheaders = dict((k, v) for k, v in req.headers.items()
                          if k.lower() not in CONTENT_HEADERS)
        return Request(newurl,
                       headers=newheaders,
                       origin_req_host=req.origin_req_host,
                       unverifiable=True)

    # Implementation note: To avoid the server sending us into an
    # infinite loop, the request object needs to track what URLs we
    # have already seen.  Do this by adding a handler-specific
    # attribute to the Request object.
    def http_error_302(self, req, fp, code, msg, headers):
        # Some servers (incorrectly) return multiple Location headers
        # (so probably same goes for URI).  Use first header.
        if "location" in headers:
            newurl = headers["location"]
        elif "uri" in headers:
            newurl = headers["uri"]
        else:
            return

        # fix a possible malformed URL
        urlparts = urlparse(newurl)

        # For security reasons we don't allow redirection to anything other
        # than http, https or ftp.

        if urlparts.scheme not in ('http', 'https', 'ftp', ''):
            raise HTTPError(
                newurl, code,
                "%s - Redirection to url '%s' is not allowed" % (msg, newurl),
                headers, fp)

        if not urlparts.path:
            urlparts = list(urlparts)
            urlparts[2] = "/"
        newurl = urlunparse(urlparts)

        newurl = urljoin(req.full_url, newurl)

        # XXX Probably want to forget about the state of the current
        # request, although that might interact poorly with other
        # handlers that also use handler-specific request attributes
        new = self.redirect_request(req, fp, code, msg, headers, newurl)
        if new is None:
            return

        # loop detection
        # .redirect_dict has a key url if url was previously visited.
        if hasattr(req, 'redirect_dict'):
            visited = new.redirect_dict = req.redirect_dict
            if (visited.get(newurl, 0) >= self.max_repeats or
                len(visited) >= self.max_redirections):
                raise HTTPError(req.full_url, code,
                                self.inf_msg + msg, headers, fp)
        else:
            visited = new.redirect_dict = req.redirect_dict = {}
        visited[newurl] = visited.get(newurl, 0) + 1

        # Don't close the fp until we are sure that we won't use it
        # with HTTPError.
        fp.read()
        fp.close()

        return self.parent.open(new, timeout=req.timeout)

    http_error_301 = http_error_303 = http_error_307 = http_error_302

    inf_msg = "The HTTP server returned a redirect error that would " \
              "lead to an infinite loop.\n" \
              "The last 30x error message was:\n"


def _parse_proxy(proxy):
    """Return (scheme, user, password, host/port) given a URL or an authority.

    If a URL is supplied, it must have an authority (host:port) component.
    According to RFC 3986, having an authority component means the URL must
    have two slashes after the scheme:

    >>> _parse_proxy('file:/ftp.example.com/')
    Traceback (most recent call last):
    ValueError: proxy URL with no authority: 'file:/ftp.example.com/'

    The first three items of the returned tuple may be None.

    Examples of authority parsing:

    >>> _parse_proxy('proxy.example.com')
    (None, None, None, 'proxy.example.com')
    >>> _parse_proxy('proxy.example.com:3128')
    (None, None, None, 'proxy.example.com:3128')

    The authority component may optionally include userinfo (assumed to be
    username:password):

    >>> _parse_proxy('joe:password@proxy.example.com')
    (None, 'joe', 'password', 'proxy.example.com')
    >>> _parse_proxy('joe:password@proxy.example.com:3128')
    (None, 'joe', 'password', 'proxy.example.com:3128')

    Same examples, but with URLs instead:

    >>> _parse_proxy('http://proxy.example.com/')
    ('http', None, None, 'proxy.example.com')
    >>> _parse_proxy('http://proxy.example.com:3128/')
    ('http', None, None, 'proxy.example.com:3128')
    >>> _parse_proxy('http://joe:password@proxy.example.com/')
    ('http', 'joe', 'password', 'proxy.example.com')
    >>> _parse_proxy('http://joe:password@proxy.example.com:3128')
    ('http', 'joe', 'password', 'proxy.example.com:3128')

    Everything after the authority is ignored:

    >>> _parse_proxy('ftp://joe:password@proxy.example.com/rubbish:3128')
    ('ftp', 'joe', 'password', 'proxy.example.com')

    Test for no trailing '/' case:

    >>> _parse_proxy('http://joe:password@proxy.example.com')
    ('http', 'joe', 'password', 'proxy.example.com')

    """
    scheme, r_scheme = splittype(proxy)
    if not r_scheme.startswith("/"):
        # authority
        scheme = None
        authority = proxy
    else:
        # URL
        if not r_scheme.startswith("//"):
            raise ValueError("proxy URL with no authority: %r" % proxy)
        # We have an authority, so for RFC 3986-compliant URLs (by ss 3.2.2
        # and 3.3.), path is empty or starts with '/'
        end = r_scheme.find("/", 2)
        if end == -1:
            end = None
        authority = r_scheme[2:end]
    userinfo, hostport = splituser(authority)
    if userinfo is not None:
        user, password = splitpasswd(userinfo)
    else:
        user = password = None
    return scheme, user, password, hostport

class ProxyHandler(BaseHandler):
    # Proxies must be in front
    handler_order = 100

    def __init__(self, proxies=None):
        if proxies is None:
            proxies = getproxies()
        assert hasattr(proxies, 'keys'), "proxies must be a mapping"
        self.proxies = proxies
        for type, url in proxies.items():
            setattr(self, '%s_open' % type,
                    lambda r, proxy=url, type=type, meth=self.proxy_open:
                        meth(r, proxy, type))

    def proxy_open(self, req, proxy, type):
        orig_type = req.type
        proxy_type, user, password, hostport = _parse_proxy(proxy)
        if proxy_type is None:
            proxy_type = orig_type

        if req.host and proxy_bypass(req.host):
            return None

        if user and password:
            user_pass = '%s:%s' % (unquote(user),
                                   unquote(password))
            creds = base64.b64encode(user_pass.encode()).decode("ascii")
            req.add_header('Proxy-authorization', 'Basic ' + creds)
        hostport = unquote(hostport)
        req.set_proxy(hostport, proxy_type)
        if orig_type == proxy_type or orig_type == 'https':
            # let other handlers take care of it
            return None
        else:
            # need to start over, because the other handlers don't
            # grok the proxy's URL type
            # e.g. if we have a constructor arg proxies like so:
            # {'http': 'ftp://proxy.example.com'}, we may end up turning
            # a request for http://acme.example.com/a into one for
            # ftp://proxy.example.com/a
            return self.parent.open(req, timeout=req.timeout)

class HTTPPasswordMgr(object):

    def __init__(self):
        self.passwd = {}

    def add_password(self, realm, uri, user, passwd):
        # uri could be a single URI or a sequence
        if isinstance(uri, str):
            uri = [uri]
        if realm not in self.passwd:
            self.passwd[realm] = {}
        for default_port in True, False:
            reduced_uri = tuple(
                [self.reduce_uri(u, default_port) for u in uri])
            self.passwd[realm][reduced_uri] = (user, passwd)

    def find_user_password(self, realm, authuri):
        domains = self.passwd.get(realm, {})
        for default_port in True, False:
            reduced_authuri = self.reduce_uri(authuri, default_port)
            for uris, authinfo in domains.items():
                for uri in uris:
                    if self.is_suburi(uri, reduced_authuri):
                        return authinfo
        return None, None

    def reduce_uri(self, uri, default_port=True):
        """Accept authority or URI and extract only the authority and path."""
        # note HTTP URLs do not have a userinfo component
        parts = urlsplit(uri)
        if parts[1]:
            # URI
            scheme = parts[0]
            authority = parts[1]
            path = parts[2] or '/'
        else:
            # host or host:port
            scheme = None
            authority = uri
            path = '/'
        host, port = splitport(authority)
        if default_port and port is None and scheme is not None:
            dport = {"http": 80,
                     "https": 443,
                     }.get(scheme)
            if dport is not None:
                authority = "%s:%d" % (host, dport)
        return authority, path

    def is_suburi(self, base, test):
        """Check if test is below base in a URI tree

        Both args must be URIs in reduced form.
        """
        if base == test:
            return True
        if base[0] != test[0]:
            return False
        common = posixpath.commonprefix((base[1], test[1]))
        if len(common) == len(base[1]):
            return True
        return False
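An offline sketch of how HTTPPasswordMgr matches credentials, shown with the standard library's equivalent class: URIs are reduced to (authority, path), stored with and without the scheme's default port, and matched by path prefix via is_suburi().

```python
from urllib.request import HTTPPasswordMgr

mgr = HTTPPasswordMgr()
mgr.add_password("Protected", "http://example.com/area/", "alice", "s3cret")

# Found: same authority (explicit :80 matches the stored default-port form)
# and a path below /area/.
hit = mgr.find_user_password("Protected", "http://example.com:80/area/deep/page")

# Not found: /other/ is not below /area/.
miss = mgr.find_user_password("Protected", "http://example.com/other/")
```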


class HTTPPasswordMgrWithDefaultRealm(HTTPPasswordMgr):

    def find_user_password(self, realm, authuri):
        user, password = HTTPPasswordMgr.find_user_password(self, realm,
                                                            authuri)
        if user is not None:
            return user, password
        return HTTPPasswordMgr.find_user_password(self, None, authuri)


class AbstractBasicAuthHandler(object):

    # XXX this allows for multiple auth-schemes, but will stupidly pick
    # the last one with a realm specified.

    # allow for double- and single-quoted realm values
    # (single quotes are a violation of the RFC, but appear in the wild)
    rx = re.compile('(?:.*,)*[ \t]*([^ \t]+)[ \t]+'
                    'realm=(["\']?)([^"\']*)\\2', re.I)

    # XXX could pre-emptively send auth info already accepted (RFC 2617,
    # end of section 2, and section 1.2 immediately after "credentials"
    # production).

    def __init__(self, password_mgr=None):
        if password_mgr is None:
            password_mgr = HTTPPasswordMgr()
        self.passwd = password_mgr
        self.add_password = self.passwd.add_password
        self.retried = 0

    def reset_retry_count(self):
        self.retried = 0

    def http_error_auth_reqed(self, authreq, host, req, headers):
        # host may be an authority (without userinfo) or a URL with an
        # authority
        # XXX could be multiple headers
        authreq = headers.get(authreq, None)

        if self.retried > 5:
            # Already retried the username:password 5 times; give up
            # rather than loop forever.
            raise HTTPError(req.get_full_url(), 401, "basic auth failed",
                    headers, None)
        else:
            self.retried += 1

        if authreq:
            scheme = authreq.split()[0]
            if scheme.lower() != 'basic':
                raise ValueError("AbstractBasicAuthHandler does not"
                                 " support the following scheme: '%s'" %
                                 scheme)
            else:
                mo = AbstractBasicAuthHandler.rx.search(authreq)
                if mo:
                    scheme, quote, realm = mo.groups()
                    if quote not in ['"',"'"]:
                        warnings.warn("Basic Auth Realm was unquoted",
                                      UserWarning, 2)
                    if scheme.lower() == 'basic':
                        response = self.retry_http_basic_auth(host, req, realm)
                        if response and response.code != 401:
                            self.retried = 0
                        return response

    def retry_http_basic_auth(self, host, req, realm):
        user, pw = self.passwd.find_user_password(realm, host)
        if pw is not None:
            raw = "%s:%s" % (user, pw)
            auth = "Basic " + base64.b64encode(raw.encode()).decode("ascii")
            if req.headers.get(self.auth_header, None) == auth:
                return None
            req.add_unredirected_header(self.auth_header, auth)
            return self.parent.open(req, timeout=req.timeout)
        else:
            return None


class HTTPBasicAuthHandler(AbstractBasicAuthHandler, BaseHandler):

    auth_header = 'Authorization'

    def http_error_401(self, req, fp, code, msg, headers):
        url = req.full_url
        response = self.http_error_auth_reqed('www-authenticate',
                                              url, req, headers)
        self.reset_retry_count()
        return response


class ProxyBasicAuthHandler(AbstractBasicAuthHandler, BaseHandler):

    auth_header = 'Proxy-authorization'

    def http_error_407(self, req, fp, code, msg, headers):
        # http_error_auth_reqed requires that there is no userinfo component in
        # authority.  Assume there isn't one, since urllib.request does not (and
        # should not, RFC 3986 s. 3.2.1) support requests for URLs containing
        # userinfo.
        authority = req.host
        response = self.http_error_auth_reqed('proxy-authenticate',
                                              authority, req, headers)
        self.reset_retry_count()
        return response
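An offline sketch of the credential line retry_http_basic_auth() builds: the user and password are unquoted, joined with a colon, base64-encoded, and sent as the value of the Authorization (or Proxy-authorization) header.

```python
import base64
from urllib.parse import unquote

user, password = "user", "pw"
user_pass = "%s:%s" % (unquote(user), unquote(password))
auth = "Basic " + base64.b64encode(user_pass.encode()).decode("ascii")
```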


# Return n random bytes.
_randombytes = os.urandom


class AbstractDigestAuthHandler(object):
    # Digest authentication is specified in RFC 2617.

    # XXX The client does not inspect the Authentication-Info header
    # in a successful response.

    # XXX It should be possible to test this implementation against
    # a mock server that just generates a static set of challenges.

    # XXX qop="auth-int" supports is shaky

    def __init__(self, passwd=None):
        if passwd is None:
            passwd = HTTPPasswordMgr()
        self.passwd = passwd
        self.add_password = self.passwd.add_password
        self.retried = 0
        self.nonce_count = 0
        self.last_nonce = None

    def reset_retry_count(self):
        self.retried = 0

    def http_error_auth_reqed(self, auth_header, host, req, headers):
        authreq = headers.get(auth_header, None)
        if self.retried > 5:
            # Don't fail endlessly - if we failed once, we'll probably
            # fail a second time. Hm. Unless the Password Manager is
            # prompting for the information. Crap. This isn't great
            # but it's better than the current 'repeat until recursion
            # depth exceeded' approach <wink>
            raise HTTPError(req.full_url, 401, "digest auth failed",
                            headers, None)
        else:
            self.retried += 1
        if authreq:
            scheme = authreq.split()[0]
            if scheme.lower() == 'digest':
                return self.retry_http_digest_auth(req, authreq)
            elif scheme.lower() != 'basic':
                raise ValueError("AbstractDigestAuthHandler does not support"
                                 " the following scheme: '%s'" % scheme)

    def retry_http_digest_auth(self, req, auth):
        token, challenge = auth.split(' ', 1)
        chal = parse_keqv_list(filter(None, parse_http_list(challenge)))
        auth = self.get_authorization(req, chal)
        if auth:
            auth_val = 'Digest %s' % auth
            if req.headers.get(self.auth_header, None) == auth_val:
                return None
            req.add_unredirected_header(self.auth_header, auth_val)
            resp = self.parent.open(req, timeout=req.timeout)
            return resp

    def get_cnonce(self, nonce):
        # The cnonce-value is an opaque
        # quoted string value provided by the client and used by both client
        # and server to avoid chosen plaintext attacks, to provide mutual
        # authentication, and to provide some message integrity protection.
        # This isn't a fabulous effort, but it's probably Good Enough.
        s = "%s:%s:%s:" % (self.nonce_count, nonce, time.ctime())
        b = s.encode("ascii") + _randombytes(8)
        dig = hashlib.sha1(b).hexdigest()
        return dig[:16]

    def get_authorization(self, req, chal):
        try:
            realm = chal['realm']
            nonce = chal['nonce']
            qop = chal.get('qop')
            algorithm = chal.get('algorithm', 'MD5')
            # mod_digest doesn't send an opaque, even though it isn't
            # supposed to be optional
            opaque = chal.get('opaque', None)
        except KeyError:
            return None

        H, KD = self.get_algorithm_impls(algorithm)
        if H is None:
            return None

        user, pw = self.passwd.find_user_password(realm, req.full_url)
        if user is None:
            return None

        # XXX not implemented yet
        if req.data is not None:
            entdig = self.get_entity_digest(req.data, chal)
        else:
            entdig = None

        A1 = "%s:%s:%s" % (user, realm, pw)
        A2 = "%s:%s" % (req.get_method(),
                        # XXX selector: what about proxies and full urls
                        req.selector)
        if qop == 'auth':
            if nonce == self.last_nonce:
                self.nonce_count += 1
            else:
                self.nonce_count = 1
                self.last_nonce = nonce
            ncvalue = '%08x' % self.nonce_count
            cnonce = self.get_cnonce(nonce)
            noncebit = "%s:%s:%s:%s:%s" % (nonce, ncvalue, cnonce, qop, H(A2))
            respdig = KD(H(A1), noncebit)
        elif qop is None:
            respdig = KD(H(A1), "%s:%s" % (nonce, H(A2)))
        else:
            # XXX handle auth-int.
            raise URLError("qop '%s' is not supported." % qop)

        # XXX should the partial digests be encoded too?

        base = 'username="%s", realm="%s", nonce="%s", uri="%s", ' \
               'response="%s"' % (user, realm, nonce, req.selector,
                                  respdig)
        if opaque:
            base += ', opaque="%s"' % opaque
        if entdig:
            base += ', digest="%s"' % entdig
        base += ', algorithm="%s"' % algorithm
        if qop:
            base += ', qop=auth, nc=%s, cnonce="%s"' % (ncvalue, cnonce)
        return base

    def get_algorithm_impls(self, algorithm):
        # lambdas assume digest modules are imported at the top level
        if algorithm == 'MD5':
            H = lambda x: hashlib.md5(x.encode("ascii")).hexdigest()
        elif algorithm == 'SHA':
            H = lambda x: hashlib.sha1(x.encode("ascii")).hexdigest()
        else:
            # Unknown algorithm: return H = None so get_authorization()
            # can bail out cleanly instead of hitting a NameError here.
            H = None
        # XXX MD5-sess
        KD = lambda s, d: H("%s:%s" % (s, d))
        return H, KD
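For qop=auth with MD5, `get_authorization()` above computes `KD(H(A1), nonce:nc:cnonce:qop:H(A2))`. A self-contained sketch, checked against the worked example in RFC 2617 section 3.5 (credentials and nonce values are from that example, not from this module):

```python
import hashlib

def H(x):
    return hashlib.md5(x.encode("ascii")).hexdigest()

def KD(secret, data):
    return H("%s:%s" % (secret, data))

# Values from the RFC 2617 section 3.5 worked example.
A1 = "Mufasa:testrealm@host.com:Circle Of Life"   # user:realm:password
A2 = "GET:/dir/index.html"                        # method:uri
nonce = "dcd98b7102dd2f0e8b11d0f600bfb0c093"
nc, cnonce, qop = "00000001", "0a4f113b", "auth"

respdig = KD(H(A1), "%s:%s:%s:%s:%s" % (nonce, nc, cnonce, qop, H(A2)))
# RFC 2617 gives the expected response digest as
# "6629fae49393a05397450978507c4ef1".
```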

    def get_entity_digest(self, data, chal):
        # XXX not implemented yet
        return None


class HTTPDigestAuthHandler(BaseHandler, AbstractDigestAuthHandler):
    """An authentication protocol defined by RFC 2069

    Digest authentication improves on basic authentication because it
    does not transmit passwords in the clear.
    """

    auth_header = 'Authorization'
    handler_order = 490  # before Basic auth

    def http_error_401(self, req, fp, code, msg, headers):
        host = urlparse(req.full_url)[1]
        retry = self.http_error_auth_reqed('www-authenticate',
                                           host, req, headers)
        self.reset_retry_count()
        return retry


class ProxyDigestAuthHandler(BaseHandler, AbstractDigestAuthHandler):

    auth_header = 'Proxy-Authorization'
    handler_order = 490  # before Basic auth

    def http_error_407(self, req, fp, code, msg, headers):
        host = req.host
        retry = self.http_error_auth_reqed('proxy-authenticate',
                                           host, req, headers)
        self.reset_retry_count()
        return retry

class AbstractHTTPHandler(BaseHandler):

    def __init__(self, debuglevel=0):
        self._debuglevel = debuglevel

    def set_http_debuglevel(self, level):
        self._debuglevel = level

    def do_request_(self, request):
        host = request.host
        if not host:
            raise URLError('no host given')

        if request.data is not None:  # POST
            data = request.data
            if isinstance(data, str):
                msg = "POST data should be bytes or an iterable of bytes. " \
                      "It cannot be of type str."
                raise TypeError(msg)
            if not request.has_header('Content-type'):
                request.add_unredirected_header(
                    'Content-type',
                    'application/x-www-form-urlencoded')
            if not request.has_header('Content-length'):
                size = None
                try:
                    ### For Python-Future:
                    if PY2 and isinstance(data, array.array):
                        # memoryviews of arrays aren't supported
                        # in Py2.7. (e.g. memoryview(array.array('I',
                        # [1, 2, 3, 4])) raises a TypeError.)
                        # So we calculate the size manually instead:
                        size = len(data) * data.itemsize
                    ###
                    else:
                        mv = memoryview(data)
                        size = len(mv) * mv.itemsize
                except TypeError:
                    if isinstance(data, Iterable):
                        raise ValueError("Content-Length should be specified "
                                "for iterable data of type %r %r" % (type(data),
                                data))
                else:
                    request.add_unredirected_header(
                            'Content-length', '%d' % size)

        sel_host = host
        if request.has_proxy():
            scheme, sel = splittype(request.selector)
            sel_host, sel_path = splithost(sel)
        if not request.has_header('Host'):
            request.add_unredirected_header('Host', sel_host)
        for name, value in self.parent.addheaders:
            name = name.capitalize()
            if not request.has_header(name):
                request.add_unredirected_header(name, value)

        return request

    def do_open(self, http_class, req, **http_conn_args):
        """Return an HTTPResponse object for the request, using http_class.

        http_class must implement the HTTPConnection API from http.client.
        """
        host = req.host
        if not host:
            raise URLError('no host given')

        # will parse host:port
        h = http_class(host, timeout=req.timeout, **http_conn_args)

        headers = dict(req.unredirected_hdrs)
        headers.update(dict((k, v) for k, v in req.headers.items()
                            if k not in headers))

        # TODO(jhylton): Should this be redesigned to handle
        # persistent connections?

        # We want to make an HTTP/1.1 request, but the addinfourl
        # class isn't prepared to deal with a persistent connection.
        # It will try to read all remaining data from the socket,
        # which will block while the server waits for the next request.
        # So make sure the connection gets closed after the (only)
        # request.
        headers["Connection"] = "close"
        headers = dict((name.title(), val) for name, val in headers.items())

        if req._tunnel_host:
            tunnel_headers = {}
            proxy_auth_hdr = "Proxy-Authorization"
            if proxy_auth_hdr in headers:
                tunnel_headers[proxy_auth_hdr] = headers[proxy_auth_hdr]
                # Proxy-Authorization should not be sent to origin
                # server.
                del headers[proxy_auth_hdr]
            h.set_tunnel(req._tunnel_host, headers=tunnel_headers)

        try:
            h.request(req.get_method(), req.selector, req.data, headers)
        except socket.error as err: # timeout error
            h.close()
            raise URLError(err)
        else:
            r = h.getresponse()
            # If the server does not send us a 'Connection: close' header,
            # HTTPConnection assumes the socket should be left open. Manually
            # mark the socket to be closed when this response object goes away.
            if h.sock:
                h.sock.close()
                h.sock = None


        r.url = req.get_full_url()
        # This line replaces the .msg attribute of the HTTPResponse
        # with the reason phrase, because urllib clients expect the
        # response to have the reason in .msg.  It would be good to
        # mark this attribute as deprecated and get clients to use
        # info() or .headers instead.
        r.msg = r.reason
        return r


class HTTPHandler(AbstractHTTPHandler):

    def http_open(self, req):
        return self.do_open(http_client.HTTPConnection, req)

    http_request = AbstractHTTPHandler.do_request_

if hasattr(http_client, 'HTTPSConnection'):

    class HTTPSHandler(AbstractHTTPHandler):

        def __init__(self, debuglevel=0, context=None, check_hostname=None):
            AbstractHTTPHandler.__init__(self, debuglevel)
            self._context = context
            self._check_hostname = check_hostname

        def https_open(self, req):
            return self.do_open(http_client.HTTPSConnection, req,
                context=self._context, check_hostname=self._check_hostname)

        https_request = AbstractHTTPHandler.do_request_

    __all__.append('HTTPSHandler')

class HTTPCookieProcessor(BaseHandler):
    def __init__(self, cookiejar=None):
        import future.backports.http.cookiejar as http_cookiejar
        if cookiejar is None:
            cookiejar = http_cookiejar.CookieJar()
        self.cookiejar = cookiejar

    def http_request(self, request):
        self.cookiejar.add_cookie_header(request)
        return request

    def http_response(self, request, response):
        self.cookiejar.extract_cookies(response, request)
        return response

    https_request = http_request
    https_response = http_response

class UnknownHandler(BaseHandler):
    def unknown_open(self, req):
        type = req.type
        raise URLError('unknown url type: %s' % type)

def parse_keqv_list(l):
    """Parse list of key=value strings where keys are not duplicated."""
    parsed = {}
    for elt in l:
        k, v = elt.split('=', 1)
        if v[0] == '"' and v[-1] == '"':
            v = v[1:-1]
        parsed[k] = v
    return parsed

def parse_http_list(s):
    """Parse lists as described by RFC 2068 Section 2.

    In particular, parse comma-separated lists where the elements of
    the list may include quoted-strings.  A quoted-string could
    contain a comma.  A non-quoted string could have quotes in the
    middle.  Neither commas nor quotes count if they are escaped.
    Only double-quotes count, not single-quotes.
    """
    res = []
    part = ''

    escape = quote = False
    for cur in s:
        if escape:
            part += cur
            escape = False
            continue
        if quote:
            if cur == '\\':
                escape = True
                continue
            elif cur == '"':
                quote = False
            part += cur
            continue

        if cur == ',':
            res.append(part)
            part = ''
            continue

        if cur == '"':
            quote = True

        part += cur

    # append last part
    if part:
        res.append(part)

    return [part.strip() for part in res]
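The two parsers above ship in the standard library as `urllib.request.parse_http_list` and `urllib.request.parse_keqv_list`; a short usage sketch showing that a comma inside a quoted string does not split the list:

```python
from urllib.request import parse_http_list, parse_keqv_list

# A Digest-style challenge where one quoted value contains a comma.
challenge = 'realm="test", nonce="abc, def", qop=auth'

items = parse_http_list(challenge)   # comma inside quotes is preserved
params = parse_keqv_list(items)      # surrounding quotes are stripped
```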

class FileHandler(BaseHandler):
    # Use local file or FTP depending on form of URL
    def file_open(self, req):
        url = req.selector
        if url[:2] == '//' and url[2:3] != '/' and (req.host and
                req.host != 'localhost'):
            if req.host not in self.get_names():
                raise URLError("file:// scheme is supported only on localhost")
        else:
            return self.open_local_file(req)

    # names for the localhost
    names = None
    def get_names(self):
        if FileHandler.names is None:
            try:
                FileHandler.names = tuple(
                    socket.gethostbyname_ex('localhost')[2] +
                    socket.gethostbyname_ex(socket.gethostname())[2])
            except socket.gaierror:
                FileHandler.names = (socket.gethostbyname('localhost'),)
        return FileHandler.names

    # not entirely sure what the rules are here
    def open_local_file(self, req):
        import future.backports.email.utils as email_utils
        import mimetypes
        host = req.host
        filename = req.selector
        localfile = url2pathname(filename)
        try:
            stats = os.stat(localfile)
            size = stats.st_size
            modified = email_utils.formatdate(stats.st_mtime, usegmt=True)
            mtype = mimetypes.guess_type(filename)[0]
            headers = email.message_from_string(
                'Content-type: %s\nContent-length: %d\nLast-modified: %s\n' %
                (mtype or 'text/plain', size, modified))
            if host:
                host, port = splitport(host)
            if not host or \
                (not port and _safe_gethostbyname(host) in self.get_names()):
                if host:
                    origurl = 'file://' + host + filename
                else:
                    origurl = 'file://' + filename
                return addinfourl(open(localfile, 'rb'), headers, origurl)
        except OSError as exp:
            # users shouldn't expect OSErrors coming from urlopen()
            raise URLError(exp)
        raise URLError('file not on local host')

def _safe_gethostbyname(host):
    try:
        return socket.gethostbyname(host)
    except socket.gaierror:
        return None

class FTPHandler(BaseHandler):
    def ftp_open(self, req):
        import ftplib
        import mimetypes
        host = req.host
        if not host:
            raise URLError('ftp error: no host given')
        host, port = splitport(host)
        if port is None:
            port = ftplib.FTP_PORT
        else:
            port = int(port)

        # username/password handling
        user, host = splituser(host)
        if user:
            user, passwd = splitpasswd(user)
        else:
            passwd = None
        host = unquote(host)
        user = user or ''
        passwd = passwd or ''

        try:
            host = socket.gethostbyname(host)
        except socket.error as msg:
            raise URLError(msg)
        path, attrs = splitattr(req.selector)
        dirs = path.split('/')
        dirs = list(map(unquote, dirs))
        dirs, file = dirs[:-1], dirs[-1]
        if dirs and not dirs[0]:
            dirs = dirs[1:]
        try:
            fw = self.connect_ftp(user, passwd, host, port, dirs, req.timeout)
            type = file and 'I' or 'D'
            for attr in attrs:
                attr, value = splitvalue(attr)
                if attr.lower() == 'type' and \
                   value in ('a', 'A', 'i', 'I', 'd', 'D'):
                    type = value.upper()
            fp, retrlen = fw.retrfile(file, type)
            headers = ""
            mtype = mimetypes.guess_type(req.full_url)[0]
            if mtype:
                headers += "Content-type: %s\n" % mtype
            if retrlen is not None and retrlen >= 0:
                headers += "Content-length: %d\n" % retrlen
            headers = email.message_from_string(headers)
            return addinfourl(fp, headers, req.full_url)
        except ftplib.all_errors as exp:
            exc = URLError('ftp error: %r' % exp)
            raise_with_traceback(exc)

    def connect_ftp(self, user, passwd, host, port, dirs, timeout):
        return ftpwrapper(user, passwd, host, port, dirs, timeout,
                          persistent=False)

class CacheFTPHandler(FTPHandler):
    # XXX would be nice to have pluggable cache strategies
    # XXX this stuff is definitely not thread safe
    def __init__(self):
        self.cache = {}
        self.timeout = {}
        self.soonest = 0
        self.delay = 60
        self.max_conns = 16

    def setTimeout(self, t):
        self.delay = t

    def setMaxConns(self, m):
        self.max_conns = m

    def connect_ftp(self, user, passwd, host, port, dirs, timeout):
        key = user, host, port, '/'.join(dirs), timeout
        if key in self.cache:
            self.timeout[key] = time.time() + self.delay
        else:
            self.cache[key] = ftpwrapper(user, passwd, host, port,
                                         dirs, timeout)
            self.timeout[key] = time.time() + self.delay
        self.check_cache()
        return self.cache[key]

    def check_cache(self):
        # first check for old ones
        t = time.time()
        if self.soonest <= t:
            for k, v in list(self.timeout.items()):
                if v < t:
                    self.cache[k].close()
                    del self.cache[k]
                    del self.timeout[k]
        self.soonest = min(list(self.timeout.values()))

        # then check the size
        if len(self.cache) == self.max_conns:
            for k, v in list(self.timeout.items()):
                if v == self.soonest:
                    del self.cache[k]
                    del self.timeout[k]
                    break
            self.soonest = min(list(self.timeout.values()))

    def clear_cache(self):
        for conn in self.cache.values():
            conn.close()
        self.cache.clear()
        self.timeout.clear()
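The expiry bookkeeping in `check_cache()` above can be illustrated with a minimal, hypothetical `TimedCache` (not part of this module): each entry carries a deadline, and pruning drops entries whose deadline has passed. A `now` parameter is used here so the behaviour is deterministic.

```python
import time

class TimedCache(object):
    # Minimal sketch of CacheFTPHandler's expiry logic: each key maps to
    # a value plus a deadline of (insertion time + delay); prune() drops
    # entries whose deadline is in the past.
    def __init__(self, delay=60):
        self.cache = {}
        self.deadline = {}
        self.delay = delay

    def put(self, key, value, now=None):
        now = time.time() if now is None else now
        self.cache[key] = value
        self.deadline[key] = now + self.delay

    def prune(self, now=None):
        now = time.time() if now is None else now
        for k, t in list(self.deadline.items()):
            if t < now:
                del self.cache[k]
                del self.deadline[k]

c = TimedCache(delay=60)
c.put("a", 1, now=0)     # deadline 60
c.put("b", 2, now=100)   # deadline 160
c.prune(now=90)          # "a" has expired, "b" survives
```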


# Code moved from the old urllib module

MAXFTPCACHE = 10        # Trim the ftp cache beyond this size

# Helper for non-unix systems
if os.name == 'nt':
    from nturl2path import url2pathname, pathname2url
else:
    def url2pathname(pathname):
        """OS-specific conversion from a relative URL of the 'file' scheme
        to a file system path; not recommended for general use."""
        return unquote(pathname)

    def pathname2url(pathname):
        """OS-specific conversion from a file system path to a relative URL
        of the 'file' scheme; not recommended for general use."""
        return quote(pathname)
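On POSIX, the two helpers above reduce to percent-quoting and unquoting; a quick round-trip sketch using `urllib.parse` directly:

```python
from urllib.parse import quote, unquote

# Spaces become %20; '/' is in quote()'s default safe set, so path
# separators are left alone -- matching pathname2url() above on POSIX.
path = "/tmp/file with spaces.txt"
url_path = quote(path)
restored = unquote(url_path)
```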

# This really consists of two pieces:
# (1) a class which handles opening of all sorts of URLs
#     (plus assorted utilities etc.)
# (2) a set of functions for parsing URLs
# XXX Should these be separated out into different modules?


ftpcache = {}
class URLopener(object):
    """Class to open URLs.
    This is a class rather than just a subroutine because we may need
    more than one set of global protocol-specific options.
    Note -- this is a base class for those who don't want the
    automatic handling of errors type 302 (relocated) and 401
    (authorization needed)."""

    __tempfiles = None

    version = "Python-urllib/%s" % __version__

    # Constructor
    def __init__(self, proxies=None, **x509):
        msg = "%(class)s style of invoking requests is deprecated. " \
              "Use newer urlopen functions/methods" % {'class': self.__class__.__name__}
        warnings.warn(msg, DeprecationWarning, stacklevel=3)
        if proxies is None:
            proxies = getproxies()
        assert hasattr(proxies, 'keys'), "proxies must be a mapping"
        self.proxies = proxies
        self.key_file = x509.get('key_file')
        self.cert_file = x509.get('cert_file')
        self.addheaders = [('User-Agent', self.version)]
        self.__tempfiles = []
        self.__unlink = os.unlink # See cleanup()
        self.tempcache = None
        # Undocumented feature: if you assign {} to tempcache,
        # it is used to cache files retrieved with
        # self.retrieve().  This is not enabled by default
        # since it does not work for changing documents (and I
        # haven't got the logic to check expiration headers
        # yet).
        self.ftpcache = ftpcache
        # Undocumented feature: you can use a different
        # ftp cache by assigning to the .ftpcache member;
        # in case you want logically independent URL openers
        # XXX This is not threadsafe.  Bah.

    def __del__(self):
        self.close()

    def close(self):
        self.cleanup()

    def cleanup(self):
        # This code sometimes runs when the rest of this module
        # has already been deleted, so it can't use any globals
        # or import anything.
        if self.__tempfiles:
            for file in self.__tempfiles:
                try:
                    self.__unlink(file)
                except OSError:
                    pass
            del self.__tempfiles[:]
        if self.tempcache:
            self.tempcache.clear()

    def addheader(self, *args):
        """Add a header to be used by the HTTP interface only
        e.g. u.addheader('Accept', 'sound/basic')"""
        self.addheaders.append(args)

    # External interface
    def open(self, fullurl, data=None):
        """Use URLopener().open(file) instead of open(file, 'r')."""
        fullurl = unwrap(to_bytes(fullurl))
        fullurl = quote(fullurl, safe="%/:=&?~#+!$,;'@()*[]|")
        if self.tempcache and fullurl in self.tempcache:
            filename, headers = self.tempcache[fullurl]
            fp = open(filename, 'rb')
            return addinfourl(fp, headers, fullurl)
        urltype, url = splittype(fullurl)
        if not urltype:
            urltype = 'file'
        if urltype in self.proxies:
            proxy = self.proxies[urltype]
            urltype, proxyhost = splittype(proxy)
            host, selector = splithost(proxyhost)
            url = (host, fullurl) # Signal special case to open_*()
        else:
            proxy = None
        name = 'open_' + urltype
        self.type = urltype
        name = name.replace('-', '_')
        if not hasattr(self, name):
            if proxy:
                return self.open_unknown_proxy(proxy, fullurl, data)
            else:
                return self.open_unknown(fullurl, data)
        try:
            if data is None:
                return getattr(self, name)(url)
            else:
                return getattr(self, name)(url, data)
        except HTTPError:
            raise
        except socket.error as msg:
            raise_with_traceback(IOError('socket error', msg))

    def open_unknown(self, fullurl, data=None):
        """Overridable interface to open unknown URL type."""
        type, url = splittype(fullurl)
        raise IOError('url error', 'unknown url type', type)

    def open_unknown_proxy(self, proxy, fullurl, data=None):
        """Overridable interface to open unknown URL type."""
        type, url = splittype(fullurl)
        raise IOError('url error', 'invalid proxy for %s' % type, proxy)

    # External interface
    def retrieve(self, url, filename=None, reporthook=None, data=None):
        """retrieve(url) returns (filename, headers) for a local object
        or (tempfilename, headers) for a remote object."""
        url = unwrap(to_bytes(url))
        if self.tempcache and url in self.tempcache:
            return self.tempcache[url]
        type, url1 = splittype(url)
        if filename is None and (not type or type == 'file'):
            try:
                fp = self.open_local_file(url1)
                hdrs = fp.info()
                fp.close()
                return url2pathname(splithost(url1)[1]), hdrs
            except IOError as msg:
                pass
        fp = self.open(url, data)
        try:
            headers = fp.info()
            if filename:
                tfp = open(filename, 'wb')
            else:
                import tempfile
                garbage, path = splittype(url)
                garbage, path = splithost(path or "")
                path, garbage = splitquery(path or "")
                path, garbage = splitattr(path or "")
                suffix = os.path.splitext(path)[1]
                (fd, filename) = tempfile.mkstemp(suffix)
                self.__tempfiles.append(filename)
                tfp = os.fdopen(fd, 'wb')
            try:
                result = filename, headers
                if self.tempcache is not None:
                    self.tempcache[url] = result
                bs = 1024*8
                size = -1
                read = 0
                blocknum = 0
                if "content-length" in headers:
                    size = int(headers["Content-Length"])
                if reporthook:
                    reporthook(blocknum, bs, size)
                while 1:
                    block = fp.read(bs)
                    if not block:
                        break
                    read += len(block)
                    tfp.write(block)
                    blocknum += 1
                    if reporthook:
                        reporthook(blocknum, bs, size)
            finally:
                tfp.close()
        finally:
            fp.close()

        # raise exception if actual size does not match content-length header
        if size >= 0 and read < size:
            raise ContentTooShortError(
                "retrieval incomplete: got only %i out of %i bytes"
                % (read, size), result)

        return result

    # Each method named open_<type> knows how to open that type of URL

    def _open_generic_http(self, connection_factory, url, data):
        """Make an HTTP connection using connection_class.

        This is an internal method that should be called from
        open_http() or open_https().

        Arguments:
        - connection_factory should take a host name and return an
          HTTPConnection instance.
        - url is the url to retrieve or a (host, relative-path) pair.
        - data is payload for a POST request or None.
        """

        user_passwd = None
        proxy_passwd = None
        if isinstance(url, str):
            host, selector = splithost(url)
            if host:
                user_passwd, host = splituser(host)
                host = unquote(host)
            realhost = host
        else:
            host, selector = url
            # check whether the proxy contains authorization information
            proxy_passwd, host = splituser(host)
            # now we proceed with the url we want to obtain
            urltype, rest = splittype(selector)
            url = rest
            user_passwd = None
            if urltype.lower() != 'http':
                realhost = None
            else:
                realhost, rest = splithost(rest)
                if realhost:
                    user_passwd, realhost = splituser(realhost)
                if user_passwd:
                    selector = "%s://%s%s" % (urltype, realhost, rest)
                if proxy_bypass(realhost):
                    host = realhost

        if not host: raise IOError('http error', 'no host given')

        if proxy_passwd:
            proxy_passwd = unquote(proxy_passwd)
            proxy_auth = base64.b64encode(proxy_passwd.encode()).decode('ascii')
        else:
            proxy_auth = None

        if user_passwd:
            user_passwd = unquote(user_passwd)
            auth = base64.b64encode(user_passwd.encode()).decode('ascii')
        else:
            auth = None
        http_conn = connection_factory(host)
        headers = {}
        if proxy_auth:
            headers["Proxy-Authorization"] = "Basic %s" % proxy_auth
        if auth:
            headers["Authorization"] = "Basic %s" % auth
        if realhost:
            headers["Host"] = realhost

        # Add Connection:close as we don't support persistent connections yet.
        # This helps in closing the socket and avoiding ResourceWarning

        headers["Connection"] = "close"

        for header, value in self.addheaders:
            headers[header] = value

        if data is not None:
            headers["Content-Type"] = "application/x-www-form-urlencoded"
            http_conn.request("POST", selector, data, headers)
        else:
            http_conn.request("GET", selector, headers=headers)

        try:
            response = http_conn.getresponse()
        except http_client.BadStatusLine:
            # something went wrong with the HTTP status line
            raise URLError("http protocol error: bad status line")

        # According to RFC 2616, "2xx" code indicates that the client's
        # request was successfully received, understood, and accepted.
        if 200 <= response.status < 300:
            return addinfourl(response, response.msg, "http:" + url,
                              response.status)
        else:
            return self.http_error(
                url, response.fp,
                response.status, response.reason, response.msg, data)

    def open_http(self, url, data=None):
        """Use HTTP protocol."""
        return self._open_generic_http(http_client.HTTPConnection, url, data)

    def http_error(self, url, fp, errcode, errmsg, headers, data=None):
        """Handle http errors.

        Derived class can override this, or provide specific handlers
        named http_error_DDD where DDD is the 3-digit error code."""
        # First check if there's a specific handler for this error
        name = 'http_error_%d' % errcode
        if hasattr(self, name):
            method = getattr(self, name)
            if data is None:
                result = method(url, fp, errcode, errmsg, headers)
            else:
                result = method(url, fp, errcode, errmsg, headers, data)
            if result: return result
        return self.http_error_default(url, fp, errcode, errmsg, headers)

    def http_error_default(self, url, fp, errcode, errmsg, headers):
        """Default error handler: close the connection and raise IOError."""
        fp.close()
        raise HTTPError(url, errcode, errmsg, headers, None)

    if _have_ssl:
        def _https_connection(self, host):
            return http_client.HTTPSConnection(host,
                                           key_file=self.key_file,
                                           cert_file=self.cert_file)

        def open_https(self, url, data=None):
            """Use HTTPS protocol."""
            return self._open_generic_http(self._https_connection, url, data)

    def open_file(self, url):
        """Use local file or FTP depending on form of URL."""
        if not isinstance(url, str):
            raise URLError('file error: proxy support for file protocol currently not implemented')
        if url[:2] == '//' and url[2:3] != '/' and url[2:12].lower() != 'localhost/':
            raise ValueError("file:// scheme is supported only on localhost")
        else:
            return self.open_local_file(url)

    def open_local_file(self, url):
        """Use local file."""
        import future.backports.email.utils as email_utils
        import mimetypes
        host, file = splithost(url)
        localname = url2pathname(file)
        try:
            stats = os.stat(localname)
        except OSError as e:
            raise URLError(e.strerror, e.filename)
        size = stats.st_size
        modified = email_utils.formatdate(stats.st_mtime, usegmt=True)
        mtype = mimetypes.guess_type(url)[0]
        headers = email.message_from_string(
            'Content-Type: %s\nContent-Length: %d\nLast-modified: %s\n' %
            (mtype or 'text/plain', size, modified))
        if not host:
            urlfile = file
            if file[:1] == '/':
                urlfile = 'file://' + file
            return addinfourl(open(localname, 'rb'), headers, urlfile)
        host, port = splitport(host)
        if (not port
           and socket.gethostbyname(host) in ((localhost(),) + thishost())):
            urlfile = file
            if file[:1] == '/':
                urlfile = 'file://' + file
            elif file[:2] == './':
                raise ValueError("local file url may start with / or file:. Unknown url of type: %s" % url)
            return addinfourl(open(localname, 'rb'), headers, urlfile)
        raise URLError('local file error: not on local host')

    def open_ftp(self, url):
        """Use FTP protocol."""
        if not isinstance(url, str):
            raise URLError('ftp error: proxy support for ftp protocol currently not implemented')
        import mimetypes
        host, path = splithost(url)
        if not host: raise URLError('ftp error: no host given')
        host, port = splitport(host)
        user, host = splituser(host)
        if user: user, passwd = splitpasswd(user)
        else: passwd = None
        host = unquote(host)
        user = unquote(user or '')
        passwd = unquote(passwd or '')
        host = socket.gethostbyname(host)
        if not port:
            import ftplib
            port = ftplib.FTP_PORT
        else:
            port = int(port)
        path, attrs = splitattr(path)
        path = unquote(path)
        dirs = path.split('/')
        dirs, file = dirs[:-1], dirs[-1]
        if dirs and not dirs[0]: dirs = dirs[1:]
        if dirs and not dirs[0]: dirs[0] = '/'
        key = user, host, port, '/'.join(dirs)
        # XXX thread unsafe!
        if len(self.ftpcache) > MAXFTPCACHE:
            # Prune the cache, rather arbitrarily
            for k in self.ftpcache.keys():
                if k != key:
                    v = self.ftpcache[k]
                    del self.ftpcache[k]
                    v.close()
        try:
            if key not in self.ftpcache:
                self.ftpcache[key] = \
                    ftpwrapper(user, passwd, host, port, dirs)
            if not file: type = 'D'
            else: type = 'I'
            for attr in attrs:
                attr, value = splitvalue(attr)
                if attr.lower() == 'type' and \
                   value in ('a', 'A', 'i', 'I', 'd', 'D'):
                    type = value.upper()
            (fp, retrlen) = self.ftpcache[key].retrfile(file, type)
            mtype = mimetypes.guess_type("ftp:" + url)[0]
            headers = ""
            if mtype:
                headers += "Content-Type: %s\n" % mtype
            if retrlen is not None and retrlen >= 0:
                headers += "Content-Length: %d\n" % retrlen
            headers = email.message_from_string(headers)
            return addinfourl(fp, headers, "ftp:" + url)
        except ftperrors() as exp:
            raise_with_traceback(URLError('ftp error %r' % exp))

    def open_data(self, url, data=None):
        """Use "data" URL."""
        if not isinstance(url, str):
            raise URLError('data error: proxy support for data protocol currently not implemented')
        # ignore POSTed data
        #
        # syntax of data URLs:
        # dataurl   := "data:" [ mediatype ] [ ";base64" ] "," data
        # mediatype := [ type "/" subtype ] *( ";" parameter )
        # data      := *urlchar
        # parameter := attribute "=" value
        try:
            [type, data] = url.split(',', 1)
        except ValueError:
            raise IOError('data error', 'bad data URL')
        if not type:
            type = 'text/plain;charset=US-ASCII'
        semi = type.rfind(';')
        if semi >= 0 and '=' not in type[semi:]:
            encoding = type[semi+1:]
            type = type[:semi]
        else:
            encoding = ''
        msg = []
        msg.append('Date: %s'%time.strftime('%a, %d %b %Y %H:%M:%S GMT',
                                            time.gmtime(time.time())))
        msg.append('Content-type: %s' % type)
        if encoding == 'base64':
            # XXX is this encoding/decoding ok?
            data = base64.decodebytes(data.encode('ascii')).decode('latin-1')
        else:
            data = unquote(data)
        msg.append('Content-Length: %d' % len(data))
        msg.append('')
        msg.append(data)
        msg = '\n'.join(msg)
        headers = email.message_from_string(msg)
        f = io.StringIO(msg)
        #f.fileno = None     # needed for addinfourl
        return addinfourl(f, headers, url)

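The data-URL grammar sketched in the comments of open_data() above can be exercised standalone. The helper below is a minimal sketch, not part of this module: parse_data_url is a hypothetical name, and it uses the stdlib urllib.parse.unquote rather than the backported one. It mirrors the same split-on-first-comma logic, including the default media type and the ";base64" detection.

```python
# Hypothetical standalone sketch of the data-URL split performed by
# open_data() above (scheme prefix "data:" already stripped).
import base64
from urllib.parse import unquote

def parse_data_url(url):
    """Return (mediatype, text) for the body of a data: URL."""
    type_, data = url.split(',', 1)
    if not type_:
        type_ = 'text/plain;charset=US-ASCII'
    # A trailing ';token' without '=' is an encoding marker, not a parameter.
    semi = type_.rfind(';')
    if semi >= 0 and '=' not in type_[semi:]:
        encoding = type_[semi + 1:]
        type_ = type_[:semi]
    else:
        encoding = ''
    if encoding == 'base64':
        data = base64.decodebytes(data.encode('ascii')).decode('latin-1')
    else:
        data = unquote(data)
    return type_, data
```

For example, `parse_data_url('text/plain;base64,SGVsbG8=')` yields `('text/plain', 'Hello')`.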

class FancyURLopener(URLopener):
    """Derived class with handlers for errors we can handle (perhaps)."""

    def __init__(self, *args, **kwargs):
        URLopener.__init__(self, *args, **kwargs)
        self.auth_cache = {}
        self.tries = 0
        self.maxtries = 10

    def http_error_default(self, url, fp, errcode, errmsg, headers):
        """Default error handling -- don't raise an exception."""
        return addinfourl(fp, headers, "http:" + url, errcode)

    def http_error_302(self, url, fp, errcode, errmsg, headers, data=None):
        """Error 302 -- relocated (temporarily)."""
        self.tries += 1
        if self.maxtries and self.tries >= self.maxtries:
            if hasattr(self, "http_error_500"):
                meth = self.http_error_500
            else:
                meth = self.http_error_default
            self.tries = 0
            return meth(url, fp, 500,
                        "Internal Server Error: Redirect Recursion", headers)
        result = self.redirect_internal(url, fp, errcode, errmsg, headers,
                                        data)
        self.tries = 0
        return result

    def redirect_internal(self, url, fp, errcode, errmsg, headers, data):
        if 'location' in headers:
            newurl = headers['location']
        elif 'uri' in headers:
            newurl = headers['uri']
        else:
            return
        fp.close()

        # In case the server sent a relative URL, join with original:
        newurl = urljoin(self.type + ":" + url, newurl)

        urlparts = urlparse(newurl)

        # For security reasons, we don't allow redirection to anything other
        # than http, https and ftp.

        # We are using newer HTTPError with older redirect_internal method
        # This older method will get deprecated in 3.3

        if urlparts.scheme not in ('http', 'https', 'ftp', ''):
            raise HTTPError(newurl, errcode,
                            errmsg +
                            " Redirection to url '%s' is not allowed." % newurl,
                            headers, fp)

        return self.open(newurl)

    def http_error_301(self, url, fp, errcode, errmsg, headers, data=None):
        """Error 301 -- also relocated (permanently)."""
        return self.http_error_302(url, fp, errcode, errmsg, headers, data)

    def http_error_303(self, url, fp, errcode, errmsg, headers, data=None):
        """Error 303 -- also relocated (essentially identical to 302)."""
        return self.http_error_302(url, fp, errcode, errmsg, headers, data)

    def http_error_307(self, url, fp, errcode, errmsg, headers, data=None):
        """Error 307 -- relocated, but turn POST into error."""
        if data is None:
            return self.http_error_302(url, fp, errcode, errmsg, headers, data)
        else:
            return self.http_error_default(url, fp, errcode, errmsg, headers)

    def http_error_401(self, url, fp, errcode, errmsg, headers, data=None,
            retry=False):
        """Error 401 -- authentication required.
        This function supports Basic authentication only."""
        if 'www-authenticate' not in headers:
            URLopener.http_error_default(self, url, fp,
                                         errcode, errmsg, headers)
        stuff = headers['www-authenticate']
        match = re.match('[ \t]*([^ \t]+)[ \t]+realm="([^"]*)"', stuff)
        if not match:
            URLopener.http_error_default(self, url, fp,
                                         errcode, errmsg, headers)
        scheme, realm = match.groups()
        if scheme.lower() != 'basic':
            URLopener.http_error_default(self, url, fp,
                                         errcode, errmsg, headers)
        if not retry:
            URLopener.http_error_default(self, url, fp, errcode, errmsg,
                    headers)
        name = 'retry_' + self.type + '_basic_auth'
        if data is None:
            return getattr(self,name)(url, realm)
        else:
            return getattr(self,name)(url, realm, data)

    def http_error_407(self, url, fp, errcode, errmsg, headers, data=None,
            retry=False):
        """Error 407 -- proxy authentication required.
        This function supports Basic authentication only."""
        if 'proxy-authenticate' not in headers:
            URLopener.http_error_default(self, url, fp,
                                         errcode, errmsg, headers)
        stuff = headers['proxy-authenticate']
        match = re.match('[ \t]*([^ \t]+)[ \t]+realm="([^"]*)"', stuff)
        if not match:
            URLopener.http_error_default(self, url, fp,
                                         errcode, errmsg, headers)
        scheme, realm = match.groups()
        if scheme.lower() != 'basic':
            URLopener.http_error_default(self, url, fp,
                                         errcode, errmsg, headers)
        if not retry:
            URLopener.http_error_default(self, url, fp, errcode, errmsg,
                    headers)
        name = 'retry_proxy_' + self.type + '_basic_auth'
        if data is None:
            return getattr(self,name)(url, realm)
        else:
            return getattr(self,name)(url, realm, data)

    def retry_proxy_http_basic_auth(self, url, realm, data=None):
        host, selector = splithost(url)
        newurl = 'http://' + host + selector
        proxy = self.proxies['http']
        urltype, proxyhost = splittype(proxy)
        proxyhost, proxyselector = splithost(proxyhost)
        i = proxyhost.find('@') + 1
        proxyhost = proxyhost[i:]
        user, passwd = self.get_user_passwd(proxyhost, realm, i)
        if not (user or passwd): return None
        proxyhost = "%s:%s@%s" % (quote(user, safe=''),
                                  quote(passwd, safe=''), proxyhost)
        self.proxies['http'] = 'http://' + proxyhost + proxyselector
        if data is None:
            return self.open(newurl)
        else:
            return self.open(newurl, data)

    def retry_proxy_https_basic_auth(self, url, realm, data=None):
        host, selector = splithost(url)
        newurl = 'https://' + host + selector
        proxy = self.proxies['https']
        urltype, proxyhost = splittype(proxy)
        proxyhost, proxyselector = splithost(proxyhost)
        i = proxyhost.find('@') + 1
        proxyhost = proxyhost[i:]
        user, passwd = self.get_user_passwd(proxyhost, realm, i)
        if not (user or passwd): return None
        proxyhost = "%s:%s@%s" % (quote(user, safe=''),
                                  quote(passwd, safe=''), proxyhost)
        self.proxies['https'] = 'https://' + proxyhost + proxyselector
        if data is None:
            return self.open(newurl)
        else:
            return self.open(newurl, data)

    def retry_http_basic_auth(self, url, realm, data=None):
        host, selector = splithost(url)
        i = host.find('@') + 1
        host = host[i:]
        user, passwd = self.get_user_passwd(host, realm, i)
        if not (user or passwd): return None
        host = "%s:%s@%s" % (quote(user, safe=''),
                             quote(passwd, safe=''), host)
        newurl = 'http://' + host + selector
        if data is None:
            return self.open(newurl)
        else:
            return self.open(newurl, data)

    def retry_https_basic_auth(self, url, realm, data=None):
        host, selector = splithost(url)
        i = host.find('@') + 1
        host = host[i:]
        user, passwd = self.get_user_passwd(host, realm, i)
        if not (user or passwd): return None
        host = "%s:%s@%s" % (quote(user, safe=''),
                             quote(passwd, safe=''), host)
        newurl = 'https://' + host + selector
        if data is None:
            return self.open(newurl)
        else:
            return self.open(newurl, data)

    def get_user_passwd(self, host, realm, clear_cache=0):
        key = realm + '@' + host.lower()
        if key in self.auth_cache:
            if clear_cache:
                del self.auth_cache[key]
            else:
                return self.auth_cache[key]
        user, passwd = self.prompt_user_passwd(host, realm)
        if user or passwd: self.auth_cache[key] = (user, passwd)
        return user, passwd

    def prompt_user_passwd(self, host, realm):
        """Override this in a GUI environment!"""
        import getpass
        try:
            user = input("Enter username for %s at %s: " % (realm, host))
            passwd = getpass.getpass("Enter password for %s in %s at %s: " %
                (user, realm, host))
            return user, passwd
        except KeyboardInterrupt:
            print()
            return None, None


# Utility functions

_localhost = None
def localhost():
    """Return the IP address of the magic hostname 'localhost'."""
    global _localhost
    if _localhost is None:
        _localhost = socket.gethostbyname('localhost')
    return _localhost

_thishost = None
def thishost():
    """Return the IP addresses of the current host."""
    global _thishost
    if _thishost is None:
        try:
            _thishost = tuple(socket.gethostbyname_ex(socket.gethostname())[2])
        except socket.gaierror:
            _thishost = tuple(socket.gethostbyname_ex('localhost')[2])
    return _thishost

_ftperrors = None
def ftperrors():
    """Return the set of errors raised by the FTP class."""
    global _ftperrors
    if _ftperrors is None:
        import ftplib
        _ftperrors = ftplib.all_errors
    return _ftperrors

_noheaders = None
def noheaders():
    """Return an empty email Message object."""
    global _noheaders
    if _noheaders is None:
        _noheaders = email.message_from_string("")
    return _noheaders


# Utility classes

class ftpwrapper(object):
    """Class used by open_ftp() for cache of open FTP connections."""

    def __init__(self, user, passwd, host, port, dirs, timeout=None,
                 persistent=True):
        self.user = user
        self.passwd = passwd
        self.host = host
        self.port = port
        self.dirs = dirs
        self.timeout = timeout
        self.refcount = 0
        self.keepalive = persistent
        self.init()

    def init(self):
        import ftplib
        self.busy = 0
        self.ftp = ftplib.FTP()
        self.ftp.connect(self.host, self.port, self.timeout)
        self.ftp.login(self.user, self.passwd)
        _target = '/'.join(self.dirs)
        self.ftp.cwd(_target)

    def retrfile(self, file, type):
        import ftplib
        self.endtransfer()
        if type in ('d', 'D'): cmd = 'TYPE A'; isdir = 1
        else: cmd = 'TYPE ' + type; isdir = 0
        try:
            self.ftp.voidcmd(cmd)
        except ftplib.all_errors:
            self.init()
            self.ftp.voidcmd(cmd)
        conn = None
        if file and not isdir:
            # Try to retrieve as a file
            try:
                cmd = 'RETR ' + file
                conn, retrlen = self.ftp.ntransfercmd(cmd)
            except ftplib.error_perm as reason:
                if str(reason)[:3] != '550':
                    raise_with_traceback(URLError('ftp error: %r' % reason))
        if not conn:
            # Set transfer mode to ASCII!
            self.ftp.voidcmd('TYPE A')
            # Try a directory listing. Verify that directory exists.
            if file:
                pwd = self.ftp.pwd()
                try:
                    try:
                        self.ftp.cwd(file)
                    except ftplib.error_perm as reason:
                        ### Was:
                        # raise URLError('ftp error: %r' % reason) from reason
                        exc = URLError('ftp error: %r' % reason)
                        exc.__cause__ = reason
                        raise exc
                finally:
                    self.ftp.cwd(pwd)
                cmd = 'LIST ' + file
            else:
                cmd = 'LIST'
            conn, retrlen = self.ftp.ntransfercmd(cmd)
        self.busy = 1

        ftpobj = addclosehook(conn.makefile('rb'), self.file_close)
        self.refcount += 1
        conn.close()
        # Pass back both a suitably decorated object and a retrieval length
        return (ftpobj, retrlen)

    def endtransfer(self):
        self.busy = 0

    def close(self):
        self.keepalive = False
        if self.refcount <= 0:
            self.real_close()

    def file_close(self):
        self.endtransfer()
        self.refcount -= 1
        if self.refcount <= 0 and not self.keepalive:
            self.real_close()

    def real_close(self):
        self.endtransfer()
        try:
            self.ftp.close()
        except ftperrors():
            pass

# Proxy handling
def getproxies_environment():
    """Return a dictionary of scheme -> proxy server URL mappings.

    Scan the environment for variables named <scheme>_proxy;
    this seems to be the standard convention.  If you need a
    different way, you can pass a proxies dictionary to the
    [Fancy]URLopener constructor.

    """
    proxies = {}
    for name, value in os.environ.items():
        name = name.lower()
        if value and name[-6:] == '_proxy':
            proxies[name[:-6]] = value
    return proxies

def proxy_bypass_environment(host):
    """Test if proxies should not be used for a particular host.

    Checks the environment for a variable named no_proxy, which should
    be a list of DNS suffixes separated by commas, or '*' for all hosts.
    """
    no_proxy = os.environ.get('no_proxy', '') or os.environ.get('NO_PROXY', '')
    # '*' is a special case meaning always bypass
    if no_proxy == '*':
        return 1
    # strip port off host
    hostonly, port = splitport(host)
    # check if the host ends with any of the DNS suffixes
    no_proxy_list = [proxy.strip() for proxy in no_proxy.split(',')]
    for name in no_proxy_list:
        if name and (hostonly.endswith(name) or host.endswith(name)):
            return 1
    # otherwise, don't bypass
    return 0
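The no_proxy matching rule implemented above can be summarized in a self-contained sketch (bypasses_proxy is a hypothetical name; unlike the function above it returns booleans and splits the port naively, so IPv6 literals are out of scope):

```python
# Standalone sketch of the no_proxy rule: '*' bypasses everything,
# otherwise the host (with and without its port) is compared against
# each comma-separated DNS suffix.
def bypasses_proxy(host, no_proxy):
    if no_proxy == '*':
        return True
    hostonly = host.rsplit(':', 1)[0] if ':' in host else host
    for name in (p.strip() for p in no_proxy.split(',')):
        if name and (hostonly.endswith(name) or host.endswith(name)):
            return True
    return False
```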


# The function below consults an OS X-specific data structure but is
# testable on all platforms
def _proxy_bypass_macosx_sysconf(host, proxy_settings):
    """
    Return True iff this host shouldn't be accessed using a proxy

    This function uses the MacOSX framework SystemConfiguration
    to fetch the proxy information.

    proxy_settings come from _scproxy._get_proxy_settings, or are mocked, e.g.:
    { 'exclude_simple': bool,
      'exceptions': ['foo.bar', '*.bar.com', '127.0.0.1', '10.1', '10.0/16']
    }
    """
    from fnmatch import fnmatch

    hostonly, port = splitport(host)

    def ip2num(ipAddr):
        parts = ipAddr.split('.')
        parts = list(map(int, parts))
        if len(parts) != 4:
            parts = (parts + [0, 0, 0, 0])[:4]
        return (parts[0] << 24) | (parts[1] << 16) | (parts[2] << 8) | parts[3]

    # Check for simple host names:
    if '.' not in host:
        if proxy_settings['exclude_simple']:
            return True

    hostIP = None

    for value in proxy_settings.get('exceptions', ()):
        # Items in the list are strings like these: *.local, 169.254/16
        if not value: continue

        m = re.match(r"(\d+(?:\.\d+)*)(/\d+)?", value)
        if m is not None:
            if hostIP is None:
                try:
                    hostIP = socket.gethostbyname(hostonly)
                    hostIP = ip2num(hostIP)
                except socket.error:
                    continue

            base = ip2num(m.group(1))
            mask = m.group(2)
            if mask is None:
                mask = 8 * (m.group(1).count('.') + 1)
            else:
                mask = int(mask[1:])
            mask = 32 - mask

            if (hostIP >> mask) == (base >> mask):
                return True

        elif fnmatch(host, value):
            return True

    return False


if sys.platform == 'darwin':
    from _scproxy import _get_proxy_settings, _get_proxies

    def proxy_bypass_macosx_sysconf(host):
        proxy_settings = _get_proxy_settings()
        return _proxy_bypass_macosx_sysconf(host, proxy_settings)

    def getproxies_macosx_sysconf():
        """Return a dictionary of scheme -> proxy server URL mappings.

        This function uses the MacOSX framework SystemConfiguration
        to fetch the proxy information.
        """
        return _get_proxies()



    def proxy_bypass(host):
        if getproxies_environment():
            return proxy_bypass_environment(host)
        else:
            return proxy_bypass_macosx_sysconf(host)

    def getproxies():
        return getproxies_environment() or getproxies_macosx_sysconf()


elif os.name == 'nt':
    def getproxies_registry():
        """Return a dictionary of scheme -> proxy server URL mappings.

        Win32 uses the registry to store proxies.

        """
        proxies = {}
        try:
            import winreg
        except ImportError:
            # Std module, so should be around - but you never know!
            return proxies
        try:
            internetSettings = winreg.OpenKey(winreg.HKEY_CURRENT_USER,
                r'Software\Microsoft\Windows\CurrentVersion\Internet Settings')
            proxyEnable = winreg.QueryValueEx(internetSettings,
                                               'ProxyEnable')[0]
            if proxyEnable:
                # Returned as Unicode but problems if not converted to ASCII
                proxyServer = str(winreg.QueryValueEx(internetSettings,
                                                       'ProxyServer')[0])
                if '=' in proxyServer:
                    # Per-protocol settings
                    for p in proxyServer.split(';'):
                        protocol, address = p.split('=', 1)
                        # See if address has a type:// prefix
                        if not re.match('^([^/:]+)://', address):
                            address = '%s://%s' % (protocol, address)
                        proxies[protocol] = address
                else:
                    # Use one setting for all protocols
                    if proxyServer[:5] == 'http:':
                        proxies['http'] = proxyServer
                    else:
                        proxies['http'] = 'http://%s' % proxyServer
                        proxies['https'] = 'https://%s' % proxyServer
                        proxies['ftp'] = 'ftp://%s' % proxyServer
            internetSettings.Close()
        except (WindowsError, ValueError, TypeError):
            # Either registry key not found etc, or the value in an
            # unexpected format.
            # proxies already set up to be empty so nothing to do
            pass
        return proxies

    def getproxies():
        """Return a dictionary of scheme -> proxy server URL mappings.

        Returns settings gathered from the environment, if specified,
        or the registry.

        """
        return getproxies_environment() or getproxies_registry()

    def proxy_bypass_registry(host):
        try:
            import winreg
        except ImportError:
            # Std modules, so should be around - but you never know!
            return 0
        try:
            internetSettings = winreg.OpenKey(winreg.HKEY_CURRENT_USER,
                r'Software\Microsoft\Windows\CurrentVersion\Internet Settings')
            proxyEnable = winreg.QueryValueEx(internetSettings,
                                               'ProxyEnable')[0]
            proxyOverride = str(winreg.QueryValueEx(internetSettings,
                                                     'ProxyOverride')[0])
            # ^^^^ Returned as Unicode but problems if not converted to ASCII
        except WindowsError:
            return 0
        if not proxyEnable or not proxyOverride:
            return 0
        # try to make a host list from name and IP address.
        rawHost, port = splitport(host)
        host = [rawHost]
        try:
            addr = socket.gethostbyname(rawHost)
            if addr != rawHost:
                host.append(addr)
        except socket.error:
            pass
        try:
            fqdn = socket.getfqdn(rawHost)
            if fqdn != rawHost:
                host.append(fqdn)
        except socket.error:
            pass
        # make a check value list from the registry entry: replace the
        # '<local>' string by the localhost entry and the corresponding
        # canonical entry.
        proxyOverride = proxyOverride.split(';')
        # now check if we match one of the registry values.
        for test in proxyOverride:
            if test == '<local>':
                if '.' not in rawHost:
                    return 1
            test = test.replace(".", r"\.")     # mask dots
            test = test.replace("*", r".*")     # change glob sequence
            test = test.replace("?", r".")      # change glob char
            for val in host:
                if re.match(test, val, re.I):
                    return 1
        return 0

    def proxy_bypass(host):
        """Return True if the host should be contacted directly,
        bypassing the proxy.

        Uses the environment settings, if any proxies are specified
        there, or the registry's ProxyOverride list otherwise.

        """
        if getproxies_environment():
            return proxy_bypass_environment(host)
        else:
            return proxy_bypass_registry(host)

else:
    # By default use environment variables
    getproxies = getproxies_environment
    proxy_bypass = proxy_bypass_environment
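The ProxyOverride entries consumed by proxy_bypass_registry above are shell-style globs; the function turns each one into a regular expression by escaping dots and widening '*' and '?'. A standalone sketch of that translation (glob_to_regex/override_matches are hypothetical names):

```python
# Sketch of the glob-to-regex translation used by proxy_bypass_registry.
import re

def glob_to_regex(pattern):
    return pattern.replace('.', r'\.').replace('*', '.*').replace('?', '.')

def override_matches(pattern, host):
    # re.match anchors at the start, mirroring the registry check above.
    return re.match(glob_to_regex(pattern), host, re.I) is not None
```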
# ==== future/backports/urllib/robotparser.py ====
from __future__ import absolute_import, division, unicode_literals
from future.builtins import str
""" robotparser.py

    Copyright (C) 2000  Bastian Kleineidam

    You can choose between two licenses when using this package:
    1) GNU GPLv2
    2) PSF license for Python 2.2

    The robots.txt Exclusion Protocol is implemented as specified in
    http://info.webcrawler.com/mak/projects/robots/norobots-rfc.html
"""

# Was: import urllib.parse, urllib.request
from future.backports import urllib
from future.backports.urllib import parse as _parse, request as _request
urllib.parse = _parse
urllib.request = _request


__all__ = ["RobotFileParser"]

class RobotFileParser(object):
    """ This class provides a set of methods to read, parse and answer
    questions about a single robots.txt file.

    """

    def __init__(self, url=''):
        self.entries = []
        self.default_entry = None
        self.disallow_all = False
        self.allow_all = False
        self.set_url(url)
        self.last_checked = 0

    def mtime(self):
        """Returns the time the robots.txt file was last fetched.

        This is useful for long-running web spiders that need to
        check for new robots.txt files periodically.

        """
        return self.last_checked

    def modified(self):
        """Sets the time the robots.txt file was last fetched to the
        current time.

        """
        import time
        self.last_checked = time.time()

    def set_url(self, url):
        """Sets the URL referring to a robots.txt file."""
        self.url = url
        self.host, self.path = urllib.parse.urlparse(url)[1:3]

    def read(self):
        """Reads the robots.txt URL and feeds it to the parser."""
        try:
            f = urllib.request.urlopen(self.url)
        except urllib.error.HTTPError as err:
            if err.code in (401, 403):
                self.disallow_all = True
            elif err.code >= 400:
                self.allow_all = True
        else:
            raw = f.read()
            self.parse(raw.decode("utf-8").splitlines())

    def _add_entry(self, entry):
        if "*" in entry.useragents:
            # the default entry is considered last
            if self.default_entry is None:
                # the first default entry wins
                self.default_entry = entry
        else:
            self.entries.append(entry)

    def parse(self, lines):
        """Parse the input lines from a robots.txt file.

        A user-agent: line is accepted even when it is not preceded
        by one or more blank lines.
        """
        # states:
        #   0: start state
        #   1: saw user-agent line
        #   2: saw an allow or disallow line
        state = 0
        entry = Entry()

        for line in lines:
            if not line:
                if state == 1:
                    entry = Entry()
                    state = 0
                elif state == 2:
                    self._add_entry(entry)
                    entry = Entry()
                    state = 0
            # remove optional comment and strip line
            i = line.find('#')
            if i >= 0:
                line = line[:i]
            line = line.strip()
            if not line:
                continue
            line = line.split(':', 1)
            if len(line) == 2:
                line[0] = line[0].strip().lower()
                line[1] = urllib.parse.unquote(line[1].strip())
                if line[0] == "user-agent":
                    if state == 2:
                        self._add_entry(entry)
                        entry = Entry()
                    entry.useragents.append(line[1])
                    state = 1
                elif line[0] == "disallow":
                    if state != 0:
                        entry.rulelines.append(RuleLine(line[1], False))
                        state = 2
                elif line[0] == "allow":
                    if state != 0:
                        entry.rulelines.append(RuleLine(line[1], True))
                        state = 2
        if state == 2:
            self._add_entry(entry)


    def can_fetch(self, useragent, url):
        """using the parsed robots.txt decide if useragent can fetch url"""
        if self.disallow_all:
            return False
        if self.allow_all:
            return True
        # search for given user agent matches
        # the first match counts
        parsed_url = urllib.parse.urlparse(urllib.parse.unquote(url))
        url = urllib.parse.urlunparse(('','',parsed_url.path,
            parsed_url.params,parsed_url.query, parsed_url.fragment))
        url = urllib.parse.quote(url)
        if not url:
            url = "/"
        for entry in self.entries:
            if entry.applies_to(useragent):
                return entry.allowance(url)
        # try the default entry last
        if self.default_entry:
            return self.default_entry.allowance(url)
        # agent not found ==> access granted
        return True

    def __str__(self):
        return ''.join([str(entry) + "\n" for entry in self.entries])


class RuleLine(object):
    """A rule line is a single "Allow:" (allowance==True) or "Disallow:"
       (allowance==False) followed by a path."""
    def __init__(self, path, allowance):
        if path == '' and not allowance:
            # an empty value means allow all
            allowance = True
        self.path = urllib.parse.quote(path)
        self.allowance = allowance

    def applies_to(self, filename):
        return self.path == "*" or filename.startswith(self.path)

    def __str__(self):
        return (self.allowance and "Allow" or "Disallow") + ": " + self.path


class Entry(object):
    """An entry has one or more user-agents and zero or more rulelines"""
    def __init__(self):
        self.useragents = []
        self.rulelines = []

    def __str__(self):
        ret = []
        for agent in self.useragents:
            ret.extend(["User-agent: ", agent, "\n"])
        for line in self.rulelines:
            ret.extend([str(line), "\n"])
        return ''.join(ret)

    def applies_to(self, useragent):
        """check if this entry applies to the specified agent"""
        # split the name token and make it lower case
        useragent = useragent.split("/")[0].lower()
        for agent in self.useragents:
            if agent == '*':
                # we have the catch-all agent
                return True
            agent = agent.lower()
            if agent in useragent:
                return True
        return False

    def allowance(self, filename):
        """Preconditions:
        - our agent applies to this entry
        - filename is URL decoded"""
        for line in self.rulelines:
            if line.applies_to(filename):
                return line.allowance
        return True
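The first-match evaluation implemented by Entry.allowance() and RuleLine.applies_to() above can be condensed into a standalone sketch (allowance here is a hypothetical free function, not the method): rules are scanned in order, the first path prefix that applies decides, and the default is to allow.

```python
# Minimal sketch of robots.txt rule evaluation as done by Entry/RuleLine:
# rules is a list of (path_prefix, allowed) pairs in file order.
def allowance(rules, path):
    for prefix, allowed in rules:
        if prefix == '*' or path.startswith(prefix):
            return allowed
    return True  # no rule applies => access granted
```

For example, with `[('/private/', False), ('/', True)]`, the path '/private/data' is disallowed while '/index.html' is allowed.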
# ==== future/backports/datetime.py ====
"""Concrete date/time and related types.

See http://www.iana.org/time-zones/repository/tz-link.html for
time zone and DST data sources.
"""
from __future__ import division
from __future__ import unicode_literals
from __future__ import print_function
from __future__ import absolute_import
from future.builtins import str
from future.builtins import bytes
from future.builtins import map
from future.builtins import round
from future.builtins import int
from future.builtins import object
from future.utils import native_str, PY2

import time as _time
import math as _math

def _cmp(x, y):
    return 0 if x == y else 1 if x > y else -1

MINYEAR = 1
MAXYEAR = 9999
_MAXORDINAL = 3652059 # date.max.toordinal()

# Utility functions, adapted from Python's Demo/classes/Dates.py, which
# also assumes the current Gregorian calendar indefinitely extended in
# both directions.  Difference:  Dates.py calls January 1 of year 0 day
# number 1.  The code here calls January 1 of year 1 day number 1.  This is
# to match the definition of the "proleptic Gregorian" calendar in Dershowitz
# and Reingold's "Calendrical Calculations", where it's the base calendar
# for all computations.  See the book for algorithms for converting between
# proleptic Gregorian ordinals and many other calendar systems.

_DAYS_IN_MONTH = [None, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

_DAYS_BEFORE_MONTH = [None]
dbm = 0
for dim in _DAYS_IN_MONTH[1:]:
    _DAYS_BEFORE_MONTH.append(dbm)
    dbm += dim
del dbm, dim

def _is_leap(year):
    "year -> 1 if leap year, else 0."
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
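The Gregorian leap rule above (divisible by 4, except centuries, except centuries divisible by 400) is easy to sanity-check. A standalone restatement, verified against well-known years:

```python
# Illustrative sketch: the same leap-year rule as _is_leap above, restated
# standalone so it can be checked against known years.
def is_leap(year):
    # divisible by 4, except centuries, except centuries divisible by 400
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

assert is_leap(2000)      # century divisible by 400
assert not is_leap(1900)  # century not divisible by 400
assert is_leap(2024)
assert not is_leap(2023)
```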

def _days_before_year(year):
    "year -> number of days before January 1st of year."
    y = year - 1
    return y*365 + y//4 - y//100 + y//400
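The closed form in `_days_before_year` counts one leap day per 4 years, minus one per 100, plus one per 400. A standalone restatement, checked against the 400-year-cycle identity asserted later in this file (`_DI400Y == 4 * _DI100Y + 1`):

```python
# Standalone restatement of _days_before_year: number of days in the
# proleptic Gregorian calendar before January 1 of the given year.
def days_before_year(year):
    y = year - 1
    return y * 365 + y // 4 - y // 100 + y // 400

assert days_before_year(1) == 0    # nothing precedes 01-Jan-0001
assert days_before_year(2) == 365  # year 1 is not a leap year
# a 400-year cycle has one more leap day than four 100-year cycles:
assert days_before_year(401) == 4 * days_before_year(101) + 1
```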

def _days_in_month(year, month):
    "year, month -> number of days in that month in that year."
    assert 1 <= month <= 12, month
    if month == 2 and _is_leap(year):
        return 29
    return _DAYS_IN_MONTH[month]

def _days_before_month(year, month):
    "year, month -> number of days in year preceding first day of month."
    assert 1 <= month <= 12, 'month must be in 1..12'
    return _DAYS_BEFORE_MONTH[month] + (month > 2 and _is_leap(year))

def _ymd2ord(year, month, day):
    "year, month, day -> ordinal, considering 01-Jan-0001 as day 1."
    assert 1 <= month <= 12, 'month must be in 1..12'
    dim = _days_in_month(year, month)
    assert 1 <= day <= dim, ('day must be in 1..%d' % dim)
    return (_days_before_year(year) +
            _days_before_month(year, month) +
            day)

_DI400Y = _days_before_year(401)    # number of days in 400 years
_DI100Y = _days_before_year(101)    #    "    "   "   " 100   "
_DI4Y   = _days_before_year(5)      #    "    "   "   "   4   "

# A 4-year cycle has an extra leap day over what we'd get from pasting
# together 4 single years.
assert _DI4Y == 4 * 365 + 1

# Similarly, a 400-year cycle has an extra leap day over what we'd get from
# pasting together 4 100-year cycles.
assert _DI400Y == 4 * _DI100Y + 1

# OTOH, a 100-year cycle has one fewer leap day than we'd get from
# pasting together 25 4-year cycles.
assert _DI100Y == 25 * _DI4Y - 1

def _ord2ymd(n):
    "ordinal -> (year, month, day), considering 01-Jan-0001 as day 1."

    # n is a 1-based index, starting at 1-Jan-1.  The pattern of leap years
    # repeats exactly every 400 years.  The basic strategy is to find the
    # closest 400-year boundary at or before n, then work with the offset
    # from that boundary to n.  Life is much clearer if we subtract 1 from
    # n first -- then the values of n at 400-year boundaries are exactly
    # those divisible by _DI400Y:
    #
    #     D  M   Y            n              n-1
    #     -- --- ----        ----------     ----------------
    #     31 Dec -400        -_DI400Y       -_DI400Y -1
    #      1 Jan -399         -_DI400Y +1   -_DI400Y      400-year boundary
    #     ...
    #     30 Dec  000        -1             -2
    #     31 Dec  000         0             -1
    #      1 Jan  001         1              0            400-year boundary
    #      2 Jan  001         2              1
    #      3 Jan  001         3              2
    #     ...
    #     31 Dec  400         _DI400Y        _DI400Y -1
    #      1 Jan  401         _DI400Y +1     _DI400Y      400-year boundary
    n -= 1
    n400, n = divmod(n, _DI400Y)
    year = n400 * 400 + 1   # ..., -399, 1, 401, ...

    # Now n is the (non-negative) offset, in days, from January 1 of year, to
    # the desired date.  Now compute how many 100-year cycles precede n.
    # Note that it's possible for n100 to equal 4!  In that case 4 full
    # 100-year cycles precede the desired day, which implies the desired
    # day is December 31 at the end of a 400-year cycle.
    n100, n = divmod(n, _DI100Y)

    # Now compute how many 4-year cycles precede it.
    n4, n = divmod(n, _DI4Y)

    # And now how many single years.  Again n1 can be 4, and again meaning
    # that the desired day is December 31 at the end of the 4-year cycle.
    n1, n = divmod(n, 365)

    year += n100 * 100 + n4 * 4 + n1
    if n1 == 4 or n100 == 4:
        assert n == 0
        return year-1, 12, 31

    # Now the year is correct, and n is the offset from January 1.  We find
    # the month via an estimate that's either exact or one too large.
    leapyear = n1 == 3 and (n4 != 24 or n100 == 3)
    assert leapyear == _is_leap(year)
    month = (n + 50) >> 5
    preceding = _DAYS_BEFORE_MONTH[month] + (month > 2 and leapyear)
    if preceding > n:  # estimate is too large
        month -= 1
        preceding -= _DAYS_IN_MONTH[month] + (month == 2 and leapyear)
    n -= preceding
    assert 0 <= n < _days_in_month(year, month)

    # Now the year and month are correct, and n is the offset from the
    # start of that month:  we're done!
    return year, month, n+1
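`_ymd2ord` and `_ord2ymd` mirror the standard library's `date.toordinal()` / `date.fromordinal()`, so the conversion can be round-trip checked against the stdlib:

```python
# Round-trip sanity check using the stdlib date type, which implements the
# same proleptic Gregorian ordinal scheme as _ymd2ord/_ord2ymd above.
from datetime import date

assert date(1, 1, 1).toordinal() == 1        # 01-Jan-0001 is day 1
d = date(2020, 2, 29)                         # a leap day
assert date.fromordinal(d.toordinal()) == d   # conversion is lossless
```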

# Month and day names.  For localized versions, see the calendar module.
_MONTHNAMES = [None, "Jan", "Feb", "Mar", "Apr", "May", "Jun",
                     "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
_DAYNAMES = [None, "Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]


def _build_struct_time(y, m, d, hh, mm, ss, dstflag):
    wday = (_ymd2ord(y, m, d) + 6) % 7
    dnum = _days_before_month(y, m) + d
    return _time.struct_time((y, m, d, hh, mm, ss, wday, dnum, dstflag))

def _format_time(hh, mm, ss, us):
    # Skip trailing microseconds when us==0.
    result = "%02d:%02d:%02d" % (hh, mm, ss)
    if us:
        result += ".%06d" % us
    return result
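The zero-padding and the "microseconds only when nonzero" behavior of `_format_time` can be demonstrated with a standalone copy of the same logic:

```python
# Standalone copy of the _format_time layout: HH:MM:SS, with a .%06d
# microsecond suffix appended only when us is nonzero.
def format_time(hh, mm, ss, us):
    result = "%02d:%02d:%02d" % (hh, mm, ss)
    if us:
        result += ".%06d" % us
    return result

assert format_time(9, 5, 3, 0) == "09:05:03"
assert format_time(9, 5, 3, 42) == "09:05:03.000042"
```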

# Correctly substitute for %z and %Z escapes in strftime formats.
def _wrap_strftime(object, format, timetuple):
    # Don't call utcoffset() or tzname() unless actually needed.
    freplace = None # the string to use for %f
    zreplace = None # the string to use for %z
    Zreplace = None # the string to use for %Z

    # Scan format for %z and %Z escapes, replacing as needed.
    newformat = []
    push = newformat.append
    i, n = 0, len(format)
    while i < n:
        ch = format[i]
        i += 1
        if ch == '%':
            if i < n:
                ch = format[i]
                i += 1
                if ch == 'f':
                    if freplace is None:
                        freplace = '%06d' % getattr(object,
                                                    'microsecond', 0)
                    newformat.append(freplace)
                elif ch == 'z':
                    if zreplace is None:
                        zreplace = ""
                        if hasattr(object, "utcoffset"):
                            offset = object.utcoffset()
                            if offset is not None:
                                sign = '+'
                                if offset.days < 0:
                                    offset = -offset
                                    sign = '-'
                                h, m = divmod(offset, timedelta(hours=1))
                                assert not m % timedelta(minutes=1), "whole minute"
                                m //= timedelta(minutes=1)
                                zreplace = '%c%02d%02d' % (sign, h, m)
                    assert '%' not in zreplace
                    newformat.append(zreplace)
                elif ch == 'Z':
                    if Zreplace is None:
                        Zreplace = ""
                        if hasattr(object, "tzname"):
                            s = object.tzname()
                            if s is not None:
                                # strftime is going to have a go at this:
                                # escape %
                                Zreplace = s.replace('%', '%%')
                    newformat.append(Zreplace)
                else:
                    push('%')
                    push(ch)
            else:
                push('%')
        else:
            push(ch)
    newformat = "".join(newformat)
    return _time.strftime(newformat, timetuple)
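`_wrap_strftime` exists because `time.strftime()` knows nothing about tz-aware objects, so `%z`/`%Z` are substituted before delegating. The stdlib `datetime` performs the same substitution, which illustrates the expected `%z` output format (sign plus HHMM):

```python
# The sign+HHMM form that _wrap_strftime builds for %z, demonstrated with
# the stdlib datetime (which does the equivalent substitution).
from datetime import datetime, timezone, timedelta

dt = datetime(2020, 1, 1, tzinfo=timezone(timedelta(hours=5, minutes=30)))
assert dt.strftime("%z") == "+0530"   # whole minutes, no colon
```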

def _call_tzinfo_method(tzinfo, methname, tzinfoarg):
    if tzinfo is None:
        return None
    return getattr(tzinfo, methname)(tzinfoarg)

# Just raise TypeError if the arg isn't None or a string.
def _check_tzname(name):
    if name is not None and not isinstance(name, str):
        raise TypeError("tzinfo.tzname() must return None or string, "
                        "not '%s'" % type(name))

# name is the offset-producing method, "utcoffset" or "dst".
# offset is what it returned.
# If offset isn't None or timedelta, raises TypeError.
# If offset is None, returns None.
# Else offset is checked for being in range, and a whole # of minutes.
# If it is, its integer value is returned.  Else ValueError is raised.
def _check_utc_offset(name, offset):
    assert name in ("utcoffset", "dst")
    if offset is None:
        return
    if not isinstance(offset, timedelta):
        raise TypeError("tzinfo.%s() must return None "
                        "or timedelta, not '%s'" % (name, type(offset)))
    if offset % timedelta(minutes=1) or offset.microseconds:
        raise ValueError("tzinfo.%s() must return a whole number "
                         "of minutes, got %s" % (name, offset))
    if not -timedelta(1) < offset < timedelta(1):
        raise ValueError("%s()=%s, must be must be strictly between"
                         " -timedelta(hours=24) and timedelta(hours=24)"
                         % (name, offset))

def _check_date_fields(year, month, day):
    if not isinstance(year, int):
        raise TypeError('int expected')
    if not MINYEAR <= year <= MAXYEAR:
        raise ValueError('year must be in %d..%d' % (MINYEAR, MAXYEAR), year)
    if not 1 <= month <= 12:
        raise ValueError('month must be in 1..12', month)
    dim = _days_in_month(year, month)
    if not 1 <= day <= dim:
        raise ValueError('day must be in 1..%d' % dim, day)

def _check_time_fields(hour, minute, second, microsecond):
    if not isinstance(hour, int):
        raise TypeError('int expected')
    if not 0 <= hour <= 23:
        raise ValueError('hour must be in 0..23', hour)
    if not 0 <= minute <= 59:
        raise ValueError('minute must be in 0..59', minute)
    if not 0 <= second <= 59:
        raise ValueError('second must be in 0..59', second)
    if not 0 <= microsecond <= 999999:
        raise ValueError('microsecond must be in 0..999999', microsecond)

def _check_tzinfo_arg(tz):
    if tz is not None and not isinstance(tz, tzinfo):
        raise TypeError("tzinfo argument must be None or of a tzinfo subclass")

def _cmperror(x, y):
    raise TypeError("can't compare '%s' to '%s'" % (
                    type(x).__name__, type(y).__name__))

class timedelta(object):
    """Represent the difference between two datetime objects.

    Supported operators:

    - add, subtract timedelta
    - unary plus, minus, abs
    - compare to timedelta
    - multiply, divide by int

    In addition, datetime supports subtraction of two datetime objects
    returning a timedelta, and addition or subtraction of a datetime
    and a timedelta giving a datetime.

    Representation: (days, seconds, microseconds).  Why?  Because I
    felt like it.
    """
    __slots__ = '_days', '_seconds', '_microseconds'

    def __new__(cls, days=0, seconds=0, microseconds=0,
                milliseconds=0, minutes=0, hours=0, weeks=0):
        # Doing this efficiently and accurately in C is going to be difficult
        # and error-prone, due to ubiquitous overflow possibilities, and that
        # C double doesn't have enough bits of precision to represent
        # microseconds over 10K years faithfully.  The code here tries to make
        # explicit where go-fast assumptions can be relied on, in order to
        # guide the C implementation; it's way more convoluted than speed-
        # ignoring auto-overflow-to-long idiomatic Python could be.

        # XXX Check that all inputs are ints or floats.

        # Final values, all integer.
        # s and us fit in 32-bit signed ints; d isn't bounded.
        d = s = us = 0

        # Normalize everything to days, seconds, microseconds.
        days += weeks*7
        seconds += minutes*60 + hours*3600
        microseconds += milliseconds*1000

        # Get rid of all fractions, and normalize s and us.
        # Take a deep breath <wink>.
        if isinstance(days, float):
            dayfrac, days = _math.modf(days)
            daysecondsfrac, daysecondswhole = _math.modf(dayfrac * (24.*3600.))
            assert daysecondswhole == int(daysecondswhole)  # can't overflow
            s = int(daysecondswhole)
            assert days == int(days)
            d = int(days)
        else:
            daysecondsfrac = 0.0
            d = days
        assert isinstance(daysecondsfrac, float)
        assert abs(daysecondsfrac) <= 1.0
        assert isinstance(d, int)
        assert abs(s) <= 24 * 3600
        # days isn't referenced again before redefinition

        if isinstance(seconds, float):
            secondsfrac, seconds = _math.modf(seconds)
            assert seconds == int(seconds)
            seconds = int(seconds)
            secondsfrac += daysecondsfrac
            assert abs(secondsfrac) <= 2.0
        else:
            secondsfrac = daysecondsfrac
        # daysecondsfrac isn't referenced again
        assert isinstance(secondsfrac, float)
        assert abs(secondsfrac) <= 2.0

        assert isinstance(seconds, int)
        days, seconds = divmod(seconds, 24*3600)
        d += days
        s += int(seconds)    # can't overflow
        assert isinstance(s, int)
        assert abs(s) <= 2 * 24 * 3600
        # seconds isn't referenced again before redefinition

        usdouble = secondsfrac * 1e6
        assert abs(usdouble) < 2.1e6    # exact value not critical
        # secondsfrac isn't referenced again

        if isinstance(microseconds, float):
            microseconds += usdouble
            microseconds = round(microseconds, 0)
            seconds, microseconds = divmod(microseconds, 1e6)
            assert microseconds == int(microseconds)
            assert seconds == int(seconds)
            days, seconds = divmod(seconds, 24.*3600.)
            assert days == int(days)
            assert seconds == int(seconds)
            d += int(days)
            s += int(seconds)   # can't overflow
            assert isinstance(s, int)
            assert abs(s) <= 3 * 24 * 3600
        else:
            seconds, microseconds = divmod(microseconds, 1000000)
            days, seconds = divmod(seconds, 24*3600)
            d += days
            s += int(seconds)    # can't overflow
            assert isinstance(s, int)
            assert abs(s) <= 3 * 24 * 3600
            microseconds = float(microseconds)
            microseconds += usdouble
            microseconds = round(microseconds, 0)
        assert abs(s) <= 3 * 24 * 3600
        assert abs(microseconds) < 3.1e6

        # Just a little bit of carrying possible for microseconds and seconds.
        assert isinstance(microseconds, float)
        assert int(microseconds) == microseconds
        us = int(microseconds)
        seconds, us = divmod(us, 1000000)
        s += seconds    # can't overflow
        assert isinstance(s, int)
        days, s = divmod(s, 24*3600)
        d += days

        assert isinstance(d, int)
        assert isinstance(s, int) and 0 <= s < 24*3600
        assert isinstance(us, int) and 0 <= us < 1000000

        self = object.__new__(cls)

        self._days = d
        self._seconds = s
        self._microseconds = us
        if abs(d) > 999999999:
            raise OverflowError("timedelta # of days is too large: %d" % d)

        return self

    def __repr__(self):
        if self._microseconds:
            return "%s(%d, %d, %d)" % ('datetime.' + self.__class__.__name__,
                                       self._days,
                                       self._seconds,
                                       self._microseconds)
        if self._seconds:
            return "%s(%d, %d)" % ('datetime.' + self.__class__.__name__,
                                   self._days,
                                   self._seconds)
        return "%s(%d)" % ('datetime.' + self.__class__.__name__, self._days)

    def __str__(self):
        mm, ss = divmod(self._seconds, 60)
        hh, mm = divmod(mm, 60)
        s = "%d:%02d:%02d" % (hh, mm, ss)
        if self._days:
            def plural(n):
                return n, abs(n) != 1 and "s" or ""
            s = ("%d day%s, " % plural(self._days)) + s
        if self._microseconds:
            s = s + ".%06d" % self._microseconds
        return s

    def total_seconds(self):
        """Total seconds in the duration."""
        return ((self.days * 86400 + self.seconds)*10**6 +
                self.microseconds) / 10**6
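The `total_seconds()` formula folds the `(days, seconds, microseconds)` representation into a single float; the stdlib `timedelta` uses the same computation, so the identity can be checked directly:

```python
# total_seconds() = ((days*86400 + seconds)*10**6 + microseconds) / 10**6,
# verified against the stdlib timedelta.
from datetime import timedelta

td = timedelta(days=1, seconds=30, microseconds=500000)
assert td.total_seconds() == 86430.5
assert td.total_seconds() == ((td.days * 86400 + td.seconds) * 10**6
                              + td.microseconds) / 10**6
```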

    # Read-only field accessors
    @property
    def days(self):
        """days"""
        return self._days

    @property
    def seconds(self):
        """seconds"""
        return self._seconds

    @property
    def microseconds(self):
        """microseconds"""
        return self._microseconds

    def __add__(self, other):
        if isinstance(other, timedelta):
            # for CPython compatibility, we cannot use
            # our __class__ here, but need a real timedelta
            return timedelta(self._days + other._days,
                             self._seconds + other._seconds,
                             self._microseconds + other._microseconds)
        return NotImplemented

    __radd__ = __add__

    def __sub__(self, other):
        if isinstance(other, timedelta):
            # for CPython compatibility, we cannot use
            # our __class__ here, but need a real timedelta
            return timedelta(self._days - other._days,
                             self._seconds - other._seconds,
                             self._microseconds - other._microseconds)
        return NotImplemented

    def __rsub__(self, other):
        if isinstance(other, timedelta):
            return -self + other
        return NotImplemented

    def __neg__(self):
        # for CPython compatibility, we cannot use
        # our __class__ here, but need a real timedelta
        return timedelta(-self._days,
                         -self._seconds,
                         -self._microseconds)

    def __pos__(self):
        return self

    def __abs__(self):
        if self._days < 0:
            return -self
        else:
            return self

    def __mul__(self, other):
        if isinstance(other, int):
            # for CPython compatibility, we cannot use
            # our __class__ here, but need a real timedelta
            return timedelta(self._days * other,
                             self._seconds * other,
                             self._microseconds * other)
        if isinstance(other, float):
            a, b = other.as_integer_ratio()
            return self * a / b
        return NotImplemented

    __rmul__ = __mul__

    def _to_microseconds(self):
        return ((self._days * (24*3600) + self._seconds) * 1000000 +
                self._microseconds)

    def __floordiv__(self, other):
        if not isinstance(other, (int, timedelta)):
            return NotImplemented
        usec = self._to_microseconds()
        if isinstance(other, timedelta):
            return usec // other._to_microseconds()
        if isinstance(other, int):
            return timedelta(0, 0, usec // other)

    def __truediv__(self, other):
        if not isinstance(other, (int, float, timedelta)):
            return NotImplemented
        usec = self._to_microseconds()
        if isinstance(other, timedelta):
            return usec / other._to_microseconds()
        if isinstance(other, int):
            return timedelta(0, 0, usec / other)
        if isinstance(other, float):
            a, b = other.as_integer_ratio()
            return timedelta(0, 0, b * usec / a)

    def __mod__(self, other):
        if isinstance(other, timedelta):
            r = self._to_microseconds() % other._to_microseconds()
            return timedelta(0, 0, r)
        return NotImplemented

    def __divmod__(self, other):
        if isinstance(other, timedelta):
            q, r = divmod(self._to_microseconds(),
                          other._to_microseconds())
            return q, timedelta(0, 0, r)
        return NotImplemented

    # Comparisons of timedelta objects with other.

    def __eq__(self, other):
        if isinstance(other, timedelta):
            return self._cmp(other) == 0
        else:
            return False

    def __ne__(self, other):
        if isinstance(other, timedelta):
            return self._cmp(other) != 0
        else:
            return True

    def __le__(self, other):
        if isinstance(other, timedelta):
            return self._cmp(other) <= 0
        else:
            _cmperror(self, other)

    def __lt__(self, other):
        if isinstance(other, timedelta):
            return self._cmp(other) < 0
        else:
            _cmperror(self, other)

    def __ge__(self, other):
        if isinstance(other, timedelta):
            return self._cmp(other) >= 0
        else:
            _cmperror(self, other)

    def __gt__(self, other):
        if isinstance(other, timedelta):
            return self._cmp(other) > 0
        else:
            _cmperror(self, other)

    def _cmp(self, other):
        assert isinstance(other, timedelta)
        return _cmp(self._getstate(), other._getstate())

    def __hash__(self):
        return hash(self._getstate())

    def __bool__(self):
        return (self._days != 0 or
                self._seconds != 0 or
                self._microseconds != 0)

    # Pickle support.

    def _getstate(self):
        return (self._days, self._seconds, self._microseconds)

    def __reduce__(self):
        return (self.__class__, self._getstate())

timedelta.min = timedelta(-999999999)
timedelta.max = timedelta(days=999999999, hours=23, minutes=59, seconds=59,
                          microseconds=999999)
timedelta.resolution = timedelta(microseconds=1)
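The elaborate `__new__` above normalizes every combination of inputs into the invariant asserted at its end: `0 <= seconds < 24*3600` and `0 <= microseconds < 10**6`, with any remainder carried into `days`. The stdlib `timedelta` normalizes identically, including for negative durations, which borrow from `days`:

```python
# Normalization invariant of timedelta.__new__, demonstrated with the
# stdlib timedelta (same representation and carrying rules).
from datetime import timedelta

td = timedelta(hours=25, milliseconds=1500)
# 25h -> 1 day + 3600s; 1500ms -> 1s + 500000us
assert (td.days, td.seconds, td.microseconds) == (1, 3601, 500000)

# negative durations normalize by borrowing from days:
td = timedelta(seconds=-1)
assert (td.days, td.seconds, td.microseconds) == (-1, 86399, 0)
```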

class date(object):
    """Concrete date type.

    Constructors:

    __new__()
    fromtimestamp()
    today()
    fromordinal()

    Operators:

    __repr__, __str__
    __cmp__, __hash__
    __add__, __radd__, __sub__ (add/radd only with timedelta arg)

    Methods:

    timetuple()
    toordinal()
    weekday()
    isoweekday(), isocalendar(), isoformat()
    ctime()
    strftime()

    Properties (readonly):
    year, month, day
    """
    __slots__ = '_year', '_month', '_day'

    def __new__(cls, year, month=None, day=None):
        """Constructor.

        Arguments:

        year, month, day (required, base 1)
        """
        if (isinstance(year, bytes) and len(year) == 4 and
            1 <= year[2] <= 12 and month is None):  # Month is sane
            # Pickle support
            self = object.__new__(cls)
            self.__setstate(year)
            return self
        _check_date_fields(year, month, day)
        self = object.__new__(cls)
        self._year = year
        self._month = month
        self._day = day
        return self

    # Additional constructors

    @classmethod
    def fromtimestamp(cls, t):
        "Construct a date from a POSIX timestamp (like time.time())."
        y, m, d, hh, mm, ss, weekday, jday, dst = _time.localtime(t)
        return cls(y, m, d)

    @classmethod
    def today(cls):
        "Construct a date from time.time()."
        t = _time.time()
        return cls.fromtimestamp(t)

    @classmethod
    def fromordinal(cls, n):
        """Construct a date from a proleptic Gregorian ordinal.

        January 1 of year 1 is day 1.  Only the year, month and day are
        non-zero in the result.
        """
        y, m, d = _ord2ymd(n)
        return cls(y, m, d)

    # Conversions to string

    def __repr__(self):
        """Convert to formal string, for repr().

        >>> dt = datetime(2010, 1, 1)
        >>> repr(dt)
        'datetime.datetime(2010, 1, 1, 0, 0)'

        >>> dt = datetime(2010, 1, 1, tzinfo=timezone.utc)
        >>> repr(dt)
        'datetime.datetime(2010, 1, 1, 0, 0, tzinfo=datetime.timezone.utc)'
        """
        return "%s(%d, %d, %d)" % ('datetime.' + self.__class__.__name__,
                                   self._year,
                                   self._month,
                                   self._day)
    # XXX These shouldn't depend on time.localtime(), because that
    # clips the usable dates to [1970 .. 2038).  At least ctime() is
    # easily done without using strftime() -- that's better too because
    # strftime("%c", ...) is locale specific.


    def ctime(self):
        "Return ctime() style string."
        weekday = self.toordinal() % 7 or 7
        return "%s %s %2d 00:00:00 %04d" % (
            _DAYNAMES[weekday],
            _MONTHNAMES[self._month],
            self._day, self._year)

    def strftime(self, fmt):
        "Format using strftime()."
        return _wrap_strftime(self, fmt, self.timetuple())

    def __format__(self, fmt):
        if len(fmt) != 0:
            return self.strftime(fmt)
        return str(self)

    def isoformat(self):
        """Return the date formatted according to ISO.

        This is 'YYYY-MM-DD'.

        References:
        - http://www.w3.org/TR/NOTE-datetime
        - http://www.cl.cam.ac.uk/~mgk25/iso-time.html
        """
        return "%04d-%02d-%02d" % (self._year, self._month, self._day)

    __str__ = isoformat

    # Read-only field accessors
    @property
    def year(self):
        """year (1-9999)"""
        return self._year

    @property
    def month(self):
        """month (1-12)"""
        return self._month

    @property
    def day(self):
        """day (1-31)"""
        return self._day

    # Standard conversions, __cmp__, __hash__ (and helpers)

    def timetuple(self):
        "Return local time tuple compatible with time.localtime()."
        return _build_struct_time(self._year, self._month, self._day,
                                  0, 0, 0, -1)

    def toordinal(self):
        """Return proleptic Gregorian ordinal for the year, month and day.

        January 1 of year 1 is day 1.  Only the year, month and day values
        contribute to the result.
        """
        return _ymd2ord(self._year, self._month, self._day)

    def replace(self, year=None, month=None, day=None):
        """Return a new date with new values for the specified fields."""
        if year is None:
            year = self._year
        if month is None:
            month = self._month
        if day is None:
            day = self._day
        _check_date_fields(year, month, day)
        return date(year, month, day)

    # Comparisons of date objects with other.

    def __eq__(self, other):
        if isinstance(other, date):
            return self._cmp(other) == 0
        return NotImplemented

    def __ne__(self, other):
        if isinstance(other, date):
            return self._cmp(other) != 0
        return NotImplemented

    def __le__(self, other):
        if isinstance(other, date):
            return self._cmp(other) <= 0
        return NotImplemented

    def __lt__(self, other):
        if isinstance(other, date):
            return self._cmp(other) < 0
        return NotImplemented

    def __ge__(self, other):
        if isinstance(other, date):
            return self._cmp(other) >= 0
        return NotImplemented

    def __gt__(self, other):
        if isinstance(other, date):
            return self._cmp(other) > 0
        return NotImplemented

    def _cmp(self, other):
        assert isinstance(other, date)
        y, m, d = self._year, self._month, self._day
        y2, m2, d2 = other._year, other._month, other._day
        return _cmp((y, m, d), (y2, m2, d2))

    def __hash__(self):
        "Hash."
        return hash(self._getstate())

    # Computations

    def __add__(self, other):
        "Add a date to a timedelta."
        if isinstance(other, timedelta):
            o = self.toordinal() + other.days
            if 0 < o <= _MAXORDINAL:
                return date.fromordinal(o)
            raise OverflowError("result out of range")
        return NotImplemented

    __radd__ = __add__

    def __sub__(self, other):
        """Subtract two dates, or a date and a timedelta."""
        if isinstance(other, timedelta):
            return self + timedelta(-other.days)
        if isinstance(other, date):
            days1 = self.toordinal()
            days2 = other.toordinal()
            return timedelta(days1 - days2)
        return NotImplemented

    def weekday(self):
        "Return day of the week, where Monday == 0 ... Sunday == 6."
        return (self.toordinal() + 6) % 7

    # Day-of-the-week and week-of-the-year, according to ISO

    def isoweekday(self):
        "Return day of the week, where Monday == 1 ... Sunday == 7."
        # 1-Jan-0001 is a Monday
        return self.toordinal() % 7 or 7
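Both weekday accessors follow from the ordinal: day 1 (01-Jan-0001) is a Monday, so `isoweekday()` is just `toordinal() % 7` with 0 mapped to 7. Checked against the stdlib on a known Monday:

```python
# weekday()/isoweekday() derived from the ordinal, verified via the stdlib.
from datetime import date

d = date(2001, 1, 1)                 # a Monday
assert d.weekday() == 0              # Monday == 0 in weekday()
assert d.isoweekday() == 1           # Monday == 1 in isoweekday()
assert (d.toordinal() % 7 or 7) == d.isoweekday()
```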

    def isocalendar(self):
        """Return a 3-tuple containing ISO year, week number, and weekday.

        The first ISO week of the year is the (Mon-Sun) week
        containing the year's first Thursday; everything else derives
        from that.

        The first week is 1; Monday is 1 ... Sunday is 7.

        ISO calendar algorithm taken from
        http://www.phys.uu.nl/~vgent/calendar/isocalendar.htm
        """
        year = self._year
        week1monday = _isoweek1monday(year)
        today = _ymd2ord(self._year, self._month, self._day)
        # Internally, week and day have origin 0
        week, day = divmod(today - week1monday, 7)
        if week < 0:
            year -= 1
            week1monday = _isoweek1monday(year)
            week, day = divmod(today - week1monday, 7)
        elif week >= 52:
            if today >= _isoweek1monday(year+1):
                year += 1
                week = 0
        return year, week+1, day+1
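Because the first ISO week is the one containing the year's first Thursday, an ISO week can cross calendar-year boundaries; 2016-01-01 (a Friday) belongs to week 53 of ISO year 2015. The stdlib implements the same algorithm:

```python
# ISO weeks crossing calendar years, demonstrated with the stdlib.
from datetime import date

y, w, d = date(2016, 1, 1).isocalendar()
assert (y, w, d) == (2015, 53, 5)           # Friday of the previous ISO year
assert tuple(date(2016, 1, 4).isocalendar())[:2] == (2016, 1)  # Jan 4 is always week 1
```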

    # Pickle support.

    def _getstate(self):
        yhi, ylo = divmod(self._year, 256)
        return bytes([yhi, ylo, self._month, self._day]),

    def __setstate(self, string):
        if len(string) != 4 or not (1 <= string[2] <= 12):
            raise TypeError("not enough arguments")
        yhi, ylo, self._month, self._day = string
        self._year = yhi * 256 + ylo

    def __reduce__(self):
        return (self.__class__, self._getstate())

_date_class = date  # so functions w/ args named "date" can get at the class

date.min = date(1, 1, 1)
date.max = date(9999, 12, 31)
date.resolution = timedelta(days=1)

class tzinfo(object):
    """Abstract base class for time zone info classes.

    Subclasses must override the tzname(), utcoffset() and dst() methods.
    """
    __slots__ = ()
    def tzname(self, dt):
        "datetime -> string name of time zone."
        raise NotImplementedError("tzinfo subclass must override tzname()")

    def utcoffset(self, dt):
        "datetime -> minutes east of UTC (negative for west of UTC)"
        raise NotImplementedError("tzinfo subclass must override utcoffset()")

    def dst(self, dt):
        """datetime -> DST offset in minutes east of UTC.

        Return 0 if DST not in effect.  utcoffset() must include the DST
        offset.
        """
        raise NotImplementedError("tzinfo subclass must override dst()")

    def fromutc(self, dt):
        "datetime in UTC -> datetime in local time."

        if not isinstance(dt, datetime):
            raise TypeError("fromutc() requires a datetime argument")
        if dt.tzinfo is not self:
            raise ValueError("dt.tzinfo is not self")

        dtoff = dt.utcoffset()
        if dtoff is None:
            raise ValueError("fromutc() requires a non-None utcoffset() "
                             "result")

        # See the long comment block at the end of this file for an
        # explanation of this algorithm.
        dtdst = dt.dst()
        if dtdst is None:
            raise ValueError("fromutc() requires a non-None dst() result")
        delta = dtoff - dtdst
        if delta:
            dt += delta
            dtdst = dt.dst()
            if dtdst is None:
                raise ValueError("fromutc(): dt.dst gave inconsistent "
                                 "results; cannot convert")
        return dt + dtdst

    # Pickle support.

    def __reduce__(self):
        getinitargs = getattr(self, "__getinitargs__", None)
        if getinitargs:
            args = getinitargs()
        else:
            args = ()
        getstate = getattr(self, "__getstate__", None)
        if getstate:
            state = getstate()
        else:
            state = getattr(self, "__dict__", None) or None
        if state is None:
            return (self.__class__, args)
        else:
            return (self.__class__, args, state)

_tzinfo_class = tzinfo
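
# Illustrative sketch (not part of this module's API): a minimal concrete
# tzinfo subclass with a fixed offset and no DST, showing the three methods
# subclasses must override and how the default fromutc() defined above
# shifts a UTC datetime into local wall time.
def _example_fixed_offset_tzinfo():
    class _FixedZone(tzinfo):  # hypothetical demo class
        def __init__(self, hours, name):
            self._off = timedelta(hours=hours)
            self._nm = name
        def utcoffset(self, dt):
            return self._off
        def dst(self, dt):
            return timedelta(0)  # no DST, so fromutc() shifts by _off alone
        def tzname(self, dt):
            return self._nm
    est = _FixedZone(-5, "EST")
    noon_utc = datetime(2020, 1, 1, 12, 0, tzinfo=est)  # wall time is UTC here
    return est.fromutc(noon_utc)  # 12:00 UTC -> 07:00 local, same tzinfo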

class time(object):
    """Time with time zone.

    Constructors:

    __new__()

    Operators:

    __repr__, __str__
    __eq__, __ne__, __lt__, __le__, __gt__, __ge__, __hash__

    Methods:

    strftime()
    isoformat()
    utcoffset()
    tzname()
    dst()

    Properties (readonly):
    hour, minute, second, microsecond, tzinfo
    """

    def __new__(cls, hour=0, minute=0, second=0, microsecond=0, tzinfo=None):
        """Constructor.

        Arguments:

        hour, minute, second, microsecond (default to zero)
        tzinfo (defaults to None)
        """
        self = object.__new__(cls)
        if isinstance(hour, bytes) and len(hour) == 6:
            # Pickle support
            self.__setstate(hour, minute or None)
            return self
        _check_tzinfo_arg(tzinfo)
        _check_time_fields(hour, minute, second, microsecond)
        self._hour = hour
        self._minute = minute
        self._second = second
        self._microsecond = microsecond
        self._tzinfo = tzinfo
        return self

    # Read-only field accessors
    @property
    def hour(self):
        """hour (0-23)"""
        return self._hour

    @property
    def minute(self):
        """minute (0-59)"""
        return self._minute

    @property
    def second(self):
        """second (0-59)"""
        return self._second

    @property
    def microsecond(self):
        """microsecond (0-999999)"""
        return self._microsecond

    @property
    def tzinfo(self):
        """timezone info object"""
        return self._tzinfo

    # Standard conversions, __hash__ (and helpers)

    # Comparisons of time objects with other.

    def __eq__(self, other):
        if isinstance(other, time):
            return self._cmp(other, allow_mixed=True) == 0
        else:
            return False

    def __ne__(self, other):
        if isinstance(other, time):
            return self._cmp(other, allow_mixed=True) != 0
        else:
            return True

    def __le__(self, other):
        if isinstance(other, time):
            return self._cmp(other) <= 0
        else:
            _cmperror(self, other)

    def __lt__(self, other):
        if isinstance(other, time):
            return self._cmp(other) < 0
        else:
            _cmperror(self, other)

    def __ge__(self, other):
        if isinstance(other, time):
            return self._cmp(other) >= 0
        else:
            _cmperror(self, other)

    def __gt__(self, other):
        if isinstance(other, time):
            return self._cmp(other) > 0
        else:
            _cmperror(self, other)

    def _cmp(self, other, allow_mixed=False):
        assert isinstance(other, time)
        mytz = self._tzinfo
        ottz = other._tzinfo
        myoff = otoff = None

        if mytz is ottz:
            base_compare = True
        else:
            myoff = self.utcoffset()
            otoff = other.utcoffset()
            base_compare = myoff == otoff

        if base_compare:
            return _cmp((self._hour, self._minute, self._second,
                         self._microsecond),
                       (other._hour, other._minute, other._second,
                        other._microsecond))
        if myoff is None or otoff is None:
            if allow_mixed:
                return 2 # arbitrary non-zero value
            else:
                raise TypeError("cannot compare naive and aware times")
        myhhmm = self._hour * 60 + self._minute - myoff//timedelta(minutes=1)
        othhmm = other._hour * 60 + other._minute - otoff//timedelta(minutes=1)
        return _cmp((myhhmm, self._second, self._microsecond),
                    (othhmm, other._second, other._microsecond))

    def __hash__(self):
        """Hash."""
        tzoff = self.utcoffset()
        if not tzoff: # zero or None
            return hash(self._getstate()[0])
        h, m = divmod(timedelta(hours=self.hour, minutes=self.minute) - tzoff,
                      timedelta(hours=1))
        assert not m % timedelta(minutes=1), "whole minute"
        m //= timedelta(minutes=1)
        if 0 <= h < 24:
            return hash(time(h, m, self.second, self.microsecond))
        return hash((h, m, self.second, self.microsecond))

    # Conversion to string

    def _tzstr(self, sep=":"):
        """Return formatted timezone offset (+xx:xx) or None."""
        off = self.utcoffset()
        if off is not None:
            if off.days < 0:
                sign = "-"
                off = -off
            else:
                sign = "+"
            hh, mm = divmod(off, timedelta(hours=1))
            assert not mm % timedelta(minutes=1), "whole minute"
            mm //= timedelta(minutes=1)
            assert 0 <= hh < 24
            off = "%s%02d%s%02d" % (sign, hh, sep, mm)
        return off

    def __repr__(self):
        """Convert to formal string, for repr()."""
        if self._microsecond != 0:
            s = ", %d, %d" % (self._second, self._microsecond)
        elif self._second != 0:
            s = ", %d" % self._second
        else:
            s = ""
        s= "%s(%d, %d%s)" % ('datetime.' + self.__class__.__name__,
                             self._hour, self._minute, s)
        if self._tzinfo is not None:
            assert s[-1:] == ")"
            s = s[:-1] + ", tzinfo=%r" % self._tzinfo + ")"
        return s

    def isoformat(self):
        """Return the time formatted according to ISO.

        This is 'HH:MM:SS.mmmmmm+zz:zz', or 'HH:MM:SS+zz:zz' if
        self.microsecond == 0.
        """
        s = _format_time(self._hour, self._minute, self._second,
                         self._microsecond)
        tz = self._tzstr()
        if tz:
            s += tz
        return s

    __str__ = isoformat

    def strftime(self, fmt):
        """Format using strftime().  The date part of the timestamp passed
        to underlying strftime should not be used.
        """
        # The year must be >= 1000 else Python's strftime implementation
        # can raise a bogus exception.
        timetuple = (1900, 1, 1,
                     self._hour, self._minute, self._second,
                     0, 1, -1)
        return _wrap_strftime(self, fmt, timetuple)

    def __format__(self, fmt):
        if len(fmt) != 0:
            return self.strftime(fmt)
        return str(self)

    # Timezone functions

    def utcoffset(self):
        """Return the timezone offset in minutes east of UTC (negative west of
        UTC)."""
        if self._tzinfo is None:
            return None
        offset = self._tzinfo.utcoffset(None)
        _check_utc_offset("utcoffset", offset)
        return offset

    def tzname(self):
        """Return the timezone name.

        Note that the name is 100% informational -- there's no requirement that
        it mean anything in particular. For example, "GMT", "UTC", "-500",
        "-5:00", "EDT", "US/Eastern", "America/New York" are all valid replies.
        """
        if self._tzinfo is None:
            return None
        name = self._tzinfo.tzname(None)
        _check_tzname(name)
        return name

    def dst(self):
        """Return 0 if DST is not in effect, or the DST offset (in minutes
        eastward) if DST is in effect.

        This is purely informational; the DST offset has already been added to
        the UTC offset returned by utcoffset() if applicable, so there's no
        need to consult dst() unless you're interested in displaying the DST
        info.
        """
        if self._tzinfo is None:
            return None
        offset = self._tzinfo.dst(None)
        _check_utc_offset("dst", offset)
        return offset

    def replace(self, hour=None, minute=None, second=None, microsecond=None,
                tzinfo=True):
        """Return a new time with new values for the specified fields."""
        if hour is None:
            hour = self.hour
        if minute is None:
            minute = self.minute
        if second is None:
            second = self.second
        if microsecond is None:
            microsecond = self.microsecond
        if tzinfo is True:
            tzinfo = self.tzinfo
        _check_time_fields(hour, minute, second, microsecond)
        _check_tzinfo_arg(tzinfo)
        return time(hour, minute, second, microsecond, tzinfo)

    def __bool__(self):
        if self.second or self.microsecond:
            return True
        offset = self.utcoffset() or timedelta(0)
        return timedelta(hours=self.hour, minutes=self.minute) != offset

    # Pickle support.

    def _getstate(self):
        us2, us3 = divmod(self._microsecond, 256)
        us1, us2 = divmod(us2, 256)
        basestate = bytes([self._hour, self._minute, self._second,
                           us1, us2, us3])
        if self._tzinfo is None:
            return (basestate,)
        else:
            return (basestate, self._tzinfo)

    def __setstate(self, string, tzinfo):
        if len(string) != 6 or string[0] >= 24:
            raise TypeError("an integer is required")
        (self._hour, self._minute, self._second,
         us1, us2, us3) = string
        self._microsecond = (((us1 << 8) | us2) << 8) | us3
        if tzinfo is None or isinstance(tzinfo, _tzinfo_class):
            self._tzinfo = tzinfo
        else:
            raise TypeError("bad tzinfo state arg %r" % tzinfo)

    def __reduce__(self):
        return (time, self._getstate())

_time_class = time  # so functions w/ args named "time" can get at the class

time.min = time(0, 0, 0)
time.max = time(23, 59, 59, 999999)
time.resolution = timedelta(microseconds=1)
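
# Illustrative sketch (not part of the API): per _cmp()'s allow_mixed
# handling above, == and != are defined between naive and aware times
# (they simply compare unequal), while ordering comparisons raise
# TypeError.
def _example_mixed_time_comparison():
    naive = time(12, 30)
    aware = time(12, 30, tzinfo=timezone(timedelta(hours=1)))
    outcome = {"eq": naive == aware, "ne": naive != aware, "lt_raises": False}
    try:
        naive < aware
    except TypeError:
        # ordering a naive time against an aware one is undefined
        outcome["lt_raises"] = True
    return outcome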

class datetime(date):
    """datetime(year, month, day[, hour[, minute[, second[, microsecond[,tzinfo]]]]])

    The year, month and day arguments are required. tzinfo may be None, or an
    instance of a tzinfo subclass. The remaining arguments may be ints.
    """

    __slots__ = date.__slots__ + (
        '_hour', '_minute', '_second',
        '_microsecond', '_tzinfo')
    def __new__(cls, year, month=None, day=None, hour=0, minute=0, second=0,
                microsecond=0, tzinfo=None):
        if isinstance(year, bytes) and len(year) == 10:
            # Pickle support
            self = date.__new__(cls, year[:4])
            self.__setstate(year, month)
            return self
        _check_tzinfo_arg(tzinfo)
        _check_time_fields(hour, minute, second, microsecond)
        self = date.__new__(cls, year, month, day)
        self._hour = hour
        self._minute = minute
        self._second = second
        self._microsecond = microsecond
        self._tzinfo = tzinfo
        return self

    # Read-only field accessors
    @property
    def hour(self):
        """hour (0-23)"""
        return self._hour

    @property
    def minute(self):
        """minute (0-59)"""
        return self._minute

    @property
    def second(self):
        """second (0-59)"""
        return self._second

    @property
    def microsecond(self):
        """microsecond (0-999999)"""
        return self._microsecond

    @property
    def tzinfo(self):
        """timezone info object"""
        return self._tzinfo

    @classmethod
    def fromtimestamp(cls, t, tz=None):
        """Construct a datetime from a POSIX timestamp (like time.time()).

        A timezone info object may be passed in as well.
        """

        _check_tzinfo_arg(tz)

        converter = _time.localtime if tz is None else _time.gmtime

        t, frac = divmod(t, 1.0)
        us = int(frac * 1e6)

        # If timestamp is less than one microsecond smaller than a
        # full second, us can be rounded up to 1000000.  In this case,
        # roll over to seconds, otherwise, ValueError is raised
        # by the constructor.
        if us == 1000000:
            t += 1
            us = 0
        y, m, d, hh, mm, ss, weekday, jday, dst = converter(t)
        ss = min(ss, 59)    # clamp out leap seconds if the platform has them
        result = cls(y, m, d, hh, mm, ss, us, tz)
        if tz is not None:
            result = tz.fromutc(result)
        return result

    @classmethod
    def utcfromtimestamp(cls, t):
        "Construct a UTC datetime from a POSIX timestamp (like time.time())."
        t, frac = divmod(t, 1.0)
        us = int(frac * 1e6)

        # If timestamp is less than one microsecond smaller than a
        # full second, us can be rounded up to 1000000.  In this case,
        # roll over to seconds, otherwise, ValueError is raised
        # by the constructor.
        if us == 1000000:
            t += 1
            us = 0
        y, m, d, hh, mm, ss, weekday, jday, dst = _time.gmtime(t)
        ss = min(ss, 59)    # clamp out leap seconds if the platform has them
        return cls(y, m, d, hh, mm, ss, us)

    # XXX This is supposed to do better than we *can* do by using time.time(),
    # XXX if the platform supports a more accurate way.  The C implementation
    # XXX uses gettimeofday on platforms that have it, but that isn't
    # XXX available from Python.  So now() may return different results
    # XXX across the implementations.
    @classmethod
    def now(cls, tz=None):
        "Construct a datetime from time.time() and optional time zone info."
        t = _time.time()
        return cls.fromtimestamp(t, tz)

    @classmethod
    def utcnow(cls):
        "Construct a UTC datetime from time.time()."
        t = _time.time()
        return cls.utcfromtimestamp(t)

    @classmethod
    def combine(cls, date, time):
        "Construct a datetime from a given date and a given time."
        if not isinstance(date, _date_class):
            raise TypeError("date argument must be a date instance")
        if not isinstance(time, _time_class):
            raise TypeError("time argument must be a time instance")
        return cls(date.year, date.month, date.day,
                   time.hour, time.minute, time.second, time.microsecond,
                   time.tzinfo)

    def timetuple(self):
        "Return local time tuple compatible with time.localtime()."
        dst = self.dst()
        if dst is None:
            dst = -1
        elif dst:
            dst = 1
        else:
            dst = 0
        return _build_struct_time(self.year, self.month, self.day,
                                  self.hour, self.minute, self.second,
                                  dst)

    def timestamp(self):
        "Return POSIX timestamp as float"
        if self._tzinfo is None:
            return _time.mktime((self.year, self.month, self.day,
                                 self.hour, self.minute, self.second,
                                 -1, -1, -1)) + self.microsecond / 1e6
        else:
            return (self - _EPOCH).total_seconds()

    def utctimetuple(self):
        "Return UTC time tuple compatible with time.gmtime()."
        offset = self.utcoffset()
        if offset:
            self -= offset
        y, m, d = self.year, self.month, self.day
        hh, mm, ss = self.hour, self.minute, self.second
        return _build_struct_time(y, m, d, hh, mm, ss, 0)

    def date(self):
        "Return the date part."
        return date(self._year, self._month, self._day)

    def time(self):
        "Return the time part, with tzinfo None."
        return time(self.hour, self.minute, self.second, self.microsecond)

    def timetz(self):
        "Return the time part, with same tzinfo."
        return time(self.hour, self.minute, self.second, self.microsecond,
                    self._tzinfo)

    def replace(self, year=None, month=None, day=None, hour=None,
                minute=None, second=None, microsecond=None, tzinfo=True):
        """Return a new datetime with new values for the specified fields."""
        if year is None:
            year = self.year
        if month is None:
            month = self.month
        if day is None:
            day = self.day
        if hour is None:
            hour = self.hour
        if minute is None:
            minute = self.minute
        if second is None:
            second = self.second
        if microsecond is None:
            microsecond = self.microsecond
        if tzinfo is True:
            tzinfo = self.tzinfo
        _check_date_fields(year, month, day)
        _check_time_fields(hour, minute, second, microsecond)
        _check_tzinfo_arg(tzinfo)
        return datetime(year, month, day, hour, minute, second,
                          microsecond, tzinfo)

    def astimezone(self, tz=None):
        if tz is None:
            if self.tzinfo is None:
                raise ValueError("astimezone() requires an aware datetime")
            ts = (self - _EPOCH) // timedelta(seconds=1)
            localtm = _time.localtime(ts)
            local = datetime(*localtm[:6])
            try:
                # Extract TZ data if available
                gmtoff = localtm.tm_gmtoff
                zone = localtm.tm_zone
            except AttributeError:
                # Compute UTC offset and compare with the value implied
                # by tm_isdst.  If the values match, use the zone name
                # implied by tm_isdst.
                delta = local - datetime(*_time.gmtime(ts)[:6])
                dst = _time.daylight and localtm.tm_isdst > 0
                gmtoff = -(_time.altzone if dst else _time.timezone)
                if delta == timedelta(seconds=gmtoff):
                    tz = timezone(delta, _time.tzname[dst])
                else:
                    tz = timezone(delta)
            else:
                tz = timezone(timedelta(seconds=gmtoff), zone)

        elif not isinstance(tz, tzinfo):
            raise TypeError("tz argument must be an instance of tzinfo")

        mytz = self.tzinfo
        if mytz is None:
            raise ValueError("astimezone() requires an aware datetime")

        if tz is mytz:
            return self

        # Convert self to UTC, and attach the new time zone object.
        myoffset = self.utcoffset()
        if myoffset is None:
            raise ValueError("astimezone() requires an aware datetime")
        utc = (self - myoffset).replace(tzinfo=tz)

        # Convert from UTC to tz's local time.
        return tz.fromutc(utc)

    # Ways to produce a string.

    def ctime(self):
        "Return ctime() style string."
        weekday = self.toordinal() % 7 or 7
        return "%s %s %2d %02d:%02d:%02d %04d" % (
            _DAYNAMES[weekday],
            _MONTHNAMES[self._month],
            self._day,
            self._hour, self._minute, self._second,
            self._year)

    def isoformat(self, sep='T'):
        """Return the time formatted according to ISO.

        This is 'YYYY-MM-DD HH:MM:SS.mmmmmm', or 'YYYY-MM-DD HH:MM:SS' if
        self.microsecond == 0.

        If self.tzinfo is not None, the UTC offset is also attached, giving
        'YYYY-MM-DD HH:MM:SS.mmmmmm+HH:MM' or 'YYYY-MM-DD HH:MM:SS+HH:MM'.

        Optional argument sep specifies the separator between date and
        time, default 'T'.
        """
        s = ("%04d-%02d-%02d%c" % (self._year, self._month, self._day,
                                  sep) +
                _format_time(self._hour, self._minute, self._second,
                             self._microsecond))
        off = self.utcoffset()
        if off is not None:
            if off.days < 0:
                sign = "-"
                off = -off
            else:
                sign = "+"
            hh, mm = divmod(off, timedelta(hours=1))
            assert not mm % timedelta(minutes=1), "whole minute"
            mm //= timedelta(minutes=1)
            s += "%s%02d:%02d" % (sign, hh, mm)
        return s

    def __repr__(self):
        """Convert to formal string, for repr()."""
        L = [self._year, self._month, self._day, # These are never zero
             self._hour, self._minute, self._second, self._microsecond]
        if L[-1] == 0:
            del L[-1]
        if L[-1] == 0:
            del L[-1]
        s = ", ".join(map(str, L))
        s = "%s(%s)" % ('datetime.' + self.__class__.__name__, s)
        if self._tzinfo is not None:
            assert s[-1:] == ")"
            s = s[:-1] + ", tzinfo=%r" % self._tzinfo + ")"
        return s

    def __str__(self):
        "Convert to string, for str()."
        return self.isoformat(sep=' ')

    @classmethod
    def strptime(cls, date_string, format):
        'string, format -> new datetime parsed from a string (like time.strptime()).'
        import _strptime
        return _strptime._strptime_datetime(cls, date_string, format)

    def utcoffset(self):
        """Return the timezone offset in minutes east of UTC (negative west of
        UTC)."""
        if self._tzinfo is None:
            return None
        offset = self._tzinfo.utcoffset(self)
        _check_utc_offset("utcoffset", offset)
        return offset

    def tzname(self):
        """Return the timezone name.

        Note that the name is 100% informational -- there's no requirement that
        it mean anything in particular. For example, "GMT", "UTC", "-500",
        "-5:00", "EDT", "US/Eastern", "America/New York" are all valid replies.
        """
        name = _call_tzinfo_method(self._tzinfo, "tzname", self)
        _check_tzname(name)
        return name

    def dst(self):
        """Return 0 if DST is not in effect, or the DST offset (in minutes
        eastward) if DST is in effect.

        This is purely informational; the DST offset has already been added to
        the UTC offset returned by utcoffset() if applicable, so there's no
        need to consult dst() unless you're interested in displaying the DST
        info.
        """
        if self._tzinfo is None:
            return None
        offset = self._tzinfo.dst(self)
        _check_utc_offset("dst", offset)
        return offset

    # Comparisons of datetime objects with other.

    def __eq__(self, other):
        if isinstance(other, datetime):
            return self._cmp(other, allow_mixed=True) == 0
        elif not isinstance(other, date):
            return NotImplemented
        else:
            return False

    def __ne__(self, other):
        if isinstance(other, datetime):
            return self._cmp(other, allow_mixed=True) != 0
        elif not isinstance(other, date):
            return NotImplemented
        else:
            return True

    def __le__(self, other):
        if isinstance(other, datetime):
            return self._cmp(other) <= 0
        elif not isinstance(other, date):
            return NotImplemented
        else:
            _cmperror(self, other)

    def __lt__(self, other):
        if isinstance(other, datetime):
            return self._cmp(other) < 0
        elif not isinstance(other, date):
            return NotImplemented
        else:
            _cmperror(self, other)

    def __ge__(self, other):
        if isinstance(other, datetime):
            return self._cmp(other) >= 0
        elif not isinstance(other, date):
            return NotImplemented
        else:
            _cmperror(self, other)

    def __gt__(self, other):
        if isinstance(other, datetime):
            return self._cmp(other) > 0
        elif not isinstance(other, date):
            return NotImplemented
        else:
            _cmperror(self, other)

    def _cmp(self, other, allow_mixed=False):
        assert isinstance(other, datetime)
        mytz = self._tzinfo
        ottz = other._tzinfo
        myoff = otoff = None

        if mytz is ottz:
            base_compare = True
        else:
            myoff = self.utcoffset()
            otoff = other.utcoffset()
            base_compare = myoff == otoff

        if base_compare:
            return _cmp((self._year, self._month, self._day,
                         self._hour, self._minute, self._second,
                         self._microsecond),
                       (other._year, other._month, other._day,
                        other._hour, other._minute, other._second,
                        other._microsecond))
        if myoff is None or otoff is None:
            if allow_mixed:
                return 2 # arbitrary non-zero value
            else:
                raise TypeError("cannot compare naive and aware datetimes")
        # XXX What follows could be done more efficiently...
        diff = self - other     # this will take offsets into account
        if diff.days < 0:
            return -1
        return 1 if diff else 0

    def __add__(self, other):
        "Add a datetime and a timedelta."
        if not isinstance(other, timedelta):
            return NotImplemented
        delta = timedelta(self.toordinal(),
                          hours=self._hour,
                          minutes=self._minute,
                          seconds=self._second,
                          microseconds=self._microsecond)
        delta += other
        hour, rem = divmod(delta.seconds, 3600)
        minute, second = divmod(rem, 60)
        if 0 < delta.days <= _MAXORDINAL:
            return datetime.combine(date.fromordinal(delta.days),
                                    time(hour, minute, second,
                                         delta.microseconds,
                                         tzinfo=self._tzinfo))
        raise OverflowError("result out of range")

    __radd__ = __add__

    def __sub__(self, other):
        "Subtract two datetimes, or a datetime and a timedelta."
        if not isinstance(other, datetime):
            if isinstance(other, timedelta):
                return self + -other
            return NotImplemented

        days1 = self.toordinal()
        days2 = other.toordinal()
        secs1 = self._second + self._minute * 60 + self._hour * 3600
        secs2 = other._second + other._minute * 60 + other._hour * 3600
        base = timedelta(days1 - days2,
                         secs1 - secs2,
                         self._microsecond - other._microsecond)
        if self._tzinfo is other._tzinfo:
            return base
        myoff = self.utcoffset()
        otoff = other.utcoffset()
        if myoff == otoff:
            return base
        if myoff is None or otoff is None:
            raise TypeError("cannot mix naive and timezone-aware time")
        return base + otoff - myoff

    def __hash__(self):
        tzoff = self.utcoffset()
        if tzoff is None:
            return hash(self._getstate()[0])
        days = _ymd2ord(self.year, self.month, self.day)
        seconds = self.hour * 3600 + self.minute * 60 + self.second
        return hash(timedelta(days, seconds, self.microsecond) - tzoff)

    # Pickle support.

    def _getstate(self):
        yhi, ylo = divmod(self._year, 256)
        us2, us3 = divmod(self._microsecond, 256)
        us1, us2 = divmod(us2, 256)
        basestate = bytes([yhi, ylo, self._month, self._day,
                           self._hour, self._minute, self._second,
                           us1, us2, us3])
        if self._tzinfo is None:
            return (basestate,)
        else:
            return (basestate, self._tzinfo)

    def __setstate(self, string, tzinfo):
        (yhi, ylo, self._month, self._day, self._hour,
         self._minute, self._second, us1, us2, us3) = string
        self._year = yhi * 256 + ylo
        self._microsecond = (((us1 << 8) | us2) << 8) | us3
        if tzinfo is None or isinstance(tzinfo, _tzinfo_class):
            self._tzinfo = tzinfo
        else:
            raise TypeError("bad tzinfo state arg %r" % tzinfo)

    def __reduce__(self):
        return (self.__class__, self._getstate())


datetime.min = datetime(1, 1, 1)
datetime.max = datetime(9999, 12, 31, 23, 59, 59, 999999)
datetime.resolution = timedelta(microseconds=1)
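
# Illustrative sketch of the datetime arithmetic defined above: __add__
# carries overflow from the time fields into the date, and __sub__ of two
# aware datetimes adjusts by their UTC offsets (base + otoff - myoff), so
# it compares instants rather than wall clocks.
def _example_datetime_arithmetic():
    rollover = datetime(2020, 12, 31, 23, 30) + timedelta(hours=1)
    a = datetime(2020, 1, 1, 12, 0, tzinfo=timezone(timedelta(hours=2)))
    b = datetime(2020, 1, 1, 12, 0, tzinfo=timezone.utc)
    # a is 10:00 UTC, b is 12:00 UTC, so a - b is negative two hours
    return rollover, a - b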


def _isoweek1monday(year):
    # Helper to calculate the day number of the Monday starting week 1
    # XXX This could be done more efficiently
    THURSDAY = 3
    firstday = _ymd2ord(year, 1, 1)
    firstweekday = (firstday + 6) % 7 # See weekday() above
    week1monday = firstday - firstweekday
    if firstweekday > THURSDAY:
        week1monday += 7
    return week1monday
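
# Illustrative sketch of _isoweek1monday(): per ISO 8601, week 1 is the
# week containing January 4th, so its Monday can also be recovered from
# the public API as shown here.
def _example_isoweek1monday():
    jan4 = date(2021, 1, 4)
    # back up to the Monday of jan4's week
    week1monday = jan4 - timedelta(days=jan4.weekday())
    return week1monday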

class timezone(tzinfo):
    __slots__ = '_offset', '_name'

    # Sentinel value to disallow None
    _Omitted = object()
    def __new__(cls, offset, name=_Omitted):
        if not isinstance(offset, timedelta):
            raise TypeError("offset must be a timedelta")
        if name is cls._Omitted:
            if not offset:
                return cls.utc
            name = None
        elif not isinstance(name, str):
            ###
            # For Python-Future:
            if PY2 and isinstance(name, native_str):
                name = name.decode()
            else:
                raise TypeError("name must be a string")
            ###
        if not cls._minoffset <= offset <= cls._maxoffset:
            raise ValueError("offset must be a timedelta"
                             " strictly between -timedelta(hours=24) and"
                             " timedelta(hours=24).")
        if (offset.microseconds != 0 or
            offset.seconds % 60 != 0):
            raise ValueError("offset must be a timedelta"
                             " representing a whole number of minutes")
        return cls._create(offset, name)

    @classmethod
    def _create(cls, offset, name=None):
        self = tzinfo.__new__(cls)
        self._offset = offset
        self._name = name
        return self

    def __getinitargs__(self):
        """pickle support"""
        if self._name is None:
            return (self._offset,)
        return (self._offset, self._name)

    def __eq__(self, other):
        if type(other) != timezone:
            return False
        return self._offset == other._offset

    def __hash__(self):
        return hash(self._offset)

    def __repr__(self):
        """Convert to formal string, for repr().

        >>> tz = timezone.utc
        >>> repr(tz)
        'datetime.timezone.utc'
        >>> tz = timezone(timedelta(hours=-5), 'EST')
        >>> repr(tz)
        "datetime.timezone(datetime.timedelta(-1, 68400), 'EST')"
        """
        if self is self.utc:
            return 'datetime.timezone.utc'
        if self._name is None:
            return "%s(%r)" % ('datetime.' + self.__class__.__name__,
                               self._offset)
        return "%s(%r, %r)" % ('datetime.' + self.__class__.__name__,
                               self._offset, self._name)

    def __str__(self):
        return self.tzname(None)

    def utcoffset(self, dt):
        if isinstance(dt, datetime) or dt is None:
            return self._offset
        raise TypeError("utcoffset() argument must be a datetime instance"
                        " or None")

    def tzname(self, dt):
        if isinstance(dt, datetime) or dt is None:
            if self._name is None:
                return self._name_from_offset(self._offset)
            return self._name
        raise TypeError("tzname() argument must be a datetime instance"
                        " or None")

    def dst(self, dt):
        if isinstance(dt, datetime) or dt is None:
            return None
        raise TypeError("dst() argument must be a datetime instance"
                        " or None")

    def fromutc(self, dt):
        if isinstance(dt, datetime):
            if dt.tzinfo is not self:
                raise ValueError("fromutc: dt.tzinfo "
                                 "is not self")
            return dt + self._offset
        raise TypeError("fromutc() argument must be a datetime instance"
                        " or None")

    _maxoffset = timedelta(hours=23, minutes=59)
    _minoffset = -_maxoffset

    @staticmethod
    def _name_from_offset(delta):
        if delta < timedelta(0):
            sign = '-'
            delta = -delta
        else:
            sign = '+'
        hours, rest = divmod(delta, timedelta(hours=1))
        minutes = rest // timedelta(minutes=1)
        return 'UTC{}{:02d}:{:02d}'.format(sign, hours, minutes)

timezone.utc = timezone._create(timedelta(0))
timezone.min = timezone._create(timezone._minoffset)
timezone.max = timezone._create(timezone._maxoffset)
_EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)
"""
Some time zone algebra.  For a datetime x, let
    x.n = x stripped of its timezone -- its naive time.
    x.o = x.utcoffset(), and assuming that doesn't raise an exception or
          return None
    x.d = x.dst(), and assuming that doesn't raise an exception or
          return None
    x.s = x's standard offset, x.o - x.d

Now some derived rules, where k is a duration (timedelta).

1. x.o = x.s + x.d
   This follows from the definition of x.s.

2. If x and y have the same tzinfo member, x.s = y.s.
   This is actually a requirement, an assumption we need to make about
   sane tzinfo classes.

3. The naive UTC time corresponding to x is x.n - x.o.
   This is again a requirement for a sane tzinfo class.

4. (x+k).s = x.s
   This follows from #2, and that datetime+timedelta preserves tzinfo.

5. (x+k).n = x.n + k
   Again follows from how arithmetic is defined.
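
Rules #4 and #5 can be checked concretely with a fixed-offset timezone; a
quick sketch (not part of the original derivation):

```python
from datetime import datetime, timedelta, timezone

tz = timezone(timedelta(hours=-5), 'EST')
x = datetime(2002, 1, 1, 12, 0, tzinfo=tz)
k = timedelta(hours=3)

# #4: datetime + timedelta preserves tzinfo, hence x.s is unchanged
assert (x + k).tzinfo is x.tzinfo

# #5: the naive time shifts by exactly k
assert (x + k).replace(tzinfo=None) == x.replace(tzinfo=None) + k
```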

Now we can explain tz.fromutc(x).  Let's assume it's an interesting case
(meaning that the various tzinfo methods exist, and don't blow up or return
None when called).

The function wants to return a datetime y with timezone tz, equivalent to x.
x is already in UTC.

By #3, we want

    y.n - y.o = x.n                             [1]

The algorithm starts by attaching tz to x.n, and calling that y.  So
x.n = y.n at the start.  Then it wants to add a duration k to y, so that [1]
becomes true; in effect, we want to solve [2] for k:

   (y+k).n - (y+k).o = x.n                      [2]

By #1, this is the same as

   (y+k).n - ((y+k).s + (y+k).d) = x.n          [3]

By #5, (y+k).n = y.n + k, which equals x.n + k because x.n=y.n at the start.
Substituting that into [3],

   x.n + k - (y+k).s - (y+k).d = x.n; the x.n terms cancel, leaving
   k - (y+k).s - (y+k).d = 0; rearranging,
   k = (y+k).s - (y+k).d; by #4, (y+k).s == y.s, so
   k = y.s - (y+k).d

On the RHS, (y+k).d can't be computed directly, but y.s can be, and we
approximate k by ignoring the (y+k).d term at first.  Note that k can't be
very large, since all offset-returning methods return a duration of magnitude
less than 24 hours.  For that reason, if y is firmly in std time, (y+k).d must
be 0, so ignoring it has no consequence then.

In any case, the new value is

    z = y + y.s                                 [4]

It's helpful to step back and look at [4] from a higher level:  it's simply
mapping from UTC to tz's standard time.

At this point, if

    z.n - z.o = x.n                             [5]

we have an equivalent time, and are almost done.  The insecurity here is
at the start of daylight time.  Picture US Eastern for concreteness.  The wall
time jumps from 1:59 to 3:00, and wall hours of the form 2:MM don't make good
sense then.  The docs ask that an Eastern tzinfo class consider such a time to
be EDT (because it's "after 2"), which is a redundant spelling of 1:MM EST
on the day DST starts.  We want to return the 1:MM EST spelling because that's
the only spelling that makes sense on the local wall clock.

In fact, if [5] holds at this point, we do have the standard-time spelling,
but that takes a bit of proof.  We first prove a stronger result.  What's the
difference between the LHS and RHS of [5]?  Let

    diff = x.n - (z.n - z.o)                    [6]

Now
    z.n =                       by [4]
    (y + y.s).n =               by #5
    y.n + y.s =                 since y.n = x.n
    x.n + y.s =                 since z and y have the same tzinfo member,
                                    y.s = z.s by #2
    x.n + z.s

Plugging that back into [6] gives

    diff =
    x.n - ((x.n + z.s) - z.o) =     expanding
    x.n - x.n - z.s + z.o =         cancelling
    - z.s + z.o =                   by #2
    z.d

So diff = z.d.

If [5] is true now, diff = 0, so z.d = 0 too, and we have the standard-time
spelling we wanted in the endcase described above.  We're done.  Contrarily,
if z.d = 0, then we have a UTC equivalent, and are also done.

If [5] is not true now, diff = z.d != 0, and z.d is the offset we need to
add to z (in effect, z is in tz's standard time, and we need to shift the
local clock into tz's daylight time).

Let

    z' = z + z.d = z + diff                     [7]

and we can again ask whether

    z'.n - z'.o = x.n                           [8]

If so, we're done.  If not, the tzinfo class is insane, according to the
assumptions we've made.  This also requires a bit of proof.  As before, let's
compute the difference between the LHS and RHS of [8] (and skipping some of
the justifications for the kinds of substitutions we've done several times
already):

    diff' = x.n - (z'.n - z'.o) =           replacing z'.n via [7]
            x.n  - (z.n + diff - z'.o) =    replacing diff via [6]
            x.n - (z.n + x.n - (z.n - z.o) - z'.o) =
            x.n - z.n - x.n + z.n - z.o + z'.o =    cancel x.n
            - z.n + z.n - z.o + z'.o =              cancel z.n
            - z.o + z'.o =                      #1 twice
            -z.s - z.d + z'.s + z'.d =          z and z' have same tzinfo
            z'.d - z.d

So z' is UTC-equivalent to x iff z'.d = z.d at this point.  If they are equal,
we've found the UTC-equivalent so are done.  In fact, we stop with [7] and
return z', not bothering to compute z'.d.

How could z.d and z'.d differ?  z' = z + z.d [7], so merely moving z' by
a dst() offset, and starting *from* a time already in DST (we know z.d != 0),
would have to change the result dst() returns:  we start in DST, and moving
a little further into it takes us out of DST.

There isn't a sane case where this can happen.  The closest it gets is at
the end of DST, where there's an hour in UTC with no spelling in a hybrid
tzinfo class.  In US Eastern, that's 5:MM UTC = 0:MM EST = 1:MM EDT.  During
that hour, on an Eastern clock 1:MM is taken as being in standard time (6:MM
UTC) because the docs insist on that, but 0:MM is taken as being in daylight
time (4:MM UTC).  There is no local time mapping to 5:MM UTC.  The local
clock jumps from 1:59 back to 1:00 again, and repeats the 1:MM hour in
standard time.  Since that's what the local clock *does*, we want to map both
UTC hours 5:MM and 6:MM to 1:MM Eastern.  The result is ambiguous
in local time, but so it goes -- it's the way the local clock works.

When x = 5:MM UTC is the input to this algorithm, x.o=0, y.o=-5 and y.d=0,
so z=0:MM.  z.d=60 (minutes) then, so [5] doesn't hold and we keep going.
z' = z + z.d = 1:MM then, and z'.d=0, and z'.d - z.d = -60 != 0 so [8]
(correctly) concludes that z' is not UTC-equivalent to x.

Because we know z.d said z was in daylight time (else [5] would have held and
we would have stopped then), and we know z.d != z'.d (else [8] would have held
and we have stopped then), and there are only 2 possible values dst() can
return in Eastern, it follows that z'.d must be 0 (which it is in the example,
but the reasoning doesn't depend on the example -- it depends on there being
two possible dst() outcomes, one zero and the other non-zero).  Therefore
z' must be in standard time, and is the spelling we want in this case.

Note again that z' is not UTC-equivalent as far as the hybrid tzinfo class is
concerned (because it takes z' as being in standard time rather than the
daylight time we intend here), but returning it gives the real-life "local
clock repeats an hour" behavior when mapping the "unspellable" UTC hour into
tz.

When the input is 6:MM, z=1:MM and z.d=0, and we stop at once, again with
the 1:MM standard time spelling we want.
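
The algorithm this derivation justifies is the default fromutc(); the US
Eastern walkthrough above can be exercised with a small standalone sketch
(the hybrid class and its transition dates below are assumed for
illustration, not taken from this module):

```python
from datetime import datetime, timedelta, tzinfo

# Hypothetical simplified US Eastern: DST runs from 2:00 on a fixed
# spring date through 1:00 (standard time) on a fixed fall date.
class ToyEastern(tzinfo):
    STD = timedelta(hours=-5)

    def utcoffset(self, dt):
        return self.STD + self.dst(dt)

    def dst(self, dt):
        start = datetime(2002, 4, 7, 2)   # assumed transition dates
        end = datetime(2002, 10, 27, 1)
        if start <= dt.replace(tzinfo=None) < end:
            return timedelta(hours=1)
        return timedelta(0)

# The default fromutc() algorithm derived above; dt holds a UTC time
# labelled with the target tzinfo (x in the text).
def default_fromutc(dt):
    y_o = dt.utcoffset()          # y.o
    y_d = dt.dst()                # y.d
    z = dt + (y_o - y_d)          # z = y + y.s   [4]
    z_d = z.dst()                 # diff = z.d
    if z_d:
        return z + z_d            # z' = z + z.d  [7]
    return z

tz = ToyEastern()
# Both 5:MM and 6:MM UTC map to 1:MM Eastern on the fall-back day.
for utc_hour in (5, 6):
    x = datetime(2002, 10, 27, utc_hour, 30, tzinfo=tz)
    assert default_fromutc(x).replace(tzinfo=None) == datetime(2002, 10, 27, 1, 30)
```

The same sketch reproduces the spring-forward case: 7:30 UTC on the assumed
start date lands in daylight time, one hour past the standard-time mapping.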

So how can this break?  One of the assumptions must be violated.  Two
possibilities:

1) [2] effectively says that y.s is invariant across all y belonging to a given
   time zone.  This isn't true if, for political reasons or continental drift,
   a region decides to change its base offset from UTC.

2) There may be versions of "double daylight" time where the tail end of
   the analysis gives up a step too early.  I haven't thought about that
   enough to say.

In any case, it's clear that the default fromutc() is strong enough to handle
"almost all" time zones:  so long as the standard offset is invariant, it
doesn't matter if daylight time transition points change from year to year, or
if daylight time is skipped in some years; it doesn't matter how large or
small dst() may get within its bounds; and it doesn't even matter if some
perverse time zone returns a negative dst().  So a breaking case must be
pretty bizarre, and a tzinfo subclass can override fromutc() if it is.
"""
try:
    from _datetime import *
except ImportError:
    pass
else:
    # Clean up unused names
    del (_DAYNAMES, _DAYS_BEFORE_MONTH, _DAYS_IN_MONTH,
         _DI100Y, _DI400Y, _DI4Y, _MAXORDINAL, _MONTHNAMES,
         _build_struct_time, _call_tzinfo_method, _check_date_fields,
         _check_time_fields, _check_tzinfo_arg, _check_tzname,
         _check_utc_offset, _cmp, _cmperror, _date_class, _days_before_month,
         _days_before_year, _days_in_month, _format_time, _is_leap,
         _isoweek1monday, _math, _ord2ymd, _time, _time_class, _tzinfo_class,
         _wrap_strftime, _ymd2ord)
    # XXX Since import * above excludes names that start with _,
    # docstring does not get overwritten. In the future, it may be
    # appropriate to maintain a single module level docstring and
    # remove the following line.
    from _datetime import __doc__
--- future/backports/test/keycert2.pem ---
-----BEGIN PRIVATE KEY-----
MIICdwIBADANBgkqhkiG9w0BAQEFAASCAmEwggJdAgEAAoGBAJnsJZVrppL+W5I9
zGQrrawWwE5QJpBK9nWw17mXrZ03R1cD9BamLGivVISbPlRlAVnZBEyh1ATpsB7d
CUQ+WHEvALquvx4+Yw5l+fXeiYRjrLRBYZuVy8yNtXzU3iWcGObcYRkUdiXdOyP7
sLF2YZHRvQZpzgDBKkrraeQ81w21AgMBAAECgYBEm7n07FMHWlE+0kT0sXNsLYfy
YE+QKZnJw9WkaDN+zFEEPELkhZVt5BjsMraJr6v2fIEqF0gGGJPkbenffVq2B5dC
lWUOxvJHufMK4sM3Cp6s/gOp3LP+QkzVnvJSfAyZU6l+4PGX5pLdUsXYjPxgzjzL
S36tF7/2Uv1WePyLUQJBAMsPhYzUXOPRgmbhcJiqi9A9c3GO8kvSDYTCKt3VMnqz
HBn6MQ4VQasCD1F+7jWTI0FU/3vdw8non/Fj8hhYqZcCQQDCDRdvmZqDiZnpMqDq
L6ZSrLTVtMvZXZbgwForaAD9uHj51TME7+eYT7EG2YCgJTXJ4YvRJEnPNyskwdKt
vTSTAkEAtaaN/vyemEJ82BIGStwONNw0ILsSr5cZ9tBHzqiA/tipY+e36HRFiXhP
QcU9zXlxyWkDH8iz9DSAmE2jbfoqwwJANlMJ65E543cjIlitGcKLMnvtCCLcKpb7
xSG0XJB6Lo11OKPJ66jp0gcFTSCY1Lx2CXVd+gfJrfwI1Pp562+bhwJBAJ9IfDPU
R8OpO9v1SGd8x33Owm7uXOpB9d63/T70AD1QOXjKUC4eXYbt0WWfWuny/RNPRuyh
w7DXSfUF+kPKolU=
-----END PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
MIICXTCCAcagAwIBAgIJAIO3upAG445fMA0GCSqGSIb3DQEBBQUAMGIxCzAJBgNV
BAYTAlhZMRcwFQYDVQQHEw5DYXN0bGUgQW50aHJheDEjMCEGA1UEChMaUHl0aG9u
IFNvZnR3YXJlIEZvdW5kYXRpb24xFTATBgNVBAMTDGZha2Vob3N0bmFtZTAeFw0x
MDEwMDkxNTAxMDBaFw0yMDEwMDYxNTAxMDBaMGIxCzAJBgNVBAYTAlhZMRcwFQYD
VQQHEw5DYXN0bGUgQW50aHJheDEjMCEGA1UEChMaUHl0aG9uIFNvZnR3YXJlIEZv
dW5kYXRpb24xFTATBgNVBAMTDGZha2Vob3N0bmFtZTCBnzANBgkqhkiG9w0BAQEF
AAOBjQAwgYkCgYEAmewllWumkv5bkj3MZCutrBbATlAmkEr2dbDXuZetnTdHVwP0
FqYsaK9UhJs+VGUBWdkETKHUBOmwHt0JRD5YcS8Auq6/Hj5jDmX59d6JhGOstEFh
m5XLzI21fNTeJZwY5txhGRR2Jd07I/uwsXZhkdG9BmnOAMEqSutp5DzXDbUCAwEA
AaMbMBkwFwYDVR0RBBAwDoIMZmFrZWhvc3RuYW1lMA0GCSqGSIb3DQEBBQUAA4GB
AH+iMClLLGSaKWgwXsmdVo4FhTZZHo8Uprrtg3N9FxEeE50btpDVQysgRt5ias3K
m+bME9zbKwvbVWD5zZdjus4pDgzwF/iHyccL8JyYhxOvS/9zmvAtFXj/APIIbZFp
IT75d9f88ScIGEtknZQejnrdhB64tYki/EqluiuKBqKD
-----END CERTIFICATE-----
--- future/backports/test/dh512.pem ---
-----BEGIN DH PARAMETERS-----
MEYCQQD1Kv884bEpQBgRjXyEpwpy1obEAxnIByl6ypUM2Zafq9AKUJsCRtMIPWak
XUGfnHy9iUsiGSa6q6Jew1XpKgVfAgEC
-----END DH PARAMETERS-----

These are the 512 bit DH parameters from "Assigned Number for SKIP Protocols"
(http://www.skip-vpn.org/spec/numbers.html).
See there for how they were generated.
Note that g is not a generator, but this is not a problem since p is a safe prime.
--- future/backports/test/__init__.py ---
"""
test package backported for python-future.

Its primary purpose is to allow use of "import test.support" for running
the Python standard library unit tests using the new Python 3 stdlib
import location.

Python 3 renamed test.test_support to test.support.
"""
--- future/backports/test/ssl_cert.pem ---
-----BEGIN CERTIFICATE-----
MIICVDCCAb2gAwIBAgIJANfHOBkZr8JOMA0GCSqGSIb3DQEBBQUAMF8xCzAJBgNV
BAYTAlhZMRcwFQYDVQQHEw5DYXN0bGUgQW50aHJheDEjMCEGA1UEChMaUHl0aG9u
IFNvZnR3YXJlIEZvdW5kYXRpb24xEjAQBgNVBAMTCWxvY2FsaG9zdDAeFw0xMDEw
MDgyMzAxNTZaFw0yMDEwMDUyMzAxNTZaMF8xCzAJBgNVBAYTAlhZMRcwFQYDVQQH
Ew5DYXN0bGUgQW50aHJheDEjMCEGA1UEChMaUHl0aG9uIFNvZnR3YXJlIEZvdW5k
YXRpb24xEjAQBgNVBAMTCWxvY2FsaG9zdDCBnzANBgkqhkiG9w0BAQEFAAOBjQAw
gYkCgYEA21vT5isq7F68amYuuNpSFlKDPrMUCa4YWYqZRt2OZ+/3NKaZ2xAiSwr7
6MrQF70t5nLbSPpqE5+5VrS58SY+g/sXLiFd6AplH1wJZwh78DofbFYXUggktFMt
pTyiX8jtP66bkcPkDADA089RI1TQR6Ca+n7HFa7c1fabVV6i3zkCAwEAAaMYMBYw
FAYDVR0RBA0wC4IJbG9jYWxob3N0MA0GCSqGSIb3DQEBBQUAA4GBAHPctQBEQ4wd
BJ6+JcpIraopLn8BGhbjNWj40mmRqWB/NAWF6M5ne7KpGAu7tLeG4hb1zLaldK8G
lxy2GPSRF6LFS48dpEj2HbMv2nvv6xxalDMJ9+DicWgAKTQ6bcX2j3GUkCR0g/T1
CRlNBAAlvhKzO7Clpf9l0YKBEfraJByX
-----END CERTIFICATE-----
--- future/backports/test/https_svn_python_org_root.pem ---
-----BEGIN CERTIFICATE-----
MIIHPTCCBSWgAwIBAgIBADANBgkqhkiG9w0BAQQFADB5MRAwDgYDVQQKEwdSb290
IENBMR4wHAYDVQQLExVodHRwOi8vd3d3LmNhY2VydC5vcmcxIjAgBgNVBAMTGUNB
IENlcnQgU2lnbmluZyBBdXRob3JpdHkxITAfBgkqhkiG9w0BCQEWEnN1cHBvcnRA
Y2FjZXJ0Lm9yZzAeFw0wMzAzMzAxMjI5NDlaFw0zMzAzMjkxMjI5NDlaMHkxEDAO
BgNVBAoTB1Jvb3QgQ0ExHjAcBgNVBAsTFWh0dHA6Ly93d3cuY2FjZXJ0Lm9yZzEi
MCAGA1UEAxMZQ0EgQ2VydCBTaWduaW5nIEF1dGhvcml0eTEhMB8GCSqGSIb3DQEJ
ARYSc3VwcG9ydEBjYWNlcnQub3JnMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIIC
CgKCAgEAziLA4kZ97DYoB1CW8qAzQIxL8TtmPzHlawI229Z89vGIj053NgVBlfkJ
8BLPRoZzYLdufujAWGSuzbCtRRcMY/pnCujW0r8+55jE8Ez64AO7NV1sId6eINm6
zWYyN3L69wj1x81YyY7nDl7qPv4coRQKFWyGhFtkZip6qUtTefWIonvuLwphK42y
fk1WpRPs6tqSnqxEQR5YYGUFZvjARL3LlPdCfgv3ZWiYUQXw8wWRBB0bF4LsyFe7
w2t6iPGwcswlWyCR7BYCEo8y6RcYSNDHBS4CMEK4JZwFaz+qOqfrU0j36NK2B5jc
G8Y0f3/JHIJ6BVgrCFvzOKKrF11myZjXnhCLotLddJr3cQxyYN/Nb5gznZY0dj4k
epKwDpUeb+agRThHqtdB7Uq3EvbXG4OKDy7YCbZZ16oE/9KTfWgu3YtLq1i6L43q
laegw1SJpfvbi1EinbLDvhG+LJGGi5Z4rSDTii8aP8bQUWWHIbEZAWV/RRyH9XzQ
QUxPKZgh/TMfdQwEUfoZd9vUFBzugcMd9Zi3aQaRIt0AUMyBMawSB3s42mhb5ivU
fslfrejrckzzAeVLIL+aplfKkQABi6F1ITe1Yw1nPkZPcCBnzsXWWdsC4PDSy826
YreQQejdIOQpvGQpQsgi3Hia/0PsmBsJUUtaWsJx8cTLc6nloQsCAwEAAaOCAc4w
ggHKMB0GA1UdDgQWBBQWtTIb1Mfz4OaO873SsDrusjkY0TCBowYDVR0jBIGbMIGY
gBQWtTIb1Mfz4OaO873SsDrusjkY0aF9pHsweTEQMA4GA1UEChMHUm9vdCBDQTEe
MBwGA1UECxMVaHR0cDovL3d3dy5jYWNlcnQub3JnMSIwIAYDVQQDExlDQSBDZXJ0
IFNpZ25pbmcgQXV0aG9yaXR5MSEwHwYJKoZIhvcNAQkBFhJzdXBwb3J0QGNhY2Vy
dC5vcmeCAQAwDwYDVR0TAQH/BAUwAwEB/zAyBgNVHR8EKzApMCegJaAjhiFodHRw
czovL3d3dy5jYWNlcnQub3JnL3Jldm9rZS5jcmwwMAYJYIZIAYb4QgEEBCMWIWh0
dHBzOi8vd3d3LmNhY2VydC5vcmcvcmV2b2tlLmNybDA0BglghkgBhvhCAQgEJxYl
aHR0cDovL3d3dy5jYWNlcnQub3JnL2luZGV4LnBocD9pZD0xMDBWBglghkgBhvhC
AQ0ESRZHVG8gZ2V0IHlvdXIgb3duIGNlcnRpZmljYXRlIGZvciBGUkVFIGhlYWQg
b3ZlciB0byBodHRwOi8vd3d3LmNhY2VydC5vcmcwDQYJKoZIhvcNAQEEBQADggIB
ACjH7pyCArpcgBLKNQodgW+JapnM8mgPf6fhjViVPr3yBsOQWqy1YPaZQwGjiHCc
nWKdpIevZ1gNMDY75q1I08t0AoZxPuIrA2jxNGJARjtT6ij0rPtmlVOKTV39O9lg
18p5aTuxZZKmxoGCXJzN600BiqXfEVWqFcofN8CCmHBh22p8lqOOLlQ+TyGpkO/c
gr/c6EWtTZBzCDyUZbAEmXZ/4rzCahWqlwQ3JNgelE5tDlG+1sSPypZt90Pf6DBl
Jzt7u0NDY8RD97LsaMzhGY4i+5jhe1o+ATc7iwiwovOVThrLm82asduycPAtStvY
sONvRUgzEv/+PDIqVPfE94rwiCPCR/5kenHA0R6mY7AHfqQv0wGP3J8rtsYIqQ+T
SCX8Ev2fQtzzxD72V7DX3WnRBnc0CkvSyqD/HMaMyRa+xMwyN2hzXwj7UfdJUzYF
CpUCTPJ5GhD22Dp1nPMd8aINcGeGG7MW9S/lpOt5hvk9C8JzC6WZrG/8Z7jlLwum
GCSNe9FINSkYQKyTYOGWhlC0elnYjyELn8+CkcY7v2vcB5G5l1YjqrZslMZIBjzk
zk6q5PYvCdxTby78dOs6Y5nCpqyJvKeyRKANihDjbPIky/qbn3BHLt4Ui9SyIAmW
omTxJBzcoTWcFbLUvFUufQb1nA5V9FrWk9p2rSVzTMVD
-----END CERTIFICATE-----
--- future/backports/test/ssl_key.pem ---
-----BEGIN PRIVATE KEY-----
MIICdwIBADANBgkqhkiG9w0BAQEFAASCAmEwggJdAgEAAoGBANtb0+YrKuxevGpm
LrjaUhZSgz6zFAmuGFmKmUbdjmfv9zSmmdsQIksK++jK0Be9LeZy20j6ahOfuVa0
ufEmPoP7Fy4hXegKZR9cCWcIe/A6H2xWF1IIJLRTLaU8ol/I7T+um5HD5AwAwNPP
USNU0Eegmvp+xxWu3NX2m1Veot85AgMBAAECgYA3ZdZ673X0oexFlq7AAmrutkHt
CL7LvwrpOiaBjhyTxTeSNWzvtQBkIU8DOI0bIazA4UreAFffwtvEuPmonDb3F+Iq
SMAu42XcGyVZEl+gHlTPU9XRX7nTOXVt+MlRRRxL6t9GkGfUAXI3XxJDXW3c0vBK
UL9xqD8cORXOfE06rQJBAP8mEX1ERkR64Ptsoe4281vjTlNfIbs7NMPkUnrn9N/Y
BLhjNIfQ3HFZG8BTMLfX7kCS9D593DW5tV4Z9BP/c6cCQQDcFzCcVArNh2JSywOQ
ZfTfRbJg/Z5Lt9Fkngv1meeGNPgIMLN8Sg679pAOOWmzdMO3V706rNPzSVMME7E5
oPIfAkEA8pDddarP5tCvTTgUpmTFbakm0KoTZm2+FzHcnA4jRh+XNTjTOv98Y6Ik
eO5d1ZnKXseWvkZncQgxfdnMqqpj5wJAcNq/RVne1DbYlwWchT2Si65MYmmJ8t+F
0mcsULqjOnEMwf5e+ptq5LzwbyrHZYq5FNk7ocufPv/ZQrcSSC+cFwJBAKvOJByS
x56qyGeZLOQlWS2JS3KJo59XuLFGqcbgN9Om9xFa41Yb4N9NvplFivsvZdw3m1Q/
SPIXQuT8RMPDVNQ=
-----END PRIVATE KEY-----
--- future/backports/test/support.py ---
# -*- coding: utf-8 -*-
"""Supporting definitions for the Python regression tests.

Backported for python-future from Python 3.3 test/support.py.
"""

from __future__ import (absolute_import, division,
                        print_function, unicode_literals)
from future import utils
from future.builtins import str, range, open, int, map, list

import contextlib
import errno
import functools
import gc
import socket
import sys
import os
import platform
import shutil
import warnings
import unittest
# For Python 2.6 compatibility:
if not hasattr(unittest, 'skip'):
    import unittest2 as unittest

import importlib
# import collections.abc    # not present on Py2.7
import re
import subprocess
import time
try:
    import sysconfig
except ImportError:
    # sysconfig is not available on Python 2.6. Try using distutils.sysconfig instead:
    from distutils import sysconfig
import fnmatch
import logging.handlers
import struct
import tempfile

try:
    if utils.PY3:
        import _thread, threading
    else:
        import thread as _thread, threading
except ImportError:
    _thread = None
    threading = None
try:
    import multiprocessing.process
except ImportError:
    multiprocessing = None

try:
    import zlib
except ImportError:
    zlib = None

try:
    import gzip
except ImportError:
    gzip = None

try:
    import bz2
except ImportError:
    bz2 = None

try:
    import lzma
except ImportError:
    lzma = None

__all__ = [
    "Error", "TestFailed", "ResourceDenied", "import_module", "verbose",
    "use_resources", "max_memuse", "record_original_stdout",
    "get_original_stdout", "unload", "unlink", "rmtree", "forget",
    "is_resource_enabled", "requires", "requires_freebsd_version",
    "requires_linux_version", "requires_mac_ver", "find_unused_port",
    "bind_port", "IPV6_ENABLED", "is_jython", "TESTFN", "HOST", "SAVEDCWD",
    "temp_cwd", "findfile", "create_empty_file", "sortdict",
    "check_syntax_error", "open_urlresource", "check_warnings", "CleanImport",
    "EnvironmentVarGuard", "TransientResource", "captured_stdout",
    "captured_stdin", "captured_stderr", "time_out", "socket_peer_reset",
    "ioerror_peer_reset", "run_with_locale", 'temp_umask',
    "transient_internet", "set_memlimit", "bigmemtest", "bigaddrspacetest",
    "BasicTestRunner", "run_unittest", "run_doctest", "threading_setup",
    "threading_cleanup", "reap_children", "cpython_only", "check_impl_detail",
    "get_attribute", "swap_item", "swap_attr", "requires_IEEE_754",
    "TestHandler", "Matcher", "can_symlink", "skip_unless_symlink",
    "skip_unless_xattr", "import_fresh_module", "requires_zlib",
    "PIPE_MAX_SIZE", "failfast", "anticipate_failure", "run_with_tz",
    "requires_gzip", "requires_bz2", "requires_lzma", "suppress_crash_popup",
    ]

class Error(Exception):
    """Base class for regression test exceptions."""

class TestFailed(Error):
    """Test failed."""

class ResourceDenied(unittest.SkipTest):
    """Test skipped because it requested a disallowed resource.

    This is raised when a test calls requires() for a resource that
    has not been enabled.  It is used to distinguish between expected
    and unexpected skips.
    """

@contextlib.contextmanager
def _ignore_deprecated_imports(ignore=True):
    """Context manager to suppress package and module deprecation
    warnings when importing them.

    If ignore is False, this context manager has no effect."""
    if ignore:
        with warnings.catch_warnings():
            warnings.filterwarnings("ignore", ".+ (module|package)",
                                    DeprecationWarning)
            yield
    else:
        yield


def import_module(name, deprecated=False):
    """Import and return the module to be tested, raising SkipTest if
    it is not available.

    If deprecated is True, any module or package deprecation messages
    will be suppressed."""
    with _ignore_deprecated_imports(deprecated):
        try:
            return importlib.import_module(name)
        except ImportError as msg:
            raise unittest.SkipTest(str(msg))


def _save_and_remove_module(name, orig_modules):
    """Helper function to save and remove a module from sys.modules

    Raise ImportError if the module can't be imported.
    """
    # try to import the module and raise an error if it can't be imported
    if name not in sys.modules:
        __import__(name)
        del sys.modules[name]
    for modname in list(sys.modules):
        if modname == name or modname.startswith(name + '.'):
            orig_modules[modname] = sys.modules[modname]
            del sys.modules[modname]

def _save_and_block_module(name, orig_modules):
    """Helper function to save and block a module in sys.modules

    Return True if the module was in sys.modules, False otherwise.
    """
    saved = True
    try:
        orig_modules[name] = sys.modules[name]
    except KeyError:
        saved = False
    sys.modules[name] = None
    return saved


def anticipate_failure(condition):
    """Decorator to mark a test that is known to be broken in some cases

       Any use of this decorator should have a comment identifying the
       associated tracker issue.
    """
    if condition:
        return unittest.expectedFailure
    return lambda f: f


def import_fresh_module(name, fresh=(), blocked=(), deprecated=False):
    """Import and return a module, deliberately bypassing sys.modules.
    This function imports and returns a fresh copy of the named Python module
    by removing the named module from sys.modules before doing the import.
    Note that unlike reload, the original module is not affected by
    this operation.

    *fresh* is an iterable of additional module names that are also removed
    from the sys.modules cache before doing the import.

    *blocked* is an iterable of module names that are replaced with None
    in the module cache during the import to ensure that attempts to import
    them raise ImportError.

    The named module and any modules named in the *fresh* and *blocked*
    parameters are saved before starting the import and then reinserted into
    sys.modules when the fresh import is complete.

    Module and package deprecation messages are suppressed during this import
    if *deprecated* is True.

    This function will raise ImportError if the named module cannot be
    imported.

    If deprecated is True, any module or package deprecation messages
    will be suppressed.
    """
    # NOTE: test_heapq, test_json and test_warnings include extra sanity checks
    # to make sure that this utility function is working as expected
    with _ignore_deprecated_imports(deprecated):
        # Keep track of modules saved for later restoration as well
        # as those which just need a blocking entry removed
        orig_modules = {}
        names_to_remove = []
        _save_and_remove_module(name, orig_modules)
        try:
            for fresh_name in fresh:
                _save_and_remove_module(fresh_name, orig_modules)
            for blocked_name in blocked:
                if not _save_and_block_module(blocked_name, orig_modules):
                    names_to_remove.append(blocked_name)
            fresh_module = importlib.import_module(name)
        except ImportError:
            fresh_module = None
        finally:
            for orig_name, module in orig_modules.items():
                sys.modules[orig_name] = module
            for name_to_remove in names_to_remove:
                del sys.modules[name_to_remove]
        return fresh_module


def get_attribute(obj, name):
    """Get an attribute, raising SkipTest if AttributeError is raised."""
    try:
        attribute = getattr(obj, name)
    except AttributeError:
        raise unittest.SkipTest("object %r has no attribute %r" % (obj, name))
    else:
        return attribute

verbose = 1              # Flag set to 0 by regrtest.py
use_resources = None     # Flag set to [] by regrtest.py
max_memuse = 0           # Disable bigmem tests (they will still be run with
                         # small sizes, to make sure they work.)
real_max_memuse = 0
failfast = False
match_tests = None

# _original_stdout is meant to hold stdout at the time regrtest began.
# This may be "the real" stdout, or IDLE's emulation of stdout, or whatever.
# The point is to have some flavor of stdout the user can actually see.
_original_stdout = None
def record_original_stdout(stdout):
    global _original_stdout
    _original_stdout = stdout

def get_original_stdout():
    return _original_stdout or sys.stdout

def unload(name):
    try:
        del sys.modules[name]
    except KeyError:
        pass

if sys.platform.startswith("win"):
    def _waitfor(func, pathname, waitall=False):
        # Perform the operation
        func(pathname)
        # Now setup the wait loop
        if waitall:
            dirname = pathname
        else:
            dirname, name = os.path.split(pathname)
            dirname = dirname or '.'
        # Check for `pathname` to be removed from the filesystem.
        # The exponential backoff of the timeout amounts to a total
        # of ~1 second after which the deletion is probably an error
        # anyway.
        # Testing on an i7@4.3GHz shows that usually only 1 iteration is
        # required when contention occurs.
        timeout = 0.001
        while timeout < 1.0:
            # Note we are only testing for the existence of the file(s) in
            # the contents of the directory regardless of any security or
            # access rights.  If we have made it this far, we have sufficient
            # permissions to do that much using Python's equivalent of the
            # Windows API FindFirstFile.
            # Other Windows APIs can fail or give incorrect results when
            # dealing with files that are pending deletion.
            L = os.listdir(dirname)
            if not (L if waitall else name in L):
                return
            # Increase the timeout and try again
            time.sleep(timeout)
            timeout *= 2
        warnings.warn('tests may fail, delete still pending for ' + pathname,
                      RuntimeWarning, stacklevel=4)

    def _unlink(filename):
        _waitfor(os.unlink, filename)

    def _rmdir(dirname):
        _waitfor(os.rmdir, dirname)

    def _rmtree(path):
        def _rmtree_inner(path):
            for name in os.listdir(path):
                fullname = os.path.join(path, name)
                if os.path.isdir(fullname):
                    _waitfor(_rmtree_inner, fullname, waitall=True)
                    os.rmdir(fullname)
                else:
                    os.unlink(fullname)
        _waitfor(_rmtree_inner, path, waitall=True)
        _waitfor(os.rmdir, path)
else:
    _unlink = os.unlink
    _rmdir = os.rmdir
    _rmtree = shutil.rmtree

def unlink(filename):
    try:
        _unlink(filename)
    except OSError as error:
        # The filename need not exist.
        if error.errno not in (errno.ENOENT, errno.ENOTDIR):
            raise

def rmdir(dirname):
    try:
        _rmdir(dirname)
    except OSError as error:
        # The directory need not exist.
        if error.errno != errno.ENOENT:
            raise

def rmtree(path):
    try:
        _rmtree(path)
    except OSError as error:
        if error.errno != errno.ENOENT:
            raise


# On some platforms, should not run gui test even if it is allowed
# in `use_resources'.
if sys.platform.startswith('win'):
    import ctypes
    import ctypes.wintypes
    def _is_gui_available():
        UOI_FLAGS = 1
        WSF_VISIBLE = 0x0001
        class USEROBJECTFLAGS(ctypes.Structure):
            _fields_ = [("fInherit", ctypes.wintypes.BOOL),
                        ("fReserved", ctypes.wintypes.BOOL),
                        ("dwFlags", ctypes.wintypes.DWORD)]
        dll = ctypes.windll.user32
        h = dll.GetProcessWindowStation()
        if not h:
            raise ctypes.WinError()
        uof = USEROBJECTFLAGS()
        needed = ctypes.wintypes.DWORD()
        res = dll.GetUserObjectInformationW(h,
            UOI_FLAGS,
            ctypes.byref(uof),
            ctypes.sizeof(uof),
            ctypes.byref(needed))
        if not res:
            raise ctypes.WinError()
        return bool(uof.dwFlags & WSF_VISIBLE)
else:
    def _is_gui_available():
        return True

def is_resource_enabled(resource):
    """Test whether a resource is enabled.  Known resources are set by
    regrtest.py."""
    return use_resources is not None and resource in use_resources

def requires(resource, msg=None):
    """Raise ResourceDenied if the specified resource is not available.

    If the caller's module is __main__ then automatically return True.  The
    possibility of False being returned occurs when regrtest.py is
    executing.
    """
    if resource == 'gui' and not _is_gui_available():
        raise unittest.SkipTest("Cannot use the 'gui' resource")
    # see if the caller's module is __main__ - if so, treat as if
    # the resource was set
    if sys._getframe(1).f_globals.get("__name__") == "__main__":
        return
    if not is_resource_enabled(resource):
        if msg is None:
            msg = "Use of the %r resource not enabled" % resource
        raise ResourceDenied(msg)

def _requires_unix_version(sysname, min_version):
    """Decorator raising SkipTest if the OS is `sysname` and the version is less
    than `min_version`.

    For example, @_requires_unix_version('FreeBSD', (7, 2)) raises SkipTest if
    the FreeBSD version is less than 7.2.
    """
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kw):
            if platform.system() == sysname:
                version_txt = platform.release().split('-', 1)[0]
                try:
                    version = tuple(map(int, version_txt.split('.')))
                except ValueError:
                    pass
                else:
                    if version < min_version:
                        min_version_txt = '.'.join(map(str, min_version))
                        raise unittest.SkipTest(
                            "%s version %s or higher required, not %s"
                            % (sysname, min_version_txt, version_txt))
            return func(*args, **kw)
        wrapper.min_version = min_version
        return wrapper
    return decorator

def requires_freebsd_version(*min_version):
    """Decorator raising SkipTest if the OS is FreeBSD and the FreeBSD version is
    less than `min_version`.

    For example, @requires_freebsd_version(7, 2) raises SkipTest if the FreeBSD
    version is less than 7.2.
    """
    return _requires_unix_version('FreeBSD', min_version)

def requires_linux_version(*min_version):
    """Decorator raising SkipTest if the OS is Linux and the Linux version is
    less than `min_version`.

    For example, @requires_linux_version(2, 6, 32) raises SkipTest if the Linux
    version is less than 2.6.32.
    """
    return _requires_unix_version('Linux', min_version)

def requires_mac_ver(*min_version):
    """Decorator raising SkipTest if the OS is Mac OS X and the OS X
    version if less than min_version.

    For example, @requires_mac_ver(10, 5) raises SkipTest if the OS X version
    is lesser than 10.5.
    """
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kw):
            if sys.platform == 'darwin':
                version_txt = platform.mac_ver()[0]
                try:
                    version = tuple(map(int, version_txt.split('.')))
                except ValueError:
                    pass
                else:
                    if version < min_version:
                        min_version_txt = '.'.join(map(str, min_version))
                        raise unittest.SkipTest(
                            "Mac OS X %s or higher required, not %s"
                            % (min_version_txt, version_txt))
            return func(*args, **kw)
        wrapper.min_version = min_version
        return wrapper
    return decorator

# Don't use "localhost", since resolving it uses the DNS under recent
# Windows versions (see issue #18792).
HOST = "127.0.0.1"
HOSTv6 = "::1"


def find_unused_port(family=socket.AF_INET, socktype=socket.SOCK_STREAM):
    """Returns an unused port that should be suitable for binding.  This is
    achieved by creating a temporary socket with the same family and type as
    the 'sock' parameter (default is AF_INET, SOCK_STREAM), and binding it to
    the specified host address (defaults to 0.0.0.0) with the port set to 0,
    eliciting an unused ephemeral port from the OS.  The temporary socket is
    then closed and deleted, and the ephemeral port is returned.

    Either this method or bind_port() should be used for any tests where a
    server socket needs to be bound to a particular port for the duration of
    the test.  Which one to use depends on whether the calling code is creating
    a python socket, or if an unused port needs to be provided in a constructor
    or passed to an external program (i.e. the -accept argument to openssl's
    s_server mode).  Always prefer bind_port() over find_unused_port() where
    possible.  Hard coded ports should *NEVER* be used.  As soon as a server
    socket is bound to a hard coded port, the ability to run multiple instances
    of the test simultaneously on the same host is compromised, which makes the
    test a ticking time bomb in a buildbot environment. On Unix buildbots, this
    may simply manifest as a failed test, which can be recovered from without
    intervention in most cases, but on Windows, the entire python process can
    completely and utterly wedge, requiring someone to log in to the buildbot
    and manually kill the affected process.

    (This is easy to reproduce on Windows, unfortunately, and can be traced to
    the SO_REUSEADDR socket option having different semantics on Windows versus
    Unix/Linux.  On Unix, you can't have two AF_INET SOCK_STREAM sockets bind,
    listen and then accept connections on identical host/ports.  An EADDRINUSE
    socket.error will be raised at some point (depending on the platform and
    the order bind and listen were called on each socket).

    However, on Windows, if SO_REUSEADDR is set on the sockets, no EADDRINUSE
    will ever be raised when attempting to bind two identical host/ports. When
    accept() is called on each socket, the second caller's process will steal
    the port from the first caller, leaving them both in an awkwardly wedged
    state where they'll no longer respond to any signals or graceful kills, and
    must be forcibly killed via OpenProcess()/TerminateProcess().

    The solution on Windows is to use the SO_EXCLUSIVEADDRUSE socket option
    instead of SO_REUSEADDR, which effectively affords the same semantics as
    SO_REUSEADDR on Unix.  Given that most Open Source developers work on
    Unix rather than Windows, this is a common mistake.  A quick
    look over OpenSSL's 0.9.8g source shows that they use SO_REUSEADDR when
    openssl.exe is called with the 's_server' option, for example. See
    http://bugs.python.org/issue2550 for more info.  The following site also
    has a very thorough description about the implications of both REUSEADDR
    and EXCLUSIVEADDRUSE on Windows:
    http://msdn2.microsoft.com/en-us/library/ms740621(VS.85).aspx)

    XXX: although this approach is a vast improvement on previous attempts to
    elicit unused ports, it rests heavily on the assumption that the ephemeral
    port returned to us by the OS won't immediately be dished back out to some
    other process when we close and delete our temporary socket but before our
    calling code has a chance to bind the returned port.  We can deal with this
    issue if/when we come across it.
    """

    tempsock = socket.socket(family, socktype)
    port = bind_port(tempsock)
    tempsock.close()
    del tempsock
    return port
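The ephemeral-port trick described above can be shown in a few lines using only the stdlib socket module: binding to port 0 asks the OS for a currently unused port, which we read back and immediately release.

```python
import socket

# Port 0 means "pick any free ephemeral port for me".
tempsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tempsock.bind(("127.0.0.1", 0))
port = tempsock.getsockname()[1]   # the port the OS actually assigned
tempsock.close()

assert 0 < port < 65536
```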

def bind_port(sock, host=HOST):
    """Bind the socket to a free port and return the port number.  Relies on
    ephemeral ports in order to ensure we are using an unbound port.  This is
    important as many tests may be running simultaneously, especially in a
    buildbot environment.  This method raises an exception if the sock.family
    is AF_INET and sock.type is SOCK_STREAM, *and* the socket has SO_REUSEADDR
    or SO_REUSEPORT set on it.  Tests should *never* set these socket options
    for TCP/IP sockets.  The only case for setting these options is testing
    multicasting via multiple UDP sockets.

    Additionally, if the SO_EXCLUSIVEADDRUSE socket option is available (i.e.
    on Windows), it will be set on the socket.  This will prevent anyone else
    from bind()'ing to our host/port for the duration of the test.
    """

    if sock.family == socket.AF_INET and sock.type == socket.SOCK_STREAM:
        if hasattr(socket, 'SO_REUSEADDR'):
            if sock.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR) == 1:
                raise TestFailed("tests should never set the SO_REUSEADDR "   \
                                 "socket option on TCP/IP sockets!")
        if hasattr(socket, 'SO_REUSEPORT'):
            try:
                if sock.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT) == 1:
                    raise TestFailed("tests should never set the SO_REUSEPORT "   \
                                     "socket option on TCP/IP sockets!")
            except socket.error:
                # Python's socket module was compiled using modern headers
                # thus defining SO_REUSEPORT but this process is running
                # under an older kernel that does not support SO_REUSEPORT.
                pass
        if hasattr(socket, 'SO_EXCLUSIVEADDRUSE'):
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_EXCLUSIVEADDRUSE, 1)

    sock.bind((host, 0))
    port = sock.getsockname()[1]
    return port

def _is_ipv6_enabled():
    """Check whether IPv6 is enabled on this host."""
    if socket.has_ipv6:
        sock = None
        try:
            sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
            sock.bind(('::1', 0))
            return True
        except (socket.error, socket.gaierror):
            pass
        finally:
            if sock:
                sock.close()
    return False

IPV6_ENABLED = _is_ipv6_enabled()


# A constant likely larger than the underlying OS pipe buffer size, to
# make writes blocking.
# Windows limit seems to be around 512 B, and many Unix kernels have a
# 64 KiB pipe buffer size or 16 * PAGE_SIZE: take a few megs to be sure.
# (see issue #17835 for a discussion of this number).
PIPE_MAX_SIZE = 4 * 1024 * 1024 + 1

# A constant likely larger than the underlying OS socket buffer size, to make
# writes blocking.
# The socket buffer sizes can usually be tuned system-wide (e.g. through sysctl
# on Linux), or on a per-socket basis (SO_SNDBUF/SO_RCVBUF). See issue #18643
# for a discussion of this number).
SOCK_MAX_SIZE = 16 * 1024 * 1024 + 1

# # decorator for skipping tests on non-IEEE 754 platforms
# requires_IEEE_754 = unittest.skipUnless(
#     float.__getformat__("double").startswith("IEEE"),
#     "test requires IEEE 754 doubles")

requires_zlib = unittest.skipUnless(zlib, 'requires zlib')

requires_bz2 = unittest.skipUnless(bz2, 'requires bz2')

requires_lzma = unittest.skipUnless(lzma, 'requires lzma')

is_jython = sys.platform.startswith('java')

# Filename used for testing
if os.name == 'java':
    # Jython disallows @ in module names
    TESTFN = '$test'
else:
    TESTFN = '@test'

# Disambiguate TESTFN for parallel testing, while letting it remain a valid
# module name.
TESTFN = "{0}_{1}_tmp".format(TESTFN, os.getpid())

# # FS_NONASCII: non-ASCII character encodable by os.fsencode(),
# # or None if there is no such character.
# FS_NONASCII = None
# for character in (
#     # First try printable and common characters to have a readable filename.
#     # For each character, the encoding list are just example of encodings able
#     # to encode the character (the list is not exhaustive).
#
#     # U+00E6 (Latin Small Letter Ae): cp1252, iso-8859-1
#     '\u00E6',
#     # U+0130 (Latin Capital Letter I With Dot Above): cp1254, iso8859_3
#     '\u0130',
#     # U+0141 (Latin Capital Letter L With Stroke): cp1250, cp1257
#     '\u0141',
#     # U+03C6 (Greek Small Letter Phi): cp1253
#     '\u03C6',
#     # U+041A (Cyrillic Capital Letter Ka): cp1251
#     '\u041A',
#     # U+05D0 (Hebrew Letter Alef): Encodable to cp424
#     '\u05D0',
#     # U+060C (Arabic Comma): cp864, cp1006, iso8859_6, mac_arabic
#     '\u060C',
#     # U+062A (Arabic Letter Teh): cp720
#     '\u062A',
#     # U+0E01 (Thai Character Ko Kai): cp874
#     '\u0E01',
#
#     # Then try more "special" characters. "special" because they may be
#     # interpreted or displayed differently depending on the exact locale
#     # encoding and the font.
#
#     # U+00A0 (No-Break Space)
#     '\u00A0',
#     # U+20AC (Euro Sign)
#     '\u20AC',
# ):
#     try:
#         os.fsdecode(os.fsencode(character))
#     except UnicodeError:
#         pass
#     else:
#         FS_NONASCII = character
#         break
#
# # TESTFN_UNICODE is a non-ascii filename
# TESTFN_UNICODE = TESTFN + "-\xe0\xf2\u0258\u0141\u011f"
# if sys.platform == 'darwin':
#     # In Mac OS X's VFS API file names are, by definition, canonically
#     # decomposed Unicode, encoded using UTF-8. See QA1173:
#     # http://developer.apple.com/mac/library/qa/qa2001/qa1173.html
#     import unicodedata
#     TESTFN_UNICODE = unicodedata.normalize('NFD', TESTFN_UNICODE)
# TESTFN_ENCODING = sys.getfilesystemencoding()
#
# # TESTFN_UNENCODABLE is a filename (str type) that should *not* be able to be
# # encoded by the filesystem encoding (in strict mode). It can be None if we
# # cannot generate such filename.
# TESTFN_UNENCODABLE = None
# if os.name in ('nt', 'ce'):
#     # skip win32s (0) or Windows 9x/ME (1)
#     if sys.getwindowsversion().platform >= 2:
#         # Different kinds of characters from various languages to minimize the
#         # probability that the whole name is encodable to MBCS (issue #9819)
#         TESTFN_UNENCODABLE = TESTFN + "-\u5171\u0141\u2661\u0363\uDC80"
#         try:
#             TESTFN_UNENCODABLE.encode(TESTFN_ENCODING)
#         except UnicodeEncodeError:
#             pass
#         else:
#             print('WARNING: The filename %r CAN be encoded by the filesystem encoding (%s). '
#                   'Unicode filename tests may not be effective'
#                   % (TESTFN_UNENCODABLE, TESTFN_ENCODING))
#             TESTFN_UNENCODABLE = None
# # Mac OS X denies unencodable filenames (invalid utf-8)
# elif sys.platform != 'darwin':
#     try:
#         # ascii and utf-8 cannot encode the byte 0xff
#         b'\xff'.decode(TESTFN_ENCODING)
#     except UnicodeDecodeError:
#         # 0xff will be encoded using the surrogate character u+DCFF
#         TESTFN_UNENCODABLE = TESTFN \
#             + b'-\xff'.decode(TESTFN_ENCODING, 'surrogateescape')
#     else:
#         # File system encoding (eg. ISO-8859-* encodings) can encode
#         # the byte 0xff. Skip some unicode filename tests.
#         pass
#
# # TESTFN_UNDECODABLE is a filename (bytes type) that should *not* be able to be
# # decoded from the filesystem encoding (in strict mode). It can be None if we
# # cannot generate such filename (ex: the latin1 encoding can decode any byte
# # sequence). On UNIX, TESTFN_UNDECODABLE can be decoded by os.fsdecode() thanks
# # to the surrogateescape error handler (PEP 383), but not from the filesystem
# # encoding in strict mode.
# TESTFN_UNDECODABLE = None
# for name in (
#     # b'\xff' is not decodable by os.fsdecode() with code page 932. Windows
#     # accepts it when creating a file or directory, but refuses to enter
#     # such a directory (when the bytes name is used). So test b'\xe7'
#     # first: it is not decodable from cp932.
#     b'\xe7w\xf0',
#     # undecodable from ASCII, UTF-8
#     b'\xff',
#     # undecodable from iso8859-3, iso8859-6, iso8859-7, cp424, iso8859-8, cp856
#     # and cp857
#     b'\xae\xd5'
#     # undecodable from UTF-8 (UNIX and Mac OS X)
#     b'\xed\xb2\x80', b'\xed\xb4\x80',
#     # undecodable from shift_jis, cp869, cp874, cp932, cp1250, cp1251, cp1252,
#     # cp1253, cp1254, cp1255, cp1257, cp1258
#     b'\x81\x98',
# ):
#     try:
#         name.decode(TESTFN_ENCODING)
#     except UnicodeDecodeError:
#         TESTFN_UNDECODABLE = os.fsencode(TESTFN) + name
#         break
#
# if FS_NONASCII:
#     TESTFN_NONASCII = TESTFN + '-' + FS_NONASCII
# else:
#     TESTFN_NONASCII = None

# Save the initial cwd
SAVEDCWD = os.getcwd()

@contextlib.contextmanager
def temp_cwd(name='tempcwd', quiet=False, path=None):
    """
    Context manager that temporarily changes the CWD.

    An existing path may be provided as *path*, in which case this
    function makes no changes to the file system.

    Otherwise, the new CWD is created in the current directory and it's
    named *name*. If *quiet* is False (default) and it's not possible to
    create or change the CWD, an error is raised.  If it's True, only a
    warning is raised and the original CWD is used.
    """
    saved_dir = os.getcwd()
    is_temporary = False
    if path is None:
        path = name
        try:
            os.mkdir(name)
            is_temporary = True
        except OSError:
            if not quiet:
                raise
            warnings.warn('tests may fail, unable to create temp CWD ' + name,
                          RuntimeWarning, stacklevel=3)
    try:
        os.chdir(path)
    except OSError:
        if not quiet:
            raise
        warnings.warn('tests may fail, unable to change the CWD to ' + path,
                      RuntimeWarning, stacklevel=3)
    try:
        yield os.getcwd()
    finally:
        os.chdir(saved_dir)
        if is_temporary:
            rmtree(name)
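The save/chdir/restore pattern that temp_cwd implements can be sketched standalone, with `tempfile.TemporaryDirectory` standing in for the mkdir/rmtree pair:

```python
import contextlib
import os
import tempfile

@contextlib.contextmanager
def _chdir(path):
    # Remember the current directory, switch, and always switch back.
    saved = os.getcwd()
    os.chdir(path)
    try:
        yield path
    finally:
        os.chdir(saved)

before = os.getcwd()
with tempfile.TemporaryDirectory() as d:
    with _chdir(d):
        # samefile tolerates symlinked temp dirs (e.g. /tmp on macOS)
        assert os.path.samefile(os.getcwd(), d)
assert os.getcwd() == before
```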


if hasattr(os, "umask"):
    @contextlib.contextmanager
    def temp_umask(umask):
        """Context manager that temporarily sets the process umask."""
        oldmask = os.umask(umask)
        try:
            yield
        finally:
            os.umask(oldmask)


def findfile(file, here=__file__, subdir=None):
    """Try to find a file on sys.path and the working directory.  If it is not
    found the argument passed to the function is returned (this does not
    necessarily signal failure; could still be the legitimate path)."""
    if os.path.isabs(file):
        return file
    if subdir is not None:
        file = os.path.join(subdir, file)
    path = sys.path
    path = [os.path.dirname(here)] + path
    for dn in path:
        fn = os.path.join(dn, file)
        if os.path.exists(fn): return fn
    return file

def create_empty_file(filename):
    """Create an empty file. If the file already exists, truncate it."""
    fd = os.open(filename, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    os.close(fd)

def sortdict(d):
    "Like repr(dict), but in sorted order."
    items = sorted(d.items())
    reprpairs = ["%r: %r" % pair for pair in items]
    withcommas = ", ".join(reprpairs)
    return "{%s}" % withcommas

def make_bad_fd():
    """
    Create an invalid file descriptor by opening and closing a file and return
    its fd.
    """
    file = open(TESTFN, "wb")
    try:
        return file.fileno()
    finally:
        file.close()
        unlink(TESTFN)

def check_syntax_error(testcase, statement):
    testcase.assertRaises(SyntaxError, compile, statement,
                          '<test string>', 'exec')

def open_urlresource(url, *args, **kw):
    from future.backports.urllib import (request as urllib_request,
                                         parse as urllib_parse)

    check = kw.pop('check', None)

    filename = urllib_parse.urlparse(url)[2].split('/')[-1] # '/' because it's a URL

    fn = os.path.join(os.path.dirname(__file__), "data", filename)

    def check_valid_file(fn):
        f = open(fn, *args, **kw)
        if check is None:
            return f
        elif check(f):
            f.seek(0)
            return f
        f.close()

    if os.path.exists(fn):
        f = check_valid_file(fn)
        if f is not None:
            return f
        unlink(fn)

    # Verify the requirement before downloading the file
    requires('urlfetch')

    print('\tfetching %s ...' % url, file=get_original_stdout())
    f = urllib_request.urlopen(url, timeout=15)
    try:
        with open(fn, "wb") as out:
            s = f.read()
            while s:
                out.write(s)
                s = f.read()
    finally:
        f.close()

    f = check_valid_file(fn)
    if f is not None:
        return f
    raise TestFailed('invalid resource %r' % fn)


class WarningsRecorder(object):
    """Convenience wrapper for the warnings list returned on
       entry to the warnings.catch_warnings() context manager.
    """
    def __init__(self, warnings_list):
        self._warnings = warnings_list
        self._last = 0

    def __getattr__(self, attr):
        if len(self._warnings) > self._last:
            return getattr(self._warnings[-1], attr)
        elif attr in warnings.WarningMessage._WARNING_DETAILS:
            return None
        raise AttributeError("%r has no attribute %r" % (self, attr))

    @property
    def warnings(self):
        return self._warnings[self._last:]

    def reset(self):
        self._last = len(self._warnings)


def _filterwarnings(filters, quiet=False):
    """Catch the warnings, then check if all the expected
    warnings have been raised and re-raise unexpected warnings.
    If 'quiet' is True, only re-raise the unexpected warnings.
    """
    # Clear the warning registry of the calling module
    # in order to re-raise the warnings.
    frame = sys._getframe(2)
    registry = frame.f_globals.get('__warningregistry__')
    if registry:
        if utils.PY3:
            registry.clear()
        else:
            # Py2-compatible:
            for i in range(len(registry)):
                registry.pop()
    with warnings.catch_warnings(record=True) as w:
        # Set filter "always" to record all warnings.  Because
        # test_warnings swap the module, we need to look up in
        # the sys.modules dictionary.
        sys.modules['warnings'].simplefilter("always")
        yield WarningsRecorder(w)
    # Filter the recorded warnings
    reraise = list(w)
    missing = []
    for msg, cat in filters:
        seen = False
        for w in reraise[:]:
            warning = w.message
            # Filter out the matching messages
            if (re.match(msg, str(warning), re.I) and
                issubclass(warning.__class__, cat)):
                seen = True
                reraise.remove(w)
        if not seen and not quiet:
            # This filter caught nothing
            missing.append((msg, cat.__name__))
    if reraise:
        raise AssertionError("unhandled warning %s" % reraise[0])
    if missing:
        raise AssertionError("filter (%r, %s) did not catch any warning" %
                             missing[0])


@contextlib.contextmanager
def check_warnings(*filters, **kwargs):
    """Context manager to silence warnings.

    Accept 2-tuples as positional arguments:
        ("message regexp", WarningCategory)

    Optional argument:
     - if 'quiet' is True, a filter that catches nothing does not fail
        (defaults to True when no filters are given,
         and to False when some filters are defined)

    Without argument, it defaults to:
        check_warnings(("", Warning), quiet=True)
    """
    quiet = kwargs.get('quiet')
    if not filters:
        filters = (("", Warning),)
        # Preserve backward compatibility
        if quiet is None:
            quiet = True
    return _filterwarnings(filters, quiet)
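The core mechanism behind check_warnings, shown with only the stdlib: record warnings via `catch_warnings(record=True)`, then match message and category the same way _filterwarnings does (a case-insensitive `re.match` plus `issubclass`).

```python
import re
import warnings

with warnings.catch_warnings(record=True) as caught:
    # "always" ensures repeated warnings are not suppressed by the registry.
    warnings.simplefilter("always")
    warnings.warn("this is deprecated", DeprecationWarning)

assert len(caught) == 1
# A (message-regexp, category) filter, as check_warnings accepts:
msg, cat = "this is", DeprecationWarning
assert re.match(msg, str(caught[0].message), re.I)
assert issubclass(caught[0].category, cat)
```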


class CleanImport(object):
    """Context manager to force import to return a new module reference.

    This is useful for testing module-level behaviours, such as
    the emission of a DeprecationWarning on import.

    Use like this:

        with CleanImport("foo"):
            importlib.import_module("foo") # new reference
    """

    def __init__(self, *module_names):
        self.original_modules = sys.modules.copy()
        for module_name in module_names:
            if module_name in sys.modules:
                module = sys.modules[module_name]
                # It is possible that module_name is just an alias for
                # another module (e.g. stub for modules renamed in 3.x).
                # In that case, we also need to delete the real module to clear
                # the import cache.
                if module.__name__ != module_name:
                    del sys.modules[module.__name__]
                del sys.modules[module_name]

    def __enter__(self):
        return self

    def __exit__(self, *ignore_exc):
        sys.modules.update(self.original_modules)

### Added for python-future:
if utils.PY3:
    import collections.abc
    mybase = collections.abc.MutableMapping
else:
    import UserDict
    mybase = UserDict.DictMixin
###

class EnvironmentVarGuard(mybase):

    """Class to help protect the environment variable properly.  Can be used as
    a context manager."""

    def __init__(self):
        self._environ = os.environ
        self._changed = {}

    def __getitem__(self, envvar):
        return self._environ[envvar]

    def __setitem__(self, envvar, value):
        # Remember the initial value on the first access
        if envvar not in self._changed:
            self._changed[envvar] = self._environ.get(envvar)
        self._environ[envvar] = value

    def __delitem__(self, envvar):
        # Remember the initial value on the first access
        if envvar not in self._changed:
            self._changed[envvar] = self._environ.get(envvar)
        if envvar in self._environ:
            del self._environ[envvar]

    def keys(self):
        return self._environ.keys()

    def __iter__(self):
        return iter(self._environ)

    def __len__(self):
        return len(self._environ)

    def set(self, envvar, value):
        self[envvar] = value

    def unset(self, envvar):
        del self[envvar]

    def __enter__(self):
        return self

    def __exit__(self, *ignore_exc):
        for (k, v) in self._changed.items():
            if v is None:
                if k in self._environ:
                    del self._environ[k]
            else:
                self._environ[k] = v
        os.environ = self._environ
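EnvironmentVarGuard's restore logic in isolation: remember the first-seen value (None meaning "was unset") and put it back afterwards. `_HYPOTHETICAL_VAR` is a made-up variable name for illustration.

```python
import os

name = "_HYPOTHETICAL_VAR"
saved = os.environ.get(name)        # None if the variable was unset
os.environ[name] = "temporary"
assert os.environ[name] == "temporary"

# Restore: delete if it did not exist before, otherwise reinstate it.
if saved is None:
    del os.environ[name]
else:
    os.environ[name] = saved
assert os.environ.get(name) == saved
```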


class DirsOnSysPath(object):
    """Context manager to temporarily add directories to sys.path.

    This makes a copy of sys.path, appends any directories given
    as positional arguments, then reverts sys.path to the copied
    settings when the context ends.

    Note that *all* sys.path modifications in the body of the
    context manager, including replacement of the object,
    will be reverted at the end of the block.
    """

    def __init__(self, *paths):
        self.original_value = sys.path[:]
        self.original_object = sys.path
        sys.path.extend(paths)

    def __enter__(self):
        return self

    def __exit__(self, *ignore_exc):
        sys.path = self.original_object
        sys.path[:] = self.original_value
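DirsOnSysPath's restore strategy, shown standalone: keep both the list object and a copy of its contents, so that even a wholesale `sys.path = [...]` replacement inside the block is undone. The directory name is made up for illustration.

```python
import sys

original_object = sys.path          # the list object itself
original_value = sys.path[:]        # a snapshot of its contents
sys.path.append("/hypothetical/extra/dir")
assert "/hypothetical/extra/dir" in sys.path

sys.path = original_object          # undo any object replacement
sys.path[:] = original_value        # undo content changes in place
assert "/hypothetical/extra/dir" not in sys.path
```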


class TransientResource(object):

    """Raise ResourceDenied if an exception is raised while the context manager
    is in effect that matches the specified exception and attributes."""

    def __init__(self, exc, **kwargs):
        self.exc = exc
        self.attrs = kwargs

    def __enter__(self):
        return self

    def __exit__(self, type_=None, value=None, traceback=None):
        """If type_ is a subclass of self.exc and value has attributes matching
        self.attrs, raise ResourceDenied.  Otherwise let the exception
        propagate (if any)."""
        if type_ is not None and issubclass(self.exc, type_):
            for attr, attr_value in self.attrs.items():
                if not hasattr(value, attr):
                    break
                if getattr(value, attr) != attr_value:
                    break
            else:
                raise ResourceDenied("an optional resource is not available")

# Context managers that raise ResourceDenied when various issues
# with the Internet connection manifest themselves as exceptions.
# XXX deprecate these and use transient_internet() instead
time_out = TransientResource(IOError, errno=errno.ETIMEDOUT)
socket_peer_reset = TransientResource(socket.error, errno=errno.ECONNRESET)
ioerror_peer_reset = TransientResource(IOError, errno=errno.ECONNRESET)


@contextlib.contextmanager
def transient_internet(resource_name, timeout=30.0, errnos=()):
    """Return a context manager that raises ResourceDenied when various issues
    with the Internet connection manifest themselves as exceptions."""
    default_errnos = [
        ('ECONNREFUSED', 111),
        ('ECONNRESET', 104),
        ('EHOSTUNREACH', 113),
        ('ENETUNREACH', 101),
        ('ETIMEDOUT', 110),
    ]
    default_gai_errnos = [
        ('EAI_AGAIN', -3),
        ('EAI_FAIL', -4),
        ('EAI_NONAME', -2),
        ('EAI_NODATA', -5),
        # Encountered when trying to resolve IPv6-only hostnames
        ('WSANO_DATA', 11004),
    ]

    denied = ResourceDenied("Resource %r is not available" % resource_name)
    captured_errnos = errnos
    gai_errnos = []
    if not captured_errnos:
        captured_errnos = [getattr(errno, name, num)
                           for (name, num) in default_errnos]
        gai_errnos = [getattr(socket, name, num)
                      for (name, num) in default_gai_errnos]

    def filter_error(err):
        n = getattr(err, 'errno', None)
        if (isinstance(err, socket.timeout) or
            (isinstance(err, socket.gaierror) and n in gai_errnos) or
            n in captured_errnos):
            if not verbose:
                sys.stderr.write(denied.args[0] + "\n")
            # Was: raise denied from err
            # For Python-Future:
            exc = denied
            exc.__cause__ = err
            raise exc

    old_timeout = socket.getdefaulttimeout()
    try:
        if timeout is not None:
            socket.setdefaulttimeout(timeout)
        yield
    except IOError as err:
        # urllib can wrap original socket errors multiple times (!), we must
        # unwrap to get at the original error.
        while True:
            a = err.args
            if len(a) >= 1 and isinstance(a[0], IOError):
                err = a[0]
            # The error can also be wrapped as args[1]:
            #    except socket.error as msg:
            #        raise IOError('socket error', msg).with_traceback(sys.exc_info()[2])
            elif len(a) >= 2 and isinstance(a[1], IOError):
                err = a[1]
            else:
                break
        filter_error(err)
        raise
    # XXX should we catch generic exceptions and look for their
    # __cause__ or __context__?
    finally:
        socket.setdefaulttimeout(old_timeout)


@contextlib.contextmanager
def captured_output(stream_name):
    """Return a context manager used by captured_stdout/stdin/stderr
    that temporarily replaces the sys stream *stream_name* with a StringIO."""
    import io
    orig_stdout = getattr(sys, stream_name)
    setattr(sys, stream_name, io.StringIO())
    try:
        yield getattr(sys, stream_name)
    finally:
        setattr(sys, stream_name, orig_stdout)

def captured_stdout():
    """Capture the output of sys.stdout:

       with captured_stdout() as s:
           print("hello")
       self.assertEqual(s.getvalue(), "hello")
    """
    return captured_output("stdout")

def captured_stderr():
    return captured_output("stderr")

def captured_stdin():
    return captured_output("stdin")


def gc_collect():
    """Force as many objects as possible to be collected.

    In non-CPython implementations of Python, this is needed because timely
    deallocation is not guaranteed by the garbage collector.  (Even in CPython
    this can be the case in case of reference cycles.)  This means that __del__
    methods may be called later than expected and weakrefs may remain alive for
    longer than expected.  This function tries its best to force all garbage
    objects to disappear.
    """
    gc.collect()
    if is_jython:
        time.sleep(0.1)
    gc.collect()
    gc.collect()

@contextlib.contextmanager
def disable_gc():
    have_gc = gc.isenabled()
    gc.disable()
    try:
        yield
    finally:
        if have_gc:
            gc.enable()


def python_is_optimized():
    """Find if Python was built with optimizations."""
    # We don't have sysconfig on Py2.6:
    import sysconfig
    cflags = sysconfig.get_config_var('PY_CFLAGS') or ''
    final_opt = ""
    for opt in cflags.split():
        if opt.startswith('-O'):
            final_opt = opt
    return final_opt != '' and final_opt != '-O0'


_header = 'nP'
_align = '0n'
if hasattr(sys, "gettotalrefcount"):
    _header = '2P' + _header
    _align = '0P'
_vheader = _header + 'n'

def calcobjsize(fmt):
    return struct.calcsize(_header + fmt + _align)

def calcvobjsize(fmt):
    return struct.calcsize(_vheader + fmt + _align)
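How the format helpers compose, assuming a typical non-debug CPython build: the object header is refcount plus type pointer (`'nP'`), var-sized objects add an `ob_size` field (`'n'`), and the trailing `'0n'`/`'0P'` contributes only alignment padding.

```python
import struct

header = struct.calcsize("nP")           # refcount + type pointer
vheader = struct.calcsize("nPn")         # ...plus ob_size for var-sized objects
assert vheader == header + struct.calcsize("n")

# A hypothetical object with one extra pointer field, as calcobjsize('P')
# would measure it on a non-debug build:
assert struct.calcsize("nPP0n") >= header + struct.calcsize("P")
```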


_TPFLAGS_HAVE_GC = 1<<14
_TPFLAGS_HEAPTYPE = 1<<9

def check_sizeof(test, o, size):
    result = sys.getsizeof(o)
    # add GC header size
    if ((type(o) == type) and (o.__flags__ & _TPFLAGS_HEAPTYPE) or
        ((type(o) != type) and (type(o).__flags__ & _TPFLAGS_HAVE_GC))):
        size += _testcapi.SIZEOF_PYGC_HEAD
    msg = 'wrong size for %s: got %d, expected %d' \
            % (type(o), result, size)
    test.assertEqual(result, size, msg)

#=======================================================================
# Decorator for running a function in a different locale, correctly resetting
# it afterwards.

def run_with_locale(catstr, *locales):
    def decorator(func):
        def inner(*args, **kwds):
            try:
                import locale
                category = getattr(locale, catstr)
                orig_locale = locale.setlocale(category)
            except AttributeError:
                # if the test author gives us an invalid category string
                raise
            except:
                # cannot retrieve original locale, so do nothing
                locale = orig_locale = None
            else:
                for loc in locales:
                    try:
                        locale.setlocale(category, loc)
                        break
                    except:
                        pass

            # now run the function, resetting the locale on exceptions
            try:
                return func(*args, **kwds)
            finally:
                if locale and orig_locale:
                    locale.setlocale(category, orig_locale)
        inner.__name__ = func.__name__
        inner.__doc__ = func.__doc__
        return inner
    return decorator
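
A usage sketch of the decorator above. The copy below is condensed (it drops the AttributeError bookkeeping for invalid category names), and `current_decimal_point` is an invented example function:

```python
import locale

def run_with_locale(catstr, *locales):
    # Condensed copy of the decorator above: try each candidate locale in
    # turn, run the function, then restore whatever was set before.
    def decorator(func):
        def inner(*args, **kwds):
            category = getattr(locale, catstr)
            orig_locale = locale.setlocale(category)
            for loc in locales:
                try:
                    locale.setlocale(category, loc)
                    break
                except locale.Error:
                    continue
            try:
                return func(*args, **kwds)
            finally:
                locale.setlocale(category, orig_locale)
        return inner
    return decorator

@run_with_locale('LC_NUMERIC', '')      # '' means the user's default locale
def current_decimal_point():
    return locale.localeconv()['decimal_point']

print(current_decimal_point())          # '.' in the C locale
```

Whatever locale was active before the call is restored afterwards, even if the wrapped function raises.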

#=======================================================================
# Decorator for running a function in a specific timezone, correctly
# resetting it afterwards.

def run_with_tz(tz):
    def decorator(func):
        def inner(*args, **kwds):
            try:
                tzset = time.tzset
            except AttributeError:
                raise unittest.SkipTest("tzset required")
            if 'TZ' in os.environ:
                orig_tz = os.environ['TZ']
            else:
                orig_tz = None
            os.environ['TZ'] = tz
            tzset()

            # now run the function, resetting the tz on exceptions
            try:
                return func(*args, **kwds)
            finally:
                if orig_tz is None:
                    del os.environ['TZ']
                else:
                    os.environ['TZ'] = orig_tz
                time.tzset()

        inner.__name__ = func.__name__
        inner.__doc__ = func.__doc__
        return inner
    return decorator
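
The same pattern sketched standalone. This assumes a POSIX system, since `time.tzset()` does not exist on Windows (the original decorator skips the test in that case); `utc_offset` is an illustrative name:

```python
import functools
import os
import time

def run_with_tz(tz):
    # Condensed copy of the decorator above: override TZ, run the function,
    # then restore (or remove) the original value.
    def decorator(func):
        @functools.wraps(func)
        def inner(*args, **kwds):
            orig_tz = os.environ.get('TZ')
            os.environ['TZ'] = tz
            time.tzset()                     # POSIX-only
            try:
                return func(*args, **kwds)
            finally:
                if orig_tz is None:
                    del os.environ['TZ']
                else:
                    os.environ['TZ'] = orig_tz
                time.tzset()
        return inner
    return decorator

@run_with_tz('UTC')
def utc_offset():
    return time.timezone                     # seconds west of UTC

if hasattr(time, 'tzset'):
    print(utc_offset())                      # 0 while TZ=UTC is active
```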

#=======================================================================
# Big-memory-test support. Separate from 'resources' because memory use
# should be configurable.

# Some handy shorthands. Note that these are used for byte-limits as well
# as size-limits, in the various bigmem tests
_1M = 1024*1024
_1G = 1024 * _1M
_2G = 2 * _1G
_4G = 4 * _1G

MAX_Py_ssize_t = sys.maxsize

def set_memlimit(limit):
    global max_memuse
    global real_max_memuse
    sizes = {
        'k': 1024,
        'm': _1M,
        'g': _1G,
        't': 1024*_1G,
    }
    m = re.match(r'(\d+(\.\d+)?) (K|M|G|T)b?$', limit,
                 re.IGNORECASE | re.VERBOSE)
    if m is None:
        raise ValueError('Invalid memory limit %r' % (limit,))
    memlimit = int(float(m.group(1)) * sizes[m.group(3).lower()])
    real_max_memuse = memlimit
    if memlimit > MAX_Py_ssize_t:
        memlimit = MAX_Py_ssize_t
    if memlimit < _2G - 1:
        raise ValueError('Memory limit %r too low to be useful' % (limit,))
    max_memuse = memlimit
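
The parsing step in isolation (a sketch; `parse_memlimit` is a hypothetical name and the module globals are omitted). Note that `re.VERBOSE` makes the literal space in the pattern insignificant, so limits are written without spaces, e.g. `'2.5G'`:

```python
import re

_1M = 1024 * 1024
_1G = 1024 * _1M

def parse_memlimit(limit):
    # Same regex and size table as set_memlimit() above.
    sizes = {'k': 1024, 'm': _1M, 'g': _1G, 't': 1024 * _1G}
    m = re.match(r'(\d+(\.\d+)?) (K|M|G|T)b?$', limit,
                 re.IGNORECASE | re.VERBOSE)
    if m is None:
        raise ValueError('Invalid memory limit %r' % (limit,))
    return int(float(m.group(1)) * sizes[m.group(3).lower()])

print(parse_memlimit('2.5G'))    # 2684354560
print(parse_memlimit('100m'))    # 104857600
```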

class _MemoryWatchdog(object):
    """An object which periodically watches the process' memory consumption
    and prints it out.
    """

    def __init__(self):
        self.procfile = '/proc/{pid}/statm'.format(pid=os.getpid())
        self.started = False

    def start(self):
        try:
            f = open(self.procfile, 'r')
        except OSError as e:
            warnings.warn('/proc not available for stats: {0}'.format(e),
                          RuntimeWarning)
            sys.stderr.flush()
            return

        watchdog_script = findfile("memory_watchdog.py")
        self.mem_watchdog = subprocess.Popen([sys.executable, watchdog_script],
                                             stdin=f, stderr=subprocess.DEVNULL)
        f.close()
        self.started = True

    def stop(self):
        if self.started:
            self.mem_watchdog.terminate()
            self.mem_watchdog.wait()


def bigmemtest(size, memuse, dry_run=True):
    """Decorator for bigmem tests.

    'minsize' is the minimum useful size for the test (in arbitrary,
    test-interpreted units.) 'memuse' is the number of 'bytes per size' for
    the test, or a good estimate of it.

    if 'dry_run' is False, it means the test doesn't support dummy runs
    when -M is not specified.
    """
    def decorator(f):
        def wrapper(self):
            size = wrapper.size
            memuse = wrapper.memuse
            if not real_max_memuse:
                maxsize = 5147
            else:
                maxsize = size

            if ((real_max_memuse or not dry_run)
                and real_max_memuse < maxsize * memuse):
                raise unittest.SkipTest(
                    "not enough memory: %.1fG minimum needed"
                    % (size * memuse / (1024 ** 3)))

            if real_max_memuse and verbose:
                print()
                print(" ... expected peak memory use: {peak:.1f}G"
                      .format(peak=size * memuse / (1024 ** 3)))
                watchdog = _MemoryWatchdog()
                watchdog.start()
            else:
                watchdog = None

            try:
                return f(self, maxsize)
            finally:
                if watchdog:
                    watchdog.stop()

        wrapper.size = size
        wrapper.memuse = memuse
        return wrapper
    return decorator

def bigaddrspacetest(f):
    """Decorator for tests that fill the address space."""
    def wrapper(self):
        if max_memuse < MAX_Py_ssize_t:
            if MAX_Py_ssize_t >= 2**63 - 1 and max_memuse >= 2**31:
                raise unittest.SkipTest(
                    "not enough memory: try a 32-bit build instead")
            else:
                raise unittest.SkipTest(
                    "not enough memory: %.1fG minimum needed"
                    % (MAX_Py_ssize_t / (1024 ** 3)))
        else:
            return f(self)
    return wrapper

#=======================================================================
# unittest integration.

class BasicTestRunner(object):
    def run(self, test):
        result = unittest.TestResult()
        test(result)
        return result

def _id(obj):
    return obj

def requires_resource(resource):
    if resource == 'gui' and not _is_gui_available():
        return unittest.skip("resource 'gui' is not available")
    if is_resource_enabled(resource):
        return _id
    else:
        return unittest.skip("resource {0!r} is not enabled".format(resource))

def cpython_only(test):
    """
    Decorator for tests only applicable on CPython.
    """
    return impl_detail(cpython=True)(test)

def impl_detail(msg=None, **guards):
    if check_impl_detail(**guards):
        return _id
    if msg is None:
        guardnames, default = _parse_guards(guards)
        if default:
            msg = "implementation detail not available on {0}"
        else:
            msg = "implementation detail specific to {0}"
        guardnames = sorted(guardnames.keys())
        msg = msg.format(' or '.join(guardnames))
    return unittest.skip(msg)

def _parse_guards(guards):
    # Returns a tuple ({platform_name: run_me}, default_value)
    if not guards:
        return ({'cpython': True}, False)
    is_true = list(guards.values())[0]
    assert list(guards.values()) == [is_true] * len(guards)   # all True or all False
    return (guards, not is_true)

# Use the following check to guard CPython's implementation-specific tests --
# or to run them only on the implementation(s) guarded by the arguments.
def check_impl_detail(**guards):
    """This function returns True or False depending on the host platform.
       Examples:
          if check_impl_detail():               # only on CPython (default)
          if check_impl_detail(jython=True):    # only on Jython
          if check_impl_detail(cpython=False):  # everywhere except on CPython
    """
    guards, default = _parse_guards(guards)
    return guards.get(platform.python_implementation().lower(), default)
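
A condensed, runnable copy of the guard machinery, with the printed results assuming it runs on CPython:

```python
import platform

def _parse_guards(guards):
    # Same logic as the helper above: no guards means "CPython only".
    if not guards:
        return ({'cpython': True}, False)
    is_true = list(guards.values())[0]
    return (guards, not is_true)

def check_impl_detail(**guards):
    guards, default = _parse_guards(guards)
    return guards.get(platform.python_implementation().lower(), default)

print(check_impl_detail())               # True on CPython
print(check_impl_detail(cpython=False))  # False on CPython
```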


def no_tracing(func):
    """Decorator to temporarily turn off tracing for the duration of a test."""
    if not hasattr(sys, 'gettrace'):
        return func
    else:
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            original_trace = sys.gettrace()
            try:
                sys.settrace(None)
                return func(*args, **kwargs)
            finally:
                sys.settrace(original_trace)
        return wrapper


def refcount_test(test):
    """Decorator for tests which involve reference counting.

    To start, the decorator does not run the test if it is not run by CPython.
    After that, any trace function is unset during the test to prevent
    unexpected refcounts caused by the trace function.

    """
    return no_tracing(cpython_only(test))


def _filter_suite(suite, pred):
    """Recursively filter test cases in a suite based on a predicate."""
    newtests = []
    for test in suite._tests:
        if isinstance(test, unittest.TestSuite):
            _filter_suite(test, pred)
            newtests.append(test)
        else:
            if pred(test):
                newtests.append(test)
    suite._tests = newtests

def _run_suite(suite):
    """Run tests from a unittest.TestSuite-derived class."""
    if verbose:
        runner = unittest.TextTestRunner(sys.stdout, verbosity=2,
                                         failfast=failfast)
    else:
        runner = BasicTestRunner()

    result = runner.run(suite)
    if not result.wasSuccessful():
        if len(result.errors) == 1 and not result.failures:
            err = result.errors[0][1]
        elif len(result.failures) == 1 and not result.errors:
            err = result.failures[0][1]
        else:
            err = "multiple errors occurred"
            if not verbose: err += "; run in verbose mode for details"
        raise TestFailed(err)


def run_unittest(*classes):
    """Run tests from unittest.TestCase-derived classes."""
    valid_types = (unittest.TestSuite, unittest.TestCase)
    suite = unittest.TestSuite()
    for cls in classes:
        if isinstance(cls, str):
            if cls in sys.modules:
                suite.addTest(unittest.findTestCases(sys.modules[cls]))
            else:
                raise ValueError("str arguments must be keys in sys.modules")
        elif isinstance(cls, valid_types):
            suite.addTest(cls)
        else:
            suite.addTest(unittest.makeSuite(cls))
    def case_pred(test):
        if match_tests is None:
            return True
        for name in test.id().split("."):
            if fnmatch.fnmatchcase(name, match_tests):
                return True
        return False
    _filter_suite(suite, case_pred)
    _run_suite(suite)

# We don't have sysconfig on Py2.6, so the docstring check below is disabled:
# #=======================================================================
# # Check for the presence of docstrings.
#
# HAVE_DOCSTRINGS = (check_impl_detail(cpython=False) or
#                    sys.platform == 'win32' or
#                    sysconfig.get_config_var('WITH_DOC_STRINGS'))
#
# requires_docstrings = unittest.skipUnless(HAVE_DOCSTRINGS,
#                                           "test requires docstrings")
#
#
# #=======================================================================
# doctest driver.

def run_doctest(module, verbosity=None, optionflags=0):
    """Run doctest on the given module.  Return (#failures, #tests).

    If optional argument verbosity is not specified (or is None), pass
    support's belief about verbosity on to doctest.  Else doctest's
    usual behavior is used (it searches sys.argv for -v).
    """

    import doctest

    if verbosity is None:
        verbosity = verbose
    else:
        verbosity = None

    f, t = doctest.testmod(module, verbose=verbosity, optionflags=optionflags)
    if f:
        raise TestFailed("%d of %d doctests failed" % (f, t))
    if verbose:
        print('doctest (%s) ... %d tests with zero failures' %
              (module.__name__, t))
    return f, t
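
run_doctest() leans on doctest.testmod(); the same finder/runner machinery can be exercised directly on a single function (a sketch; `square` is an invented example):

```python
import doctest

def square(n):
    """Return n squared.

    >>> square(3)
    9
    >>> square(-2)
    4
    """
    return n * n

# Collect the examples from one object's docstring and run them.  Passing
# globs explicitly keeps the sketch independent of any module namespace.
finder = doctest.DocTestFinder()
runner = doctest.DocTestRunner(verbose=False)
for test in finder.find(square, 'square', globs={'square': square}):
    runner.run(test)

# runner.tries counts examples executed, runner.failures the ones that failed
print(runner.failures, runner.tries)
```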


#=======================================================================
# Support for saving and restoring the imported modules.

def modules_setup():
    return sys.modules.copy(),

def modules_cleanup(oldmodules):
    # Encoders/decoders are registered permanently within the internal
    # codec cache. If we destroy the corresponding modules their
    # globals will be set to None which will trip up the cached functions.
    encodings = [(k, v) for k, v in sys.modules.items()
                 if k.startswith('encodings.')]
    # Was:
    # sys.modules.clear()
    # Py2-compatible (dict.pop() requires a key argument, so use popitem()):
    for i in range(len(sys.modules)):
        sys.modules.popitem()

    sys.modules.update(encodings)
    # XXX: This kind of problem can affect more than just encodings. In particular
    # extension modules (such as _ssl) don't cope with reloading properly.
    # Really, test modules should be cleaning out the test specific modules they
    # know they added (ala test_runpy) rather than relying on this function (as
    # test_importhooks and test_pkg do currently).
    # Implicitly imported *real* modules should be left alone (see issue 10556).
    sys.modules.update(oldmodules)

#=======================================================================
# Backported versions of threading_setup() and threading_cleanup() which don't refer
# to threading._dangling (not available on Py2.7).

# Threading support to prevent reporting refleaks when running regrtest.py -R

# NOTE: we use thread._count() rather than threading.enumerate() (or the
# moral equivalent thereof) because a threading.Thread object is still alive
# until its __bootstrap() method has returned, even after it has been
# unregistered from the threading module.
# thread._count(), on the other hand, only gets decremented *after* the
# __bootstrap() method has returned, which gives us reliable reference counts
# at the end of a test run.

def threading_setup():
    if _thread:
        return _thread._count(),
    else:
        return 1,

def threading_cleanup(nb_threads):
    if not _thread:
        return

    _MAX_COUNT = 10
    for count in range(_MAX_COUNT):
        n = _thread._count()
        if n == nb_threads:
            break
        time.sleep(0.1)
    # XXX print a warning in case of failure?

def reap_threads(func):
    """Use this function when threads are being used.  This will
    ensure that the threads are cleaned up even when the test fails.
    If threading is unavailable this function does nothing.
    """
    if not _thread:
        return func

    @functools.wraps(func)
    def decorator(*args):
        key = threading_setup()
        try:
            return func(*args)
        finally:
            threading_cleanup(*key)
    return decorator

def reap_children():
    """Use this function at the end of test_main() whenever sub-processes
    are started.  This will help ensure that no extra children (zombies)
    stick around to hog resources and create problems when looking
    for refleaks.
    """

    # Reap all our dead child processes so we don't leave zombies around.
    # These hog resources and might be causing some of the buildbots to die.
    if hasattr(os, 'waitpid'):
        any_process = -1
        while True:
            try:
                # This will raise an exception on Windows.  That's ok.
                pid, status = os.waitpid(any_process, os.WNOHANG)
                if pid == 0:
                    break
            except:
                break

@contextlib.contextmanager
def swap_attr(obj, attr, new_val):
    """Temporary swap out an attribute with a new object.

    Usage:
        with swap_attr(obj, "attr", 5):
            ...

        This will set obj.attr to 5 for the duration of the with: block,
        restoring the old value at the end of the block. If `attr` doesn't
        exist on `obj`, it will be created and then deleted at the end of the
        block.
    """
    if hasattr(obj, attr):
        real_val = getattr(obj, attr)
        setattr(obj, attr, new_val)
        try:
            yield
        finally:
            setattr(obj, attr, real_val)
    else:
        setattr(obj, attr, new_val)
        try:
            yield
        finally:
            delattr(obj, attr)
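
A quick demonstration of both branches (a condensed copy of the context manager above; `Config` is an invented stand-in object):

```python
import contextlib

@contextlib.contextmanager
def swap_attr_demo(obj, attr, new_val):
    # Same logic as swap_attr() above.
    if hasattr(obj, attr):
        real_val = getattr(obj, attr)
        setattr(obj, attr, new_val)
        try:
            yield
        finally:
            setattr(obj, attr, real_val)
    else:
        setattr(obj, attr, new_val)
        try:
            yield
        finally:
            delattr(obj, attr)

class Config(object):
    retries = 3

with swap_attr_demo(Config, 'retries', 10):
    assert Config.retries == 10        # overridden inside the block
assert Config.retries == 3             # restored afterwards

with swap_attr_demo(Config, 'timeout', 5.0):
    assert Config.timeout == 5.0       # created on the fly
assert not hasattr(Config, 'timeout')  # and deleted again
```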

@contextlib.contextmanager
def swap_item(obj, item, new_val):
    """Temporary swap out an item with a new object.

    Usage:
        with swap_item(obj, "item", 5):
            ...

        This will set obj["item"] to 5 for the duration of the with: block,
        restoring the old value at the end of the block. If `item` doesn't
        exist on `obj`, it will be created and then deleted at the end of the
        block.
    """
    if item in obj:
        real_val = obj[item]
        obj[item] = new_val
        try:
            yield
        finally:
            obj[item] = real_val
    else:
        obj[item] = new_val
        try:
            yield
        finally:
            del obj[item]

def strip_python_stderr(stderr):
    """Strip the stderr of a Python process from potential debug output
    emitted by the interpreter.

    This will typically be run on the result of the communicate() method
    of a subprocess.Popen object.
    """
    stderr = re.sub(br"\[\d+ refs\]\r?\n?", b"", stderr).strip()
    return stderr
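
The only debug output this targets is the trailing `[NNNN refs]` line printed by debug (`--with-pydebug`) interpreters; for example (the sample traceback text is invented):

```python
import re

def demo_strip_python_stderr(stderr):
    # Same regex as strip_python_stderr() above: drop a "[NNNN refs]"
    # reference-count line, then strip surrounding whitespace.
    return re.sub(br"\[\d+ refs\]\r?\n?", b"", stderr).strip()

raw = b"Traceback (most recent call last):\n  ...\n[18556 refs]\n"
print(demo_strip_python_stderr(raw))
```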

def args_from_interpreter_flags():
    """Return a list of command-line arguments reproducing the current
    settings in sys.flags and sys.warnoptions."""
    return subprocess._args_from_interpreter_flags()

#============================================================
# Support for assertions about logging.
#============================================================

class TestHandler(logging.handlers.BufferingHandler):
    def __init__(self, matcher):
        # BufferingHandler takes a "capacity" argument
        # so as to know when to flush. As we're overriding
        # shouldFlush anyway, we can set a capacity of zero.
        # You can call flush() manually to clear out the
        # buffer.
        logging.handlers.BufferingHandler.__init__(self, 0)
        self.matcher = matcher

    def shouldFlush(self):
        return False

    def emit(self, record):
        self.format(record)
        self.buffer.append(record.__dict__)

    def matches(self, **kwargs):
        """
        Look for a saved dict whose keys/values match the supplied arguments.
        """
        result = False
        for d in self.buffer:
            if self.matcher.matches(d, **kwargs):
                result = True
                break
        return result

class Matcher(object):

    _partial_matches = ('msg', 'message')

    def matches(self, d, **kwargs):
        """
        Try to match a single dict with the supplied arguments.

        Keys whose values are strings and which are in self._partial_matches
        will be checked for partial (i.e. substring) matches. You can extend
        this scheme to (for example) do regular expression matching, etc.
        """
        result = True
        for k in kwargs:
            v = kwargs[k]
            dv = d.get(k)
            if not self.match_value(k, dv, v):
                result = False
                break
        return result

    def match_value(self, k, dv, v):
        """
        Try to match a single stored value (dv) with a supplied value (v).
        """
        if type(v) != type(dv):
            result = False
        elif type(dv) is not str or k not in self._partial_matches:
            result = (v == dv)
        else:
            result = dv.find(v) >= 0
        return result
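
How the matching works in practice (a condensed copy of Matcher; the record dict is a hand-built stand-in for what TestHandler buffers from a LogRecord):

```python
class DemoMatcher(object):
    # Condensed copy of Matcher above.
    _partial_matches = ('msg', 'message')

    def matches(self, d, **kwargs):
        return all(self.match_value(k, d.get(k), v) for k, v in kwargs.items())

    def match_value(self, k, dv, v):
        if type(v) != type(dv):
            return False
        if type(dv) is not str or k not in self._partial_matches:
            return v == dv
        return dv.find(v) >= 0           # substring match for msg/message

m = DemoMatcher()
record = {'levelname': 'ERROR', 'msg': 'disk full on /var'}
print(m.matches(record, levelname='ERROR'))   # exact match
print(m.matches(record, msg='disk full'))     # substring match
print(m.matches(record, msg='network'))       # no match
```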


_can_symlink = None
def can_symlink():
    global _can_symlink
    if _can_symlink is not None:
        return _can_symlink
    symlink_path = TESTFN + "can_symlink"
    try:
        os.symlink(TESTFN, symlink_path)
        can = True
    except (OSError, NotImplementedError, AttributeError):
        can = False
    else:
        os.remove(symlink_path)
    _can_symlink = can
    return can

def skip_unless_symlink(test):
    """Skip decorator for tests that require functional symlink"""
    ok = can_symlink()
    msg = "Requires functional symlink implementation"
    return test if ok else unittest.skip(msg)(test)

_can_xattr = None
def can_xattr():
    global _can_xattr
    if _can_xattr is not None:
        return _can_xattr
    if not hasattr(os, "setxattr"):
        can = False
    else:
        tmp_fp, tmp_name = tempfile.mkstemp()
        try:
            with open(TESTFN, "wb") as fp:
                try:
                    # TESTFN & tempfile may use different file systems with
                    # different capabilities
                    os.setxattr(tmp_fp, b"user.test", b"")
                    os.setxattr(fp.fileno(), b"user.test", b"")
                    # Kernels < 2.6.39 don't respect setxattr flags.
                    kernel_version = platform.release()
                    m = re.match("2.6.(\d{1,2})", kernel_version)
                    can = m is None or int(m.group(1)) >= 39
                except OSError:
                    can = False
        finally:
            unlink(TESTFN)
            unlink(tmp_name)
    _can_xattr = can
    return can

def skip_unless_xattr(test):
    """Skip decorator for tests that require functional extended attributes"""
    ok = can_xattr()
    msg = "no non-broken extended attribute support"
    return test if ok else unittest.skip(msg)(test)


if sys.platform.startswith('win'):
    @contextlib.contextmanager
    def suppress_crash_popup():
        """Disable Windows Error Reporting dialogs using SetErrorMode."""
        # see http://msdn.microsoft.com/en-us/library/windows/desktop/ms680621%28v=vs.85%29.aspx
        # GetErrorMode is not available on Windows XP and Windows Server 2003,
        # but SetErrorMode returns the previous value, so we can use that
        import ctypes
        k32 = ctypes.windll.kernel32
        SEM_NOGPFAULTERRORBOX = 0x02
        old_error_mode = k32.SetErrorMode(SEM_NOGPFAULTERRORBOX)
        k32.SetErrorMode(old_error_mode | SEM_NOGPFAULTERRORBOX)
        try:
            yield
        finally:
            k32.SetErrorMode(old_error_mode)
else:
    # this is a no-op for other platforms
    @contextlib.contextmanager
    def suppress_crash_popup():
        yield


def patch(test_instance, object_to_patch, attr_name, new_value):
    """Override 'object_to_patch'.'attr_name' with 'new_value'.

    Also, add a cleanup procedure to 'test_instance' to restore
    'object_to_patch' value for 'attr_name'.
    The 'attr_name' should be a valid attribute for 'object_to_patch'.

    """
    # check that 'attr_name' is a real attribute for 'object_to_patch'
    # will raise AttributeError if it does not exist
    getattr(object_to_patch, attr_name)

    # keep a copy of the old value
    attr_is_local = False
    try:
        old_value = object_to_patch.__dict__[attr_name]
    except (AttributeError, KeyError):
        old_value = getattr(object_to_patch, attr_name, None)
    else:
        attr_is_local = True

    # restore the value when the test is done
    def cleanup():
        if attr_is_local:
            setattr(object_to_patch, attr_name, old_value)
        else:
            delattr(object_to_patch, attr_name)

    test_instance.addCleanup(cleanup)

    # actually override the attribute
    setattr(object_to_patch, attr_name, new_value)

#=======================================================================
# File: future/backports/test/pystone.py

#!/usr/bin/env python3

"""
"PYSTONE" Benchmark Program

Version:        Python/1.1 (corresponds to C/1.1 plus 2 Pystone fixes)

Author:         Reinhold P. Weicker,  CACM Vol 27, No 10, 10/84 pg. 1013.

                Translated from ADA to C by Rick Richardson.
                Every method to preserve ADA-likeness has been used,
                at the expense of C-ness.

                Translated from C to Python by Guido van Rossum.

Version History:

                Version 1.1 corrects two bugs in version 1.0:

                First, it leaked memory: in Proc1(), NextRecord ends
                up having a pointer to itself.  I have corrected this
                by zapping NextRecord.PtrComp at the end of Proc1().

                Second, Proc3() used the operator != to compare a
                record to None.  This is rather inefficient and not
                true to the intention of the original benchmark (where
                a pointer comparison to None is intended; the !=
                operator attempts to find a method __cmp__ to do value
                comparison of the record).  Version 1.1 runs 5-10
                percent faster than version 1.0, so benchmark figures
                of different versions can't be compared directly.

"""

from __future__ import print_function

# time.clock() was removed in Python 3.8; perf_counter() is the closest
# replacement for this benchmark's timing.
from time import perf_counter as clock

LOOPS = 50000

__version__ = "1.1"

[Ident1, Ident2, Ident3, Ident4, Ident5] = range(1, 6)

class Record(object):

    def __init__(self, PtrComp = None, Discr = 0, EnumComp = 0,
                       IntComp = 0, StringComp = 0):
        self.PtrComp = PtrComp
        self.Discr = Discr
        self.EnumComp = EnumComp
        self.IntComp = IntComp
        self.StringComp = StringComp

    def copy(self):
        return Record(self.PtrComp, self.Discr, self.EnumComp,
                      self.IntComp, self.StringComp)

TRUE = 1
FALSE = 0

def main(loops=LOOPS):
    benchtime, stones = pystones(loops)
    print("Pystone(%s) time for %d passes = %g" % \
          (__version__, loops, benchtime))
    print("This machine benchmarks at %g pystones/second" % stones)


def pystones(loops=LOOPS):
    return Proc0(loops)

IntGlob = 0
BoolGlob = FALSE
Char1Glob = '\0'
Char2Glob = '\0'
Array1Glob = [0]*51
Array2Glob = [x[:] for x in [Array1Glob]*51]
PtrGlb = None
PtrGlbNext = None

def Proc0(loops=LOOPS):
    global IntGlob
    global BoolGlob
    global Char1Glob
    global Char2Glob
    global Array1Glob
    global Array2Glob
    global PtrGlb
    global PtrGlbNext

    starttime = clock()
    for i in range(loops):
        pass
    nulltime = clock() - starttime

    PtrGlbNext = Record()
    PtrGlb = Record()
    PtrGlb.PtrComp = PtrGlbNext
    PtrGlb.Discr = Ident1
    PtrGlb.EnumComp = Ident3
    PtrGlb.IntComp = 40
    PtrGlb.StringComp = "DHRYSTONE PROGRAM, SOME STRING"
    String1Loc = "DHRYSTONE PROGRAM, 1'ST STRING"
    Array2Glob[8][7] = 10

    starttime = clock()

    for i in range(loops):
        Proc5()
        Proc4()
        IntLoc1 = 2
        IntLoc2 = 3
        String2Loc = "DHRYSTONE PROGRAM, 2'ND STRING"
        EnumLoc = Ident2
        BoolGlob = not Func2(String1Loc, String2Loc)
        while IntLoc1 < IntLoc2:
            IntLoc3 = 5 * IntLoc1 - IntLoc2
            IntLoc3 = Proc7(IntLoc1, IntLoc2)
            IntLoc1 = IntLoc1 + 1
        Proc8(Array1Glob, Array2Glob, IntLoc1, IntLoc3)
        PtrGlb = Proc1(PtrGlb)
        CharIndex = 'A'
        while CharIndex <= Char2Glob:
            if EnumLoc == Func1(CharIndex, 'C'):
                EnumLoc = Proc6(Ident1)
            CharIndex = chr(ord(CharIndex)+1)
        IntLoc3 = IntLoc2 * IntLoc1
        IntLoc2 = IntLoc3 / IntLoc1
        IntLoc2 = 7 * (IntLoc3 - IntLoc2) - IntLoc1
        IntLoc1 = Proc2(IntLoc1)

    benchtime = clock() - starttime - nulltime
    if benchtime == 0.0:
        loopsPerBenchtime = 0.0
    else:
        loopsPerBenchtime = (loops / benchtime)
    return benchtime, loopsPerBenchtime

def Proc1(PtrParIn):
    PtrParIn.PtrComp = NextRecord = PtrGlb.copy()
    PtrParIn.IntComp = 5
    NextRecord.IntComp = PtrParIn.IntComp
    NextRecord.PtrComp = PtrParIn.PtrComp
    NextRecord.PtrComp = Proc3(NextRecord.PtrComp)
    if NextRecord.Discr == Ident1:
        NextRecord.IntComp = 6
        NextRecord.EnumComp = Proc6(PtrParIn.EnumComp)
        NextRecord.PtrComp = PtrGlb.PtrComp
        NextRecord.IntComp = Proc7(NextRecord.IntComp, 10)
    else:
        PtrParIn = NextRecord.copy()
    NextRecord.PtrComp = None
    return PtrParIn

def Proc2(IntParIO):
    IntLoc = IntParIO + 10
    while 1:
        if Char1Glob == 'A':
            IntLoc = IntLoc - 1
            IntParIO = IntLoc - IntGlob
            EnumLoc = Ident1
        if EnumLoc == Ident1:
            break
    return IntParIO

def Proc3(PtrParOut):
    global IntGlob

    if PtrGlb is not None:
        PtrParOut = PtrGlb.PtrComp
    else:
        IntGlob = 100
    PtrGlb.IntComp = Proc7(10, IntGlob)
    return PtrParOut

def Proc4():
    global Char2Glob

    BoolLoc = Char1Glob == 'A'
    BoolLoc = BoolLoc or BoolGlob
    Char2Glob = 'B'

def Proc5():
    global Char1Glob
    global BoolGlob

    Char1Glob = 'A'
    BoolGlob = FALSE

def Proc6(EnumParIn):
    EnumParOut = EnumParIn
    if not Func3(EnumParIn):
        EnumParOut = Ident4
    if EnumParIn == Ident1:
        EnumParOut = Ident1
    elif EnumParIn == Ident2:
        if IntGlob > 100:
            EnumParOut = Ident1
        else:
            EnumParOut = Ident4
    elif EnumParIn == Ident3:
        EnumParOut = Ident2
    elif EnumParIn == Ident4:
        pass
    elif EnumParIn == Ident5:
        EnumParOut = Ident3
    return EnumParOut

def Proc7(IntParI1, IntParI2):
    IntLoc = IntParI1 + 2
    IntParOut = IntParI2 + IntLoc
    return IntParOut

def Proc8(Array1Par, Array2Par, IntParI1, IntParI2):
    global IntGlob

    IntLoc = IntParI1 + 5
    Array1Par[IntLoc] = IntParI2
    Array1Par[IntLoc+1] = Array1Par[IntLoc]
    Array1Par[IntLoc+30] = IntLoc
    for IntIndex in range(IntLoc, IntLoc+2):
        Array2Par[IntLoc][IntIndex] = IntLoc
    Array2Par[IntLoc][IntLoc-1] = Array2Par[IntLoc][IntLoc-1] + 1
    Array2Par[IntLoc+20][IntLoc] = Array1Par[IntLoc]
    IntGlob = 5

def Func1(CharPar1, CharPar2):
    CharLoc1 = CharPar1
    CharLoc2 = CharLoc1
    if CharLoc2 != CharPar2:
        return Ident1
    else:
        return Ident2

def Func2(StrParI1, StrParI2):
    IntLoc = 1
    while IntLoc <= 1:
        if Func1(StrParI1[IntLoc], StrParI2[IntLoc+1]) == Ident1:
            CharLoc = 'A'
            IntLoc = IntLoc + 1
    if CharLoc >= 'W' and CharLoc <= 'Z':
        IntLoc = 7
    if CharLoc == 'X':
        return TRUE
    else:
        if StrParI1 > StrParI2:
            IntLoc = IntLoc + 7
            return TRUE
        else:
            return FALSE

def Func3(EnumParIn):
    EnumLoc = EnumParIn
    if EnumLoc == Ident3: return TRUE
    return FALSE

if __name__ == '__main__':
    import sys
    def error(msg):
        print(msg, end=' ', file=sys.stderr)
        print("usage: %s [number_of_loops]" % sys.argv[0], file=sys.stderr)
        sys.exit(100)
    nargs = len(sys.argv) - 1
    if nargs > 1:
        error("%d arguments are too many;" % nargs)
    elif nargs == 1:
        try: loops = int(sys.argv[1])
        except ValueError:
            error("Invalid argument %r;" % sys.argv[1])
    else:
        loops = LOOPS
    main(loops)
r-rrr�Proc3r	r.r<r
r6)ZPtrParInZ
NextRecordrrrr9�s
r9cCs4|d}tdkr$|d}|t}t}|tkrq0q|S)Nr#r)r)�	Char1Glob�IntGlobr.)ZIntParIO�IntLocrArrrr?�sr?cCs$tdurtj}ndatdt�t_|S)N�dr#)r-rrDr6r)Z	PtrParOutrrrrB�s
rBcCstdk}|pt}dadS)Nr)�B)rCr5r:)ZBoolLocrrrr2�sr2cCsdatadS)Nr))rC�FALSEr5rrrrr1�sr1cCsb|}t|�st}|tkrt}n@|tkr:tdkr4t}q^t}n$|tkrHt}n|tkrRn|tkr^t}|S)NrF)�Func3�Ident4r.r3rDr/�Ident5)�	EnumParInZ
EnumParOutrrrr<�s r<cCs|d}||}|S)Nr&r)�IntParI1�IntParI2rEZ	IntParOutrrrr6�sr6cCs�|d}|||<||||d<|||d<t||d�D]}||||<q:|||dd|||d<||||d|<dadS)Nr(r�r&�)r+rD)Z	Array1ParZ	Array2ParrMrNrEZIntIndexrrrr7�s r7cCs|}|}||krtStSdSr)r.r3)ZCharPar1ZCharPar2ZCharLoc1ZCharLoc2rrrr;�s
r;cCspd}|dkr4t||||d�tkrd}|d}q|dkrH|dkrHd}|dkrTtS||krh|d}tStSdS)Nrr)�W�Zr%�X)r;r.�TRUErH)ZStrParI1ZStrParI2rEZCharLocrrrr4�s
r4cCs|}|tkrtStSr)r/rTrH)rLrArrrrI�srI�__main__cCs6t|dtjd�tdtjdtjd�t�d�dS)N� )�end�filezusage: %s [number_of_loops]r)rXrF)r�sys�stderr�argv�exit)�msgrrr�errorsr^z%d arguments are too many;zInvalid argument %r;)0�__doc__�
__future__r�timerZLOOPSrr+r.r3r/rJrK�objectrrTrHrrrDr5rCr:r8r0r-r,rr9r?rBr2r1r<r6r7r;r4rIrrYr^�lenr[�nargs�intr�
ValueErrorrrrr�<module>sT 
:




PKADu\�������8future/backports/test/__pycache__/support.cpython-39.pycnu�[���a

��?h��@s�dZddlmZmZmZmZddlmZddlm	Z	m
Z
mZmZm
Z
mZddlZddlZddlZddlZddlZddlZddlZddlZddlZddlZddlZeed�s�ddlZddlZddlZddlZddlZzddl Z Wne!y�ddl"m Z Yn0ddl#Z#ddl$Z%ddl&Z&ddl'Z'z.ej(�r:ddl)Z)ddl*Z*nddl+Z)ddl*Z*Wne!�yhdZ)dZ*Yn0zddl,Z-Wne!�y�dZ-Yn0zddl.Z.Wne!�y�dZ.Yn0zddl/Z/Wne!�y�dZ/Yn0zddl0Z0Wne!�ydZ0Yn0zddl1Z1Wne!�y&dZ1Yn0gd�Z2Gd	d
�d
e3�Z4Gdd�de4�Z5Gd
d�dej6�Z7ej8d�dd��Z9d�dd�Z:dd�Z;dd�Z<dd�Z=d�dd�Z>dd�Z?d Z@dZAdaBdaCdZDdZEdaFd!d"�ZGd#d$�ZHd%d&�ZIej�Jd'��rd�d(d)�ZKd*d+�ZLd,d-�ZMd.d/�ZNnejOZLejPZMejQZNd0d1�ZOd2d3�ZPd4d5�ZQej�Jd'��r^ddlRZRddlSZRd6d7�ZTnd8d7�ZTd9d:�ZUd�d;d<�ZVd=d>�ZWd?d@�ZXdAdB�ZYdCdD�ZZdEZ[dFZ\ej]ej^fdGdH�Z_e[fdIdJ�Z`dKdL�Zaea�ZbdMZcdNZde�ee.dO�Zfe�ee0dP�Zge�ee1dQ�Zhej�JdR�ZiejjdRk�rdSZkndTZkdU�leke�m��Zke�n�Zoej8d�dWdX��ZpeedY��r\ej8dZd[��Zqerdfd\d]�Zsd^d_�Ztd`da�Zudbdc�Zvddde�Zwdfdg�ZxGdhdi�diey�Zzd�djdk�Z{ej8dldm��Z|Gdndo�doey�Z}ej(�r�ddl~Zej�j�Z�nddl�Z�e�j�Z�Gdpdq�dqe��Z�Gdrds�dsey�Z�Gdtdu�duey�Z�e�e�ej�dv�Z�e�ej�ej�dv�Z�e�e�ej�dv�Z�ej8d�dxdy��Z�ej8dzd{��Z�d|d}�Z�d~d�Z�d�d��Z�d�d��Z�ej8d�d���Z�d�d��Z�d�Z�d�Z�eed���r�d�e�Z�d�Z�e�d�Z�d�d��Z�d�d��Z�d�Z�d�Z�d�d��Z�d�d��Z�d�d��Z�d�Z�d�e�Z�d�e�Z�d�e�Z�ej�Z�d�d��Z�Gd�d��d�ey�Z�d�d�d��Z�d�d��Z�Gd�d��d�ey�Z�d�d��Z�d�d��Z�d�d��Z�d�d�d��Z�d�d��Z�d�d��Z�d�d��Z�d�d��Z�d�d��Z�d�d��Z�d�d��Z�d�d�d��Z�d�d��Z�d�dÄZ�d�dńZ�d�dDŽZ�d�dɄZ�d�d˄Z�ej8d�d̈́�Z�ej8d�dτ�Z�d�dфZ�d�dӄZ�Gd�dՄd�e%j�jÃZ�Gd�dׄd�ey�Z�da�d�dلZ�d�dۄZ�da�d�d݄Z�d�d߄Z�ej�Jd'��r�ej8d�d��Z�nej8d�d��Z�d�d�Z�dS)�zwSupporting definitions for the Python regression tests.

Backported for python-future from Python 3.3 test/support.py.
�)�absolute_import�division�print_function�unicode_literals)�utils)�str�range�open�int�map�listN�skip)�	sysconfig)J�Error�
TestFailed�ResourceDenied�
import_module�verbose�
use_resources�
max_memuse�record_original_stdout�get_original_stdout�unload�unlink�rmtreeZforget�is_resource_enabled�requires�requires_freebsd_version�requires_linux_version�requires_mac_ver�find_unused_port�	bind_port�IPV6_ENABLED�	is_jython�TESTFN�HOST�SAVEDCWD�temp_cwd�findfile�create_empty_file�sortdict�check_syntax_error�open_urlresource�check_warnings�CleanImport�EnvironmentVarGuard�TransientResource�captured_stdout�captured_stdin�captured_stderr�time_out�socket_peer_reset�ioerror_peer_reset�run_with_locale�
temp_umask�transient_internet�set_memlimit�
bigmemtest�bigaddrspacetest�BasicTestRunner�run_unittest�run_doctest�threading_setup�threading_cleanup�
reap_children�cpython_only�check_impl_detail�
get_attribute�	swap_item�	swap_attrZrequires_IEEE_754�TestHandler�Matcher�can_symlink�skip_unless_symlink�skip_unless_xattr�import_fresh_module�
requires_zlib�
PIPE_MAX_SIZE�failfast�anticipate_failure�run_with_tzZ
requires_gzip�requires_bz2�
requires_lzma�suppress_crash_popupc@seZdZdZdS)rz*Base class for regression test exceptions.N��__name__�
__module__�__qualname__�__doc__�r[r[�G/usr/local/lib/python3.9/site-packages/future/backports/test/support.pyrasrc@seZdZdZdS)rzTest failed.NrVr[r[r[r\rdsrc@seZdZdZdS)rz�Test skipped because it requested a disallowed resource.

    This is raised when a test calls requires() for a resource that
    has not be enabled.  It is used to distinguish between expected
    and unexpected skips.
    NrVr[r[r[r\rgsrTccsL|rBt���$t�ddt�dVWd�qH1s60YndVdS)z�Context manager to suppress package and module deprecation
    warnings when importing them.

    If ignore is False, this context manager has no effect.�ignorez.+ (module|package)N)�warnings�catch_warnings�filterwarnings�DeprecationWarning)r]r[r[r\�_ignore_deprecated_importsos
�&rbFcCszt|��^zt�|�WWd�StyV}zt�t|���WYd}~n
d}~00Wd�n1sl0YdS)z�Import and return the module to be tested, raising SkipTest if
    it is not available.

    If deprecated is True, any module or package deprecation messages
    will be suppressed.N)rb�	importlibr�ImportError�unittest�SkipTestr)�name�
deprecated�msgr[r[r\r~s

rcCsZ|tjvrt|�tj|=ttj�D]0}||ks>|�|d�r$tj|||<tj|=q$dS)zyHelper function to save and remove a module from sys.modules

    Raise ImportError if the module can't be imported.
    �.N)�sys�modules�
__import__r�
startswith)rg�orig_modules�modnamer[r[r\�_save_and_remove_module�s
rqcCs<d}ztj|||<Wnty,d}Yn0dtj|<|S)z�Helper function to save and block a module in sys.modules

    Return True if the module was in sys.modules, False otherwise.
    TFN�rkrl�KeyError)rgroZsavedr[r[r\�_save_and_block_module�s

rtcCs|r
tjSdd�S)z�Decorator to mark a test that is known to be broken in some cases

       Any use of this decorator should have a comment identifying the
       associated tracker issue.
    cSs|S�Nr[��fr[r[r\�<lambda>��z$anticipate_failure.<locals>.<lambda>)reZexpectedFailure)�	conditionr[r[r\rQ�srQr[cCs�t|���i}g}t||�z�z@|D]}t||�q$|D]}t||�s8|�|�q8t�|�}Wntytd}Yn0W|��D]\}	}
|
tj	|	<q�|D]}tj	|=q�n0|��D]\}	}
|
tj	|	<q�|D]}tj	|=q�0|Wd�S1s�0YdS)aVImport and return a module, deliberately bypassing sys.modules.
    This function imports and returns a fresh copy of the named Python module
    by removing the named module from sys.modules before doing the import.
    Note that unlike reload, the original module is not affected by
    this operation.

    *fresh* is an iterable of additional module names that are also removed
    from the sys.modules cache before doing the import.

    *blocked* is an iterable of module names that are replaced with None
    in the module cache during the import to ensure that attempts to import
    them raise ImportError.

    The named module and any modules named in the *fresh* and *blocked*
    parameters are saved before starting the import and then reinserted into
    sys.modules when the fresh import is complete.

    Module and package deprecation messages are suppressed during this import
    if *deprecated* is True.

    This function will raise ImportError if the named module cannot be
    imported.

    If deprecated is True, any module or package deprecation messages
    will be suppressed.
    N)
rbrqrt�appendrcrrd�itemsrkrl)rgZfreshZblockedrhroZnames_to_removeZ
fresh_nameZblocked_nameZfresh_moduleZ	orig_name�moduleZname_to_remover[r[r\rM�s,


�rMcCs<zt||�}Wn$ty2t�d||f��Yn0|SdS)z?Get an attribute, raising SkipTest if AttributeError is raised.zobject %r has no attribute %rN)�getattr�AttributeErrorrerf)�objrg�	attributer[r[r\rE�s
rE�cCs|adSru)�_original_stdout)�stdoutr[r[r\r�srcCs
tptjSru)r�rkr�r[r[r[r\r�srcCs$ztj|=WntyYn0dSrurr)rgr[r[r\rsr�wincCs�||�|r|}ntj�|�\}}|p(d}d}|dkrjt�|�}|rJ|sVn||vsVdSt�|�|d9}q.tjd|tdd�dS)Nrjg����MbP?g�?�z)tests may fail, delete still pending for ���
stacklevel)	�os�path�split�listdir�time�sleepr^�warn�RuntimeWarning)�func�pathname�waitall�dirnamerg�timeout�Lr[r[r\�_waitfor	s



�r�cCsttj|�dSru)r�r�r)�filenamer[r[r\�_unlink*sr�cCsttj|�dSru�r�r��rmdir)r�r[r[r\�_rmdir-sr�cs*�fdd��t�|dd�ttj|�dS)NcsRt�|�D]B}tj�||�}tj�|�rBt�|dd�t�|�q
t�|�q
dS)NT�r�)r�r�r��join�isdirr�r�r)r�rg�fullname��
_rmtree_innerr[r\r�1sz_rmtree.<locals>._rmtree_innerTr�r�)r�r[r�r\�_rmtree0sr�c
CsJzt|�Wn8tyD}z |jtjtjfvr0�WYd}~n
d}~00dSru)r��OSError�errno�ENOENT�ENOTDIR)r��errorr[r[r\r@s
rc
CsDzt|�Wn2ty>}z|jtjkr*�WYd}~n
d}~00dSru)r�r�r�r�)r�r�r[r[r\r�Hs
r�c
CsDzt|�Wn2ty>}z|jtjkr*�WYd}~n
d}~00dSru)r�r�r�r�)r�r�r[r[r\rPs
rc	Cs�d}d}Gdd�dtj�}tjj}|��}|s6t���|�}tj��}|�||t�	|�t�
|�t�	|��}|svt���t|j|@�S)Nr�c@s.eZdZdejjfdejjfdejjfgZdS)z*_is_gui_available.<locals>.USEROBJECTFLAGSZfInheritZ	fReserved�dwFlagsN)rWrXrY�ctypes�wintypesZBOOL�DWORD�_fields_r[r[r[r\�USEROBJECTFLAGS`s


�r�)
r��	Structure�windllZuser32ZGetProcessWindowStationZWinErrorr�r�ZGetUserObjectInformationW�byref�sizeof�boolr�)Z	UOI_FLAGSZWSF_VISIBLEr��dll�hZuof�needed�resr[r[r\�_is_gui_available]s$
�r�cCsdS)NTr[r[r[r[r\r�sscCstduo|tvS)zPTest whether a resource is enabled.  Known resources are set by
    regrtest.py.N)r��resourcer[r[r\rvsrcCsV|dkrt�st�d��t�d�j�d�dkr2dSt|�sR|durJd|}t|��dS)z�Raise ResourceDenied if the specified resource is not available.

    If the caller's module is __main__ then automatically return True.  The
    possibility of False being returned occurs when regrtest.py is
    executing.
    �guizCannot use the 'gui' resourcer�rW�__main__Nz"Use of the %r resource not enabled)	r�rerfrk�	_getframe�	f_globals�getrr)r�rir[r[r\r{s
rcs��fdd�}|S)z�Decorator raising SkipTest if the OS is `sysname` and the version is less
    than `min_version`.

    For example, @_requires_unix_version('FreeBSD', (7, 2)) raises SkipTest if
    the FreeBSD version is less than 7.2.
    cs$t������fdd��}�|_|S)Ncs�t���krxt���dd�d}zttt|�d���}WntyJYn.0|�krxd�tt	���}t
�d�||f���|i|��S)N�-r�rrjz(%s version %s or higher required, not %s)�platform�system�releaser��tuplerr
�
ValueErrorr�rrerf��args�kwZversion_txt�versionZmin_version_txt)r��min_version�sysnamer[r\�wrapper�s��z:_requires_unix_version.<locals>.decorator.<locals>.wrapper��	functools�wrapsr��r�r��r�r��r�r\�	decorator�sz)_requires_unix_version.<locals>.decoratorr[)r�r�r�r[r�r\�_requires_unix_version�sr�cGs
td|�S)z�Decorator raising SkipTest if the OS is FreeBSD and the FreeBSD version is
    less than `min_version`.

    For example, @requires_freebsd_version(7, 2) raises SkipTest if the FreeBSD
    version is less than 7.2.
    ZFreeBSD�r��r�r[r[r\r�srcGs
td|�S)z�Decorator raising SkipTest if the OS is Linux and the Linux version is
    less than `min_version`.

    For example, @requires_linux_version(2, 6, 32) raises SkipTest if the Linux
    version is less than 2.6.32.
    �Linuxr�r�r[r[r\r�srcs�fdd�}|S)z�Decorator raising SkipTest if the OS is Mac OS X and the OS X
    version if less than min_version.

    For example, @requires_mac_ver(10, 5) raises SkipTest if the OS X version
    is lesser than 10.5.
    cs"t�����fdd��}�|_|S)Ncsztjdkrlt��d}zttt|�d���}Wnty@Yn,0|�krld�tt	���}t
�d||f���|i|��S)N�darwinrrjz&Mac OS X %s or higher required, not %s)rkr��mac_verr�rr
r�r�r�rrerfr�)r�r�r[r\r��s
��z4requires_mac_ver.<locals>.decorator.<locals>.wrapperr�r�r�r�r\r��sz#requires_mac_ver.<locals>.decoratorr[)r�r�r[r�r\r�srz	127.0.0.1�::1cCs"t�||�}t|�}|��~|S)a�
Returns an unused port that should be suitable for binding.  This is
    achieved by creating a temporary socket with the same family and type as
    the 'sock' parameter (default is AF_INET, SOCK_STREAM), and binding it to
    the specified host address (defaults to 0.0.0.0) with the port set to 0,
    eliciting an unused ephemeral port from the OS.  The temporary socket is
    then closed and deleted, and the ephemeral port is returned.

    Either this method or bind_port() should be used for any tests where a
    server socket needs to be bound to a particular port for the duration of
    the test.  Which one to use depends on whether the calling code is creating
    a python socket, or if an unused port needs to be provided in a constructor
    or passed to an external program (i.e. the -accept argument to openssl's
    s_server mode).  Always prefer bind_port() over find_unused_port() where
    possible.  Hard coded ports should *NEVER* be used.  As soon as a server
    socket is bound to a hard coded port, the ability to run multiple instances
    of the test simultaneously on the same host is compromised, which makes the
    test a ticking time bomb in a buildbot environment. On Unix buildbots, this
    may simply manifest as a failed test, which can be recovered from without
    intervention in most cases, but on Windows, the entire python process can
    completely and utterly wedge, requiring someone to log in to the buildbot
    and manually kill the affected process.

    (This is easy to reproduce on Windows, unfortunately, and can be traced to
    the SO_REUSEADDR socket option having different semantics on Windows versus
    Unix/Linux.  On Unix, you can't have two AF_INET SOCK_STREAM sockets bind,
    listen and then accept connections on identical host/ports.  An EADDRINUSE
    socket.error will be raised at some point (depending on the platform and
    the order bind and listen were called on each socket).

    However, on Windows, if SO_REUSEADDR is set on the sockets, no EADDRINUSE
    will ever be raised when attempting to bind two identical host/ports. When
    accept() is called on each socket, the second caller's process will steal
    the port from the first caller, leaving them both in an awkwardly wedged
    state where they'll no longer respond to any signals or graceful kills, and
    must be forcibly killed via OpenProcess()/TerminateProcess().

    The solution on Windows is to use the SO_EXCLUSIVEADDRUSE socket option
    instead of SO_REUSEADDR, which effectively affords the same semantics as
    SO_REUSEADDR on Unix.  Given the propensity of Unix developers in the Open
    Source world compared to Windows ones, this is a common mistake.  A quick
    look over OpenSSL's 0.9.8g source shows that they use SO_REUSEADDR when
    openssl.exe is called with the 's_server' option, for example. See
    http://bugs.python.org/issue2550 for more info.  The following site also
    has a very thorough description about the implications of both REUSEADDR
    and EXCLUSIVEADDRUSE on Windows:
    http://msdn2.microsoft.com/en-us/library/ms740621(VS.85).aspx)

    XXX: although this approach is a vast improvement on previous attempts to
    elicit unused ports, it rests heavily on the assumption that the ephemeral
    port returned to us by the OS won't immediately be dished back out to some
    other process when we close and delete our temporary socket but before our
    calling code has a chance to bind the returned port.  We can deal with this
    issue if/when we come across it.
    )�socketr!�close)�family�socktypeZtempsock�portr[r[r\r �s
8r cCs�|jtjkr�|jtjkr�ttd�r>|�tjtj�dkr>t	d��ttd�r~z |�tjtj
�dkrft	d��Wntjy|Yn0ttd�r�|�tjtj
d�|�|df�|��d}|S)a%Bind the socket to a free port and return the port number.  Relies on
    ephemeral ports in order to ensure we are using an unbound port.  This is
    important as many tests may be running simultaneously, especially in a
    buildbot environment.  This method raises an exception if the sock.family
    is AF_INET and sock.type is SOCK_STREAM, *and* the socket has SO_REUSEADDR
    or SO_REUSEPORT set on it.  Tests should *never* set these socket options
    for TCP/IP sockets.  The only case for setting these options is testing
    multicasting via multiple UDP sockets.

    Additionally, if the SO_EXCLUSIVEADDRUSE socket option is available (i.e.
    on Windows), it will be set on the socket.  This will prevent anyone else
    from bind()'ing to our host/port for the duration of the test.
    �SO_REUSEADDRr�zHtests should never set the SO_REUSEADDR socket option on TCP/IP sockets!�SO_REUSEPORTzHtests should never set the SO_REUSEPORT socket option on TCP/IP sockets!�SO_EXCLUSIVEADDRUSEr)r�r��AF_INET�type�SOCK_STREAM�hasattr�
getsockopt�
SOL_SOCKETr�rr�r��
setsockoptr��bind�getsockname)�sock�hostr�r[r[r\r!s


r!c	Csxtjrtd}zZz.t�tjtj�}|�d�WW|r8|��dStjtjfyTYn0W|rt|��n|rr|��0dS)z+Check whether IPv6 is enabled on this host.N)r�rTF)r��has_ipv6�AF_INET6r�r�r�r��gaierror)r�r[r[r\�_is_ipv6_enabled>s 
�
�
r�i@iz
requires zlibzrequires bz2z
requires lzma�javaz$testz@testz{0}_{1}_tmp�tempcwdc	cs�t��}d}|durX|}zt�|�d}Wn,tyV|s>�tjd|tdd�Yn0zt�|�Wn,ty�|sz�tjd|tdd�Yn0z$t��VWt�|�|r�t|�nt�|�|r�t|�0dS)a�
    Context manager that temporarily changes the CWD.

    An existing path may be provided as *path*, in which case this
    function makes no changes to the file system.

    Otherwise, the new CWD is created in the current directory and it's
    named *name*. If *quiet* is False (default) and it's not possible to
    create or change the CWD, an error is raised.  If it's True, only a
    warning is raised and the original CWD is used.
    FNTz*tests may fail, unable to create temp CWD �r�z,tests may fail, unable to change the CWD to )	r��getcwd�mkdirr�r^r�r��chdirr)rg�quietr�Z	saved_dirZis_temporaryr[r[r\r'�s:


�
�

�
r'�umaskc	cs0t�|�}zdVWt�|�nt�|�0dS)z8Context manager that temporarily sets the process umask.N)r�r�)r�Zoldmaskr[r[r\r8s
r8cCsntj�|�r|S|dur&tj�||�}tj}tj�|�g|}|D]&}tj�||�}tj�|�rB|SqB|S)z�Try to find a file on sys.path and the working directory.  If it is not
    found the argument passed to the function is returned (this does not
    necessarily signal failure; could still be the legitimate path).N)r�r��isabsr�rkr��exists)�file�here�subdirr��dn�fnr[r[r\r(&sr(cCs(t�|tjtjBtjB�}t�|�dS)z>Create an empty file. If the file already exists, truncate it.N)r�r	�O_WRONLY�O_CREAT�O_TRUNCr�)r��fdr[r[r\r)5sr)cCs,t|���}dd�|D�}d�|�}d|S)z%Like repr(dict), but in sorted order.cSsg|]}d|�qS)z%r: %rr[)�.0�pairr[r[r\�
<listcomp>=ryzsortdict.<locals>.<listcomp>z, z{%s})�sortedr|r�)�dictr|Z	reprpairsZ
withcommasr[r[r\r*:s
r*cCs<ttd�}z|��W|��tt�S|��tt�0dS)z`
    Create an invalid file descriptor by opening and closing a file and return
    its fd.
    �wbN)r	r$�filenor�r�r�r[r[r\�make_bad_fdAs
��rcCs|�tt|dd�dS)Nz
<test string>�exec)�assertRaises�SyntaxError�compile)ZtestcaseZ	statementr[r[r\r+Ms
�r+cs4ddlm}m}��dd��|�|�d�d�d}tj�tj�	t
�d|�}���fdd	�}tj�|�r�||�}|dur||St|�t
d
�td|t�d�|j|d
d�}zVt|d��0}	|��}
|
r�|	�|
�|��}
q�Wd�n1s�0YW|��n
|��0||�}|du�r$|Std|��dS)Nr)�request�parse�checkr��/����datacsDt|g��Ri���}�dur"|S�|�r8|�d�|S|��dS�Nr)r	�seekr�)rrw�r�rr�r[r\�check_valid_file[s
z*open_urlresource.<locals>.check_valid_fileZurlfetchz	fetching %s ...r
�)r�rzinvalid resource %r)Zfuture.backports.urllibrr�pop�urlparser�r�r�r�r��__file__r�rr�printr�urlopenr	�read�writer�r)�urlr�r�Zurllib_request�urllib_parser�rrrw�out�sr[rr\r,Qs0	
*
r,c@s4eZdZdZdd�Zdd�Zedd��Zdd	�Zd
S)�WarningsRecorderzyConvenience wrapper for the warnings list returned on
       entry to the warnings.catch_warnings() context manager.
    cCs||_d|_dSr��	_warnings�_last)�selfZ
warnings_listr[r[r\�__init__�szWarningsRecorder.__init__cCsDt|j�|jkr t|jd|�S|tjjvr0dStd||f��dS)Nrz%r has no attribute %r)�lenr+r,r~r^�WarningMessage�_WARNING_DETAILSr)r-�attrr[r[r\�__getattr__�s
zWarningsRecorder.__getattr__cCs|j|jd�Srur*�r-r[r[r\r^�szWarningsRecorder.warningscCst|j�|_dSru)r/r+r,r4r[r[r\�reset�szWarningsRecorder.resetN)	rWrXrYrZr.r3�propertyr^r5r[r[r[r\r)~s
r)ccs6t�d�}|j�d�}|rDtjr*|��ntt|��D]}|�	�q6t
jdd��*}tjd�
d�t|�VWd�n1s�0Yt|�}g}|D]j\}}	d}
|dd�D]8}|j}t�|t|�tj�r�t|j|	�r�d}
|�|�q�|
s�|s�|�||	jf�q�|�rtd	|d
��|�r2td|d
��dS)z�Catch the warnings, then check if all the expected
    warnings have been raised and re-raise unexpected warnings.
    If 'quiet' is True, only re-raise the unexpected warnings.
    r�Z__warningregistry__T)�recordr^�alwaysNFzunhandled warning %srz)filter (%r, %s) did not catch any warning)rkr�r�r�r�PY3�clearrr/rr^r_rl�simplefilterr)r�message�re�matchr�I�
issubclass�	__class__�remover{rW�AssertionError)�filtersr��frame�registry�i�w�reraise�missingri�cat�seen�warningr[r[r\�_filterwarnings�s:


(
��rNcOs.|�d�}|s$dtff}|dur$d}t||�S)a�Context manager to silence warnings.

    Accept 2-tuples as positional arguments:
        ("message regexp", WarningCategory)

    Optional argument:
     - if 'quiet' is True, it does not fail if a filter catches nothing
        (default True without argument,
         default False if some filters are defined)

    Without argument, it defaults to:
        check_warnings(("", Warning), quiet=True)
    r��NT)r��WarningrN)rD�kwargsr�r[r[r\r-�s

r-c@s(eZdZdZdd�Zdd�Zdd�ZdS)	r.a,Context manager to force import to return a new module reference.

    This is useful for testing module-level behaviours, such as
    the emission of a DeprecationWarning on import.

    Use like this:

        with CleanImport("foo"):
            importlib.import_module("foo") # new reference
    cGsJtj��|_|D]4}|tjvrtj|}|j|kr<tj|j=tj|=qdSru)rkrl�copy�original_modulesrW)r-Zmodule_names�module_namer}r[r[r\r.�s



zCleanImport.__init__cCs|Srur[r4r[r[r\�	__enter__�szCleanImport.__enter__cGstj�|j�dSru)rkrl�updaterS�r-�
ignore_excr[r[r\�__exit__�szCleanImport.__exit__N�rWrXrYrZr.rUrYr[r[r[r\r.�s
r.c@sheZdZdZdd�Zdd�Zdd�Zdd	�Zd
d�Zdd
�Z	dd�Z
dd�Zdd�Zdd�Z
dd�ZdS)r/z_Class to help protect the environment variable properly.  Can be used as
    a context manager.cCstj|_i|_dSru)r��environ�_environ�_changedr4r[r[r\r.szEnvironmentVarGuard.__init__cCs
|j|Sru)r\�r-�envvarr[r[r\�__getitem__
szEnvironmentVarGuard.__getitem__cCs*||jvr|j�|�|j|<||j|<dSru�r]r\r��r-r_�valuer[r[r\�__setitem__
s
zEnvironmentVarGuard.__setitem__cCs2||jvr|j�|�|j|<||jvr.|j|=dSrurar^r[r[r\�__delitem__s

zEnvironmentVarGuard.__delitem__cCs
|j��Sru)r\�keysr4r[r[r\rfszEnvironmentVarGuard.keyscCs
t|j�Sru)�iterr\r4r[r[r\�__iter__szEnvironmentVarGuard.__iter__cCs
t|j�Sru)r/r\r4r[r[r\�__len__ szEnvironmentVarGuard.__len__cCs|||<dSrur[rbr[r[r\�set#szEnvironmentVarGuard.setcCs
||=dSrur[r^r[r[r\�unset&szEnvironmentVarGuard.unsetcCs|Srur[r4r[r[r\rU)szEnvironmentVarGuard.__enter__cGsF|j��D].\}}|dur.||jvr8|j|=q
||j|<q
|jt_dSru)r]r|r\r�r[)r-rX�k�vr[r[r\rY,s

zEnvironmentVarGuard.__exit__N)rWrXrYrZr.r`rdrerfrhrirjrkrUrYr[r[r[r\r/sr/c@s(eZdZdZdd�Zdd�Zdd�ZdS)	�
DirsOnSysPatha�Context manager to temporarily add directories to sys.path.

    This makes a copy of sys.path, appends any directories given
    as positional arguments, then reverts sys.path to the copied
    settings when the context ends.

    Note that *all* sys.path modifications in the body of the
    context manager, including replacement of the object,
    will be reverted at the end of the block.
    cGs(tjdd�|_tj|_tj�|�dSru)rkr��original_value�original_object�extend)r-�pathsr[r[r\r.BszDirsOnSysPath.__init__cCs|Srur[r4r[r[r\rUGszDirsOnSysPath.__enter__cGs|jt_|jtjdd�<dSru)rprkr�rorWr[r[r\rYJszDirsOnSysPath.__exit__NrZr[r[r[r\rn6srnc@s*eZdZdZdd�Zdd�Zd	dd�ZdS)
r0z�Raise ResourceDenied if an exception is raised while the context manager
    is in effect that matches the specified exception and attributes.cKs||_||_dSru)�exc�attrs)r-rsrQr[r[r\r.TszTransientResource.__init__cCs|Srur[r4r[r[r\rUXszTransientResource.__enter__NcCsT|durPt|j|�rP|j��D](\}}t||�s4qPt||�|krqPqtd��dS)z�If type_ is a subclass of self.exc and value has attributes matching
        self.attrs, raise ResourceDenied.  Otherwise let the exception
        propagate (if any).Nz%an optional resource is not available)r@rsrtr|r�r~r)r-�type_rc�	tracebackr2�
attr_valuer[r[r\rY[s
zTransientResource.__exit__)NNNrZr[r[r[r\r0Osr0)r��>@c	
#sgd�}gd�}td|��|�g��sDdd�|D��dd�|D�����fdd�}t��}z�z|d	urrt�|�d	VWn�ty�}zh|j}t|�d
kr�t|dt�r�|d}q�t|�dkr�t|d
t�r�|d
}q�q�q�||��WYd	}~n
d	}~00Wt�|�nt�|�0d	S)
z�Return a context manager that raises ResourceDenied when various issues
    with the Internet connection manifest themselves as exceptions.))ZECONNREFUSED�o)�
ECONNRESET�h)ZEHOSTUNREACH�q)ZENETUNREACH�e)�	ETIMEDOUT�n))�	EAI_AGAIN���)�EAI_FAIL���)�
EAI_NONAME���)�
EAI_NODATA���)Z
WSANO_DATAi�*zResource %r is not availablecSsg|]\}}tt||��qSr[)r~r��rrg�numr[r[r\r�s�z&transient_internet.<locals>.<listcomp>cSsg|]\}}tt||��qSr[)r~r�r�r[r[r\r�s�cs`t|dd�}t|tj�s4t|tj�r,|�vs4|�vr\tsNtj��j	dd��}||_
|�dS)Nr�r�
)r~�
isinstancer�r�r�rrk�stderrr$r��	__cause__)�err�nrs�Zcaptured_errnosZdeniedZ
gai_errnosr[r\�filter_error�s
���z(transient_internet.<locals>.filter_errorNr�rr�)rr��getdefaulttimeout�setdefaulttimeout�IOErrorr�r/r�)	�
resource_namer�ZerrnosZdefault_errnosZdefault_gai_errnosr�Zold_timeoutr��ar[r�r\r9ps8	��




r9c
csRddl}tt|�}tt||���ztt|�VWtt||�ntt||�0dS)z�Return a context manager used by captured_stdout/stdin/stderr
    that temporarily replaces the sys stream *stream_name* with a StringIO.rN)�ior~rk�setattr�StringIO)�stream_namer��orig_stdoutr[r[r\�captured_output�s
r�cCstd�S)z�Capture the output of sys.stdout:

       with captured_stdout() as s:
           print("hello")
       self.assertEqual(s.getvalue(), "hello")
    r��r�r[r[r[r\r1�sr1cCstd�S)Nr�r�r[r[r[r\r3�sr3cCstd�S)N�stdinr�r[r[r[r\r2�sr2cCs*t��trt�d�t��t��dS)a�Force as many objects as possible to be collected.

    In non-CPython implementations of Python, this is needed because timely
    deallocation is not guaranteed by the garbage collector.  (Even in CPython
    this can be the case in case of reference cycles.)  This means that __del__
    methods may be called later than expected and weakrefs may remain alive for
    longer than expected.  This function tries its best to force all garbage
    objects to disappear.
    皙�����?N)�gcZcollectr#r�r�r[r[r[r\�
gc_collect�s


r�ccs:t��}t��zdVW|r6t��n|r4t��0dSru)r��	isenabled�disable�enable)Zhave_gcr[r[r\�
disable_gc�s
�r�cCsFddl}|�d�pd}d}|��D]}|�d�r"|}q"|dkoD|dkS)z,Find if Python was built with optimizations.rN�	PY_CFLAGSrOz-Oz-O0)r�get_config_varr�rn)r�cflagsZ	final_opt�optr[r[r\�python_is_optimized�s
r�ZnPZ0n�gettotalrefcountZ2PZ0Pr�cCst�t|t�Sru)�struct�calcsize�_header�_align��fmtr[r[r\�calcobjsize�sr�cCst�t|t�Sru)r�r��_vheaderr�r�r[r[r\�calcvobjsizesr�i@icCsht�|�}t|�tkr |jt@s:t|�tkrDt|�jt@rD|tj7}dt|�||f}|�|||�dS)Nz&wrong size for %s: got %d, expected %d)	rk�	getsizeofr��	__flags__�_TPFLAGS_HEAPTYPE�_TPFLAGS_HAVE_GCZ	_testcapiZSIZEOF_PYGC_HEAD�assertEqual)�test�o�size�resultrir[r[r\�check_sizeof	s

��
�r�cs��fdd�}|S)Ncs$���fdd�}�j|_�j|_|S)Nc
s�z ddl}t|��}|�|�}Wn&ty4�YnBd}}Yn00�D](}z|�||�WqvWqLYqL0qLz&�|i|��W|r�|r�|�||�Sn|r�|r�|�||�0dSr)�localer~�	setlocaler)r��kwdsr��categoryZorig_locale�loc)�catstrr��localesr[r\�inners.

��z1run_with_locale.<locals>.decorator.<locals>.inner�rWrZ�r�r��r�r�r�r\r�sz"run_with_locale.<locals>.decoratorr[)r�r�r�r[r�r\r7sr7cs�fdd�}|S)Ncs"��fdd�}�j|_�j|_|S)Nc	s�z
tj}Wnty&t�d��Yn0dtjvr>tjd}nd}�tjd<|�zH�|i|��W|durttjd=n
|tjd<t��S|tjd<t��n&|dur�tjd=n
|tjd<t��0dS)Nztzset requiredZTZ)r��tzsetrrerfr�r[)r�r�r�Zorig_tz)r��tzr[r\r�=s0




�

�

z-run_with_tz.<locals>.decorator.<locals>.innerr�r��r�r�r\r�<szrun_with_tz.<locals>.decoratorr[)r�r�r[r�r\rR;srRi�r�r�cCs�dttdtd�}t�d|tjtjB�}|dur>td|f��tt|�	d��||�	d��
��}|a|tkrrt}|t
dkr�td|f��|adS)Nr�)rl�m�g�tz(\d+(\.\d+)?) (K|M|G|T)b?$zInvalid memory limit %rr�r�z$Memory limit %r too low to be useful)�_1M�_1Gr=r>�
IGNORECASE�VERBOSEr�r
�float�group�lower�real_max_memuse�MAX_Py_ssize_t�_2Gr)�limit�sizesr��memlimitr[r[r\r:es"�
�$r:c@s(eZdZdZdd�Zdd�Zdd�ZdS)	�_MemoryWatchdogz`An object which periodically watches the process' memory consumption
    and prints it out.
    cCsdjt��d�|_d|_dS)Nz/proc/{pid}/statm)�pidF)�formatr��getpid�procfile�startedr4r[r[r\r.sz_MemoryWatchdog.__init__c
Cs�zt|jd�}WnBtyR}z*t�d�|�t�tj�	�WYd}~dSd}~00t
d�}tjtj
|g|tjd�|_|��d|_dS)N�rz"/proc not available for stats: {0}zmemory_watchdog.py)r�r�T)r	r�r�r^r�r�r�rkr��flushr(�
subprocess�Popen�
executable�DEVNULL�mem_watchdogr�r�)r-rw�eZwatchdog_scriptr[r[r\�start�s�
�z_MemoryWatchdog.startcCs|jr|j��|j��dSru)r�r��	terminate�waitr4r[r[r\�stop�s
z_MemoryWatchdog.stopN)rWrXrYrZr.r�r�r[r[r[r\r�zsr�cs���fdd�}|S)aADecorator for bigmem tests.

    'minsize' is the minimum useful size for the test (in arbitrary,
    test-interpreted units.) 'memuse' is the number of 'bytes per size' for
    the test, or a good estimate of it.

    if 'dry_run' is False, it means the test doesn't support dummy runs
    when -M is not specified.
    cs ���fdd����_��_�S)Ncs��j}�j}tsd}n|}ts"�sDt||krDt�d||d��trztrzt�tdj||dd��t�}|�	�nd}z�||�W|r�|�
�Sn|r�|�
future/backports/test/__pycache__/ssl_servers.cpython-39.pyc
[compiled CPython 3.9 bytecode entries — binary content omitted, not recoverable as text]
future/backports/test/ssl_servers.py
from __future__ import absolute_import, division, print_function, unicode_literals
from future.builtins import filter, str
from future import utils
import os
import sys
import ssl
import pprint
import socket
from future.backports.urllib import parse as urllib_parse
from future.backports.http.server import (HTTPServer as _HTTPServer,
    SimpleHTTPRequestHandler, BaseHTTPRequestHandler)
from future.backports.test import support
threading = support.import_module("threading")

here = os.path.dirname(__file__)

HOST = support.HOST
CERTFILE = os.path.join(here, 'keycert.pem')

# This one's based on HTTPServer, which is based on SocketServer

class HTTPSServer(_HTTPServer):

    def __init__(self, server_address, handler_class, context):
        _HTTPServer.__init__(self, server_address, handler_class)
        self.context = context

    def __str__(self):
        return ('<%s %s:%s>' %
                (self.__class__.__name__,
                 self.server_name,
                 self.server_port))

    def get_request(self):
        # override this to wrap socket with SSL
        try:
            sock, addr = self.socket.accept()
            sslconn = self.context.wrap_socket(sock, server_side=True)
        except socket.error as e:
            # socket errors are silenced by the caller, print them here
            if support.verbose:
                sys.stderr.write("Got an error:\n%s\n" % e)
            raise
        return sslconn, addr

class RootedHTTPRequestHandler(SimpleHTTPRequestHandler):
    # need to override translate_path to get a known root,
    # instead of using os.curdir, since the test could be
    # run from anywhere

    server_version = "TestHTTPS/1.0"
    root = here
    # Avoid hanging when a request gets interrupted by the client
    timeout = 5

    def translate_path(self, path):
        """Translate a /-separated PATH to the local filename syntax.

        Components that mean special things to the local file system
        (e.g. drive or directory names) are ignored.  (XXX They should
        probably be diagnosed.)

        """
        # abandon query parameters
        path = urllib_parse.urlparse(path)[2]
        path = os.path.normpath(urllib_parse.unquote(path))
        words = path.split('/')
        words = filter(None, words)
        path = self.root
        for word in words:
            drive, word = os.path.splitdrive(word)
            head, word = os.path.split(word)
            path = os.path.join(path, word)
        return path

    def log_message(self, format, *args):
        # we override this to suppress logging unless "verbose"
        if support.verbose:
            sys.stdout.write(" server (%s:%d %s):\n   [%s] %s\n" %
                             (self.server.server_address,
                              self.server.server_port,
                              self.request.cipher(),
                              self.log_date_time_string(),
                              format%args))


class StatsRequestHandler(BaseHTTPRequestHandler):
    """Example HTTP request handler which returns SSL statistics on GET
    requests.
    """

    server_version = "StatsHTTPS/1.0"

    def do_GET(self, send_body=True):
        """Serve a GET request."""
        sock = self.rfile.raw._sock
        context = sock.context
        stats = {
            'session_cache': context.session_stats(),
            'cipher': sock.cipher(),
            'compression': sock.compression(),
            }
        body = pprint.pformat(stats)
        body = body.encode('utf-8')
        self.send_response(200)
        self.send_header("Content-type", "text/plain; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        if send_body:
            self.wfile.write(body)

    def do_HEAD(self):
        """Serve a HEAD request."""
        self.do_GET(send_body=False)

    def log_request(self, format, *args):
        if support.verbose:
            BaseHTTPRequestHandler.log_request(self, format, *args)


class HTTPSServerThread(threading.Thread):

    def __init__(self, context, host=HOST, handler_class=None):
        self.flag = None
        self.server = HTTPSServer((host, 0),
                                  handler_class or RootedHTTPRequestHandler,
                                  context)
        self.port = self.server.server_port
        threading.Thread.__init__(self)
        self.daemon = True

    def __str__(self):
        return "<%s %s>" % (self.__class__.__name__, self.server)

    def start(self, flag=None):
        self.flag = flag
        threading.Thread.start(self)

    def run(self):
        if self.flag:
            self.flag.set()
        try:
            self.server.serve_forever(0.05)
        finally:
            self.server.server_close()

    def stop(self):
        self.server.shutdown()


def make_https_server(case, certfile=CERTFILE, host=HOST, handler_class=None):
    # we assume the certfile contains both private key and certificate
    context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
    context.load_cert_chain(certfile)
    server = HTTPSServerThread(context, host, handler_class)
    flag = threading.Event()
    server.start(flag)
    flag.wait()
    def cleanup():
        if support.verbose:
            sys.stdout.write('stopping HTTPS server\n')
        server.stop()
        if support.verbose:
            sys.stdout.write('joining HTTPS thread\n')
        server.join()
    case.addCleanup(cleanup)
    return server


if __name__ == "__main__":
    import argparse
    parser = argparse.ArgumentParser(
        description='Run a test HTTPS server. '
                    'By default, the current directory is served.')
    parser.add_argument('-p', '--port', type=int, default=4433,
                        help='port to listen on (default: %(default)s)')
    parser.add_argument('-q', '--quiet', dest='verbose', default=True,
                        action='store_false', help='be less verbose')
    parser.add_argument('-s', '--stats', dest='use_stats_handler', default=False,
                        action='store_true', help='always return stats page')
    parser.add_argument('--curve-name', dest='curve_name', type=str,
                        action='store',
                        help='curve name for EC-based Diffie-Hellman')
    parser.add_argument('--dh', dest='dh_file', type=str, action='store',
                        help='PEM file containing DH parameters')
    args = parser.parse_args()

    support.verbose = args.verbose
    if args.use_stats_handler:
        handler_class = StatsRequestHandler
    else:
        handler_class = RootedHTTPRequestHandler
        if utils.PY2:
            handler_class.root = os.getcwdu()
        else:
            handler_class.root = os.getcwd()
    context = ssl.SSLContext(ssl.PROTOCOL_TLSv1)
    context.load_cert_chain(CERTFILE)
    if args.curve_name:
        context.set_ecdh_curve(args.curve_name)
    if args.dh_file:
        context.load_dh_params(args.dh_file)

    server = HTTPSServer(("", args.port), handler_class, context)
    if args.verbose:
        print("Listening on https://localhost:{0.port}".format(args))
    server.serve_forever(0.1)
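The path-rooting idea used by `RootedHTTPRequestHandler.translate_path` above can be sketched as a standalone helper using only the standard library. This is a minimal sketch, not part of the module; `rooted_path` is a hypothetical name, and the decode/normalize/rejoin steps mirror the method's logic.

```python
import os
import posixpath
from urllib.parse import unquote, urlparse

def rooted_path(root, url_path):
    # Abandon query parameters and fragments, then decode percent-escapes.
    path = urlparse(url_path).path
    path = posixpath.normpath(unquote(path))
    # Rebuild the path under `root`, discarding drive prefixes and any
    # directory components each word might carry, as translate_path does.
    result = root
    for word in filter(None, path.split('/')):
        _, word = os.path.splitdrive(word)
        _, word = os.path.split(word)
        result = os.path.join(result, word)
    return result
```

For example, `rooted_path('/srv', '/a/b.txt?x=1')` resolves to `b.txt` under `/srv/a` regardless of the process's current directory, which is why the handler overrides `translate_path` instead of serving from `os.curdir`.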
future/backports/test/sha256.pem
# Certificate chain for https://sha256.tbs-internet.com
 0 s:/C=FR/postalCode=14000/ST=Calvados/L=CAEN/street=22 rue de Bretagne/O=TBS INTERNET/OU=0002 440443810/OU=sha-256 production/CN=sha256.tbs-internet.com
   i:/C=FR/ST=Calvados/L=Caen/O=TBS INTERNET/OU=Terms and Conditions: http://www.tbs-internet.com/CA/repository/OU=TBS INTERNET CA/CN=TBS X509 CA SGC
-----BEGIN CERTIFICATE-----
MIIGXDCCBUSgAwIBAgIRAKpVmHgg9nfCodAVwcP4siwwDQYJKoZIhvcNAQELBQAw
gcQxCzAJBgNVBAYTAkZSMREwDwYDVQQIEwhDYWx2YWRvczENMAsGA1UEBxMEQ2Fl
bjEVMBMGA1UEChMMVEJTIElOVEVSTkVUMUgwRgYDVQQLEz9UZXJtcyBhbmQgQ29u
ZGl0aW9uczogaHR0cDovL3d3dy50YnMtaW50ZXJuZXQuY29tL0NBL3JlcG9zaXRv
cnkxGDAWBgNVBAsTD1RCUyBJTlRFUk5FVCBDQTEYMBYGA1UEAxMPVEJTIFg1MDkg
Q0EgU0dDMB4XDTEyMDEwNDAwMDAwMFoXDTE0MDIxNzIzNTk1OVowgcsxCzAJBgNV
BAYTAkZSMQ4wDAYDVQQREwUxNDAwMDERMA8GA1UECBMIQ2FsdmFkb3MxDTALBgNV
BAcTBENBRU4xGzAZBgNVBAkTEjIyIHJ1ZSBkZSBCcmV0YWduZTEVMBMGA1UEChMM
VEJTIElOVEVSTkVUMRcwFQYDVQQLEw4wMDAyIDQ0MDQ0MzgxMDEbMBkGA1UECxMS
c2hhLTI1NiBwcm9kdWN0aW9uMSAwHgYDVQQDExdzaGEyNTYudGJzLWludGVybmV0
LmNvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAKQIX/zdJcyxty0m
PM1XQSoSSifueS3AVcgqMsaIKS/u+rYzsv4hQ/qA6vLn5m5/ewUcZDj7zdi6rBVf
PaVNXJ6YinLX0tkaW8TEjeVuZG5yksGZlhCt1CJ1Ho9XLiLaP4uJ7MCoNUntpJ+E
LfrOdgsIj91kPmwjDJeztVcQCvKzhjVJA/KxdInc0JvOATn7rpaSmQI5bvIjufgo
qVsTPwVFzuUYULXBk7KxRT7MiEqnd5HvviNh0285QC478zl3v0I0Fb5El4yD3p49
IthcRnxzMKc0UhU5ogi0SbONyBfm/mzONVfSxpM+MlyvZmJqrbuuLoEDzJD+t8PU
xSuzgbcCAwEAAaOCAj4wggI6MB8GA1UdIwQYMBaAFAdEdoWTKLx/bXjSCuv6TEvf
2YIfMB0GA1UdDgQWBBT/qTGYdaj+f61c2IRFL/B1eEsM8DAOBgNVHQ8BAf8EBAMC
BaAwDAYDVR0TAQH/BAIwADA0BgNVHSUELTArBggrBgEFBQcDAQYIKwYBBQUHAwIG
CisGAQQBgjcKAwMGCWCGSAGG+EIEATBLBgNVHSAERDBCMEAGCisGAQQB5TcCBAEw
MjAwBggrBgEFBQcCARYkaHR0cHM6Ly93d3cudGJzLWludGVybmV0LmNvbS9DQS9D
UFM0MG0GA1UdHwRmMGQwMqAwoC6GLGh0dHA6Ly9jcmwudGJzLWludGVybmV0LmNv
bS9UQlNYNTA5Q0FTR0MuY3JsMC6gLKAqhihodHRwOi8vY3JsLnRicy14NTA5LmNv
bS9UQlNYNTA5Q0FTR0MuY3JsMIGmBggrBgEFBQcBAQSBmTCBljA4BggrBgEFBQcw
AoYsaHR0cDovL2NydC50YnMtaW50ZXJuZXQuY29tL1RCU1g1MDlDQVNHQy5jcnQw
NAYIKwYBBQUHMAKGKGh0dHA6Ly9jcnQudGJzLXg1MDkuY29tL1RCU1g1MDlDQVNH
Qy5jcnQwJAYIKwYBBQUHMAGGGGh0dHA6Ly9vY3NwLnRicy14NTA5LmNvbTA/BgNV
HREEODA2ghdzaGEyNTYudGJzLWludGVybmV0LmNvbYIbd3d3LnNoYTI1Ni50YnMt
aW50ZXJuZXQuY29tMA0GCSqGSIb3DQEBCwUAA4IBAQA0pOuL8QvAa5yksTbGShzX
ABApagunUGoEydv4YJT1MXy9tTp7DrWaozZSlsqBxrYAXP1d9r2fuKbEniYHxaQ0
UYaf1VSIlDo1yuC8wE7wxbHDIpQ/E5KAyxiaJ8obtDhFstWAPAH+UoGXq0kj2teN
21sFQ5dXgA95nldvVFsFhrRUNB6xXAcaj0VZFhttI0ZfQZmQwEI/P+N9Jr40OGun
aa+Dn0TMeUH4U20YntfLbu2nDcJcYfyurm+8/0Tr4HznLnedXu9pCPYj0TaddrgT
XO0oFiyy7qGaY6+qKh71yD64Y3ycCJ/HR9Wm39mjZYc9ezYwT4noP6r7Lk8YO7/q
-----END CERTIFICATE-----
 1 s:/C=FR/ST=Calvados/L=Caen/O=TBS INTERNET/OU=Terms and Conditions: http://www.tbs-internet.com/CA/repository/OU=TBS INTERNET CA/CN=TBS X509 CA SGC
   i:/C=SE/O=AddTrust AB/OU=AddTrust External TTP Network/CN=AddTrust External CA Root
-----BEGIN CERTIFICATE-----
MIIFVjCCBD6gAwIBAgIQXpDZ0ETJMV02WTx3GTnhhTANBgkqhkiG9w0BAQUFADBv
MQswCQYDVQQGEwJTRTEUMBIGA1UEChMLQWRkVHJ1c3QgQUIxJjAkBgNVBAsTHUFk
ZFRydXN0IEV4dGVybmFsIFRUUCBOZXR3b3JrMSIwIAYDVQQDExlBZGRUcnVzdCBF
eHRlcm5hbCBDQSBSb290MB4XDTA1MTIwMTAwMDAwMFoXDTE5MDYyNDE5MDYzMFow
gcQxCzAJBgNVBAYTAkZSMREwDwYDVQQIEwhDYWx2YWRvczENMAsGA1UEBxMEQ2Fl
bjEVMBMGA1UEChMMVEJTIElOVEVSTkVUMUgwRgYDVQQLEz9UZXJtcyBhbmQgQ29u
ZGl0aW9uczogaHR0cDovL3d3dy50YnMtaW50ZXJuZXQuY29tL0NBL3JlcG9zaXRv
cnkxGDAWBgNVBAsTD1RCUyBJTlRFUk5FVCBDQTEYMBYGA1UEAxMPVEJTIFg1MDkg
Q0EgU0dDMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAsgOkO3f7wzN6
rOjg45tR5vjBfzK7qmV9IBxb/QW9EEXxG+E7FNhZqQLtwGBKoSsHTnQqV75wWMk0
9tinWvftBkSpj5sTi/8cbzJfUvTSVYh3Qxv6AVVjMMH/ruLjE6y+4PoaPs8WoYAQ
ts5R4Z1g8c/WnTepLst2x0/Wv7GmuoQi+gXvHU6YrBiu7XkeYhzc95QdviWSJRDk
owhb5K43qhcvjRmBfO/paGlCliDGZp8mHwrI21mwobWpVjTxZRwYO3bd4+TGcI4G
Ie5wmHwE8F7SK1tgSqbBacKjDa93j7txKkfz/Yd2n7TGqOXiHPsJpG655vrKtnXk
9vs1zoDeJQIDAQABo4IBljCCAZIwHQYDVR0OBBYEFAdEdoWTKLx/bXjSCuv6TEvf
2YIfMA4GA1UdDwEB/wQEAwIBBjASBgNVHRMBAf8ECDAGAQH/AgEAMCAGA1UdJQQZ
MBcGCisGAQQBgjcKAwMGCWCGSAGG+EIEATAYBgNVHSAEETAPMA0GCysGAQQBgOU3
AgQBMHsGA1UdHwR0MHIwOKA2oDSGMmh0dHA6Ly9jcmwuY29tb2RvY2EuY29tL0Fk
ZFRydXN0RXh0ZXJuYWxDQVJvb3QuY3JsMDagNKAyhjBodHRwOi8vY3JsLmNvbW9k
by5uZXQvQWRkVHJ1c3RFeHRlcm5hbENBUm9vdC5jcmwwgYAGCCsGAQUFBwEBBHQw
cjA4BggrBgEFBQcwAoYsaHR0cDovL2NydC5jb21vZG9jYS5jb20vQWRkVHJ1c3RV
VE5TR0NDQS5jcnQwNgYIKwYBBQUHMAKGKmh0dHA6Ly9jcnQuY29tb2RvLm5ldC9B
ZGRUcnVzdFVUTlNHQ0NBLmNydDARBglghkgBhvhCAQEEBAMCAgQwDQYJKoZIhvcN
AQEFBQADggEBAK2zEzs+jcIrVK9oDkdDZNvhuBYTdCfpxfFs+OAujW0bIfJAy232
euVsnJm6u/+OrqKudD2tad2BbejLLXhMZViaCmK7D9nrXHx4te5EP8rL19SUVqLY
1pTnv5dhNgEgvA7n5lIzDSYs7yRLsr7HJsYPr6SeYSuZizyX1SNz7ooJ32/F3X98
RB0Mlc/E0OyOrkQ9/y5IrnpnaSora8CnUrV5XNOg+kyCz9edCyx4D5wXYcwZPVWz
8aDqquESrezPyjtfi4WRO4s/VD3HLZvOxzMrWAVYCDG9FxaOhF0QGuuG1F7F3GKV
v6prNyCl016kRl2j1UT+a7gLd8fA25A4C9E=
-----END CERTIFICATE-----
 2 s:/C=SE/O=AddTrust AB/OU=AddTrust External TTP Network/CN=AddTrust External CA Root
   i:/C=US/ST=UT/L=Salt Lake City/O=The USERTRUST Network/OU=http://www.usertrust.com/CN=UTN - DATACorp SGC
-----BEGIN CERTIFICATE-----
MIIEZjCCA06gAwIBAgIQUSYKkxzif5zDpV954HKugjANBgkqhkiG9w0BAQUFADCB
kzELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAlVUMRcwFQYDVQQHEw5TYWx0IExha2Ug
Q2l0eTEeMBwGA1UEChMVVGhlIFVTRVJUUlVTVCBOZXR3b3JrMSEwHwYDVQQLExho
dHRwOi8vd3d3LnVzZXJ0cnVzdC5jb20xGzAZBgNVBAMTElVUTiAtIERBVEFDb3Jw
IFNHQzAeFw0wNTA2MDcwODA5MTBaFw0xOTA2MjQxOTA2MzBaMG8xCzAJBgNVBAYT
AlNFMRQwEgYDVQQKEwtBZGRUcnVzdCBBQjEmMCQGA1UECxMdQWRkVHJ1c3QgRXh0
ZXJuYWwgVFRQIE5ldHdvcmsxIjAgBgNVBAMTGUFkZFRydXN0IEV4dGVybmFsIENB
IFJvb3QwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC39xoz5vIABC05
4E5b7R+8bA/Ntfojts7emxEzl6QpTH2Tn71KvJPtAxrjj8/lbVBa1pcplFqAsEl6
2y6V/bjKvzc4LR4+kUGtcFbH8E8/6DKedMrIkFTpxl8PeJ2aQDwOrGGqXhSPnoeh
alDc15pOrwWzpnGUnHGzUGAKxxOdOAeGAqjpqGkmGJCrTLBPI6s6T4TY386f4Wlv
u9dC12tE5Met7m1BX3JacQg3s3llpFmglDf3AC8NwpJy2tA4ctsUqEXEXSp9t7TW
xO6szRNEt8kr3UMAJfphuWlqWCMRt6czj1Z1WfXNKddGtworZbbTQm8Vsrh7++/p
XVPVNFonAgMBAAGjgdgwgdUwHwYDVR0jBBgwFoAUUzLRs89/+uDxoF2FTpLSnkUd
tE8wHQYDVR0OBBYEFK29mHo0tCb3+sQmVO8DveAky1QaMA4GA1UdDwEB/wQEAwIB
BjAPBgNVHRMBAf8EBTADAQH/MBEGCWCGSAGG+EIBAQQEAwIBAjAgBgNVHSUEGTAX
BgorBgEEAYI3CgMDBglghkgBhvhCBAEwPQYDVR0fBDYwNDAyoDCgLoYsaHR0cDov
L2NybC51c2VydHJ1c3QuY29tL1VUTi1EQVRBQ29ycFNHQy5jcmwwDQYJKoZIhvcN
AQEFBQADggEBAMbuUxdoFLJRIh6QWA2U/b3xcOWGLcM2MY9USEbnLQg3vGwKYOEO
rVE04BKT6b64q7gmtOmWPSiPrmQH/uAB7MXjkesYoPF1ftsK5p+R26+udd8jkWjd
FwBaS/9kbHDrARrQkNnHptZt9hPk/7XJ0h4qy7ElQyZ42TCbTg0evmnv3+r+LbPM
+bDdtRTKkdSytaX7ARmjR3mfnYyVhzT4HziS2jamEfpr62vp3EV4FTkG101B5CHI
3C+H0be/SGB1pWLLJN47YaApIKa+xWycxOkKaSLvkTr6Jq/RW0GnOuL4OAdCq8Fb
+M5tug8EPzI0rNwEKNdwMBQmBsTkm5jVz3g=
-----END CERTIFICATE-----
 3 s:/C=US/ST=UT/L=Salt Lake City/O=The USERTRUST Network/OU=http://www.usertrust.com/CN=UTN - DATACorp SGC
   i:/C=US/ST=UT/L=Salt Lake City/O=The USERTRUST Network/OU=http://www.usertrust.com/CN=UTN - DATACorp SGC
-----BEGIN CERTIFICATE-----
MIIEXjCCA0agAwIBAgIQRL4Mi1AAIbQR0ypoBqmtaTANBgkqhkiG9w0BAQUFADCB
kzELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAlVUMRcwFQYDVQQHEw5TYWx0IExha2Ug
Q2l0eTEeMBwGA1UEChMVVGhlIFVTRVJUUlVTVCBOZXR3b3JrMSEwHwYDVQQLExho
dHRwOi8vd3d3LnVzZXJ0cnVzdC5jb20xGzAZBgNVBAMTElVUTiAtIERBVEFDb3Jw
IFNHQzAeFw05OTA2MjQxODU3MjFaFw0xOTA2MjQxOTA2MzBaMIGTMQswCQYDVQQG
EwJVUzELMAkGA1UECBMCVVQxFzAVBgNVBAcTDlNhbHQgTGFrZSBDaXR5MR4wHAYD
VQQKExVUaGUgVVNFUlRSVVNUIE5ldHdvcmsxITAfBgNVBAsTGGh0dHA6Ly93d3cu
dXNlcnRydXN0LmNvbTEbMBkGA1UEAxMSVVROIC0gREFUQUNvcnAgU0dDMIIBIjAN
BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA3+5YEKIrblXEjr8uRgnn4AgPLit6
E5Qbvfa2gI5lBZMAHryv4g+OGQ0SR+ysraP6LnD43m77VkIVni5c7yPeIbkFdicZ
D0/Ww5y0vpQZY/KmEQrrU0icvvIpOxboGqBMpsn0GFlowHDyUwDAXlCCpVZvNvlK
4ESGoE1O1kduSUrLZ9emxAW5jh70/P/N5zbgnAVssjMiFdC04MwXwLLA9P4yPykq
lXvY8qdOD1R8oQ2AswkDwf9c3V6aPryuvEeKaq5xyh+xKrhfQgUL7EYw0XILyulW
bfXv33i+Ybqypa4ETLyorGkVl73v67SMvzX41MPRKA5cOp9wGDMgd8SirwIDAQAB
o4GrMIGoMAsGA1UdDwQEAwIBxjAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBRT
MtGzz3/64PGgXYVOktKeRR20TzA9BgNVHR8ENjA0MDKgMKAuhixodHRwOi8vY3Js
LnVzZXJ0cnVzdC5jb20vVVROLURBVEFDb3JwU0dDLmNybDAqBgNVHSUEIzAhBggr
BgEFBQcDAQYKKwYBBAGCNwoDAwYJYIZIAYb4QgQBMA0GCSqGSIb3DQEBBQUAA4IB
AQAnNZcAiosovcYzMB4p/OL31ZjUQLtgyr+rFywJNn9Q+kHcrpY6CiM+iVnJowft
Gzet/Hy+UUla3joKVAgWRcKZsYfNjGjgaQPpxE6YsjuMFrMOoAyYUJuTqXAJyCyj
j98C5OBxOvG0I3KgqgHf35g+FFCgMSa9KOlaMCZ1+XtgHI3zzVAmbQQnmt/VDUVH
KWss5nbZqSl9Mt3JNjy9rjXxEZ4du5A/EkdOjtd+D2JzHVImOBwYSf0wdJrE5SIv
2MCN7ZF6TACPcn9d2t0bi0Vr591pl6jFVkwPDPafepE39peC4N1xaf92P2BNPM/3
mfnGV/TJVTl4uix5yaaIK/QI
-----END CERTIFICATE-----
future/backports/test/nullbytecert.pem
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 0 (0x0)
    Signature Algorithm: sha1WithRSAEncryption
        Issuer: C=US, ST=Oregon, L=Beaverton, O=Python Software Foundation, OU=Python Core Development, CN=null.python.org\x00example.org/emailAddress=python-dev@python.org
        Validity
            Not Before: Aug  7 13:11:52 2013 GMT
            Not After : Aug  7 13:12:52 2013 GMT
        Subject: C=US, ST=Oregon, L=Beaverton, O=Python Software Foundation, OU=Python Core Development, CN=null.python.org\x00example.org/emailAddress=python-dev@python.org
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:b5:ea:ed:c9:fb:46:7d:6f:3b:76:80:dd:3a:f3:
                    03:94:0b:a7:a6:db:ec:1d:df:ff:23:74:08:9d:97:
                    16:3f:a3:a4:7b:3e:1b:0e:96:59:25:03:a7:26:e2:
                    88:a9:cf:79:cd:f7:04:56:b0:ab:79:32:6e:59:c1:
                    32:30:54:eb:58:a8:cb:91:f0:42:a5:64:27:cb:d4:
                    56:31:88:52:ad:cf:bd:7f:f0:06:64:1f:cc:27:b8:
                    a3:8b:8c:f3:d8:29:1f:25:0b:f5:46:06:1b:ca:02:
                    45:ad:7b:76:0a:9c:bf:bb:b9:ae:0d:16:ab:60:75:
                    ae:06:3e:9c:7c:31:dc:92:2f:29:1a:e0:4b:0c:91:
                    90:6c:e9:37:c5:90:d7:2a:d7:97:15:a3:80:8f:5d:
                    7b:49:8f:54:30:d4:97:2c:1c:5b:37:b5:ab:69:30:
                    68:43:d3:33:78:4b:02:60:f5:3c:44:80:a1:8f:e7:
                    f0:0f:d1:5e:87:9e:46:cf:62:fc:f9:bf:0c:65:12:
                    f1:93:c8:35:79:3f:c8:ec:ec:47:f5:ef:be:44:d5:
                    ae:82:1e:2d:9a:9f:98:5a:67:65:e1:74:70:7c:cb:
                    d3:c2:ce:0e:45:49:27:dc:e3:2d:d4:fb:48:0e:2f:
                    9e:77:b8:14:46:c0:c4:36:ca:02:ae:6a:91:8c:da:
                    2f:85
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Subject Key Identifier:
                88:5A:55:C0:52:FF:61:CD:52:A3:35:0F:EA:5A:9C:24:38:22:F7:5C
            X509v3 Key Usage:
                Digital Signature, Non Repudiation, Key Encipherment
            X509v3 Subject Alternative Name:
                *************************************************************
                WARNING: The values for DNS, email and URI are WRONG. OpenSSL
                         doesn't print the text after a NULL byte.
                *************************************************************
                DNS:altnull.python.org, email:null@python.org, URI:http://null.python.org, IP Address:192.0.2.1, IP Address:2001:DB8:0:0:0:0:0:1
    Signature Algorithm: sha1WithRSAEncryption
         ac:4f:45:ef:7d:49:a8:21:70:8e:88:59:3e:d4:36:42:70:f5:
         a3:bd:8b:d7:a8:d0:58:f6:31:4a:b1:a4:a6:dd:6f:d9:e8:44:
         3c:b6:0a:71:d6:7f:b1:08:61:9d:60:ce:75:cf:77:0c:d2:37:
         86:02:8d:5e:5d:f9:0f:71:b4:16:a8:c1:3d:23:1c:f1:11:b3:
         56:6e:ca:d0:8d:34:94:e6:87:2a:99:f2:ae:ae:cc:c2:e8:86:
         de:08:a8:7f:c5:05:fa:6f:81:a7:82:e6:d0:53:9d:34:f4:ac:
         3e:40:fe:89:57:7a:29:a4:91:7e:0b:c6:51:31:e5:10:2f:a4:
         60:76:cd:95:51:1a:be:8b:a1:b0:fd:ad:52:bd:d7:1b:87:60:
         d2:31:c7:17:c4:18:4f:2d:08:25:a3:a7:4f:b7:92:ca:e2:f5:
         25:f1:54:75:81:9d:b3:3d:61:a2:f7:da:ed:e1:c6:6f:2c:60:
         1f:d8:6f:c5:92:05:ab:c9:09:62:49:a9:14:ad:55:11:cc:d6:
         4a:19:94:99:97:37:1d:81:5f:8b:cf:a3:a8:96:44:51:08:3d:
         0b:05:65:12:eb:b6:70:80:88:48:72:4f:c6:c2:da:cf:cd:8e:
         5b:ba:97:2f:60:b4:96:56:49:5e:3a:43:76:63:04:be:2a:f6:
         c1:ca:a9:94
-----BEGIN CERTIFICATE-----
MIIE2DCCA8CgAwIBAgIBADANBgkqhkiG9w0BAQUFADCBxTELMAkGA1UEBhMCVVMx
DzANBgNVBAgMBk9yZWdvbjESMBAGA1UEBwwJQmVhdmVydG9uMSMwIQYDVQQKDBpQ
eXRob24gU29mdHdhcmUgRm91bmRhdGlvbjEgMB4GA1UECwwXUHl0aG9uIENvcmUg
RGV2ZWxvcG1lbnQxJDAiBgNVBAMMG251bGwucHl0aG9uLm9yZwBleGFtcGxlLm9y
ZzEkMCIGCSqGSIb3DQEJARYVcHl0aG9uLWRldkBweXRob24ub3JnMB4XDTEzMDgw
NzEzMTE1MloXDTEzMDgwNzEzMTI1MlowgcUxCzAJBgNVBAYTAlVTMQ8wDQYDVQQI
DAZPcmVnb24xEjAQBgNVBAcMCUJlYXZlcnRvbjEjMCEGA1UECgwaUHl0aG9uIFNv
ZnR3YXJlIEZvdW5kYXRpb24xIDAeBgNVBAsMF1B5dGhvbiBDb3JlIERldmVsb3Bt
ZW50MSQwIgYDVQQDDBtudWxsLnB5dGhvbi5vcmcAZXhhbXBsZS5vcmcxJDAiBgkq
hkiG9w0BCQEWFXB5dGhvbi1kZXZAcHl0aG9uLm9yZzCCASIwDQYJKoZIhvcNAQEB
BQADggEPADCCAQoCggEBALXq7cn7Rn1vO3aA3TrzA5QLp6bb7B3f/yN0CJ2XFj+j
pHs+Gw6WWSUDpybiiKnPec33BFawq3kyblnBMjBU61ioy5HwQqVkJ8vUVjGIUq3P
vX/wBmQfzCe4o4uM89gpHyUL9UYGG8oCRa17dgqcv7u5rg0Wq2B1rgY+nHwx3JIv
KRrgSwyRkGzpN8WQ1yrXlxWjgI9de0mPVDDUlywcWze1q2kwaEPTM3hLAmD1PESA
oY/n8A/RXoeeRs9i/Pm/DGUS8ZPINXk/yOzsR/XvvkTVroIeLZqfmFpnZeF0cHzL
08LODkVJJ9zjLdT7SA4vnne4FEbAxDbKAq5qkYzaL4UCAwEAAaOB0DCBzTAMBgNV
HRMBAf8EAjAAMB0GA1UdDgQWBBSIWlXAUv9hzVKjNQ/qWpwkOCL3XDALBgNVHQ8E
BAMCBeAwgZAGA1UdEQSBiDCBhYIeYWx0bnVsbC5weXRob24ub3JnAGV4YW1wbGUu
Y29tgSBudWxsQHB5dGhvbi5vcmcAdXNlckBleGFtcGxlLm9yZ4YpaHR0cDovL251
bGwucHl0aG9uLm9yZwBodHRwOi8vZXhhbXBsZS5vcmeHBMAAAgGHECABDbgAAAAA
AAAAAAAAAAEwDQYJKoZIhvcNAQEFBQADggEBAKxPRe99SaghcI6IWT7UNkJw9aO9
i9eo0Fj2MUqxpKbdb9noRDy2CnHWf7EIYZ1gznXPdwzSN4YCjV5d+Q9xtBaowT0j
HPERs1ZuytCNNJTmhyqZ8q6uzMLoht4IqH/FBfpvgaeC5tBTnTT0rD5A/olXeimk
kX4LxlEx5RAvpGB2zZVRGr6LobD9rVK91xuHYNIxxxfEGE8tCCWjp0+3ksri9SXx
VHWBnbM9YaL32u3hxm8sYB/Yb8WSBavJCWJJqRStVRHM1koZlJmXNx2BX4vPo6iW
RFEIPQsFZRLrtnCAiEhyT8bC2s/Njlu6ly9gtJZWSV46Q3ZjBL4q9sHKqZQ=
-----END CERTIFICATE-----
future/backports/test/nokia.pem:
# Certificate for projects.developer.nokia.com:443 (see issue 13034)
-----BEGIN CERTIFICATE-----
MIIFLDCCBBSgAwIBAgIQLubqdkCgdc7lAF9NfHlUmjANBgkqhkiG9w0BAQUFADCB
vDELMAkGA1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMR8wHQYDVQQL
ExZWZXJpU2lnbiBUcnVzdCBOZXR3b3JrMTswOQYDVQQLEzJUZXJtcyBvZiB1c2Ug
YXQgaHR0cHM6Ly93d3cudmVyaXNpZ24uY29tL3JwYSAoYykxMDE2MDQGA1UEAxMt
VmVyaVNpZ24gQ2xhc3MgMyBJbnRlcm5hdGlvbmFsIFNlcnZlciBDQSAtIEczMB4X
DTExMDkyMTAwMDAwMFoXDTEyMDkyMDIzNTk1OVowcTELMAkGA1UEBhMCRkkxDjAM
BgNVBAgTBUVzcG9vMQ4wDAYDVQQHFAVFc3BvbzEOMAwGA1UEChQFTm9raWExCzAJ
BgNVBAsUAkJJMSUwIwYDVQQDFBxwcm9qZWN0cy5kZXZlbG9wZXIubm9raWEuY29t
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCr92w1bpHYSYxUEx8N/8Iddda2
lYi+aXNtQfV/l2Fw9Ykv3Ipw4nLeGTj18FFlAZgMdPRlgrzF/NNXGw/9l3/qKdow
CypkQf8lLaxb9Ze1E/KKmkRJa48QTOqvo6GqKuTI6HCeGlG1RxDb8YSKcQWLiytn
yj3Wp4MgRQO266xmMQIDAQABo4IB9jCCAfIwQQYDVR0RBDowOIIccHJvamVjdHMu
ZGV2ZWxvcGVyLm5va2lhLmNvbYIYcHJvamVjdHMuZm9ydW0ubm9raWEuY29tMAkG
A1UdEwQCMAAwCwYDVR0PBAQDAgWgMEEGA1UdHwQ6MDgwNqA0oDKGMGh0dHA6Ly9T
VlJJbnRsLUczLWNybC52ZXJpc2lnbi5jb20vU1ZSSW50bEczLmNybDBEBgNVHSAE
PTA7MDkGC2CGSAGG+EUBBxcDMCowKAYIKwYBBQUHAgEWHGh0dHBzOi8vd3d3LnZl
cmlzaWduLmNvbS9ycGEwKAYDVR0lBCEwHwYJYIZIAYb4QgQBBggrBgEFBQcDAQYI
KwYBBQUHAwIwcgYIKwYBBQUHAQEEZjBkMCQGCCsGAQUFBzABhhhodHRwOi8vb2Nz
cC52ZXJpc2lnbi5jb20wPAYIKwYBBQUHMAKGMGh0dHA6Ly9TVlJJbnRsLUczLWFp
YS52ZXJpc2lnbi5jb20vU1ZSSW50bEczLmNlcjBuBggrBgEFBQcBDARiMGChXqBc
MFowWDBWFglpbWFnZS9naWYwITAfMAcGBSsOAwIaBBRLa7kolgYMu9BSOJsprEsH
iyEFGDAmFiRodHRwOi8vbG9nby52ZXJpc2lnbi5jb20vdnNsb2dvMS5naWYwDQYJ
KoZIhvcNAQEFBQADggEBACQuPyIJqXwUyFRWw9x5yDXgMW4zYFopQYOw/ItRY522
O5BsySTh56BWS6mQB07XVfxmYUGAvRQDA5QHpmY8jIlNwSmN3s8RKo+fAtiNRlcL
x/mWSfuMs3D/S6ev3D6+dpEMZtjrhOdctsarMKp8n/hPbwhAbg5hVjpkW5n8vz2y
0KxvvkA1AxpLwpVv7OlK17ttzIHw8bp9HTlHBU5s8bKz4a565V/a5HI0CSEv/+0y
ko4/ghTnZc1CkmUngKKeFMSah/mT/xAh8XnE2l1AazFa8UKuYki1e+ArHaGZc4ix
UYOtiRphwfuYQhRZ7qX9q2MMkCMI65XNK/SaFrAbbG0=
-----END CERTIFICATE-----
future/backports/test/keycert.pem:
-----BEGIN PRIVATE KEY-----
MIICdwIBADANBgkqhkiG9w0BAQEFAASCAmEwggJdAgEAAoGBANtb0+YrKuxevGpm
LrjaUhZSgz6zFAmuGFmKmUbdjmfv9zSmmdsQIksK++jK0Be9LeZy20j6ahOfuVa0
ufEmPoP7Fy4hXegKZR9cCWcIe/A6H2xWF1IIJLRTLaU8ol/I7T+um5HD5AwAwNPP
USNU0Eegmvp+xxWu3NX2m1Veot85AgMBAAECgYA3ZdZ673X0oexFlq7AAmrutkHt
CL7LvwrpOiaBjhyTxTeSNWzvtQBkIU8DOI0bIazA4UreAFffwtvEuPmonDb3F+Iq
SMAu42XcGyVZEl+gHlTPU9XRX7nTOXVt+MlRRRxL6t9GkGfUAXI3XxJDXW3c0vBK
UL9xqD8cORXOfE06rQJBAP8mEX1ERkR64Ptsoe4281vjTlNfIbs7NMPkUnrn9N/Y
BLhjNIfQ3HFZG8BTMLfX7kCS9D593DW5tV4Z9BP/c6cCQQDcFzCcVArNh2JSywOQ
ZfTfRbJg/Z5Lt9Fkngv1meeGNPgIMLN8Sg679pAOOWmzdMO3V706rNPzSVMME7E5
oPIfAkEA8pDddarP5tCvTTgUpmTFbakm0KoTZm2+FzHcnA4jRh+XNTjTOv98Y6Ik
eO5d1ZnKXseWvkZncQgxfdnMqqpj5wJAcNq/RVne1DbYlwWchT2Si65MYmmJ8t+F
0mcsULqjOnEMwf5e+ptq5LzwbyrHZYq5FNk7ocufPv/ZQrcSSC+cFwJBAKvOJByS
x56qyGeZLOQlWS2JS3KJo59XuLFGqcbgN9Om9xFa41Yb4N9NvplFivsvZdw3m1Q/
SPIXQuT8RMPDVNQ=
-----END PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
MIICVDCCAb2gAwIBAgIJANfHOBkZr8JOMA0GCSqGSIb3DQEBBQUAMF8xCzAJBgNV
BAYTAlhZMRcwFQYDVQQHEw5DYXN0bGUgQW50aHJheDEjMCEGA1UEChMaUHl0aG9u
IFNvZnR3YXJlIEZvdW5kYXRpb24xEjAQBgNVBAMTCWxvY2FsaG9zdDAeFw0xMDEw
MDgyMzAxNTZaFw0yMDEwMDUyMzAxNTZaMF8xCzAJBgNVBAYTAlhZMRcwFQYDVQQH
Ew5DYXN0bGUgQW50aHJheDEjMCEGA1UEChMaUHl0aG9uIFNvZnR3YXJlIEZvdW5k
YXRpb24xEjAQBgNVBAMTCWxvY2FsaG9zdDCBnzANBgkqhkiG9w0BAQEFAAOBjQAw
gYkCgYEA21vT5isq7F68amYuuNpSFlKDPrMUCa4YWYqZRt2OZ+/3NKaZ2xAiSwr7
6MrQF70t5nLbSPpqE5+5VrS58SY+g/sXLiFd6AplH1wJZwh78DofbFYXUggktFMt
pTyiX8jtP66bkcPkDADA089RI1TQR6Ca+n7HFa7c1fabVV6i3zkCAwEAAaMYMBYw
FAYDVR0RBA0wC4IJbG9jYWxob3N0MA0GCSqGSIb3DQEBBQUAA4GBAHPctQBEQ4wd
BJ6+JcpIraopLn8BGhbjNWj40mmRqWB/NAWF6M5ne7KpGAu7tLeG4hb1zLaldK8G
lxy2GPSRF6LFS48dpEj2HbMv2nvv6xxalDMJ9+DicWgAKTQ6bcX2j3GUkCR0g/T1
CRlNBAAlvhKzO7Clpf9l0YKBEfraJByX
-----END CERTIFICATE-----
future/backports/test/nullcert.pem: (empty)
future/backports/test/keycert.passwd.pem:
-----BEGIN RSA PRIVATE KEY-----
Proc-Type: 4,ENCRYPTED
DEK-Info: DES-EDE3-CBC,1A8D9D2A02EC698A

kJYbfZ8L0sfe9Oty3gw0aloNnY5E8fegRfQLZlNoxTl6jNt0nIwI8kDJ36CZgR9c
u3FDJm/KqrfUoz8vW+qEnWhSG7QPX2wWGPHd4K94Yz/FgrRzZ0DoK7XxXq9gOtVA
AVGQhnz32p+6WhfGsCr9ArXEwRZrTk/FvzEPaU5fHcoSkrNVAGX8IpSVkSDwEDQr
Gv17+cfk99UV1OCza6yKHoFkTtrC+PZU71LomBabivS2Oc4B9hYuSR2hF01wTHP+
YlWNagZOOVtNz4oKK9x9eNQpmfQXQvPPTfusexKIbKfZrMvJoxcm1gfcZ0H/wK6P
6wmXSG35qMOOztCZNtperjs1wzEBXznyK8QmLcAJBjkfarABJX9vBEzZV0OUKhy+
noORFwHTllphbmydLhu6ehLUZMHPhzAS5UN7srtpSN81eerDMy0RMUAwA7/PofX1
94Me85Q8jP0PC9ETdsJcPqLzAPETEYu0ELewKRcrdyWi+tlLFrpE5KT/s5ecbl9l
7B61U4Kfd1PIXc/siINhU3A3bYK+845YyUArUOnKf1kEox7p1RpD7yFqVT04lRTo
cibNKATBusXSuBrp2G6GNuhWEOSafWCKJQAzgCYIp6ZTV2khhMUGppc/2H3CF6cO
zX0KtlPVZC7hLkB6HT8SxYUwF1zqWY7+/XPPdc37MeEZ87Q3UuZwqORLY+Z0hpgt
L5JXBCoklZhCAaN2GqwFLXtGiRSRFGY7xXIhbDTlE65Wv1WGGgDLMKGE1gOz3yAo
2jjG1+yAHJUdE69XTFHSqSkvaloA1W03LdMXZ9VuQJ/ySXCie6ABAQ==
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
MIICVDCCAb2gAwIBAgIJANfHOBkZr8JOMA0GCSqGSIb3DQEBBQUAMF8xCzAJBgNV
BAYTAlhZMRcwFQYDVQQHEw5DYXN0bGUgQW50aHJheDEjMCEGA1UEChMaUHl0aG9u
IFNvZnR3YXJlIEZvdW5kYXRpb24xEjAQBgNVBAMTCWxvY2FsaG9zdDAeFw0xMDEw
MDgyMzAxNTZaFw0yMDEwMDUyMzAxNTZaMF8xCzAJBgNVBAYTAlhZMRcwFQYDVQQH
Ew5DYXN0bGUgQW50aHJheDEjMCEGA1UEChMaUHl0aG9uIFNvZnR3YXJlIEZvdW5k
YXRpb24xEjAQBgNVBAMTCWxvY2FsaG9zdDCBnzANBgkqhkiG9w0BAQEFAAOBjQAw
gYkCgYEA21vT5isq7F68amYuuNpSFlKDPrMUCa4YWYqZRt2OZ+/3NKaZ2xAiSwr7
6MrQF70t5nLbSPpqE5+5VrS58SY+g/sXLiFd6AplH1wJZwh78DofbFYXUggktFMt
pTyiX8jtP66bkcPkDADA089RI1TQR6Ca+n7HFa7c1fabVV6i3zkCAwEAAaMYMBYw
FAYDVR0RBA0wC4IJbG9jYWxob3N0MA0GCSqGSIb3DQEBBQUAA4GBAHPctQBEQ4wd
BJ6+JcpIraopLn8BGhbjNWj40mmRqWB/NAWF6M5ne7KpGAu7tLeG4hb1zLaldK8G
lxy2GPSRF6LFS48dpEj2HbMv2nvv6xxalDMJ9+DicWgAKTQ6bcX2j3GUkCR0g/T1
CRlNBAAlvhKzO7Clpf9l0YKBEfraJByX
-----END CERTIFICATE-----
future/backports/test/ssl_key.passwd.pem:
-----BEGIN RSA PRIVATE KEY-----
Proc-Type: 4,ENCRYPTED
DEK-Info: DES-EDE3-CBC,1A8D9D2A02EC698A

kJYbfZ8L0sfe9Oty3gw0aloNnY5E8fegRfQLZlNoxTl6jNt0nIwI8kDJ36CZgR9c
u3FDJm/KqrfUoz8vW+qEnWhSG7QPX2wWGPHd4K94Yz/FgrRzZ0DoK7XxXq9gOtVA
AVGQhnz32p+6WhfGsCr9ArXEwRZrTk/FvzEPaU5fHcoSkrNVAGX8IpSVkSDwEDQr
Gv17+cfk99UV1OCza6yKHoFkTtrC+PZU71LomBabivS2Oc4B9hYuSR2hF01wTHP+
YlWNagZOOVtNz4oKK9x9eNQpmfQXQvPPTfusexKIbKfZrMvJoxcm1gfcZ0H/wK6P
6wmXSG35qMOOztCZNtperjs1wzEBXznyK8QmLcAJBjkfarABJX9vBEzZV0OUKhy+
noORFwHTllphbmydLhu6ehLUZMHPhzAS5UN7srtpSN81eerDMy0RMUAwA7/PofX1
94Me85Q8jP0PC9ETdsJcPqLzAPETEYu0ELewKRcrdyWi+tlLFrpE5KT/s5ecbl9l
7B61U4Kfd1PIXc/siINhU3A3bYK+845YyUArUOnKf1kEox7p1RpD7yFqVT04lRTo
cibNKATBusXSuBrp2G6GNuhWEOSafWCKJQAzgCYIp6ZTV2khhMUGppc/2H3CF6cO
zX0KtlPVZC7hLkB6HT8SxYUwF1zqWY7+/XPPdc37MeEZ87Q3UuZwqORLY+Z0hpgt
L5JXBCoklZhCAaN2GqwFLXtGiRSRFGY7xXIhbDTlE65Wv1WGGgDLMKGE1gOz3yAo
2jjG1+yAHJUdE69XTFHSqSkvaloA1W03LdMXZ9VuQJ/ySXCie6ABAQ==
-----END RSA PRIVATE KEY-----
future/backports/test/badcert.pem:
-----BEGIN RSA PRIVATE KEY-----
MIICXwIBAAKBgQC8ddrhm+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9L
opdJhTvbGfEj0DQs1IE8M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVH
fhi/VwovESJlaBOp+WMnfhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQAB
AoGBAK0FZpaKj6WnJZN0RqhhK+ggtBWwBnc0U/ozgKz2j1s3fsShYeiGtW6CK5nU
D1dZ5wzhbGThI7LiOXDvRucc9n7vUgi0alqPQ/PFodPxAN/eEYkmXQ7W2k7zwsDA
IUK0KUhktQbLu8qF/m8qM86ba9y9/9YkXuQbZ3COl5ahTZrhAkEA301P08RKv3KM
oXnGU2UHTuJ1MAD2hOrPxjD4/wxA/39EWG9bZczbJyggB4RHu0I3NOSFjAm3HQm0
ANOu5QK9owJBANgOeLfNNcF4pp+UikRFqxk5hULqRAWzVxVrWe85FlPm0VVmHbb/
loif7mqjU8o1jTd/LM7RD9f2usZyE2psaw8CQQCNLhkpX3KO5kKJmS9N7JMZSc4j
oog58yeYO8BBqKKzpug0LXuQultYv2K4veaIO04iL9VLe5z9S/Q1jaCHBBuXAkEA
z8gjGoi1AOp6PBBLZNsncCvcV/0aC+1se4HxTNo2+duKSDnbq+ljqOM+E7odU+Nq
ewvIWOG//e8fssd0mq3HywJBAJ8l/c8GVmrpFTx8r/nZ2Pyyjt3dH1widooDXYSV
q6Gbf41Llo5sYAtmxdndTLASuHKecacTgZVhy0FryZpLKrU=
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
Just bad cert data
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
MIICXwIBAAKBgQC8ddrhm+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9L
opdJhTvbGfEj0DQs1IE8M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVH
fhi/VwovESJlaBOp+WMnfhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQAB
AoGBAK0FZpaKj6WnJZN0RqhhK+ggtBWwBnc0U/ozgKz2j1s3fsShYeiGtW6CK5nU
D1dZ5wzhbGThI7LiOXDvRucc9n7vUgi0alqPQ/PFodPxAN/eEYkmXQ7W2k7zwsDA
IUK0KUhktQbLu8qF/m8qM86ba9y9/9YkXuQbZ3COl5ahTZrhAkEA301P08RKv3KM
oXnGU2UHTuJ1MAD2hOrPxjD4/wxA/39EWG9bZczbJyggB4RHu0I3NOSFjAm3HQm0
ANOu5QK9owJBANgOeLfNNcF4pp+UikRFqxk5hULqRAWzVxVrWe85FlPm0VVmHbb/
loif7mqjU8o1jTd/LM7RD9f2usZyE2psaw8CQQCNLhkpX3KO5kKJmS9N7JMZSc4j
oog58yeYO8BBqKKzpug0LXuQultYv2K4veaIO04iL9VLe5z9S/Q1jaCHBBuXAkEA
z8gjGoi1AOp6PBBLZNsncCvcV/0aC+1se4HxTNo2+duKSDnbq+ljqOM+E7odU+Nq
ewvIWOG//e8fssd0mq3HywJBAJ8l/c8GVmrpFTx8r/nZ2Pyyjt3dH1widooDXYSV
q6Gbf41Llo5sYAtmxdndTLASuHKecacTgZVhy0FryZpLKrU=
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
Just bad cert data
-----END CERTIFICATE-----
future/backports/test/badkey.pem:
-----BEGIN RSA PRIVATE KEY-----
Bad Key, though the cert should be OK
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
MIICpzCCAhCgAwIBAgIJAP+qStv1cIGNMA0GCSqGSIb3DQEBBQUAMIGJMQswCQYD
VQQGEwJVUzERMA8GA1UECBMIRGVsYXdhcmUxEzARBgNVBAcTCldpbG1pbmd0b24x
IzAhBgNVBAoTGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMQwwCgYDVQQLEwNT
U0wxHzAdBgNVBAMTFnNvbWVtYWNoaW5lLnB5dGhvbi5vcmcwHhcNMDcwODI3MTY1
NDUwWhcNMTMwMjE2MTY1NDUwWjCBiTELMAkGA1UEBhMCVVMxETAPBgNVBAgTCERl
bGF3YXJlMRMwEQYDVQQHEwpXaWxtaW5ndG9uMSMwIQYDVQQKExpQeXRob24gU29m
dHdhcmUgRm91bmRhdGlvbjEMMAoGA1UECxMDU1NMMR8wHQYDVQQDExZzb21lbWFj
aGluZS5weXRob24ub3JnMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC8ddrh
m+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9LopdJhTvbGfEj0DQs1IE8
M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVHfhi/VwovESJlaBOp+WMn
fhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQABoxUwEzARBglghkgBhvhC
AQEEBAMCBkAwDQYJKoZIhvcNAQEFBQADgYEAF4Q5BVqmCOLv1n8je/Jw9K669VXb
08hyGzQhkemEBYQd6fzQ9A/1ZzHkJKb1P6yreOLSEh4KcxYPyrLRC1ll8nr5OlCx
CMhKkTnR6qBsdNV0XtdU2+N25hqW+Ma4ZeqsN/iiJVCGNOZGnvQuvCAGWF8+J/f/
iHkC6gGdBJhogs4=
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
Bad Key, though the cert should be OK
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
MIICpzCCAhCgAwIBAgIJAP+qStv1cIGNMA0GCSqGSIb3DQEBBQUAMIGJMQswCQYD
VQQGEwJVUzERMA8GA1UECBMIRGVsYXdhcmUxEzARBgNVBAcTCldpbG1pbmd0b24x
IzAhBgNVBAoTGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMQwwCgYDVQQLEwNT
U0wxHzAdBgNVBAMTFnNvbWVtYWNoaW5lLnB5dGhvbi5vcmcwHhcNMDcwODI3MTY1
NDUwWhcNMTMwMjE2MTY1NDUwWjCBiTELMAkGA1UEBhMCVVMxETAPBgNVBAgTCERl
bGF3YXJlMRMwEQYDVQQHEwpXaWxtaW5ndG9uMSMwIQYDVQQKExpQeXRob24gU29m
dHdhcmUgRm91bmRhdGlvbjEMMAoGA1UECxMDU1NMMR8wHQYDVQQDExZzb21lbWFj
aGluZS5weXRob24ub3JnMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC8ddrh
m+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9LopdJhTvbGfEj0DQs1IE8
M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVHfhi/VwovESJlaBOp+WMn
fhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQABoxUwEzARBglghkgBhvhC
AQEEBAMCBkAwDQYJKoZIhvcNAQEFBQADgYEAF4Q5BVqmCOLv1n8je/Jw9K669VXb
08hyGzQhkemEBYQd6fzQ9A/1ZzHkJKb1P6yreOLSEh4KcxYPyrLRC1ll8nr5OlCx
CMhKkTnR6qBsdNV0XtdU2+N25hqW+Ma4ZeqsN/iiJVCGNOZGnvQuvCAGWF8+J/f/
iHkC6gGdBJhogs4=
-----END CERTIFICATE-----
future/backports/socketserver.py:
"""Generic socket server classes.

This module tries to capture the various aspects of defining a server:

For socket-based servers:

- address family:
        - AF_INET{,6}: IP (Internet Protocol) sockets (default)
        - AF_UNIX: Unix domain sockets
        - others, e.g. AF_DECNET, are conceivable (see <socket.h>)
- socket type:
        - SOCK_STREAM (reliable stream, e.g. TCP)
        - SOCK_DGRAM (datagrams, e.g. UDP)

For request-based servers (including socket-based):

- client address verification before further looking at the request
        (This is actually a hook for any processing that needs to look
         at the request before anything else, e.g. logging)
- how to handle multiple requests:
        - synchronous (one request is handled at a time)
        - forking (each request is handled by a new process)
        - threading (each request is handled by a new thread)

The classes in this module favor the server type that is simplest to
write: a synchronous TCP/IP server.  This is bad class design, but it
saves some typing.  (There's also the issue that a deep class hierarchy
slows down method lookups.)

There are five classes in an inheritance diagram, four of which represent
synchronous servers of four types:

        +------------+
        | BaseServer |
        +------------+
              |
              v
        +-----------+        +------------------+
        | TCPServer |------->| UnixStreamServer |
        +-----------+        +------------------+
              |
              v
        +-----------+        +--------------------+
        | UDPServer |------->| UnixDatagramServer |
        +-----------+        +--------------------+

Note that UnixDatagramServer derives from UDPServer, not from
UnixStreamServer -- the only difference between an IP and a Unix
stream server is the address family, which is simply repeated in both
unix server classes.

Forking and threading versions of each type of server can be created
using the ForkingMixIn and ThreadingMixIn mix-in classes.  For
instance, a threading UDP server class is created as follows:

        class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass

The Mix-in class must come first, since it overrides a method defined
in UDPServer! Setting the various member variables also changes
the behavior of the underlying server mechanism.
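The ordering rule above is visible directly in the method resolution order. A minimal sketch, using the stdlib `socketserver` (of which this module is a backport); `MyThreadingUDPServer` is an illustrative name:

```python
import socketserver

# The mix-in must come first so its process_request() overrides UDPServer's.
class MyThreadingUDPServer(socketserver.ThreadingMixIn, socketserver.UDPServer):
    daemon_threads = True  # member variable tweak: threads won't block exit

# The MRO confirms ThreadingMixIn wins the method lookup:
print([c.__name__ for c in MyThreadingUDPServer.__mro__[:3]])
# ['MyThreadingUDPServer', 'ThreadingMixIn', 'UDPServer']
```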

To implement a service, you must derive a class from
BaseRequestHandler and redefine its handle() method.  You can then run
various versions of the service by combining one of the server classes
with your request handler class.

The request handler class must be different for datagram or stream
services.  This can be hidden by using the request handler
subclasses StreamRequestHandler or DatagramRequestHandler.
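A minimal runnable sketch of that workflow, assuming the stdlib `socketserver` (which this module backports): derive a handler class, redefine `handle()`, and pair it with a server class.

```python
import socket
import socketserver
import threading

class EchoHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # For stream servers, self.request is the connected socket.
        data = self.request.recv(1024)
        self.request.sendall(data)

# Port 0 asks the OS for any free port; server_bind() records the
# actual address in server_address.
server = socketserver.TCPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

with socket.create_connection(server.server_address) as conn:
    conn.sendall(b"ping")
    reply = conn.recv(1024)

server.shutdown()      # stops the serve_forever loop; blocks until done
server.server_close()
print(reply)  # b'ping'
```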

Of course, you still have to use your head!

For instance, it makes no sense to use a forking server if the service
contains state in memory that can be modified by requests (since the
modifications in the child process would never reach the initial state
kept in the parent process and passed to each child).  In this case,
you can use a threading server, but you will probably have to use
locks to prevent two requests that arrive nearly simultaneously from
applying conflicting changes to the server state.

On the other hand, if you are building e.g. an HTTP server, where all
data is stored externally (e.g. in the file system), a synchronous
class will essentially render the service "deaf" while one request is
being handled -- which may be for a very long time if a client is slow
to read all the data it has requested.  Here a threading or forking
server is appropriate.

In some cases, it may be appropriate to process part of a request
synchronously, but to finish processing in a forked child depending on
the request data.  This can be implemented by using a synchronous
server and doing an explicit fork in the request handler class
handle() method.
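A sketch of that fork-in-the-handler pattern, assuming a POSIX platform and the stdlib `socketserver`; `PartialForkHandler` and the 64-byte "header" read are hypothetical illustrations, not part of this module:

```python
import os
import socketserver

class PartialForkHandler(socketserver.BaseRequestHandler):
    """Hypothetical handler: read the request synchronously, fork to finish."""
    def handle(self):
        header = self.request.recv(64)    # quick synchronous part
        pid = os.fork()
        if pid:
            os.waitpid(pid, 0)            # parent: reap the child and return
            return
        try:
            self.request.sendall(header)  # slow work happens in the child
        finally:
            os._exit(0)                   # the child must never return normally
```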

Another approach to handling multiple simultaneous requests in an
environment that supports neither threads nor fork (or where these are
too expensive or inappropriate for the service) is to maintain an
explicit table of partially finished requests and to use select() to
decide which request to work on next (or whether to handle a new
incoming request).  This is particularly important for stream services
where each client can potentially be connected for a long time (if
threads or subprocesses cannot be used).
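The select()-based approach can be sketched as a single poll step over a listening socket plus a table of in-progress connections (a standalone illustration, not part of this module):

```python
import select
import socket

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(5)
listener.setblocking(False)

clients = {}  # fileno -> socket: the "table of partially finished requests"

def poll_once(timeout=0.5):
    """One pass: accept new clients and make a little progress on old ones."""
    readable, _, _ = select.select([listener] + list(clients.values()),
                                   [], [], timeout)
    for sock in readable:
        if sock is listener:
            conn, _ = listener.accept()
            clients[conn.fileno()] = conn   # start tracking a new request
        else:
            data = sock.recv(1024)
            if data:
                sock.sendall(data)          # work on this request a bit
            else:
                del clients[sock.fileno()]  # client finished; drop it
                sock.close()
```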

Future work:
- Standard classes for Sun RPC (which uses either UDP or TCP)
- Standard mix-in classes to implement various authentication
  and encryption schemes
- Standard framework for select-based multiplexing

XXX Open problems:
- What to do with out-of-band data?

BaseServer:
- split generic "request" functionality out into BaseServer class.
  Copyright (C) 2000  Luke Kenneth Casson Leighton <lkcl@samba.org>

  example: read entries from a SQL database (requires overriding
  get_request() to return a table entry from the database).
  entry is processed by a RequestHandlerClass.

"""

# Author of the BaseServer patch: Luke Kenneth Casson Leighton

# XXX Warning!
# There is a test suite for this module, but it cannot be run by the
# standard regression test.
# To run it manually, run Lib/test/test_socketserver.py.

from __future__ import (absolute_import, print_function)

__version__ = "0.4"


import socket
import select
import sys
import os
import errno
try:
    import threading
except ImportError:
    import dummy_threading as threading

__all__ = ["TCPServer","UDPServer","ForkingUDPServer","ForkingTCPServer",
           "ThreadingUDPServer","ThreadingTCPServer","BaseRequestHandler",
           "StreamRequestHandler","DatagramRequestHandler",
           "ThreadingMixIn", "ForkingMixIn"]
if hasattr(socket, "AF_UNIX"):
    __all__.extend(["UnixStreamServer","UnixDatagramServer",
                    "ThreadingUnixStreamServer",
                    "ThreadingUnixDatagramServer"])

def _eintr_retry(func, *args):
    """restart a system call interrupted by EINTR"""
    while True:
        try:
            return func(*args)
        except OSError as e:
            if e.errno != errno.EINTR:
                raise

class BaseServer(object):

    """Base class for server classes.

    Methods for the caller:

    - __init__(server_address, RequestHandlerClass)
    - serve_forever(poll_interval=0.5)
    - shutdown()
    - handle_request()  # if you do not use serve_forever()
    - fileno() -> int   # for select()

    Methods that may be overridden:

    - server_bind()
    - server_activate()
    - get_request() -> request, client_address
    - handle_timeout()
    - verify_request(request, client_address)
    - server_close()
    - process_request(request, client_address)
    - shutdown_request(request)
    - close_request(request)
    - service_actions()
    - handle_error()

    Methods for derived classes:

    - finish_request(request, client_address)

    Class variables that may be overridden by derived classes or
    instances:

    - timeout
    - address_family
    - socket_type
    - allow_reuse_address

    Instance variables:

    - RequestHandlerClass
    - socket

    """

    timeout = None

    def __init__(self, server_address, RequestHandlerClass):
        """Constructor.  May be extended, do not override."""
        self.server_address = server_address
        self.RequestHandlerClass = RequestHandlerClass
        self.__is_shut_down = threading.Event()
        self.__shutdown_request = False

    def server_activate(self):
        """Called by constructor to activate the server.

        May be overridden.

        """
        pass

    def serve_forever(self, poll_interval=0.5):
        """Handle one request at a time until shutdown.

        Polls for shutdown every poll_interval seconds. Ignores
        self.timeout. If you need to do periodic tasks, do them in
        another thread.
        """
        self.__is_shut_down.clear()
        try:
            while not self.__shutdown_request:
                # XXX: Consider using another file descriptor or
                # connecting to the socket to wake this up instead of
                # polling. Polling reduces our responsiveness to a
                # shutdown request and wastes cpu at all other times.
                r, w, e = _eintr_retry(select.select, [self], [], [],
                                       poll_interval)
                if self in r:
                    self._handle_request_noblock()

                self.service_actions()
        finally:
            self.__shutdown_request = False
            self.__is_shut_down.set()

    def shutdown(self):
        """Stops the serve_forever loop.

        Blocks until the loop has finished. This must be called while
        serve_forever() is running in another thread, or it will
        deadlock.
        """
        self.__shutdown_request = True
        self.__is_shut_down.wait()

    def service_actions(self):
        """Called by the serve_forever() loop.

        May be overridden by a subclass / Mixin to implement any code that
        needs to be run during the loop.
        """
        pass

    # The distinction between handling, getting, processing and
    # finishing a request is fairly arbitrary.  Remember:
    #
    # - handle_request() is the top-level call.  It calls
    #   select, get_request(), verify_request() and process_request()
    # - get_request() is different for stream or datagram sockets
    # - process_request() is the place that may fork a new process
    #   or create a new thread to finish the request
    # - finish_request() instantiates the request handler class;
    #   this constructor will handle the request all by itself

    def handle_request(self):
        """Handle one request, possibly blocking.

        Respects self.timeout.
        """
        # Support people who used socket.settimeout() to escape
        # handle_request before self.timeout was available.
        timeout = self.socket.gettimeout()
        if timeout is None:
            timeout = self.timeout
        elif self.timeout is not None:
            timeout = min(timeout, self.timeout)
        fd_sets = _eintr_retry(select.select, [self], [], [], timeout)
        if not fd_sets[0]:
            self.handle_timeout()
            return
        self._handle_request_noblock()

    def _handle_request_noblock(self):
        """Handle one request, without blocking.

        I assume that select.select has returned that the socket is
        readable before this function was called, so there should be
        no risk of blocking in get_request().
        """
        try:
            request, client_address = self.get_request()
        except socket.error:
            return
        if self.verify_request(request, client_address):
            try:
                self.process_request(request, client_address)
            except:
                self.handle_error(request, client_address)
                self.shutdown_request(request)

    def handle_timeout(self):
        """Called if no new request arrives within self.timeout.

        Overridden by ForkingMixIn.
        """
        pass

    def verify_request(self, request, client_address):
        """Verify the request.  May be overridden.

        Return True if we should proceed with this request.

        """
        return True

    def process_request(self, request, client_address):
        """Call finish_request.

        Overridden by ForkingMixIn and ThreadingMixIn.

        """
        self.finish_request(request, client_address)
        self.shutdown_request(request)

    def server_close(self):
        """Called to clean-up the server.

        May be overridden.

        """
        pass

    def finish_request(self, request, client_address):
        """Finish one request by instantiating RequestHandlerClass."""
        self.RequestHandlerClass(request, client_address, self)

    def shutdown_request(self, request):
        """Called to shutdown and close an individual request."""
        self.close_request(request)

    def close_request(self, request):
        """Called to clean up an individual request."""
        pass

    def handle_error(self, request, client_address):
        """Handle an error gracefully.  May be overridden.

        The default is to print a traceback and continue.

        """
        print('-'*40)
        print('Exception happened during processing of request from', end=' ')
        print(client_address)
        import traceback
        traceback.print_exc() # XXX But this goes to stderr!
        print('-'*40)


class TCPServer(BaseServer):

    """Base class for various socket-based server classes.

    Defaults to synchronous IP stream (i.e., TCP).

    Methods for the caller:

    - __init__(server_address, RequestHandlerClass, bind_and_activate=True)
    - serve_forever(poll_interval=0.5)
    - shutdown()
    - handle_request()  # if you don't use serve_forever()
    - fileno() -> int   # for select()

    Methods that may be overridden:

    - server_bind()
    - server_activate()
    - get_request() -> request, client_address
    - handle_timeout()
    - verify_request(request, client_address)
    - process_request(request, client_address)
    - shutdown_request(request)
    - close_request(request)
    - handle_error()

    Methods for derived classes:

    - finish_request(request, client_address)

    Class variables that may be overridden by derived classes or
    instances:

    - timeout
    - address_family
    - socket_type
    - request_queue_size (only for stream sockets)
    - allow_reuse_address

    Instance variables:

    - server_address
    - RequestHandlerClass
    - socket

    """

    address_family = socket.AF_INET

    socket_type = socket.SOCK_STREAM

    request_queue_size = 5

    allow_reuse_address = False

    def __init__(self, server_address, RequestHandlerClass, bind_and_activate=True):
        """Constructor.  May be extended, do not override."""
        BaseServer.__init__(self, server_address, RequestHandlerClass)
        self.socket = socket.socket(self.address_family,
                                    self.socket_type)
        if bind_and_activate:
            self.server_bind()
            self.server_activate()

    def server_bind(self):
        """Called by constructor to bind the socket.

        May be overridden.

        """
        if self.allow_reuse_address:
            self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.socket.bind(self.server_address)
        self.server_address = self.socket.getsockname()

    def server_activate(self):
        """Called by constructor to activate the server.

        May be overridden.

        """
        self.socket.listen(self.request_queue_size)

    def server_close(self):
        """Called to clean-up the server.

        May be overridden.

        """
        self.socket.close()

    def fileno(self):
        """Return socket file number.

        Interface required by select().

        """
        return self.socket.fileno()

    def get_request(self):
        """Get the request and client address from the socket.

        May be overridden.

        """
        return self.socket.accept()

    def shutdown_request(self, request):
        """Called to shutdown and close an individual request."""
        try:
            #explicitly shutdown.  socket.close() merely releases
            #the socket and waits for GC to perform the actual close.
            request.shutdown(socket.SHUT_WR)
        except socket.error:
            pass #some platforms may raise ENOTCONN here
        self.close_request(request)

    def close_request(self, request):
        """Called to clean up an individual request."""
        request.close()


class UDPServer(TCPServer):

    """UDP server class."""

    allow_reuse_address = False

    socket_type = socket.SOCK_DGRAM

    max_packet_size = 8192

    def get_request(self):
        data, client_addr = self.socket.recvfrom(self.max_packet_size)
        return (data, self.socket), client_addr

    def server_activate(self):
        # No need to call listen() for UDP.
        pass

    def shutdown_request(self, request):
        # No need to shutdown anything.
        self.close_request(request)

    def close_request(self, request):
        # No need to close anything.
        pass

class ForkingMixIn(object):

    """Mix-in class to handle each request in a new process."""

    timeout = 300
    active_children = None
    max_children = 40

    def collect_children(self):
        """Internal routine to wait for children that have exited."""
        if self.active_children is None: return
        while len(self.active_children) >= self.max_children:
            # XXX: This will wait for any child process, not just ones
            # spawned by this library. This could confuse other
            # libraries that expect to be able to wait for their own
            # children.
            try:
                pid, status = os.waitpid(0, 0)
            except os.error:
                pid = None
            if pid not in self.active_children: continue
            self.active_children.remove(pid)

        # XXX: This loop runs more system calls than it ought
        # to. There should be a way to put the active_children into a
        # process group and then use os.waitpid(-pgid) to wait for any
        # of that set, but I couldn't find a way to allocate pgids
        # that couldn't collide.
        for child in self.active_children:
            try:
                pid, status = os.waitpid(child, os.WNOHANG)
            except os.error:
                pid = None
            if not pid: continue
            try:
                self.active_children.remove(pid)
            except ValueError as e:
                # Use str(e): exceptions have no .message attribute on Python 3.
                raise ValueError('%s. x=%d and list=%r' % (e, pid,
                                                           self.active_children))

    def handle_timeout(self):
        """Wait for zombies after self.timeout seconds of inactivity.

        May be extended, do not override.
        """
        self.collect_children()

    def service_actions(self):
        """Collect the zombie child processes regularly in the ForkingMixIn.

        service_actions is called in the BaseServer's serve_forever loop.
        """
        self.collect_children()

    def process_request(self, request, client_address):
        """Fork a new subprocess to process the request."""
        pid = os.fork()
        if pid:
            # Parent process
            if self.active_children is None:
                self.active_children = []
            self.active_children.append(pid)
            self.close_request(request)
            return
        else:
            # Child process.
            # This must never return, hence os._exit()!
            try:
                self.finish_request(request, client_address)
                self.shutdown_request(request)
                os._exit(0)
            except:
                try:
                    self.handle_error(request, client_address)
                    self.shutdown_request(request)
                finally:
                    os._exit(1)


class ThreadingMixIn(object):
    """Mix-in class to handle each request in a new thread."""

    # Decides how threads will act upon termination of the
    # main process
    daemon_threads = False

    def process_request_thread(self, request, client_address):
        """Same as in BaseServer but as a thread.

        In addition, exception handling is done here.

        """
        try:
            self.finish_request(request, client_address)
            self.shutdown_request(request)
        except:
            self.handle_error(request, client_address)
            self.shutdown_request(request)

    def process_request(self, request, client_address):
        """Start a new thread to process the request."""
        t = threading.Thread(target = self.process_request_thread,
                             args = (request, client_address))
        t.daemon = self.daemon_threads
        t.start()


class ForkingUDPServer(ForkingMixIn, UDPServer): pass
class ForkingTCPServer(ForkingMixIn, TCPServer): pass

class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass
class ThreadingTCPServer(ThreadingMixIn, TCPServer): pass
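
A minimal end-to-end check of the `ThreadingTCPServer`/`BaseRequestHandler` combination defined above. The sketch imports the stdlib `socketserver` module (which ships this same code) and echoes one uppercased message; the handler name and the loopback/port-0 choices are illustrative.

```python
import socket
import socketserver  # stdlib module containing the classes defined above
import threading

class UpperEchoHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # For TCP servers, self.request is the connected client socket.
        data = self.request.recv(1024)
        self.request.sendall(data.upper())

# Port 0 lets the OS pick a free port; server_address reports the real one.
server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), UpperEchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

with socket.create_connection(server.server_address) as conn:
    conn.sendall(b"ping")
    reply = conn.recv(1024)

server.shutdown()
server.server_close()
print(reply)  # b'PING'
```

Each connection runs `process_request_thread` in its own thread, so slow clients do not block the accept loop.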

if hasattr(socket, 'AF_UNIX'):

    class UnixStreamServer(TCPServer):
        address_family = socket.AF_UNIX

    class UnixDatagramServer(UDPServer):
        address_family = socket.AF_UNIX

    class ThreadingUnixStreamServer(ThreadingMixIn, UnixStreamServer): pass

    class ThreadingUnixDatagramServer(ThreadingMixIn, UnixDatagramServer): pass

class BaseRequestHandler(object):

    """Base class for request handler classes.

    This class is instantiated for each request to be handled.  The
    constructor sets the instance variables request, client_address
    and server, and then calls the handle() method.  To implement a
    specific service, all you need to do is to derive a class which
    defines a handle() method.

    The handle() method can find the request as self.request, the
    client address as self.client_address, and the server (in case it
    needs access to per-server information) as self.server.  Since a
    separate instance is created for each request, the handle() method
    can define arbitrary other instance variables.

    """

    def __init__(self, request, client_address, server):
        self.request = request
        self.client_address = client_address
        self.server = server
        self.setup()
        try:
            self.handle()
        finally:
            self.finish()

    def setup(self):
        pass

    def handle(self):
        pass

    def finish(self):
        pass


# The following two classes make it possible to use the same service
# class for stream or datagram servers.
# Each class sets up these instance variables:
# - rfile: a file object from which the request is read
# - wfile: a file object to which the reply is written
# When the handle() method returns, wfile is flushed properly


class StreamRequestHandler(BaseRequestHandler):

    """Define self.rfile and self.wfile for stream sockets."""

    # Default buffer sizes for rfile, wfile.
    # We default rfile to buffered because otherwise it could be
    # really slow for large data (a getc() call per byte); we make
    # wfile unbuffered because (a) often after a write() we want to
    # read and we need to flush the line; (b) big writes to unbuffered
    # files are typically optimized by stdio even when big reads
    # aren't.
    rbufsize = -1
    wbufsize = 0

    # A timeout to apply to the request socket, if not None.
    timeout = None

    # Disable nagle algorithm for this socket, if True.
    # Use only when wbufsize != 0, to avoid small packets.
    disable_nagle_algorithm = False

    def setup(self):
        self.connection = self.request
        if self.timeout is not None:
            self.connection.settimeout(self.timeout)
        if self.disable_nagle_algorithm:
            self.connection.setsockopt(socket.IPPROTO_TCP,
                                       socket.TCP_NODELAY, True)
        self.rfile = self.connection.makefile('rb', self.rbufsize)
        self.wfile = self.connection.makefile('wb', self.wbufsize)

    def finish(self):
        if not self.wfile.closed:
            try:
                self.wfile.flush()
            except socket.error:
                # A final socket error may have occurred here, such as
                # the local error ECONNABORTED.
                pass
        self.wfile.close()
        self.rfile.close()
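
The rfile/wfile split described in the comments above (buffered reads, unbuffered writes) can be seen in a short line-oriented handler; this sketch again leans on the stdlib `socketserver` module, and the class and message names are illustrative.

```python
import socket
import socketserver
import threading

class LineUpper(socketserver.StreamRequestHandler):
    def handle(self):
        # rfile is fully buffered (rbufsize = -1) so readline() is cheap;
        # wfile is unbuffered (wbufsize = 0) so the reply hits the wire
        # without an explicit flush.
        line = self.rfile.readline().strip()
        self.wfile.write(line.upper() + b"\n")

server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), LineUpper)
threading.Thread(target=server.serve_forever, daemon=True).start()

with socket.create_connection(server.server_address) as conn:
    conn.sendall(b"hello\n")
    reply = conn.makefile("rb").readline()

server.shutdown()
server.server_close()
print(reply)  # b'HELLO\n'
```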


class DatagramRequestHandler(BaseRequestHandler):

    # XXX Regrettably, I cannot get this working on Linux;
    # s.recvfrom() doesn't return a meaningful client address.

    """Define self.rfile and self.wfile for datagram sockets."""

    def setup(self):
        from io import BytesIO
        self.packet, self.socket = self.request
        self.rfile = BytesIO(self.packet)
        self.wfile = BytesIO()

    def finish(self):
        self.socket.sendto(self.wfile.getvalue(), self.client_address)
PKgDu\dd����.future/backports/email/_header_value_parser.pynu�[���"""Header value parser implementing various email-related RFC parsing rules.

The parsing methods defined in this module implement various email related
parsing rules.  Principal among them is RFC 5322, which is the follow-on
to RFC 2822 and primarily a clarification of the former.  It also implements
RFC 2047 encoded word decoding.

RFC 5322 goes to considerable trouble to maintain backward compatibility with
RFC 822 in the parse phase, while cleaning up the structure on the generation
phase.  This parser supports correct RFC 5322 generation by tagging white space
as folding white space only when folding is allowed in the non-obsolete rule
sets.  Actually, the parser is even more generous when accepting input than RFC
5322 mandates, following the spirit of Postel's Law, which RFC 5322 encourages.
Where possible deviations from the standard are annotated on the 'defects'
attribute of tokens that deviate.

The general structure of the parser follows RFC 5322, and uses its terminology
where there is a direct correspondence.  Where the implementation requires a
somewhat different structure than that used by the formal grammar, new terms
that mimic the closest existing terms are used.  Thus, it really helps to have
a copy of RFC 5322 handy when studying this code.

Input to the parser is a string that has already been unfolded according to
RFC 5322 rules.  According to the RFC this unfolding is the very first step, and
this parser leaves the unfolding step to a higher level message parser, which
will have already detected the line breaks that need unfolding while
determining the beginning and end of each header.

The output of the parser is a TokenList object, which is a list subclass.  A
TokenList is a recursive data structure.  The terminal nodes of the structure
are Terminal objects, which are subclasses of str.  These do not correspond
directly to terminal objects in the formal grammar, but are instead more
practical higher level combinations of true terminals.

All TokenList and Terminal objects have a 'value' attribute, which produces the
semantically meaningful value of that part of the parse subtree.  The value of
all whitespace tokens (no matter how many sub-tokens they may contain) is a
single space, as per the RFC rules.  This includes 'CFWS', which is herein
included in the general class of whitespace tokens.  There is one exception to
the rule that whitespace tokens are collapsed into single spaces in values: in
the value of a 'bare-quoted-string' (a quoted-string with no leading or
trailing whitespace), any whitespace that appeared between the quotation marks
is preserved in the returned value.  Note that in all Terminal strings quoted
pairs are turned into their unquoted values.

All TokenList and Terminal objects also have a string value, which attempts to
be a "canonical" representation of the RFC-compliant form of the substring that
produced the parsed subtree, including minimal use of quoted pair quoting.
Whitespace runs are not collapsed.

Comment tokens also have a 'content' attribute providing the string found
between the parens (including any nested comments) with whitespace preserved.

All TokenList and Terminal objects have a 'defects' attribute which is a
possibly empty list of all the defects found while creating the token.  Defects
may appear on any token in the tree, and a composite list of all defects in the
subtree is available through the 'all_defects' attribute of any node.  (For
Terminal nodes x.defects == x.all_defects.)

Each object in a parse tree is called a 'token', and each has a 'token_type'
attribute that gives the name from the RFC 5322 grammar that it represents.
Not all RFC 5322 nodes are produced, and there is one non-RFC 5322 node that
may be produced: 'ptext'.  A 'ptext' is a string of printable ascii characters.
It is returned in place of lists of (ctext/quoted-pair) and
(qtext/quoted-pair).

XXX: provide complete list of token types.
"""
from __future__ import print_function
from __future__ import unicode_literals
from __future__ import division
from __future__ import absolute_import
from future.builtins import int, range, str, super, list

import re
from collections import namedtuple, OrderedDict

from future.backports.urllib.parse import (unquote, unquote_to_bytes)
from future.backports.email import _encoded_words as _ew
from future.backports.email import errors
from future.backports.email import utils

#
# Useful constants and functions
#

WSP = set(' \t')
CFWS_LEADER = WSP | set('(')
SPECIALS = set(r'()<>@,:;.\"[]')
ATOM_ENDS = SPECIALS | WSP
DOT_ATOM_ENDS = ATOM_ENDS - set('.')
# '.', '"', and '(' do not end phrases in order to support obs-phrase
PHRASE_ENDS = SPECIALS - set('."(')
TSPECIALS = (SPECIALS | set('/?=')) - set('.')
TOKEN_ENDS = TSPECIALS | WSP
ASPECIALS = TSPECIALS | set("*'%")
ATTRIBUTE_ENDS = ASPECIALS | WSP
EXTENDED_ATTRIBUTE_ENDS = ATTRIBUTE_ENDS - set('%')

def quote_string(value):
    return '"'+str(value).replace('\\', '\\\\').replace('"', r'\"')+'"'
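
A quick demonstration of the helper above (restated here so the demo is standalone): backslashes are escaped first, then embedded quotes, then the whole value is wrapped in double quotes.

```python
def quote_string(value):
    # Same logic as above: escape backslashes first, then quotes, then wrap.
    return '"' + str(value).replace('\\', '\\\\').replace('"', r'\"') + '"'

print(quote_string('plain'))     # "plain"
print(quote_string('say "hi"'))  # "say \"hi\""
```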

#
# Accumulator for header folding
#

class _Folded(object):

    def __init__(self, maxlen, policy):
        self.maxlen = maxlen
        self.policy = policy
        self.lastlen = 0
        self.stickyspace = None
        self.firstline = True
        self.done = []
        self.current = list()    # uses l.clear()

    def newline(self):
        self.done.extend(self.current)
        self.done.append(self.policy.linesep)
        self.current.clear()
        self.lastlen = 0

    def finalize(self):
        if self.current:
            self.newline()

    def __str__(self):
        return ''.join(self.done)

    def append(self, stoken):
        self.current.append(stoken)

    def append_if_fits(self, token, stoken=None):
        if stoken is None:
            stoken = str(token)
        l = len(stoken)
        if self.stickyspace is not None:
            stickyspace_len = len(self.stickyspace)
            if self.lastlen + stickyspace_len + l <= self.maxlen:
                self.current.append(self.stickyspace)
                self.lastlen += stickyspace_len
                self.current.append(stoken)
                self.lastlen += l
                self.stickyspace = None
                self.firstline = False
                return True
            if token.has_fws:
                ws = token.pop_leading_fws()
                if ws is not None:
                    self.stickyspace += str(ws)
                    stickyspace_len += len(ws)
                token._fold(self)
                return True
            if stickyspace_len and l + 1 <= self.maxlen:
                margin = self.maxlen - l
                if 0 < margin < stickyspace_len:
                    trim = stickyspace_len - margin
                    self.current.append(self.stickyspace[:trim])
                    self.stickyspace = self.stickyspace[trim:]
                    stickyspace_len = trim
                self.newline()
                self.current.append(self.stickyspace)
                self.current.append(stoken)
                self.lastlen = l + stickyspace_len
                self.stickyspace = None
                self.firstline = False
                return True
            if not self.firstline:
                self.newline()
            self.current.append(self.stickyspace)
            self.current.append(stoken)
            self.stickyspace = None
            self.firstline = False
            return True
        if self.lastlen + l <= self.maxlen:
            self.current.append(stoken)
            self.lastlen += l
            return True
        if l < self.maxlen:
            self.newline()
            self.current.append(stoken)
            self.lastlen = l
            return True
        return False
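
Stripped of the stickyspace handling, `append_if_fits` is a greedy first-fit line filler: append to the current line if the token fits, start a fresh line if the token would fit on an empty one, and give an oversized token a line of its own. A minimal sketch of that core loop (the function name is illustrative, not part of the module):

```python
def greedy_fill(tokens, maxlen):
    lines, current, lastlen = [], [], 0
    for tok in tokens:
        l = len(tok)
        if lastlen + l <= maxlen:      # fits on the current line
            current.append(tok)
            lastlen += l
        elif l < maxlen:               # fits on a fresh line
            lines.append("".join(current))
            current, lastlen = [tok], l
        else:                          # too long for any line
            if current:
                lines.append("".join(current))
            lines.append(tok)          # gets a line of its own
            current, lastlen = [], 0
    if current:
        lines.append("".join(current))
    return lines

print(greedy_fill(["abc", "de", "fgh"], 5))  # ['abcde', 'fgh']
```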

#
# TokenList and its subclasses
#

class TokenList(list):

    token_type = None

    def __init__(self, *args, **kw):
        super(TokenList, self).__init__(*args, **kw)
        self.defects = []

    def __str__(self):
        return ''.join(str(x) for x in self)

    def __repr__(self):
        return '{}({})'.format(self.__class__.__name__,
                               super(TokenList, self).__repr__())

    @property
    def value(self):
        return ''.join(x.value for x in self if x.value)

    @property
    def all_defects(self):
        return sum((x.all_defects for x in self), self.defects)

    #
    # Folding API
    #
    # parts():
    #
    # return a list of objects that constitute the "higher level syntactic
    # objects" specified by the RFC as the best places to fold a header line.
    # The returned objects must include leading folding white space, even if
    # this means mutating the underlying parse tree of the object.  Each object
    # is only responsible for returning *its* parts, and should not drill down
    # to any lower level except as required to meet the leading folding white
    # space constraint.
    #
    # _fold(folded):
    #
    #   folded: the result accumulator.  This is an instance of _Folded.
    #       (XXX: I haven't finished factoring this out yet, the folding code
    #       pretty much uses this as a state object.) When the folded.current
    #       contains as much text as will fit, the _fold method should call
    #       folded.newline.
    #  folded.lastlen: the current length of the text stored in folded.current.
    #  folded.maxlen: The maximum number of characters that may appear on a
    #       folded line.  Differs from the policy setting in that "no limit" is
    #       represented by +inf, which means it can be used in the trivially
    #       logical fashion in comparisons.
    #
    # Currently no subclasses implement parts, and I think this will remain
    # true.  A subclass only needs to implement _fold when the generic version
    # isn't sufficient.  _fold will need to be implemented primarily when it is
    # possible for encoded words to appear in the specialized token-list, since
    # there is no generic algorithm that can know where exactly the encoded
    # words are allowed.  A _fold implementation is responsible for filling
    # lines in the same general way that the top level _fold does. It may, and
    # should, call the _fold method of sub-objects in a similar fashion to that
    # of the top level _fold.
    #
    # XXX: I'm hoping it will be possible to factor the existing code further
    # to reduce redundancy and make the logic clearer.

    @property
    def parts(self):
        klass = self.__class__
        this = list()
        for token in self:
            if token.startswith_fws():
                if this:
                    yield this[0] if len(this)==1 else klass(this)
                    this.clear()
            end_ws = token.pop_trailing_ws()
            this.append(token)
            if end_ws:
                yield klass(this)
                this = [end_ws]
        if this:
            yield this[0] if len(this)==1 else klass(this)

    def startswith_fws(self):
        return self[0].startswith_fws()

    def pop_leading_fws(self):
        if self[0].token_type == 'fws':
            return self.pop(0)
        return self[0].pop_leading_fws()

    def pop_trailing_ws(self):
        if self[-1].token_type == 'cfws':
            return self.pop(-1)
        return self[-1].pop_trailing_ws()

    @property
    def has_fws(self):
        for part in self:
            if part.has_fws:
                return True
        return False

    def has_leading_comment(self):
        return self[0].has_leading_comment()

    @property
    def comments(self):
        comments = []
        for token in self:
            comments.extend(token.comments)
        return comments

    def fold(self, **_3to2kwargs):
        # max_line_length 0/None means no limit, i.e. infinitely long.
        policy = _3to2kwargs['policy']; del _3to2kwargs['policy']
        maxlen = policy.max_line_length or float("+inf")
        folded = _Folded(maxlen, policy)
        self._fold(folded)
        folded.finalize()
        return str(folded)

    def as_encoded_word(self, charset):
        # This works only for things returned by 'parts', which include
        # the leading fws, if any, that should be used.
        res = []
        ws = self.pop_leading_fws()
        if ws:
            res.append(ws)
        trailer = self.pop(-1) if self[-1].token_type=='fws' else ''
        res.append(_ew.encode(str(self), charset))
        res.append(trailer)
        return ''.join(res)

    def cte_encode(self, charset, policy):
        res = []
        for part in self:
            res.append(part.cte_encode(charset, policy))
        return ''.join(res)

    def _fold(self, folded):
        for part in self.parts:
            tstr = str(part)
            tlen = len(tstr)
            try:
                str(part).encode('us-ascii')
            except UnicodeEncodeError:
                if any(isinstance(x, errors.UndecodableBytesDefect)
                        for x in part.all_defects):
                    charset = 'unknown-8bit'
                else:
                    # XXX: this should be a policy setting
                    charset = 'utf-8'
                tstr = part.cte_encode(charset, folded.policy)
                tlen = len(tstr)
            if folded.append_if_fits(part, tstr):
                continue
            # Peel off the leading whitespace if any and make it sticky, to
            # avoid infinite recursion.
            ws = part.pop_leading_fws()
            if ws is not None:
                folded.stickyspace = str(ws)
                if folded.append_if_fits(part):
                    continue
            if part.has_fws:
                part._fold(folded)
                continue
            # There are no fold points in this one; it is too long for a single
            # line and can't be split...we just have to put it on its own line.
            folded.append(tstr)
            folded.newline()

    def pprint(self, indent=''):
        print('\n'.join(self._pp(indent='')))

    def ppstr(self, indent=''):
        return '\n'.join(self._pp(indent=''))

    def _pp(self, indent=''):
        yield '{}{}/{}('.format(
            indent,
            self.__class__.__name__,
            self.token_type)
        for token in self:
            if not hasattr(token, '_pp'):
                yield (indent + '    !! invalid element in token '
                                        'list: {!r}'.format(token))
            else:
                for line in token._pp(indent+'    '):
                    yield line
        if self.defects:
            extra = ' Defects: {}'.format(self.defects)
        else:
            extra = ''
        yield '{}){}'.format(indent, extra)


class WhiteSpaceTokenList(TokenList):

    @property
    def value(self):
        return ' '

    @property
    def comments(self):
        return [x.content for x in self if x.token_type=='comment']


class UnstructuredTokenList(TokenList):

    token_type = 'unstructured'

    def _fold(self, folded):
        if any(x.token_type=='encoded-word' for x in self):
            return self._fold_encoded(folded)
        # Here we can have either a pure ASCII string that may or may not
        # have surrogateescape encoded bytes, or a unicode string.
        last_ew = None
        for part in self.parts:
            tstr = str(part)
            is_ew = False
            try:
                str(part).encode('us-ascii')
            except UnicodeEncodeError:
                if any(isinstance(x, errors.UndecodableBytesDefect)
                       for x in part.all_defects):
                    charset = 'unknown-8bit'
                else:
                    charset = 'utf-8'
                if last_ew is not None:
                    # We've already done an EW, combine this one with it
                    # if there's room.
                    chunk = get_unstructured(
                        ''.join(folded.current[last_ew:]+[tstr])).as_encoded_word(charset)
                    oldlastlen = sum(len(x) for x in folded.current[:last_ew])
                    schunk = str(chunk)
                    lchunk = len(schunk)
                    if oldlastlen + lchunk <= folded.maxlen:
                        del folded.current[last_ew:]
                        folded.append(schunk)
                        folded.lastlen = oldlastlen + lchunk
                        continue
                tstr = part.as_encoded_word(charset)
                is_ew = True
            if folded.append_if_fits(part, tstr):
                if is_ew:
                    last_ew = len(folded.current) - 1
                continue
            if is_ew or last_ew:
                # It's too big to fit on the line, but since we've
                # got encoded words we can use encoded word folding.
                part._fold_as_ew(folded)
                continue
            # Peel off the leading whitespace if any and make it sticky, to
            # avoid infinite recursion.
            ws = part.pop_leading_fws()
            if ws is not None:
                folded.stickyspace = str(ws)
                if folded.append_if_fits(part):
                    continue
            if part.has_fws:
                part._fold(folded)
                continue
            # It can't be split...we just have to put it on its own line.
            folded.append(tstr)
            folded.newline()
            last_ew = None

    def cte_encode(self, charset, policy):
        res = []
        last_ew = None
        for part in self:
            spart = str(part)
            try:
                spart.encode('us-ascii')
                res.append(spart)
            except UnicodeEncodeError:
                if last_ew is None:
                    res.append(part.cte_encode(charset, policy))
                    last_ew = len(res)
                else:
                    tl = get_unstructured(''.join(res[last_ew:] + [spart]))
                    res.append(tl.as_encoded_word(charset))
        return ''.join(res)


class Phrase(TokenList):

    token_type = 'phrase'

    def _fold(self, folded):
        # As with Unstructured, we can have pure ASCII with or without
        # surrogateescape encoded bytes, or we could have unicode.  But this
        # case is more complicated, since we have to deal with the various
        # sub-token types and how they can be composed in the face of
        # unicode-that-needs-CTE-encoding, and the fact that if a token has a
        # comment, that comment becomes a barrier across which we can't
        # compose encoded words.
        last_ew = None
        for part in self.parts:
            tstr = str(part)
            tlen = len(tstr)
            has_ew = False
            try:
                str(part).encode('us-ascii')
            except UnicodeEncodeError:
                if any(isinstance(x, errors.UndecodableBytesDefect)
                        for x in part.all_defects):
                    charset = 'unknown-8bit'
                else:
                    charset = 'utf-8'
                if last_ew is not None and not part.has_leading_comment():
                    # We've already done an EW, let's see if we can combine
                    # this one with it.  The last_ew logic ensures that all we
                    # have at this point is atoms, no comments or quoted
                    # strings.  So we can treat the text between the last
                    # encoded word and the content of this token as
                    # unstructured text, and things will work correctly.  But
                    # we have to strip off any trailing comment on this token
                    # first, and if it is a quoted string we have to pull out
                    # the content (we're encoding it, so it no longer needs to
                    # be quoted).
                    if part[-1].token_type == 'cfws' and part.comments:
                        remainder = part.pop(-1)
                    else:
                        remainder = ''
                    for i, token in enumerate(part):
                        if token.token_type == 'bare-quoted-string':
                            part[i] = UnstructuredTokenList(token[:])
                    chunk = get_unstructured(
                        ''.join(folded.current[last_ew:]+[tstr])).as_encoded_word(charset)
                    schunk = str(chunk)
                    lchunk = len(schunk)
                    if last_ew + lchunk <= folded.maxlen:
                        del folded.current[last_ew:]
                        folded.append(schunk)
                        folded.lastlen = sum(len(x) for x in folded.current)
                        continue
                tstr = part.as_encoded_word(charset)
                tlen = len(tstr)
                has_ew = True
            if folded.append_if_fits(part, tstr):
                if has_ew and not part.comments:
                    last_ew = len(folded.current) - 1
                elif part.comments or part.token_type == 'quoted-string':
                    # If a comment is involved we can't combine EWs.  And if a
                    # quoted string is involved, it's not worth the effort to
                    # try to combine them.
                    last_ew = None
                continue
            part._fold(folded)

    def cte_encode(self, charset, policy):
        res = []
        last_ew = None
        is_ew = False
        for part in self:
            spart = str(part)
            try:
                spart.encode('us-ascii')
                res.append(spart)
            except UnicodeEncodeError:
                is_ew = True
                if last_ew is None:
                    if not part.comments:
                        last_ew = len(res)
                    res.append(part.cte_encode(charset, policy))
                elif not part.has_leading_comment():
                    if part[-1].token_type == 'cfws' and part.comments:
                        remainder = part.pop(-1)
                    else:
                        remainder = ''
                    for i, token in enumerate(part):
                        if token.token_type == 'bare-quoted-string':
                            part[i] = UnstructuredTokenList(token[:])
                    tl = get_unstructured(''.join(res[last_ew:] + [spart]))
                    res[last_ew:] = [tl.as_encoded_word(charset)]
            if part.comments or (not is_ew and part.token_type == 'quoted-string'):
                last_ew = None
        return ''.join(res)

class Word(TokenList):

    token_type = 'word'


class CFWSList(WhiteSpaceTokenList):

    token_type = 'cfws'

    def has_leading_comment(self):
        return bool(self.comments)


class Atom(TokenList):

    token_type = 'atom'


class Token(TokenList):

    token_type = 'token'


class EncodedWord(TokenList):

    token_type = 'encoded-word'
    cte = None
    charset = None
    lang = None

    @property
    def encoded(self):
        if self.cte is not None:
            return self.cte
        return _ew.encode(str(self), self.charset)


class QuotedString(TokenList):

    token_type = 'quoted-string'

    @property
    def content(self):
        for x in self:
            if x.token_type == 'bare-quoted-string':
                return x.value

    @property
    def quoted_value(self):
        res = []
        for x in self:
            if x.token_type == 'bare-quoted-string':
                res.append(str(x))
            else:
                res.append(x.value)
        return ''.join(res)

    @property
    def stripped_value(self):
        for token in self:
            if token.token_type == 'bare-quoted-string':
                return token.value


class BareQuotedString(QuotedString):

    token_type = 'bare-quoted-string'

    def __str__(self):
        return quote_string(''.join(str(x) for x in self))

    @property
    def value(self):
        return ''.join(str(x) for x in self)


class Comment(WhiteSpaceTokenList):

    token_type = 'comment'

    def __str__(self):
        return ''.join(sum([
                            ["("],
                            [self.quote(x) for x in self],
                            [")"],
                            ], []))

    def quote(self, value):
        if value.token_type == 'comment':
            return str(value)
        return str(value).replace('\\', '\\\\').replace(
                                  '(', r'\(').replace(
                                  ')', r'\)')

    @property
    def content(self):
        return ''.join(str(x) for x in self)

    @property
    def comments(self):
        return [self.content]

class AddressList(TokenList):

    token_type = 'address-list'

    @property
    def addresses(self):
        return [x for x in self if x.token_type=='address']

    @property
    def mailboxes(self):
        return sum((x.mailboxes
                    for x in self if x.token_type=='address'), [])

    @property
    def all_mailboxes(self):
        return sum((x.all_mailboxes
                    for x in self if x.token_type=='address'), [])


class Address(TokenList):

    token_type = 'address'

    @property
    def display_name(self):
        if self[0].token_type == 'group':
            return self[0].display_name

    @property
    def mailboxes(self):
        if self[0].token_type == 'mailbox':
            return [self[0]]
        elif self[0].token_type == 'invalid-mailbox':
            return []
        return self[0].mailboxes

    @property
    def all_mailboxes(self):
        if self[0].token_type == 'mailbox':
            return [self[0]]
        elif self[0].token_type == 'invalid-mailbox':
            return [self[0]]
        return self[0].all_mailboxes

class MailboxList(TokenList):

    token_type = 'mailbox-list'

    @property
    def mailboxes(self):
        return [x for x in self if x.token_type=='mailbox']

    @property
    def all_mailboxes(self):
        return [x for x in self
            if x.token_type in ('mailbox', 'invalid-mailbox')]


class GroupList(TokenList):

    token_type = 'group-list'

    @property
    def mailboxes(self):
        if not self or self[0].token_type != 'mailbox-list':
            return []
        return self[0].mailboxes

    @property
    def all_mailboxes(self):
        if not self or self[0].token_type != 'mailbox-list':
            return []
        return self[0].all_mailboxes


class Group(TokenList):

    token_type = "group"

    @property
    def mailboxes(self):
        if self[2].token_type != 'group-list':
            return []
        return self[2].mailboxes

    @property
    def all_mailboxes(self):
        if self[2].token_type != 'group-list':
            return []
        return self[2].all_mailboxes

    @property
    def display_name(self):
        return self[0].display_name


class NameAddr(TokenList):

    token_type = 'name-addr'

    @property
    def display_name(self):
        if len(self) == 1:
            return None
        return self[0].display_name

    @property
    def local_part(self):
        return self[-1].local_part

    @property
    def domain(self):
        return self[-1].domain

    @property
    def route(self):
        return self[-1].route

    @property
    def addr_spec(self):
        return self[-1].addr_spec


class AngleAddr(TokenList):

    token_type = 'angle-addr'

    @property
    def local_part(self):
        for x in self:
            if x.token_type == 'addr-spec':
                return x.local_part

    @property
    def domain(self):
        for x in self:
            if x.token_type == 'addr-spec':
                return x.domain

    @property
    def route(self):
        for x in self:
            if x.token_type == 'obs-route':
                return x.domains

    @property
    def addr_spec(self):
        for x in self:
            if x.token_type == 'addr-spec':
                return x.addr_spec
        else:
            return '<>'


class ObsRoute(TokenList):

    token_type = 'obs-route'

    @property
    def domains(self):
        return [x.domain for x in self if x.token_type == 'domain']


class Mailbox(TokenList):

    token_type = 'mailbox'

    @property
    def display_name(self):
        if self[0].token_type == 'name-addr':
            return self[0].display_name

    @property
    def local_part(self):
        return self[0].local_part

    @property
    def domain(self):
        return self[0].domain

    @property
    def route(self):
        if self[0].token_type == 'name-addr':
            return self[0].route

    @property
    def addr_spec(self):
        return self[0].addr_spec


class InvalidMailbox(TokenList):

    token_type = 'invalid-mailbox'

    @property
    def display_name(self):
        return None

    local_part = domain = route = addr_spec = display_name


class Domain(TokenList):

    token_type = 'domain'

    @property
    def domain(self):
        return ''.join(super(Domain, self).value.split())


class DotAtom(TokenList):

    token_type = 'dot-atom'


class DotAtomText(TokenList):

    token_type = 'dot-atom-text'


class AddrSpec(TokenList):

    token_type = 'addr-spec'

    @property
    def local_part(self):
        return self[0].local_part

    @property
    def domain(self):
        if len(self) < 3:
            return None
        return self[-1].domain

    @property
    def value(self):
        if len(self) < 3:
            return self[0].value
        return self[0].value.rstrip()+self[1].value+self[2].value.lstrip()

    @property
    def addr_spec(self):
        nameset = set(self.local_part)
        if len(nameset) > len(nameset-DOT_ATOM_ENDS):
            lp = quote_string(self.local_part)
        else:
            lp = self.local_part
        if self.domain is not None:
            return lp + '@' + self.domain
        return lp


class ObsLocalPart(TokenList):

    token_type = 'obs-local-part'


class DisplayName(Phrase):

    token_type = 'display-name'

    @property
    def display_name(self):
        res = TokenList(self)
        if res[0].token_type == 'cfws':
            res.pop(0)
        else:
            if res[0][0].token_type == 'cfws':
                res[0] = TokenList(res[0][1:])
        if res[-1].token_type == 'cfws':
            res.pop()
        else:
            if res[-1][-1].token_type == 'cfws':
                res[-1] = TokenList(res[-1][:-1])
        return res.value

    @property
    def value(self):
        quote = False
        if self.defects:
            quote = True
        else:
            for x in self:
                if x.token_type == 'quoted-string':
                    quote = True
        if quote:
            pre = post = ''
            if self[0].token_type=='cfws' or self[0][0].token_type=='cfws':
                pre = ' '
            if self[-1].token_type=='cfws' or self[-1][-1].token_type=='cfws':
                post = ' '
            return pre+quote_string(self.display_name)+post
        else:
            return super(DisplayName, self).value


class LocalPart(TokenList):

    token_type = 'local-part'

    @property
    def value(self):
        if self[0].token_type == "quoted-string":
            return self[0].quoted_value
        else:
            return self[0].value

    @property
    def local_part(self):
        # Strip whitespace from front, back, and around dots.
        res = [DOT]
        last = DOT
        last_is_tl = False
        for tok in self[0] + [DOT]:
            if tok.token_type == 'cfws':
                continue
            if (last_is_tl and tok.token_type == 'dot' and
                    last[-1].token_type == 'cfws'):
                res[-1] = TokenList(last[:-1])
            is_tl = isinstance(tok, TokenList)
            if (is_tl and last.token_type == 'dot' and
                    tok[0].token_type == 'cfws'):
                res.append(TokenList(tok[1:]))
            else:
                res.append(tok)
            last = res[-1]
            last_is_tl = is_tl
        res = TokenList(res[1:-1])
        return res.value


class DomainLiteral(TokenList):

    token_type = 'domain-literal'

    @property
    def domain(self):
        return ''.join(super(DomainLiteral, self).value.split())

    @property
    def ip(self):
        for x in self:
            if x.token_type == 'ptext':
                return x.value


class MIMEVersion(TokenList):

    token_type = 'mime-version'
    major = None
    minor = None


class Parameter(TokenList):

    token_type = 'parameter'
    sectioned = False
    extended = False
    charset = 'us-ascii'

    @property
    def section_number(self):
        # Because the first token (the attribute name) eats CFWS, the
        # second token is always the section, if there is one.
        return self[1].number if self.sectioned else 0

    @property
    def param_value(self):
        # This is part of the "handle quoted extended parameters" hack.
        for token in self:
            if token.token_type == 'value':
                return token.stripped_value
            if token.token_type == 'quoted-string':
                for token in token:
                    if token.token_type == 'bare-quoted-string':
                        for token in token:
                            if token.token_type == 'value':
                                return token.stripped_value
        return ''


class InvalidParameter(Parameter):

    token_type = 'invalid-parameter'


class Attribute(TokenList):

    token_type = 'attribute'

    @property
    def stripped_value(self):
        for token in self:
            if token.token_type.endswith('attrtext'):
                return token.value


class Section(TokenList):

    token_type = 'section'
    number = None


class Value(TokenList):

    token_type = 'value'

    @property
    def stripped_value(self):
        token = self[0]
        if token.token_type == 'cfws':
            token = self[1]
        if token.token_type.endswith(
                ('quoted-string', 'attribute', 'extended-attribute')):
            return token.stripped_value
        return self.value


class MimeParameters(TokenList):

    token_type = 'mime-parameters'

    @property
    def params(self):
        # The RFC specifically states that the ordering of parameters is not
        # guaranteed and may be reordered by the transport layer.  So we have
        # to assume the RFC 2231 pieces can come in any order.  However, we
        # output them in the order that we first see a given name, which gives
        # us a stable __str__.
        params = OrderedDict()
        for token in self:
            if not token.token_type.endswith('parameter'):
                continue
            if token[0].token_type != 'attribute':
                continue
            name = token[0].value.strip()
            if name not in params:
                params[name] = []
            params[name].append((token.section_number, token))
        for name, parts in params.items():
            parts = sorted(parts)
            # XXX: there might be more recovery we could do here if, for
            # example, this is really a case of a duplicate attribute name.
            value_parts = []
            charset = parts[0][1].charset
            for i, (section_number, param) in enumerate(parts):
                if section_number != i:
                    param.defects.append(errors.InvalidHeaderDefect(
                        "inconsistent multipart parameter numbering"))
                value = param.param_value
                if param.extended:
                    try:
                        value = unquote_to_bytes(value)
                    except UnicodeEncodeError:
                        # source had surrogate escaped bytes.  What we do now
                        # is a bit of an open question.  I'm not sure this is
                        # the best choice, but it is what the old algorithm did
                        value = unquote(value, encoding='latin-1')
                    else:
                        try:
                            value = value.decode(charset, 'surrogateescape')
                        except LookupError:
                            # XXX: there should really be a custom defect for
                            # unknown character set to make it easy to find,
                            # because otherwise unknown charset is a silent
                            # failure.
                            value = value.decode('us-ascii', 'surrogateescape')
                        if utils._has_surrogates(value):
                            param.defects.append(errors.UndecodableBytesDefect())
                value_parts.append(value)
            value = ''.join(value_parts)
            yield name, value

    def __str__(self):
        params = []
        for name, value in self.params:
            if value:
                params.append('{}={}'.format(name, quote_string(value)))
            else:
                params.append(name)
        params = '; '.join(params)
        return ' ' + params if params else ''


class ParameterizedHeaderValue(TokenList):

    @property
    def params(self):
        for token in reversed(self):
            if token.token_type == 'mime-parameters':
                return token.params
        return {}

    @property
    def parts(self):
        if self and self[-1].token_type == 'mime-parameters':
            # We don't want to start a new line if all of the params don't fit
            # after the value, so unwrap the parameter list.
            return TokenList(self[:-1] + self[-1])
        return TokenList(self).parts


class ContentType(ParameterizedHeaderValue):

    token_type = 'content-type'
    maintype = 'text'
    subtype = 'plain'


class ContentDisposition(ParameterizedHeaderValue):

    token_type = 'content-disposition'
    content_disposition = None


class ContentTransferEncoding(TokenList):

    token_type = 'content-transfer-encoding'
    cte = '7bit'


class HeaderLabel(TokenList):

    token_type = 'header-label'


class Header(TokenList):

    token_type = 'header'

    def _fold(self, folded):
        folded.append(str(self.pop(0)))
        folded.lastlen = len(folded.current[0])
        # The first line of the header is different from all others: we don't
        # want to start a new object on a new line if it has any fold points in
        # it that would allow part of it to be on the first header line.
        # Further, if the first fold point would fit on the new line, we want
        # to do that, but if it doesn't we want to put it on the first line.
        # Folded supports this via the stickyspace attribute.  If this
        # attribute is not None, it does the special handling.
        folded.stickyspace = str(self.pop(0)) if self[0].token_type == 'cfws' else ''
        rest = self.pop(0)
        if self:
            raise ValueError("Malformed Header token list")
        rest._fold(folded)


#
# Terminal classes and instances
#

class Terminal(str):

    def __new__(cls, value, token_type):
        self = super(Terminal, cls).__new__(cls, value)
        self.token_type = token_type
        self.defects = []
        return self

    def __repr__(self):
        return "{}({})".format(self.__class__.__name__, super(Terminal, self).__repr__())

    @property
    def all_defects(self):
        return list(self.defects)

    def _pp(self, indent=''):
        return ["{}{}/{}({}){}".format(
            indent,
            self.__class__.__name__,
            self.token_type,
            super(Terminal, self).__repr__(),
            '' if not self.defects else ' {}'.format(self.defects),
            )]

    def cte_encode(self, charset, policy):
        value = str(self)
        try:
            value.encode('us-ascii')
            return value
        except UnicodeEncodeError:
            return _ew.encode(value, charset)

    def pop_trailing_ws(self):
        # This terminates the recursion.
        return None

    def pop_leading_fws(self):
        # This terminates the recursion.
        return None

    @property
    def comments(self):
        return []

    def has_leading_comment(self):
        return False

    def __getnewargs__(self):
        return (str(self), self.token_type)


class WhiteSpaceTerminal(Terminal):

    @property
    def value(self):
        return ' '

    def startswith_fws(self):
        return True

    has_fws = True


class ValueTerminal(Terminal):

    @property
    def value(self):
        return self

    def startswith_fws(self):
        return False

    has_fws = False

    def as_encoded_word(self, charset):
        return _ew.encode(str(self), charset)


class EWWhiteSpaceTerminal(WhiteSpaceTerminal):

    @property
    def value(self):
        return ''

    @property
    def encoded(self):
        return self[:]

    def __str__(self):
        return ''

    has_fws = True


# XXX these need to become classes and be used as instances so
# that a program can't change them in a parse tree and screw
# up other parse trees.  Maybe we should have tests for that, too.
DOT = ValueTerminal('.', 'dot')
ListSeparator = ValueTerminal(',', 'list-separator')
RouteComponentMarker = ValueTerminal('@', 'route-component-marker')

#
# Parser
#

"""Parse strings according to RFC822/2047/2822/5322 rules.

This is a stateless parser.  Each get_XXX function accepts a string and
returns either a Terminal or a TokenList representing the RFC object named
by the method and a string containing the remaining unparsed characters
from the input.  Thus a parser method consumes the next syntactic construct
of a given type and returns a token representing the construct plus the
unparsed remainder of the input string.

For example, if the first element of a structured header is a 'phrase',
then:

    phrase, value = get_phrase(value)

returns the complete phrase from the start of the string value, plus any
characters left in the string after the phrase is removed.

"""

_wsp_splitter = re.compile(r'([{}]+)'.format(''.join(WSP))).split
_non_atom_end_matcher = re.compile(r"[^{}]+".format(
    ''.join(ATOM_ENDS).replace('\\', '\\\\').replace(']', r'\]'))).match
_non_printable_finder = re.compile(r"[\x00-\x20\x7F]").findall
_non_token_end_matcher = re.compile(r"[^{}]+".format(
    ''.join(TOKEN_ENDS).replace('\\', '\\\\').replace(']', r'\]'))).match
_non_attribute_end_matcher = re.compile(r"[^{}]+".format(
    ''.join(ATTRIBUTE_ENDS).replace('\\', '\\\\').replace(']', r'\]'))).match
_non_extended_attribute_end_matcher = re.compile(r"[^{}]+".format(
    ''.join(EXTENDED_ATTRIBUTE_ENDS).replace(
                                    '\\', '\\\\').replace(']', r'\]'))).match

def _validate_xtext(xtext):
    """If input token contains ASCII non-printables, register a defect."""

    non_printables = _non_printable_finder(xtext)
    if non_printables:
        xtext.defects.append(errors.NonPrintableDefect(non_printables))
    if utils._has_surrogates(xtext):
        xtext.defects.append(errors.UndecodableBytesDefect(
            "Non-ASCII characters found in header token"))

def _get_ptext_to_endchars(value, endchars):
    """Scan printables/quoted-pairs until endchars and return unquoted ptext.

    This function turns a run of qcontent, ccontent-without-comments, or
    dtext-with-quoted-printables into a single string by unquoting any
    quoted printables.  It returns the string, the remaining value, and
    a flag that is True iff there were any quoted printables decoded.

    """
    parts = _wsp_splitter(value, 1)
    fragment, remainder = parts[0], parts[1:]
    vchars = []
    escape = False
    had_qp = False
    for pos in range(len(fragment)):
        if fragment[pos] == '\\':
            if escape:
                escape = False
                had_qp = True
            else:
                escape = True
                continue
        if escape:
            escape = False
        elif fragment[pos] in endchars:
            break
        vchars.append(fragment[pos])
    else:
        pos = pos + 1
    return ''.join(vchars), ''.join([fragment[pos:]] + remainder), had_qp

def _decode_ew_run(value):
    """ Decode a run of RFC2047 encoded words.

        _decode_ew_run(value) -> (text, value, defects)

    Scans the supplied value for a run of tokens that look like they are RFC
    2047 encoded words, decodes those words into text according to RFC 2047
    rules (whitespace between encoded words is discarded), and returns the text
    and the remaining value (including any leading whitespace on the remaining
    value), as well as a list of any defects encountered while decoding.  The
    input value may not have any leading whitespace.

    """
    res = []
    defects = []
    last_ws = ''
    while value:
        try:
            tok, ws, value = _wsp_splitter(value, 1)
        except ValueError:
            tok, ws, value = value, '', ''
        if not (tok.startswith('=?') and tok.endswith('?=')):
            return ''.join(res), last_ws + tok + ws + value, defects
        text, charset, lang, new_defects = _ew.decode(tok)
        res.append(text)
        defects.extend(new_defects)
        last_ws = ws
    return ''.join(res), last_ws, defects

def get_fws(value):
    """FWS = 1*WSP

    This isn't the RFC definition.  We're using fws to represent tokens where
    folding can be done, but when we are parsing the *un*folding has already
    been done so we don't need to watch out for CRLF.

    """
    newvalue = value.lstrip()
    fws = WhiteSpaceTerminal(value[:len(value)-len(newvalue)], 'fws')
    return fws, newvalue
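
# For example (a sketch using the stdlib email._header_value_parser,
# which this module mirrors):

```python
from email._header_value_parser import get_fws

# The whole leading whitespace run becomes one 'fws' token; its .value
# collapses to a single space, while str() preserves the original run.
fws, rest = get_fws('   foo bar')
```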

def get_encoded_word(value):
    """ encoded-word = "=?" charset "?" encoding "?" encoded-text "?="

    """
    ew = EncodedWord()
    if not value.startswith('=?'):
        raise errors.HeaderParseError(
            "expected encoded word but found {}".format(value))
    parts = value[2:].split('?=', 1)
    tok, remainder = parts[0], parts[1:]
    if tok == value[2:]:
        raise errors.HeaderParseError(
            "expected encoded word but found {}".format(value))
    remstr = ''.join(remainder)
    if remstr[:2].isdigit():
        parts = remstr.split('?=', 1)
        rest, remainder = parts[0], parts[1:]
        tok = tok + '?=' + rest
    if len(tok.split()) > 1:
        ew.defects.append(errors.InvalidHeaderDefect(
            "whitespace inside encoded word"))
    ew.cte = value
    value = ''.join(remainder)
    try:
        text, charset, lang, defects = _ew.decode('=?' + tok + '?=')
    except ValueError:
        raise errors.HeaderParseError(
            "encoded word format invalid: '{}'".format(ew.cte))
    ew.charset = charset
    ew.lang = lang
    ew.defects.extend(defects)
    while text:
        if text[0] in WSP:
            token, text = get_fws(text)
            ew.append(token)
            continue
        parts = _wsp_splitter(text, 1)
        chars, remainder = parts[0], parts[1:]
        vtext = ValueTerminal(chars, 'vtext')
        _validate_xtext(vtext)
        ew.append(vtext)
        text = ''.join(remainder)
    return ew, value
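
# A sketch of the happy path (using the stdlib
# email._header_value_parser, which this module mirrors):

```python
from email._header_value_parser import get_encoded_word

# The q-encoded word decodes to 'café'; text after the closing '?=' is
# returned unparsed.
ew, rest = get_encoded_word('=?utf-8?q?caf=C3=A9?= more text')
```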

def get_unstructured(value):
    """unstructured = (*([FWS] vchar) *WSP) / obs-unstruct
       obs-unstruct = *((*LF *CR *(obs-utext *LF *CR)) / FWS)
       obs-utext = %d0 / obs-NO-WS-CTL / LF / CR

       obs-NO-WS-CTL is control characters except WSP/CR/LF.

    So, basically, we have printable runs, plus control characters or nulls in
    the obsolete syntax, separated by whitespace.  Since RFC 2047 uses the
    obsolete syntax in its specification, but requires whitespace on either
    side of the encoded words, I can see no reason to need to separate the
    non-printable-non-whitespace from the printable runs if they occur, so we
    parse this into xtext tokens separated by WSP tokens.

    Because an 'unstructured' value must by definition constitute the entire
    value, this 'get' routine does not return a remaining value, only the
    parsed TokenList.

    """
    # XXX: but what about bare CR and LF?  They might signal the start or
    # end of an encoded word.  YAGNI for now, since our current parsers
    # will never send us strings with bare CR or LF.

    unstructured = UnstructuredTokenList()
    while value:
        if value[0] in WSP:
            token, value = get_fws(value)
            unstructured.append(token)
            continue
        if value.startswith('=?'):
            try:
                token, value = get_encoded_word(value)
            except errors.HeaderParseError:
                pass
            else:
                have_ws = True
                if len(unstructured) > 0:
                    if unstructured[-1].token_type != 'fws':
                        unstructured.defects.append(errors.InvalidHeaderDefect(
                            "missing whitespace before encoded word"))
                        have_ws = False
                if have_ws and len(unstructured) > 1:
                    if unstructured[-2].token_type == 'encoded-word':
                        unstructured[-1] = EWWhiteSpaceTerminal(
                            unstructured[-1], 'fws')
                unstructured.append(token)
                continue
        parts = _wsp_splitter(value, 1)
        tok, remainder = parts[0], parts[1:]
        vtext = ValueTerminal(tok, 'vtext')
        _validate_xtext(vtext)
        unstructured.append(vtext)
        value = ''.join(remainder)
    return unstructured
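
# For example (a sketch using the stdlib email._header_value_parser,
# which this module mirrors):

```python
from email._header_value_parser import get_unstructured

# Words become vtext tokens, whitespace runs become fws tokens, and the
# RFC 2047 encoded word is decoded in place.
ts = get_unstructured('Hello =?utf-8?q?W=C3=B6rld?=')
```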

def get_qp_ctext(value):
    """ctext = <printable ascii except \ ( )>

    This is not the RFC ctext, since we are handling nested comments in comment
    and unquoting quoted-pairs here.  We allow anything except the '()'
    characters, but if we find any ASCII other than the RFC defined printable
    ASCII, a NonPrintableDefect is added to the token's defects list.  Since
    quoted pairs are converted to their unquoted values, what is returned is
    a 'ptext' token.  In this case it is a WhiteSpaceTerminal, so its value
    is ' '.

    """
    ptext, value, _ = _get_ptext_to_endchars(value, '()')
    ptext = WhiteSpaceTerminal(ptext, 'ptext')
    _validate_xtext(ptext)
    return ptext, value

def get_qcontent(value):
    """qcontent = qtext / quoted-pair

    We allow anything except the DQUOTE character, but if we find any ASCII
    other than the RFC defined printable ASCII, a NonPrintableDefect is
    added to the token's defects list.  Any quoted pairs are converted to their
    unquoted values, so what is returned is a 'ptext' token.  In this case it
    is a ValueTerminal.

    """
    ptext, value, _ = _get_ptext_to_endchars(value, '"')
    ptext = ValueTerminal(ptext, 'ptext')
    _validate_xtext(ptext)
    return ptext, value

def get_atext(value):
    """atext = <matches _atext_matcher>

    We allow any non-ATOM_ENDS in atext, but add an InvalidATextDefect to
    the token's defects list if we find non-atext characters.
    """
    m = _non_atom_end_matcher(value)
    if not m:
        raise errors.HeaderParseError(
            "expected atext but found '{}'".format(value))
    atext = m.group()
    value = value[len(atext):]
    atext = ValueTerminal(atext, 'atext')
    _validate_xtext(atext)
    return atext, value

def get_bare_quoted_string(value):
    """bare-quoted-string = DQUOTE *([FWS] qcontent) [FWS] DQUOTE

    A quoted-string without the leading or trailing white space.  Its
    value is the text between the quote marks, with whitespace
    preserved and quoted pairs decoded.
    """
    if value[0] != '"':
        raise errors.HeaderParseError(
            "expected '\"' but found '{}'".format(value))
    bare_quoted_string = BareQuotedString()
    value = value[1:]
    while value and value[0] != '"':
        if value[0] in WSP:
            token, value = get_fws(value)
        else:
            token, value = get_qcontent(value)
        bare_quoted_string.append(token)
    if not value:
        bare_quoted_string.defects.append(errors.InvalidHeaderDefect(
            "end of header inside quoted string"))
        return bare_quoted_string, value
    return bare_quoted_string, value[1:]
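
# A sketch (using the stdlib email._header_value_parser, which this
# module mirrors) of quoted-pair decoding inside the quotes:

```python
from email._header_value_parser import get_bare_quoted_string

# The quoted-pair \" is decoded and the closing DQUOTE ends the token;
# the token's value is the text between the quote marks.
bqs, rest = get_bare_quoted_string('"a \\" b" tail')
```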

def get_comment(value):
    """comment = "(" *([FWS] ccontent) [FWS] ")"
       ccontent = ctext / quoted-pair / comment

    We handle nested comments here, and quoted-pair in our qp-ctext routine.
    """
    if value and value[0] != '(':
        raise errors.HeaderParseError(
            "expected '(' but found '{}'".format(value))
    comment = Comment()
    value = value[1:]
    while value and value[0] != ")":
        if value[0] in WSP:
            token, value = get_fws(value)
        elif value[0] == '(':
            token, value = get_comment(value)
        else:
            token, value = get_qp_ctext(value)
        comment.append(token)
    if not value:
        comment.defects.append(errors.InvalidHeaderDefect(
            "end of header inside comment"))
        return comment, value
    return comment, value[1:]
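
# A sketch of nested-comment handling (using the stdlib
# email._header_value_parser, which this module mirrors):

```python
from email._header_value_parser import get_comment

# '(b)' parses as a nested comment token inside the outer comment;
# parsing stops after the matching ')'.
comment, rest = get_comment('(a (b) c) next')
```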

def get_cfws(value):
    """CFWS = (1*([FWS] comment) [FWS]) / FWS

    """
    cfws = CFWSList()
    while value and value[0] in CFWS_LEADER:
        if value[0] in WSP:
            token, value = get_fws(value)
        else:
            token, value = get_comment(value)
        cfws.append(token)
    return cfws, value

def get_quoted_string(value):
    """quoted-string = [CFWS] <bare-quoted-string> [CFWS]

    'bare-quoted-string' is an intermediate class defined by this
    parser and not by the RFC grammar.  It is the quoted string
    without any attached CFWS.
    """
    quoted_string = QuotedString()
    if value and value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        quoted_string.append(token)
    token, value = get_bare_quoted_string(value)
    quoted_string.append(token)
    if value and value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        quoted_string.append(token)
    return quoted_string, value

def get_atom(value):
    """atom = [CFWS] 1*atext [CFWS]

    """
    atom = Atom()
    if value and value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        atom.append(token)
    if value and value[0] in ATOM_ENDS:
        raise errors.HeaderParseError(
            "expected atom but found '{}'".format(value))
    token, value = get_atext(value)
    atom.append(token)
    if value and value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        atom.append(token)
    return atom, value

def get_dot_atom_text(value):
    """ dot-text = 1*atext *("." 1*atext)

    """
    dot_atom_text = DotAtomText()
    if not value or value[0] in ATOM_ENDS:
        raise errors.HeaderParseError("expected atom at start of "
            "dot-atom-text but found '{}'".format(value))
    while value and value[0] not in ATOM_ENDS:
        token, value = get_atext(value)
        dot_atom_text.append(token)
        if value and value[0] == '.':
            dot_atom_text.append(DOT)
            value = value[1:]
    if dot_atom_text[-1] is DOT:
        raise errors.HeaderParseError("expected atom at end of dot-atom-text "
            "but found '{}'".format('.'+value))
    return dot_atom_text, value
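
# For example (a sketch using the stdlib email._header_value_parser,
# which this module mirrors):

```python
from email._header_value_parser import get_dot_atom_text

# Runs of atext joined by single dots; parsing stops at the '@' special.
dat, rest = get_dot_atom_text('john.doe@example.com')
```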

def get_dot_atom(value):
    """ dot-atom = [CFWS] dot-atom-text [CFWS]

    """
    dot_atom = DotAtom()
    if value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        dot_atom.append(token)
    token, value = get_dot_atom_text(value)
    dot_atom.append(token)
    if value and value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        dot_atom.append(token)
    return dot_atom, value

def get_word(value):
    """word = atom / quoted-string

    Either atom or quoted-string may start with CFWS.  We have to peel off this
    CFWS first to determine which type of word to parse.  Afterward we splice
    the leading CFWS, if any, into the parsed sub-token.

    If neither an atom nor a quoted-string is found before the next special, a
    HeaderParseError is raised.

    The token returned is either an Atom or a QuotedString, as appropriate.
    This means the 'word' level of the formal grammar is not represented in the
    parse tree; this is because having that extra layer when manipulating the
    parse tree is more confusing than it is helpful.

    """
    if value[0] in CFWS_LEADER:
        leader, value = get_cfws(value)
    else:
        leader = None
    if value[0]=='"':
        token, value = get_quoted_string(value)
    elif value[0] in SPECIALS:
        raise errors.HeaderParseError("Expected 'atom' or 'quoted-string' "
                                      "but found '{}'".format(value))
    else:
        token, value = get_atom(value)
    if leader is not None:
        token[:0] = [leader]
    return token, value

def get_phrase(value):
    """ phrase = 1*word / obs-phrase
        obs-phrase = word *(word / "." / CFWS)

    This means a phrase can be a sequence of words, periods, and CFWS in any
    order as long as it starts with at least one word.  If anything other than
    words is detected, an ObsoleteHeaderDefect is added to the token's defect
    list.  We also accept a phrase that starts with CFWS followed by a dot;
    this is registered as an InvalidHeaderDefect, since it is not supported by
    even the obsolete grammar.

    """
    phrase = Phrase()
    try:
        token, value = get_word(value)
        phrase.append(token)
    except errors.HeaderParseError:
        phrase.defects.append(errors.InvalidHeaderDefect(
            "phrase does not start with word"))
    while value and value[0] not in PHRASE_ENDS:
        if value[0]=='.':
            phrase.append(DOT)
            phrase.defects.append(errors.ObsoleteHeaderDefect(
                "period in 'phrase'"))
            value = value[1:]
        else:
            try:
                token, value = get_word(value)
            except errors.HeaderParseError:
                if value[0] in CFWS_LEADER:
                    token, value = get_cfws(value)
                    phrase.defects.append(errors.ObsoleteHeaderDefect(
                        "comment found without atom"))
                else:
                    raise
            phrase.append(token)
    return phrase, value
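
# A sketch of a display-name phrase (using the stdlib
# email._header_value_parser, which this module mirrors):

```python
from email._header_value_parser import get_phrase

# Two atoms form the phrase (each atom absorbs its trailing CFWS);
# parsing stops at the '<' special.
phrase, rest = get_phrase('Foo Bar <foo@example.com>')
```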

def get_local_part(value):
    """ local-part = dot-atom / quoted-string / obs-local-part

    """
    local_part = LocalPart()
    leader = None
    if value[0] in CFWS_LEADER:
        leader, value = get_cfws(value)
    if not value:
        raise errors.HeaderParseError(
            "expected local-part but found '{}'".format(value))
    try:
        token, value = get_dot_atom(value)
    except errors.HeaderParseError:
        try:
            token, value = get_word(value)
        except errors.HeaderParseError:
            if value[0] != '\\' and value[0] in PHRASE_ENDS:
                raise
            token = TokenList()
    if leader is not None:
        token[:0] = [leader]
    local_part.append(token)
    if value and (value[0]=='\\' or value[0] not in PHRASE_ENDS):
        obs_local_part, value = get_obs_local_part(str(local_part) + value)
        if obs_local_part.token_type == 'invalid-obs-local-part':
            local_part.defects.append(errors.InvalidHeaderDefect(
                "local-part is not dot-atom, quoted-string, or obs-local-part"))
        else:
            local_part.defects.append(errors.ObsoleteHeaderDefect(
                "local-part is not a dot-atom (contains CFWS)"))
        local_part[0] = obs_local_part
    try:
        local_part.value.encode('ascii')
    except UnicodeEncodeError:
        local_part.defects.append(errors.NonASCIILocalPartDefect(
                "local-part contains non-ASCII characters)"))
    return local_part, value

def get_obs_local_part(value):
    """ obs-local-part = word *("." word)
    """
    obs_local_part = ObsLocalPart()
    last_non_ws_was_dot = False
    while value and (value[0]=='\\' or value[0] not in PHRASE_ENDS):
        if value[0] == '.':
            if last_non_ws_was_dot:
                obs_local_part.defects.append(errors.InvalidHeaderDefect(
                    "invalid repeated '.'"))
            obs_local_part.append(DOT)
            last_non_ws_was_dot = True
            value = value[1:]
            continue
        elif value[0]=='\\':
            obs_local_part.append(ValueTerminal(value[0],
                                                'misplaced-special'))
            value = value[1:]
            obs_local_part.defects.append(errors.InvalidHeaderDefect(
                "'\\' character outside of quoted-string/ccontent"))
            last_non_ws_was_dot = False
            continue
        if obs_local_part and obs_local_part[-1].token_type != 'dot':
            obs_local_part.defects.append(errors.InvalidHeaderDefect(
                "missing '.' between words"))
        try:
            token, value = get_word(value)
            last_non_ws_was_dot = False
        except errors.HeaderParseError:
            if value[0] not in CFWS_LEADER:
                raise
            token, value = get_cfws(value)
        obs_local_part.append(token)
    if not obs_local_part:
        raise errors.HeaderParseError(
            "expected obs-local-part but found '{}'".format(value))
    if (obs_local_part[0].token_type == 'dot' or
            obs_local_part[0].token_type=='cfws' and
            obs_local_part[1].token_type=='dot'):
        obs_local_part.defects.append(errors.InvalidHeaderDefect(
            "Invalid leading '.' in local part"))
    if (obs_local_part[-1].token_type == 'dot' or
            obs_local_part[-1].token_type=='cfws' and
            obs_local_part[-2].token_type=='dot'):
        obs_local_part.defects.append(errors.InvalidHeaderDefect(
            "Invalid trailing '.' in local part"))
    if obs_local_part.defects:
        obs_local_part.token_type = 'invalid-obs-local-part'
    return obs_local_part, value

def get_dtext(value):
    """ dtext = <printable ascii except \ [ ]> / obs-dtext
        obs-dtext = obs-NO-WS-CTL / quoted-pair

    We allow anything except the excluded characters, but if we find any
    ASCII other than the RFC-defined printable ASCII, a NonPrintableDefect is
    added to the token's defects list.  Quoted pairs are converted to their
    unquoted values, so what is returned is a ptext token, in this case a
    ValueTerminal.  If there were quoted-printables, an ObsoleteHeaderDefect is
    added to the returned token's defect list.

    """
    ptext, value, had_qp = _get_ptext_to_endchars(value, '[]')
    ptext = ValueTerminal(ptext, 'ptext')
    if had_qp:
        ptext.defects.append(errors.ObsoleteHeaderDefect(
            "quoted printable found in domain-literal"))
    _validate_xtext(ptext)
    return ptext, value

def _check_for_early_dl_end(value, domain_literal):
    if value:
        return False
    domain_literal.append(errors.InvalidHeaderDefect(
        "end of input inside domain-literal"))
    domain_literal.append(ValueTerminal(']', 'domain-literal-end'))
    return True

def get_domain_literal(value):
    """ domain-literal = [CFWS] "[" *([FWS] dtext) [FWS] "]" [CFWS]

    """
    domain_literal = DomainLiteral()
    if value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        domain_literal.append(token)
    if not value:
        raise errors.HeaderParseError("expected domain-literal")
    if value[0] != '[':
        raise errors.HeaderParseError("expected '[' at start of domain-literal "
                "but found '{}'".format(value))
    value = value[1:]
    if _check_for_early_dl_end(value, domain_literal):
        return domain_literal, value
    domain_literal.append(ValueTerminal('[', 'domain-literal-start'))
    if value[0] in WSP:
        token, value = get_fws(value)
        domain_literal.append(token)
    token, value = get_dtext(value)
    domain_literal.append(token)
    if _check_for_early_dl_end(value, domain_literal):
        return domain_literal, value
    if value[0] in WSP:
        token, value = get_fws(value)
        domain_literal.append(token)
    if _check_for_early_dl_end(value, domain_literal):
        return domain_literal, value
    if value[0] != ']':
        raise errors.HeaderParseError("expected ']' at end of domain-literal "
                "but found '{}'".format(value))
    domain_literal.append(ValueTerminal(']', 'domain-literal-end'))
    value = value[1:]
    if value and value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        domain_literal.append(token)
    return domain_literal, value

def get_domain(value):
    """ domain = dot-atom / domain-literal / obs-domain
        obs-domain = atom *("." atom)

    """
    domain = Domain()
    leader = None
    if value[0] in CFWS_LEADER:
        leader, value = get_cfws(value)
    if not value:
        raise errors.HeaderParseError(
            "expected domain but found '{}'".format(value))
    if value[0] == '[':
        token, value = get_domain_literal(value)
        if leader is not None:
            token[:0] = [leader]
        domain.append(token)
        return domain, value
    try:
        token, value = get_dot_atom(value)
    except errors.HeaderParseError:
        token, value = get_atom(value)
    if leader is not None:
        token[:0] = [leader]
    domain.append(token)
    if value and value[0] == '.':
        domain.defects.append(errors.ObsoleteHeaderDefect(
            "domain is not a dot-atom (contains CFWS)"))
        if domain[0].token_type == 'dot-atom':
            domain[:] = domain[0]
        while value and value[0] == '.':
            domain.append(DOT)
            token, value = get_atom(value[1:])
            domain.append(token)
    return domain, value

def get_addr_spec(value):
    """ addr-spec = local-part "@" domain

    """
    addr_spec = AddrSpec()
    token, value = get_local_part(value)
    addr_spec.append(token)
    if not value or value[0] != '@':
        addr_spec.defects.append(errors.InvalidHeaderDefect(
            "add-spec local part with no domain"))
        return addr_spec, value
    addr_spec.append(ValueTerminal('@', 'address-at-symbol'))
    token, value = get_domain(value[1:])
    addr_spec.append(token)
    return addr_spec, value

def get_obs_route(value):
    """ obs-route = obs-domain-list ":"
        obs-domain-list = *(CFWS / ",") "@" domain *("," [CFWS] ["@" domain])

        Returns an obs-route token with the appropriate sub-tokens (that is,
        there is no obs-domain-list in the parse tree).
    """
    obs_route = ObsRoute()
    while value and (value[0]==',' or value[0] in CFWS_LEADER):
        if value[0] in CFWS_LEADER:
            token, value = get_cfws(value)
            obs_route.append(token)
        elif value[0] == ',':
            obs_route.append(ListSeparator)
            value = value[1:]
    if not value or value[0] != '@':
        raise errors.HeaderParseError(
            "expected obs-route domain but found '{}'".format(value))
    obs_route.append(RouteComponentMarker)
    token, value = get_domain(value[1:])
    obs_route.append(token)
    while value and value[0]==',':
        obs_route.append(ListSeparator)
        value = value[1:]
        if not value:
            break
        if value[0] in CFWS_LEADER:
            token, value = get_cfws(value)
            obs_route.append(token)
        if value[0] == '@':
            obs_route.append(RouteComponentMarker)
            token, value = get_domain(value[1:])
            obs_route.append(token)
    if not value:
        raise errors.HeaderParseError("end of header while parsing obs-route")
    if value[0] != ':':
        raise errors.HeaderParseError( "expected ':' marking end of "
            "obs-route but found '{}'".format(value))
    obs_route.append(ValueTerminal(':', 'end-of-obs-route-marker'))
    return obs_route, value[1:]

def get_angle_addr(value):
    """ angle-addr = [CFWS] "<" addr-spec ">" [CFWS] / obs-angle-addr
        obs-angle-addr = [CFWS] "<" obs-route addr-spec ">" [CFWS]

    """
    angle_addr = AngleAddr()
    if value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        angle_addr.append(token)
    if not value or value[0] != '<':
        raise errors.HeaderParseError(
            "expected angle-addr but found '{}'".format(value))
    angle_addr.append(ValueTerminal('<', 'angle-addr-start'))
    value = value[1:]
    # Although it is not legal per RFC5322, SMTP uses '<>' in certain
    # circumstances.
    if value[0] == '>':
        angle_addr.append(ValueTerminal('>', 'angle-addr-end'))
        angle_addr.defects.append(errors.InvalidHeaderDefect(
            "null addr-spec in angle-addr"))
        value = value[1:]
        return angle_addr, value
    try:
        token, value = get_addr_spec(value)
    except errors.HeaderParseError:
        try:
            token, value = get_obs_route(value)
            angle_addr.defects.append(errors.ObsoleteHeaderDefect(
                "obsolete route specification in angle-addr"))
        except errors.HeaderParseError:
            raise errors.HeaderParseError(
                "expected addr-spec or obs-route but found '{}'".format(value))
        angle_addr.append(token)
        token, value = get_addr_spec(value)
    angle_addr.append(token)
    if value and value[0] == '>':
        value = value[1:]
    else:
        angle_addr.defects.append(errors.InvalidHeaderDefect(
            "missing trailing '>' on angle-addr"))
    angle_addr.append(ValueTerminal('>', 'angle-addr-end'))
    if value and value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        angle_addr.append(token)
    return angle_addr, value

def get_display_name(value):
    """ display-name = phrase

    Because this is simply a name-rule, we don't return a display-name
    token containing a phrase, but rather a display-name token with
    the content of the phrase.

    """
    display_name = DisplayName()
    token, value = get_phrase(value)
    display_name.extend(token[:])
    display_name.defects = token.defects[:]
    return display_name, value


def get_name_addr(value):
    """ name-addr = [display-name] angle-addr

    """
    name_addr = NameAddr()
    # Both the optional display name and the angle-addr can start with cfws.
    leader = None
    if value[0] in CFWS_LEADER:
        leader, value = get_cfws(value)
        if not value:
            raise errors.HeaderParseError(
                "expected name-addr but found '{}'".format(leader))
    if value[0] != '<':
        if value[0] in PHRASE_ENDS:
            raise errors.HeaderParseError(
                "expected name-addr but found '{}'".format(value))
        token, value = get_display_name(value)
        if not value:
            raise errors.HeaderParseError(
                "expected name-addr but found '{}'".format(token))
        if leader is not None:
            token[0][:0] = [leader]
            leader = None
        name_addr.append(token)
    token, value = get_angle_addr(value)
    if leader is not None:
        token[:0] = [leader]
    name_addr.append(token)
    return name_addr, value

def get_mailbox(value):
    """ mailbox = name-addr / addr-spec

    """
    # The only way to figure out if we are dealing with a name-addr or an
    # addr-spec is to try parsing each one.
    mailbox = Mailbox()
    try:
        token, value = get_name_addr(value)
    except errors.HeaderParseError:
        try:
            token, value = get_addr_spec(value)
        except errors.HeaderParseError:
            raise errors.HeaderParseError(
                "expected mailbox but found '{}'".format(value))
    if any(isinstance(x, errors.InvalidHeaderDefect)
                       for x in token.all_defects):
        mailbox.token_type = 'invalid-mailbox'
    mailbox.append(token)
    return mailbox, value

def get_invalid_mailbox(value, endchars):
    """ Read everything up to one of the chars in endchars.

    This is outside the formal grammar.  The InvalidMailbox TokenList that is
    returned acts like a Mailbox, but the data attributes are None.

    """
    invalid_mailbox = InvalidMailbox()
    while value and value[0] not in endchars:
        if value[0] in PHRASE_ENDS:
            invalid_mailbox.append(ValueTerminal(value[0],
                                                 'misplaced-special'))
            value = value[1:]
        else:
            token, value = get_phrase(value)
            invalid_mailbox.append(token)
    return invalid_mailbox, value

def get_mailbox_list(value):
    """ mailbox-list = (mailbox *("," mailbox)) / obs-mbox-list
        obs-mbox-list = *([CFWS] ",") mailbox *("," [mailbox / CFWS])

    For this routine we go outside the formal grammar in order to improve error
    handling.  We recognize the end of the mailbox list only at the end of the
    value or at a ';' (the group terminator).  This is so that we can turn
    invalid mailboxes into InvalidMailbox tokens and continue parsing any
    remaining valid mailboxes.  We also allow all mailbox entries to be null,
    and this condition is handled appropriately at a higher level.

    """
    mailbox_list = MailboxList()
    while value and value[0] != ';':
        try:
            token, value = get_mailbox(value)
            mailbox_list.append(token)
        except errors.HeaderParseError:
            leader = None
            if value[0] in CFWS_LEADER:
                leader, value = get_cfws(value)
                if not value or value[0] in ',;':
                    mailbox_list.append(leader)
                    mailbox_list.defects.append(errors.ObsoleteHeaderDefect(
                        "empty element in mailbox-list"))
                else:
                    token, value = get_invalid_mailbox(value, ',;')
                    if leader is not None:
                        token[:0] = [leader]
                    mailbox_list.append(token)
                    mailbox_list.defects.append(errors.InvalidHeaderDefect(
                        "invalid mailbox in mailbox-list"))
            elif value[0] == ',':
                mailbox_list.defects.append(errors.ObsoleteHeaderDefect(
                    "empty element in mailbox-list"))
            else:
                token, value = get_invalid_mailbox(value, ',;')
                if leader is not None:
                    token[:0] = [leader]
                mailbox_list.append(token)
                mailbox_list.defects.append(errors.InvalidHeaderDefect(
                    "invalid mailbox in mailbox-list"))
        if value and value[0] not in ',;':
            # Crap after mailbox; treat it as an invalid mailbox.
            # The mailbox info will still be available.
            mailbox = mailbox_list[-1]
            mailbox.token_type = 'invalid-mailbox'
            token, value = get_invalid_mailbox(value, ',;')
            mailbox.extend(token)
            mailbox_list.defects.append(errors.InvalidHeaderDefect(
                "invalid mailbox in mailbox-list"))
        if value and value[0] == ',':
            mailbox_list.append(ListSeparator)
            value = value[1:]
    return mailbox_list, value


def get_group_list(value):
    """ group-list = mailbox-list / CFWS / obs-group-list
        obs-group-list = 1*([CFWS] ",") [CFWS]

    """
    group_list = GroupList()
    if not value:
        group_list.defects.append(errors.InvalidHeaderDefect(
            "end of header before group-list"))
        return group_list, value
    leader = None
    if value and value[0] in CFWS_LEADER:
        leader, value = get_cfws(value)
        if not value:
            # This should never happen in email parsing, since CFWS-only is a
            # legal alternative to group-list in a group, which is the only
            # place group-list appears.
            group_list.defects.append(errors.InvalidHeaderDefect(
                "end of header in group-list"))
            group_list.append(leader)
            return group_list, value
        if value[0] == ';':
            group_list.append(leader)
            return group_list, value
    token, value = get_mailbox_list(value)
    if not token.all_mailboxes:
        if leader is not None:
            group_list.append(leader)
        group_list.extend(token)
        group_list.defects.append(errors.ObsoleteHeaderDefect(
            "group-list with empty entries"))
        return group_list, value
    if leader is not None:
        token[:0] = [leader]
    group_list.append(token)
    return group_list, value

def get_group(value):
    """ group = display-name ":" [group-list] ";" [CFWS]

    """
    group = Group()
    token, value = get_display_name(value)
    if not value or value[0] != ':':
        raise errors.HeaderParseError("expected ':' at end of group "
            "display name but found '{}'".format(value))
    group.append(token)
    group.append(ValueTerminal(':', 'group-display-name-terminator'))
    value = value[1:]
    if value and value[0] == ';':
        group.append(ValueTerminal(';', 'group-terminator'))
        return group, value[1:]
    token, value = get_group_list(value)
    group.append(token)
    if not value:
        group.defects.append(errors.InvalidHeaderDefect(
            "end of header in group"))
    elif value[0] != ';':
        raise errors.HeaderParseError(
            "expected ';' at end of group but found {}".format(value))
    group.append(ValueTerminal(';', 'group-terminator'))
    value = value[1:] if value else value
    if value and value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        group.append(token)
    return group, value

def get_address(value):
    """ address = mailbox / group

    Note that counter-intuitively, an address can be either a single address or
    a list of addresses (a group).  This is why the returned Address object has
    a 'mailboxes' attribute which treats a single address as a list of length
    one.  When you need to differentiate between the two cases, extract the single
    element, which is either a mailbox or a group token.

    """
    # The formal grammar isn't very helpful when parsing an address.  mailbox
    # and group, especially when allowing for obsolete forms, start off very
    # similarly.  It is only when you reach one of @, <, or : that you know
    # what you've got.  So, we try each one in turn, starting with the more
    # likely of the two.  We could perhaps make this more efficient by looking
    # for a phrase and then branching based on the next character, but that
    # would be a premature optimization.
    address = Address()
    try:
        token, value = get_group(value)
    except errors.HeaderParseError:
        try:
            token, value = get_mailbox(value)
        except errors.HeaderParseError:
            raise errors.HeaderParseError(
                "expected address but found '{}'".format(value))
    address.append(token)
    return address, value

def get_address_list(value):
    """ address_list = (address *("," address)) / obs-addr-list
        obs-addr-list = *([CFWS] ",") address *("," [address / CFWS])

    We depart from the formal grammar here by continuing to parse until the end
    of the input, assuming the input to be entirely composed of an
    address-list.  This is always true in email parsing, and allows us
    to skip invalid addresses to parse additional valid ones.

    """
    address_list = AddressList()
    while value:
        try:
            token, value = get_address(value)
            address_list.append(token)
        except errors.HeaderParseError as err:
            leader = None
            if value[0] in CFWS_LEADER:
                leader, value = get_cfws(value)
                if not value or value[0] == ',':
                    address_list.append(leader)
                    address_list.defects.append(errors.ObsoleteHeaderDefect(
                        "address-list entry with no content"))
                else:
                    token, value = get_invalid_mailbox(value, ',')
                    if leader is not None:
                        token[:0] = [leader]
                    address_list.append(Address([token]))
                    address_list.defects.append(errors.InvalidHeaderDefect(
                        "invalid address in address-list"))
            elif value[0] == ',':
                address_list.defects.append(errors.ObsoleteHeaderDefect(
                    "empty element in address-list"))
            else:
                token, value = get_invalid_mailbox(value, ',')
                if leader is not None:
                    token[:0] = [leader]
                address_list.append(Address([token]))
                address_list.defects.append(errors.InvalidHeaderDefect(
                    "invalid address in address-list"))
        if value and value[0] != ',':
            # Crap after address; treat it as an invalid mailbox.
            # The mailbox info will still be available.
            mailbox = address_list[-1][0]
            mailbox.token_type = 'invalid-mailbox'
            token, value = get_invalid_mailbox(value, ',')
            mailbox.extend(token)
            address_list.defects.append(errors.InvalidHeaderDefect(
                "invalid address in address-list"))
        if value:  # Must be a , at this point.
            address_list.append(ValueTerminal(',', 'list-separator'))
            value = value[1:]
    return address_list, value

#
# XXX: As I begin to add additional header parsers, I'm realizing we probably
# have two levels of parser routines: the get_XXX methods that get a token in
# the grammar, and parse_XXX methods that parse an entire field value.  So
# get_address_list above should really be a parse_ method, as probably should
# be get_unstructured.
#

def parse_mime_version(value):
    """ mime-version = [CFWS] 1*digit [CFWS] "." [CFWS] 1*digit [CFWS]

    """
    # The [CFWS] is implicit in the RFC 2045 BNF.
    # XXX: This routine is a bit verbose, should factor out a get_int method.
    mime_version = MIMEVersion()
    if not value:
        mime_version.defects.append(errors.HeaderMissingRequiredValue(
            "Missing MIME version number (eg: 1.0)"))
        return mime_version
    if value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        mime_version.append(token)
        if not value:
            mime_version.defects.append(errors.HeaderMissingRequiredValue(
                "Expected MIME version number but found only CFWS"))
    digits = ''
    while value and value[0] != '.' and value[0] not in CFWS_LEADER:
        digits += value[0]
        value = value[1:]
    if not digits.isdigit():
        mime_version.defects.append(errors.InvalidHeaderDefect(
            "Expected MIME major version number but found {!r}".format(digits)))
        mime_version.append(ValueTerminal(digits, 'xtext'))
    else:
        mime_version.major = int(digits)
        mime_version.append(ValueTerminal(digits, 'digits'))
    if value and value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        mime_version.append(token)
    if not value or value[0] != '.':
        if mime_version.major is not None:
            mime_version.defects.append(errors.InvalidHeaderDefect(
                "Incomplete MIME version; found only major number"))
        if value:
            mime_version.append(ValueTerminal(value, 'xtext'))
        return mime_version
    mime_version.append(ValueTerminal('.', 'version-separator'))
    value = value[1:]
    if value and value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        mime_version.append(token)
    if not value:
        if mime_version.major is not None:
            mime_version.defects.append(errors.InvalidHeaderDefect(
                "Incomplete MIME version; found only major number"))
        return mime_version
    digits = ''
    while value and value[0] not in CFWS_LEADER:
        digits += value[0]
        value = value[1:]
    if not digits.isdigit():
        mime_version.defects.append(errors.InvalidHeaderDefect(
            "Expected MIME minor version number but found {!r}".format(digits)))
        mime_version.append(ValueTerminal(digits, 'xtext'))
    else:
        mime_version.minor = int(digits)
        mime_version.append(ValueTerminal(digits, 'digits'))
    if value and value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        mime_version.append(token)
    if value:
        mime_version.defects.append(errors.InvalidHeaderDefect(
            "Excess non-CFWS text after MIME version"))
        mime_version.append(ValueTerminal(value, 'xtext'))
    return mime_version

def get_invalid_parameter(value):
    """ Read everything up to the next ';'.

    This is outside the formal grammar.  The InvalidParameter TokenList that is
    returned acts like a Parameter, but the data attributes are None.

    """
    invalid_parameter = InvalidParameter()
    while value and value[0] != ';':
        if value[0] in PHRASE_ENDS:
            invalid_parameter.append(ValueTerminal(value[0],
                                                   'misplaced-special'))
            value = value[1:]
        else:
            token, value = get_phrase(value)
            invalid_parameter.append(token)
    return invalid_parameter, value

def get_ttext(value):
    """ttext = <matches _ttext_matcher>

    We allow any non-TOKEN_ENDS in ttext, but add defects to the token's
    defects list if we find non-ttext characters.  We also register defects for
    *any* non-printables even though the RFC doesn't exclude all of them,
    because we follow the spirit of RFC 5322.

    """
    m = _non_token_end_matcher(value)
    if not m:
        raise errors.HeaderParseError(
            "expected ttext but found '{}'".format(value))
    ttext = m.group()
    value = value[len(ttext):]
    ttext = ValueTerminal(ttext, 'ttext')
    _validate_xtext(ttext)
    return ttext, value

def get_token(value):
    """token = [CFWS] 1*ttext [CFWS]

    The RFC equivalent of ttext is any US-ASCII chars except space, ctls, or
    tspecials.  We also exclude tabs even though the RFC doesn't.

    The RFC implies the CFWS but is not explicit about it in the BNF.

    """
    mtoken = Token()
    if value and value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        mtoken.append(token)
    if value and value[0] in TOKEN_ENDS:
        raise errors.HeaderParseError(
            "expected token but found '{}'".format(value))
    token, value = get_ttext(value)
    mtoken.append(token)
    if value and value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        mtoken.append(token)
    return mtoken, value

def get_attrtext(value):
    """attrtext = 1*(any non-ATTRIBUTE_ENDS character)

    We allow any non-ATTRIBUTE_ENDS in attrtext, but add defects to the
    token's defects list if we find non-attrtext characters.  We also register
    defects for *any* non-printables even though the RFC doesn't exclude all of
    them, because we follow the spirit of RFC 5322.

    """
    m = _non_attribute_end_matcher(value)
    if not m:
        raise errors.HeaderParseError(
            "expected attrtext but found {!r}".format(value))
    attrtext = m.group()
    value = value[len(attrtext):]
    attrtext = ValueTerminal(attrtext, 'attrtext')
    _validate_xtext(attrtext)
    return attrtext, value

def get_attribute(value):
    """ [CFWS] 1*attrtext [CFWS]

    This version of the BNF makes the CFWS explicit, and as usual we use a
    value terminal for the actual run of characters.  The RFC equivalent of
    attrtext is the token characters, with the subtraction of '*', "'", and '%'.
    We include tab in the excluded set just as we do for token.

    """
    attribute = Attribute()
    if value and value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        attribute.append(token)
    if value and value[0] in ATTRIBUTE_ENDS:
        raise errors.HeaderParseError(
            "expected token but found '{}'".format(value))
    token, value = get_attrtext(value)
    attribute.append(token)
    if value and value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        attribute.append(token)
    return attribute, value

def get_extended_attrtext(value):
    """attrtext = 1*(any non-ATTRIBUTE_ENDS character plus '%')

    This is a special parsing routine so that we get a value that
    includes % escapes as a single string (which we decode as a single
    string later).

    """
    m = _non_extended_attribute_end_matcher(value)
    if not m:
        raise errors.HeaderParseError(
            "expected extended attrtext but found {!r}".format(value))
    attrtext = m.group()
    value = value[len(attrtext):]
    attrtext = ValueTerminal(attrtext, 'extended-attrtext')
    _validate_xtext(attrtext)
    return attrtext, value

def get_extended_attribute(value):
    """ [CFWS] 1*extended_attrtext [CFWS]

    This is like the non-extended version except we allow % characters, so that
    we can pick up an encoded value as a single string.

    """
    # XXX: should we have an ExtendedAttribute TokenList?
    attribute = Attribute()
    if value and value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        attribute.append(token)
    if value and value[0] in EXTENDED_ATTRIBUTE_ENDS:
        raise errors.HeaderParseError(
            "expected token but found '{}'".format(value))
    token, value = get_extended_attrtext(value)
    attribute.append(token)
    if value and value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        attribute.append(token)
    return attribute, value

def get_section(value):
    """ '*' digits

    The formal BNF is more complicated because leading 0s are not allowed.  We
    check for that and add a defect.  We also assume no CFWS is allowed between
    the '*' and the digits, though the RFC is not crystal clear on that.
    The caller should already have dealt with leading CFWS.

    """
    section = Section()
    if not value or value[0] != '*':
        raise errors.HeaderParseError("Expected section but found {}".format(
                                        value))
    section.append(ValueTerminal('*', 'section-marker'))
    value = value[1:]
    if not value or not value[0].isdigit():
        raise errors.HeaderParseError("Expected section number but "
                                      "found {}".format(value))
    digits = ''
    while value and value[0].isdigit():
        digits += value[0]
        value = value[1:]
    if digits[0] == '0' and digits != '0':
        section.defects.append(errors.InvalidHeaderDefect("section number "
            "has an invalid leading 0"))
    section.number = int(digits)
    section.append(ValueTerminal(digits, 'digits'))
    return section, value


def get_value(value):
    """ quoted-string / attribute

    """
    v = Value()
    if not value:
        raise errors.HeaderParseError("Expected value but found end of string")
    leader = None
    if value[0] in CFWS_LEADER:
        leader, value = get_cfws(value)
    if not value:
        raise errors.HeaderParseError("Expected value but found "
                                      "only {}".format(leader))
    if value[0] == '"':
        token, value = get_quoted_string(value)
    else:
        token, value = get_extended_attribute(value)
    if leader is not None:
        token[:0] = [leader]
    v.append(token)
    return v, value

def get_parameter(value):
    """ attribute [section] ["*"] [CFWS] "=" value

    The CFWS is implied by the RFC but not made explicit in the BNF.  This
    simplified form of the BNF from the RFC is made to conform with the RFC BNF
    through some extra checks.  We do it this way because it makes both error
    recovery and working with the resulting parse tree easier.
    """
    # It is possible CFWS would also be implicitly allowed between the section
    # and the 'extended-attribute' marker (the '*'), but we've never seen that
    # in the wild and we will therefore ignore the possibility.
    param = Parameter()
    token, value = get_attribute(value)
    param.append(token)
    if not value or value[0] == ';':
        param.defects.append(errors.InvalidHeaderDefect("Parameter contains "
            "name ({}) but no value".format(token)))
        return param, value
    if value[0] == '*':
        try:
            token, value = get_section(value)
            param.sectioned = True
            param.append(token)
        except errors.HeaderParseError:
            pass
        if not value:
            raise errors.HeaderParseError("Incomplete parameter")
        if value[0] == '*':
            param.append(ValueTerminal('*', 'extended-parameter-marker'))
            value = value[1:]
            param.extended = True
    if value[0] != '=':
        raise errors.HeaderParseError("Parameter not followed by '='")
    param.append(ValueTerminal('=', 'parameter-separator'))
    value = value[1:]
    leader = None
    if value and value[0] in CFWS_LEADER:
        token, value = get_cfws(value)
        param.append(token)
    remainder = None
    appendto = param
    if param.extended and value and value[0] == '"':
        # Now for some serious hackery to handle the common invalid case of
        # double quotes around an extended value.  We also accept (with defect)
        # a value marked as encoded that isn't really.
        qstring, remainder = get_quoted_string(value)
        inner_value = qstring.stripped_value
        semi_valid = False
        if param.section_number == 0:
            if inner_value and inner_value[0] == "'":
                semi_valid = True
            else:
                token, rest = get_attrtext(inner_value)
                if rest and rest[0] == "'":
                    semi_valid = True
        else:
            try:
                token, rest = get_extended_attrtext(inner_value)
            except errors.HeaderParseError:
                pass
            else:
                if not rest:
                    semi_valid = True
        if semi_valid:
            param.defects.append(errors.InvalidHeaderDefect(
                "Quoted string value for extended parameter is invalid"))
            param.append(qstring)
            for t in qstring:
                if t.token_type == 'bare-quoted-string':
                    t[:] = []
                    appendto = t
                    break
            value = inner_value
        else:
            remainder = None
            param.defects.append(errors.InvalidHeaderDefect(
                "Parameter marked as extended but appears to have a "
                "quoted string value that is non-encoded"))
    if value and value[0] == "'":
        token = None
    else:
        token, value = get_value(value)
    if not param.extended or param.section_number > 0:
        if not value or value[0] != "'":
            appendto.append(token)
            if remainder is not None:
                assert not value, value
                value = remainder
            return param, value
        param.defects.append(errors.InvalidHeaderDefect(
            "Apparent initial-extended-value but attribute "
            "was not marked as extended or was not initial section"))
    if not value:
        # Assume the charset/lang is missing and the token is the value.
        param.defects.append(errors.InvalidHeaderDefect(
            "Missing required charset/lang delimiters"))
        appendto.append(token)
        if remainder is None:
            return param, value
    else:
        if token is not None:
            for t in token:
                if t.token_type == 'extended-attrtext':
                    break
            t.token_type = 'attrtext'
            appendto.append(t)
            param.charset = t.value
        if value[0] != "'":
            raise errors.HeaderParseError("Expected RFC2231 char/lang encoding "
                                          "delimiter, but found {!r}".format(value))
        appendto.append(ValueTerminal("'", 'RFC2231 delimiter'))
        value = value[1:]
        if value and value[0] != "'":
            token, value = get_attrtext(value)
            appendto.append(token)
            param.lang = token.value
            if not value or value[0] != "'":
                raise errors.HeaderParseError("Expected RFC2231 char/lang encoding "
                                  "delimiter, but found {}".format(value))
        appendto.append(ValueTerminal("'", 'RFC2231 delimiter'))
        value = value[1:]
    if remainder is not None:
        # Treat the rest of value as bare quoted string content.
        v = Value()
        while value:
            if value[0] in WSP:
                token, value = get_fws(value)
            else:
                token, value = get_qcontent(value)
            v.append(token)
        token = v
    else:
        token, value = get_value(value)
    appendto.append(token)
    if remainder is not None:
        assert not value, value
        value = remainder
    return param, value

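# A hypothetical sketch of the RFC 2231 path (the helper name is ours; it
# assumes this backport matches the stdlib parser it derives from): an
# extended parameter carries charset'lang' delimiters before its value.
def _example_get_parameter():
    param, rest = get_parameter("title*=us-ascii'en'Hello")
    return param.charset, param.lang, rest
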
def parse_mime_parameters(value):
    """ parameter *( ";" parameter )

    That BNF is meant to indicate this routine should only be called after
    finding and handling the leading ';'.  There is no corresponding rule in
    the formal RFC grammar, but it is more convenient for us for the set of
    parameters to be treated as its own TokenList.

    This is a 'parse' routine because it consumes the remaining value, but it
    would never be called to parse a full header.  Instead it is called to
    parse everything after the non-parameter value of a specific MIME header.

    """
    mime_parameters = MimeParameters()
    while value:
        try:
            token, value = get_parameter(value)
            mime_parameters.append(token)
        except errors.HeaderParseError as err:
            leader = None
            if value[0] in CFWS_LEADER:
                leader, value = get_cfws(value)
            if not value:
                mime_parameters.append(leader)
                return mime_parameters
            if value[0] == ';':
                if leader is not None:
                    mime_parameters.append(leader)
                mime_parameters.defects.append(errors.InvalidHeaderDefect(
                    "parameter entry with no content"))
            else:
                token, value = get_invalid_parameter(value)
                if leader:
                    token[:0] = [leader]
                mime_parameters.append(token)
                mime_parameters.defects.append(errors.InvalidHeaderDefect(
                    "invalid parameter {!r}".format(token)))
        if value and value[0] != ';':
            # Junk after the otherwise valid parameter.  Mark it as
            # invalid, but it will have a value.
            param = mime_parameters[-1]
            param.token_type = 'invalid-parameter'
            token, value = get_invalid_parameter(value)
            param.extend(token)
            mime_parameters.defects.append(errors.InvalidHeaderDefect(
                "parameter with invalid trailing text {!r}".format(token)))
        if value:
            # Must be a ';' at this point.
            mime_parameters.append(ValueTerminal(';', 'parameter-separator'))
            value = value[1:]
    return mime_parameters

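# A short sketch (the helper name is ours; it assumes parity with the stdlib
# parser this backport derives from).  Note the routine expects the text
# *after* the leading ';', per the docstring above.
def _example_parse_mime_parameters():
    params = parse_mime_parameters(' charset="utf-8"; format=flowed')
    return dict(params.params)
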
def _find_mime_parameters(tokenlist, value):
    """Do our best to find the parameters in an invalid MIME header

    """
    while value and value[0] != ';':
        if value[0] in PHRASE_ENDS:
            tokenlist.append(ValueTerminal(value[0], 'misplaced-special'))
            value = value[1:]
        else:
            token, value = get_phrase(value)
            tokenlist.append(token)
    if not value:
        return
    tokenlist.append(ValueTerminal(';', 'parameter-separator'))
    tokenlist.append(parse_mime_parameters(value[1:]))

def parse_content_type_header(value):
    """ maintype "/" subtype *( ";" parameter )

    The maintype and subtype are tokens.  Theoretically they could
    be checked against the official IANA list + x-token, but we
    don't do that.
    """
    ctype = ContentType()
    recover = False
    if not value:
        ctype.defects.append(errors.HeaderMissingRequiredValue(
            "Missing content type specification"))
        return ctype
    try:
        token, value = get_token(value)
    except errors.HeaderParseError:
        ctype.defects.append(errors.InvalidHeaderDefect(
            "Expected content maintype but found {!r}".format(value)))
        _find_mime_parameters(ctype, value)
        return ctype
    ctype.append(token)
    # XXX: If we really want to follow the formal grammar we should make
    # maintype and subtype specialized TokenLists here.  Probably not worth it.
    if not value or value[0] != '/':
        ctype.defects.append(errors.InvalidHeaderDefect(
            "Invalid content type"))
        if value:
            _find_mime_parameters(ctype, value)
        return ctype
    ctype.maintype = token.value.strip().lower()
    ctype.append(ValueTerminal('/', 'content-type-separator'))
    value = value[1:]
    try:
        token, value = get_token(value)
    except errors.HeaderParseError:
        ctype.defects.append(errors.InvalidHeaderDefect(
            "Expected content subtype but found {!r}".format(value)))
        _find_mime_parameters(ctype, value)
        return ctype
    ctype.append(token)
    ctype.subtype = token.value.strip().lower()
    if not value:
        return ctype
    if value[0] != ';':
        ctype.defects.append(errors.InvalidHeaderDefect(
            "Only parameters are valid after content type, but "
            "found {!r}".format(value)))
        # The RFC requires that a syntactically invalid content-type be treated
        # as text/plain.  Perhaps we should postel this, but we should probably
        # only do that if we were checking the subtype value against IANA.
        del ctype.maintype, ctype.subtype
        _find_mime_parameters(ctype, value)
        return ctype
    ctype.append(ValueTerminal(';', 'parameter-separator'))
    ctype.append(parse_mime_parameters(value[1:]))
    return ctype

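# A minimal sketch (the helper name is ours; it assumes parity with the
# stdlib parser): maintype and subtype come back stripped and lowercased.
def _example_parse_content_type():
    ctype = parse_content_type_header('Text/PLAIN; charset="utf-8"')
    return ctype.maintype, ctype.subtype
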
def parse_content_disposition_header(value):
    """ disposition-type *( ";" parameter )

    """
    disp_header = ContentDisposition()
    if not value:
        disp_header.defects.append(errors.HeaderMissingRequiredValue(
            "Missing content disposition"))
        return disp_header
    try:
        token, value = get_token(value)
    except errors.HeaderParseError:
        disp_header.defects.append(errors.InvalidHeaderDefect(
            "Expected content disposition but found {!r}".format(value)))
        _find_mime_parameters(disp_header, value)
        return disp_header
    disp_header.append(token)
    disp_header.content_disposition = token.value.strip().lower()
    if not value:
        return disp_header
    if value[0] != ';':
        disp_header.defects.append(errors.InvalidHeaderDefect(
            "Only parameters are valid after content disposition, but "
            "found {!r}".format(value)))
        _find_mime_parameters(disp_header, value)
        return disp_header
    disp_header.append(ValueTerminal(';', 'parameter-separator'))
    disp_header.append(parse_mime_parameters(value[1:]))
    return disp_header

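# A minimal sketch (the helper name is ours; it assumes parity with the
# stdlib parser): the disposition type is lowercased onto the result.
def _example_parse_content_disposition():
    disp = parse_content_disposition_header('Attachment; filename="x.txt"')
    return disp.content_disposition
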
def parse_content_transfer_encoding_header(value):
    """ mechanism

    """
    # We should probably validate the values, since the list is fixed.
    cte_header = ContentTransferEncoding()
    if not value:
        cte_header.defects.append(errors.HeaderMissingRequiredValue(
            "Missing content transfer encoding"))
        return cte_header
    try:
        token, value = get_token(value)
    except errors.HeaderParseError:
        cte_header.defects.append(errors.InvalidHeaderDefect(
            "Expected content transfer encoding but found {!r}".format(value)))
    else:
        cte_header.append(token)
        cte_header.cte = token.value.strip().lower()
    if not value:
        return cte_header
    while value:
        cte_header.defects.append(errors.InvalidHeaderDefect(
            "Extra text after content transfer encoding"))
        if value[0] in PHRASE_ENDS:
            cte_header.append(ValueTerminal(value[0], 'misplaced-special'))
            value = value[1:]
        else:
            token, value = get_phrase(value)
            cte_header.append(token)
    return cte_header
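
# A minimal sketch (the helper name is ours; it assumes parity with the
# stdlib parser): the mechanism token is normalized to lower case.
def _example_parse_cte():
    return parse_content_transfer_encoding_header('BASE64').cte
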
# future/backports/email/charset.py
from __future__ import unicode_literals
from __future__ import division
from __future__ import absolute_import
from future.builtins import str
from future.builtins import next

# Copyright (C) 2001-2007 Python Software Foundation
# Author: Ben Gertzfield, Barry Warsaw
# Contact: email-sig@python.org

__all__ = [
    'Charset',
    'add_alias',
    'add_charset',
    'add_codec',
    ]

from functools import partial

from future.backports import email
from future.backports.email import errors
from future.backports.email.encoders import encode_7or8bit


# Flags for types of header encodings
QP          = 1 # Quoted-Printable
BASE64      = 2 # Base64
SHORTEST    = 3 # the shorter of QP and base64, but only for headers

# In "=?charset?q?hello_world?=", the =?, ?q?, and ?= add up to 7
RFC2047_CHROME_LEN = 7

DEFAULT_CHARSET = 'us-ascii'
UNKNOWN8BIT = 'unknown-8bit'
EMPTYSTRING = ''


# Defaults
CHARSETS = {
    # input        header enc  body enc output conv
    'iso-8859-1':  (QP,        QP,      None),
    'iso-8859-2':  (QP,        QP,      None),
    'iso-8859-3':  (QP,        QP,      None),
    'iso-8859-4':  (QP,        QP,      None),
    # iso-8859-5 is Cyrillic, and not especially used
    # iso-8859-6 is Arabic, also not particularly used
    # iso-8859-7 is Greek, QP will not make it readable
    # iso-8859-8 is Hebrew, QP will not make it readable
    'iso-8859-9':  (QP,        QP,      None),
    'iso-8859-10': (QP,        QP,      None),
    # iso-8859-11 is Thai, QP will not make it readable
    'iso-8859-13': (QP,        QP,      None),
    'iso-8859-14': (QP,        QP,      None),
    'iso-8859-15': (QP,        QP,      None),
    'iso-8859-16': (QP,        QP,      None),
    'windows-1252':(QP,        QP,      None),
    'viscii':      (QP,        QP,      None),
    'us-ascii':    (None,      None,    None),
    'big5':        (BASE64,    BASE64,  None),
    'gb2312':      (BASE64,    BASE64,  None),
    'euc-jp':      (BASE64,    None,    'iso-2022-jp'),
    'shift_jis':   (BASE64,    None,    'iso-2022-jp'),
    'iso-2022-jp': (BASE64,    None,    None),
    'koi8-r':      (BASE64,    BASE64,  None),
    'utf-8':       (SHORTEST,  BASE64, 'utf-8'),
    }

# Aliases for other commonly-used names for character sets.  Map
# them to the real ones used in email.
ALIASES = {
    'latin_1': 'iso-8859-1',
    'latin-1': 'iso-8859-1',
    'latin_2': 'iso-8859-2',
    'latin-2': 'iso-8859-2',
    'latin_3': 'iso-8859-3',
    'latin-3': 'iso-8859-3',
    'latin_4': 'iso-8859-4',
    'latin-4': 'iso-8859-4',
    'latin_5': 'iso-8859-9',
    'latin-5': 'iso-8859-9',
    'latin_6': 'iso-8859-10',
    'latin-6': 'iso-8859-10',
    'latin_7': 'iso-8859-13',
    'latin-7': 'iso-8859-13',
    'latin_8': 'iso-8859-14',
    'latin-8': 'iso-8859-14',
    'latin_9': 'iso-8859-15',
    'latin-9': 'iso-8859-15',
    'latin_10':'iso-8859-16',
    'latin-10':'iso-8859-16',
    'cp949':   'ks_c_5601-1987',
    'euc_jp':  'euc-jp',
    'euc_kr':  'euc-kr',
    'ascii':   'us-ascii',
    }


# Map charsets to their Unicode codec strings.
CODEC_MAP = {
    'gb2312':      'eucgb2312_cn',
    'big5':        'big5_tw',
    # Hack: We don't want *any* conversion for stuff marked us-ascii, as all
    # sorts of garbage might be sent to us in the guise of 7-bit us-ascii.
    # Let that stuff pass through without conversion to/from Unicode.
    'us-ascii':    None,
    }


# Convenience functions for extending the above mappings
def add_charset(charset, header_enc=None, body_enc=None, output_charset=None):
    """Add character set properties to the global registry.

    charset is the input character set, and must be the canonical name of a
    character set.

    Optional header_enc and body_enc is either Charset.QP for
    quoted-printable, Charset.BASE64 for base64 encoding, Charset.SHORTEST for
    the shortest of qp or base64 encoding, or None for no encoding.  SHORTEST
    is only valid for header_enc.  It describes how message headers and
    message bodies in the input charset are to be encoded.  Default is no
    encoding.

    Optional output_charset is the character set that the output should be
    in.  Conversions will proceed from input charset, to Unicode, to the
    output charset when the method Charset.convert() is called.  The default
    is to output in the same character set as the input.

    Both input_charset and output_charset must have Unicode codec entries in
    the module's charset-to-codec mapping; use add_codec(charset, codecname)
    to add codecs the module does not know about.  See the codecs module's
    documentation for more information.
    """
    if body_enc == SHORTEST:
        raise ValueError('SHORTEST not allowed for body_enc')
    CHARSETS[charset] = (header_enc, body_enc, output_charset)

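# A quick sketch of registering a charset ('x-demo' is a hypothetical name
# used only for illustration; the helper name is ours):
def _example_add_charset():
    add_charset('x-demo', QP, BASE64, 'utf-8')
    return CHARSETS['x-demo']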

def add_alias(alias, canonical):
    """Add a character set alias.

    alias is the alias name, e.g. latin-1
    canonical is the character set's canonical name, e.g. iso-8859-1
    """
    ALIASES[alias] = canonical


def add_codec(charset, codecname):
    """Add a codec that map characters in the given charset to/from Unicode.

    charset is the canonical name of a character set.  codecname is the name
    of a Python codec, as appropriate for the second argument to the unicode()
    built-in, or to the encode() method of a Unicode string.
    """
    CODEC_MAP[charset] = codecname


# Convenience function for encoding strings, taking into account
# that they might be unknown-8bit (i.e. have surrogate-escaped bytes)
def _encode(string, codec):
    string = str(string)
    if codec == UNKNOWN8BIT:
        return string.encode('ascii', 'surrogateescape')
    else:
        return string.encode(codec)

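# A small sketch of the unknown-8bit round trip (the helper name is ours):
# surrogate-escaped code points encode back to the original raw bytes.
def _example_encode_unknown8bit():
    raw = b'caf\xe9'                               # not valid ASCII/UTF-8
    text = raw.decode('ascii', 'surrogateescape')  # stray byte -> U+DCE9
    return _encode(text, UNKNOWN8BIT) == raw, _encode('abc', 'utf-8')
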

class Charset(object):
    """Map character sets to their email properties.

    This class provides information about the requirements imposed on email
    for a specific character set.  It also provides convenience routines for
    converting between character sets, given the availability of the
    applicable codecs.  Given a character set, it will do its best to provide
    information on how to use that character set in an email in an
    RFC-compliant way.

    Certain character sets must be encoded with quoted-printable or base64
    when used in email headers or bodies.  Certain character sets must be
    converted outright, and are not allowed in email.  Instances of this
    module expose the following information about a character set:

    input_charset: The initial character set specified.  Common aliases
                   are converted to their `official' email names (e.g. latin_1
                   is converted to iso-8859-1).  Defaults to 7-bit us-ascii.

    header_encoding: If the character set must be encoded before it can be
                     used in an email header, this attribute will be set to
                     Charset.QP (for quoted-printable), Charset.BASE64 (for
                     base64 encoding), or Charset.SHORTEST for the shortest of
                     QP or BASE64 encoding.  Otherwise, it will be None.

    body_encoding: Same as header_encoding, but describes the encoding for the
                   mail message's body, which indeed may be different than the
                   header encoding.  Charset.SHORTEST is not allowed for
                   body_encoding.

    output_charset: Some character sets must be converted before they can be
                    used in email headers or bodies.  If the input_charset is
                    one of them, this attribute will contain the name of the
                    charset output will be converted to.  Otherwise, it will
                    be None.

    input_codec: The name of the Python codec used to convert the
                 input_charset to Unicode.  If no conversion codec is
                 necessary, this attribute will be None.

    output_codec: The name of the Python codec used to convert Unicode
                  to the output_charset.  If no conversion codec is necessary,
                  this attribute will have the same value as the input_codec.
    """
    def __init__(self, input_charset=DEFAULT_CHARSET):
        # RFC 2046, §4.1.2 says charsets are not case sensitive.  We coerce to
        # unicode because its .lower() is locale insensitive.  If the argument
        # is already a unicode, we leave it at that, but ensure that the
        # charset is ASCII, as the standard (RFC XXX) requires.
        try:
            if isinstance(input_charset, str):
                input_charset.encode('ascii')
            else:
                input_charset = str(input_charset, 'ascii')
        except UnicodeError:
            raise errors.CharsetError(input_charset)
        input_charset = input_charset.lower()
        # Set the input charset after filtering through the aliases
        self.input_charset = ALIASES.get(input_charset, input_charset)
        # We can try to guess which encoding and conversion to use by the
        # charset_map dictionary.  Try that first, but let the user override
        # it.
        henc, benc, conv = CHARSETS.get(self.input_charset,
                                        (SHORTEST, BASE64, None))
        if not conv:
            conv = self.input_charset
        # Set the attributes, allowing the arguments to override the default.
        self.header_encoding = henc
        self.body_encoding = benc
        self.output_charset = ALIASES.get(conv, conv)
        # Now set the codecs.  If one isn't defined for input_charset,
        # guess and try a Unicode codec with the same name as input_codec.
        self.input_codec = CODEC_MAP.get(self.input_charset,
                                         self.input_charset)
        self.output_codec = CODEC_MAP.get(self.output_charset,
                                          self.output_charset)

    def __str__(self):
        return self.input_charset.lower()

    __repr__ = __str__

    def __eq__(self, other):
        return str(self) == str(other).lower()

    def __ne__(self, other):
        return not self.__eq__(other)

    def get_body_encoding(self):
        """Return the content-transfer-encoding used for body encoding.

        This is either the string `quoted-printable' or `base64' depending on
        the encoding used, or it is a function in which case you should call
        the function with a single argument, the Message object being
        encoded.  The function should then set the Content-Transfer-Encoding
        header itself to whatever is appropriate.

        Returns "quoted-printable" if self.body_encoding is QP.
        Returns "base64" if self.body_encoding is BASE64.
        Returns conversion function otherwise.
        """
        assert self.body_encoding != SHORTEST
        if self.body_encoding == QP:
            return 'quoted-printable'
        elif self.body_encoding == BASE64:
            return 'base64'
        else:
            return encode_7or8bit

    def get_output_charset(self):
        """Return the output character set.

        This is self.output_charset if that is not None, otherwise it is
        self.input_charset.
        """
        return self.output_charset or self.input_charset

    def header_encode(self, string):
        """Header-encode a string by converting it first to bytes.

        The type of encoding (base64 or quoted-printable) will be based on
        this charset's `header_encoding`.

        :param string: A unicode string for the header.  It must be possible
            to encode this string to bytes using the character set's
            output codec.
        :return: The encoded string, with RFC 2047 chrome.
        """
        codec = self.output_codec or 'us-ascii'
        header_bytes = _encode(string, codec)
        # 7bit/8bit encodings return the string unchanged (modulo conversions)
        encoder_module = self._get_encoder(header_bytes)
        if encoder_module is None:
            return string
        return encoder_module.header_encode(header_bytes, codec)

    def header_encode_lines(self, string, maxlengths):
        """Header-encode a string by converting it first to bytes.

        This is similar to `header_encode()` except that the string is fit
        into maximum line lengths as given by the argument.

        :param string: A unicode string for the header.  It must be possible
            to encode this string to bytes using the character set's
            output codec.
        :param maxlengths: Maximum line length iterator.  Each element
            returned from this iterator will provide the next maximum line
            length.  This parameter is used as an argument to built-in next()
            and should never be exhausted.  The maximum line lengths should
            not count the RFC 2047 chrome.  These line lengths are only a
            hint; the splitter does the best it can.
        :return: Lines of encoded strings, each with RFC 2047 chrome.
        """
        # See which encoding we should use.
        codec = self.output_codec or 'us-ascii'
        header_bytes = _encode(string, codec)
        encoder_module = self._get_encoder(header_bytes)
        encoder = partial(encoder_module.header_encode, charset=codec)
        # Calculate the number of characters that the RFC 2047 chrome will
        # contribute to each line.
        charset = self.get_output_charset()
        extra = len(charset) + RFC2047_CHROME_LEN
        # Now comes the hard part.  We must encode bytes but we can't split on
        # bytes because some character sets are variable length and each
        # encoded word must stand on its own.  So the problem is you have to
        # encode to bytes to figure out this word's length, but you must split
        # on characters.  This causes two problems: first, we don't know how
        # many octets a specific substring of unicode characters will get
        # encoded to, and second, we don't know how many ASCII characters
        # those octets will get encoded to.  Unless we try it.  Which seems
        # inefficient.  In the interest of being correct rather than fast (and
        # in the hope that there will be few encoded headers in any such
        # message), brute force it. :(
        lines = []
        current_line = []
        maxlen = next(maxlengths) - extra
        for character in string:
            current_line.append(character)
            this_line = EMPTYSTRING.join(current_line)
            length = encoder_module.header_length(_encode(this_line, charset))
            if length > maxlen:
                # This last character doesn't fit so pop it off.
                current_line.pop()
                # Does nothing fit on the first line?
                if not lines and not current_line:
                    lines.append(None)
                else:
                    separator = (' ' if lines else '')
                    joined_line = EMPTYSTRING.join(current_line)
                    header_bytes = _encode(joined_line, codec)
                    lines.append(encoder(header_bytes))
                current_line = [character]
                maxlen = next(maxlengths) - extra
        joined_line = EMPTYSTRING.join(current_line)
        header_bytes = _encode(joined_line, codec)
        lines.append(encoder(header_bytes))
        return lines

    def _get_encoder(self, header_bytes):
        if self.header_encoding == BASE64:
            return email.base64mime
        elif self.header_encoding == QP:
            return email.quoprimime
        elif self.header_encoding == SHORTEST:
            len64 = email.base64mime.header_length(header_bytes)
            lenqp = email.quoprimime.header_length(header_bytes)
            if len64 < lenqp:
                return email.base64mime
            else:
                return email.quoprimime
        else:
            return None

    def body_encode(self, string):
        """Body-encode a string by converting it first to bytes.

        The type of encoding (base64 or quoted-printable) will be based on
        self.body_encoding.  If body_encoding is None, we assume the
        output charset is a 7bit encoding, so re-encoding the decoded
        string using the ascii codec produces the correct string version
        of the content.
        """
        if not string:
            return string
        if self.body_encoding is BASE64:
            if isinstance(string, str):
                string = string.encode(self.output_charset)
            return email.base64mime.body_encode(string)
        elif self.body_encoding is QP:
            # quoprimime.body_encode takes a string, but operates on it as if
            # it were a list of byte codes.  For a (minimal) history on why
            # this is so, see changeset 0cf700464177.  To correctly encode a
            # character set, then, we must turn it into pseudo bytes via the
            # latin1 charset, which will encode any byte as a single code point
            # between 0 and 255, which is what body_encode is expecting.
            if isinstance(string, str):
                string = string.encode(self.output_charset)
            string = string.decode('latin1')
            return email.quoprimime.body_encode(string)
        else:
            if isinstance(string, str):
                string = string.encode(self.output_charset).decode('ascii')
            return string
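

# A short usage sketch appended for illustration (the helper name is ours;
# it assumes this backport mirrors the stdlib email.charset module): alias
# resolution, encoding selection, and RFC 2047 header encoding.
def _example_charset_usage():
    c = Charset('latin_1')            # alias resolves via ALIASES
    assert str(c) == 'iso-8859-1'
    assert c.header_encoding == QP
    assert c.get_body_encoding() == 'quoted-printable'
    # 0xE9 (e-acute) is outside ASCII, so it gets quoted-printable encoded.
    return c.header_encode('h\xe9llo')
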
# future/backports/email/feedparser.py
# Copyright (C) 2004-2006 Python Software Foundation
# Authors: Baxter, Wouters and Warsaw
# Contact: email-sig@python.org

"""FeedParser - An email feed parser.

The feed parser implements an interface for incrementally parsing an email
message, line by line.  This has advantages for certain applications, such as
those reading email messages off a socket.

FeedParser.feed() is the primary interface for pushing new data into the
parser.  It returns when there's nothing more it can do with the available
data.  When you have no more data to push into the parser, call .close().
This completes the parsing and returns the root message object.

The other advantage of this parser is that it will never raise a parsing
exception.  Instead, when it finds something unexpected, it adds a 'defect' to
the current message.  Defects are just instances that live on the message
object's .defects attribute.
"""
from __future__ import unicode_literals
from __future__ import division
from __future__ import absolute_import
from future.builtins import object, range, super
from future.utils import implements_iterator, PY3

__all__ = ['FeedParser', 'BytesFeedParser']

import re

from future.backports.email import errors
from future.backports.email import message
from future.backports.email._policybase import compat32

NLCRE = re.compile(r'\r\n|\r|\n')
NLCRE_bol = re.compile(r'(\r\n|\r|\n)')
NLCRE_eol = re.compile(r'(\r\n|\r|\n)\Z')
NLCRE_crack = re.compile(r'(\r\n|\r|\n)')
# RFC 2822 §3.6.8 Optional fields.  ftext is %d33-57 / %d59-126, i.e. any
# character except controls, SP, and ":".
headerRE = re.compile(r'^(From |[\041-\071\073-\176]{1,}:|[\t ])')
EMPTYSTRING = ''
NL = '\n'

NeedMoreData = object()


# @implements_iterator
class BufferedSubFile(object):
    """A file-ish object that can have new data loaded into it.

    You can also push and pop line-matching predicates onto a stack.  When the
    current predicate matches the current line, a false EOF response
    (i.e. empty string) is returned instead.  This lets the parser adhere to a
    simple abstraction -- it parses until EOF closes the current message.
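
    BufferedSubFile is an internal helper, but the same class ships in the
    standard library's email.feedparser; a short sketch of its
    push/readline contract (NeedMoreData signals an incomplete trailing
    line):

```python
# Lines are only returned once a newline completes them.
from email.feedparser import BufferedSubFile, NeedMoreData

f = BufferedSubFile()
f.push('one\ntwo')               # 'two' is held as a partial line
assert f.readline() == 'one\n'
assert f.readline() is NeedMoreData
f.push('\n')                     # completes the partial line
assert f.readline() == 'two\n'
```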
    """
    def __init__(self):
        # The last partial line pushed into this object.
        self._partial = ''
        # The list of full, pushed lines, in reverse order
        self._lines = []
        # The stack of false-EOF checking predicates.
        self._eofstack = []
        # A flag indicating whether the file has been closed or not.
        self._closed = False

    def push_eof_matcher(self, pred):
        self._eofstack.append(pred)

    def pop_eof_matcher(self):
        return self._eofstack.pop()

    def close(self):
        # Don't forget any trailing partial line.
        self._lines.append(self._partial)
        self._partial = ''
        self._closed = True

    def readline(self):
        if not self._lines:
            if self._closed:
                return ''
            return NeedMoreData
        # Pop the line off the stack and see if it matches the current
        # false-EOF predicate.
        line = self._lines.pop()
        # RFC 2046, section 5.1.2 requires us to recognize outer level
        # boundaries at any level of inner nesting.  Do this, but be sure it's
        # in the order of most to least nested.
        for ateof in self._eofstack[::-1]:
            if ateof(line):
                # We're at the false EOF.  But push the last line back first.
                self._lines.append(line)
                return ''
        return line

    def unreadline(self, line):
        # Let the consumer push a line back into the buffer.
        assert line is not NeedMoreData
        self._lines.append(line)

    def push(self, data):
        """Push some new data into this object."""
        # Handle any previous leftovers
        data, self._partial = self._partial + data, ''
        # Crack into lines, but preserve the newlines on the end of each
        parts = NLCRE_crack.split(data)
        # The *ahem* interesting behaviour of re.split when supplied grouping
        # parentheses is that the last element of the resulting list is the
        # data after the final RE.  In the case of a NL/CR terminated string,
        # this is the empty string.
        self._partial = parts.pop()
        #GAN 29Mar09  bugs 1555570, 1721862  Confusion at 8K boundary ending with \r:
        # is there a \n to follow later?
        if not self._partial and parts and parts[-1].endswith('\r'):
            self._partial = parts.pop(-2)+parts.pop()
        # parts is a list of strings, alternating between the line contents
        # and the eol character(s).  Gather up a list of lines after
        # re-attaching the newlines.
        lines = []
        for i in range(len(parts) // 2):
            lines.append(parts[i*2] + parts[i*2+1])
        self.pushlines(lines)

    def pushlines(self, lines):
        # Reverse and insert at the front of the lines.
        self._lines[:0] = lines[::-1]

    def __iter__(self):
        return self

    def __next__(self):
        line = self.readline()
        if line == '':
            raise StopIteration
        return line


class FeedParser(object):
    """A feed-style parser of email."""

    def __init__(self, _factory=message.Message, **_3to2kwargs):
        if 'policy' in _3to2kwargs:
            policy = _3to2kwargs['policy']
            del _3to2kwargs['policy']
        else:
            policy = compat32
        """_factory is called with no arguments to create a new message obj

        The policy keyword specifies a policy object that controls a number of
        aspects of the parser's operation.  The default policy maintains
        backward compatibility.

        """
        self._factory = _factory
        self.policy = policy
        try:
            _factory(policy=self.policy)
            self._factory_kwds = lambda: {'policy': self.policy}
        except TypeError:
            # Assume this is an old-style factory
            self._factory_kwds = lambda: {}
        self._input = BufferedSubFile()
        self._msgstack = []
        if PY3:
            self._parse = self._parsegen().__next__
        else:
            self._parse = self._parsegen().next
        self._cur = None
        self._last = None
        self._headersonly = False

    # Non-public interface for supporting Parser's headersonly flag
    def _set_headersonly(self):
        self._headersonly = True

    def feed(self, data):
        """Push more data into the parser."""
        self._input.push(data)
        self._call_parse()

    def _call_parse(self):
        try:
            self._parse()
        except StopIteration:
            pass

    def close(self):
        """Parse all remaining data and return the root message object."""
        self._input.close()
        self._call_parse()
        root = self._pop_message()
        assert not self._msgstack
        # Look for final set of defects
        if root.get_content_maintype() == 'multipart' \
               and not root.is_multipart():
            defect = errors.MultipartInvariantViolationDefect()
            self.policy.handle_defect(root, defect)
        return root

    def _new_message(self):
        msg = self._factory(**self._factory_kwds())
        if self._cur and self._cur.get_content_type() == 'multipart/digest':
            msg.set_default_type('message/rfc822')
        if self._msgstack:
            self._msgstack[-1].attach(msg)
        self._msgstack.append(msg)
        self._cur = msg
        self._last = msg

    def _pop_message(self):
        retval = self._msgstack.pop()
        if self._msgstack:
            self._cur = self._msgstack[-1]
        else:
            self._cur = None
        return retval

    def _parsegen(self):
        # Create a new message and start by parsing headers.
        self._new_message()
        headers = []
        # Collect the headers, searching for a line that doesn't match the RFC
        # 2822 header or continuation pattern (including an empty line).
        for line in self._input:
            if line is NeedMoreData:
                yield NeedMoreData
                continue
            if not headerRE.match(line):
                # If we saw the RFC defined header/body separator
                # (i.e. newline), just throw it away. Otherwise the line is
                # part of the body so push it back.
                if not NLCRE.match(line):
                    defect = errors.MissingHeaderBodySeparatorDefect()
                    self.policy.handle_defect(self._cur, defect)
                    self._input.unreadline(line)
                break
            headers.append(line)
        # Done with the headers, so parse them and figure out what we're
        # supposed to see in the body of the message.
        self._parse_headers(headers)
        # Headers-only parsing is a backwards compatibility hack, which was
        # necessary in the older parser, which could raise errors.  All
        # remaining lines in the input are thrown into the message body.
        if self._headersonly:
            lines = []
            while True:
                line = self._input.readline()
                if line is NeedMoreData:
                    yield NeedMoreData
                    continue
                if line == '':
                    break
                lines.append(line)
            self._cur.set_payload(EMPTYSTRING.join(lines))
            return
        if self._cur.get_content_type() == 'message/delivery-status':
            # message/delivery-status contains blocks of headers separated by
            # a blank line.  We'll represent each header block as a separate
            # nested message object, but the processing is a bit different
            # than standard message/* types because there is no body for the
            # nested messages.  A blank line separates the subparts.
            while True:
                self._input.push_eof_matcher(NLCRE.match)
                for retval in self._parsegen():
                    if retval is NeedMoreData:
                        yield NeedMoreData
                        continue
                    break
                msg = self._pop_message()
                # We need to pop the EOF matcher in order to tell if we're at
                # the end of the current file, not the end of the last block
                # of message headers.
                self._input.pop_eof_matcher()
                # The input stream must be sitting at the newline or at the
                # EOF.  We want to see if we're at the end of this subpart, so
                # first consume the blank line, then test the next line to see
                # if we're at this subpart's EOF.
                while True:
                    line = self._input.readline()
                    if line is NeedMoreData:
                        yield NeedMoreData
                        continue
                    break
                while True:
                    line = self._input.readline()
                    if line is NeedMoreData:
                        yield NeedMoreData
                        continue
                    break
                if line == '':
                    break
                # Not at EOF so this is a line we're going to need.
                self._input.unreadline(line)
            return
        if self._cur.get_content_maintype() == 'message':
            # The message claims to be a message/* type, then what follows is
            # another RFC 2822 message.
            for retval in self._parsegen():
                if retval is NeedMoreData:
                    yield NeedMoreData
                    continue
                break
            self._pop_message()
            return
        if self._cur.get_content_maintype() == 'multipart':
            boundary = self._cur.get_boundary()
            if boundary is None:
                # The message /claims/ to be a multipart but it has not
                # defined a boundary.  That's a problem which we'll handle by
                # reading everything until the EOF and marking the message as
                # defective.
                defect = errors.NoBoundaryInMultipartDefect()
                self.policy.handle_defect(self._cur, defect)
                lines = []
                for line in self._input:
                    if line is NeedMoreData:
                        yield NeedMoreData
                        continue
                    lines.append(line)
                self._cur.set_payload(EMPTYSTRING.join(lines))
                return
            # Make sure a valid content-transfer-encoding was specified
            # per RFC 2045:6.4.
            if (self._cur.get('content-transfer-encoding', '8bit').lower()
                    not in ('7bit', '8bit', 'binary')):
                defect = errors.InvalidMultipartContentTransferEncodingDefect()
                self.policy.handle_defect(self._cur, defect)
            # Create a line match predicate which matches the inter-part
            # boundary as well as the end-of-multipart boundary.  Don't push
            # this onto the input stream until we've scanned past the
            # preamble.
            separator = '--' + boundary
            boundaryre = re.compile(
                '(?P<sep>' + re.escape(separator) +
                r')(?P<end>--)?(?P<ws>[ \t]*)(?P<linesep>\r\n|\r|\n)?$')
            capturing_preamble = True
            preamble = []
            linesep = False
            close_boundary_seen = False
            while True:
                line = self._input.readline()
                if line is NeedMoreData:
                    yield NeedMoreData
                    continue
                if line == '':
                    break
                mo = boundaryre.match(line)
                if mo:
                    # If we're looking at the end boundary, we're done with
                    # this multipart.  If there was a newline at the end of
                    # the closing boundary, then we need to initialize the
                    # epilogue with the empty string (see below).
                    if mo.group('end'):
                        close_boundary_seen = True
                        linesep = mo.group('linesep')
                        break
                    # We saw an inter-part boundary.  Were we in the preamble?
                    if capturing_preamble:
                        if preamble:
                            # According to RFC 2046, the last newline belongs
                            # to the boundary.
                            lastline = preamble[-1]
                            eolmo = NLCRE_eol.search(lastline)
                            if eolmo:
                                preamble[-1] = lastline[:-len(eolmo.group(0))]
                            self._cur.preamble = EMPTYSTRING.join(preamble)
                        capturing_preamble = False
                        self._input.unreadline(line)
                        continue
                    # We saw a boundary separating two parts.  Consume any
                    # multiple boundary lines that may be following.  Our
                    # interpretation of RFC 2046 BNF grammar does not produce
                    # body parts within such double boundaries.
                    while True:
                        line = self._input.readline()
                        if line is NeedMoreData:
                            yield NeedMoreData
                            continue
                        mo = boundaryre.match(line)
                        if not mo:
                            self._input.unreadline(line)
                            break
                    # Recurse to parse this subpart; the input stream points
                    # at the subpart's first line.
                    self._input.push_eof_matcher(boundaryre.match)
                    for retval in self._parsegen():
                        if retval is NeedMoreData:
                            yield NeedMoreData
                            continue
                        break
                    # Because of RFC 2046, the newline preceding the boundary
                    # separator actually belongs to the boundary, not the
                    # previous subpart's payload (or epilogue if the previous
                    # part is a multipart).
                    if self._last.get_content_maintype() == 'multipart':
                        epilogue = self._last.epilogue
                        if epilogue == '':
                            self._last.epilogue = None
                        elif epilogue is not None:
                            mo = NLCRE_eol.search(epilogue)
                            if mo:
                                end = len(mo.group(0))
                                self._last.epilogue = epilogue[:-end]
                    else:
                        payload = self._last._payload
                        if isinstance(payload, str):
                            mo = NLCRE_eol.search(payload)
                            if mo:
                                payload = payload[:-len(mo.group(0))]
                                self._last._payload = payload
                    self._input.pop_eof_matcher()
                    self._pop_message()
                    # Set the multipart up for newline cleansing, which will
                    # happen if we're in a nested multipart.
                    self._last = self._cur
                else:
                    # I think we must be in the preamble
                    assert capturing_preamble
                    preamble.append(line)
            # We've seen either the EOF or the end boundary.  If we're still
            # capturing the preamble, we never saw the start boundary.  Note
            # that as a defect and store the captured text as the payload.
            if capturing_preamble:
                defect = errors.StartBoundaryNotFoundDefect()
                self.policy.handle_defect(self._cur, defect)
                self._cur.set_payload(EMPTYSTRING.join(preamble))
                epilogue = []
                for line in self._input:
                    if line is NeedMoreData:
                        yield NeedMoreData
                        continue
                self._cur.epilogue = EMPTYSTRING.join(epilogue)
                return
            # If we're not processing the preamble, then we might have seen
            # EOF without seeing that end boundary...that is also a defect.
            if not close_boundary_seen:
                defect = errors.CloseBoundaryNotFoundDefect()
                self.policy.handle_defect(self._cur, defect)
                return
            # Everything from here to the EOF is epilogue.  If the end boundary
            # ended in a newline, we'll need to make sure the epilogue isn't
            # None
            if linesep:
                epilogue = ['']
            else:
                epilogue = []
            for line in self._input:
                if line is NeedMoreData:
                    yield NeedMoreData
                    continue
                epilogue.append(line)
            # Any CRLF at the front of the epilogue is not technically part of
            # the epilogue.  Also, watch out for an empty string epilogue,
            # which means a single newline.
            if epilogue:
                firstline = epilogue[0]
                bolmo = NLCRE_bol.match(firstline)
                if bolmo:
                    epilogue[0] = firstline[len(bolmo.group(0)):]
            self._cur.epilogue = EMPTYSTRING.join(epilogue)
            return
        # Otherwise, it's some non-multipart type, so the entire rest of the
        # file contents becomes the payload.
        lines = []
        for line in self._input:
            if line is NeedMoreData:
                yield NeedMoreData
                continue
            lines.append(line)
        self._cur.set_payload(EMPTYSTRING.join(lines))

    def _parse_headers(self, lines):
        # Passed a list of lines that make up the headers for the current msg
        lastheader = ''
        lastvalue = []
        for lineno, line in enumerate(lines):
            # Check for continuation
            if line[0] in ' \t':
                if not lastheader:
                    # The first line of the headers was a continuation.  This
                    # is illegal, so let's note the defect, store the illegal
                    # line, and ignore it for purposes of headers.
                    defect = errors.FirstHeaderLineIsContinuationDefect(line)
                    self.policy.handle_defect(self._cur, defect)
                    continue
                lastvalue.append(line)
                continue
            if lastheader:
                self._cur.set_raw(*self.policy.header_source_parse(lastvalue))
                lastheader, lastvalue = '', []
            # Check for envelope header, i.e. unix-from
            if line.startswith('From '):
                if lineno == 0:
                    # Strip off the trailing newline
                    mo = NLCRE_eol.search(line)
                    if mo:
                        line = line[:-len(mo.group(0))]
                    self._cur.set_unixfrom(line)
                    continue
                elif lineno == len(lines) - 1:
                    # Something looking like a unix-from at the end - it's
                    # probably the first line of the body, so push back the
                    # line and stop.
                    self._input.unreadline(line)
                    return
                else:
                    # Weirdly placed unix-from line.  Note this as a defect
                    # and ignore it.
                    defect = errors.MisplacedEnvelopeHeaderDefect(line)
                    self._cur.defects.append(defect)
                    continue
            # Split the line on the colon separating field name from value.
            # There will always be a colon, because if there wasn't the part of
            # the parser that calls us would have started parsing the body.
            i = line.find(':')
            assert i>0, "_parse_headers fed line with no : and no leading WS"
            lastheader = line[:i]
            lastvalue = [line]
        # Done with all the lines, so handle the last header.
        if lastheader:
            self._cur.set_raw(*self.policy.header_source_parse(lastvalue))


class BytesFeedParser(FeedParser):
    """Like FeedParser, but feed accepts bytes."""

    def feed(self, data):
        super().feed(data.decode('ascii', 'surrogateescape'))
# future/backports/email/generator.py
# Copyright (C) 2001-2010 Python Software Foundation
# Author: Barry Warsaw
# Contact: email-sig@python.org

"""Classes to generate plain text from a message object tree."""
from __future__ import print_function
from __future__ import unicode_literals
from __future__ import division
from __future__ import absolute_import
from future.builtins import super
from future.builtins import str

__all__ = ['Generator', 'DecodedGenerator', 'BytesGenerator']

import re
import sys
import time
import random
import warnings

from io import StringIO, BytesIO
from future.backports.email._policybase import compat32
from future.backports.email.header import Header
from future.backports.email.utils import _has_surrogates
import future.backports.email.charset as _charset

UNDERSCORE = '_'
NL = '\n'  # XXX: no longer used by the code below.

fcre = re.compile(r'^From ', re.MULTILINE)


class Generator(object):
    """Generates output from a Message object tree.

    This basic generator writes the message to the given file object as plain
    text.
    """
    #
    # Public interface
    #

    def __init__(self, outfp, mangle_from_=True, maxheaderlen=None, **_3to2kwargs):
        if 'policy' in _3to2kwargs:
            policy = _3to2kwargs['policy']
            del _3to2kwargs['policy']
        else:
            policy = None
        """Create the generator for message flattening.

        outfp is the output file-like object for writing the message to.  It
        must have a write() method.

        Optional mangle_from_ is a flag that, when True (the default), escapes
        From_ lines in the body of the message by putting a `>' in front of
        them.

        Optional maxheaderlen specifies the longest length for a non-continued
        header.  When a header line is longer (in characters, with tabs
        expanded to 8 spaces) than maxheaderlen, the header will split as
        defined in the Header class.  Set maxheaderlen to zero to disable
        header wrapping.  The default is 78, as recommended (but not required)
        by RFC 2822.

        The policy keyword specifies a policy object that controls a number of
        aspects of the generator's operation.  The default policy maintains
        backward compatibility.
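
        For illustration, the effect of mangle_from_ shown against the
        standard library's email.generator, which this backport tracks
        (the payload text is an arbitrary example):

```python
import io
from email.message import Message
from email.generator import Generator

msg = Message()
msg.set_payload('From here on, escaped\n')
buf = io.StringIO()
Generator(buf, mangle_from_=True).flatten(msg)
# Body lines starting with "From " come out as ">From "
```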

        """
        self._fp = outfp
        self._mangle_from_ = mangle_from_
        self.maxheaderlen = maxheaderlen
        self.policy = policy

    def write(self, s):
        # Just delegate to the file object
        self._fp.write(s)

    def flatten(self, msg, unixfrom=False, linesep=None):
        r"""Print the message object tree rooted at msg to the output file
        specified when the Generator instance was created.

        unixfrom is a flag that forces the printing of a Unix From_ delimiter
        before the first object in the message tree.  If the original message
        has no From_ delimiter, a `standard' one is crafted.  By default, this
        is False to inhibit the printing of any From_ delimiter.

        Note that for subobjects, no From_ line is printed.

        linesep specifies the characters used to indicate a new line in
        the output.  The default value is determined by the policy.
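
        A minimal flatten() sketch using the standard library equivalent
        (the message contents are an arbitrary example):

```python
import io
from email.message import Message
from email.generator import Generator

msg = Message()
msg['Subject'] = 'demo'
msg.set_payload('hello\n')

buf = io.StringIO()
Generator(buf, mangle_from_=False).flatten(msg, unixfrom=False)
# buf now holds the serialized message: headers, blank line, body
```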

        """
        # We use the _XXX constants for operating on data that comes directly
        # from the msg, and _encoded_XXX constants for operating on data that
        # has already been converted (to bytes in the BytesGenerator) and
        # inserted into a temporary buffer.
        policy = msg.policy if self.policy is None else self.policy
        if linesep is not None:
            policy = policy.clone(linesep=linesep)
        if self.maxheaderlen is not None:
            policy = policy.clone(max_line_length=self.maxheaderlen)
        self._NL = policy.linesep
        self._encoded_NL = self._encode(self._NL)
        self._EMPTY = ''
        self._encoded_EMPTY = self._encode('')
        # Because we use clone (below) when we recursively process message
        # subparts, and because clone uses the computed policy (not None),
        # submessages will automatically get set to the computed policy when
        # they are processed by this code.
        old_gen_policy = self.policy
        old_msg_policy = msg.policy
        try:
            self.policy = policy
            msg.policy = policy
            if unixfrom:
                ufrom = msg.get_unixfrom()
                if not ufrom:
                    ufrom = 'From nobody ' + time.ctime(time.time())
                self.write(ufrom + self._NL)
            self._write(msg)
        finally:
            self.policy = old_gen_policy
            msg.policy = old_msg_policy

    def clone(self, fp):
        """Clone this generator with the exact same options."""
        return self.__class__(fp,
                              self._mangle_from_,
                              None, # Use policy setting, which we've adjusted
                              policy=self.policy)

    #
    # Protected interface - undocumented ;/
    #

    # Note that we use 'self.write' when what we are writing is coming from
    # the source, and self._fp.write when what we are writing is coming from a
    # buffer (because the Bytes subclass has already had a chance to transform
    # the data in its write method in that case).  This is an entirely
    # pragmatic split determined by experiment; we could be more general by
    # always using write and having the Bytes subclass write method detect when
    # it has already transformed the input; but, since this whole thing is a
    # hack anyway this seems good enough.

    # Similarly, we have _XXX and _encoded_XXX attributes that are used on
    # source and buffer data, respectively.
    _encoded_EMPTY = ''

    def _new_buffer(self):
        # BytesGenerator overrides this to return BytesIO.
        return StringIO()

    def _encode(self, s):
        # BytesGenerator overrides this to encode strings to bytes.
        return s

    def _write_lines(self, lines):
        # We have to transform the line endings.
        if not lines:
            return
        lines = lines.splitlines(True)
        for line in lines[:-1]:
            self.write(line.rstrip('\r\n'))
            self.write(self._NL)
        laststripped = lines[-1].rstrip('\r\n')
        self.write(laststripped)
        if len(lines[-1]) != len(laststripped):
            self.write(self._NL)

    def _write(self, msg):
        # We can't write the headers yet because of the following scenario:
        # say a multipart message includes the boundary string somewhere in
        # its body.  We'd have to calculate the new boundary /before/ we write
        # the headers so that we can write the correct Content-Type:
        # parameter.
        #
        # The way we do this, so as to make the _handle_*() methods simpler,
        # is to cache any subpart writes into a buffer.  Then we write the
        # headers and the buffer contents.  That way, subpart handlers can
        # Do The Right Thing, and can still modify the Content-Type: header if
        # necessary.
        oldfp = self._fp
        try:
            self._fp = sfp = self._new_buffer()
            self._dispatch(msg)
        finally:
            self._fp = oldfp
        # Write the headers.  First we see if the message object wants to
        # handle that itself.  If not, we'll do it generically.
        meth = getattr(msg, '_write_headers', None)
        if meth is None:
            self._write_headers(msg)
        else:
            meth(self)
        self._fp.write(sfp.getvalue())

    def _dispatch(self, msg):
        # Get the Content-Type: for the message, then try to dispatch to
        # self._handle_<maintype>_<subtype>().  If there's no handler for the
        # full MIME type, then dispatch to self._handle_<maintype>().  If
        # that's missing too, then dispatch to self._writeBody().
        main = msg.get_content_maintype()
        sub = msg.get_content_subtype()
        specific = UNDERSCORE.join((main, sub)).replace('-', '_')
        meth = getattr(self, '_handle_' + specific, None)
        if meth is None:
            generic = main.replace('-', '_')
            meth = getattr(self, '_handle_' + generic, None)
            if meth is None:
                meth = self._writeBody
        meth(msg)

    #
    # Default handlers
    #

    def _write_headers(self, msg):
        for h, v in msg.raw_items():
            self.write(self.policy.fold(h, v))
        # A blank line always separates headers from body
        self.write(self._NL)

    #
    # Handlers for writing types and subtypes
    #

    def _handle_text(self, msg):
        payload = msg.get_payload()
        if payload is None:
            return
        if not isinstance(payload, str):
            raise TypeError('string payload expected: %s' % type(payload))
        if _has_surrogates(msg._payload):
            charset = msg.get_param('charset')
            if charset is not None:
                del msg['content-transfer-encoding']
                msg.set_payload(payload, charset)
                payload = msg.get_payload()
        if self._mangle_from_:
            payload = fcre.sub('>From ', payload)
        self._write_lines(payload)

    # Default body handler
    _writeBody = _handle_text

    def _handle_multipart(self, msg):
        # The trick here is to write out each part separately, merge them all
        # together, and then make sure that the boundary we've chosen isn't
        # present in the payload.
        msgtexts = []
        subparts = msg.get_payload()
        if subparts is None:
            subparts = []
        elif isinstance(subparts, str):
            # e.g. a non-strict parse of a message with no starting boundary.
            self.write(subparts)
            return
        elif not isinstance(subparts, list):
            # Scalar payload
            subparts = [subparts]
        for part in subparts:
            s = self._new_buffer()
            g = self.clone(s)
            g.flatten(part, unixfrom=False, linesep=self._NL)
            msgtexts.append(s.getvalue())
        # BAW: What about boundaries that are wrapped in double-quotes?
        boundary = msg.get_boundary()
        if not boundary:
            # Create a boundary that doesn't appear in any of the
            # message texts.
            alltext = self._encoded_NL.join(msgtexts)
            boundary = self._make_boundary(alltext)
            msg.set_boundary(boundary)
        # If there's a preamble, write it out, with a trailing CRLF
        if msg.preamble is not None:
            if self._mangle_from_:
                preamble = fcre.sub('>From ', msg.preamble)
            else:
                preamble = msg.preamble
            self._write_lines(preamble)
            self.write(self._NL)
        # dash-boundary transport-padding CRLF
        self.write('--' + boundary + self._NL)
        # body-part
        if msgtexts:
            self._fp.write(msgtexts.pop(0))
        # *encapsulation
        # --> delimiter transport-padding
        # --> CRLF body-part
        for body_part in msgtexts:
            # delimiter transport-padding CRLF
            self.write(self._NL + '--' + boundary + self._NL)
            # body-part
            self._fp.write(body_part)
        # close-delimiter transport-padding
        self.write(self._NL + '--' + boundary + '--')
        if msg.epilogue is not None:
            self.write(self._NL)
            if self._mangle_from_:
                epilogue = fcre.sub('>From ', msg.epilogue)
            else:
                epilogue = msg.epilogue
            self._write_lines(epilogue)

    def _handle_multipart_signed(self, msg):
        # The contents of signed parts have to stay unmodified in order to
        # keep the signature intact per RFC 1847 section 2.1, so we disable
        # header wrapping.
        # RDM: This isn't enough to completely preserve the part, but it helps.
        p = self.policy
        self.policy = p.clone(max_line_length=0)
        try:
            self._handle_multipart(msg)
        finally:
            self.policy = p

    def _handle_message_delivery_status(self, msg):
        # We can't just write the headers directly to self's file object
        # because this will leave an extra newline between the last header
        # block and the boundary.  Sigh.
        blocks = []
        for part in msg.get_payload():
            s = self._new_buffer()
            g = self.clone(s)
            g.flatten(part, unixfrom=False, linesep=self._NL)
            text = s.getvalue()
            lines = text.split(self._encoded_NL)
            # Strip off the unnecessary trailing empty line
            if lines and lines[-1] == self._encoded_EMPTY:
                blocks.append(self._encoded_NL.join(lines[:-1]))
            else:
                blocks.append(text)
        # Now join all the blocks with an empty line.  This has the lovely
        # effect of separating each block with an empty line, but not adding
        # an extra one after the last one.
        self._fp.write(self._encoded_NL.join(blocks))

    def _handle_message(self, msg):
        s = self._new_buffer()
        g = self.clone(s)
        # The payload of a message/rfc822 part should be a multipart sequence
        # of length 1.  The zeroth element of the list should be the Message
        # object for the subpart.  Extract that object, stringify it, and
        # write it out.
        # Except, it turns out, when it's a string instead, which happens when
        # and only when HeaderParser is used on a message of mime type
        # message/rfc822.  Such messages are generated by, for example,
        # Groupwise when forwarding unadorned messages.  (Issue 7970.)  So
        # in that case we just emit the string body.
        payload = msg._payload
        if isinstance(payload, list):
            g.flatten(msg.get_payload(0), unixfrom=False, linesep=self._NL)
            payload = s.getvalue()
        else:
            payload = self._encode(payload)
        self._fp.write(payload)

    # This used to be a module level function; we use a classmethod for this
    # and _compile_re so we can continue to provide the module level function
    # for backward compatibility by doing
    #   _make_boundary = Generator._make_boundary
    # at the end of the module.  It *is* internal, so we could drop that...
    @classmethod
    def _make_boundary(cls, text=None):
        # Craft a random boundary.  If text is given, ensure that the chosen
        # boundary doesn't appear in the text.
        token = random.randrange(sys.maxsize)
        boundary = ('=' * 15) + (_fmt % token) + '=='
        if text is None:
            return boundary
        b = boundary
        counter = 0
        while True:
            cre = cls._compile_re('^--' + re.escape(b) + '(--)?$', re.MULTILINE)
            if not cre.search(text):
                break
            b = boundary + '.' + str(counter)
            counter += 1
        return b

    @classmethod
    def _compile_re(cls, s, flags):
        return re.compile(s, flags)

class BytesGenerator(Generator):
    """Generates a bytes version of a Message object tree.

    Functionally identical to the base Generator except that the output is
    bytes and not string.  When surrogates were used in the input to encode
    bytes, these are decoded back to bytes for output.  If the policy has
    cte_type set to 7bit, then the message is transformed such that the
    non-ASCII bytes are properly content transfer encoded, using the charset
    unknown-8bit.

    The outfp object must accept bytes in its write method.
    """

    # Bytes version of this constant for use in manipulating data from
    # the BytesIO buffer.
    _encoded_EMPTY = b''

    def write(self, s):
        self._fp.write(str(s).encode('ascii', 'surrogateescape'))

    def _new_buffer(self):
        return BytesIO()

    def _encode(self, s):
        return s.encode('ascii')

    def _write_headers(self, msg):
        # This is almost the same as the string version, except for handling
        # strings with 8bit bytes.
        for h, v in msg.raw_items():
            self._fp.write(self.policy.fold_binary(h, v))
        # A blank line always separates headers from body
        self.write(self._NL)

    def _handle_text(self, msg):
        # If the string has surrogates the original source was bytes, so
        # just write it back out.
        if msg._payload is None:
            return
        if _has_surrogates(msg._payload) and self.policy.cte_type != '7bit':
            if self._mangle_from_:
                msg._payload = fcre.sub(">From ", msg._payload)
            self._write_lines(msg._payload)
        else:
            super(BytesGenerator,self)._handle_text(msg)

    # Default body handler
    _writeBody = _handle_text

    @classmethod
    def _compile_re(cls, s, flags):
        return re.compile(s.encode('ascii'), flags)


_FMT = '[Non-text (%(type)s) part of message omitted, filename %(filename)s]'

class DecodedGenerator(Generator):
    """Generates a text representation of a message.

    Like the Generator base class, except that non-text parts are substituted
    with a format string representing the part.
    """
    def __init__(self, outfp, mangle_from_=True, maxheaderlen=78, fmt=None):
        """Like Generator.__init__() except that an additional optional
        argument is allowed.

        Walks through all subparts of a message.  If the subpart is of main
        type `text', then it prints the decoded payload of the subpart.

        Otherwise, fmt is a format string that is used instead of the message
        payload.  fmt is expanded with the following keywords (in
        %(keyword)s format):

        type       : Full MIME type of the non-text part
        maintype   : Main MIME type of the non-text part
        subtype    : Sub-MIME type of the non-text part
        filename   : Filename of the non-text part
        description: Description associated with the non-text part
        encoding   : Content transfer encoding of the non-text part

        The default value for fmt is None, meaning

        [Non-text (%(type)s) part of message omitted, filename %(filename)s]
        """
        Generator.__init__(self, outfp, mangle_from_, maxheaderlen)
        if fmt is None:
            self._fmt = _FMT
        else:
            self._fmt = fmt

    def _dispatch(self, msg):
        for part in msg.walk():
            maintype = part.get_content_maintype()
            if maintype == 'text':
                print(part.get_payload(decode=False), file=self)
            elif maintype == 'multipart':
                # Just skip this
                pass
            else:
                print(self._fmt % {
                    'type'       : part.get_content_type(),
                    'maintype'   : part.get_content_maintype(),
                    'subtype'    : part.get_content_subtype(),
                    'filename'   : part.get_filename('[no filename]'),
                    'description': part.get('Content-Description',
                                            '[no description]'),
                    'encoding'   : part.get('Content-Transfer-Encoding',
                                            '[no encoding]'),
                    }, file=self)


# Helper used by Generator._make_boundary
_width = len(repr(sys.maxsize-1))
_fmt = '%%0%dd' % _width

# Backward compatibility
_make_boundary = Generator._make_boundary
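# A minimal sketch of the Generator behavior implemented above, using the
# standard-library email package (the upstream of this backport; the class
# and method names match).  It shows _handle_multipart stamping a boundary
# generated by _make_boundary onto a message that had none.
#
# ```python
# from email.mime.multipart import MIMEMultipart
# from email.mime.text import MIMEText
# from email.generator import Generator
# from io import StringIO
#
# msg = MIMEMultipart()          # no boundary= parameter given
# msg.attach(MIMEText("hello"))
# msg.attach(MIMEText("world"))
#
# buf = StringIO()
# Generator(buf, mangle_from_=False).flatten(msg)
# flat = buf.getvalue()
#
# # flatten() called _make_boundary() and stamped the result on the
# # message, since no boundary was present beforehand.
# boundary = msg.get_boundary()
# assert boundary is not None
# # Two dash-boundaries plus the close-delimiter appear in the output.
# assert flat.count('--' + boundary) >= 3
# ```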
# ===== future/backports/email/__init__.py =====
# Copyright (C) 2001-2007 Python Software Foundation
# Author: Barry Warsaw
# Contact: email-sig@python.org

"""
Backport of the Python 3.3 email package for Python-Future.

A package for parsing, handling, and generating email messages.
"""
from __future__ import unicode_literals
from __future__ import division
from __future__ import absolute_import

# Install the surrogate escape handler here because this is used by many
# modules in the email package.
from future.utils import surrogateescape
surrogateescape.register_surrogateescape()
# (Should this be done globally by ``future``?)


__version__ = '5.1.0'

__all__ = [
    'base64mime',
    'charset',
    'encoders',
    'errors',
    'feedparser',
    'generator',
    'header',
    'iterators',
    'message',
    'message_from_file',
    'message_from_binary_file',
    'message_from_string',
    'message_from_bytes',
    'mime',
    'parser',
    'quoprimime',
    'utils',
    ]



# Some convenience routines.  Don't import Parser and Message as side-effects
# of importing email since those cascadingly import most of the rest of the
# email package.
def message_from_string(s, *args, **kws):
    """Parse a string into a Message object model.

    Optional _class and strict are passed to the Parser constructor.
    """
    from future.backports.email.parser import Parser
    return Parser(*args, **kws).parsestr(s)

def message_from_bytes(s, *args, **kws):
    """Parse a bytes string into a Message object model.

    Optional _class and strict are passed to the Parser constructor.
    """
    from future.backports.email.parser import BytesParser
    return BytesParser(*args, **kws).parsebytes(s)

def message_from_file(fp, *args, **kws):
    """Read a file and parse its contents into a Message object model.

    Optional _class and strict are passed to the Parser constructor.
    """
    from future.backports.email.parser import Parser
    return Parser(*args, **kws).parse(fp)

def message_from_binary_file(fp, *args, **kws):
    """Read a binary file and parse its contents into a Message object model.

    Optional _class and strict are passed to the Parser constructor.
    """
    from future.backports.email.parser import BytesParser
    return BytesParser(*args, **kws).parse(fp)
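# A quick sketch of the convenience routines defined above, via their
# standard-library equivalents (this backport exposes the same signatures).
#
# ```python
# import email
#
# raw = "From: a@example.com\nSubject: hi\n\nbody text\n"
# msg = email.message_from_string(raw)
# assert msg['Subject'] == 'hi'
# assert msg.get_payload() == 'body text\n'
#
# # The bytes variant dispatches to BytesParser instead of Parser.
# bmsg = email.message_from_bytes(raw.encode('ascii'))
# assert bmsg['From'] == 'a@example.com'
# ```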
# ===== future/backports/email/_policybase.py =====
"""Policy framework for the email package.

Allows fine grained feature control of how the package parses and emits data.
"""
from __future__ import unicode_literals
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from future.builtins import super
from future.builtins import str
from future.utils import with_metaclass

import abc
from future.backports.email import header
from future.backports.email import charset as _charset
from future.backports.email.utils import _has_surrogates

__all__ = [
    'Policy',
    'Compat32',
    'compat32',
    ]


class _PolicyBase(object):

    """Policy Object basic framework.

    This class is useless unless subclassed.  A subclass should define
    class attributes with defaults for any values that are to be
    managed by the Policy object.  The constructor will then allow
    non-default values to be set for these attributes at instance
    creation time.  The instance will be callable, taking these same
    attributes as keyword arguments, and returning a new instance
    identical to the called instance except for those values changed
    by the keyword arguments.  Instances may be added, yielding new
    instances with any non-default values from the right hand
    operand overriding those in the left hand operand.  That is,

        A + B == A(<non-default values of B>)

    The repr of an instance can be used to reconstruct the object
    if and only if the repr of the values can be used to reconstruct
    those values.

    """

    def __init__(self, **kw):
        """Create new Policy, possibly overriding some defaults.

        See class docstring for a list of overridable attributes.

        """
        for name, value in kw.items():
            if hasattr(self, name):
                super(_PolicyBase,self).__setattr__(name, value)
            else:
                raise TypeError(
                    "{!r} is an invalid keyword argument for {}".format(
                        name, self.__class__.__name__))

    def __repr__(self):
        args = [ "{}={!r}".format(name, value)
                 for name, value in self.__dict__.items() ]
        return "{}({})".format(self.__class__.__name__, ', '.join(args))

    def clone(self, **kw):
        """Return a new instance with specified attributes changed.

        The new instance has the same attribute values as the current object,
        except for the changes passed in as keyword arguments.

        """
        newpolicy = self.__class__.__new__(self.__class__)
        for attr, value in self.__dict__.items():
            object.__setattr__(newpolicy, attr, value)
        for attr, value in kw.items():
            if not hasattr(self, attr):
                raise TypeError(
                    "{!r} is an invalid keyword argument for {}".format(
                        attr, self.__class__.__name__))
            object.__setattr__(newpolicy, attr, value)
        return newpolicy

    def __setattr__(self, name, value):
        if hasattr(self, name):
            msg = "{!r} object attribute {!r} is read-only"
        else:
            msg = "{!r} object has no attribute {!r}"
        raise AttributeError(msg.format(self.__class__.__name__, name))

    def __add__(self, other):
        """Non-default values from right operand override those from left.

        The object returned is a new instance of the subclass.

        """
        return self.clone(**other.__dict__)


def _append_doc(doc, added_doc):
    doc = doc.rsplit('\n', 1)[0]
    added_doc = added_doc.split('\n', 1)[1]
    return doc + '\n' + added_doc

def _extend_docstrings(cls):
    if cls.__doc__ and cls.__doc__.startswith('+'):
        cls.__doc__ = _append_doc(cls.__bases__[0].__doc__, cls.__doc__)
    for name, attr in cls.__dict__.items():
        if attr.__doc__ and attr.__doc__.startswith('+'):
            for c in (c for base in cls.__bases__ for c in base.mro()):
                doc = getattr(getattr(c, name), '__doc__')
                if doc:
                    attr.__doc__ = _append_doc(doc, attr.__doc__)
                    break
    return cls


class Policy(with_metaclass(abc.ABCMeta, _PolicyBase)):

    r"""Controls for how messages are interpreted and formatted.

    Most of the classes and many of the methods in the email package accept
    Policy objects as parameters.  A Policy object contains a set of values and
    functions that control how input is interpreted and how output is rendered.
    For example, the parameter 'raise_on_defect' controls whether an RFC
    violation results in an error being raised, while 'max_line_length'
    controls the maximum length of output lines when a Message is serialized.

    Any valid attribute may be overridden when a Policy is created by passing
    it as a keyword argument to the constructor.  Policy objects are immutable,
    but a new Policy object can be created with only certain values changed by
    calling the Policy instance with keyword arguments.  Policy objects can
    also be added, producing a new Policy object in which the non-default
    attributes set in the right hand operand overwrite those specified in the
    left operand.

    Settable attributes:

    raise_on_defect     -- If true, then defects should be raised as errors.
                           Default: False.

    linesep             -- string containing the value to use as separation
                           between output lines.  Default '\n'.

    cte_type            -- Type of allowed content transfer encodings

                           7bit  -- ASCII only
                           8bit  -- Content-Transfer-Encoding: 8bit is allowed

                           Default: 8bit.  Also controls the disposition of
                           (RFC invalid) binary data in headers; see the
                           documentation of the fold_binary method.

    max_line_length     -- maximum length of lines, excluding 'linesep',
                           during serialization.  None or 0 means no line
                           wrapping is done.  Default is 78.

    """

    raise_on_defect = False
    linesep = '\n'
    cte_type = '8bit'
    max_line_length = 78

    def handle_defect(self, obj, defect):
        """Based on policy, either raise defect or call register_defect.

            handle_defect(obj, defect)

        defect should be a Defect subclass, but in any case must be an
        Exception subclass.  obj is the object on which the defect should be
        registered if it is not raised.  If the raise_on_defect is True, the
        defect is raised as an error, otherwise the object and the defect are
        passed to register_defect.

        This method is intended to be called by parsers that discover defects.
        The email package parsers always call it with Defect instances.

        """
        if self.raise_on_defect:
            raise defect
        self.register_defect(obj, defect)

    def register_defect(self, obj, defect):
        """Record 'defect' on 'obj'.

        Called by handle_defect if raise_on_defect is False.  This method is
        part of the Policy API so that Policy subclasses can implement custom
        defect handling.  The default implementation calls the append method of
        the defects attribute of obj.  The objects used by the email package by
        default that get passed to this method will always have a defects
        attribute with an append method.

        """
        obj.defects.append(defect)

    def header_max_count(self, name):
        """Return the maximum allowed number of headers named 'name'.

        Called when a header is added to a Message object.  If the returned
        value is not 0 or None, and the number of headers already present with
        the name 'name' equals the returned value, a ValueError is raised.

        Because the default behavior of Message's __setitem__ is to append the
        value to the list of headers, it is easy to create duplicate headers
        without realizing it.  This method allows certain headers to be limited
        in the number of instances of that header that may be added to a
        Message programmatically.  (The limit is not observed by the parser,
        which will faithfully produce as many headers as exist in the message
        being parsed.)

        The default implementation returns None for all header names.
        """
        return None

    @abc.abstractmethod
    def header_source_parse(self, sourcelines):
        """Given a list of linesep terminated strings constituting the lines of
        a single header, return the (name, value) tuple that should be stored
        in the model.  The input lines should retain their terminating linesep
        characters.  The lines passed in by the email package may contain
        surrogateescaped binary data.
        """
        raise NotImplementedError

    @abc.abstractmethod
    def header_store_parse(self, name, value):
        """Given the header name and the value provided by the application
        program, return the (name, value) that should be stored in the model.
        """
        raise NotImplementedError

    @abc.abstractmethod
    def header_fetch_parse(self, name, value):
        """Given the header name and the value from the model, return the value
        to be returned to the application program that is requesting that
        header.  The value passed in by the email package may contain
        surrogateescaped binary data if the lines were parsed by a BytesParser.
        The returned value should not contain any surrogateescaped data.

        """
        raise NotImplementedError

    @abc.abstractmethod
    def fold(self, name, value):
        """Given the header name and the value from the model, return a string
        containing linesep characters that implement the folding of the header
        according to the policy controls.  The value passed in by the email
        package may contain surrogateescaped binary data if the lines were
        parsed by a BytesParser.  The returned value should not contain any
        surrogateescaped data.

        """
        raise NotImplementedError

    @abc.abstractmethod
    def fold_binary(self, name, value):
        """Given the header name and the value from the model, return binary
        data containing linesep characters that implement the folding of the
        header according to the policy controls.  The value passed in by the
        email package may contain surrogateescaped binary data.

        """
        raise NotImplementedError


@_extend_docstrings
class Compat32(Policy):

    """+
    This particular policy is the backward compatibility Policy.  It
    replicates the behavior of the email package version 5.1.
    """

    def _sanitize_header(self, name, value):
        # If the header value contains surrogates, return a Header using
        # the unknown-8bit charset to encode the bytes as encoded words.
        if not isinstance(value, str):
            # Assume it is already a header object
            return value
        if _has_surrogates(value):
            return header.Header(value, charset=_charset.UNKNOWN8BIT,
                                 header_name=name)
        else:
            return value

    def header_source_parse(self, sourcelines):
        """+
        The name is parsed as everything up to the ':' and returned unmodified.
        The value is determined by stripping leading whitespace off the
        remainder of the first line, joining all subsequent lines together, and
        stripping any trailing carriage return or linefeed characters.

        """
        name, value = sourcelines[0].split(':', 1)
        value = value.lstrip(' \t') + ''.join(sourcelines[1:])
        return (name, value.rstrip('\r\n'))

    def header_store_parse(self, name, value):
        """+
        The name and value are returned unmodified.
        """
        return (name, value)

    def header_fetch_parse(self, name, value):
        """+
        If the value contains binary data, it is converted into a Header object
        using the unknown-8bit charset.  Otherwise it is returned unmodified.
        """
        return self._sanitize_header(name, value)

    def fold(self, name, value):
        """+
        Headers are folded using the Header folding algorithm, which preserves
        existing line breaks in the value, and wraps each resulting line to the
        max_line_length.  Non-ASCII binary data are CTE encoded using the
        unknown-8bit charset.

        """
        return self._fold(name, value, sanitize=True)

    def fold_binary(self, name, value):
        """+
        Headers are folded using the Header folding algorithm, which preserves
        existing line breaks in the value, and wraps each resulting line to the
        max_line_length.  If cte_type is 7bit, non-ascii binary data is CTE
        encoded using the unknown-8bit charset.  Otherwise the original source
        header is used, with its existing line breaks and/or binary data.

        """
        folded = self._fold(name, value, sanitize=self.cte_type=='7bit')
        return folded.encode('ascii', 'surrogateescape')

    def _fold(self, name, value, sanitize):
        parts = []
        parts.append('%s: ' % name)
        if isinstance(value, str):
            if _has_surrogates(value):
                if sanitize:
                    h = header.Header(value,
                                      charset=_charset.UNKNOWN8BIT,
                                      header_name=name)
                else:
                    # If we have raw 8bit data in a byte string, we have no idea
                    # what the encoding is.  There is no safe way to split this
                    # string.  If it's ascii-subset, then we could do a normal
                    # ascii split, but if it's multibyte then we could break the
                    # string.  There's no way to know so the least harm seems to
                    # be to not split the string and risk it being too long.
                    parts.append(value)
                    h = None
            else:
                h = header.Header(value, header_name=name)
        else:
            # Assume it is a Header-like object.
            h = value
        if h is not None:
            parts.append(h.encode(linesep=self.linesep,
                                  maxlinelen=self.max_line_length))
        parts.append(self.linesep)
        return ''.join(parts)


compat32 = Compat32()
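# A sketch of the _PolicyBase semantics described above (immutability,
# clone(), and '+' merging), exercised through the stdlib compat32
# instance, which this backport mirrors.
#
# ```python
# from email import policy
#
# p = policy.compat32
# q = p.clone(max_line_length=0, linesep='\r\n')
# assert q.max_line_length == 0 and q.linesep == '\r\n'
# assert p.max_line_length == 78        # the original is unchanged
#
# # Instances are immutable: direct assignment raises AttributeError.
# try:
#     p.linesep = '\r\n'
# except AttributeError:
#     pass
#
# # A + B: non-default values of the right operand override the left.
# r = p + q
# assert r.linesep == '\r\n' and r.max_line_length == 0
# ```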
# ===== future/backports/email/iterators.py =====
# Copyright (C) 2001-2006 Python Software Foundation
# Author: Barry Warsaw
# Contact: email-sig@python.org

"""Various types of useful iterators and generators."""
from __future__ import print_function
from __future__ import unicode_literals
from __future__ import division
from __future__ import absolute_import

__all__ = [
    'body_line_iterator',
    'typed_subpart_iterator',
    'walk',
    # Do not include _structure() since it's part of the debugging API.
    ]

import sys
from io import StringIO


# This function will become a method of the Message class
def walk(self):
    """Walk over the message tree, yielding each subpart.

    The walk is performed in depth-first order.  This method is a
    generator.
    """
    yield self
    if self.is_multipart():
        for subpart in self.get_payload():
            for subsubpart in subpart.walk():
                yield subsubpart


# These two functions are imported into the Iterators.py interface module.
def body_line_iterator(msg, decode=False):
    """Iterate over the parts, returning string payloads line-by-line.

    Optional decode (default False) is passed through to .get_payload().
    """
    for subpart in msg.walk():
        payload = subpart.get_payload(decode=decode)
        if isinstance(payload, str):
            for line in StringIO(payload):
                yield line


def typed_subpart_iterator(msg, maintype='text', subtype=None):
    """Iterate over the subparts with a given MIME type.

    Use `maintype' as the main MIME type to match against; this defaults to
    "text".  Optional `subtype' is the MIME subtype to match against; if
    omitted, only the main type is matched.
    """
    for subpart in msg.walk():
        if subpart.get_content_maintype() == maintype:
            if subtype is None or subpart.get_content_subtype() == subtype:
                yield subpart


def _structure(msg, fp=None, level=0, include_default=False):
    """A handy debugging aid"""
    if fp is None:
        fp = sys.stdout
    tab = ' ' * (level * 4)
    print(tab + msg.get_content_type(), end='', file=fp)
    if include_default:
        print(' [%s]' % msg.get_default_type(), file=fp)
    else:
        print(file=fp)
    if msg.is_multipart():
        for subpart in msg.get_payload():
            _structure(subpart, fp, level+1, include_default)
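# A quick sketch of the iterators above via their stdlib counterparts
# (same function names and semantics as this backport).
#
# ```python
# from email.iterators import typed_subpart_iterator, body_line_iterator
# from email.mime.multipart import MIMEMultipart
# from email.mime.text import MIMEText
#
# msg = MIMEMultipart()
# msg.attach(MIMEText("line one\nline two\n"))
# msg.attach(MIMEText("<p>html</p>", 'html'))
#
# # walk() yields the container itself first, then subparts depth-first.
# types = [part.get_content_type() for part in msg.walk()]
# assert types[0] == 'multipart/mixed'
#
# # With a subtype given, only the text/plain subpart matches.
# plains = list(typed_subpart_iterator(msg, 'text', 'plain'))
# assert len(plains) == 1
#
# # body_line_iterator skips the multipart (list payload) and yields
# # the string payloads of the leaves line by line.
# lines = list(body_line_iterator(msg))
# assert 'line one\n' in lines
# ```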
# ===== future/backports/email/_encoded_words.py =====
""" Routines for manipulating RFC2047 encoded words.

This is currently a package-private API, but will be considered for promotion
to a public API if there is demand.

"""
from __future__ import unicode_literals
from __future__ import division
from __future__ import absolute_import
from future.builtins import bytes
from future.builtins import chr
from future.builtins import int
from future.builtins import str

# An encoded word looks like this:
#
#        =?charset[*lang]?cte?encoded_string?=
#
# for more information about charset see the charset module.  Here it is one
# of the preferred MIME charset names (hopefully; you never know when parsing).
# cte (Content Transfer Encoding) is either 'q' or 'b' (ignoring case).  In
# theory other letters could be used for other encodings, but in practice this
# (almost?) never happens.  There could be a public API for adding entries
# to the CTE tables, but YAGNI for now.  'q' is Quoted Printable, 'b' is
# Base64.  The meaning of encoded_string should be obvious.  'lang' is optional
# as indicated by the brackets (they are not part of the syntax) but is almost
# never encountered in practice.
#
# The general interface for a CTE decoder is that it takes the encoded_string
# as its argument, and returns a tuple (cte_decoded_string, defects).  The
# cte_decoded_string is the original binary that was encoded using the
# specified cte.  'defects' is a list of MessageDefect instances indicating any
# problems encountered during conversion.  'charset' and 'lang' are the
# corresponding strings extracted from the EW, case preserved.
#
# The general interface for a CTE encoder is that it takes a binary sequence
# as input and returns the cte_encoded_string, which is an ascii-only string.
#
# Each decoder must also supply a length function that takes the binary
# sequence as its argument and returns the length of the resulting encoded
# string.
#
# The main API functions for the module are decode, which calls the decoder
# referenced by the cte specifier, and encode, which adds the appropriate
# RFC 2047 "chrome" to the encoded string, and can optionally automatically
# select the shortest possible encoding.  See their docstrings below for
# details.
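# The =?charset?cte?encoded_string?= format described above can be
# exercised with the stdlib header decoder, which dispatches on the
# 'q'/'b' cte specifier just as this module's decode() does.
#
# ```python
# from email.header import decode_header
#
# # Quoted-Printable ('q') encoded word:
# raw, charset = decode_header('=?utf-8?q?caf=C3=A9?=')[0]
# assert raw.decode(charset) == 'café'
#
# # Base64 ('b') encoded word carrying the same bytes:
# raw, charset = decode_header('=?utf-8?b?Y2Fmw6k=?=')[0]
# assert raw.decode(charset) == 'café'
# ```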

import re
import base64
import binascii
import functools
from string import ascii_letters, digits
from future.backports.email import errors

__all__ = ['decode_q',
           'encode_q',
           'decode_b',
           'encode_b',
           'len_q',
           'len_b',
           'decode',
           'encode',
           ]

#
# Quoted Printable
#

# regex based decoder.
_q_byte_subber = functools.partial(re.compile(br'=([a-fA-F0-9]{2})').sub,
        lambda m: bytes([int(m.group(1), 16)]))

def decode_q(encoded):
    encoded = bytes(encoded.replace(b'_', b' '))
    return _q_byte_subber(encoded), []

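# A standalone re-creation of the regex trick used in decode_q above:
# '_' is mapped back to a space, then every =XX escape is substituted
# with its single raw byte.
#
# ```python
# import functools
# import re
#
# _subber = functools.partial(
#     re.compile(br'=([a-fA-F0-9]{2})').sub,
#     lambda m: bytes([int(m.group(1), 16)]))
#
# def q_decode(encoded):
#     # Headers map spaces to '_'; undo that before expanding escapes.
#     return _subber(encoded.replace(b'_', b' '))
#
# assert q_decode(b'caf=C3=A9') == b'caf\xc3\xa9'
# assert q_decode(b'hello_world') == b'hello world'
# ```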

# dict mapping bytes to their encoded form
class _QByteMap(dict):

    safe = bytes(b'-!*+/' + ascii_letters.encode('ascii') + digits.encode('ascii'))

    def __missing__(self, key):
        if key in self.safe:
            self[key] = chr(key)
        else:
            self[key] = "={:02X}".format(key)
        return self[key]

_q_byte_map = _QByteMap()

# In headers spaces are mapped to '_'.
_q_byte_map[ord(' ')] = '_'

def encode_q(bstring):
    return str(''.join(_q_byte_map[x] for x in bytes(bstring)))

def len_q(bstring):
    return sum(len(_q_byte_map[x]) for x in bytes(bstring))


#
# Base64
#

def decode_b(encoded):
    defects = []
    pad_err = len(encoded) % 4
    if pad_err:
        defects.append(errors.InvalidBase64PaddingDefect())
        padded_encoded = encoded + b'==='[:4-pad_err]
    else:
        padded_encoded = encoded
    try:
        # The validate kwarg to b64decode is not supported in Py2.x
        if not re.match(b'^[A-Za-z0-9+/]*={0,2}$', padded_encoded):
            raise binascii.Error('Non-base64 digit found')
        return base64.b64decode(padded_encoded), defects
    except binascii.Error:
        # Since we had correct padding, this must be an invalid char error.
        defects = [errors.InvalidBase64CharactersDefect()]
        # The non-alphabet characters are ignored as far as padding
        # goes, but we don't know how many there are.  So we'll just
        # try various padding lengths until something works.
        for i in 0, 1, 2, 3:
            try:
                return base64.b64decode(encoded+b'='*i), defects
            except (binascii.Error, TypeError):    # Py2 raises a TypeError
                if i==0:
                    defects.append(errors.InvalidBase64PaddingDefect())
        else:
            # This should never happen.
            raise AssertionError("unexpected binascii.Error")
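# The padding repair above can be shown in isolation: a base64 string whose
# trailing '=' padding was stripped is topped back up to a multiple of four
# characters before decoding.

```python
import base64

encoded = b'Q2Fmw6k'            # 'Q2Fmw6k=' with its padding stripped
pad_err = len(encoded) % 4      # 3 -> one '=' needed
padded = encoded + b'==='[:4 - pad_err]
print(base64.b64decode(padded))  # b'Caf\xc3\xa9'
```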

def encode_b(bstring):
    return base64.b64encode(bstring).decode('ascii')

def len_b(bstring):
    groups_of_3, leftover = divmod(len(bstring), 3)
    # 4 bytes out for each 3 bytes (or nonzero fraction thereof) in.
    return groups_of_3 * 4 + (4 if leftover else 0)
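# Sanity check for the length formula: base64 emits 4 output characters for
# every 3 input bytes, and any nonzero remainder still costs a full 4.

```python
import base64

def len_b(bstring):
    groups_of_3, leftover = divmod(len(bstring), 3)
    return groups_of_3 * 4 + (4 if leftover else 0)

# The formula agrees with the actual encoder for every remainder class.
for payload in (b'', b'a', b'ab', b'abc', b'abcd'):
    assert len_b(payload) == len(base64.b64encode(payload))
```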


_cte_decoders = {
    'q': decode_q,
    'b': decode_b,
    }

def decode(ew):
    """Decode encoded word and return (string, charset, lang, defects) tuple.

    An RFC 2047/2243 encoded word has the form:

        =?charset*lang?cte?encoded_string?=

    where '*lang' may be omitted but the other parts may not be.

    This function expects exactly such a string (that is, it does not check the
    syntax and may raise errors if the string is not well formed), and returns
    the encoded_string decoded first from its Content Transfer Encoding and
    then from the resulting bytes into unicode using the specified charset.  If
    the cte-decoded string does not successfully decode using the specified
    character set, a defect is added to the defects list and the unknown octets
    are replaced by the unicode 'unknown' character \uFDFF.

    The specified charset and language are returned.  The default for language,
    which is rarely if ever encountered, is the empty string.

    """
    _, charset, cte, cte_string, _ = str(ew).split('?')
    charset, _, lang = charset.partition('*')
    cte = cte.lower()
    # Recover the original bytes and do CTE decoding.
    bstring = cte_string.encode('ascii', 'surrogateescape')
    bstring, defects = _cte_decoders[cte](bstring)
    # Turn the CTE decoded bytes into unicode.
    try:
        string = bstring.decode(charset)
    except UnicodeError:
        defects.append(errors.UndecodableBytesDefect("Encoded word "
            "contains bytes not decodable using {} charset".format(charset)))
        string = bstring.decode(charset, 'surrogateescape')
    except LookupError:
        string = bstring.decode('ascii', 'surrogateescape')
        if charset.lower() != 'unknown-8bit':
            defects.append(errors.CharsetError("Unknown charset {} "
                "in encoded word; decoded as unknown bytes".format(charset)))
    return string, charset, lang, defects
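# The split/partition mechanics in decode() can be walked through by hand on
# a sample encoded word (the word itself is a made-up example: base64 for
# 'Café' tagged with an RFC 2243 language of 'en').

```python
import base64

ew = '=?utf-8*en?b?Q2Fmw6k=?='
_, charset, cte, cte_string, _ = ew.split('?')
charset, _, lang = charset.partition('*')
print(charset, lang, cte)                            # utf-8 en b
print(base64.b64decode(cte_string).decode(charset))  # Café
```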


_cte_encoders = {
    'q': encode_q,
    'b': encode_b,
    }

_cte_encode_length = {
    'q': len_q,
    'b': len_b,
    }

def encode(string, charset='utf-8', encoding=None, lang=''):
    """Encode string using the CTE encoding that produces the shorter result.

    Produces an RFC 2047/2243 encoded word of the form:

        =?charset*lang?cte?encoded_string?=

    where '*lang' is omitted unless the 'lang' parameter is given a value.
    Optional argument charset (defaults to utf-8) specifies the charset to use
    to encode the string to binary before CTE encoding it.  Optional argument
    'encoding' is the cte specifier for the encoding that should be used ('q'
    or 'b'); if it is None (the default) the encoding which produces the
    shortest encoded sequence is used, except that 'q' is preferred if it is up
    to five characters longer.  Optional argument 'lang' (default '') gives the
    RFC 2243 language string to specify in the encoded word.

    """
    string = str(string)
    if charset == 'unknown-8bit':
        bstring = string.encode('ascii', 'surrogateescape')
    else:
        bstring = string.encode(charset)
    if encoding is None:
        qlen = _cte_encode_length['q'](bstring)
        blen = _cte_encode_length['b'](bstring)
        # Bias toward q.  5 is arbitrary.
        encoding = 'q' if qlen - blen < 5 else 'b'
    encoded = _cte_encoders[encoding](bstring)
    if lang:
        lang = '*' + lang
    return "=?{0}{1}?{2}?{3}?=".format(charset, lang, encoding, encoded)
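# A minimal standalone sketch of the encode() wrapper, restricted to the 'b'
# CTE for brevity (the module also offers 'q' and the length-based choice):

```python
import base64

def encode_ew(string, charset='utf-8', lang=''):
    # Encode to bytes, base64 them, then add the RFC 2047 "chrome".
    encoded = base64.b64encode(string.encode(charset)).decode('ascii')
    lang = '*' + lang if lang else ''
    return "=?{0}{1}?b?{2}?=".format(charset, lang, encoded)

print(encode_ew('Café'))  # =?utf-8?b?Q2Fmw6k=?=
```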
# File: future/backports/email/_parseaddr.py
# Copyright (C) 2002-2007 Python Software Foundation
# Contact: email-sig@python.org

"""Email address parsing code.

Lifted directly from rfc822.py.  This should eventually be rewritten.
"""

from __future__ import unicode_literals
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from future.builtins import int

__all__ = [
    'mktime_tz',
    'parsedate',
    'parsedate_tz',
    'quote',
    ]

import time, calendar

SPACE = ' '
EMPTYSTRING = ''
COMMASPACE = ', '

# Parse a date field
_monthnames = ['jan', 'feb', 'mar', 'apr', 'may', 'jun', 'jul',
               'aug', 'sep', 'oct', 'nov', 'dec',
               'january', 'february', 'march', 'april', 'may', 'june', 'july',
               'august', 'september', 'october', 'november', 'december']

_daynames = ['mon', 'tue', 'wed', 'thu', 'fri', 'sat', 'sun']

# The timezone table does not include the military time zones defined
# in RFC822, other than Z.  According to RFC1123, the description in
# RFC822 gets the signs wrong, so we can't rely on any such time
# zones.  RFC1123 recommends that numeric timezone indicators be used
# instead of timezone names.

_timezones = {'UT':0, 'UTC':0, 'GMT':0, 'Z':0,
              'AST': -400, 'ADT': -300,  # Atlantic (used in Canada)
              'EST': -500, 'EDT': -400,  # Eastern
              'CST': -600, 'CDT': -500,  # Central
              'MST': -700, 'MDT': -600,  # Mountain
              'PST': -800, 'PDT': -700   # Pacific
              }


def parsedate_tz(data):
    """Convert a date string to a time tuple.

    Accounts for military timezones.
    """
    res = _parsedate_tz(data)
    if not res:
        return
    if res[9] is None:
        res[9] = 0
    return tuple(res)

def _parsedate_tz(data):
    """Convert date to extended time tuple.

    The last (additional) element is the time zone offset in seconds, except if
    the timezone was specified as -0000.  In that case the last element is
    None.  This indicates a UTC timestamp that explicitly declaims knowledge of
    the source timezone, as opposed to a +0000 timestamp that indicates the
    source timezone really was UTC.

    """
    if not data:
        return
    data = data.split()
    # The FWS after the comma after the day-of-week is optional, so search and
    # adjust for this.
    if data[0].endswith(',') or data[0].lower() in _daynames:
        # There's a dayname here. Skip it
        del data[0]
    else:
        i = data[0].rfind(',')
        if i >= 0:
            data[0] = data[0][i+1:]
    if len(data) == 3: # RFC 850 date, deprecated
        stuff = data[0].split('-')
        if len(stuff) == 3:
            data = stuff + data[1:]
    if len(data) == 4:
        s = data[3]
        i = s.find('+')
        if i == -1:
            i = s.find('-')
        if i > 0:
            data[3:] = [s[:i], s[i:]]
        else:
            data.append('') # Dummy tz
    if len(data) < 5:
        return None
    data = data[:5]
    [dd, mm, yy, tm, tz] = data
    mm = mm.lower()
    if mm not in _monthnames:
        dd, mm = mm, dd.lower()
        if mm not in _monthnames:
            return None
    mm = _monthnames.index(mm) + 1
    if mm > 12:
        mm -= 12
    if dd[-1] == ',':
        dd = dd[:-1]
    i = yy.find(':')
    if i > 0:
        yy, tm = tm, yy
    if yy[-1] == ',':
        yy = yy[:-1]
    if not yy[0].isdigit():
        yy, tz = tz, yy
    if tm[-1] == ',':
        tm = tm[:-1]
    tm = tm.split(':')
    if len(tm) == 2:
        [thh, tmm] = tm
        tss = '0'
    elif len(tm) == 3:
        [thh, tmm, tss] = tm
    elif len(tm) == 1 and '.' in tm[0]:
        # Some non-compliant MUAs use '.' to separate time elements.
        tm = tm[0].split('.')
        if len(tm) == 2:
            [thh, tmm] = tm
            tss = 0
        elif len(tm) == 3:
            [thh, tmm, tss] = tm
        else:
            # Malformed '.'-separated time: bail out rather than leave
            # thh/tmm/tss unbound below.
            return None
    else:
        return None
    try:
        yy = int(yy)
        dd = int(dd)
        thh = int(thh)
        tmm = int(tmm)
        tss = int(tss)
    except ValueError:
        return None
    # Check for a yy specified in two-digit format, then convert it to the
    # appropriate four-digit format, according to the POSIX standard. RFC 822
    # calls for a two-digit yy, but RFC 2822 (which obsoletes RFC 822)
    # mandates a 4-digit yy. For more information, see the documentation for
    # the time module.
    if yy < 100:
        # The year is between 1969 and 1999 (inclusive).
        if yy > 68:
            yy += 1900
        # The year is between 2000 and 2068 (inclusive).
        else:
            yy += 2000
    tzoffset = None
    tz = tz.upper()
    if tz in _timezones:
        tzoffset = _timezones[tz]
    else:
        try:
            tzoffset = int(tz)
        except ValueError:
            pass
        if tzoffset==0 and tz.startswith('-'):
            tzoffset = None
    # Convert a timezone offset into seconds; -0500 -> -18000
    if tzoffset:
        if tzoffset < 0:
            tzsign = -1
            tzoffset = -tzoffset
        else:
            tzsign = 1
        tzoffset = tzsign * ( (tzoffset//100)*3600 + (tzoffset % 100)*60)
    # Daylight Saving Time flag is set to -1, since DST is unknown.
    return [yy, mm, dd, thh, tmm, tss, 0, 1, -1, tzoffset]


def parsedate(data):
    """Convert a time string to a time tuple."""
    t = parsedate_tz(data)
    if isinstance(t, tuple):
        return t[:9]
    else:
        return t


def mktime_tz(data):
    """Turn a 10-tuple as returned by parsedate_tz() into a POSIX timestamp."""
    if data[9] is None:
        # No zone info, so localtime is better assumption than GMT
        return time.mktime(data[:8] + (-1,))
    else:
        t = calendar.timegm(data)
        return t - data[9]
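# Round-tripping through the stdlib twins of these functions shows how the
# offset in element 9 is folded back in: timegm() treats the tuple as UTC,
# then the offset is subtracted to reach the true epoch time.

```python
from email.utils import parsedate_tz, mktime_tz

t = mktime_tz(parsedate_tz('Fri, 09 Nov 2001 01:08:47 -0500'))
print(t)  # 1005286127, i.e. 2001-11-09 06:08:47 UTC
```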


def quote(str):
    """Prepare string to be used in a quoted string.

    Turns backslash and double quote characters into quoted pairs.  These
    are the only characters that need to be quoted inside a quoted string.
    Does not add the surrounding double quotes.
    """
    return str.replace('\\', '\\\\').replace('"', '\\"')
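# quote() only has to escape the two characters that are special inside an
# RFC 2822 quoted-string; a standalone copy for illustration:

```python
def quote(s):
    # Backslash first, so the quote-escaping backslashes are not re-escaped.
    return s.replace('\\', '\\\\').replace('"', '\\"')

print(quote('say "hi"'))  # say \"hi\"
```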


class AddrlistClass(object):
    """Address parser class by Ben Escoto.

    To understand what this class does, it helps to have a copy of RFC 2822 in
    front of you.

    Note: this class interface is deprecated and may be removed in the future.
    Use email.utils.AddressList instead.
    """

    def __init__(self, field):
        """Initialize a new instance.

        `field' is an unparsed address header field, containing
        one or more addresses.
        """
        self.specials = '()<>@,:;.\"[]'
        self.pos = 0
        self.LWS = ' \t'
        self.CR = '\r\n'
        self.FWS = self.LWS + self.CR
        self.atomends = self.specials + self.LWS + self.CR
        # Note that RFC 2822 now specifies `.' as obs-phrase, meaning that it
        # is obsolete syntax.  RFC 2822 requires that we recognize obsolete
        # syntax, so allow dots in phrases.
        self.phraseends = self.atomends.replace('.', '')
        self.field = field
        self.commentlist = []

    def gotonext(self):
        """Skip white space and extract comments."""
        wslist = []
        while self.pos < len(self.field):
            if self.field[self.pos] in self.LWS + '\n\r':
                if self.field[self.pos] not in '\n\r':
                    wslist.append(self.field[self.pos])
                self.pos += 1
            elif self.field[self.pos] == '(':
                self.commentlist.append(self.getcomment())
            else:
                break
        return EMPTYSTRING.join(wslist)

    def getaddrlist(self):
        """Parse all addresses.

        Returns a list containing all of the addresses.
        """
        result = []
        while self.pos < len(self.field):
            ad = self.getaddress()
            if ad:
                result += ad
            else:
                result.append(('', ''))
        return result

    def getaddress(self):
        """Parse the next address."""
        self.commentlist = []
        self.gotonext()

        oldpos = self.pos
        oldcl = self.commentlist
        plist = self.getphraselist()

        self.gotonext()
        returnlist = []

        if self.pos >= len(self.field):
            # Bad email address technically, no domain.
            if plist:
                returnlist = [(SPACE.join(self.commentlist), plist[0])]

        elif self.field[self.pos] in '.@':
            # email address is just an addrspec
            # this isn't very efficient since we start over
            self.pos = oldpos
            self.commentlist = oldcl
            addrspec = self.getaddrspec()
            returnlist = [(SPACE.join(self.commentlist), addrspec)]

        elif self.field[self.pos] == ':':
            # address is a group
            returnlist = []

            fieldlen = len(self.field)
            self.pos += 1
            while self.pos < len(self.field):
                self.gotonext()
                if self.pos < fieldlen and self.field[self.pos] == ';':
                    self.pos += 1
                    break
                returnlist = returnlist + self.getaddress()

        elif self.field[self.pos] == '<':
            # Address is a phrase then a route addr
            routeaddr = self.getrouteaddr()

            if self.commentlist:
                returnlist = [(SPACE.join(plist) + ' (' +
                               ' '.join(self.commentlist) + ')', routeaddr)]
            else:
                returnlist = [(SPACE.join(plist), routeaddr)]

        else:
            if plist:
                returnlist = [(SPACE.join(self.commentlist), plist[0])]
            elif self.field[self.pos] in self.specials:
                self.pos += 1

        self.gotonext()
        if self.pos < len(self.field) and self.field[self.pos] == ',':
            self.pos += 1
        return returnlist

    def getrouteaddr(self):
        """Parse a route address (Return-path value).

        This method just skips all the route stuff and returns the addrspec.
        """
        if self.field[self.pos] != '<':
            return

        expectroute = False
        self.pos += 1
        self.gotonext()
        adlist = ''
        while self.pos < len(self.field):
            if expectroute:
                self.getdomain()
                expectroute = False
            elif self.field[self.pos] == '>':
                self.pos += 1
                break
            elif self.field[self.pos] == '@':
                self.pos += 1
                expectroute = True
            elif self.field[self.pos] == ':':
                self.pos += 1
            else:
                adlist = self.getaddrspec()
                self.pos += 1
                break
            self.gotonext()

        return adlist

    def getaddrspec(self):
        """Parse an RFC 2822 addr-spec."""
        aslist = []

        self.gotonext()
        while self.pos < len(self.field):
            preserve_ws = True
            if self.field[self.pos] == '.':
                if aslist and not aslist[-1].strip():
                    aslist.pop()
                aslist.append('.')
                self.pos += 1
                preserve_ws = False
            elif self.field[self.pos] == '"':
                aslist.append('"%s"' % quote(self.getquote()))
            elif self.field[self.pos] in self.atomends:
                if aslist and not aslist[-1].strip():
                    aslist.pop()
                break
            else:
                aslist.append(self.getatom())
            ws = self.gotonext()
            if preserve_ws and ws:
                aslist.append(ws)

        if self.pos >= len(self.field) or self.field[self.pos] != '@':
            return EMPTYSTRING.join(aslist)

        aslist.append('@')
        self.pos += 1
        self.gotonext()
        return EMPTYSTRING.join(aslist) + self.getdomain()

    def getdomain(self):
        """Get the complete domain name from an address."""
        sdlist = []
        while self.pos < len(self.field):
            if self.field[self.pos] in self.LWS:
                self.pos += 1
            elif self.field[self.pos] == '(':
                self.commentlist.append(self.getcomment())
            elif self.field[self.pos] == '[':
                sdlist.append(self.getdomainliteral())
            elif self.field[self.pos] == '.':
                self.pos += 1
                sdlist.append('.')
            elif self.field[self.pos] in self.atomends:
                break
            else:
                sdlist.append(self.getatom())
        return EMPTYSTRING.join(sdlist)

    def getdelimited(self, beginchar, endchars, allowcomments=True):
        """Parse a header fragment delimited by special characters.

        `beginchar' is the start character for the fragment.
        If self is not looking at an instance of `beginchar' then
        getdelimited returns the empty string.

        `endchars' is a sequence of allowable end-delimiting characters.
        Parsing stops when one of these is encountered.

        If `allowcomments' is non-zero, embedded RFC 2822 comments are allowed
        within the parsed fragment.
        """
        if self.field[self.pos] != beginchar:
            return ''

        slist = ['']
        quote = False
        self.pos += 1
        while self.pos < len(self.field):
            if quote:
                slist.append(self.field[self.pos])
                quote = False
            elif self.field[self.pos] in endchars:
                self.pos += 1
                break
            elif allowcomments and self.field[self.pos] == '(':
                slist.append(self.getcomment())
                continue        # have already advanced pos from getcomment
            elif self.field[self.pos] == '\\':
                quote = True
            else:
                slist.append(self.field[self.pos])
            self.pos += 1

        return EMPTYSTRING.join(slist)

    def getquote(self):
        """Get a quote-delimited fragment from self's field."""
        return self.getdelimited('"', '"\r', False)

    def getcomment(self):
        """Get a parenthesis-delimited fragment from self's field."""
        return self.getdelimited('(', ')\r', True)

    def getdomainliteral(self):
        """Parse an RFC 2822 domain-literal."""
        return '[%s]' % self.getdelimited('[', ']\r', False)

    def getatom(self, atomends=None):
        """Parse an RFC 2822 atom.

        Optional atomends specifies a different set of end token delimiters
        (the default is to use self.atomends).  This is used e.g. in
        getphraselist() since phrase endings must not include the `.' (which
        is legal in phrases)."""
        atomlist = ['']
        if atomends is None:
            atomends = self.atomends

        while self.pos < len(self.field):
            if self.field[self.pos] in atomends:
                break
            else:
                atomlist.append(self.field[self.pos])
            self.pos += 1

        return EMPTYSTRING.join(atomlist)

    def getphraselist(self):
        """Parse a sequence of RFC 2822 phrases.

        A phrase is a sequence of words, which are in turn either RFC 2822
        atoms or quoted-strings.  Phrases are canonicalized by squeezing all
        runs of continuous whitespace into one space.
        """
        plist = []

        while self.pos < len(self.field):
            if self.field[self.pos] in self.FWS:
                self.pos += 1
            elif self.field[self.pos] == '"':
                plist.append(self.getquote())
            elif self.field[self.pos] == '(':
                self.commentlist.append(self.getcomment())
            elif self.field[self.pos] in self.phraseends:
                break
            else:
                plist.append(self.getatom(self.phraseends))

        return plist

class AddressList(AddrlistClass):
    """An AddressList encapsulates a list of parsed RFC 2822 addresses."""
    def __init__(self, field):
        AddrlistClass.__init__(self, field)
        if field:
            self.addresslist = self.getaddrlist()
        else:
            self.addresslist = []

    def __len__(self):
        return len(self.addresslist)

    def __add__(self, other):
        # Set union
        newaddr = AddressList(None)
        newaddr.addresslist = self.addresslist[:]
        for x in other.addresslist:
            if not x in self.addresslist:
                newaddr.addresslist.append(x)
        return newaddr

    def __iadd__(self, other):
        # Set union, in-place
        for x in other.addresslist:
            if not x in self.addresslist:
                self.addresslist.append(x)
        return self

    def __sub__(self, other):
        # Set difference
        newaddr = AddressList(None)
        for x in self.addresslist:
            if not x in other.addresslist:
                newaddr.addresslist.append(x)
        return newaddr

    def __isub__(self, other):
        # Set difference, in-place
        for x in other.addresslist:
            if x in self.addresslist:
                self.addresslist.remove(x)
        return self

    def __getitem__(self, index):
        # Make indexing, slices, and 'in' work
        return self.addresslist[index]
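# The parsing these classes implement is exposed in the stdlib through
# email.utils; getaddresses() returns the same (display-name, addr-spec)
# pairs that getaddrlist() produces:

```python
from email.utils import getaddresses

pairs = getaddresses(['Jane Doe <jane@example.com>, bob@example.com'])
print(pairs)  # [('Jane Doe', 'jane@example.com'), ('', 'bob@example.com')]
```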
# File: future/backports/email/encoders.py
# Copyright (C) 2001-2006 Python Software Foundation
# Author: Barry Warsaw
# Contact: email-sig@python.org

"""Encodings and related functions."""
from __future__ import unicode_literals
from __future__ import division
from __future__ import absolute_import
from future.builtins import str

__all__ = [
    'encode_7or8bit',
    'encode_base64',
    'encode_noop',
    'encode_quopri',
    ]


try:
    from base64 import encodebytes as _bencode
except ImportError:
    # Py2 compatibility. TODO: test this!
    from base64 import encodestring as _bencode
from quopri import encodestring as _encodestring


def _qencode(s):
    enc = _encodestring(s, quotetabs=True)
    # Must encode spaces, which quopri.encodestring() doesn't do
    return enc.replace(' ', '=20')


def encode_base64(msg):
    """Encode the message's payload in Base64.

    Also, add an appropriate Content-Transfer-Encoding header.
    """
    orig = msg.get_payload()
    encdata = str(_bencode(orig), 'ascii')
    msg.set_payload(encdata)
    msg['Content-Transfer-Encoding'] = 'base64'


def encode_quopri(msg):
    """Encode the message's payload in quoted-printable.

    Also, add an appropriate Content-Transfer-Encoding header.
    """
    orig = msg.get_payload()
    encdata = _qencode(orig)
    msg.set_payload(encdata)
    msg['Content-Transfer-Encoding'] = 'quoted-printable'


def encode_7or8bit(msg):
    """Set the Content-Transfer-Encoding header to 7bit or 8bit."""
    orig = msg.get_payload()
    if orig is None:
        # There's no payload.  For backwards compatibility we use 7bit
        msg['Content-Transfer-Encoding'] = '7bit'
        return
    # We play a trick to make this go fast.  If encoding/decoding to ASCII
    # succeeds, we know the data must be 7bit, otherwise treat it as 8bit.
    try:
        if isinstance(orig, str):
            orig.encode('ascii')
        else:
            orig.decode('ascii')
    except UnicodeError:
        charset = msg.get_charset()
        output_cset = charset and charset.output_charset
        # iso-2022-* is non-ASCII but encodes to a 7-bit representation
        if output_cset and output_cset.lower().startswith('iso-2022-'):
            msg['Content-Transfer-Encoding'] = '7bit'
        else:
            msg['Content-Transfer-Encoding'] = '8bit'
    else:
        msg['Content-Transfer-Encoding'] = '7bit'
    if not isinstance(orig, str):
        msg.set_payload(orig.decode('ascii', 'surrogateescape'))
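# The ASCII-probe trick above, isolated: if a round-trip through ASCII
# succeeds the payload is 7bit clean, otherwise it needs 8bit. (cte_for is a
# hypothetical helper name, not part of this module; the iso-2022 special
# case is omitted for brevity.)

```python
def cte_for(payload):
    # UnicodeEncodeError is a subclass of UnicodeError, so one except
    # clause covers both str and bytes probes.
    try:
        if isinstance(payload, str):
            payload.encode('ascii')
        else:
            payload.decode('ascii')
    except UnicodeError:
        return '8bit'
    return '7bit'

print(cte_for('hello'), cte_for('héllo'))  # 7bit 8bit
```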


def encode_noop(msg):
    """Do nothing."""
    # Well, not quite *nothing*: in Python3 we have to turn bytes into a string
    # in our internal surrogateescaped form in order to keep the model
    # consistent.
    orig = msg.get_payload()
    if not isinstance(orig, str):
        msg.set_payload(orig.decode('ascii', 'surrogateescape'))
PK�Du\���fBfB8future/backports/email/__pycache__/header.cpython-39.pycnu�[���a

��?h�_�@sNdZddlmZddlmZddlmZddlmZmZmZm	Z	m
Z
gd�ZddlZddl
Z
ddlmZdd	lmZdd
lmZddlmmmZddlmZmZejZdZd
ZdZdZdZ dZ!dZ"ed�Z#ed�Z$e�%dej&ej'Bej(B�Z)e�%d�Z*e�%d�Z+dd�Z,d"dd�Z-Gdd�de.�Z/Gdd�de.�Z0Gd d!�d!e1�Z2dS)#z+Header encoding and decoding functionality.�)�unicode_literals)�division)�absolute_import)�bytes�range�str�super�zip)�Header�
decode_header�make_headerN)�email)�
base64mime)�HeaderParseError)�_max_append�
header_decode�
� � z        ��Nz 	�us-asciizutf-8ai
  =\?                   # literal =?
  (?P<charset>[^?]*?)   # non-greedy up to the next ? is the charset
  \?                    # literal ?
  (?P<encoding>[qb])    # either a "q" or a "b", case insensitive
  \?                    # literal ?
  (?P<encoded>.*?)      # non-greedy up to the next ?= is the encoded string
  \?=                   # literal ?=
  z[\041-\176]+:$z
\n[^ \t]+:c	Cs�t|d�rdd�|jD�St�|�s.|dfgSg}|��D]�}t�|�}d}|r:|�d�}|rj|��}d}|r~|�|ddf�|rL|�d��	�}|�d��	�}|�d�}|�|||f�qLq:ddl
}	g}
t|�D]J\}}|dkr�|dr�||d	dr�||dd��r�|
�|d�q�t
|
�D]}
||
=�q$g}|D]�\}}}|du�r^|�||f�n�|d
k�r�t|�}|�||f�nz|dk�r�t|�d}|�r�|d
dd|�7}zt�|�}Wntj�y�td��Yn0|�||f�ntd|���q:g}d}}|D]v\}}t|t��r,t|d�}|du�r@|}|}nB||k�rb|�||f�|}|}n |du�rz|t|7}n||7}�q|�||f�|S)a;Decode a message header value without converting charset.

    Returns a list of (string, charset) pairs containing each of the decoded
    parts of the header.  Charset is None for non-encoded parts of the header,
    otherwise a lower-case string containing the name of the character set
    specified in the encoded string.

    header may be a string that may or may not contain RFC2047 encoded words,
    or it may be a Header object.

    An email.errors.HeaderParseError may be raised when certain decoding error
    occurs (e.g. a base64 decoding exception).
    �_chunkscSs(g|] \}}t�|t|��t|�f�qS�)�_charset�_encoder)�.0�string�charsetrr�G/usr/local/lib/python3.9/site-packages/future/backports/email/header.py�
<listcomp>Ns�z!decode_header.<locals>.<listcomp>NTrF���q�b�z===zBase64 decoding errorzUnexpected encoding: zraw-unicode-escape)�hasattrr�ecre�search�
splitlines�split�pop�lstrip�append�lower�sys�	enumerate�isspace�reversedr�lenr�decode�binascii�Errorr�AssertionError�
isinstancerr�BSPACE)�header�words�line�parts�first�	unencodedr�encoding�encodedr/�droplist�n�w�d�
decoded_words�encoded_string�word�paderr�	collapsed�	last_word�last_charsetrrrr>s~
�




4







rcCsFt|||d�}|D].\}}|dur4t|t�s4t|�}|�||�q|S)a�Create a Header from a sequence of pairs as returned by decode_header()

    decode_header() takes a header value string and returns a sequence of
    pairs of the format (decoded_string, charset) where charset is the string
    name of the character set.

    This function takes one of those sequence of pairs and returns a Header
    instance.  Optional maxlinelen, header_name, and continuation_ws are as in
    the Header constructor.
    )�
maxlinelen�header_name�continuation_wsN)r
r8�Charsetr-)�decoded_seqrMrNrO�h�srrrrr�s�rc@sReZdZddd�Zdd�Zdd	�Zd
d�Zddd
�Zdd�Zddd�Z	dd�Z
dS)r
Nr�strictcCs||durt}nt|t�s t|�}||_||_g|_|durH|�|||�|durTt}||_|durjd|_	nt
|�d|_	dS)aDCreate a MIME-compliant header that can contain many character sets.

        Optional s is the initial header value.  If None, the initial header
        value is not set.  You can later append to the header with .append()
        method calls.  s may be a byte string or a Unicode string, but see the
        .append() documentation for semantics.

        Optional charset serves two purposes: it has the same meaning as the
        charset argument to the .append() method.  It also sets the default
        character set for all subsequent .append() calls that omit the charset
        argument.  If charset is not provided in the constructor, the us-ascii
        charset is used both as s's initial charset and as the default for
        subsequent .append() calls.

        The maximum line length can be specified explicitly via maxlinelen. For
        splitting the first line to a shorter value (to account for the field
        header which isn't included in s, e.g. `Subject') pass in the name of
        the field in header_name.  The default maxlinelen is 78 as recommended
        by RFC 2822.

        continuation_ws must be RFC 2822 compliant folding whitespace (usually
        either a space or a hard tab) which will be prepended to continuation
        lines.

        errors is passed through to the .append() call.
        Nrr")�USASCIIr8rPr�_continuation_wsrr-�
MAXLINELEN�_maxlinelen�
_headerlenr3)�selfrSrrMrNrO�errorsrrr�__init__�s
zHeader.__init__c	Cs�|��g}d}d}|jD]�\}}|}|tjkrH|�dd�}|�dd�}|r�|o\|�|d�}|dvr�|dvr�|s�|�t�d}n|dvr�|s�|�t�|o�|�|d�}|}|�|�qt	�
|�S)z&Return the string value of the header.N�ascii�surrogateescape�replacer�Nr���)�
_normalizerr�UNKNOWN8BIT�encoder4�	_nonctextr-�SPACE�EMPTYSTRING�join)	rZ�uchunks�lastcs�	lastspacerr�nextcs�original_bytes�hasspacerrr�__str__�s*


zHeader.__str__cCs|t|�kS�N)r�rZ�otherrrr�__eq__sz
Header.__eq__cCs
||kSrprrqrrr�__ne__	sz
Header.__ne__cCs�|dur|j}nt|t�s"t|�}t|t�sZ|jp4d}|tjkrN|�dd�}n|�||�}|jpbd}|tjkr�z|�||�Wn t	y�|dkr��t
}Yn0|j�||f�dS)a.Append a string to the MIME header.

        Optional charset, if given, should be a Charset instance or the name
        of a character set (which will be converted to a Charset instance).  A
        value of None (the default) means that the charset given in the
        constructor is used.

        s may be a byte string or a Unicode string.  If it is a byte string
        (i.e. isinstance(s, str) is false), then charset is the encoding of
        that byte string, and a UnicodeError will be raised if the string
        cannot be decoded with that charset.  If s is a Unicode string, then
        charset is a hint specifying the character set of the characters in
        the string.  In either case, when producing an RFC 2822 compliant
        header using RFC 2047 rules, the string will be encoded using the
        output codec of the charset.  If the string cannot be encoded to the
        output codec, a UnicodeError will be raised.

        Optional `errors' is passed as the errors argument to the decode
        call if s is a byte string.
        Nrr^)
rr8rPr�input_codecrcr4�output_codecrd�UnicodeEncodeError�UTF8rr-)rZrSrr[�
input_charset�output_charsetrrrr-s$






z
Header.appendcCs|��p|dvS)z=True if string s is not a ctext character of RFC822.
        )�(�)�\)r1)rZrSrrrre7szHeader._nonctext�;, 	rcCs�|��|dur|j}|dkr"d}t|j||j|�}d}d}}|jD�]&\}}	|dur�|oh|�|d�}ddl}
|dvr�|r�|	dvr�|��n|	dvr�|s�|��|o�|�|d�}|	}d}|�	�}|r�|�
d|d|	�n|�
dd|	�|dd�D]`}|��|	jdu�r*|�
|jd	|�
�|	�q�|�
�}
|dt|�t|
��}|�
||
|	�q�t|�dkrF|��qF|j�r�|��|�|�}t�|��r�td
�|���|S)a�Encode a message header into an RFC-compliant format.

        There are many issues involved in converting a given string for use in
        an email header.  Only certain character sets are readable in most
        email clients, and as header strings can only contain a subset of
        7-bit ASCII, care must be taken to properly convert and encode (with
        Base64 or quoted-printable) header strings.  In addition, there is a
        75-character length limit on any given encoded header field, so
        line-wrapping must be performed, even with double-byte character sets.

        Optional maxlinelen specifies the maximum length of each generated
        line, exclusive of the linesep string.  Individual lines may be longer
        than maxlinelen if a folding point cannot be found.  The first line
        will be shorter by the length of the header name plus ": " if a header
        name was specified at Header construction time.  The default value for
        maxlinelen is determined at header construction time.

        Optional splitchars is a string containing characters which should be
        given extra weight by the splitting algorithm during normal header
        wrapping.  This is in very rough support of RFC 2822's `higher level
        syntactic breaks':  split points preceded by a splitchar are preferred
        during line splitting, with the characters preferred in the order in
        which they appear in the string.  Space and tab may be included in the
        string to indicate whether preference should be given to one over the
        other as a split point when other split chars do not appear in the line
        being split.  Splitchars does not affect RFC 2047 encoded lines.

        Optional linesep is a string to be used to separate the lines of
        the value.  The default value is the most useful for typical
        Python applications, but it can be set to \r\n to produce RFC-compliant
        line separators when needed.
        Nri@Br`raFrr!rz8header value appears to contain an embedded header: {!r})rbrX�_ValueFormatterrYrVrrer/�add_transitionr)�feed�newline�header_encodingr,r3�_str�_embeded_headerr(r�format)rZ�
splitcharsrM�linesep�	formatterrjrnrkrrr/�linesr<�sline�fws�valuerrrrd<sZ!�
�

�z
Header.encodecCsxg}d}g}|jD]B\}}||kr.|�|�q|durJ|�t�|�|f�|g}|}q|rn|�t�|�|f�||_dSrp)rr-rfrh)rZ�chunksrL�
last_chunkrrrrrrb�szHeader._normalize)NNNNrrT)NrT)r~Nr)�__name__�
__module__�__qualname__r\rorsrtr-rerdrbrrrrr
�s�
/ 
+
Qr
c@sTeZdZdd�Zdd�Zdd�Zdd�Zd	d
�Zdd�Zd
d�Z	dd�Z
dd�ZdS)rcCs0||_||_t|�|_||_g|_t|�|_dSrp)�_maxlenrVr3�_continuation_ws_len�_splitchars�_lines�_Accumulator�
_current_line)rZ�	headerlen�maxlenrOr�rrrr\�s
z_ValueFormatter.__init__cCs|��|�|j�Srp)r�rhr�)rZr�rrrr��sz_ValueFormatter._strcCs
|�t�Srp)r��NL�rZrrrro�sz_ValueFormatter.__str__cCsp|j��}|dkr|jj|�t|j�dkrb|j��rP|jdt|j�7<n|j�t|j��|j��dS)N)rrrra)	r�r+�pushr3�	is_onlywsr�rr-�reset)rZ�end_of_linerrrr��s

z_ValueFormatter.newlinecCs|j�dd�dS)Nrr)r�r�r�rrrr��sz_ValueFormatter.add_transitioncCs�|jdur|�|||j�dS|�||���}z|�d�}WntyPYdS0|durf|�||�z|��}Wnty�YdS0|��|j	�
|j|�|D]}|j�
|j|�q�dS�Nr)r��_ascii_splitr��header_encode_lines�_maxlengthsr+�
IndexError�
_append_chunkr�r�r�rVr�r-)rZr�rr�
encoded_lines�
first_line�	last_liner<rrrr��s$
z_ValueFormatter.feedccs&|jt|j�V|j|jVqdSrp)r�r3r�r�r�rrrr��sz_ValueFormatter._maxlengthscCsft�dtd||�}|dr0dg|dd�<n
|�d�tt|�gd�D]\}}|�||�qLdS)Nz([z]+)rrr")�rer*�FWSr+r	�iterr�)rZr�rr�r=�partrrrr��s
z_ValueFormatter._ascii_splitcCs|j�||�t|j�|jk�r|jD]v}t|j��ddd�D]T}|��rn|j|d}|rn|d|krnq�|j|dd}|r@|d|kr@q�q@q&q�q&|j��\}}|jj	dkr�|�
�|s�d}|j�||�dS|j�|�}|j�
t|j��|j�|�dS)Nr!rrar)r�r�r3r�r�r�
part_countr1r+�
_initial_sizer��pop_fromr�r-rr�)rZr�r�ch�i�prevpartr��	remainderrrrr��s.
z_ValueFormatter._append_chunkN)r�r�r�r\r�ror�r�r�r�r�r�rrrrr�s%rcsjeZdZd�fdd�	Zdd�Zddd�Z�fdd	�Zd
d�Zdd
�Zddd�Z	dd�Z
�fdd�Z�ZS)r�rcs||_t���dSrp)r�rr\)rZ�initial_size��	__class__rrr\"sz_Accumulator.__init__cCs|�||f�dSrp)r-)rZr�rrrrr�&sz_Accumulator.pushcCs||d�}g||d�<|Srpr)rZr��poppedrrrr�)sz_Accumulator.pop_fromcs|��dkrdSt���S)Nr)rr)r�rr+r�r�rrr+.sz_Accumulator.popcCstdd�|D�|j�S)Ncss"|]\}}t|�t|�VqdSrp)r3�rr�r�rrr�	<genexpr>4�z'_Accumulator.__len__.<locals>.<genexpr>)�sumr�r�rrr�__len__3s�z_Accumulator.__len__cCst�dd�|D��S)Ncss |]\}}t�||f�VqdSrp�rgrhr�rrrr�8s�z'_Accumulator.__str__.<locals>.<genexpr>r�r�rrrro7s
�z_Accumulator.__str__NcCs"|durg}||dd�<d|_dSr�)r�)rZ�startvalrrrr�;sz_Accumulator.resetcCs|jdko|pt|���Sr�)r�rr1r�rrrr�Asz_Accumulator.is_onlywscs
t���Srp)rr�r�r�rrr�Dsz_Accumulator.part_count)r)r)N)
r�r�r�r\r�r�r+r�ror�r�r��
__classcell__rrr�rr� s

r�)NNr)3�__doc__�
__future__rrrZfuture.builtinsrrrrr	�__all__r�r5Zfuture.backportsr
Zfuture.backports.emailrZfuture.backports.email.errorsrZfuture.backports.email.charsetZ	backportsrrZ!future.backports.email.quoprimimerrrPr�rfr9�SPACE8rgrWr�rUrx�compile�VERBOSE�
IGNORECASE�	MULTILINEr'�fcrer�rr�objectr
r�listr�rrrr�<module>sH�

_�
nPK�Du\�����:future/backports/email/__pycache__/__init__.cpython-39.pycnu�[���a

��?h��@sldZddlmZddlmZddlmZddlmZe��dZgd�Z	dd	�Z
d
d�Zdd
�Zdd�Z
dS)z~
Backport of the Python 3.3 email package for Python-Future.

A package for parsing, handling, and generating email messages.
�)�unicode_literals)�division)�absolute_import)�surrogateescapez5.1.0)�
base64mime�charset�encoders�errors�
feedparser�	generator�header�	iterators�message�message_from_file�message_from_binary_file�message_from_string�message_from_bytes�mime�parser�
quoprimime�utilscOs ddlm}||i|���|�S)zvParse a string into a Message object model.

    Optional _class and strict are passed to the Parser constructor.
    r��Parser)�future.backports.email.parserr�parsestr)�s�args�kwsr�r�I/usr/local/lib/python3.9/site-packages/future/backports/email/__init__.pyr0srcOs ddlm}||i|���|�S)z|Parse a bytes string into a Message object model.

    Optional _class and strict are passed to the Parser constructor.
    r��BytesParser)rr!�
parsebytes)rrrr!rrrr8srcOs ddlm}||i|���|�S)z�Read a file and parse its contents into a Message object model.

    Optional _class and strict are passed to the Parser constructor.
    rr)rr�parse)�fprrrrrrr@srcOs ddlm}||i|���|�S)z�Read a binary file and parse its contents into a Message object model.

    Optional _class and strict are passed to the Parser constructor.
    rr )rr!r#)r$rrr!rrrrHsrN)�__doc__�
__future__rrrZfuture.utilsrZregister_surrogateescape�__version__�__all__rrrrrrrr�<module>sPK�Du\+��mm;future/backports/email/__pycache__/iterators.cpython-39.pycnu�[���a

��?h,	�@szdZddlmZddlmZddlmZddlmZgd�ZddlZddlm	Z	d	d
�Z
ddd
�Zddd�Zddd�Z
dS)z1Various types of useful iterators and generators.�)�print_function)�unicode_literals)�division)�absolute_import)�body_line_iterator�typed_subpart_iterator�walkN)�StringIOccs4|V|��r0|��D]}|��D]
}|Vq"qdS)z�Walk over the message tree, yielding each subpart.

    The walk is performed in depth-first order.  This method is a
    generator.
    N)�is_multipart�get_payloadr)�self�subpartZ
subsubpart�r�J/usr/local/lib/python3.9/site-packages/future/backports/email/iterators.pyrs
rFccs<|��D].}|j|d�}t|t�rt|�D]
}|Vq*qdS)z�Iterate over the parts, returning string payloads line-by-line.

    Optional decode (default False) is passed through to .get_payload().
    )�decodeN)rr�
isinstance�strr	)�msgrr
�payload�linerrrr%s

r�textccs8|��D]*}|��|kr|dus,|��|kr|VqdS)z�Iterate over the subparts with a given MIME type.

    Use `maintype' as the main MIME type to match against; this defaults to
    "text".  Optional `subtype' is the MIME subtype to match against; if
    omitted, only the main type is matched.
    N)r�get_content_maintype�get_content_subtype)r�maintype�subtyper
rrrr1srcCs�|durtj}d|d}t||��d|d�|rJtd|��|d�n
t|d�|��r||��D]}t|||d|�qddS)	zA handy debugging aidN� ��)�end�filez [%s])r�)�sys�stdout�print�get_content_type�get_default_typer
r�
_structure)r�fp�level�include_default�tabr
rrrr&>s
r&)F)rN)NrF)�__doc__�
__future__rrrr�__all__r!�ior	rrrr&rrrr�<module>s


PK�Du\y�
��R�R@future/backports/email/__pycache__/headerregistry.cpython-39.pycnu�[���a

��?h�P�@s�dZddlmZddlmZddlmZddlmZddlmZddlm	Z	ddl
mZdd	l
mZdd
l
m
ZGdd�de�ZGd
d�de�ZGdd�de�Zdd�ZGdd�de�ZGdd�de�ZGdd�de�ZGdd�de�ZGdd�de�ZGdd�de�ZGdd �d e�ZGd!d"�d"e�ZGd#d$�d$e�ZGd%d&�d&e�ZGd'd(�d(e�ZGd)d*�d*e�ZGd+d,�d,e�Z eeeeeeeeeeeeeeeeeee d-�Z!Gd.d/�d/e�Z"d0S)1a;Representing and manipulating email headers via custom objects.

This module provides an implementation of the HeaderRegistry API.
The implementation is designed to flexibly follow RFC5322 rules.

Eventually HeaderRegistry will be a public API, but it isn't yet,
and will probably change some before that happens.

�)�unicode_literals)�division)�absolute_import)�super��str)�text_to_native_str)�utils)�errors)�_header_value_parserc@s^eZdZddd�Zedd��Zedd��Zed	d
��Zedd��Zd
d�Z	dd�Z
dd�ZdS)�Address�NcCsl|durV|s|rtd��t�|�\}}|r:td�||���|jrJ|jd�|j}|j}||_||_	||_
dS)a�Create an object represeting a full email address.

        An address can have a 'display_name', a 'username', and a 'domain'.  In
        addition to specifying the username and domain separately, they may be
        specified together by using the addr_spec keyword *instead of* the
        username and domain keywords.  If an addr_spec string is specified it
        must be properly quoted according to RFC 5322 rules; an error will be
        raised if it is not.

        An Address object has display_name, username, domain, and addr_spec
        attributes, all of which are read-only.  The addr_spec and the string
        value of the object are both quoted according to RFC5322 rules, but
        without any Content Transfer Encoding.

        Nz=addrspec specified when username and/or domain also specifiedz6Invalid addr_spec; only '{}' could be parsed from '{}'r)�	TypeError�parserZ
get_addr_spec�
ValueError�format�all_defects�
local_part�domain�
_display_name�	_username�_domain)�self�display_name�usernamer�	addr_specZa_s�rest�r�O/usr/local/lib/python3.9/site-packages/future/backports/email/headerregistry.py�__init__s�
zAddress.__init__cCs|jS�N�r�rrrrr<szAddress.display_namecCs|jSr )rr"rrrr@szAddress.usernamecCs|jSr )rr"rrrrDszAddress.domaincCsTt|j�}t|�t|tj�kr.t�|j�}n|j}|jrH|d|jS|sPdS|S)z�The addr_spec (username@domain) portion of the address, quoted
        according to RFC 5322 rules, but with no Content Transfer Encoding.
        �@�<>)�setr�lenrZ
DOT_ATOM_ENDS�quote_stringr)r�namesetZlprrrrHs
zAddress.addr_speccCsd�|j|j|j�S)Nz6Address(display_name={!r}, username={!r}, domain={!r}))rrrrr"rrr�__repr__Xs�zAddress.__repr__cCs^t|j�}t|�t|tj�kr.t�|j�}n|j}|rX|jdkrFdn|j}d�||�S|jS)Nr$r
z{} <{}>)r%rr&r�SPECIALSr'rr)rr(�disprrrr�__str__\s
zAddress.__str__cCs8t|�t|�krdS|j|jko6|j|jko6|j|jkS�NF)�typerrr�r�otherrrr�__eq__gs
�
�zAddress.__eq__)r
r
r
N)�__name__�
__module__�__qualname__r�propertyrrrrr)r,r1rrrrrs
%



rc@sFeZdZddd�Zedd��Zedd��Zdd	�Zd
d�Zdd
�Z	dS)�GroupNcCs||_|rt|�nt�|_dS)aCreate an object representing an address group.

        An address group consists of a display_name followed by colon and an
        list of addresses (see Address) terminated by a semi-colon.  The Group
        is created by specifying a display_name and a possibly empty list of
        Address objects.  A Group can also be used to represent a single
        address that is not in a group, which is convenient when manipulating
        lists that are a combination of Groups and individual Addresses.  In
        this case the display_name should be set to None.  In particular, the
        string representation of a Group whose display_name is None is the same
        as the Address object, if there is one and only one Address object in
        the addresses list.

        N)r�tuple�
_addresses)rr�	addressesrrrrqszGroup.__init__cCs|jSr r!r"rrrr�szGroup.display_namecCs|jSr )r8r"rrrr9�szGroup.addressescCsd�|j|j�S)Nz'Group(display_name={!r}, addresses={!r})rrr9r"rrrr)�s�zGroup.__repr__cCs�|jdur&t|j�dkr&t|jd�S|j}|dur\t|�}t|�t|tj�kr\t�|�}d�dd�|jD��}|r~d|n|}d�	||�S)N�r�, css|]}t|�VqdSr r)�.0�xrrr�	<genexpr>��z Group.__str__.<locals>.<genexpr>� z{}:{};)
rr&r9rr%rr*r'�joinr)rr+r(Zadrstrrrrr,�s
z
Group.__str__cCs,t|�t|�krdS|j|jko*|j|jkSr-)r.rr9r/rrrr1�s

�zGroup.__eq__)NN)
r2r3r4rr5rr9r)r,r1rrrrr6os


r6c@sTeZdZdZdd�Zdd�Zedd��Zedd	��Zd
d�Z	e
dd
��Zdd�ZdS)�
BaseHeadera|Base class for message headers.

    Implements generic behavior and provides tools for subclasses.

    A subclass must define a classmethod named 'parse' that takes an unfolded
    value string and a dictionary as its arguments.  The dictionary will
    contain one key, 'defects', initialized to an empty list.  After the call
    the dictionary must contain two additional keys: parse_tree, set to the
    parse tree obtained from parsing the header, and 'decoded', set to the
    string value of the idealized representation of the data from the value.
    (That is, encoded words are decoded, and values that have canonical
    representations are so represented.)

    The defects key is intended to collect parsing defects, which the message
    parser will subsequently dispose of as appropriate.  The parser should not,
    insofar as practical, raise any errors.  Defects should be added to the
    list instead.  The standard header parsers register defects for RFC
    compliance issues, for obsolete RFC syntax, and for unrecoverable parsing
    errors.

    The parse method may add additional keys to the dictionary.  In this case
    the subclass must define an 'init' method, which will be passed the
    dictionary as its keyword arguments.  The method should use (usually by
    setting them as the value of similarly named attributes) and remove all the
    extra keys added by its parse method, and then use super to call its parent
    class with the remaining arguments and keywords.

    The subclass should also make sure that a 'max_count' attribute is defined
    that is either None or 1. XXX: need to better define this API.

    cCsZdgi}|�||�t�|d�r4t�|d�|d<t�||d�}|j|fi|��|S)N�defects�decoded)�parser	�_has_surrogates�	_sanitizer�__new__�init)�cls�name�value�kwdsrrrrrH�szBaseHeader.__new__cKs2|d}|d=|d}|d=||_||_||_dS)NrC�
parse_tree)�_name�_parse_tree�_defects)rrK�_3to2kwargsrCrNrrrrI�s
zBaseHeader.initcCs|jSr )rOr"rrrrK�szBaseHeader.namecCs
t|j�Sr )r7rQr"rrrrC�szBaseHeader.defectscCst|jj|jjt|�f|jfSr )�_reconstruct_header�	__class__r2�	__bases__r�__dict__r"rrr�
__reduce__�s��zBaseHeader.__reduce__cCst�||�Sr )rrH)rJrLrrr�_reconstruct�szBaseHeader._reconstructc	KsX|d}|d=t�t�t�|jd�t�dd�g�t�t�dd�g�|jg�}|j|d�S)N�policyzheader-name�:z
header-sepr@�fws)rY)	r�HeaderZHeaderLabelZ
ValueTerminalrKZCFWSListZWhiteSpaceTerminalrP�fold)rrRrY�headerrrrr]�s
��zBaseHeader.foldN)
r2r3r4�__doc__rHrIr5rKrCrW�classmethodrXr]rrrrrB�s 




rBcCstt|�|i��|�Sr )r.rrX)�cls_name�basesrLrrrrSsrSc@s&eZdZdZeej�Zedd��Z	dS)�UnstructuredHeaderNcCs"|�|�|d<t|d�|d<dS)NrNrD)�value_parserr�rJrLrMrrrrEszUnstructuredHeader.parse)
r2r3r4�	max_count�staticmethodr�get_unstructuredrdr`rErrrrrcs
rcc@seZdZdZdS)�UniqueUnstructuredHeaderr:N�r2r3r4rfrrrrrisricsFeZdZdZdZeej�Ze	dd��Z
�fdd�Zedd��Z
�ZS)	�
DateHeadera�Header whose value consists of a single timestamp.

    Provides an additional attribute, datetime, which is either an aware
    datetime using a timezone, or a naive datetime if the timezone
    in the input string is -0000.  Also accepts a datetime as input.
    The 'value' attribute is the normalized form of the timestamp,
    which means it is the output of format_datetime on the datetime.
    NcCsz|s6|d�t���d|d<d|d<t��|d<dSt|t�rJt�|�}||d<t�	|d�|d<|�
|d�|d<dS)NrC�datetimer
rDrN)�appendr
�HeaderMissingRequiredValuerZ	TokenList�
isinstancerr	�parsedate_to_datetime�format_datetimerdrerrrrE,s

zDateHeader.parsecs"|�d�|_t�j|i|��dS)Nrl)�pop�	_datetimerrI�r�args�kw�rTrrrI:szDateHeader.initcCs|jSr )rsr"rrrrl>szDateHeader.datetime)r2r3r4r_rfrgrrhrdr`rErIr5rl�
__classcell__rrrwrrks	


rkc@seZdZdZdS)�UniqueDateHeaderr:NrjrrrrryCsrycsPeZdZdZedd��Zedd��Z�fdd�Ze	dd	��Z
e	d
d��Z�ZS)�
AddressHeaderNcCst�|�\}}|rJd��|S)Nzthis should not happen)rZget_address_list)rL�address_listrrrrdLszAddressHeader.value_parsercCs�t|t�rV|�|�|d<}g}|jD]"}|�t|jdd�|jD���q&t|j	�}n"t
|d�sf|g}dd�|D�}g}||d<||d<d�d	d�|D��|d
<d|vr�|�|d
�|d<dS)NrNcSs*g|]"}t|jpd|jpd|jp"d��qS)r
)rrrr)r<�mbrrr�
<listcomp>[s
�
�z'AddressHeader.parse.<locals>.<listcomp>�__iter__cSs&g|]}t|d�std|g�n|�qS)r9N)�hasattrr6�r<�itemrrrr}ds��groupsrCr;cSsg|]}t|��qSrrr�rrrr}jr?rD)rorrdr9rmr6rZ
all_mailboxes�listrrrA)rJrLrMr{r��addrrCrrrrERs*


��
�zAddressHeader.parsecs,t|�d��|_d|_t�j|i|��dS)Nr�)r7rr�_groupsr8rrIrtrwrrrInszAddressHeader.initcCs|jSr )r�r"rrrr�sszAddressHeader.groupscCs&|jdur tdd�|jD��|_|jS)NcSsg|]}|jD]}|�qqSr)r9)r<�group�addressrrrr}zs
�z+AddressHeader.addresses.<locals>.<listcomp>)r8r7r�r"rrrr9ws
zAddressHeader.addresses)
r2r3r4rfrgrdr`rErIr5r�r9rxrrrwrrzHs


rzc@seZdZdZdS)�UniqueAddressHeaderr:Nrjrrrrr�sr�c@seZdZedd��ZdS)�SingleAddressHeadercCs(t|j�dkrtd�|j���|jdS)Nr:z9value of single address header {} is not a single addressr)r&r9rrrKr"rrrr��s
�zSingleAddressHeader.addressN)r2r3r4r5r�rrrrr��sr�c@seZdZdZdS)�UniqueSingleAddressHeaderr:Nrjrrrrr��sr�csZeZdZdZeej�Zedd��Z	�fdd�Z
edd��Zedd	��Z
ed
d��Z�ZS)�MIMEVersionHeaderr:cCs�|�|�|d<}t|�|d<|d�|j�|jdur<dn|j|d<|j|d<|jdurtd�|d|d�|d<nd|d<dS)NrNrDrC�major�minorz{}.{}�version)rdr�extendrr�r�r�rJrLrMrNrrrrE�s

zMIMEVersionHeader.parsecs:|�d�|_|�d�|_|�d�|_t�j|i|��dS)Nr�r�r�)rr�_version�_major�_minorrrIrtrwrrrI�szMIMEVersionHeader.initcCs|jSr )r�r"rrrr��szMIMEVersionHeader.majorcCs|jSr )r�r"rrrr��szMIMEVersionHeader.minorcCs|jSr )r�r"rrrr��szMIMEVersionHeader.version)r2r3r4rfrgrZparse_mime_versionrdr`rErIr5r�r�r�rxrrrwrr��s



r�cs8eZdZdZedd��Z�fdd�Zedd��Z�Z	S)�ParameterizedMIMEHeaderr:cCs^|�|�|d<}t|�|d<|d�|j�|jdurBi|d<ntdd�|jD��|d<dS)NrNrDrC�paramscss*|]"\}}t�|���t�|�fVqdSr )r	rG�lower)r<rKrLrrrr>�s��z0ParameterizedMIMEHeader.parse.<locals>.<genexpr>)rdrr�rr��dictr�rrrrE�s

�zParameterizedMIMEHeader.parsecs"|�d�|_t�j|i|��dS)Nr�)rr�_paramsrrIrtrwrrrI�szParameterizedMIMEHeader.initcCs
|j��Sr )r��copyr"rrrr��szParameterizedMIMEHeader.params)
r2r3r4rfr`rErIr5r�rxrrrwrr��s
r�csJeZdZeej�Z�fdd�Zedd��Z	edd��Z
edd��Z�ZS)	�ContentTypeHeadercs6t�j|i|��t�|jj�|_t�|jj�|_dSr )	rrIr	rGrP�maintype�	_maintype�subtype�_subtypertrwrrrI�szContentTypeHeader.initcCs|jSr )r�r"rrrr��szContentTypeHeader.maintypecCs|jSr )r�r"rrrr��szContentTypeHeader.subtypecCs|jd|jS)N�/)r�r�r"rrr�content_type�szContentTypeHeader.content_type)
r2r3r4rgrZparse_content_type_headerrdrIr5r�r�r�rxrrrwrr��s


r�cs2eZdZeej�Z�fdd�Zedd��Z	�Z
S)�ContentDispositionHeadercs6t�j|i|��|jj}|dur&|nt�|�|_dSr )rrIrP�content_dispositionr	rG�_content_disposition)rrurvZcdrwrrrI�szContentDispositionHeader.initcCs|jSr )r�r"rrrr��sz,ContentDispositionHeader.content_disposition)r2r3r4rgrZ parse_content_disposition_headerrdrIr5r�rxrrrwrr��s
r�csBeZdZdZeej�Zedd��Z	�fdd�Z
edd��Z�Z
S)�ContentTransferEncodingHeaderr:cCs2|�|�|d<}t|�|d<|d�|j�dS)NrNrDrC)rdrr�rr�rrrrE�sz#ContentTransferEncodingHeader.parsecs&t�j|i|��t�|jj�|_dSr )rrIr	rGrP�cte�_ctertrwrrrIsz"ContentTransferEncodingHeader.initcCs|jSr )r�r"rrrr�	sz!ContentTransferEncodingHeader.cte)r2r3r4rfrgrZ&parse_content_transfer_encoding_headerrdr`rErIr5r�rxrrrwrr��s

r�)�subject�datezresent-datez	orig-dateZsenderz
resent-sender�toz	resent-to�ccz	resent-ccZbccz
resent-bcc�fromzresent-fromzreply-tozmime-versionzcontent-typezcontent-dispositionzcontent-transfer-encodingc@s8eZdZdZeedfdd�Zdd�Zdd�Zd	d
�Z	dS)�HeaderRegistryz%A header_factory and header registry.TcCs&i|_||_||_|r"|j�t�dS)a�Create a header_factory that works with the Policy API.

        base_class is the class that will be the last class in the created
        header class's __bases__ list.  default_class is the class that will be
        used if "name" (see __call__) does not appear in the registry.
        use_default_map controls whether or not the default mapping of names to
        specialized classes is copied in to the registry when the factory is
        created.  The default is True.

        N)�registry�
base_class�
default_class�update�_default_header_map)rr�r�Zuse_default_maprrrr*s
zHeaderRegistry.__init__cCs||j|��<dS)zLRegister cls as the specialized class for handling "name" headers.

        N)r�r��rrKrJrrr�map_to_type<szHeaderRegistry.map_to_typecCs0|j�|��|j�}ttd|j�||jfi�S)N�_)r��getr�r�r.rr2r�r�rrr�__getitem__BszHeaderRegistry.__getitem__cCs||||�S)a�Create a header instance for header 'name' from 'value'.

        Creates a header instance by creating a specialized class for parsing
        and representing the specified header by combining the factory
        base_class with a specialized class from the registry or the
        default_class, and passing the name and value to the constructed
        class's constructor.

        r)rrKrLrrr�__call__Fs
zHeaderRegistry.__call__N)
r2r3r4r_rBrcrr�r�r�rrrrr�&s�
r�N)#r_�
__future__rrrZfuture.builtinsrrZfuture.utilsrZfuture.backports.emailr	r
rr�objectrr6rBrSrcrirkryrzr�r�r�r�r�r�r�r�r�r�rrrr�<module>s^	Z5d'7
%�PK�Du\�yI��@future/backports/email/__pycache__/_encoded_words.cpython-39.pycnu�[���a

��?h� �@s:dZddlmZddlmZddlmZddlmZddlmZddlmZddlm	Z	dd	l
Z
dd	lZdd	lZdd	l
Z
dd
lmZmZddlmZgd�Ze
�e
�d
�jdd��Zdd�ZGdd�de�Ze�Zdeed�<dd�Zdd�Zdd�Zdd�Z dd�Z!eed �Z"d!d"�Z#ee d �Z$ee!d �Z%d'd%d&�Z&d	S)(z� Routines for manipulating RFC2047 encoded words.

This is currently a package-private API, but will be considered for promotion
to a public API if there is demand.

�)�unicode_literals)�division)�absolute_import)�bytes)�chr)�int)�strN)�
ascii_letters�digits)�errors)�decode_q�encode_q�decode_b�encode_b�len_q�len_b�decode�encodes=([a-fA-F0-9]{2})cCstt|�d�d�g�S)N��)rr�group)�m�r�O/usr/local/lib/python3.9/site-packages/future/backports/email/_encoded_words.py�<lambda>H�rcCst|�dd��}t|�gfS)N�_� )r�replace�_q_byte_subber)�encodedrrrrJsrc@s0eZdZede�d�e�d��Zdd�ZdS)�	_QByteMaps-!*+/�asciicCs.||jvrt|�||<nd�|�||<||S)Nz={:02X})�safer�format)�self�keyrrr�__missing__Ts
z_QByteMap.__missing__N)	�__name__�
__module__�__qualname__rr	rr
r#r'rrrrr!Psr!�_� cCstd�dd�t|�D���S)N�css|]}t|VqdS�N)�_q_byte_map��.0�xrrr�	<genexpr>arzencode_q.<locals>.<genexpr>)r�joinr��bstringrrrr
`sr
cCstdd�t|�D��S)Ncss|]}tt|�VqdSr.)�lenr/r0rrrr3drzlen_q.<locals>.<genexpr>)�sumrr5rrrrcsrc
Cs�g}t|�d}|r8|�t���|ddd|�}n|}z&t�d|�sTt�d��t�	|�|fWStjy�t�
�g}dD]T}z t�	|d|�|fWYStjtfy�|dkr�|�t���Yq�0q�td��Yn0dS)	N�s===s^[A-Za-z0-9+/]*={0,2}$zNon-base64 digit found)rr���=rzunexpected binascii.Error)
r7�appendr�InvalidBase64PaddingDefect�re�match�binascii�Error�base64�	b64decode�InvalidBase64CharactersDefect�	TypeError�AssertionError)r �defects�pad_errZpadded_encoded�irrrrks&

 rcCst�|��d�S)Nr")rC�	b64encoderr5rrrr�srcCs&tt|�d�\}}|d|r dndS)Nr;r9r)�divmodr7)r6�groups_of_3�leftoverrrrr�sr)�q�bc	
Cs�t|��d�\}}}}}|�d�\}}}|��}|�dd�}t||�\}}z|�|�}Wnrty�|�t	�
d�|���|�|d�}Yn@ty�|�dd�}|��dkr�|�t	�
d�|���Yn0||||fS)u�Decode encoded word and return (string, charset, lang, defects) tuple.

    An RFC 2047/2243 encoded word has the form:

        =?charset*lang?cte?encoded_string?=

    where '*lang' may be omitted but the other parts may not be.

    This function expects exactly such a string (that is, it does not check the
    syntax and may raise errors if the string is not well formed), and returns
    the encoded_string decoded first from its Content Transfer Encoding and
    then from the resulting bytes into unicode using the specified charset.  If
    the cte-decoded string does not successfully decode using the specified
    character set, a defect is added to the defects list and the unknown octets
    are replaced by the unicode 'unknown' character ﷿.

    The specified charset and language are returned.  The default for language,
    which is rarely if ever encountered, is the empty string.

    �?�*r"�surrogateescapez:Encoded word contains bytes not decodable using {} charset�unknown-8bitz<Unknown charset {} in encoded word; decoded as unknown bytes)r�split�	partition�lowerr�
_cte_decodersr�UnicodeErrorr=r�UndecodableBytesDefectr$�LookupError�CharsetError)	�ewr+�charset�cte�
cte_string�langr6rH�stringrrrr�s&��r�utf-8r-cCs�t|�}|dkr|�dd�}n
|�|�}|dur\td|�}td|�}||dkrXdnd}t||�}|rtd|}d	�||||�S)
aEncode string using the CTE encoding that produces the shorter result.

    Produces an RFC 2047/2243 encoded word of the form:

        =?charset*lang?cte?encoded_string?=

    where '*lang' is omitted unless the 'lang' parameter is given a value.
    Optional argument charset (defaults to utf-8) specifies the charset to use
    to encode the string to binary before CTE encoding it.  Optional argument
    'encoding' is the cte specifier for the encoding that should be used ('q'
    or 'b'); if it is None (the default) the encoding which produces the
    shortest encoded sequence is used, except that 'q' is preferred if it is up
    to five characters longer.  Optional argument 'lang' (default '') gives the
    RFC 2243 language string to specify in the encoded word.

    rTr"rSNrOrP�rRz=?{0}{1}?{2}?{3}?=)rr�_cte_encode_length�
_cte_encodersr$)rbr^�encodingrar6�qlen�blenr rrrr�s
r)rcNr-)'�__doc__�
__future__rrrZfuture.builtinsrrrrr?rCrA�	functoolsrbr	r
Zfuture.backports.emailr�__all__�partial�compile�subrr�dictr!r/�ordr
rrrrrXrrfrerrrrr�<module>sJ$��+��PK�Du\=ި�99=future/backports/email/__pycache__/_policybase.cpython-39.pycnu�[���a

��?h79�@s�dZddlmZddlmZddlmZddlmZddlmZddlmZddl	m
Z
dd	lZdd
lm
Z
ddlmZddlmZgd
�ZGdd�de�Zdd�Zdd�ZGdd�de
eje��ZeGdd�de��Ze�Zd	S)zwPolicy framework for the email package.

Allows fine grained feature control of how the package parses and emits data.
�)�unicode_literals)�print_function)�division)�absolute_import)�super)�str)�with_metaclassN)�header)�charset)�_has_surrogates)�Policy�Compat32�compat32cs@eZdZdZ�fdd�Zdd�Zdd�Zdd	�Zd
d�Z�Z	S)�_PolicyBasea�Policy Object basic framework.

    This class is useless unless subclassed.  A subclass should define
    class attributes with defaults for any values that are to be
    managed by the Policy object.  The constructor will then allow
    non-default values to be set for these attributes at instance
    creation time.  The instance will be callable, taking these same
    attributes keyword arguments, and returning a new instance
    identical to the called instance except for those values changed
    by the keyword arguments.  Instances may be added, yielding new
    instances with any non-default values from the right hand
    operand overriding those in the left hand operand.  That is,

        A + B == A(<non-default values of B>)

    The repr of an instance can be used to reconstruct the object
    if and only if the repr of the values can be used to reconstruct
    those values.

    csH|��D]:\}}t||�r.tt|��||�qtd�||jj���qdS)z�Create new Policy, possibly overriding some defaults.

        See class docstring for a list of overridable attributes.

        �*{!r} is an invalid keyword argument for {}N)	�items�hasattrrr�__setattr__�	TypeError�format�	__class__�__name__)�self�kw�name�value�r��L/usr/local/lib/python3.9/site-packages/future/backports/email/_policybase.py�__init__0s
��z_PolicyBase.__init__cCs*dd�|j��D�}d�|jjd�|��S)NcSsg|]\}}d�||��qS)z{}={!r})r)�.0rrrrr�
<listcomp>?s�z(_PolicyBase.__repr__.<locals>.<listcomp>z{}({})z, )�__dict__rrrr�join)r�argsrrr�__repr__>s�z_PolicyBase.__repr__cKsr|j�|j�}|j��D]\}}t�|||�q|��D]4\}}t||�s^td�||jj	���t�|||�q8|S)z�Return a new instance with specified attributes changed.

        The new instance has the same attribute values as the current object,
        except for the changes passed in as keyword arguments.

        r)
r�__new__r"r�objectrrrrr)rr�	newpolicy�attrrrrr�cloneCs
��z_PolicyBase.clonecCs,t||�rd}nd}t|�|jj|���dS)Nz'{!r} object attribute {!r} is read-onlyz!{!r} object has no attribute {!r})r�AttributeErrorrrr)rrr�msgrrrrUs
z_PolicyBase.__setattr__cCs|jfi|j��S)z�Non-default values from right operand override those from left.

        The object returned is a new instance of the subclass.

        )r*r")r�otherrrr�__add__\sz_PolicyBase.__add__)
r�
__module__�__qualname__�__doc__rr%r*rr.�
__classcell__rrrrrsrcCs,|�dd�d}|�dd�d}|d|S)N�
�r)�rsplit�split)�doc�	added_docrrr�_append_docesr9cCs�|jr(|j�d�r(t|jdj|j�|_|j��D]V\}}|jr2|j�d�r2dd�|jD�D]*}tt||�d�}|r\t||j�|_q2q\q2|S)N�+rcss |]}|��D]
}|VqqdS)N)�mro)r �base�crrr�	<genexpr>o�z%_extend_docstrings.<locals>.<genexpr>r1)r1�
startswithr9�	__bases__r"r�getattr)�clsrr)r=r7rrr�_extend_docstringsjsrDc@s~eZdZdZdZdZdZdZdd�Zdd	�Z	d
d�Z
ejdd
��Z
ejdd��Zejdd��Zejdd��Zejdd��ZdS)ra�Controls for how messages are interpreted and formatted.

    Most of the classes and many of the methods in the email package accept
    Policy objects as parameters.  A Policy object contains a set of values and
    functions that control how input is interpreted and how output is rendered.
    For example, the parameter 'raise_on_defect' controls whether or not an RFC
    violation results in an error being raised or not, while 'max_line_length'
    controls the maximum length of output lines when a Message is serialized.

    Any valid attribute may be overridden when a Policy is created by passing
    it as a keyword argument to the constructor.  Policy objects are immutable,
    but a new Policy object can be created with only certain values changed by
    calling the Policy instance with keyword arguments.  Policy objects can
    also be added, producing a new Policy object in which the non-default
    attributes set in the right hand operand overwrite those specified in the
    left operand.

    Settable attributes:

    raise_on_defect     -- If true, then defects should be raised as errors.
                           Default: False.

    linesep             -- string containing the value to use as separation
                           between output lines.  Default '\n'.

    cte_type            -- Type of allowed content transfer encodings

                           7bit  -- ASCII only
                           8bit  -- Content-Transfer-Encoding: 8bit is allowed

                           Default: 8bit.  Also controls the disposition of
                           (RFC invalid) binary data in headers; see the
                           documentation of the binary_fold method.

    max_line_length     -- maximum length of lines, excluding 'linesep',
                           during serialization.  None or 0 means no line
                           wrapping is done.  Default is 78.

    Fr3�8bit�NcCs|jr
|�|�||�dS)aZBased on policy, either raise defect or call register_defect.

            handle_defect(obj, defect)

        defect should be a Defect subclass, but in any case must be an
        Exception subclass.  obj is the object on which the defect should be
        registered if it is not raised.  If the raise_on_defect is True, the
        defect is raised as an error, otherwise the object and the defect are
        passed to register_defect.

        This method is intended to be called by parsers that discover defects.
        The email package parsers always call it with Defect instances.

        N)�raise_on_defect�register_defect�r�obj�defectrrr�
handle_defect�szPolicy.handle_defectcCs|j�|�dS)a�Record 'defect' on 'obj'.

        Called by handle_defect if raise_on_defect is False.  This method is
        part of the Policy API so that Policy subclasses can implement custom
        defect handling.  The default implementation calls the append method of
        the defects attribute of obj.  The objects used by the email package by
        default that get passed to this method will always have a defects
        attribute with an append method.

        N)�defects�appendrIrrrrH�szPolicy.register_defectcCsdS)a[Return the maximum allowed number of headers named 'name'.

        Called when a header is added to a Message object.  If the returned
        value is not 0 or None, and there are already a number of headers with
        the name 'name' equal to the value returned, a ValueError is raised.

        Because the default behavior of Message's __setitem__ is to append the
        value to the list of headers, it is easy to create duplicate headers
        without realizing it.  This method allows certain headers to be limited
        in the number of instances of that header that may be added to a
        Message programmatically.  (The limit is not observed by the parser,
        which will faithfully produce as many headers as exist in the message
        being parsed.)

        The default implementation returns None for all header names.
        Nr)rrrrr�header_max_count�szPolicy.header_max_countcCst�dS)aZGiven a list of linesep terminated strings constituting the lines of
        a single header, return the (name, value) tuple that should be stored
        in the model.  The input lines should retain their terminating linesep
        characters.  The lines passed in by the email package may contain
        surrogateescaped binary data.
        N��NotImplementedError)r�sourcelinesrrr�header_source_parse�szPolicy.header_source_parsecCst�dS)z�Given the header name and the value provided by the application
        program, return the (name, value) that should be stored in the model.
        NrP�rrrrrr�header_store_parse�szPolicy.header_store_parsecCst�dS)awGiven the header name and the value from the model, return the value
        to be returned to the application program that is requesting that
        header.  The value passed in by the email package may contain
        surrogateescaped binary data if the lines were parsed by a BytesParser.
        The returned value should not contain any surrogateescaped data.

        NrPrTrrr�header_fetch_parse�s	zPolicy.header_fetch_parsecCst�dS)a�Given the header name and the value from the model, return a string
        containing linesep characters that implement the folding of the header
        according to the policy controls.  The value passed in by the email
        package may contain surrogateescaped binary data if the lines were
        parsed by a BytesParser.  The returned value should not contain any
        surrogateescaped data.

        NrPrTrrr�fold�s
zPolicy.foldcCst�dS)a%Given the header name and the value from the model, return binary
        data containing linesep characters that implement the folding of the
        header according to the policy controls.  The value passed in by the
        email package may contain surrogateescaped binary data.

        NrPrTrrr�fold_binaryszPolicy.fold_binaryN)rr/r0r1rG�linesep�cte_type�max_line_lengthrLrHrO�abc�abstractmethodrSrUrVrWrXrrrrrws$(

	



rc@sHeZdZdZdd�Zdd�Zdd�Zdd	�Zd
d�Zdd
�Z	dd�Z
dS)r
z�+
    This particular policy is the backward compatibility Policy.  It
    replicates the behavior of the email package version 5.1.
    cCs0t|t�s|St|�r(tj|tj|d�S|SdS)N�r
�header_name)�
isinstancerrr	�Header�_charset�UNKNOWN8BITrTrrr�_sanitize_headers

�zCompat32._sanitize_headercCs>|d�dd�\}}|�d�d�|dd��}||�d�fS)a:+
        The name is parsed as everything up to the ':' and returned unmodified.
        The value is determined by stripping leading whitespace off the
        remainder of the first line, joining all subsequent lines together, and
        stripping any trailing carriage return or linefeed characters.

        r�:r4z 	�Nz
)r6�lstripr#�rstrip)rrRrrrrrrS szCompat32.header_source_parsecCs||fS)z>+
        The name and value are returned unmodified.
        rrTrrrrU,szCompat32.header_store_parsecCs|�||�S)z�+
        If the value contains binary data, it is converted into a Header object
        using the unknown-8bit charset.  Otherwise it is returned unmodified.
        )rdrTrrrrV2szCompat32.header_fetch_parsecCs|j||dd�S)a+
        Headers are folded using the Header folding algorithm, which preserves
        existing line breaks in the value, and wraps each resulting line to the
        max_line_length.  Non-ASCII binary data are CTE encoded using the
        unknown-8bit charset.

        T��sanitize)�_foldrTrrrrW9sz
Compat32.foldcCs"|j|||jdkd�}|�dd�S)a�+
        Headers are folded using the Header folding algorithm, which preserves
        existing line breaks in the value, and wraps each resulting line to the
        max_line_length.  If cte_type is 7bit, non-ascii binary data is CTE
        encoded using the unknown-8bit charset.  Otherwise the original source
        header is used, with its existing line breaks and/or binary data.

        �7bitri�ascii�surrogateescape)rkrZ�encode)rrr�foldedrrrrXCs	zCompat32.fold_binarycCs�g}|�d|�t|t�r\t|�rL|r<tj|tj|d�}qZ|�|�d}q`tj||d�}n|}|dur�|�|j|j	|j
d��|�|j	�d�|�S)Nz%s: r^)r_)rY�
maxlinelenrf)rNr`rrr	rarbrcrorYr[r#)rrrrj�parts�hrrrrkOs&
�

�zCompat32._foldN)rr/r0r1rdrSrUrVrWrXrkrrrrr
s
r
)r1�
__future__rrrrZfuture.builtinsrrZfuture.utilsrr\Zfuture.backports.emailr	r
rbZfuture.backports.email.utilsr�__all__r'rr9rD�ABCMetarr
rrrrr�<module>s(L
`PK�Du\�L�
� � 8future/backports/email/__pycache__/policy.cpython-39.pycnu�[���a

��?hw"�@s�dZddlmZddlmZddlmZddlmZddlmZm	Z	m
Z
mZddlm
Z
ddlmZgd	�ZeGd
d�de��Ze�Ze`ejdd
�Zejdd�Zejddd�ZdS)zcThis will be the home for the policy that hooks in the new
code that adds all the email6 features.
�)�unicode_literals)�division)�absolute_import)�super)�Policy�Compat32�compat32�_extend_docstrings)�_has_surrogates)�HeaderRegistry)rrr�EmailPolicy�default�strict�SMTP�HTTPcsdeZdZdZdZe�Z�fdd�Zdd�Zdd�Z	d	d
�Z
dd�Zd
d�Zdd�Z
ddd�Z�ZS)ra+
    PROVISIONAL

    The API extensions enabled by this policy are currently provisional.
    Refer to the documentation for details.

    This policy adds new header parsing and folding algorithms.  Instead of
    simple strings, headers are custom objects with custom attributes
    depending on the type of the field.  The folding algorithm fully
    implements RFCs 2047 and 5322.

    In addition to the settable attributes listed above that apply to
    all Policies, this policy adds the following additional attributes:

    refold_source       -- if the value for a header in the Message object
                           came from the parsing of some source, this attribute
                           indicates whether or not a generator should refold
                           that value when transforming the message back into
                           stream form.  The possible values are:

                           none  -- all source values use original folding
                           long  -- source values that have any line that is
                                    longer than max_line_length will be
                                    refolded
                           all  -- all values are refolded.

                           The default is 'long'.

    header_factory      -- a callable that takes two arguments, 'name' and
                           'value', where 'name' is a header field name and
                           'value' is an unfolded header field value, and
                           returns a string-like object that represents that
                           header.  A default header_factory is provided that
                           understands some of the RFC5322 header field types.
                           (Currently address fields and date fields have
                           special treatment, while all other fields are
                           treated as unstructured.  This list will be
                           completed before the extension is marked stable.)
    �longcs.d|vrt�|dt��t�jfi|��dS)N�header_factory)�object�__setattr__rr�__init__)�self�kw��	__class__��G/usr/local/lib/python3.9/site-packages/future/backports/email/policy.pyrGszEmailPolicy.__init__cCs|j|jS)z�+
        The implementation for this class returns the max_count attribute from
        the specialized header class that would be used to construct a header
        of type 'name'.
        )r�	max_count)r�namerrr�header_max_countNszEmailPolicy.header_max_countcCs>|d�dd�\}}|�d�d�|dd��}||�d�fS)ac+
        The name is parsed as everything up to the ':' and returned unmodified.
        The value is determined by stripping leading whitespace off the
        remainder of the first line, joining all subsequent lines together, and
        stripping any trailing carriage return or linefeed characters.  (This
        is the same as Compat32).

        r�:�z 	�N�
)�split�lstrip�join�rstrip)r�sourcelinesr�valuerrr�header_source_parse`s	zEmailPolicy.header_source_parsecCsVt|d�r$|j��|��kr$||fSt|t�rFt|���dkrFtd��||�||�fS)a�+
        The name is returned unchanged.  If the input value has a 'name'
        attribute and it matches the name ignoring case, the value is returned
        unchanged.  Otherwise the name and value are passed to header_factory
        method, and the resulting custom header object is returned as the
        value.  In this case a ValueError is raised if the input value contains
        CR or LF characters.

        rr zDHeader values may not contain linefeed or carriage return characters)	�hasattrr�lower�
isinstance�str�len�
splitlines�
ValueErrorr�rrr(rrr�header_store_parsems

zEmailPolicy.header_store_parsecCs$t|d�r|S|�|d�|����S)ai+
        If the value has a 'name' attribute, it is returned to unmodified.
        Otherwise the name and the value with any linesep characters removed
        are passed to the header_factory method, and the resulting custom
        header object is returned.  Any surrogateescaped bytes get turned
        into the unicode unknown-character glyph.

        rr!)r*rr%r/r1rrr�header_fetch_parse~s	
zEmailPolicy.header_fetch_parsecCs|j||dd�S)a+
        Header folding is controlled by the refold_source policy setting.  A
        value is considered to be a 'source value' if and only if it does not
        have a 'name' attribute (having a 'name' attribute means it is a header
        object of some sort).  If a source value needs to be refolded according
        to the policy, it is converted into a custom header object by passing
        the name and the value with any linesep characters removed to the
        header_factory method.  Folding of a custom header object is done by
        calling its fold method with the current policy.

        Source values are split into lines using splitlines.  If the value is
        not to be refolded, the lines are rejoined using the linesep from the
        policy and returned.  The exception is lines containing non-ascii
        binary data.  In that case the value is refolded regardless of the
        refold_source setting, which causes the binary data to be CTE encoded
        using the unknown-8bit charset.

        T��
refold_binary)�_foldr1rrr�fold�szEmailPolicy.foldcCs"|j|||jdkd�}|�dd�S)a�+
        The same as fold if cte_type is 7bit, except that the returned value is
        bytes.

        If cte_type is 8bit, non-ASCII binary data is converted back into
        bytes.  Headers with binary data are not refolded, regardless of the
        refold_header setting, since there is no way to know whether the binary
        data consists of single byte characters or multibyte characters.

        �7bitr4�ascii�surrogateescape)r6�cte_type�encode)rrr(�foldedrrr�fold_binary�szEmailPolicy.fold_binaryFcs�t|d�r|j|d�S|jr"|jntd��|��}|jdkp�|jdko�|rft|d�t|�d�kp�t�fdd	�|d
d�D��}|s�|r�t|�r�|�	|d�
|��j|d�S|d|j�
|�|jS)
Nr)�policy�inf�allrr�c3s|]}t|��kVqdS)N)r.)�.0�x��maxlenrr�	<genexpr>��z$EmailPolicy._fold.<locals>.<genexpr>r r!z: )r*r7�max_line_length�floatr/�
refold_sourcer.�anyr
rr%�linesep)rrr(r5�linesZrefoldrrErr6�s


 �zEmailPolicy._fold)F)�__name__�
__module__�__qualname__�__doc__rKrrrrr)r2r3r7r>r6�
__classcell__rrrrrs(

rT)�raise_on_defectr")rMN)rMrI)rR�
__future__rrrZfuture.builtinsrZ)future.standard_library.email._policybaserrrr	Z#future.standard_library.email.utilsr
Z,future.standard_library.email.headerregistryr�__all__rr
r�clonerrrrrrr�<module>s #PK�Du\UKj^�)�)<future/backports/email/__pycache__/feedparser.cpython-39.pycnu�[���a

��?h�X�@s�dZddlmZddlmZddlmZddlmZmZmZddl	m
Z
mZddgZdd	l
Z
dd
lmZddlmZddlmZe
�d
�Ze
�d�Ze
�d�Ze
�d�Ze
�d�ZdZdZe�ZGdd�de�ZGdd�de�ZGdd�de�Zd	S)aFeedParser - An email feed parser.

The feed parser implements an interface for incrementally parsing an email
message, line by line.  This has advantages for certain applications, such as
those reading email messages off a socket.

FeedParser.feed() is the primary interface for pushing new data into the
parser.  It returns when there's nothing more it can do with the available
data.  When you have no more data to push into the parser, call .close().
This completes the parsing and returns the root message object.

The other advantage of this parser is that it will never raise a parsing
exception.  Instead, when it finds something unexpected, it adds a 'defect' to
the current message.  Defects are just instances that live on the message
object's .defects attribute.
�)�unicode_literals)�division)�absolute_import)�object�range�super)�implements_iterator�PY3�
FeedParser�BytesFeedParserN)�errors)�message)�compat32z
|
|
z(
|
|
)z
(
|
|
)\Zz(^(From |[\041-\071\073-\176]{1,}:|[\t ])��
c@s`eZdZdZdd�Zdd�Zdd�Zdd	�Zd
d�Zdd
�Z	dd�Z
dd�Zdd�Zdd�Z
dS)�BufferedSubFileakA file-ish object that can have new data loaded into it.

    You can also push and pop line-matching predicates onto a stack.  When the
    current predicate matches the current line, a false EOF response
    (i.e. empty string) is returned instead.  This lets the parser adhere to a
    simple abstraction -- it parses until EOF closes the current message.
    cCsd|_g|_g|_d|_dS)NrF)�_partial�_lines�	_eofstack�_closed��self�r�K/usr/local/lib/python3.9/site-packages/future/backports/email/feedparser.py�__init__9szBufferedSubFile.__init__cCs|j�|�dS�N)r�append)r�predrrr�push_eof_matcherCsz BufferedSubFile.push_eof_matchercCs
|j��Sr)r�poprrrr�pop_eof_matcherFszBufferedSubFile.pop_eof_matchercCs|j�|j�d|_d|_dS)NrT)rrrrrrrr�closeIszBufferedSubFile.closecCsR|js|jrdStS|j��}|jddd�D]}||�r.|j�|�dSq.|S)Nr���)rr�NeedMoreDatarrr)r�line�ateofrrr�readlineOs
zBufferedSubFile.readlinecCs|tusJ�|j�|�dSr)r#rr�rr$rrr�
unreadlineaszBufferedSubFile.unreadlinecCs�|j|d}|_t�|�}|��|_|jsR|rR|d�d�rR|�d�|��|_g}tt|�d�D]&}|�||d||dd�qf|�|�dS)z$Push some new data into this object.rr"�
�����N)	r�NLCRE_crack�splitr�endswithr�lenr�	pushlines)r�data�parts�lines�irrr�pushfs

$zBufferedSubFile.pushcCs|ddd�|jdd�<dS)Nr"r)r)rr4rrrr1}szBufferedSubFile.pushlinescCs|Srrrrrr�__iter__�szBufferedSubFile.__iter__cCs|��}|dkrt�|S)Nr)r&�
StopIterationr'rrr�__next__�szBufferedSubFile.__next__N)�__name__�
__module__�__qualname__�__doc__rrr r!r&r(r6r1r7r9rrrrr1s
rc@s^eZdZdZejfdd�Zdd�Zdd�Zdd	�Z	d
d�Z
dd
�Zdd�Zdd�Z
dd�ZdS)r
zA feed-style parser of email.cs�d|vr|d}|d=nt}|�_|�_z|�jd��fdd��_Wntybdd��_Yn0t��_g�_tr���	�j
�_n��	�j�_d�_
d�_d�_dS)N�policy�r>cs
d�jiS)Nr>r?rrrr�<lambda>��z%FeedParser.__init__.<locals>.<lambda>cSsiSrrrrrrr@�rAF)r�_factoryr>�
_factory_kwds�	TypeErrorr�_input�	_msgstackr	�	_parsegenr9�_parse�next�_cur�_last�_headersonly)rrBZ_3to2kwargsr>rrrr�s"zFeedParser.__init__cCs
d|_dS)NT)rLrrrr�_set_headersonly�szFeedParser._set_headersonlycCs|j�|�|��dS)zPush more data into the parser.N)rEr6�_call_parse�rr2rrr�feed�szFeedParser.feedcCs$z|��WntyYn0dSr)rHr8rrrrrN�szFeedParser._call_parsecCsR|j��|��|��}|jr$J�|��dkrN|��sNt��}|j	�
||�|S)z<Parse all remaining data and return the root message object.�	multipart)rEr!rN�_pop_messagerF�get_content_maintype�is_multipartr�!MultipartInvariantViolationDefectr>�
handle_defect)r�root�defectrrrr!�s

�zFeedParser.closecCsd|jfi|����}|jr2|j��dkr2|�d�|jrH|jd�|�|j�|�||_||_dS)Nzmultipart/digestzmessage/rfc822r")	rBrCrJ�get_content_type�set_default_typerF�attachrrK)r�msgrrr�_new_message�s
zFeedParser._new_messagecCs(|j��}|jr|jd|_nd|_|S)Nr")rFrrJ)r�retvalrrrrR�s

zFeedParser._pop_messageccs |��g}|jD]Z}|tur&tVqt�|�sbt�|�s^t��}|j�	|j
|�|j�|�qn|�|�q|�
|�|jr�g}|j��}|tur�tVq�|dkr�q�|�|�q�|j
�t�|��dS|j
��dk�r�|j�tj�|��D]}|tu�rtVq��qq�|��}|j��|j��}|tu�rDtV�q�qD�q|j��}|tu�rjtV�qD�qj�qD|dk�rx�q�|j�|�q�dS|j
��dk�r�|��D] }|tu�r�tV�q��qĐq�|��dS|j
��dk�r�|j
��}|du�rRt��}|j�	|j
|�g}|jD]$}|tu�r.tV�q|�|��q|j
�t�|��dS|j
�dd���dv�r�t��}|j�	|j
|�d|}t�d	t� |�d
�}	d}
g}d}d}
|j��}|tu�r�tV�q�|dk�rސq�|	�|�}|�r�|�!d
��rd}
|�!d�}�q�|
�rn|�rZ|d}t"�#|�}|�rL|dt$|�!d���|d<t�|�|j
_%d}
|j�|��q�|j��}|tu�r�tV�qn|	�|�}|�sn|j�|��q��qn|j�|	j�|��D] }|tu�r�tV�q��q�q�|j&��dk�rP|j&j'}|dk�rd|j&_'n:|du�r�t"�#|�}|�r�t$|�!d��}|d|�|j&_'nD|j&j(}t)|t*��r�t"�#|�}|�r�|dt$|�!d���}||j&_(|j��|��|j
|_&n|
�s�J�|�|��q�|
�r0t�+�}|j�	|j
|�|j
�t�|��g}|jD]}|tu�rtV�q�qt�|�|j
_'dS|
�sRt�,�}|j�	|j
|�dS|�r`dg}ng}|jD]$}|tu�r�tV�qj|�|��qj|�r�|d}t-�|�}|�r�|t$|�!d��d�|d<t�|�|j
_'dSg}|jD]$}|tu�r�tV�q�|�|��q�|j
�t�|��dS)Nrzmessage/delivery-statusr
rQzcontent-transfer-encoding�8bit)�7bitr_�binaryz--z(?P<sep>z4)(?P<end>--)?(?P<ws>[ \t]*)(?P<linesep>\r\n|\r|\n)?$TF�end�linesepr"r).r]rEr#�headerRE�match�NLCREr� MissingHeaderBodySeparatorDefectr>rVrJr(r�_parse_headersrLr&�set_payload�EMPTYSTRING�joinrYrrGrRr rS�get_boundary�NoBoundaryInMultipartDefect�get�lower�-InvalidMultipartContentTransferEncodingDefect�re�compile�escape�group�	NLCRE_eol�searchr0�preamblerK�epilogue�_payload�
isinstance�str�StartBoundaryNotFoundDefect�CloseBoundaryNotFoundDefect�	NLCRE_bol)r�headersr$rXr4r^r\�boundary�	separator�
boundaryre�capturing_preamblerwrc�close_boundary_seen�mo�lastline�eolmorxrb�payload�	firstline�bolmorrrrG�sb

















���

























zFeedParser._parsegenc	CsFd}g}t|�D�]\}}|ddvrR|sFt�|�}|j�|j|�q|�|�q|rt|jj|j�|��dg}}|�	d�r�|dkr�t
�|�}|r�|dt|�
d���}|j�|�qn<|t|�dkr�|j�|�dSt�|�}|jj�|�q|�d�}|dk�sJd��|d|�}|g}q|�rB|jj|j�|��dS)Nrrz 	zFrom r,�:z3_parse_headers fed line with no : and no leading WS)�	enumerater�#FirstHeaderLineIsContinuationDefectr>rVrJr�set_raw�header_source_parse�
startswithrurvr0rt�set_unixfromrEr(�MisplacedEnvelopeHeaderDefect�defects�find)	rr4�
lastheader�	lastvalue�linenor$rXr�r5rrrrh�s@






zFeedParser._parse_headersN)r:r;r<r=r
�MessagerrMrPrNr!r]rRrGrhrrrrr
�s

~cs eZdZdZ�fdd�Z�ZS)rz(Like FeedParser, but feed accepts bytes.cst��|�dd��dS)N�ascii�surrogateescape)rrP�decoderO��	__class__rrrPszBytesFeedParser.feed)r:r;r<r=rP�
__classcell__rrr�rr	s)r=�
__future__rrrZfuture.builtinsrrrZfuture.utilsrr	�__all__rqZfuture.backports.emailrr
Z"future.backports.email._policybaserrrrfr~rur-rdrj�NLr#rr
rrrrr�<module>s0




ZPK�Du\�{�/�
�
<future/backports/email/__pycache__/base64mime.cpython-39.pycnu�[���a

��?h��@s�dZddlmZddlmZddlmZddlmZddlmZddlmZgd�Z	dd	l
mZdd
lm
Z
mZdZdZd
ZdZdd�Zddd�Zdefdd�Zdd�ZeZeZdS)a�Base64 content transfer encoding per RFCs 2045-2047.

This module handles the content transfer encoding method defined in RFC 2045
to encode arbitrary 8-bit data using the three 8-bit bytes in four 7-bit
characters encoding known as Base64.

It is used in the MIME standards for email to attach images, audio, and text
using some 8-bit character sets to messages.

This module provides an interface to encode and decode both headers and bodies
with Base64 encoding.

RFC 2045 defines a method for including character set information in an
`encoded-word' in a header.  This method is commonly used for 8-bit real names
in To:, From:, Cc:, etc. fields, as well as Subject: lines.

This module does not do the line wrapping or end-of-line character conversion
necessary for proper internationalized headers; it only does dumb encoding and
decoding.  To deal with the various line wrapping issues, use the email.header
module.
�)�unicode_literals)�division)�absolute_import)�range)�bytes)�str)�body_decode�body_encode�decode�decodestring�
header_encode�
header_length)�	b64encode)�
b2a_base64�
a2b_base64z
�
��cCs*tt|�d�\}}|d}|r&|d7}|S)z6Return the length of s when it is encoded with base64.��)�divmod�len)�	bytearray�groups_of_3�leftover�n�r�K/usr/local/lib/python3.9/site-packages/future/backports/email/base64mime.pyr
7s
r
�
iso-8859-1cCs6|sdSt|t�r|�|�}t|��d�}d||fS)z�Encode a single header line with Base64 encoding in a given charset.

    charset names the character set to use to encode the header.  It defaults
    to iso-8859-1.  Base64 encoding is defined in RFC 2045.
    r�asciiz=?%s?b?%s?=)�
isinstancer�encoderr
)�header_bytes�charset�encodedrrrrAs

r�LcCs~|s|Sg}|dd}tdt|�|�D]J}t||||���d�}|�t�rh|tkrh|dd�|}|�|�q(t�|�S)a1Encode a string with base64.

    Each line will be wrapped at, at most, maxlinelen characters (defaults to
    76 characters).

    Each line of encoded text will end with eol, which defaults to "\n".  Set
    this to "\r\n" if you will be using the result of this function directly
    in an email.
    rrrrN���)	rrrr
�endswith�NL�append�EMPTYSTRING�join)�s�
maxlinelen�eol�encvec�
max_unencoded�i�encrrrr	Os
r	cCs.|s
t�St|t�r"t|�d��St|�SdS)z�Decode a raw base64 string, returning a bytes object.

    This function does not parse a full MIME header value encoded with
    base64 (like =?iso-8895-1?b?bmloISBuaWgh?=) -- please use the high
    level email.header class for that functionality.
    zraw-unicode-escapeN)rr rrr!)�stringrrrr
hs

r
N)r)�__doc__�
__future__rrrZfuture.builtinsrrr�__all__�base64r�binasciirr�CRLFr(r*�MISC_LENr
rr	r
rrrrrr�<module>s&


PK�Du\4t_N$N$<future/backports/email/__pycache__/quoprimime.cpython-39.pycnu�[���a

��?h�*�@s~dZddlmZddlmZddlmZddlmZmZmZm	Z	m
Z
mZgd�Zddl
Z
ddlZddlmZmZmZd	Zd
ZdZedd
�e
d�D��Ze��Zede�d�e�d��D]Zee�ee<q�deed�<ed�D]Zee�ee<q�dd�Zdd�Zdd�Zdd�Zd0dd�Z dd�Z!d d!�Z"d1d#d$�Z#Gd%d&�d&ej$�Z%d'efd(d)�Z&efd*d+�Z'e'Z(e'Z)d,d-�Z*d.d/�Z+dS)2aFQuoted-printable content transfer encoding per RFCs 2045-2047.

This module handles the content transfer encoding method defined in RFC 2045
to encode US ASCII-like 8-bit data called `quoted-printable'.  It is used to
safely encode text that is in a character set similar to the 7-bit US ASCII
character set, but that includes some 8-bit characters that are normally not
allowed in email bodies or headers.

Quoted-printable is very space-inefficient for encoding binary files; use the
email.base64mime module for that instead.

This module provides an interface to encode and decode both headers and bodies
with quoted-printable encoding.

RFC 2045 defines a method for including character set information in an
`encoded-word' in a header.  This method is commonly used for 8-bit real names
in To:/From:/Cc: etc. fields, as well as Subject: lines.

This module does not do the line wrapping or end-of-line character
conversion necessary for proper internationalized headers; it only
does dumb encoding and decoding.  To deal with the various line
wrapping issues, use the email.header module.
�)�unicode_literals)�division)�absolute_import)�bytes�chr�dict�int�range�super)
�body_decode�body_encode�body_length�decode�decodestring�
header_decode�
header_encode�
header_length�quote�unquoteN)�
ascii_letters�digits�	hexdigits�
�
�ccs|]}|d|fVqdS)�=%02XN�)�.0�crr�K/usr/local/lib/python3.9/site-packages/future/backports/email/quoprimime.py�	<genexpr><�r �s-!*+/�ascii�_� s_ !"#$%&'()*+,-./0123456789:;<>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~	cCst|�t|kS)z>Return True if the octet should be escaped with header quopri.)r�_QUOPRI_HEADER_MAP��octetrrr�header_checkNsr)cCst|�t|kS)z<Return True if the octet should be escaped with body quopri.)r�_QUOPRI_BODY_MAPr'rrr�
body_checkSsr+cCstdd�|D��S)a:Return a header quoted-printable encoding length.

    Note that this does not include any RFC 2047 chrome added by
    `header_encode()`.

    :param bytearray: An array of bytes (a.k.a. octets).
    :return: The length in bytes of the byte array when it is encoded with
        quoted-printable for headers.
    css|]}tt|�VqdS�N)�lenr&�rr(rrrr br!z header_length.<locals>.<genexpr>��sum��	bytearrayrrrrXs
rcCstdd�|D��S)z�Return a body quoted-printable encoding length.

    :param bytearray: An array of bytes (a.k.a. octets).
    :return: The length in bytes of the byte array when it is encoded with
        quoted-printable for bodies.
    css|]}tt|�VqdSr,)r-r*r.rrrr lr!zbody_length.<locals>.<genexpr>r/r1rrrr
esr
cCsft|t�st|�}|s&|�|���n<t|d�t|�|krT|d||7<n|�|���dS)N���)�
isinstance�strr�append�lstripr-)�L�s�maxlen�extrarrr�_max_appendos
r<cCstt|dd�d��S)zDTurn a string in the form =AB to the ASCII character with value 0xab���)rr�r9rrrrzsrcCsdt|�S)Nr)�ord)rrrrrsr�
iso-8859-1cCs6|sdSg}|D]}|�t|�qd|t�|�fS)a�Encode a single header line with quoted-printable (like) encoding.

    Defined in RFC 2045, this `Q' encoding is similar to quoted-printable, but
    used specifically for email header fields to allow charsets with mostly 7
    bit characters (and some 8 bit) to remain more or less readable in non-RFC
    2045 aware mail clients.

    charset names the character set to use in the RFC 2046 header.  It
    defaults to iso-8859-1.
    rz=?%s?q?%s?=)r6r&�EMPTYSTRING�join)�header_bytes�charset�encodedr(rrrr�srcsFeZdZ�fdd�Zdd�Zdd�Zdd�Zdd
d�Zdd
�Z�Z	S)�_body_accumulatorcs(t�j|i|��||_||_|_dSr,)r
�__init__�eol�
maxlinelen�room)�selfrKrJ�args�kw��	__class__rrrI�sz_body_accumulator.__init__cCs |�|�|jt|�8_dS)z%Add string s to the accumulated body.N)�writerLr-)rMr9rrr�	write_str�s
z_body_accumulator.write_strcCs|�|j�|j|_dS)zWrite eol, then start new line.N)rSrJrKrL�rMrrr�newline�sz_body_accumulator.newlinecCs|�d�|��dS)z*Write a soft break, then start a new line.�=N)rSrUrTrrr�write_soft_break�s
z"_body_accumulator.write_soft_breakrcCs(|jt|�|kr|��|�|�dS)z.Add a soft line break if needed, then write s.N)rLr-rWrS)rMr9�
extra_roomrrr�
write_wrapped�sz_body_accumulator.write_wrappedcCsz|s|j|dd�nb|dvr(|�|�nN|jdkrB|�t|��n4|jdkr`|�|�|��n|��|�t|��dS)Nr=)rXz 	r>�)rYrLrRrrW)rMrZis_last_charrrr�
write_char�s



z_body_accumulator.write_char)r)
�__name__�
__module__�__qualname__rIrSrUrWrYr[�
__classcell__rrrPrrH�s
rH�LcCs�|dkrtd��|s|S|ddv}t||�}|��}t|�d}t|�D]^\}}t|�d}	t|�D],\}
}tt|��r�t|�}|�||
|	k�qf||ks�|rJ|�	�qJ|�
�S)a�Encode with quoted-printable, wrapping at maxlinelen characters.

    Each line of encoded text will end with eol, which defaults to "\n".  Set
    this to "\r\n" if you will be using the result of this function directly
    in an email.

    Each line will be wrapped at, at most, maxlinelen characters before the
    eol string (maxlinelen defaults to 76 characters, the maximum value
    permitted by RFC 2045).  Long lines will have the 'soft line break'
    quoted-printable character "=" appended to them, so the decoded text will
    be identical to the original text.

    The minimum maxlinelen is 4 to have room for a quoted character ("=XX")
    followed by a soft line break.  Smaller values will generate a
    ValueError.

    �zmaxlinelen must be at least 4r3rr=)�
ValueErrorrH�
splitlinesr-�	enumerater+rArr[rU�getvalue)�bodyrKrJZlast_has_eol�encoded_body�linesZlast_line_noZline_no�lineZlast_char_index�irrrrr�s"

rcCs|s|Sd}|��D]�}|��}|s.||7}qd}t|�}||kr||}|dkrd||7}|d7}nv|d|kr||d7}q:n^|d|kr�||dtvr�||dtvr�|t|||d��7}|d7}n||7}|d7}||kr:||7}q:q|ddv�r|�|��r|d	d�}|S)
z_Decode a quoted-printable string.

    Lines are separated with eol, which defaults to \n.
    rrrVr=rZr>r3rN)rc�rstripr-rr�endswith)rGrJ�decodedrirj�nrrrrrs8
,
rcCs|�d�}t|�S)zCTurn a match in the form =AB to the ASCII character with value 0xabr)�groupr)�matchr9rrr�_unquote_match7s
rqcCs|�dd�}t�dt|tj�S)aDecode a string encoded with RFC 2045 MIME header `Q' encoding.

    This function does not parse a full MIME header value encoded with
    quoted-printable (like =?iso-8895-1?q?Hello_World?=) -- please use
    the high level email.header class for that functionality.
    r$r%z=[a-fA-F0-9]{2})�replace�re�subrq�ASCIIr@rrrr>sr)r)rB),�__doc__�
__future__rrrZfuture.builtinsrrrrr	r
�__all__rs�io�stringrrr�CRLF�NLrCr&�copyr*�encoderrAr)r+rr
r<rrr�StringIOrHrrrrrqrrrrr�<module>sB 
 



610PK�Du\����-�-;future/backports/email/__pycache__/generator.cpython-39.pycnu�[���a

��?h@L�@s6dZddlmZddlmZddlmZddlmZddlmZddlmZgd�Z	dd	l
Z
dd	lZdd	lZdd	l
Z
dd	lZdd
lmZmZddlmZddlmZdd
lmZdd	lmmmZdZdZe
�de
j �Z!Gdd�de"�Z#Gdd�de#�Z$dZ%Gdd�de#�Z&e'e(ej)d��Z*de*Z+e#j,Z,d	S)z:Classes to generate plain text from a message object tree.�)�print_function)�unicode_literals)�division)�absolute_import)�super)�str)�	Generator�DecodedGenerator�BytesGeneratorN)�StringIO�BytesIO)�compat32)�Header)�_has_surrogates�_�
z^From c@s�eZdZdZd(dd�Zdd�Zd)d	d
�Zdd�Zd
Zdd�Z	dd�Z
dd�Zdd�Zdd�Z
dd�Zdd�ZeZdd�Zdd�Zd d!�Zd"d#�Zed*d$d%��Zed&d'��ZdS)+rz�Generates output from a Message object tree.

    This basic generator writes the message to the given file object as plain
    text.
    TNcKs8d|vr|d}|d=nd}||_||_||_||_dS)N�policy)�_fp�
_mangle_from_�maxheaderlenr)�self�outfp�mangle_from_rZ_3to2kwargsr�r�J/usr/local/lib/python3.9/site-packages/future/backports/email/generator.py�__init__+szGenerator.__init__cCs|j�|�dS�N)r�write�r�srrrrHszGenerator.writeFcCs�|jdur|jn|j}|dur*|j|d�}|jdurB|j|jd�}|j|_|�|j�|_d|_|�d�|_|j}|j}zX||_||_|r�|�	�}|s�dt
�t
�
��}|�||j�|�
|�W||_||_n||_||_0dS)azPrint the message object tree rooted at msg to the output file
        specified when the Generator instance was created.

        unixfrom is a flag that forces the printing of a Unix From_ delimiter
        before the first object in the message tree.  If the original message
        has no From_ delimiter, a `standard' one is crafted.  By default, this
        is False to inhibit the printing of any From_ delimiter.

        Note that for subobjects, no From_ line is printed.

        linesep specifies the characters used to indicate a new line in
        the output.  The default value is determined by the policy.

        N)�linesep��max_line_length�zFrom nobody )r�clonerr �_NL�_encode�_encoded_NLZ_EMPTYZ_encoded_EMTPY�get_unixfrom�time�ctimer�_write)r�msg�unixfromr rZold_gen_policyZold_msg_policyZufromrrr�flattenLs0
�zGenerator.flattencCs|j||jd|jd�S)z1Clone this generator with the exact same options.N)r)�	__class__rr)r�fprrrr${s
�zGenerator.cloner#cCst�Sr)r�rrrr�_new_buffer�szGenerator._new_buffercCs|Srrrrrrr&�szGenerator._encodecCs||sdS|�d�}|dd�D] }|�|�d��|�|j�q|d�d�}|�|�t|d�t|�krx|�|j�dS)NT���z
)�
splitlinesr�rstripr%�len)r�lines�lineZlaststrippedrrr�_write_lines�s

[binary data omitted: dump of site-packages.zip — ZIP archive containing netifaces.cpython-39-x86_64-linux-gnu.so (x86_64 ELF shared object) and compiled CPython 3.9 bytecode caches future/backports/email/__pycache__/generator.cpython-39.pyc, message.cpython-39.pyc, and charset.cpython-39.pyc]
__future__rrrZfuture.builtinsrr�__all__�	functoolsrZfuture.backportsrZfuture.backports.emailr
Zfuture.backports.email.encodersrrbrPr6rkr�rErmr8r?rBr	rr
rI�objectrr=r=r=r>�<module>s�� ��
	PK�Du\�H�rjj:future/backports/email/__pycache__/encoders.cpython-39.pycnu�[���a

��?h�
�@s�dZddlmZddlmZddlmZddlmZgd�Zzddlm	Z
WneyjddlmZ
Yn0ddl
mZd	d
�Zdd�Zd
d�Zdd�Zdd�ZdS)z Encodings and related functions.�)�unicode_literals)�division)�absolute_import)�str)�encode_7or8bit�
encode_base64�encode_noop�
encode_quopri)�encodebytes)�encodestringcCst|dd�}|�dd�S)NT)�	quotetabs� z=20)�
_encodestring�replace)�s�enc�r�I/usr/local/lib/python3.9/site-packages/future/backports/email/encoders.py�_qencodesrcCs,|��}tt|�d�}|�|�d|d<dS)zlEncode the message's payload in Base64.

    Also, add an appropriate Content-Transfer-Encoding header.
    �ascii�base64�Content-Transfer-EncodingN)�get_payloadr�_bencode�set_payload��msg�orig�encdatarrrr!s
rcCs&|��}t|�}|�|�d|d<dS)zvEncode the message's payload in quoted-printable.

    Also, add an appropriate Content-Transfer-Encoding header.
    zquoted-printablerN)rrrrrrrr	,s
r	cCs�|��}|durd|d<dSz$t|t�r4|�d�n
|�d�WnHty�|��}|o^|j}|r||���	d�r|d|d<nd|d<Yn
0d|d<t|t�s�|�
|�dd��dS)z9Set the Content-Transfer-Encoding header to 7bit or 8bit.N�7bitrrz	iso-2022-�8bit�surrogateescape)r�
isinstancer�encode�decode�UnicodeError�get_charset�output_charset�lower�
startswithr)rr�charsetZoutput_csetrrrr7s"



rcCs(|��}t|t�s$|�|�dd��dS)zDo nothing.rr!N)rr"rrr$)rrrrrrSs
rN)�__doc__�
__future__rrrZfuture.builtinsr�__all__rr
r�ImportErrorr�quoprirrrr	rrrrrr�<module>sPK�Du\��N�x(x(7future/backports/email/__pycache__/utils.cpython-39.pycnu�[���a

��?h�7�@s�dZddlmZddlmZddlmZddlmZddlmZm	Z	m
Z
gd�ZddlZddl
Z
ejrlde
_ddlZddlZddlZddlZdd	lmZdd
lmZmZddlZddlmZddlmZdd
lmZ ddlm!Z!ddlm"Z"m#Z#m$Z$ddl%m&Z'ddl(m)Z)m*Z*ddl+m,Z,dZ-dZ.dZ/dZ0dZ1e
�2d�Z3e
�2d�Z4e
�2d�j5Z6dd�Z7d?dd�Z8dd �Z9e
�2d!e
j:e
j;B�Z<d"d#�Z=d@d%d&�Z>dAd'd(�Z?dBd)d*�Z@d+d,�ZAd-d.�ZBd/d0�Zd1d2�ZCdCd3d4�ZDe
�2d5e
j�ZEd6d7�ZFdDd:d;�ZGdEd=d>�ZHdS)FzMiscellaneous utilities.�)�unicode_literals)�division)�absolute_import)�utils)�bytes�int�str)�collapse_rfc2231_value�
decode_params�decode_rfc2231�encode_rfc2231�
formataddr�
formatdate�format_datetime�getaddresses�
make_msgid�	mktime_tz�	parseaddr�	parsedate�parsedate_tz�parsedate_to_datetime�unquoteN)�datetime)�quoter)�StringIO)r)�AddressList)r)rr�
_parsedate_tz)�decodestring)�_bencode�_qencode)�Charsetz, �z
�'z[][\\()<>@,:;".]z[\\"]u'([^�-�]|\A)[�-�]([^�-�]|\Z)cCs|�dd�}|�dd�S)N�ascii�surrogateescape�replace)�encode�decode)�string�original_bytes�r*�F/usr/local/lib/python3.9/site-packages/future/backports/email/utils.py�	_sanitizeHsr,�utf-8cCs�|\}}|�d�|r�z|�d�Wn:ty^t|t�rDt|�}|�|�}d||fYS0d}t�|�rrd}t�	d|�}d||||fS|S)a�The inverse of parseaddr(), this takes a 2-tuple of the form
    (realname, email_address) and returns the string value suitable
    for an RFC 2822 From, To or Cc header.

    If the first element of pair is false, then the second element is
    returned unmodified.

    Optional charset if given is the character set that is used to encode
    realname in case realname is not ASCII safe.  Can be an instance of str or
    a Charset-like object which has a header_encode method.  Default is
    'utf-8'.
    r#z%s <%s>r!�"z\\\g<0>z%s%s%s <%s>)
r&�UnicodeEncodeError�
isinstancerr �
header_encode�
specialsre�search�	escapesre�sub)�pair�charset�name�address�encoded_name�quotesr*r*r+r
Ps 




r
cCst�|�}t|�}|jS)z7Return a list of (REALNAME, EMAIL) for each fieldvalue.)�
COMMASPACE�join�_AddressList�addresslist)�fieldvalues�all�ar*r*r+rrs
ra_
  =\?                   # literal =?
  (?P<charset>[^?]*?)   # non-greedy up to the next ? is the charset
  \?                    # literal ?
  (?P<encoding>[qb])    # either a "q" or a "b", case insensitive
  \?                    # literal ?
  (?P<atom>.*?)         # non-greedy up to the next ?= is the atom
  \?=                   # literal ?=
  c	CsHdgd�|d|dgd�|dd|d|d|d	|d
|fS)Nz"%s, %02d %s %04d %02d:%02d:%02d %s)�Mon�Tue�Wed�Thu�Fri�Sat�Sun��)�Jan�Feb�Mar�Apr�May�Jun�Jul�Aug�Sep�Oct�Nov�Dec�r���r*)�	timetuple�zoner*r*r+�_format_timetuple_and_zone�s
��r^Fc	Cs�|durt��}|rrt�|�}tjr4|dr4tj}ntj}tt|�d�\}}|dkrZd}nd}d|||df}nt�|�}|r�d	}nd
}t||�S)a�Returns a date string as specified by RFC 2822, e.g.:

    Fri, 09 Nov 2001 01:08:47 -0000

    Optional timeval if given is a floating point time value as accepted by
    gmtime() and localtime(), otherwise the current time is used.

    Optional localtime is a flag that when True, interprets timeval, and
    returns a date relative to the local timezone instead of UTC, properly
    taking daylight savings time into account.

    Optional argument usegmt means that the timezone is written out as
    an ascii string, not numeric one (so "GMT" instead of "+0000"). This
    is needed for HTTP, and is only used when localtime==False.
    N���ir�-�+z
%s%02d%02d�<�GMT�-0000)	�time�	localtime�daylight�altzone�timezone�divmod�abs�gmtimer^)	�timevalrf�usegmt�now�offset�hours�minutes�signr]r*r*r+r�s"

rcCsV|��}|r2|jdus$|jtjjkr,td��d}n|jdurBd}n
|�d�}t||�S)a$Turn a datetime into a date string as specified in RFC 2822.

    If usegmt is True, dt must be an aware datetime with an offset of zero.  In
    this case 'GMT' will be rendered instead of the normal +0000 required by
    RFC2822.  This is to support HTTP headers involving date stamps.
    Nz%usegmt option requires a UTC datetimercrdz%z)r\�tzinforri�utc�
ValueError�strftimer^)�dtrnror]r*r*r+r�s

rcCsht��}t�dt�|��}t��}t�d�}|dur:d}nd|}|durRt��}d|||||f}|S)anReturns a string suitable for RFC 2822 compliant Message-ID, e.g:

    <20020201195627.33539.96671@nightshade.la.mastaler.com>

    Optional idstring if given is a string used to strengthen the
    uniqueness of the message id.  Optional domain if given provides the
    portion of the message id after the '@'.  It defaults to the locally
    defined hostname.
    z%Y%m%d%H%M%Si��Nr!�.z<%s.%s.%s%s@%s>)	rerwrl�os�getpid�random�	randrange�socket�getfqdn)�idstring�domainrmZutcdate�pid�randint�msgidr*r*r+r�s

rcCsjtt|��}|dd�g|dd�\}}|durDtj|dd��Stj|dd�dt�tj|d��i�S)Nr_rJrt��seconds)�listrrri�	timedelta)�dataZ	_3to2list�dtuple�tzr*r*r+r�s�rcCst|�j}|sdS|dS)N)r!r!r)r>r?)�addr�addrsr*r*r+r�s
rcCs`t|�dkr\|�d�r<|�d�r<|dd��dd��dd�S|�d�r\|�d�r\|dd�S|S)	zRemove quotes from a string.rXr.r_z\\�\z\"�<�>)�len�
startswith�endswithr%)rr*r*r+r�srcCs&|�td�}t|�dkr"dd|fS|S)z#Decode string according to RFC 2231rKN)�split�TICKr�)�s�partsr*r*r+rs
rcCs@t|d|pdd�}|dur&|dur&|S|dur2d}d|||fS)z�Encode string according to RFC 2231.

    If neither charset nor language is given, then s is returned as-is.  If
    charset is given but not language, the string is encoded using the empty
    string for language.
    r!r#)�safe�encodingNz%s'%s'%s)�	url_quote)r�r7�languager*r*r+rsrz&^(?P<name>\w+)\*((?P<num>[0-9]+)\*?)?$c
Csh|dd�}g}i}|�d�\}}|�||f�|r�|�d�\}}|�d�rRd}nd}t|�}t�|�}|r�|�dd�\}}|dur�t|�}|�|g��|||f�q0|�|dt	|�f�q0|�rd|�
�D]�\}}g}d}	|��|D]*\}}
}|�rt|
d	d
�}
d}	|�|
�q�t	t
�|��}|	�rPt|�\}}}|�|||d|ff�q�|�|d|f�q�|S)zDecode parameters list according to RFC 2231.

    params is a sequence of 2-tuples containing (param name, string value).
    Nr�*TFr8�numz"%s"zlatin-1)r�)�pop�appendr�r�rfc2231_continuation�match�groupr�
setdefaultr�items�sort�url_unquote�EMPTYSTRINGr=r)
�params�
new_params�rfc2231_paramsr8�value�encoded�mor��
continuations�extendedr�r7r�r*r*r+r
sD

r
r%�us-asciicCs`t|t�rt|�dkrt|�S|\}}}t|d�}zt|||�WStyZt|�YS0dS)NrYzraw-unicode-escape)r0�tupler�rrr�LookupError)r��errors�fallback_charsetr7r��text�rawbytesr*r*r+r	Us

r	r_c	Cs|durtj�tjj���S|jdur.|��S|��dd�|f}t�|�}t�	|�}z tj
|jd�}t�||j�}Wn~t
y�|tjt�|�dd��}tjo�|jdk}|r�tjntj}|tj
|d�kr�t�|tj|�}n
t�|�}Yn0|j|d�S)a�Return local time as an aware datetime object.

    If called without arguments, return current time.  Otherwise *dt*
    argument should be a datetime instance, and it is converted to the
    local time zone according to the system time zone database.  If *dt* is
    naive (that is, dt.tzinfo is None), it is assumed to be in local time.
    In this case, a positive or zero value for *isdst* causes localtime to
    presume initially that summer time (for example, Daylight Saving Time)
    is or is not (respectively) in effect for the specified time.  A
    negative value for *isdst* causes the localtime() function to attempt
    to divine whether summer time is in effect for the specified time.

    Nr_r�rJr)rt)rroriru�
astimezonertr\re�mktimerfr��	tm_gmtoff�tm_zone�AttributeErrorrlrg�tm_isdstrh�tznamer%)	rx�isdst�tmr��localtm�deltar��dst�gmtoffr*r*r+rfks$


rf)r-)NFF)F)NN)NN)r%r�)Nr_)I�__doc__�
__future__rrr�futurerZfuture.builtinsrrr�__all__rz�re�PY2�ASCIIre�base64r|r~Zfuture.backportsrZfuture.backports.urllib.parserr�rr��warnings�iorZ!future.backports.email._parseaddrrr>rrrr�quoprirZ_qdecodeZfuture.backports.email.encodersrrZfuture.backports.email.charsetr r<r��UEMPTYSTRING�CRLFr��compiler2r4r3�_has_surrogatesr,r
r�VERBOSE�
IGNORECASE�ecrer^rrrrrrrr�r
r	rfr*r*r*r+�<module>st

�
"
�	
-

	
�8�
PK�Du\�ɐ��8future/backports/email/__pycache__/parser.cpython-39.pycnu�[���a

��?h��@s�dZddlmZddlmZddlmZgd�ZddlZddlmZm	Z	ddl
mZmZdd	l
mZdd
lmZGdd�de�ZGd
d�de�ZGdd�de�ZGdd�de�ZdS)z-A parser of RFC 2822 and MIME email messages.�)�unicode_literals)�division)�absolute_import)�Parser�HeaderParser�BytesParser�BytesHeaderParserN)�StringIO�
TextIOWrapper)�
FeedParser�BytesFeedParser)�Message)�compat32c@s,eZdZefdd�Zd	dd�Zd
dd�ZdS)rcKs,d|vr|d}|d=nt}||_||_dS)a�Parser of RFC 2822 and MIME email messages.

        Creates an in-memory object tree representing the email message, which
        can then be manipulated and turned over to a Generator to return the
        textual representation of the message.

        The string must be formatted as a block of RFC 2822 headers and header
        continuation lines, optionally preceded by a `Unix-from' header.  The
        header block is terminated either by the end of the string or by a
        blank line.

        _class is the class to instantiate for new message objects when they
        must be created.  This class must have a constructor that can take
        zero arguments.  Default is Message.Message.

        The policy keyword specifies a policy object that controls a number of
        aspects of the parser's operation.  The default policy maintains
        backward compatibility.

        �policyN)r�_classr)�selfrZ_3to2kwargsr�r�G/usr/local/lib/python3.9/site-packages/future/backports/email/parser.py�__init__szParser.__init__FcCs@t|j|jd�}|r|��|�d�}|s,q8|�|�q|��S)a\Create a message structure from the data in a file.

        Reads all the data from the file and returns the root of the message
        structure.  Optional headersonly is a flag specifying whether to stop
        parsing after reading the headers or not.  The default is False,
        meaning it parses the entire contents of the file.
        )ri )rrr�_set_headersonly�read�feed�close)r�fp�headersonly�
feedparser�datarrr�parse/s
zParser.parsecCs|jt|�|d�S)a-Create a message structure from a string.

        Returns the root of the message structure.  Optional headersonly is a
        flag specifying whether to stop parsing after reading the headers or
        not.  The default is False, meaning it parses the entire contents of
        the file.
        �r)rr	�r�textrrrr�parsestrAszParser.parsestrN)F)F)�__name__�
__module__�__qualname__r
rrr!rrrrrs
rc@s eZdZddd�Zddd�ZdS)	rTcCst�||d�S�NT)rr�rrrrrrrNszHeaderParser.parsecCst�||d�Sr%)rr!rrrrr!QszHeaderParser.parsestrN)T)T)r"r#r$rr!rrrrrMs
rc@s(eZdZdd�Zd	dd�Zd
dd�ZdS)rcOst|i|��|_dS)a�Parser of binary RFC 2822 and MIME email messages.

        Creates an in-memory object tree representing the email message, which
        can then be manipulated and turned over to a Generator to return the
        textual representation of the message.

        The input must be formatted as a block of RFC 2822 headers and header
        continuation lines, optionally preceded by a `Unix-from' header.  The
        header block is terminated either by the end of the input or by a
        blank line.

        _class is the class to instantiate for new message objects when they
        must be created.  This class must have a constructor that can take
        zero arguments.  Default is Message.Message.
        N)r�parser)r�args�kwrrrrWszBytesParser.__init__FcCsDt|ddd�}|�|j�||�Wd�S1s60YdS)acCreate a message structure from the data in a binary file.

        Reads all the data from the file and returns the root of the message
        structure.  Optional headersonly is a flag specifying whether to stop
        parsing after reading the headers or not.  The default is False,
        meaning it parses the entire contents of the file.
        �ascii�surrogateescape)�encoding�errorsN)r
r'rr&rrrriszBytesParser.parsecCs|jddd�}|j�||�S)a2Create a message structure from a byte string.

        Returns the root of the message structure.  Optional headersonly is a
        flag specifying whether to stop parsing after reading the headers or
        not.  The default is False, meaning it parses the entire contents of
        the file.
        �ASCIIr+)r-)�decoder'r!rrrr�
parsebytesvszBytesParser.parsebytesN)F)F)r"r#r$rrr0rrrrrUs

rc@s eZdZddd�Zddd�ZdS)	rTcCstj||dd�S�NTr)rrr&rrrr�szBytesHeaderParser.parsecCstj||dd�Sr1)rr0rrrrr0�szBytesHeaderParser.parsebytesN)T)T)r"r#r$rr0rrrrr�s
r)�__doc__�
__future__rrr�__all__�warnings�ior	r
Z!future.backports.email.feedparserrrZfuture.backports.email.messager
Z"future.backports.email._policybaser�objectrrrrrrrr�<module>s9-PK�Du\�m}^11<future/backports/email/__pycache__/_parseaddr.cpython-39.pycnu�[���a

��?h�C�@s�dZddlmZddlmZddlmZddlmZddlmZgd�Zddl	Z	ddl
Z
d	Zd
ZdZ
gd�Zgd
�Zddddddddddddddd�Zdd�Zdd�Zdd�Zdd�Zdd�ZGdd �d e�ZGd!d"�d"e�ZdS)#zcEmail address parsing code.

Lifted directly from rfc822.py.  This should eventually be rewritten.
�)�unicode_literals)�print_function)�division)�absolute_import)�int)�	mktime_tz�	parsedate�parsedate_tz�quoteN� �z, )�jan�feb�mar�apr�may�jun�jul�aug�sep�oct�nov�dec�january�february�march�aprilr�june�july�august�	september�october�november�december)�mon�tue�wed�thu�fri�sat�sunip���i���i���i����iD���i��)�UT�UTC�GMT�Z�AST�ADT�EST�EDT�CST�CDT�MST�MDT�PST�PDTcCs,t|�}|sdS|ddur$d|d<t|�S)zQConvert a date string to a time tuple.

    Accounts for military timezones.
    N�	r)�
_parsedate_tz�tuple)�data�res�r>�K/usr/local/lib/python3.9/site-packages/future/backports/email/_parseaddr.pyr	3sr	c
Cs�|sdS|��}|d�d�s.|d��tvr6|d=n.|d�d�}|dkrd|d|dd�|d<t|�dkr�|d�d�}t|�dkr�||dd�}t|�dkr�|d}|�d�}|d	kr�|�d�}|dkr�|d|�||d�g|dd�<n
|�d
�t|�dk�rdS|dd�}|\}}}}}|��}|tv�rX||��}}|tv�rXdSt�	|�d}|dk�rx|d8}|d	dk�r�|dd	�}|�d
�}|dk�r�||}}|d	dk�r�|dd	�}|d�
��s�||}}|d	dk�r�|dd	�}|�d
�}t|�dk�r"|\}	}
d}n~t|�dk�r<|\}	}
}ndt|�dk�r�d|dv�r�|d�d�}t|�dk�r�|\}	}
d}nt|�dk�r�|\}	}
}ndSz,t|�}t|�}t|	�}	t|
�}
t|�}Wnt�y�YdS0|dk�r
|dk�r|d7}n|d7}d}|�
�}|tv�r*t|}n<zt|�}Wnt�yJYn0|dk�rf|�d��rfd}|�r�|dk�r�d	}
|}nd}
|
|dd|dd}||||	|
|ddd	|g
S)a�Convert date to extended time tuple.

    The last (additional) element is the time zone offset in seconds, except if
    the timezone was specified as -0000.  In that case the last element is
    None.  This indicates a UTC timestamp that explicitly declaims knowledge of
    the source timezone, as opposed to a +0000 timestamp that indicates the
    source timezone really was UTC.

    Nr�,���-��+���r���:��0�.�d�Dili�i�<)�split�endswith�lower�	_daynames�rfind�len�find�append�_monthnames�index�isdigitr�
ValueError�upper�
_timezones�
startswith)r<�i�stuff�s�dd�mm�yy�tm�tz�thh�tmm�tss�tzoffset�tzsignr>r>r?r:?s�


"














r:cCs&t|�}t|t�r|dd�S|SdS)z&Convert a time string to a time tuple.Nr9)r	�
isinstancer;�r<�tr>r>r?r�s
rcCs<|ddur"t�|dd�d�St�|�}||dSdS)zETurn a 10-tuple as returned by parsedate_tz() into a POSIX timestamp.r9N�)rF)�time�mktime�calendar�timegmrmr>r>r?r�s
rcCs|�dd��dd�S)z�Prepare string to be used in a quoted string.

    Turns backslash and double quote characters into quoted pairs.  These
    are the only characters that need to be quoted inside a quoted string.
    Does not add the surrounding double quotes.
    �\z\\�"z\")�replace)�strr>r>r?r
�sr
c@s|eZdZdZdd�Zdd�Zdd�Zdd	�Zd
d�Zdd
�Z	dd�Z
ddd�Zdd�Zdd�Z
dd�Zddd�Zdd�ZdS) �
AddrlistClassaAddress parser class by Ben Escoto.

    To understand what this class does, it helps to have a copy of RFC 2822 in
    front of you.

    Note: this class interface is deprecated and may be removed in the future.
    Use email.utils.AddressList instead.
    cCsZd|_d|_d|_d|_|j|j|_|j|j|j|_|j�dd�|_||_g|_	dS)z�Initialize a new instance.

        `field' is an unparsed address header field, containing
        one or more addresses.
        z()<>@,:;."[]rz 	z
rLrN)
�specials�pos�LWS�CR�FWS�atomendsrv�
phraseends�field�commentlist��selfr�r>r>r?�__init__�szAddrlistClass.__init__cCs�g}|jt|j�kr�|j|j|jdvr\|j|jdvrL|�|j|j�|jd7_q|j|jdkr�|j�|���qq�qt�|�S)z&Skip white space and extract comments.z

rA�()	rzrUr�r{rWr��
getcomment�EMPTYSTRING�join)r��wslistr>r>r?�gotonext�szAddrlistClass.gotonextcCs:g}|jt|j�kr6|��}|r*||7}q|�d�q|S)zVParse all addresses.

        Returns a list containing all of the addresses.
        )rr)rzrUr��
getaddressrW)r��result�adr>r>r?�getaddrlist�s
zAddrlistClass.getaddrlistcCs�g|_|��|j}|j}|��}|��g}|jt|j�kr\|rXt�|j�|dfg}�n\|j|jdvr�||_||_|��}t�|j�|fg}�n"|j|jdk�rg}t|j�}|jd7_|jt|j�k�r�|��|j|k�r|j|jdk�r|jd7_�q�||�	�}q�n�|j|jdk�rx|�
�}|j�rft�|�dd�|j�d	|fg}nt�|�|fg}n@|�r�t�|j�|dfg}n"|j|j|jv�r�|jd7_|��|jt|j�k�r�|j|jd
k�r�|jd7_|S)zParse the next address.rz.@rIrA�;�<z (r�)r@)r�r�rz�
getphraselistrUr��SPACEr��getaddrspecr��getrouteaddrry)r��oldpos�oldcl�plist�
returnlist�addrspec�fieldlen�	routeaddrr>r>r?r�
sX

���$zAddrlistClass.getaddresscCs�|j|jdkrdSd}|jd7_|��d}|jt|j�kr�|rT|��d}n~|j|jdkrv|jd7_q�n\|j|jdkr�|jd7_d}n8|j|jd	kr�|jd7_n|��}|jd7_q�|��q2|S)
z�Parse a route address (Return-path value).

        This method just skips all the route stuff and returns the addrspec.
        r�NFrAr�>�@TrI)r�rzr�rU�	getdomainr�)r��expectroute�adlistr>r>r?r�Es.
zAddrlistClass.getrouteaddrcCsFg}|��|jt|j�kr�d}|j|jdkrf|rH|d��sH|��|�d�|jd7_d}nd|j|jdkr�|�dt|����n<|j|j|j	vr�|r�|d��s�|��q�n|�|�
��|��}|r|r|�|�q|jt|j�k�s
|j|jdk�rt�|�S|�d�|jd7_|��t�|�|�
�S)	zParse an RFC 2822 addr-spec.TrLrFrAFruz"%s"r�)r�rzrUr��strip�poprWr
�getquoter~�getatomr�r�r�)r��aslist�preserve_ws�wsr>r>r?r�es4
$

zAddrlistClass.getaddrspeccCs�g}|jt|j�kr�|j|j|jvr6|jd7_q|j|jdkrX|j�|���q|j|jdkrx|�|���q|j|jdkr�|jd7_|�d�q|j|j|jvr�q�q|�|�	��qt
�|�S)z-Get the complete domain name from an address.rAr��[rL)rzrUr�r{r�rWr��getdomainliteralr~r�r�r�)r��sdlistr>r>r?r��szAddrlistClass.getdomainTcCs�|j|j|krdSdg}d}|jd7_|jt|j�kr�|rX|�|j|j�d}np|j|j|vrz|jd7_q�nN|r�|j|jdkr�|�|���q,n(|j|jdkr�d}n|�|j|j�|jd7_q,t�|�S)a�Parse a header fragment delimited by special characters.

        `beginchar' is the start character for the fragment.
        If self is not looking at an instance of `beginchar' then
        getdelimited returns the empty string.

        `endchars' is a sequence of allowable end-delimiting characters.
        Parsing stops when one of these is encountered.

        If `allowcomments' is non-zero, embedded RFC 2822 comments are allowed
        within the parsed fragment.
        rFrAr�rtT)r�rzrUrWr�r�r�)r��	beginchar�endchars�
allowcomments�slistr
r>r>r?�getdelimited�s(
zAddrlistClass.getdelimitedcCs|�ddd�S)z1Get a quote-delimited fragment from self's field.ruz"
F�r��r�r>r>r?r��szAddrlistClass.getquotecCs|�ddd�S)z7Get a parenthesis-delimited fragment from self's field.r�z)
Tr�r�r>r>r?r��szAddrlistClass.getcommentcCsd|�ddd�S)z!Parse an RFC 2822 domain-literal.z[%s]r�z]
Fr�r�r>r>r?r��szAddrlistClass.getdomainliteralNcCsddg}|dur|j}|jt|j�krZ|j|j|vr8qZn|�|j|j�|jd7_qt�|�S)aParse an RFC 2822 atom.

        Optional atomends specifies a different set of end token delimiters
        (the default is to use self.atomends).  This is used e.g. in
        getphraselist() since phrase endings must not include the `.' (which
        is legal in phrases).rNrA)r~rzrUr�rWr�r�)r�r~�atomlistr>r>r?r��szAddrlistClass.getatomcCs�g}|jt|j�kr�|j|j|jvr6|jd7_q|j|jdkrV|�|���q|j|jdkrx|j�|���q|j|j|jvr�q�q|�|�	|j��q|S)z�Parse a sequence of RFC 2822 phrases.

        A phrase is a sequence of words, which are in turn either RFC 2822
        atoms or quoted-strings.  Phrases are canonicalized by squeezing all
        runs of continuous whitespace into one space.
        rArur�)
rzrUr�r}rWr�r�r�rr�)r�r�r>r>r?r��szAddrlistClass.getphraselist)T)N)�__name__�
__module__�__qualname__�__doc__r�r�r�r�r�r�r�r�r�r�r�r�r�r>r>r>r?rx�s	; !
%
rxc@sHeZdZdZdd�Zdd�Zdd�Zdd	�Zd
d�Zdd
�Z	dd�Z
dS)�AddressListz@An AddressList encapsulates a list of parsed RFC 2822 addresses.cCs&t�||�|r|��|_ng|_dS�N)rxr�r��addresslistr�r>r>r?r��szAddressList.__init__cCs
t|j�Sr�)rUr�r�r>r>r?�__len__�szAddressList.__len__cCs>td�}|jdd�|_|jD]}||jvr|j�|�q|Sr��r�r�rW�r��other�newaddr�xr>r>r?�__add__s

zAddressList.__add__cCs&|jD]}||jvr|j�|�q|Sr�)r�rW�r�r�r�r>r>r?�__iadd__
s

zAddressList.__iadd__cCs.td�}|jD]}||jvr|j�|�q|Sr�r�r�r>r>r?�__sub__s


zAddressList.__sub__cCs&|jD]}||jvr|j�|�q|Sr�)r��remover�r>r>r?�__isub__s

zAddressList.__isub__cCs
|j|Sr�)r�)r�rYr>r>r?�__getitem__ szAddressList.__getitem__N)r�r�r�r�r�r�r�r�r�r�r�r>r>r>r?r��s	r�)r��
__future__rrrrZfuture.builtinsr�__all__rprrr�r��
COMMASPACErXrSr]r	r:rrr
�objectrxr�r>r>r>r?�<module>s8�	u	

&PK�Du\�n^r8future/backports/email/__pycache__/errors.cpython-39.pycnu�[���a

��?h`�@s�dZddlmZddlmZddlmZddlmZGdd�de�ZGdd	�d	e�Z	Gd
d�de	�Z
Gdd
�d
e	�ZGdd�dee�Z
Gdd�de�ZGdd�de�ZGdd�de�ZGdd�de�ZGdd�de�ZGdd�de�ZGdd�de�ZGdd�de�ZeZGd d!�d!e�ZGd"d#�d#e�ZGd$d%�d%e�ZGd&d'�d'e�ZGd(d)�d)e�ZGd*d+�d+e�ZGd,d-�d-e�ZGd.d/�d/e�ZGd0d1�d1e�Z Gd2d3�d3e�Z!Gd4d5�d5e�Z"d6S)7z email package exception classes.�)�unicode_literals)�division)�absolute_import)�superc@seZdZdZdS)�MessageErrorz+Base class for errors in the email package.N��__name__�
__module__�__qualname__�__doc__�rr�G/usr/local/lib/python3.9/site-packages/future/backports/email/errors.pyrsrc@seZdZdZdS)�MessageParseErrorz&Base class for message parsing errors.Nrrrrr
rsrc@seZdZdZdS)�HeaderParseErrorzError while parsing headers.Nrrrrr
rsrc@seZdZdZdS)�
BoundaryErrorz#Couldn't find terminating boundary.Nrrrrr
rsrc@seZdZdZdS)�MultipartConversionErrorz(Conversion to a multipart is prohibited.Nrrrrr
rsrc@seZdZdZdS)�CharsetErrorzAn illegal charset was given.Nrrrrr
r srcs"eZdZdZd�fdd�	Z�ZS)�
MessageDefectz Base class for a message defect.Ncs|durt��|�||_dS�N)r�__init__�line)�selfr��	__class__rr
r(szMessageDefect.__init__)N�rr	r
rr�
__classcell__rrrr
r%src@seZdZdZdS)�NoBoundaryInMultipartDefectzBA message claimed to be a multipart but had no boundary parameter.Nrrrrr
r-src@seZdZdZdS)�StartBoundaryNotFoundDefectz+The claimed start boundary was never found.Nrrrrr
r0src@seZdZdZdS)�CloseBoundaryNotFoundDefectzEA start boundary was found, but not the corresponding close boundary.Nrrrrr
r3src@seZdZdZdS)�#FirstHeaderLineIsContinuationDefectz;A message had a continuation line as its first header line.Nrrrrr
r6src@seZdZdZdS)�MisplacedEnvelopeHeaderDefectz?A 'Unix-from' header was found in the middle of a header block.Nrrrrr
r 9sr c@seZdZdZdS)� MissingHeaderBodySeparatorDefectzEFound line with no leading whitespace and no colon before blank line.Nrrrrr
r!<sr!c@seZdZdZdS)�!MultipartInvariantViolationDefectz?A message claimed to be a multipart but no subparts were found.Nrrrrr
r"Asr"c@seZdZdZdS)�-InvalidMultipartContentTransferEncodingDefectzEAn invalid content transfer encoding was set on the multipart itself.Nrrrrr
r#Dsr#c@seZdZdZdS)�UndecodableBytesDefectz0Header contained bytes that could not be decodedNrrrrr
r$Gsr$c@seZdZdZdS)�InvalidBase64PaddingDefectz/base64 encoded sequence had an incorrect lengthNrrrrr
r%Jsr%c@seZdZdZdS)�InvalidBase64CharactersDefectz=base64 encoded sequence had characters not in base64 alphabetNrrrrr
r&Msr&cs eZdZdZ�fdd�Z�ZS)�HeaderDefectzBase class for a header defect.cst�j|i|��dSr)rr)r�args�kwrrr
rUszHeaderDefect.__init__rrrrr
r'Rsr'c@seZdZdZdS)�InvalidHeaderDefectz+Header is not valid, message gives details.Nrrrrr
r*Xsr*c@seZdZdZdS)�HeaderMissingRequiredValuez(A header that must have a value had noneNrrrrr
r+[sr+cs(eZdZdZ�fdd�Zdd�Z�ZS)�NonPrintableDefectz8ASCII characters outside the ascii-printable range foundcst��|�||_dSr)rr�non_printables)rr-rrr
raszNonPrintableDefect.__init__cCsd�|j�S)Nz6the following ASCII non-printables found in header: {})�formatr-)rrrr
�__str__es�zNonPrintableDefect.__str__)rr	r
rrr/rrrrr
r,^sr,c@seZdZdZdS)�ObsoleteHeaderDefectz0Header uses syntax declared obsolete by RFC 5322Nrrrrr
r0isr0c@seZdZdZdS)�NonASCIILocalPartDefectz(local_part contains non-ASCII charactersNrrrrr
r1lsr1N)#r�
__future__rrrZfuture.builtinsr�	Exceptionrrrr�	TypeErrorrr�
ValueErrorrrrrrr r!�MalformedHeaderDefectr"r#r$r%r&r'r*r+r,r0r1rrrr
�<module>s:PK�Du\�}�8�8Ffuture/backports/email/__pycache__/_header_value_parser.cpython-39.pycnu�[���a

��?h��@s�dZddlmZddlmZddlmZddlmZddlmZmZm	Z	m
Z
mZddlZddl
mZmZdd	lmZmZdd
lmZddlmZddlmZed
�Zeed�BZed�ZeeBZeed�Zeed�Zeed�Bed�ZeeBZ eed�BZ!e!eBZ"e"ed�Z#dd�Z$Gdd�de%�Z&Gdd�de�Z'Gdd�de'�Z(Gdd�de'�Z)Gdd �d e'�Z*Gd!d"�d"e'�Z+Gd#d$�d$e(�Z,Gd%d&�d&e'�Z-Gd'd(�d(e'�Z.Gd)d*�d*e'�Z/Gd+d,�d,e'�Z0Gd-d.�d.e0�Z1Gd/d0�d0e(�Z2Gd1d2�d2e'�Z3Gd3d4�d4e'�Z4Gd5d6�d6e'�Z5Gd7d8�d8e'�Z6Gd9d:�d:e'�Z7Gd;d<�d<e'�Z8Gd=d>�d>e'�Z9Gd?d@�d@e'�Z:GdAdB�dBe'�Z;GdCdD�dDe'�Z<GdEdF�dFe'�Z=GdGdH�dHe'�Z>GdIdJ�dJe'�Z?GdKdL�dLe'�Z@GdMdN�dNe'�ZAGdOdP�dPe*�ZBGdQdR�dRe'�ZCGdSdT�dTe'�ZDGdUdV�dVe'�ZEGdWdX�dXe'�ZFGdYdZ�dZeF�ZGGd[d\�d\e'�ZHGd]d^�d^e'�ZIGd_d`�d`e'�ZJGdadb�dbe'�ZKGdcdd�dde'�ZLGdedf�dfeL�ZMGdgdh�dheL�ZNGdidj�dje'�ZOGdkdl�dle'�ZPGdmdn�dne'�ZQGdodp�dpe	�ZRGdqdr�dreR�ZSGdsdt�dteR�ZTGdudv�dveS�ZUeTddw�ZVeTdxdy�ZWeTdzd{�ZXe�Yd|�Zd}�[e���j\Z]e�Yd~�Zd}�[e��^dd���^d�d����j_Z`e�Yd��jaZbe�Yd~�Zd}�[e ��^dd���^d�d����j_Zce�Yd~�Zd}�[e"��^dd���^d�d����j_Zde�Yd~�Zd}�[e#��^dd���^d�d����j_Zed�d��Zfd�d��Zgd�d��Zhd�d��Zid�d��Zjd�d��Zkd�d��Zld�d��Zmd�d��Znd�d��Zod�d��Zpd�d��Zqd�d��Zrd�d��Zsd�d��Ztd�d��Zud�d��Zvd�d��Zwd�d��Zxd�d��Zyd�d��Zzd�d��Z{d�d��Z|d�d��Z}d�d��Z~d�d��Zd�d��Z�d�d��Z�d�d��Z�d�d��Z�d�d��Z�d�dÄZ�d�dńZ�d�dDŽZ�d�dɄZ�d�d˄Z�d�d̈́Z�d�dτZ�d�dфZ�d�dӄZ�d�dՄZ�d�dׄZ�d�dلZ�d�dۄZ�d�d݄Z�d�d߄Z�d�d�Z�d�d�Z�d�d�Z�d�d�Z�d�d�Z�d�d�Z�dS)�alHeader value parser implementing various email-related RFC parsing rules.

The parsing methods defined in this module implement various email related
parsing rules.  Principal among them is RFC 5322, which is the followon
to RFC 2822 and primarily a clarification of the former.  It also implements
RFC 2047 encoded word decoding.

RFC 5322 goes to considerable trouble to maintain backward compatibility with
RFC 822 in the parse phase, while cleaning up the structure on the generation
phase.  This parser supports correct RFC 5322 generation by tagging white space
as folding white space only when folding is allowed in the non-obsolete rule
sets.  Actually, the parser is even more generous when accepting input than RFC
5322 mandates, following the spirit of Postel's Law, which RFC 5322 encourages.
Where possible deviations from the standard are annotated on the 'defects'
attribute of tokens that deviate.

The general structure of the parser follows RFC 5322, and uses its terminology
where there is a direct correspondence.  Where the implementation requires a
somewhat different structure than that used by the formal grammar, new terms
that mimic the closest existing terms are used.  Thus, it really helps to have
a copy of RFC 5322 handy when studying this code.

Input to the parser is a string that has already been unfolded according to
RFC 5322 rules.  According to the RFC this unfolding is the very first step, and
this parser leaves the unfolding step to a higher level message parser, which
will have already detected the line breaks that need unfolding while
determining the beginning and end of each header.

The output of the parser is a TokenList object, which is a list subclass.  A
TokenList is a recursive data structure.  The terminal nodes of the structure
are Terminal objects, which are subclasses of str.  These do not correspond
directly to terminal objects in the formal grammar, but are instead more
practical higher level combinations of true terminals.

All TokenList and Terminal objects have a 'value' attribute, which produces the
semantically meaningful value of that part of the parse subtree.  The value of
all whitespace tokens (no matter how many sub-tokens they may contain) is a
single space, as per the RFC rules.  This includes 'CFWS', which is herein
included in the general class of whitespace tokens.  There is one exception to
the rule that whitespace tokens are collapsed into single spaces in values: in
the value of a 'bare-quoted-string' (a quoted-string with no leading or
trailing whitespace), any whitespace that appeared between the quotation marks
is preserved in the returned value.  Note that in all Terminal strings quoted
pairs are turned into their unquoted values.

All TokenList and Terminal objects also have a string value, which attempts to
be a "canonical" representation of the RFC-compliant form of the substring that
produced the parsed subtree, including minimal use of quoted pair quoting.
Whitespace runs are not collapsed.

Comment tokens also have a 'content' attribute providing the string found
between the parens (including any nested comments) with whitespace preserved.

All TokenList and Terminal objects have a 'defects' attribute, which is a
possibly empty list of all the defects found while creating the token.  Defects
may appear on any token in the tree, and a composite list of all defects in the
subtree is available through the 'all_defects' attribute of any node.  (For
Terminal nodes, x.defects == x.all_defects.)
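The difference between 'defects' and 'all_defects' can be seen with an input
that is defective deep in the tree.  This sketch assumes the private stdlib
module email._header_value_parser and its current defect-reporting behavior:

```python
from email import _header_value_parser as parser
from email import errors

# "example.com" parses as an addr-spec whose local part has no '@domain';
# that registers an InvalidHeaderDefect on the inner addr-spec node,
# not on the top-level address-list token itself.
tl, rest = parser.get_address_list("example.com")
```

tl.defects is empty, but tl.all_defects surfaces the defect recorded on the
addr-spec node several levels down.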

Each object in a parse tree is called a 'token', and each has a 'token_type'
attribute that gives the name from the RFC 5322 grammar that it represents.
Not all RFC 5322 nodes are produced, and there is one non-RFC 5322 node that
may be produced: 'ptext'.  A 'ptext' is a string of printable ASCII characters.
It is returned in place of lists of (ctext/quoted-pair) and
(qtext/quoted-pair).
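The 'token_type' attribute makes it easy to walk a parse tree.  A minimal
traversal, assuming the private stdlib module email._header_value_parser
(Terminal objects are str subclasses, so the isinstance guard stops recursion
from descending into individual characters):

```python
from email import _header_value_parser as parser

def token_types(tok):
    """Yield the token_type of every node in a parse (sub)tree."""
    yield tok.token_type
    if not isinstance(tok, parser.Terminal):
        for child in tok:
            yield from token_types(child)

tok, rest = parser.get_bare_quoted_string('"hi there"')
types = list(token_types(tok))
```

For this input, types is ['bare-quoted-string', 'ptext', 'fws', 'ptext']: the
qcontent runs come back as 'ptext' terminals rather than the raw grammar's
qtext/quoted-pair alternation.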

XXX: provide complete list of token types.
[The remainder of this file survives only as compiled CPython 3.9 bytecode of
/usr/local/lib/python3.9/site-packages/future/backports/email/_header_value_parser.py;
the source text is not recoverable.  The structure recoverable from the
bytecode is:

TokenList classes: TokenList, WhiteSpaceTokenList, UnstructuredTokenList,
Phrase, Word, CFWSList, Atom, Token, EncodedWord, QuotedString,
BareQuotedString, Comment, AddressList, Address, MailboxList, GroupList,
Group, NameAddr, AngleAddr, ObsRoute, Mailbox, InvalidMailbox, Domain,
DotAtom, DotAtomText, AddrSpec, ObsLocalPart, DisplayName, LocalPart,
DomainLiteral, MIMEVersion, Parameter, InvalidParameter, Attribute, Section,
Value, MimeParameters, ParameterizedHeaderValue, ContentType,
ContentDisposition, ContentTransferEncoding, HeaderLabel, Header.

Terminal classes: Terminal, WhiteSpaceTerminal, ValueTerminal,
EWWhiteSpaceTerminal.

Parser functions: get_fws, get_encoded_word, get_unstructured, get_qp_ctext,
get_qcontent, get_atext, get_bare_quoted_string, get_comment, get_cfws,
get_quoted_string, get_atom, get_dot_atom_text, get_dot_atom, get_word,
get_phrase, get_local_part, get_obs_local_part, get_dtext,
get_domain_literal, get_domain, get_addr_spec, get_obs_route,
get_angle_addr, get_display_name, get_name_addr, get_mailbox,
get_invalid_mailbox, get_mailbox_list, get_group_list, get_group,
get_address, get_address_list, parse_mime_version, get_invalid_parameter,
get_ttext (truncated), plus the helpers quote_string, _Folded,
_validate_xtext, _get_ptext_to_endchars, and _decode_ew_run.]

    We allow any non-TOKEN_ENDS in ttext, but add defects to the token's
    defects list if we find non-ttext characters.  We also register defects for
    *any* non-printables even though the RFC doesn't exclude all of them,
    because we follow the spirit of RFC 5322.

    zexpected ttext but found '{}'N�ttext)�_non_token_end_matcherrrLrQr�r7r5r>)rrYr�rrr�	get_ttext�	s	�
r�cCs�t�}|r,|dtvr,t|�\}}|�|�|rL|dtvrLt�d�|���t|�\}}|�|�|r�|dtvr�t|�\}}|�|�||fS)z�token = [CFWS] 1*ttext [CFWS]

    The RFC equivalent of ttext is any US-ASCII chars except space, ctls, or
    tspecials.  We also exclude tabs even though the RFC doesn't.

    The RFC implies the CFWS but is not explicit about it in the BNF.

    r�expected token but found '{}')	r�r]r^r*�
TOKEN_ENDSrrLrQr�)rZmtokenr;rrr�	get_token�	s	
�

r�cCsNt|�}|st�d�|���|��}|t|�d�}t|d�}t|�||fS)aQattrtext = 1*(any non-ATTRIBUTE_ENDS character)

    We allow any non-ATTRIBUTE_ENDS in attrtext, but add defects to the
    token's defects list if we find non-attrtext characters.  We also register
    defects for *any* non-printables even though the RFC doesn't exclude all of
    them, because we follow the spirit of RFC 5322.

    z expected attrtext but found {!r}Nr)�_non_attribute_end_matcherrrLrQr�r7r5r>�rrYrrrr�get_attrtext�	s	�
r�cCs�t�}|r,|dtvr,t|�\}}|�|�|rL|dtvrLt�d�|���t|�\}}|�|�|r�|dtvr�t|�\}}|�|�||fS)aH [CFWS] 1*attrtext [CFWS]

    This version of the BNF makes the CFWS explicit, and as usual we use a
    value terminal for the actual run of characters.  The RFC equivalent of
    attrtext is the token characters, with the subtraction of '*', "'", and '%'.
    We include tab in the excluded set just as we do for token.

    rr�)	r	r]r^r*�ATTRIBUTE_ENDSrrLrQr��rr
r;rrr�
get_attribute�	s	
�

r�cCsNt|�}|st�d�|���|��}|t|�d�}t|d�}t|�||fS)z�attrtext = 1*(any non-ATTRIBUTE_ENDS character plus '%')

    This is a special parsing routine so that we get a value that
    includes % escapes as a single string (which we decode as a single
    string later).

    z)expected extended attrtext but found {!r}N�extended-attrtext)�#_non_extended_attribute_end_matcherrrLrQr�r7r5r>r�rrr�get_extended_attrtext�	s�
r�cCs�t�}|r,|dtvr,t|�\}}|�|�|rL|dtvrLt�d�|���t|�\}}|�|�|r�|dtvr�t|�\}}|�|�||fS)z� [CFWS] 1*extended_attrtext [CFWS]

    This is like the non-extended version except we allow % characters, so that
    we can pick up an encoded value as a single string.

    rr�)	r	r]r^r*�EXTENDED_ATTRIBUTE_ENDSrrLrQr�r�rrr�get_extended_attribute
s
�

r�cCs�t�}|r|ddkr&t�d�|���|�tdd��|dd�}|rR|d��sbt�d�|���d}|r�|d��r�||d7}|dd�}qf|dd	kr�|d	kr�|j�t�d
��t	|�|_
|�t|d��||fS)a6 '*' digits

    The formal BNF is more complicated because leading 0s are not allowed.  We
    check for that and add a defect.  We also assume no CFWS is allowed between
    the '*' and the digits, though the RFC is not crystal clear on that.
    The caller should already have dealt with leading CFWS.

    r�*zExpected section but found {}zsection-markerr6Nz$Expected section number but found {}r2�0z&section numberhas an invalid leading 0r�)rrrLrQr*r5rMrD�InvalidHeaderErrorrr)rrr�rrr�get_section
s(	��
r�cCs�t�}|st�d��d}|dtvr0t|�\}}|sDt�d�|���|ddkr^t|�\}}nt|�\}}|dur�|g|dd�<|�|�||fS)z  quoted-string / attribute

    z&Expected value but found end of stringNrz Expected value but found only {}r)	rrrLr]r^rQr_r�r*)r�vrer;rrr�	get_value<
s"
�
r�cCs�t�}t|�\}}|�|�|r,|ddkrL|j�t�d�|���||fS|ddkr�z t|�\}}d|_|�|�Wntj	y�Yn0|s�t�	d��|ddkr�|�t
dd��|dd	�}d|_|dd
kr�t�	d��|�t
d
d��|dd	�}d	}|�r*|dtv�r*t
|�\}}|�|�d	}|}|j�rD|�rD|dd
k�rDt|�\}}|j}d}|jdk�r�|�r�|ddk�r�d}n$t|�\}}	|	�r�|	ddk�r�d}n(zt|�\}}	WnYn0|	�s�d}|�r.|j�t�d��|�|�|D](}
|
jdk�r�g|
d	d	�<|
}�q(�q�|}nd	}|j�t�d��|�r^|ddk�r^d	}nt|�\}}|j�r~|jdk�r�|�r�|ddk�r�|�|�|d	u�r�|�r�J|��|}||fS|j�t�d��|�s|j�t�d��|�|�|d	u�r�||fSn�|d	u�rL|D]}
|
jdk�r�q0�q|
jdk|�|
�|
j|_|ddk�rjt�	d�|���|�t
dd��|dd	�}|�r�|ddk�r�t|�\}}|�|�|j|_|�r�|ddk�r�t�	d�|���|�t
dd��|dd	�}|d	u�rJt�}|�rD|dtv�r*t|�\}}nt|�\}}|�|��q|}nt|�\}}|�|�|d	u�r||�rxJ|��|}||fS)aY attribute [section] ["*"] [CFWS] "=" value

    The CFWS is implied by the RFC but not made explicit in the BNF.  This
    simplified form of the BNF from the RFC is made to conform with the RFC BNF
    through some extra checks.  We do it this way because it makes both error
    recovery and working with the resulting parse tree easier.
    rr�z)Parameter contains name ({}) but no valuer�TzIncomplete parameterzextended-parameter-markerr6N�=zParameter not followed by '='�parameter-separatorrF�'z5Quoted string value for extended parameter is invalidr�zZParameter marked as extended but appears to have a quoted string value that is non-encodedzcApparent initial-extended-value but attribute was not marked as extended or was not initial sectionz(Missing required charset/lang delimitersr�rz=Expected RFC2231 char/lang encoding delimiter, but found {!r}zRFC2231 delimiterz;Expected RFC2231 char/lang encoding delimiter, but found {})rr�r*rDrrrQr�rrLr5rr]r^r_r�rr�r�r\r�rrir�rrNrJrV)rrr;rer�ZappendtoZqstringZinner_valueZ
semi_validr.�tr�rrr�
get_parameterR
s�
�



�


�


�
�






�
�


r�c
Cspt�}|�rlzt|�\}}|�|�Wn�tjy�}z�d}|dtvrTt|�\}}|sr|�|�|WYd}~S|ddkr�|dur�|�|�|j�t�d��n@t	|�\}}|r�|g|dd�<|�|�|j�t�d�
|���WYd}~n
d}~00|�rJ|ddk�rJ|d}d|_t	|�\}}|�|�|j�t�d�
|���|r|�t
dd	��|d
d�}q|S)a! parameter *( ";" parameter )

    That BNF is meant to indicate this routine should only be called after
    finding and handling the leading ';'.  There is no corresponding rule in
    the formal RFC grammar, but it is more convenient for us for the set of
    parameters to be treated as its own TokenList.

    This is 'parse' routine because it consumes the reminaing value, but it
    would never be called to parse a full header.  Instead it is called to
    parse everything after the non-parameter value of a specific MIME header.

    Nrr�zparameter entry with no contentzinvalid parameter {!r}r^rz)parameter with invalid trailing text {!r}r�r6)rr�r*rrLr]r^rDrr�rQr\r)r5)rZmime_parametersr;r�rerrrr�parse_mime_parameters�
sJ



�

�

�r�cCs�|rV|ddkrV|dtvr>|�t|dd��|dd�}qt|�\}}|�|�q|s^dS|�tdd��|�t|dd���dS)zBDo our best to find the parameters in an invalid MIME header

    rr�rpr6Nr�)rgr*r5rir�)�	tokenlistrr;rrr�_find_mime_parameterssr�c
Cs�t�}d}|s$|j�t�d��|Szt|�\}}Wn:tjyn|j�t�d�|���t	||�|YS0|�|�|r�|ddkr�|j�t�d��|r�t	||�|S|j
����|_
|�tdd��|dd	�}zt|�\}}Wn<tj�y&|j�t�d
�|���t	||�|YS0|�|�|j
����|_|�sL|S|ddk�r�|j�t�d�|���|`
|`t	||�|S|�tdd
��|�t|dd	���|S)z� maintype "/" subtype *( ";" parameter )

    The maintype and substype are tokens.  Theoretically they could
    be checked against the official IANA list + x-token, but we
    don't do that.
    Fz"Missing content type specificationz(Expected content maintype but found {!r}r�/zInvalid content typezcontent-type-separatorr6Nz'Expected content subtype but found {!r}r�z<Only parameters are valid after content type, but found {!r}r�)r!rDr*rr�r�rLrrQr�rr�lowerr$r5r%r�)r�ctypeZrecoverr;rrr�parse_content_type_header!sd
�
�



�

�



��
r�c
Cs�t�}|s |j�t�d��|Szt|�\}}Wn:tjyjtj�t�d�	|���t
||�|YS0|�|�|j���
�|_|s�|S|ddkr�|j�t�d�	|���t
||�|S|�tdd��|�t|dd���|S)	z* disposition-type *( ";" parameter )

    zMissing content dispositionz+Expected content disposition but found {!r}rr�zCOnly parameters are valid after content disposition, but found {!r}r�r6N)r&rDr*rr�r�rLr�rrQr�rrr�r'r5r�)rZdisp_headerr;rrr� parse_content_disposition_headerZs:
�
�



��
r�c
Cs�t�}|s |j�t�d��|Szt|�\}}Wn,tjy\tj�t�d�	|���Yn0|�|�|j
����|_
|s�|S|r�|j�t�d��|dtvr�|�t|dd��|dd�}q�t|�\}}|�|�q�|S)z mechanism

    z!Missing content transfer encodingz1Expected content trnasfer encoding but found {!r}z*Extra text after content transfer encodingrrpr6N)r(rDr*rr�r�rLr�rrQrrr�r�rgr5ri)rZ
cte_headerr;rrr�&parse_content_transfer_encoding_headerxs4
�
�

�r�)��__doc__�
__future__rrrrZfuture.builtinsrrrr	r
�re�collectionsrrZfuture.backports.urllib.parser
rZfuture.backports.emailrrgrrr�rNr]rdr`r�rgZ	TSPECIALSr�Z	ASPECIALSr�r�r�objectrrCr�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�rrr	rrrrr!r&r(r*r+r/r4r5r6r�rzr{�compilerQr3r�r?r�matchrX�findallr:r�r�r�r>rErIrJrQr�rUrVrZr[r\r^r_rarbrcrfrirorlrqrtrvrwrxr|rr�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�r�rrrr�<module>s$DTDN_		 '#
B3


���
���
,7
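The routines above feed the header registry machinery. As a usage sketch (assuming only the standard library, whose `email.headerregistry` this backport mirrors), structured headers can be parsed through the same pipeline:

```python
from email.headerregistry import HeaderRegistry

# Parse an address-list header into structured Address objects.
to = HeaderRegistry()('to', 'Jane Doe <jane@example.com>, bob@example.org')
assert [a.addr_spec for a in to.addresses] == ['jane@example.com', 'bob@example.org']

# Parse a Content-Type header into maintype/subtype plus MIME parameters.
ctype = HeaderRegistry()('content-type', 'text/plain; charset="utf-8"')
assert (ctype.maintype, ctype.subtype) == ('text', 'plain')
assert ctype.params['charset'] == 'utf-8'
```

Calling the registry with a header name and raw value dispatches to the appropriate parser (`get_address_list`, `parse_content_type_header`, etc.) and returns a header object carrying both the parsed attributes and any defects found.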
# future/backports/email/quoprimime.py
# Copyright (C) 2001-2006 Python Software Foundation
# Author: Ben Gertzfield
# Contact: email-sig@python.org

"""Quoted-printable content transfer encoding per RFCs 2045-2047.

This module handles the content transfer encoding method defined in RFC 2045
to encode US ASCII-like 8-bit data called `quoted-printable'.  It is used to
safely encode text that is in a character set similar to the 7-bit US ASCII
character set, but that includes some 8-bit characters that are normally not
allowed in email bodies or headers.

Quoted-printable is very space-inefficient for encoding binary files; use the
email.base64mime module for that instead.

This module provides an interface to encode and decode both headers and bodies
with quoted-printable encoding.

RFC 2045 defines a method for including character set information in an
`encoded-word' in a header.  This method is commonly used for 8-bit real names
in To:/From:/Cc: etc. fields, as well as Subject: lines.

This module does not do the line wrapping or end-of-line character
conversion necessary for proper internationalized headers; it only
does dumb encoding and decoding.  To deal with the various line
wrapping issues, use the email.header module.
"""
from __future__ import unicode_literals
from __future__ import division
from __future__ import absolute_import
from future.builtins import bytes, chr, dict, int, range, super

__all__ = [
    'body_decode',
    'body_encode',
    'body_length',
    'decode',
    'decodestring',
    'header_decode',
    'header_encode',
    'header_length',
    'quote',
    'unquote',
    ]

import re
import io

from string import ascii_letters, digits, hexdigits

CRLF = '\r\n'
NL = '\n'
EMPTYSTRING = ''

# Build a mapping of octets to the expansion of that octet.  Since we're only
# going to have 256 of these things, this isn't terribly inefficient
# space-wise.  Remember that headers and bodies have different sets of safe
# characters.  Initialize both maps with the full expansion, and then override
# the safe bytes with the more compact form.
_QUOPRI_HEADER_MAP = dict((c, '=%02X' % c) for c in range(256))
_QUOPRI_BODY_MAP = _QUOPRI_HEADER_MAP.copy()

# Safe header bytes which need no encoding.
for c in bytes(b'-!*+/' + ascii_letters.encode('ascii') + digits.encode('ascii')):
    _QUOPRI_HEADER_MAP[c] = chr(c)
# Headers have one other special encoding; spaces become underscores.
_QUOPRI_HEADER_MAP[ord(' ')] = '_'

# Safe body bytes which need no encoding.
for c in bytes(b' !"#$%&\'()*+,-./0123456789:;<>'
               b'?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`'
               b'abcdefghijklmnopqrstuvwxyz{|}~\t'):
    _QUOPRI_BODY_MAP[c] = chr(c)



# Helpers
def header_check(octet):
    """Return True if the octet should be escaped with header quopri."""
    return chr(octet) != _QUOPRI_HEADER_MAP[octet]


def body_check(octet):
    """Return True if the octet should be escaped with body quopri."""
    return chr(octet) != _QUOPRI_BODY_MAP[octet]


def header_length(bytearray):
    """Return a header quoted-printable encoding length.

    Note that this does not include any RFC 2047 chrome added by
    `header_encode()`.

    :param bytearray: An array of bytes (a.k.a. octets).
    :return: The length in bytes of the byte array when it is encoded with
        quoted-printable for headers.
    """
    return sum(len(_QUOPRI_HEADER_MAP[octet]) for octet in bytearray)


def body_length(bytearray):
    """Return a body quoted-printable encoding length.

    :param bytearray: An array of bytes (a.k.a. octets).
    :return: The length in bytes of the byte array when it is encoded with
        quoted-printable for bodies.
    """
    return sum(len(_QUOPRI_BODY_MAP[octet]) for octet in bytearray)


def _max_append(L, s, maxlen, extra=''):
    if not isinstance(s, str):
        s = chr(s)
    if not L:
        L.append(s.lstrip())
    elif len(L[-1]) + len(s) <= maxlen:
        L[-1] += extra + s
    else:
        L.append(s.lstrip())


def unquote(s):
    """Turn a string in the form =AB to the ASCII character with value 0xab"""
    return chr(int(s[1:3], 16))


def quote(c):
    return '=%02X' % ord(c)



def header_encode(header_bytes, charset='iso-8859-1'):
    """Encode a single header line with quoted-printable (like) encoding.

    Defined in RFC 2045, this `Q' encoding is similar to quoted-printable, but
    used specifically for email header fields to allow charsets with mostly 7
    bit characters (and some 8 bit) to remain more or less readable in non-RFC
    2045 aware mail clients.

    charset names the character set to use in the RFC 2046 header.  It
    defaults to iso-8859-1.
    """
    # Return empty headers as an empty string.
    if not header_bytes:
        return ''
    # Iterate over every byte, encoding if necessary.
    encoded = []
    for octet in header_bytes:
        encoded.append(_QUOPRI_HEADER_MAP[octet])
    # Now add the RFC chrome to each encoded chunk and glue the chunks
    # together.
    return '=?%s?q?%s?=' % (charset, EMPTYSTRING.join(encoded))


class _body_accumulator(io.StringIO):

    def __init__(self, maxlinelen, eol, *args, **kw):
        super().__init__(*args, **kw)
        self.eol = eol
        self.maxlinelen = self.room = maxlinelen

    def write_str(self, s):
        """Add string s to the accumulated body."""
        self.write(s)
        self.room -= len(s)

    def newline(self):
        """Write eol, then start new line."""
        self.write_str(self.eol)
        self.room = self.maxlinelen

    def write_soft_break(self):
        """Write a soft break, then start a new line."""
        self.write_str('=')
        self.newline()

    def write_wrapped(self, s, extra_room=0):
        """Add a soft line break if needed, then write s."""
        if self.room < len(s) + extra_room:
            self.write_soft_break()
        self.write_str(s)

    def write_char(self, c, is_last_char):
        if not is_last_char:
            # Another character follows on this line, so we must leave
            # extra room, either for it or a soft break, and whitespace
            # need not be quoted.
            self.write_wrapped(c, extra_room=1)
        elif c not in ' \t':
            # For this and remaining cases, no more characters follow,
            # so there is no need to reserve extra room (since a hard
            # break will immediately follow).
            self.write_wrapped(c)
        elif self.room >= 3:
            # It's a whitespace character at end-of-line, and we have room
            # for the three-character quoted encoding.
            self.write(quote(c))
        elif self.room == 2:
            # There's room for the whitespace character and a soft break.
            self.write(c)
            self.write_soft_break()
        else:
            # There's room only for a soft break.  The quoted whitespace
            # will be the only content on the subsequent line.
            self.write_soft_break()
            self.write(quote(c))


def body_encode(body, maxlinelen=76, eol=NL):
    """Encode with quoted-printable, wrapping at maxlinelen characters.

    Each line of encoded text will end with eol, which defaults to "\\n".  Set
    this to "\\r\\n" if you will be using the result of this function directly
    in an email.

    Each line will be wrapped at, at most, maxlinelen characters before the
    eol string (maxlinelen defaults to 76 characters, the maximum value
    permitted by RFC 2045).  Long lines will have the 'soft line break'
    quoted-printable character "=" appended to them, so the decoded text will
    be identical to the original text.

    The minimum maxlinelen is 4 to have room for a quoted character ("=XX")
    followed by a soft line break.  Smaller values will generate a
    ValueError.

    """

    if maxlinelen < 4:
        raise ValueError("maxlinelen must be at least 4")
    if not body:
        return body

    # The last line may or may not end in eol, but all other lines do.
    last_has_eol = (body[-1] in '\r\n')

    # This accumulator will make it easier to build the encoded body.
    encoded_body = _body_accumulator(maxlinelen, eol)

    lines = body.splitlines()
    last_line_no = len(lines) - 1
    for line_no, line in enumerate(lines):
        last_char_index = len(line) - 1
        for i, c in enumerate(line):
            if body_check(ord(c)):
                c = quote(c)
            encoded_body.write_char(c, i==last_char_index)
        # Add an eol if input line had eol.  All input lines have eol except
        # possibly the last one.
        if line_no < last_line_no or last_has_eol:
            encoded_body.newline()

    return encoded_body.getvalue()
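As a quick illustration of the body encoding implemented above, the stdlib `quopri` module applies the same RFC 2045 rules (shown here because it runs without the `future` package installed):

```python
import quopri

# Non-ASCII bytes become =XX escapes; everything round-trips through decode.
raw = b'caf\xe9, 100% natural\n'
encoded = quopri.encodestring(raw)
assert b'=E9' in encoded              # 0xE9 is quoted as '=E9'
assert quopri.decodestring(encoded) == raw
```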



# BAW: I'm not sure if the intent was for the signature of this function to be
# the same as base64MIME.decode() or not...
def decode(encoded, eol=NL):
    """Decode a quoted-printable string.

    Lines are separated with eol, which defaults to \\n.
    """
    if not encoded:
        return encoded
    # BAW: see comment in encode() above.  Again, we're building up the
    # decoded string with string concatenation, which could be done much more
    # efficiently.
    decoded = ''

    for line in encoded.splitlines():
        line = line.rstrip()
        if not line:
            decoded += eol
            continue

        i = 0
        n = len(line)
        while i < n:
            c = line[i]
            if c != '=':
                decoded += c
                i += 1
            # Otherwise, c == "=".  Are we at the end of the line?  If so, add
            # a soft line break.
            elif i+1 == n:
                i += 1
                continue
            # Decode if in form =AB
            elif i+2 < n and line[i+1] in hexdigits and line[i+2] in hexdigits:
                decoded += unquote(line[i:i+3])
                i += 3
            # Otherwise, not in form =AB, pass literally
            else:
                decoded += c
                i += 1

            if i == n:
                decoded += eol
    # Special case if original string did not end with eol
    if encoded[-1] not in '\r\n' and decoded.endswith(eol):
        decoded = decoded[:-1]
    return decoded


# For convenience and backwards compatibility w/ standard base64 module
body_decode = decode
decodestring = decode
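The two decoding rules handled above can be checked against the stdlib `quopri` module, which implements the same RFC 2045 semantics: `=XX` expands to the byte 0xXX, and a bare `=` at end of line is a soft break joining the lines.

```python
import quopri

# '=2C' decodes to ',' and the trailing '=' splices the two lines together.
assert quopri.decodestring(b'Hello=2C=\nWorld!') == b'Hello,World!'
```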



def _unquote_match(match):
    """Turn a match in the form =AB to the ASCII character with value 0xab"""
    s = match.group(0)
    return unquote(s)


# Header decoding is done a bit differently
def header_decode(s):
    """Decode a string encoded with RFC 2045 MIME header `Q' encoding.

    This function does not parse a full MIME header value encoded with
    quoted-printable (like =?iso-8859-1?q?Hello_World?=) -- please use
    the high level email.header class for that functionality.
    """
    s = s.replace('_', ' ')
    # Pass re.ASCII as a keyword argument: the fourth positional argument of
    # re.sub is 'count', so passing the flag positionally would silently be
    # treated as a replacement count instead of a flag.
    return re.sub(r'=[a-fA-F0-9]{2}', _unquote_match, s, flags=re.ASCII)
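A minimal self-contained sketch of the header 'Q' decoding rules above (underscore means space, `=XX` is a hex escape), reimplemented inline so it runs without the `future` package:

```python
import re

def demo_header_decode(s):
    # Same two steps as header_decode() above.
    s = s.replace('_', ' ')
    return re.sub(r'=[a-fA-F0-9]{2}',
                  lambda m: chr(int(m.group(0)[1:3], 16)), s, flags=re.ASCII)

assert demo_header_decode('Hello_World=21') == 'Hello World!'
```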
# future/backports/email/parser.py
# Copyright (C) 2001-2007 Python Software Foundation
# Author: Barry Warsaw, Thomas Wouters, Anthony Baxter
# Contact: email-sig@python.org

"""A parser of RFC 2822 and MIME email messages."""
from __future__ import unicode_literals
from __future__ import division
from __future__ import absolute_import

__all__ = ['Parser', 'HeaderParser', 'BytesParser', 'BytesHeaderParser']

import warnings
from io import StringIO, TextIOWrapper

from future.backports.email.feedparser import FeedParser, BytesFeedParser
from future.backports.email.message import Message
from future.backports.email._policybase import compat32


class Parser(object):
    def __init__(self, _class=Message, **_3to2kwargs):
        """Parser of RFC 2822 and MIME email messages.

        Creates an in-memory object tree representing the email message, which
        can then be manipulated and turned over to a Generator to return the
        textual representation of the message.

        The string must be formatted as a block of RFC 2822 headers and header
        continuation lines, optionally preceded by a `Unix-from' header.  The
        header block is terminated either by the end of the string or by a
        blank line.

        _class is the class to instantiate for new message objects when they
        must be created.  This class must have a constructor that can take
        zero arguments.  Default is Message.Message.

        The policy keyword specifies a policy object that controls a number of
        aspects of the parser's operation.  The default policy maintains
        backward compatibility.

        """
        if 'policy' in _3to2kwargs: policy = _3to2kwargs['policy']; del _3to2kwargs['policy']
        else: policy = compat32
        self._class = _class
        self.policy = policy

    def parse(self, fp, headersonly=False):
        """Create a message structure from the data in a file.

        Reads all the data from the file and returns the root of the message
        structure.  Optional headersonly is a flag specifying whether to stop
        parsing after reading the headers or not.  The default is False,
        meaning it parses the entire contents of the file.
        """
        feedparser = FeedParser(self._class, policy=self.policy)
        if headersonly:
            feedparser._set_headersonly()
        while True:
            data = fp.read(8192)
            if not data:
                break
            feedparser.feed(data)
        return feedparser.close()

    def parsestr(self, text, headersonly=False):
        """Create a message structure from a string.

        Returns the root of the message structure.  Optional headersonly is a
        flag specifying whether to stop parsing after reading the headers or
        not.  The default is False, meaning it parses the entire contents of
        the file.
        """
        return self.parse(StringIO(text), headersonly=headersonly)
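Usage sketch: the stdlib `email.parser` exposes the same `Parser` API as this backport, so the behavior can be exercised on any Python 3 installation (aliased here to avoid clashing with the class defined above):

```python
from email.parser import Parser as StdParser

msg = StdParser().parsestr('From: a@example.com\nSubject: hi\n\nbody\n')
assert msg['Subject'] == 'hi'
assert msg.get_payload() == 'body\n'
```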



class HeaderParser(Parser):
    def parse(self, fp, headersonly=True):
        return Parser.parse(self, fp, True)

    def parsestr(self, text, headersonly=True):
        return Parser.parsestr(self, text, True)


class BytesParser(object):

    def __init__(self, *args, **kw):
        """Parser of binary RFC 2822 and MIME email messages.

        Creates an in-memory object tree representing the email message, which
        can then be manipulated and turned over to a Generator to return the
        textual representation of the message.

        The input must be formatted as a block of RFC 2822 headers and header
        continuation lines, optionally preceded by a `Unix-from' header.  The
        header block is terminated either by the end of the input or by a
        blank line.

        _class is the class to instantiate for new message objects when they
        must be created.  This class must have a constructor that can take
        zero arguments.  Default is Message.Message.
        """
        self.parser = Parser(*args, **kw)

    def parse(self, fp, headersonly=False):
        """Create a message structure from the data in a binary file.

        Reads all the data from the file and returns the root of the message
        structure.  Optional headersonly is a flag specifying whether to stop
        parsing after reading the headers or not.  The default is False,
        meaning it parses the entire contents of the file.
        """
        fp = TextIOWrapper(fp, encoding='ascii', errors='surrogateescape')
        with fp:
            return self.parser.parse(fp, headersonly)


    def parsebytes(self, text, headersonly=False):
        """Create a message structure from a byte string.

        Returns the root of the message structure.  Optional headersonly is a
        flag specifying whether to stop parsing after reading the headers or
        not.  The default is False, meaning it parses the entire contents of
        the file.
        """
        text = text.decode('ASCII', errors='surrogateescape')
        return self.parser.parsestr(text, headersonly)
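The bytes interface works the same way; a sketch using the stdlib `email.parser.BytesParser`, which this class mirrors (ASCII decoding with `surrogateescape`, then the text parser):

```python
from email.parser import BytesParser as StdBytesParser

msg = StdBytesParser().parsebytes(b'From: a@example.com\n\nhi\n')
assert msg['From'] == 'a@example.com'
assert msg.get_payload() == 'hi\n'
```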


class BytesHeaderParser(BytesParser):
    def parse(self, fp, headersonly=True):
        return BytesParser.parse(self, fp, headersonly=True)

    def parsebytes(self, text, headersonly=True):
        return BytesParser.parsebytes(self, text, headersonly=True)
# future/backports/email/headerregistry.py
"""Representing and manipulating email headers via custom objects.

This module provides an implementation of the HeaderRegistry API.
The implementation is designed to flexibly follow RFC5322 rules.

Eventually HeaderRegistry will be a public API, but it isn't yet,
and will probably change some before that happens.

"""
from __future__ import unicode_literals
from __future__ import division
from __future__ import absolute_import

from future.builtins import super
from future.builtins import str
from future.utils import text_to_native_str
from future.backports.email import utils
from future.backports.email import errors
from future.backports.email import _header_value_parser as parser

class Address(object):

    def __init__(self, display_name='', username='', domain='', addr_spec=None):
        """Create an object represeting a full email address.

        An address can have a 'display_name', a 'username', and a 'domain'.  In
        addition to specifying the username and domain separately, they may be
        specified together by using the addr_spec keyword *instead of* the
        username and domain keywords.  If an addr_spec string is specified it
        must be properly quoted according to RFC 5322 rules; an error will be
        raised if it is not.

        An Address object has display_name, username, domain, and addr_spec
        attributes, all of which are read-only.  The addr_spec and the string
        value of the object are both quoted according to RFC5322 rules, but
        without any Content Transfer Encoding.

        """
        # This clause with its potential 'raise' may only happen when an
        # application program creates an Address object using an addr_spec
        # keyword.  The email library code itself must always supply username
        # and domain.
        if addr_spec is not None:
            if username or domain:
                raise TypeError("addrspec specified when username and/or "
                                "domain also specified")
            a_s, rest = parser.get_addr_spec(addr_spec)
            if rest:
                raise ValueError("Invalid addr_spec; only '{}' "
                                 "could be parsed from '{}'".format(
                                    a_s, addr_spec))
            if a_s.all_defects:
                raise a_s.all_defects[0]
            username = a_s.local_part
            domain = a_s.domain
        self._display_name = display_name
        self._username = username
        self._domain = domain

    @property
    def display_name(self):
        return self._display_name

    @property
    def username(self):
        return self._username

    @property
    def domain(self):
        return self._domain

    @property
    def addr_spec(self):
        """The addr_spec (username@domain) portion of the address, quoted
        according to RFC 5322 rules, but with no Content Transfer Encoding.
        """
        nameset = set(self.username)
        if len(nameset) > len(nameset-parser.DOT_ATOM_ENDS):
            lp = parser.quote_string(self.username)
        else:
            lp = self.username
        if self.domain:
            return lp + '@' + self.domain
        if not lp:
            return '<>'
        return lp

    def __repr__(self):
        return "Address(display_name={!r}, username={!r}, domain={!r})".format(
                        self.display_name, self.username, self.domain)

    def __str__(self):
        nameset = set(self.display_name)
        if len(nameset) > len(nameset-parser.SPECIALS):
            disp = parser.quote_string(self.display_name)
        else:
            disp = self.display_name
        if disp:
            addr_spec = '' if self.addr_spec=='<>' else self.addr_spec
            return "{} <{}>".format(disp, addr_spec)
        return self.addr_spec

    def __eq__(self, other):
        if type(other) != type(self):
            return False
        return (self.display_name == other.display_name and
                self.username == other.username and
                self.domain == other.domain)
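Usage sketch via the stdlib `email.headerregistry.Address`, which provides the same API as the class above (aliased to avoid shadowing it):

```python
from email.headerregistry import Address as StdAddress

a = StdAddress(display_name='Jane Doe', username='jane', domain='example.com')
assert a.addr_spec == 'jane@example.com'
# No specials in the display name, so it is emitted unquoted.
assert str(a) == 'Jane Doe <jane@example.com>'
```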


class Group(object):

    def __init__(self, display_name=None, addresses=None):
        """Create an object representing an address group.

        An address group consists of a display_name followed by a colon and a
        list of addresses (see Address) terminated by a semi-colon.  The Group
        is created by specifying a display_name and a possibly empty list of
        Address objects.  A Group can also be used to represent a single
        address that is not in a group, which is convenient when manipulating
        lists that are a combination of Groups and individual Addresses.  In
        this case the display_name should be set to None.  In particular, the
        string representation of a Group whose display_name is None is the same
        as the Address object, if there is one and only one Address object in
        the addresses list.

        """
        self._display_name = display_name
        self._addresses = tuple(addresses) if addresses else tuple()

    @property
    def display_name(self):
        return self._display_name

    @property
    def addresses(self):
        return self._addresses

    def __repr__(self):
        return "Group(display_name={!r}, addresses={!r})".format(
                 self.display_name, self.addresses)

    def __str__(self):
        if self.display_name is None and len(self.addresses)==1:
            return str(self.addresses[0])
        disp = self.display_name
        if disp is not None:
            nameset = set(disp)
            if len(nameset) > len(nameset-parser.SPECIALS):
                disp = parser.quote_string(disp)
        adrstr = ", ".join(str(x) for x in self.addresses)
        adrstr = ' ' + adrstr if adrstr else adrstr
        return "{}:{};".format(disp, adrstr)

    def __eq__(self, other):
        if type(other) != type(self):
            return False
        return (self.display_name == other.display_name and
                self.addresses == other.addresses)
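The three rendering cases described in the docstring (a named group, a bare single address when display_name is None, and an empty group) can be sketched with the stdlib `email.headerregistry`, which this backport mirrors:

```python
# A Group renders as 'display_name: addr, ...;'; a display_name of None
# with exactly one address renders as that bare address.
from email.headerregistry import Address, Group

fred = Address(display_name='Fred', username='fred', domain='example.com')
team = Group('team', [fred])

print(str(team))                  # named group, semicolon-terminated
print(str(Group(None, [fred])))   # bare address form
print(str(Group('empty')))        # group with no addresses
```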


# Header Classes #

class BaseHeader(str):

    """Base class for message headers.

    Implements generic behavior and provides tools for subclasses.

    A subclass must define a classmethod named 'parse' that takes an unfolded
    value string and a dictionary as its arguments.  The dictionary will
    contain one key, 'defects', initialized to an empty list.  After the call
    the dictionary must contain two additional keys: 'parse_tree', set to the
    parse tree obtained from parsing the header, and 'decoded', set to the
    string value of the idealized representation of the data from the value.
    (That is, encoded words are decoded, and values that have canonical
    representations are so represented.)

    The defects key is intended to collect parsing defects, which the message
    parser will subsequently dispose of as appropriate.  The parser should not,
    insofar as practical, raise any errors.  Defects should be added to the
    list instead.  The standard header parsers register defects for RFC
    compliance issues, for obsolete RFC syntax, and for unrecoverable parsing
    errors.

    The parse method may add additional keys to the dictionary.  In this case
    the subclass must define an 'init' method, which will be passed the
    dictionary as its keyword arguments.  The method should use (usually by
    setting them as the value of similarly named attributes) and remove all the
    extra keys added by its parse method, and then use super to call its parent
    class with the remaining arguments and keywords.

    The subclass should also make sure that a 'max_count' attribute is defined
    that is either None or 1. XXX: need to better define this API.

    """

    def __new__(cls, name, value):
        kwds = {'defects': []}
        cls.parse(value, kwds)
        if utils._has_surrogates(kwds['decoded']):
            kwds['decoded'] = utils._sanitize(kwds['decoded'])
        self = str.__new__(cls, kwds['decoded'])
        # del kwds['decoded']
        self.init(name, **kwds)
        return self

    def init(self, name, **_3to2kwargs):
        defects = _3to2kwargs['defects']; del _3to2kwargs['defects']
        parse_tree = _3to2kwargs['parse_tree']; del _3to2kwargs['parse_tree']
        self._name = name
        self._parse_tree = parse_tree
        self._defects = defects

    @property
    def name(self):
        return self._name

    @property
    def defects(self):
        return tuple(self._defects)

    def __reduce__(self):
        return (
            _reconstruct_header,
            (
                self.__class__.__name__,
                self.__class__.__bases__,
                str(self),
            ),
            self.__dict__)

    @classmethod
    def _reconstruct(cls, value):
        return str.__new__(cls, value)

    def fold(self, **_3to2kwargs):
        policy = _3to2kwargs['policy']; del _3to2kwargs['policy']
        """Fold header according to policy.

        The parsed representation of the header is folded according to
        RFC5322 rules, as modified by the policy.  If the parse tree
        contains surrogateescaped bytes, the bytes are CTE encoded using
        the charset 'unknown-8bit'.

        Any non-ASCII characters in the parse tree are CTE encoded using
        charset utf-8. XXX: make this a policy setting.

        The returned value is an ASCII-only string possibly containing linesep
        characters, and ending with a linesep character.  The string includes
        the header name and the ': ' separator.

        """
        # At some point we need to only put fws here if it was in the source.
        header = parser.Header([
            parser.HeaderLabel([
                parser.ValueTerminal(self.name, 'header-name'),
                parser.ValueTerminal(':', 'header-sep')]),
            parser.CFWSList([parser.WhiteSpaceTerminal(' ', 'fws')]),
                             self._parse_tree])
        return header.fold(policy=policy)
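`fold` is normally reached through a policy's header factory rather than called on a hand-built header. A minimal sketch using the stdlib equivalents (`email.policy.default` constructs BaseHeader subclasses like the ones defined here; behavior is assumed to match this backport):

```python
# Folding a header produced by a header factory: the result is an
# ASCII-only string including the name, ': ', and a trailing linesep.
from email.policy import default

h = default.header_factory('Subject', 'Hello there')
folded = h.fold(policy=default)
print(repr(folded))  # includes 'Subject: ' and ends with '\n'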


def _reconstruct_header(cls_name, bases, value):
    return type(text_to_native_str(cls_name), bases, {})._reconstruct(value)


class UnstructuredHeader(object):

    max_count = None
    value_parser = staticmethod(parser.get_unstructured)

    @classmethod
    def parse(cls, value, kwds):
        kwds['parse_tree'] = cls.value_parser(value)
        kwds['decoded'] = str(kwds['parse_tree'])


class UniqueUnstructuredHeader(UnstructuredHeader):

    max_count = 1


class DateHeader(object):

    """Header whose value consists of a single timestamp.

    Provides an additional attribute, datetime, which is either an aware
    datetime using a timezone, or a naive datetime if the timezone
    in the input string is -0000.  Also accepts a datetime as input.
    The 'value' attribute is the normalized form of the timestamp,
    which means it is the output of format_datetime on the datetime.
    """

    max_count = None

    # This is used only for folding, not for creating 'decoded'.
    value_parser = staticmethod(parser.get_unstructured)

    @classmethod
    def parse(cls, value, kwds):
        if not value:
            kwds['defects'].append(errors.HeaderMissingRequiredValue())
            kwds['datetime'] = None
            kwds['decoded'] = ''
            kwds['parse_tree'] = parser.TokenList()
            return
        if isinstance(value, str):
            value = utils.parsedate_to_datetime(value)
        kwds['datetime'] = value
        kwds['decoded'] = utils.format_datetime(kwds['datetime'])
        kwds['parse_tree'] = cls.value_parser(kwds['decoded'])

    def init(self, *args, **kw):
        self._datetime = kw.pop('datetime')
        super().init(*args, **kw)

    @property
    def datetime(self):
        return self._datetime


class UniqueDateHeader(DateHeader):

    max_count = 1


class AddressHeader(object):

    max_count = None

    @staticmethod
    def value_parser(value):
        address_list, value = parser.get_address_list(value)
        assert not value, 'this should not happen'
        return address_list

    @classmethod
    def parse(cls, value, kwds):
        if isinstance(value, str):
            # We are translating here from the RFC language (address/mailbox)
            # to our API language (group/address).
            kwds['parse_tree'] = address_list = cls.value_parser(value)
            groups = []
            for addr in address_list.addresses:
                groups.append(Group(addr.display_name,
                                    [Address(mb.display_name or '',
                                             mb.local_part or '',
                                             mb.domain or '')
                                     for mb in addr.all_mailboxes]))
            defects = list(address_list.all_defects)
        else:
            # Assume it is Address/Group stuff
            if not hasattr(value, '__iter__'):
                value = [value]
            groups = [Group(None, [item]) if not hasattr(item, 'addresses')
                                          else item
                                    for item in value]
            defects = []
        kwds['groups'] = groups
        kwds['defects'] = defects
        kwds['decoded'] = ', '.join([str(item) for item in groups])
        if 'parse_tree' not in kwds:
            kwds['parse_tree'] = cls.value_parser(kwds['decoded'])

    def init(self, *args, **kw):
        self._groups = tuple(kw.pop('groups'))
        self._addresses = None
        super().init(*args, **kw)

    @property
    def groups(self):
        return self._groups

    @property
    def addresses(self):
        if self._addresses is None:
            self._addresses = tuple([address for group in self._groups
                                             for address in group.addresses])
        return self._addresses


class UniqueAddressHeader(AddressHeader):

    max_count = 1


class SingleAddressHeader(AddressHeader):

    @property
    def address(self):
        if len(self.addresses)!=1:
            raise ValueError(("value of single address header {} is not "
                "a single address").format(self.name))
        return self.addresses[0]


class UniqueSingleAddressHeader(SingleAddressHeader):

    max_count = 1


class MIMEVersionHeader(object):

    max_count = 1

    value_parser = staticmethod(parser.parse_mime_version)

    @classmethod
    def parse(cls, value, kwds):
        kwds['parse_tree'] = parse_tree = cls.value_parser(value)
        kwds['decoded'] = str(parse_tree)
        kwds['defects'].extend(parse_tree.all_defects)
        kwds['major'] = None if parse_tree.minor is None else parse_tree.major
        kwds['minor'] = parse_tree.minor
        if parse_tree.minor is not None:
            kwds['version'] = '{}.{}'.format(kwds['major'], kwds['minor'])
        else:
            kwds['version'] = None

    def init(self, *args, **kw):
        self._version = kw.pop('version')
        self._major = kw.pop('major')
        self._minor = kw.pop('minor')
        super().init(*args, **kw)

    @property
    def major(self):
        return self._major

    @property
    def minor(self):
        return self._minor

    @property
    def version(self):
        return self._version


class ParameterizedMIMEHeader(object):

    # Mixin that handles the params dict.  Must be subclassed and
    # a property value_parser for the specific header provided.

    max_count = 1

    @classmethod
    def parse(cls, value, kwds):
        kwds['parse_tree'] = parse_tree = cls.value_parser(value)
        kwds['decoded'] = str(parse_tree)
        kwds['defects'].extend(parse_tree.all_defects)
        if parse_tree.params is None:
            kwds['params'] = {}
        else:
            # The MIME RFCs specify that parameter ordering is arbitrary.
            kwds['params'] = dict((utils._sanitize(name).lower(),
                                   utils._sanitize(value))
                                  for name, value in parse_tree.params)

    def init(self, *args, **kw):
        self._params = kw.pop('params')
        super().init(*args, **kw)

    @property
    def params(self):
        return self._params.copy()


class ContentTypeHeader(ParameterizedMIMEHeader):

    value_parser = staticmethod(parser.parse_content_type_header)

    def init(self, *args, **kw):
        super().init(*args, **kw)
        self._maintype = utils._sanitize(self._parse_tree.maintype)
        self._subtype = utils._sanitize(self._parse_tree.subtype)

    @property
    def maintype(self):
        return self._maintype

    @property
    def subtype(self):
        return self._subtype

    @property
    def content_type(self):
        return self.maintype + '/' + self.subtype


class ContentDispositionHeader(ParameterizedMIMEHeader):

    value_parser = staticmethod(parser.parse_content_disposition_header)

    def init(self, *args, **kw):
        super().init(*args, **kw)
        cd = self._parse_tree.content_disposition
        self._content_disposition = cd if cd is None else utils._sanitize(cd)

    @property
    def content_disposition(self):
        return self._content_disposition


class ContentTransferEncodingHeader(object):

    max_count = 1

    value_parser = staticmethod(parser.parse_content_transfer_encoding_header)

    @classmethod
    def parse(cls, value, kwds):
        kwds['parse_tree'] = parse_tree = cls.value_parser(value)
        kwds['decoded'] = str(parse_tree)
        kwds['defects'].extend(parse_tree.all_defects)

    def init(self, *args, **kw):
        super().init(*args, **kw)
        self._cte = utils._sanitize(self._parse_tree.cte)

    @property
    def cte(self):
        return self._cte


# The header factory #

_default_header_map = {
    'subject':                      UniqueUnstructuredHeader,
    'date':                         UniqueDateHeader,
    'resent-date':                  DateHeader,
    'orig-date':                    UniqueDateHeader,
    'sender':                       UniqueSingleAddressHeader,
    'resent-sender':                SingleAddressHeader,
    'to':                           UniqueAddressHeader,
    'resent-to':                    AddressHeader,
    'cc':                           UniqueAddressHeader,
    'resent-cc':                    AddressHeader,
    'bcc':                          UniqueAddressHeader,
    'resent-bcc':                   AddressHeader,
    'from':                         UniqueAddressHeader,
    'resent-from':                  AddressHeader,
    'reply-to':                     UniqueAddressHeader,
    'mime-version':                 MIMEVersionHeader,
    'content-type':                 ContentTypeHeader,
    'content-disposition':          ContentDispositionHeader,
    'content-transfer-encoding':    ContentTransferEncodingHeader,
    }

class HeaderRegistry(object):

    """A header_factory and header registry."""

    def __init__(self, base_class=BaseHeader, default_class=UnstructuredHeader,
                       use_default_map=True):
        """Create a header_factory that works with the Policy API.

        base_class is the class that will be the last class in the created
        header class's __bases__ list.  default_class is the class that will be
        used if "name" (see __call__) does not appear in the registry.
        use_default_map controls whether or not the default mapping of names to
        specialized classes is copied in to the registry when the factory is
        created.  The default is True.

        """
        self.registry = {}
        self.base_class = base_class
        self.default_class = default_class
        if use_default_map:
            self.registry.update(_default_header_map)

    def map_to_type(self, name, cls):
        """Register cls as the specialized class for handling "name" headers.

        """
        self.registry[name.lower()] = cls

    def __getitem__(self, name):
        cls = self.registry.get(name.lower(), self.default_class)
        return type(text_to_native_str('_'+cls.__name__), (cls, self.base_class), {})

    def __call__(self, name, value):
        """Create a header instance for header 'name' from 'value'.

        Creates a header instance by creating a specialized class for parsing
        and representing the specified header by combining the factory
        base_class with a specialized class from the registry or the
        default_class, and passing the name and value to the constructed
        class's constructor.

        """
        return self[name](name, value)
PK�Du\ӌ��_�_ future/backports/email/header.pynu�[���# Copyright (C) 2002-2007 Python Software Foundation
# Author: Ben Gertzfield, Barry Warsaw
# Contact: email-sig@python.org

"""Header encoding and decoding functionality."""
from __future__ import unicode_literals
from __future__ import division
from __future__ import absolute_import
from future.builtins import bytes, range, str, super, zip

__all__ = [
    'Header',
    'decode_header',
    'make_header',
    ]

import re
import binascii

from future.backports import email
from future.backports.email import base64mime
from future.backports.email.errors import HeaderParseError
import future.backports.email.charset as _charset

# Helpers
from future.backports.email.quoprimime import _max_append, header_decode

Charset = _charset.Charset

NL = '\n'
SPACE = ' '
BSPACE = b' '
SPACE8 = ' ' * 8
EMPTYSTRING = ''
MAXLINELEN = 78
FWS = ' \t'

USASCII = Charset('us-ascii')
UTF8 = Charset('utf-8')

# Match encoded-word strings in the form =?charset?q?Hello_World?=
ecre = re.compile(r'''
  =\?                   # literal =?
  (?P<charset>[^?]*?)   # non-greedy up to the next ? is the charset
  \?                    # literal ?
  (?P<encoding>[qb])    # either a "q" or a "b", case insensitive
  \?                    # literal ?
  (?P<encoded>.*?)      # non-greedy up to the next ?= is the encoded string
  \?=                   # literal ?=
  ''', re.VERBOSE | re.IGNORECASE | re.MULTILINE)

# Field name regexp, including trailing colon, but not separating whitespace,
# according to RFC 2822.  Character range is from tilde to exclamation mark.
# For use with .match()
fcre = re.compile(r'[\041-\176]+:$')

# Find a header embedded in a putative header value.  Used to check for
# header injection attack.
_embeded_header = re.compile(r'\n[^ \t]+:')


def decode_header(header):
    """Decode a message header value without converting charset.

    Returns a list of (string, charset) pairs containing each of the decoded
    parts of the header.  Charset is None for non-encoded parts of the header,
    otherwise a lower-case string containing the name of the character set
    specified in the encoded string.

    header may be a string that may or may not contain RFC2047 encoded words,
    or it may be a Header object.

    An email.errors.HeaderParseError may be raised when certain decoding error
    occurs (e.g. a base64 decoding exception).
    """
    # If it is a Header object, we can just return the encoded chunks.
    if hasattr(header, '_chunks'):
        return [(_charset._encode(string, str(charset)), str(charset))
                    for string, charset in header._chunks]
    # If no encoding, just return the header with no charset.
    if not ecre.search(header):
        return [(header, None)]
    # First step is to parse all the encoded parts into triplets of the form
    # (encoded_string, encoding, charset).  For unencoded strings, the last
    # two parts will be None.
    words = []
    for line in header.splitlines():
        parts = ecre.split(line)
        first = True
        while parts:
            unencoded = parts.pop(0)
            if first:
                unencoded = unencoded.lstrip()
                first = False
            if unencoded:
                words.append((unencoded, None, None))
            if parts:
                charset = parts.pop(0).lower()
                encoding = parts.pop(0).lower()
                encoded = parts.pop(0)
                words.append((encoded, encoding, charset))
    # Now loop over words and remove words that consist of whitespace
    # between two encoded strings.
    import sys
    droplist = []
    for n, w in enumerate(words):
        if n>1 and w[1] and words[n-2][1] and words[n-1][0].isspace():
            droplist.append(n-1)
    for d in reversed(droplist):
        del words[d]

    # The next step is to decode each encoded word by applying the reverse
    # base64 or quopri transformation.  decoded_words is now a list of the
    # form (decoded_word, charset).
    decoded_words = []
    for encoded_string, encoding, charset in words:
        if encoding is None:
            # This is an unencoded word.
            decoded_words.append((encoded_string, charset))
        elif encoding == 'q':
            word = header_decode(encoded_string)
            decoded_words.append((word, charset))
        elif encoding == 'b':
            paderr = len(encoded_string) % 4   # Postel's law: add missing padding
            if paderr:
                encoded_string += '==='[:4 - paderr]
            try:
                word = base64mime.decode(encoded_string)
            except binascii.Error:
                raise HeaderParseError('Base64 decoding error')
            else:
                decoded_words.append((word, charset))
        else:
            raise AssertionError('Unexpected encoding: ' + encoding)
    # Now convert all words to bytes and collapse consecutive runs of
    # similarly encoded words.
    collapsed = []
    last_word = last_charset = None
    for word, charset in decoded_words:
        if isinstance(word, str):
            word = bytes(word, 'raw-unicode-escape')
        if last_word is None:
            last_word = word
            last_charset = charset
        elif charset != last_charset:
            collapsed.append((last_word, last_charset))
            last_word = word
            last_charset = charset
        elif last_charset is None:
            last_word += BSPACE + word
        else:
            last_word += word
    collapsed.append((last_word, last_charset))
    return collapsed


def make_header(decoded_seq, maxlinelen=None, header_name=None,
                continuation_ws=' '):
    """Create a Header from a sequence of pairs as returned by decode_header()

    decode_header() takes a header value string and returns a sequence of
    pairs of the format (decoded_string, charset) where charset is the string
    name of the character set.

    This function takes one of those sequence of pairs and returns a Header
    instance.  Optional maxlinelen, header_name, and continuation_ws are as in
    the Header constructor.
    """
    h = Header(maxlinelen=maxlinelen, header_name=header_name,
               continuation_ws=continuation_ws)
    for s, charset in decoded_seq:
        # None means us-ascii but we can simply pass it on to h.append()
        if charset is not None and not isinstance(charset, Charset):
            charset = Charset(charset)
        h.append(s, charset)
    return h


class Header(object):
    def __init__(self, s=None, charset=None,
                 maxlinelen=None, header_name=None,
                 continuation_ws=' ', errors='strict'):
        """Create a MIME-compliant header that can contain many character sets.

        Optional s is the initial header value.  If None, the initial header
        value is not set.  You can later append to the header with .append()
        method calls.  s may be a byte string or a Unicode string, but see the
        .append() documentation for semantics.

        Optional charset serves two purposes: it has the same meaning as the
        charset argument to the .append() method.  It also sets the default
        character set for all subsequent .append() calls that omit the charset
        argument.  If charset is not provided in the constructor, the us-ascii
        charset is used both as s's initial charset and as the default for
        subsequent .append() calls.

        The maximum line length can be specified explicitly via maxlinelen. For
        splitting the first line to a shorter value (to account for the field
        header which isn't included in s, e.g. `Subject') pass in the name of
        the field in header_name.  The default maxlinelen is 78 as recommended
        by RFC 2822.

        continuation_ws must be RFC 2822 compliant folding whitespace (usually
        either a space or a hard tab) which will be prepended to continuation
        lines.

        errors is passed through to the .append() call.
        """
        if charset is None:
            charset = USASCII
        elif not isinstance(charset, Charset):
            charset = Charset(charset)
        self._charset = charset
        self._continuation_ws = continuation_ws
        self._chunks = []
        if s is not None:
            self.append(s, charset, errors)
        if maxlinelen is None:
            maxlinelen = MAXLINELEN
        self._maxlinelen = maxlinelen
        if header_name is None:
            self._headerlen = 0
        else:
            # Take the separating colon and space into account.
            self._headerlen = len(header_name) + 2

    def __str__(self):
        """Return the string value of the header."""
        self._normalize()
        uchunks = []
        lastcs = None
        lastspace = None
        for string, charset in self._chunks:
            # We must preserve spaces between encoded and non-encoded word
            # boundaries, which means for us we need to add a space when we go
            # from a charset to None/us-ascii, or from None/us-ascii to a
            # charset.  Only do this for the second and subsequent chunks.
            # Don't add a space if the None/us-ascii string already has
            # a space (trailing or leading depending on transition)
            nextcs = charset
            if nextcs == _charset.UNKNOWN8BIT:
                original_bytes = string.encode('ascii', 'surrogateescape')
                string = original_bytes.decode('ascii', 'replace')
            if uchunks:
                hasspace = string and self._nonctext(string[0])
                if lastcs not in (None, 'us-ascii'):
                    if nextcs in (None, 'us-ascii') and not hasspace:
                        uchunks.append(SPACE)
                        nextcs = None
                elif nextcs not in (None, 'us-ascii') and not lastspace:
                    uchunks.append(SPACE)
            lastspace = string and self._nonctext(string[-1])
            lastcs = nextcs
            uchunks.append(string)
        return EMPTYSTRING.join(uchunks)

    # Rich comparison operators for equality only.  BAW: does it make sense to
    # have or explicitly disable <, <=, >, >= operators?
    def __eq__(self, other):
        # other may be a Header or a string.  Both are fine so coerce
        # ourselves to a unicode (of the unencoded header value), swap the
        # args and do another comparison.
        return other == str(self)

    def __ne__(self, other):
        return not self == other

    def append(self, s, charset=None, errors='strict'):
        """Append a string to the MIME header.

        Optional charset, if given, should be a Charset instance or the name
        of a character set (which will be converted to a Charset instance).  A
        value of None (the default) means that the charset given in the
        constructor is used.

        s may be a byte string or a Unicode string.  If it is a byte string
        (i.e. isinstance(s, str) is false), then charset is the encoding of
        that byte string, and a UnicodeError will be raised if the string
        cannot be decoded with that charset.  If s is a Unicode string, then
        charset is a hint specifying the character set of the characters in
        the string.  In either case, when producing an RFC 2822 compliant
        header using RFC 2047 rules, the string will be encoded using the
        output codec of the charset.  If the string cannot be encoded to the
        output codec, a UnicodeError will be raised.

        Optional `errors' is passed as the errors argument to the decode
        call if s is a byte string.
        """
        if charset is None:
            charset = self._charset
        elif not isinstance(charset, Charset):
            charset = Charset(charset)
        if not isinstance(s, str):
            input_charset = charset.input_codec or 'us-ascii'
            if input_charset == _charset.UNKNOWN8BIT:
                s = s.decode('us-ascii', 'surrogateescape')
            else:
                s = s.decode(input_charset, errors)
        # Ensure that the bytes we're storing can be decoded to the output
        # character set, otherwise an early error is raised.
        output_charset = charset.output_codec or 'us-ascii'
        if output_charset != _charset.UNKNOWN8BIT:
            try:
                s.encode(output_charset, errors)
            except UnicodeEncodeError:
                if output_charset!='us-ascii':
                    raise
                charset = UTF8
        self._chunks.append((s, charset))

    def _nonctext(self, s):
        """True if string s is not a ctext character of RFC822.
        """
        return s.isspace() or s in ('(', ')', '\\')

    def encode(self, splitchars=';, \t', maxlinelen=None, linesep='\n'):
        r"""Encode a message header into an RFC-compliant format.

        There are many issues involved in converting a given string for use in
        an email header.  Only certain character sets are readable in most
        email clients, and as header strings can only contain a subset of
        7-bit ASCII, care must be taken to properly convert and encode (with
        Base64 or quoted-printable) header strings.  In addition, there is a
        75-character length limit on any given encoded header field, so
        line-wrapping must be performed, even with double-byte character sets.

        Optional maxlinelen specifies the maximum length of each generated
        line, exclusive of the linesep string.  Individual lines may be longer
        than maxlinelen if a folding point cannot be found.  The first line
        will be shorter by the length of the header name plus ": " if a header
        name was specified at Header construction time.  The default value for
        maxlinelen is determined at header construction time.

        Optional splitchars is a string containing characters which should be
        given extra weight by the splitting algorithm during normal header
        wrapping.  This is in very rough support of RFC 2822's `higher level
        syntactic breaks':  split points preceded by a splitchar are preferred
        during line splitting, with the characters preferred in the order in
        which they appear in the string.  Space and tab may be included in the
        string to indicate whether preference should be given to one over the
        other as a split point when other split chars do not appear in the line
        being split.  Splitchars does not affect RFC 2047 encoded lines.

        Optional linesep is a string to be used to separate the lines of
        the value.  The default value is the most useful for typical
        Python applications, but it can be set to \r\n to produce RFC-compliant
        line separators when needed.
        """
        self._normalize()
        if maxlinelen is None:
            maxlinelen = self._maxlinelen
        # A maxlinelen of 0 means don't wrap.  For all practical purposes,
        # choosing a huge number here accomplishes that and makes the
        # _ValueFormatter algorithm much simpler.
        if maxlinelen == 0:
            maxlinelen = 1000000
        formatter = _ValueFormatter(self._headerlen, maxlinelen,
                                    self._continuation_ws, splitchars)
        lastcs = None
        hasspace = lastspace = None
        for string, charset in self._chunks:
            if hasspace is not None:
                hasspace = string and self._nonctext(string[0])
                import sys
                if lastcs not in (None, 'us-ascii'):
                    if not hasspace or charset not in (None, 'us-ascii'):
                        formatter.add_transition()
                elif charset not in (None, 'us-ascii') and not lastspace:
                    formatter.add_transition()
            lastspace = string and self._nonctext(string[-1])
            lastcs = charset
            hasspace = False
            lines = string.splitlines()
            if lines:
                formatter.feed('', lines[0], charset)
            else:
                formatter.feed('', '', charset)
            for line in lines[1:]:
                formatter.newline()
                if charset.header_encoding is not None:
                    formatter.feed(self._continuation_ws, ' ' + line.lstrip(),
                                   charset)
                else:
                    sline = line.lstrip()
                    fws = line[:len(line)-len(sline)]
                    formatter.feed(fws, sline, charset)
            if len(lines) > 1:
                formatter.newline()
        if self._chunks:
            formatter.add_transition()
        value = formatter._str(linesep)
        if _embeded_header.search(value):
            raise HeaderParseError("header value appears to contain "
                "an embedded header: {!r}".format(value))
        return value

    def _normalize(self):
        # Step 1: Normalize the chunks so that all runs of identical charsets
        # get collapsed into a single unicode string.
        chunks = []
        last_charset = None
        last_chunk = []
        for string, charset in self._chunks:
            if charset == last_charset:
                last_chunk.append(string)
            else:
                if last_charset is not None:
                    chunks.append((SPACE.join(last_chunk), last_charset))
                last_chunk = [string]
                last_charset = charset
        if last_chunk:
            chunks.append((SPACE.join(last_chunk), last_charset))
        self._chunks = chunks


class _ValueFormatter(object):
    def __init__(self, headerlen, maxlen, continuation_ws, splitchars):
        self._maxlen = maxlen
        self._continuation_ws = continuation_ws
        self._continuation_ws_len = len(continuation_ws)
        self._splitchars = splitchars
        self._lines = []
        self._current_line = _Accumulator(headerlen)

    def _str(self, linesep):
        self.newline()
        return linesep.join(self._lines)

    def __str__(self):
        return self._str(NL)

    def newline(self):
        end_of_line = self._current_line.pop()
        if end_of_line != (' ', ''):
            self._current_line.push(*end_of_line)
        if len(self._current_line) > 0:
            if self._current_line.is_onlyws():
                self._lines[-1] += str(self._current_line)
            else:
                self._lines.append(str(self._current_line))
        self._current_line.reset()

    def add_transition(self):
        self._current_line.push(' ', '')

    def feed(self, fws, string, charset):
        # If the charset has no header encoding (i.e. it is an ASCII encoding)
        # then we must split the header at the "highest level syntactic break"
        # possible. Note that we don't have a lot of smarts about field
        # syntax; we just try to break on semi-colons, then commas, then
        # whitespace.  Eventually, this should be pluggable.
        if charset.header_encoding is None:
            self._ascii_split(fws, string, self._splitchars)
            return
        # Otherwise, we're doing either a Base64 or a quoted-printable
        # encoding which means we don't need to split the line on syntactic
        # breaks.  We can basically just find enough characters to fit on the
        # current line, minus the RFC 2047 chrome.  What makes this trickier
        # though is that we have to split at octet boundaries, not character
        # boundaries but it's only safe to split at character boundaries so at
        # best we can only get close.
        encoded_lines = charset.header_encode_lines(string, self._maxlengths())
        # The first element extends the current line, but if it's None then
        # nothing more fit on the current line so start a new line.
        try:
            first_line = encoded_lines.pop(0)
        except IndexError:
            # There are no encoded lines, so we're done.
            return
        if first_line is not None:
            self._append_chunk(fws, first_line)
        try:
            last_line = encoded_lines.pop()
        except IndexError:
            # There was only one line.
            return
        self.newline()
        self._current_line.push(self._continuation_ws, last_line)
        # Everything else is a full line in itself.
        for line in encoded_lines:
            self._lines.append(self._continuation_ws + line)

    def _maxlengths(self):
        # The first line's length.
        yield self._maxlen - len(self._current_line)
        while True:
            yield self._maxlen - self._continuation_ws_len

    def _ascii_split(self, fws, string, splitchars):
        # The RFC 2822 header folding algorithm is simple in principle but
        # complex in practice.  Lines may be folded any place where "folding
        # white space" appears by inserting a linesep character in front of the
        # FWS.  The complication is that not all spaces or tabs qualify as FWS,
        # and we are also supposed to prefer to break at "higher level
        # syntactic breaks".  We can't do either of these without intimate
        # knowledge of the structure of structured headers, which we don't have
        # here.  So the best we can do here is prefer to break at the specified
        # splitchars, and hope that we don't choose any spaces or tabs that
        # aren't legal FWS.  (This is at least better than the old algorithm,
        # where we would sometimes *introduce* FWS after a splitchar, or the
        # algorithm before that, where we would turn all white space runs into
        # single spaces or tabs.)
        parts = re.split("(["+FWS+"]+)", fws+string)
        if parts[0]:
            parts[:0] = ['']
        else:
            parts.pop(0)
        for fws, part in zip(*[iter(parts)]*2):
            self._append_chunk(fws, part)

    def _append_chunk(self, fws, string):
        self._current_line.push(fws, string)
        if len(self._current_line) > self._maxlen:
            # Find the best split point, working backward from the end.
            # There might be none, on a long first line.
            for ch in self._splitchars:
                for i in range(self._current_line.part_count()-1, 0, -1):
                    if ch.isspace():
                        fws = self._current_line[i][0]
                        if fws and fws[0]==ch:
                            break
                    prevpart = self._current_line[i-1][1]
                    if prevpart and prevpart[-1]==ch:
                        break
                else:
                    continue
                break
            else:
                fws, part = self._current_line.pop()
                if self._current_line._initial_size > 0:
                    # There will be a header, so leave it on a line by itself.
                    self.newline()
                    if not fws:
                        # We don't use continuation_ws here because the whitespace
                        # after a header should always be a space.
                        fws = ' '
                self._current_line.push(fws, part)
                return
            remainder = self._current_line.pop_from(i)
            self._lines.append(str(self._current_line))
            self._current_line.reset(remainder)


class _Accumulator(list):

    def __init__(self, initial_size=0):
        self._initial_size = initial_size
        super().__init__()

    def push(self, fws, string):
        self.append((fws, string))

    def pop_from(self, i=0):
        popped = self[i:]
        self[i:] = []
        return popped

    def pop(self):
        if self.part_count()==0:
            return ('', '')
        return super().pop()

    def __len__(self):
        return sum((len(fws)+len(part) for fws, part in self),
                   self._initial_size)

    def __str__(self):
        return EMPTYSTRING.join((EMPTYSTRING.join((fws, part))
                                for fws, part in self))

    def reset(self, startval=None):
        if startval is None:
            startval = []
        self[:] = startval
        self._initial_size = 0

    def is_onlyws(self):
        return self._initial_size==0 and (not self or str(self).isspace())

    def part_count(self):
        return super().__len__()
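The folding performed by `_ValueFormatter` and `_Accumulator` above can be exercised end to end through the standard library's `email.header.Header`, which this backport mirrors (a minimal sketch; the 40-character limit and the subject text are arbitrary choices):

```python
from email.header import Header

# Fold a long Subject value.  maxlinelen counts the header name and
# ": " as well, and continuation lines begin with a single space.
h = Header('A fairly long subject line that will need to be folded '
           'across several continuation lines',
           header_name='Subject', maxlinelen=40)
folded = h.encode()
```

Because every word here is shorter than the line limit, each folded line stays within `maxlinelen` and breaks fall on folding white space.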
# ===== future/backports/email/base64mime.py =====
# Copyright (C) 2002-2007 Python Software Foundation
# Author: Ben Gertzfield
# Contact: email-sig@python.org

"""Base64 content transfer encoding per RFCs 2045-2047.

This module handles the content transfer encoding method defined in RFC 2045
to encode arbitrary 8-bit data using the encoding known as Base64, which
maps each group of three 8-bit bytes onto four 7-bit characters.

It is used in the MIME standards for email to attach images, audio, and text
using some 8-bit character sets to messages.

This module provides an interface to encode and decode both headers and bodies
with Base64 encoding.

RFC 2045 defines a method for including character set information in an
`encoded-word' in a header.  This method is commonly used for 8-bit real names
in To:, From:, Cc:, etc. fields, as well as Subject: lines.

This module does not do the line wrapping or end-of-line character conversion
necessary for proper internationalized headers; it only does dumb encoding and
decoding.  To deal with the various line wrapping issues, use the email.header
module.
"""
from __future__ import unicode_literals
from __future__ import division
from __future__ import absolute_import
from future.builtins import range
from future.builtins import bytes
from future.builtins import str

__all__ = [
    'body_decode',
    'body_encode',
    'decode',
    'decodestring',
    'header_encode',
    'header_length',
    ]


from base64 import b64encode
from binascii import b2a_base64, a2b_base64

CRLF = '\r\n'
NL = '\n'
EMPTYSTRING = ''

# See also Charset.py
MISC_LEN = 7


# Helpers
def header_length(bytearray):
    """Return the length of s when it is encoded with base64."""
    groups_of_3, leftover = divmod(len(bytearray), 3)
    # 4 bytes out for each 3 bytes (or nonzero fraction thereof) in.
    n = groups_of_3 * 4
    if leftover:
        n += 4
    return n


def header_encode(header_bytes, charset='iso-8859-1'):
    """Encode a single header line with Base64 encoding in a given charset.

    charset names the character set to use to encode the header.  It defaults
    to iso-8859-1.  Base64 encoding is defined in RFC 2045.
    """
    if not header_bytes:
        return ""
    if isinstance(header_bytes, str):
        header_bytes = header_bytes.encode(charset)
    encoded = b64encode(header_bytes).decode("ascii")
    return '=?%s?b?%s?=' % (charset, encoded)
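A quick sketch of the encoded-word format built by header_encode() and the length arithmetic in header_length(), using only the stdlib base64 module (the sample name and charset are arbitrary):

```python
from base64 import b64encode

# RFC 2047 encoded-word, as built by header_encode() above.
payload = 'Björn'.encode('iso-8859-1')      # 5 bytes: b'Bj\xf6rn'
encoded = b64encode(payload).decode('ascii')
encoded_word = '=?iso-8859-1?b?%s?=' % encoded

# header_length() arithmetic: 4 output chars per started 3-byte group.
groups_of_3, leftover = divmod(len(payload), 3)
predicted = groups_of_3 * 4 + (4 if leftover else 0)
```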


def body_encode(s, maxlinelen=76, eol=NL):
    r"""Encode a string with base64.

    Each line will be wrapped at, at most, maxlinelen characters (defaults to
    76 characters).

    Each line of encoded text will end with eol, which defaults to "\n".  Set
    this to "\r\n" if you will be using the result of this function directly
    in an email.
    """
    if not s:
        return s

    encvec = []
    max_unencoded = maxlinelen * 3 // 4
    for i in range(0, len(s), max_unencoded):
        # BAW: should encode() inherit b2a_base64()'s dubious behavior in
        # adding a newline to the encoded string?
        enc = b2a_base64(s[i:i + max_unencoded]).decode("ascii")
        if enc.endswith(NL) and eol != NL:
            enc = enc[:-1] + eol
        encvec.append(enc)
    return EMPTYSTRING.join(encvec)
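The chunking in body_encode() can be sketched directly with binascii: 76 output characters correspond to 57 raw input bytes per line (76 * 3 // 4), so 100 input bytes produce two lines:

```python
from binascii import b2a_base64

s = b'x' * 100
max_unencoded = 76 * 3 // 4        # 57 raw bytes yield one 76-char line
lines = [b2a_base64(s[i:i + max_unencoded]).decode('ascii')
         for i in range(0, len(s), max_unencoded)]
encoded = ''.join(lines)
```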


def decode(string):
    """Decode a raw base64 string, returning a bytes object.

    This function does not parse a full MIME header value encoded with
    base64 (like =?iso-8859-1?b?bmloISBuaWgh?=) -- please use the high
    level email.header class for that functionality.
    """
    if not string:
        return bytes()
    elif isinstance(string, str):
        return a2b_base64(string.encode('raw-unicode-escape'))
    else:
        return a2b_base64(string)


# For convenience and backwards compatibility w/ standard base64 module
body_decode = decode
decodestring = decode
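decode() is a thin wrapper over binascii.a2b_base64, so a round trip is one call in each direction (sketch with an arbitrary payload):

```python
from binascii import a2b_base64, b2a_base64

# b2a_base64 appends a trailing newline; a2b_base64 ignores it.
round_tripped = a2b_base64(b2a_base64(b'nih! nih!').decode('ascii'))
```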
# ===== future/backports/email/mime/nonmultipart.py =====
# Copyright (C) 2002-2006 Python Software Foundation
# Author: Barry Warsaw
# Contact: email-sig@python.org

"""Base class for MIME type messages that are not multipart."""
from __future__ import unicode_literals
from __future__ import division
from __future__ import absolute_import

__all__ = ['MIMENonMultipart']

from future.backports.email import errors
from future.backports.email.mime.base import MIMEBase


class MIMENonMultipart(MIMEBase):
    """Base class for MIME multipart/* type messages."""

    def attach(self, payload):
        # The public API prohibits attaching multiple subparts to MIMEBase
        # derived subtypes since none of them are, by definition, of content
        # type multipart/*
        raise errors.MultipartConversionError(
            'Cannot attach additional subparts to non-multipart/*')
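The prohibition above is observable through any MIMENonMultipart subclass; the stdlib's email.mime.text.MIMEText (which this backport mirrors) raises the same MultipartConversionError:

```python
from email.mime.text import MIMEText
from email.errors import MultipartConversionError

msg = MIMEText('hello')             # text/* is never multipart
try:
    msg.attach(MIMEText('world'))   # prohibited by MIMENonMultipart.attach
    raised = False
except MultipartConversionError:
    raised = True
```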
# ===== future/backports/email/mime/__init__.py ===== (empty file)
# ===== future/backports/email/mime/application.py =====
# Copyright (C) 2001-2006 Python Software Foundation
# Author: Keith Dart
# Contact: email-sig@python.org

"""Class representing application/* type MIME documents."""
from __future__ import unicode_literals
from __future__ import division
from __future__ import absolute_import

from future.backports.email import encoders
from future.backports.email.mime.nonmultipart import MIMENonMultipart

__all__ = ["MIMEApplication"]


class MIMEApplication(MIMENonMultipart):
    """Class for generating application/* MIME documents."""

    def __init__(self, _data, _subtype='octet-stream',
                 _encoder=encoders.encode_base64, **_params):
        """Create an application/* type MIME document.

        _data is a string containing the raw application data.

        _subtype is the MIME content type subtype, defaulting to
        'octet-stream'.

        _encoder is a function which will perform the actual encoding for
        transport of the application data, defaulting to base64 encoding.

        Any additional keyword arguments are passed to the base class
        constructor, which turns them into parameters on the Content-Type
        header.
        """
        if _subtype is None:
            raise TypeError('Invalid application MIME subtype')
        MIMENonMultipart.__init__(self, 'application', _subtype, **_params)
        self.set_payload(_data)
        _encoder(self)
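Typical usage, shown here via the stdlib equivalent email.mime.application (assumed to behave like this backport): the default subtype is octet-stream and the default encoder sets a base64 Content-Transfer-Encoding.

```python
from email.mime.application import MIMEApplication

doc = MIMEApplication(b'\x00\x01 raw payload')

# get_payload(decode=True) undoes the base64 transfer encoding.
decoded = doc.get_payload(decode=True)
```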
# ===== future/backports/email/mime/text.py =====
# Copyright (C) 2001-2006 Python Software Foundation
# Author: Barry Warsaw
# Contact: email-sig@python.org

"""Class representing text/* type MIME documents."""
from __future__ import unicode_literals
from __future__ import division
from __future__ import absolute_import

__all__ = ['MIMEText']

from future.backports.email.encoders import encode_7or8bit
from future.backports.email.mime.nonmultipart import MIMENonMultipart


class MIMEText(MIMENonMultipart):
    """Class for generating text/* type MIME documents."""

    def __init__(self, _text, _subtype='plain', _charset=None):
        """Create a text/* type MIME document.

        _text is the string for this message object.

        _subtype is the MIME sub content type, defaulting to "plain".

        _charset is the character set parameter added to the Content-Type
        header.  This defaults to "us-ascii".  Note that as a side-effect, the
        Content-Transfer-Encoding header will also be set.
        """

        # If no _charset was specified, check to see if there are non-ascii
        # characters present. If not, use 'us-ascii', otherwise use utf-8.
        # XXX: This can be removed once #7304 is fixed.
        if _charset is None:
            try:
                _text.encode('us-ascii')
                _charset = 'us-ascii'
            except UnicodeEncodeError:
                _charset = 'utf-8'

        MIMENonMultipart.__init__(self, 'text', _subtype,
                                  **{'charset': _charset})

        self.set_payload(_text, _charset)
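The charset fallback in the constructor above can be seen with the stdlib equivalent email.mime.text.MIMEText (assumed to match this backport): pure-ASCII text gets us-ascii, anything else utf-8.

```python
from email.mime.text import MIMEText

ascii_msg = MIMEText('plain ascii text')
utf8_msg = MIMEText('h\u00e9llo')   # non-ascii triggers the utf-8 fallback
```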
# ===== future/backports/email/mime/multipart.py =====
# Copyright (C) 2002-2006 Python Software Foundation
# Author: Barry Warsaw
# Contact: email-sig@python.org

"""Base class for MIME multipart/* type messages."""
from __future__ import unicode_literals
from __future__ import division
from __future__ import absolute_import

__all__ = ['MIMEMultipart']

from future.backports.email.mime.base import MIMEBase


class MIMEMultipart(MIMEBase):
    """Base class for MIME multipart/* type messages."""

    def __init__(self, _subtype='mixed', boundary=None, _subparts=None,
                 **_params):
        """Creates a multipart/* type message.

        By default, creates a multipart/mixed message, with proper
        Content-Type and MIME-Version headers.

        _subtype is the subtype of the multipart content type, defaulting to
        `mixed'.

        boundary is the multipart boundary string.  By default it is
        calculated as needed.

        _subparts is a sequence of initial subparts for the payload.  It
        must be an iterable object, such as a list.  You can always
        attach new subparts to the message by using the attach() method.

        Additional parameters for the Content-Type header are taken from the
        keyword arguments (or passed into the _params argument).
        """
        MIMEBase.__init__(self, 'multipart', _subtype, **_params)

        # Initialise _payload to an empty list as the Message superclass's
        # implementation of is_multipart assumes that _payload is a list for
        # multipart messages.
        self._payload = []

        if _subparts:
            for p in _subparts:
                self.attach(p)
        if boundary:
            self.set_boundary(boundary)
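A short usage sketch via the stdlib equivalent email.mime.multipart (assumed to behave like this backport), seeding the payload through the _subparts argument:

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

outer = MIMEMultipart(_subparts=[MIMEText('part one'),
                                 MIMEText('part two')])
```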
# ===== future/backports/email/mime/message.py =====
# Copyright (C) 2001-2006 Python Software Foundation
# Author: Barry Warsaw
# Contact: email-sig@python.org

"""Class representing message/* MIME documents."""
from __future__ import unicode_literals
from __future__ import division
from __future__ import absolute_import

__all__ = ['MIMEMessage']

from future.backports.email import message
from future.backports.email.mime.nonmultipart import MIMENonMultipart


class MIMEMessage(MIMENonMultipart):
    """Class representing message/* MIME documents."""

    def __init__(self, _msg, _subtype='rfc822'):
        """Create a message/* type MIME document.

        _msg is a message object and must be an instance of Message, or a
        derived class of Message, otherwise a TypeError is raised.

        Optional _subtype defines the subtype of the contained message.  The
        default is "rfc822" (this is defined by the MIME standard, even though
        the term "rfc822" is technically outdated by RFC 2822).
        """
        MIMENonMultipart.__init__(self, 'message', _subtype)
        if not isinstance(_msg, message.Message):
            raise TypeError('Argument is not an instance of Message')
        # It's convenient to use this base class method.  We need to do it
        # this way or we'll get an exception
        message.Message.attach(self, _msg)
        # And be sure our default type is set correctly
        self.set_default_type('message/rfc822')
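Wrapping one message inside another, via the stdlib equivalent email.mime.message (assumed to match this backport); the enclosed message becomes the single element of the payload list:

```python
from email.mime.message import MIMEMessage
from email.mime.text import MIMEText

inner = MIMEText('enclosed body')
wrapper = MIMEMessage(inner)
```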
# ===== future/backports/email/mime/audio.py =====
# Copyright (C) 2001-2007 Python Software Foundation
# Author: Anthony Baxter
# Contact: email-sig@python.org

"""Class representing audio/* type MIME documents."""
from __future__ import unicode_literals
from __future__ import division
from __future__ import absolute_import

__all__ = ['MIMEAudio']

import sndhdr

from io import BytesIO
from future.backports.email import encoders
from future.backports.email.mime.nonmultipart import MIMENonMultipart


_sndhdr_MIMEmap = {'au'  : 'basic',
                   'wav' : 'x-wav',
                   'aiff': 'x-aiff',
                   'aifc': 'x-aiff',
                   }

# There are others in sndhdr that don't have MIME types. :(
# Additional ones to be added to sndhdr? midi, mp3, realaudio, wma??
def _whatsnd(data):
    """Try to identify a sound file type.

    sndhdr.what() has a pretty cruddy interface, unfortunately.  This is why
    we re-do it here.  It would be easier to reverse engineer the Unix 'file'
    command and use the standard 'magic' file, as shipped with a modern Unix.
    """
    hdr = data[:512]
    fakefile = BytesIO(hdr)
    for testfn in sndhdr.tests:
        res = testfn(hdr, fakefile)
        if res is not None:
            return _sndhdr_MIMEmap.get(res[0])
    return None


class MIMEAudio(MIMENonMultipart):
    """Class for generating audio/* MIME documents."""

    def __init__(self, _audiodata, _subtype=None,
                 _encoder=encoders.encode_base64, **_params):
        """Create an audio/* type MIME document.

        _audiodata is a string containing the raw audio data.  If this data
        can be decoded by the standard Python `sndhdr' module, then the
        subtype will be automatically included in the Content-Type header.
        Otherwise, you can specify the specific audio subtype via the
        _subtype parameter.  If _subtype is not given, and no subtype can be
        guessed, a TypeError is raised.

        _encoder is a function which will perform the actual encoding for
        transport of the audio data.  It takes one argument, which is this
        MIMEAudio instance.  It should use get_payload() and set_payload() to
        change the payload to the encoded form.  It should also add any
        Content-Transfer-Encoding or other headers to the message as
        necessary.  The default encoding is Base64.

        Any additional keyword arguments are passed to the base class
        constructor, which turns them into parameters on the Content-Type
        header.
        """
        if _subtype is None:
            _subtype = _whatsnd(_audiodata)
        if _subtype is None:
            raise TypeError('Could not find audio MIME subtype')
        MIMENonMultipart.__init__(self, 'audio', _subtype, **_params)
        self.set_payload(_audiodata)
        _encoder(self)
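# Usage sketch (not part of the backport): the standard-library
# email.mime.audio.MIMEAudio mirrors this class.  Passing an explicit
# _subtype avoids any dependence on sndhdr detection.

```python
from email.mime.audio import MIMEAudio

# The byte string here is fake placeholder data, not real audio.
msg = MIMEAudio(b"\x00\x01fake-audio-bytes", _subtype="x-wav")
print(msg.get_content_type())            # audio/x-wav
print(msg["Content-Transfer-Encoding"])  # base64 (the default encoder)
```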
# future/backports/email/mime/base.py
# Copyright (C) 2001-2006 Python Software Foundation
# Author: Barry Warsaw
# Contact: email-sig@python.org

"""Base class for MIME specializations."""
from __future__ import absolute_import, division, unicode_literals
from future.backports.email import message

__all__ = ['MIMEBase']


class MIMEBase(message.Message):
    """Base class for MIME specializations."""

    def __init__(self, _maintype, _subtype, **_params):
        """This constructor adds a Content-Type: and a MIME-Version: header.

        The Content-Type: header is taken from the _maintype and _subtype
        arguments.  Additional parameters for this header are taken from the
        keyword arguments.
        """
        message.Message.__init__(self)
        ctype = '%s/%s' % (_maintype, _subtype)
        self.add_header('Content-Type', ctype, **_params)
        self['MIME-Version'] = '1.0'
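# Usage sketch (not part of the backport), run against the standard-library
# equivalent of this class: the constructor sets Content-Type from the two
# positional arguments and turns keyword arguments into header parameters.

```python
from email.mime.base import MIMEBase

msg = MIMEBase("application", "octet-stream", name="blob.bin")
print(msg["MIME-Version"])     # 1.0
print(msg.get_content_type())  # application/octet-stream
print(msg.get_param("name"))   # blob.bin
```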
# future/backports/email/mime/image.py
# Copyright (C) 2001-2006 Python Software Foundation
# Author: Barry Warsaw
# Contact: email-sig@python.org

"""Class representing image/* type MIME documents."""
from __future__ import unicode_literals
from __future__ import division
from __future__ import absolute_import

__all__ = ['MIMEImage']

import imghdr

from future.backports.email import encoders
from future.backports.email.mime.nonmultipart import MIMENonMultipart


class MIMEImage(MIMENonMultipart):
    """Class for generating image/* type MIME documents."""

    def __init__(self, _imagedata, _subtype=None,
                 _encoder=encoders.encode_base64, **_params):
        """Create an image/* type MIME document.

        _imagedata is a string containing the raw image data.  If this data
        can be decoded by the standard Python `imghdr' module, then the
        subtype will be automatically included in the Content-Type header.
        Otherwise, you can specify the specific image subtype via the _subtype
        parameter.

        _encoder is a function which will perform the actual encoding for
        transport of the image data.  It takes one argument, which is this
        MIMEImage instance.  It should use get_payload() and set_payload() to
        change the payload to the encoded form.  It should also add any
        Content-Transfer-Encoding or other headers to the message as
        necessary.  The default encoding is Base64.

        Any additional keyword arguments are passed to the base class
        constructor, which turns them into parameters on the Content-Type
        header.
        """
        if _subtype is None:
            _subtype = imghdr.what(None, _imagedata)
        if _subtype is None:
            raise TypeError('Could not guess image MIME subtype')
        MIMENonMultipart.__init__(self, 'image', _subtype, **_params)
        self.set_payload(_imagedata)
        _encoder(self)
# future/backports/email/message.py
# -*- coding: utf-8 -*-
# Copyright (C) 2001-2007 Python Software Foundation
# Author: Barry Warsaw
# Contact: email-sig@python.org

"""Basic message object for the email package object model."""
from __future__ import absolute_import, division, unicode_literals
from future.builtins import list, range, str, zip

__all__ = ['Message']

import re
import uu
import base64
import binascii
from io import BytesIO, StringIO

# Intrapackage imports
from future.utils import as_native_str
from future.backports.email import utils
from future.backports.email import errors
from future.backports.email._policybase import compat32
from future.backports.email import charset as _charset
from future.backports.email._encoded_words import decode_b
Charset = _charset.Charset

SEMISPACE = '; '

# Regular expression that matches `special' characters in parameters, the
# existence of which force quoting of the parameter value.
tspecials = re.compile(r'[ \(\)<>@,;:\\"/\[\]\?=]')


def _splitparam(param):
    # Split header parameters.  BAW: this may be too simple.  It isn't
    # strictly RFC 2045 (section 5.1) compliant, but it catches most headers
    # found in the wild.  We may eventually need a full fledged parser.
    # RDM: we might have a Header here; for now just stringify it.
    a, sep, b = str(param).partition(';')
    if not sep:
        return a.strip(), None
    return a.strip(), b.strip()

def _formatparam(param, value=None, quote=True):
    """Convenience function to format and return a key=value pair.

    This will quote the value if needed or if quote is true.  If value is a
    three tuple (charset, language, value), it will be encoded according
    to RFC2231 rules.  If it contains non-ascii characters it will likewise
    be encoded according to RFC2231 rules, using the utf-8 charset and
    a null language.
    """
    if value is not None and len(value) > 0:
        # A tuple is used for RFC 2231 encoded parameter values where items
        # are (charset, language, value).  charset is a string, not a Charset
        # instance.  RFC 2231 encoded values are never quoted, per RFC.
        if isinstance(value, tuple):
            # Encode as per RFC 2231
            param += '*'
            value = utils.encode_rfc2231(value[2], value[0], value[1])
            return '%s=%s' % (param, value)
        else:
            try:
                value.encode('ascii')
            except UnicodeEncodeError:
                param += '*'
                value = utils.encode_rfc2231(value, 'utf-8', '')
                return '%s=%s' % (param, value)
        # BAW: Please check this.  I think that if quote is set it should
        # force quoting even if not necessary.
        if quote or tspecials.search(value):
            return '%s="%s"' % (param, utils.quote(value))
        else:
            return '%s=%s' % (param, value)
    else:
        return param
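# Sketch of the RFC 2231 branch above (not part of the backport): the
# non-ASCII path delegates to utils.encode_rfc2231, which the stdlib
# email.utils helper of the same name mirrors.

```python
from email.utils import encode_rfc2231

# A non-ASCII value becomes charset'language'percent-encoded-text, and the
# parameter name gets a '*' suffix, e.g.  filename*=utf-8''...
encoded = encode_rfc2231("Fußballer.ppt", "utf-8", "")
print(encoded)                     # utf-8''Fu%C3%9Fballer.ppt
print("filename*=%s" % encoded)
```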

def _parseparam(s):
    # RDM This might be a Header, so for now stringify it.
    s = ';' + str(s)
    plist = []
    while s[:1] == ';':
        s = s[1:]
        end = s.find(';')
        while end > 0 and (s.count('"', 0, end) - s.count('\\"', 0, end)) % 2:
            end = s.find(';', end + 1)
        if end < 0:
            end = len(s)
        f = s[:end]
        if '=' in f:
            i = f.index('=')
            f = f[:i].strip().lower() + '=' + f[i+1:].strip()
        plist.append(f.strip())
        s = s[end:]
    return plist


def _unquotevalue(value):
    # This is different than utils.collapse_rfc2231_value() because it doesn't
    # try to convert the value to a unicode.  Message.get_param() and
    # Message.get_params() are both currently defined to return the tuple in
    # the face of RFC 2231 parameters.
    if isinstance(value, tuple):
        return value[0], value[1], utils.unquote(value[2])
    else:
        return utils.unquote(value)


class Message(object):
    """Basic message object.

    A message object is defined as something that has a bunch of RFC 2822
    headers and a payload.  It may optionally have an envelope header
    (a.k.a. Unix-From or From_ header).  If the message is a container (i.e. a
    multipart or a message/rfc822), then the payload is a list of Message
    objects, otherwise it is a string.

    Message objects implement part of the `mapping' interface, which assumes
    there is exactly one occurrence of the header per message.  Some headers
    do in fact appear multiple times (e.g. Received) and for those headers,
    you must use the explicit API to set or get all the headers.  Not all of
    the mapping methods are implemented.
    """
    def __init__(self, policy=compat32):
        self.policy = policy
        self._headers = list()
        self._unixfrom = None
        self._payload = None
        self._charset = None
        # Defaults for multipart messages
        self.preamble = self.epilogue = None
        self.defects = []
        # Default content type
        self._default_type = 'text/plain'

    @as_native_str(encoding='utf-8')
    def __str__(self):
        """Return the entire formatted message as a string.
        This includes the headers, body, and envelope header.
        """
        return self.as_string()

    def as_string(self, unixfrom=False, maxheaderlen=0):
        """Return the entire formatted message as a (unicode) string.
        Optional `unixfrom', when True, means include the Unix From_ envelope
        header.

        This is a convenience method and may not generate the message exactly
        as you intend.  For more flexibility, use the flatten() method of a
        Generator instance.
        """
        from future.backports.email.generator import Generator
        fp = StringIO()
        g = Generator(fp, mangle_from_=False, maxheaderlen=maxheaderlen)
        g.flatten(self, unixfrom=unixfrom)
        return fp.getvalue()

    def is_multipart(self):
        """Return True if the message consists of multiple parts."""
        return isinstance(self._payload, list)

    #
    # Unix From_ line
    #
    def set_unixfrom(self, unixfrom):
        self._unixfrom = unixfrom

    def get_unixfrom(self):
        return self._unixfrom

    #
    # Payload manipulation.
    #
    def attach(self, payload):
        """Add the given payload to the current payload.

        The current payload will always be a list of objects after this method
        is called.  If you want to set the payload to a scalar object, use
        set_payload() instead.
        """
        if self._payload is None:
            self._payload = [payload]
        else:
            self._payload.append(payload)

    def get_payload(self, i=None, decode=False):
        """Return a reference to the payload.

        The payload will either be a list object or a string.  If you mutate
        the list object, you modify the message's payload in place.  Optional
        i returns that index into the payload.

        Optional decode is a flag indicating whether the payload should be
        decoded or not, according to the Content-Transfer-Encoding header
        (default is False).

        When True and the message is not a multipart, the payload will be
        decoded if this header's value is `quoted-printable' or `base64'.  If
        some other encoding is used, or the header is missing, or if the
        payload has bogus data (i.e. bogus base64 or uuencoded data), the
        payload is returned as-is.

        If the message is a multipart and the decode flag is True, then None
        is returned.
        """
        # Here is the logic table for this code, based on the email5.0.0 code:
        #   i     decode  is_multipart  result
        # ------  ------  ------------  ------------------------------
        #  None   True    True          None
        #   i     True    True          None
        #  None   False   True          _payload (a list)
        #   i     False   True          _payload element i (a Message)
        #   i     False   False         error (not a list)
        #   i     True    False         error (not a list)
        #  None   False   False         _payload
        #  None   True    False         _payload decoded (bytes)
        # Note that Barry planned to factor out the 'decode' case, but that
        # isn't so easy now that we handle the 8 bit data, which needs to be
        # converted in both the decode and non-decode path.
        if self.is_multipart():
            if decode:
                return None
            if i is None:
                return self._payload
            else:
                return self._payload[i]
        # For backward compatibility, use isinstance and this error message
        # instead of the more logical is_multipart test.
        if i is not None and not isinstance(self._payload, list):
            raise TypeError('Expected list, got %s' % type(self._payload))
        payload = self._payload
        # cte might be a Header, so for now stringify it.
        cte = str(self.get('content-transfer-encoding', '')).lower()
        # payload may be bytes here.
        if isinstance(payload, str):
            payload = str(payload)    # for Python-Future, so surrogateescape works
            if utils._has_surrogates(payload):
                bpayload = payload.encode('ascii', 'surrogateescape')
                if not decode:
                    try:
                        payload = bpayload.decode(self.get_param('charset', 'ascii'), 'replace')
                    except LookupError:
                        payload = bpayload.decode('ascii', 'replace')
            elif decode:
                try:
                    bpayload = payload.encode('ascii')
                except UnicodeError:
                    # This won't happen for RFC compliant messages (messages
                    # containing only ASCII codepoints in the unicode input).
                    # If it does happen, turn the string into bytes in a way
                    # guaranteed not to fail.
                    bpayload = payload.encode('raw-unicode-escape')
        if not decode:
            return payload
        if cte == 'quoted-printable':
            return utils._qdecode(bpayload)
        elif cte == 'base64':
            # XXX: this is a bit of a hack; decode_b should probably be factored
            # out somewhere, but I haven't figured out where yet.
            value, defects = decode_b(b''.join(bpayload.splitlines()))
            for defect in defects:
                self.policy.handle_defect(self, defect)
            return value
        elif cte in ('x-uuencode', 'uuencode', 'uue', 'x-uue'):
            in_file = BytesIO(bpayload)
            out_file = BytesIO()
            try:
                uu.decode(in_file, out_file, quiet=True)
                return out_file.getvalue()
            except uu.Error:
                # Some decoding problem
                return bpayload
        if isinstance(payload, str):
            return bpayload
        return payload
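# Sketch of the decode=True path (not part of the backport), using the stdlib
# parser this code tracks: a base64 Content-Transfer-Encoding is decoded to
# bytes, while get_payload() without decode returns the raw transfer text.

```python
import email

raw = ("Content-Type: text/plain\n"
       "Content-Transfer-Encoding: base64\n"
       "\n"
       "aGVsbG8=\n")
msg = email.message_from_string(raw)
print(repr(msg.get_payload()))             # the raw base64 text
print(msg.get_payload(decode=True))        # b'hello'
```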

    def set_payload(self, payload, charset=None):
        """Set the payload to the given value.

        Optional charset sets the message's default character set.  See
        set_charset() for details.
        """
        self._payload = payload
        if charset is not None:
            self.set_charset(charset)

    def set_charset(self, charset):
        """Set the charset of the payload to a given character set.

        charset can be a Charset instance, a string naming a character set, or
        None.  If it is a string it will be converted to a Charset instance.
        If charset is None, the charset parameter will be removed from the
        Content-Type field.  Anything else will generate a TypeError.

        The message will be assumed to be of type text/* encoded with
        charset.input_charset.  It will be converted to charset.output_charset
        and encoded properly, if needed, when generating the plain text
        representation of the message.  MIME headers (MIME-Version,
        Content-Type, Content-Transfer-Encoding) will be added as needed.
        """
        if charset is None:
            self.del_param('charset')
            self._charset = None
            return
        if not isinstance(charset, Charset):
            charset = Charset(charset)
        self._charset = charset
        if 'MIME-Version' not in self:
            self.add_header('MIME-Version', '1.0')
        if 'Content-Type' not in self:
            self.add_header('Content-Type', 'text/plain',
                            charset=charset.get_output_charset())
        else:
            self.set_param('charset', charset.get_output_charset())
        if charset != charset.get_output_charset():
            self._payload = charset.body_encode(self._payload)
        if 'Content-Transfer-Encoding' not in self:
            cte = charset.get_body_encoding()
            try:
                cte(self)
            except TypeError:
                self._payload = charset.body_encode(self._payload)
                self.add_header('Content-Transfer-Encoding', cte)

    def get_charset(self):
        """Return the Charset instance associated with the message's payload.
        """
        return self._charset
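# Sketch of set_charset() side effects (not part of the backport), using the
# stdlib legacy Message class: set_payload() with a charset adds the
# MIME-Version, Content-Type and Content-Transfer-Encoding headers as needed.

```python
from email.message import Message

m = Message()
m.set_payload("hello", charset="utf-8")   # calls set_charset('utf-8')
print(m["MIME-Version"])                  # 1.0
print(m.get_content_charset())            # utf-8
print(m["Content-Transfer-Encoding"])     # base64 (utf-8's body encoding)
```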

    #
    # MAPPING INTERFACE (partial)
    #
    def __len__(self):
        """Return the total number of headers, including duplicates."""
        return len(self._headers)

    def __getitem__(self, name):
        """Get a header value.

        Return None if the header is missing instead of raising an exception.

        Note that if the header appeared multiple times, exactly which
        occurrence gets returned is undefined.  Use get_all() to get all
        the values matching a header field name.
        """
        return self.get(name)

    def __setitem__(self, name, val):
        """Set the value of a header.

        Note: this does not overwrite an existing header with the same field
        name.  Use __delitem__() first to delete any existing headers.
        """
        max_count = self.policy.header_max_count(name)
        if max_count:
            lname = name.lower()
            found = 0
            for k, v in self._headers:
                if k.lower() == lname:
                    found += 1
                    if found >= max_count:
                        raise ValueError("There may be at most {} {} headers "
                                         "in a message".format(max_count, name))
        self._headers.append(self.policy.header_store_parse(name, val))

    def __delitem__(self, name):
        """Delete all occurrences of a header, if present.

        Does not raise an exception if the header is missing.
        """
        name = name.lower()
        newheaders = list()
        for k, v in self._headers:
            if k.lower() != name:
                newheaders.append((k, v))
        self._headers = newheaders

    def __contains__(self, name):
        return name.lower() in [k.lower() for k, v in self._headers]

    def __iter__(self):
        for field, value in self._headers:
            yield field

    def keys(self):
        """Return a list of all the message's header field names.

        These will be sorted in the order they appeared in the original
        message, or were added to the message, and may contain duplicates.
        Any fields deleted and re-inserted are always appended to the header
        list.
        """
        return [k for k, v in self._headers]

    def values(self):
        """Return a list of all the message's header values.

        These will be sorted in the order they appeared in the original
        message, or were added to the message, and may contain duplicates.
        Any fields deleted and re-inserted are always appended to the header
        list.
        """
        return [self.policy.header_fetch_parse(k, v)
                for k, v in self._headers]

    def items(self):
        """Get all the message's header fields and values.

        These will be sorted in the order they appeared in the original
        message, or were added to the message, and may contain duplicates.
        Any fields deleted and re-inserted are always appended to the header
        list.
        """
        return [(k, self.policy.header_fetch_parse(k, v))
                for k, v in self._headers]

    def get(self, name, failobj=None):
        """Get a header value.

        Like __getitem__() but return failobj instead of None when the field
        is missing.
        """
        name = name.lower()
        for k, v in self._headers:
            if k.lower() == name:
                return self.policy.header_fetch_parse(k, v)
        return failobj

    #
    # "Internal" methods (public API, but only intended for use by a parser
    # or generator, not normal application code.
    #

    def set_raw(self, name, value):
        """Store name and value in the model without modification.

        This is an "internal" API, intended only for use by a parser.
        """
        self._headers.append((name, value))

    def raw_items(self):
        """Return the (name, value) header pairs without modification.

        This is an "internal" API, intended only for use by a generator.
        """
        return iter(self._headers.copy())

    #
    # Additional useful stuff
    #

    def get_all(self, name, failobj=None):
        """Return a list of all the values for the named field.

        These will be sorted in the order they appeared in the original
        message, and may contain duplicates.  Any fields deleted and
        re-inserted are always appended to the header list.

        If no such fields exist, failobj is returned (defaults to None).
        """
        values = []
        name = name.lower()
        for k, v in self._headers:
            if k.lower() == name:
                values.append(self.policy.header_fetch_parse(k, v))
        if not values:
            return failobj
        return values
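# Sketch of duplicate-header handling (not part of the backport), using the
# stdlib legacy Message class: __setitem__ appends rather than overwrites, and
# get_all() matches field names case-insensitively.

```python
from email.message import Message

m = Message()
m["Received"] = "from host-a"
m["Received"] = "from host-b"      # appended, not overwritten
print(len(m))                      # 2
print(m.get_all("received"))       # ['from host-a', 'from host-b']
print(m.get_all("x-missing",()))  # the failobj
```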

    def add_header(self, _name, _value, **_params):
        """Extended header setting.

        name is the header field to add.  keyword arguments can be used to set
        additional parameters for the header field, with underscores converted
        to dashes.  Normally the parameter will be added as key="value" unless
        value is None, in which case only the key will be added.  If a
        parameter value contains non-ASCII characters it can be specified as a
        three-tuple of (charset, language, value), in which case it will be
        encoded according to RFC2231 rules.  Otherwise it will be encoded using
        the utf-8 charset and a language of ''.

        Examples:

        msg.add_header('content-disposition', 'attachment', filename='bud.gif')
        msg.add_header('content-disposition', 'attachment',
                       filename=('utf-8', '', 'Fußballer.ppt'))
        msg.add_header('content-disposition', 'attachment',
                       filename='Fußballer.ppt')
        """
        parts = []
        for k, v in _params.items():
            if v is None:
                parts.append(k.replace('_', '-'))
            else:
                parts.append(_formatparam(k.replace('_', '-'), v))
        if _value is not None:
            parts.insert(0, _value)
        self[_name] = SEMISPACE.join(parts)
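# The first docstring example above, run against the stdlib equivalent (not
# part of the backport): the keyword argument becomes a quoted parameter.

```python
from email.message import Message

m = Message()
m.add_header("Content-Disposition", "attachment", filename="bud.gif")
print(m["Content-Disposition"])   # attachment; filename="bud.gif"
print(m.get_filename())           # bud.gif
```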

    def replace_header(self, _name, _value):
        """Replace a header.

        Replace the first matching header found in the message, retaining
        header order and case.  If no matching header was found, a KeyError is
        raised.
        """
        _name = _name.lower()
        for i, (k, v) in zip(range(len(self._headers)), self._headers):
            if k.lower() == _name:
                self._headers[i] = self.policy.header_store_parse(k, _value)
                break
        else:
            raise KeyError(_name)

    #
    # Use these three methods instead of the three above.
    #

    def get_content_type(self):
        """Return the message's content type.

        The returned string is coerced to lower case of the form
        `maintype/subtype'.  If there was no Content-Type header in the
        message, the default type as given by get_default_type() will be
        returned.  Since according to RFC 2045, messages always have a default
        type this will always return a value.

        RFC 2045 defines a message's default type to be text/plain unless it
        appears inside a multipart/digest container, in which case it would be
        message/rfc822.
        """
        missing = object()
        value = self.get('content-type', missing)
        if value is missing:
            # This should have no parameters
            return self.get_default_type()
        ctype = _splitparam(value)[0].lower()
        # RFC 2045, section 5.2 says if it's invalid, use text/plain
        if ctype.count('/') != 1:
            return 'text/plain'
        return ctype

    def get_content_maintype(self):
        """Return the message's main content type.

        This is the `maintype' part of the string returned by
        get_content_type().
        """
        ctype = self.get_content_type()
        return ctype.split('/')[0]

    def get_content_subtype(self):
        """Returns the message's sub-content type.

        This is the `subtype' part of the string returned by
        get_content_type().
        """
        ctype = self.get_content_type()
        return ctype.split('/')[1]

    def get_default_type(self):
        """Return the `default' content type.

        Most messages have a default content type of text/plain, except for
        messages that are subparts of multipart/digest containers.  Such
        subparts have a default content type of message/rfc822.
        """
        return self._default_type

    def set_default_type(self, ctype):
        """Set the `default' content type.

        ctype should be either "text/plain" or "message/rfc822", although this
        is not enforced.  The default content type is not stored in the
        Content-Type header.
        """
        self._default_type = ctype

    def _get_params_preserve(self, failobj, header):
        # Like get_params() but preserves the quoting of values.  BAW:
        # should this be part of the public interface?
        missing = object()
        value = self.get(header, missing)
        if value is missing:
            return failobj
        params = []
        for p in _parseparam(value):
            try:
                name, val = p.split('=', 1)
                name = name.strip()
                val = val.strip()
            except ValueError:
                # Must have been a bare attribute
                name = p.strip()
                val = ''
            params.append((name, val))
        params = utils.decode_params(params)
        return params

    def get_params(self, failobj=None, header='content-type', unquote=True):
        """Return the message's Content-Type parameters, as a list.

        The elements of the returned list are 2-tuples of key/value pairs, as
        split on the `=' sign.  The left hand side of the `=' is the key,
        while the right hand side is the value.  If there is no `=' sign in
        the parameter the value is the empty string.  The value is as
        described in the get_param() method.

        Optional failobj is the object to return if there is no Content-Type
        header.  Optional header is the header to search instead of
        Content-Type.  If unquote is True, the value is unquoted.
        """
        missing = object()
        params = self._get_params_preserve(missing, header)
        if params is missing:
            return failobj
        if unquote:
            return [(k, _unquotevalue(v)) for k, v in params]
        else:
            return params

    def get_param(self, param, failobj=None, header='content-type',
                  unquote=True):
        """Return the parameter value if found in the Content-Type header.

        Optional failobj is the object to return if there is no Content-Type
        header, or the Content-Type header has no such parameter.  Optional
        header is the header to search instead of Content-Type.

        Parameter keys are always compared case insensitively.  The return
        value can either be a string, or a 3-tuple if the parameter was RFC
        2231 encoded.  When it's a 3-tuple, the elements of the value are of
        the form (CHARSET, LANGUAGE, VALUE).  Note that both CHARSET and
        LANGUAGE can be None, in which case you should consider VALUE to be
        encoded in the us-ascii charset.  You can usually ignore LANGUAGE.
        The parameter value (either the returned string, or the VALUE item in
        the 3-tuple) is always unquoted, unless unquote is set to False.

        If your application doesn't care whether the parameter was RFC 2231
        encoded, it can turn the return value into a string as follows:

            rawparam = msg.get_param('foo')
            param = email.utils.collapse_rfc2231_value(rawparam)

        """
        if header not in self:
            return failobj
        for k, v in self._get_params_preserve(failobj, header):
            if k.lower() == param.lower():
                if unquote:
                    return _unquotevalue(v)
                else:
                    return v
        return failobj

    def set_param(self, param, value, header='Content-Type', requote=True,
                  charset=None, language=''):
        """Set a parameter in the Content-Type header.

        If the parameter already exists in the header, its value will be
        replaced with the new value.

        If header is Content-Type and has not yet been defined for this
        message, it will be set to "text/plain" and the new parameter and
        value will be appended as per RFC 2045.

        An alternate header can be specified in the header argument, and all
        parameters will be quoted as necessary unless requote is False.

        If charset is specified, the parameter will be encoded according to RFC
        2231.  Optional language specifies the RFC 2231 language, defaulting
        to the empty string.  Both charset and language should be strings.
        """
        if not isinstance(value, tuple) and charset:
            value = (charset, language, value)

        if header not in self and header.lower() == 'content-type':
            ctype = 'text/plain'
        else:
            ctype = self.get(header)
        if not self.get_param(param, header=header):
            if not ctype:
                ctype = _formatparam(param, value, requote)
            else:
                ctype = SEMISPACE.join(
                    [ctype, _formatparam(param, value, requote)])
        else:
            ctype = ''
            for old_param, old_value in self.get_params(header=header,
                                                        unquote=requote):
                append_param = ''
                if old_param.lower() == param.lower():
                    append_param = _formatparam(param, value, requote)
                else:
                    append_param = _formatparam(old_param, old_value, requote)
                if not ctype:
                    ctype = append_param
                else:
                    ctype = SEMISPACE.join([ctype, append_param])
        if ctype != self.get(header):
            del self[header]
            self[header] = ctype
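# Sketch of set_param() (not part of the backport), using the stdlib legacy
# Message class: a new parameter is appended, an existing one is replaced.

```python
from email.message import Message

m = Message()
m["Content-Type"] = "text/plain"
m.set_param("charset", "utf-8")
print(m["Content-Type"])           # text/plain; charset="utf-8"
m.set_param("charset", "latin-1")  # replaces the existing value
print(m.get_param("charset"))      # latin-1
```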

    def del_param(self, param, header='content-type', requote=True):
        """Remove the given parameter completely from the Content-Type header.

        The header will be re-written in place without the parameter or its
        value. All values will be quoted as necessary unless requote is
        False.  Optional header specifies an alternative to the Content-Type
        header.
        """
        if header not in self:
            return
        new_ctype = ''
        for p, v in self.get_params(header=header, unquote=requote):
            if p.lower() != param.lower():
                if not new_ctype:
                    new_ctype = _formatparam(p, v, requote)
                else:
                    new_ctype = SEMISPACE.join([new_ctype,
                                                _formatparam(p, v, requote)])
        if new_ctype != self.get(header):
            del self[header]
            self[header] = new_ctype

    def set_type(self, type, header='Content-Type', requote=True):
        """Set the main type and subtype for the Content-Type header.

        type must be a string in the form "maintype/subtype", otherwise a
        ValueError is raised.

        This method replaces the Content-Type header, keeping all the
        parameters in place.  If requote is False, this leaves the existing
        header's quoting as is.  Otherwise, the parameters will be quoted (the
        default).

        An alternative header can be specified in the header argument.  When
        the Content-Type header is set, we'll always also add a MIME-Version
        header.
        """
        # BAW: should we be strict?
        if not type.count('/') == 1:
            raise ValueError
        # Set the Content-Type, you get a MIME-Version
        if header.lower() == 'content-type':
            del self['mime-version']
            self['MIME-Version'] = '1.0'
        if header not in self:
            self[header] = type
            return
        params = self.get_params(header=header, unquote=requote)
        del self[header]
        self[header] = type
        # Skip the first param; it's the old type.
        for p, v in params[1:]:
            self.set_param(p, v, header, requote)

    def get_filename(self, failobj=None):
        """Return the filename associated with the payload if present.

        The filename is extracted from the Content-Disposition header's
        `filename' parameter, and it is unquoted.  If that header is missing
        the `filename' parameter, this method falls back to looking for the
        `name' parameter.
        """
        missing = object()
        filename = self.get_param('filename', missing, 'content-disposition')
        if filename is missing:
            filename = self.get_param('name', missing, 'content-type')
        if filename is missing:
            return failobj
        return utils.collapse_rfc2231_value(filename).strip()

    def get_boundary(self, failobj=None):
        """Return the boundary associated with the payload if present.

        The boundary is extracted from the Content-Type header's `boundary'
        parameter, and it is unquoted.
        """
        missing = object()
        boundary = self.get_param('boundary', missing)
        if boundary is missing:
            return failobj
        # RFC 2046 says that boundaries may begin but not end in w/s
        return utils.collapse_rfc2231_value(boundary).rstrip()

    def set_boundary(self, boundary):
        """Set the boundary parameter in Content-Type to 'boundary'.

        This is subtly different than deleting the Content-Type header and
        adding a new one with a new boundary parameter via add_header().  The
        main difference is that using the set_boundary() method preserves the
        order of the Content-Type header in the original message.

        HeaderParseError is raised if the message has no Content-Type header.
        """
        missing = object()
        params = self._get_params_preserve(missing, 'content-type')
        if params is missing:
            # There was no Content-Type header, and we don't know what type
            # to set it to, so raise an exception.
            raise errors.HeaderParseError('No Content-Type header found')
        newparams = list()
        foundp = False
        for pk, pv in params:
            if pk.lower() == 'boundary':
                newparams.append(('boundary', '"%s"' % boundary))
                foundp = True
            else:
                newparams.append((pk, pv))
        if not foundp:
            # The original Content-Type header had no boundary attribute.
            # Tack one on the end.  BAW: should we raise an exception
            # instead???
            newparams.append(('boundary', '"%s"' % boundary))
        # Replace the existing Content-Type header with the new value
        newheaders = list()
        for h, v in self._headers:
            if h.lower() == 'content-type':
                parts = list()
                for k, v in newparams:
                    if v == '':
                        parts.append(k)
                    else:
                        parts.append('%s=%s' % (k, v))
                val = SEMISPACE.join(parts)
                newheaders.append(self.policy.header_store_parse(h, val))

            else:
                newheaders.append((h, v))
        self._headers = newheaders

    def get_content_charset(self, failobj=None):
        """Return the charset parameter of the Content-Type header.

        The returned string is always coerced to lower case.  If there is no
        Content-Type header, or if that header has no charset parameter,
        failobj is returned.
        """
        missing = object()
        charset = self.get_param('charset', missing)
        if charset is missing:
            return failobj
        if isinstance(charset, tuple):
            # RFC 2231 encoded, so decode it, and it better end up as ascii.
            pcharset = charset[0] or 'us-ascii'
            try:
                # LookupError will be raised if the charset isn't known to
                # Python.  UnicodeError will be raised if the encoded text
                # contains a character not in the charset.
                as_bytes = charset[2].encode('raw-unicode-escape')
                charset = str(as_bytes, pcharset)
            except (LookupError, UnicodeError):
                charset = charset[2]
        # charset characters must be in us-ascii range
        try:
            charset.encode('us-ascii')
        except UnicodeError:
            return failobj
        # RFC 2046, $4.1.2 says charsets are not case sensitive
        return charset.lower()

    def get_charsets(self, failobj=None):
        """Return a list containing the charset(s) used in this message.

        The returned list of items describes the Content-Type headers'
        charset parameter for this message and all the subparts in its
        payload.

        Each item will either be a string (the value of the charset parameter
        in the Content-Type header of that part) or the value of the
        'failobj' parameter (defaults to None), if the part does not have a
        main MIME type of "text", or the charset is not defined.

        The list will contain one string for each part of the message, plus
        one for the container message (i.e. self), so that a non-multipart
        message will still return a list of length 1.
        """
        return [part.get_content_charset(failobj) for part in self.walk()]

    # I.e. def walk(self): ...
    from future.backports.email.iterators import walk
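
# A quick sketch of how the parameter/boundary helpers above behave, shown
# against the stdlib ``email.message.Message`` API that this backport tracks
# (editor's example, not part of the original module):

```python
from email.message import Message

# Build a message and exercise the Content-Type parameter helpers.
msg = Message()
msg['Content-Type'] = 'multipart/mixed; boundary="abc"'

# get_boundary()/set_boundary() read and rewrite the boundary parameter
# in place, preserving the header's position in the message.
assert msg.get_boundary() == 'abc'
msg.set_boundary('xyz')
assert msg.get_boundary() == 'xyz'

# set_param() appends a parameter; get_filename() falls back to the
# Content-Type 'name' parameter when Content-Disposition is absent.
msg.set_param('name', 'report.txt', header='Content-Type')
assert msg.get_filename() == 'report.txt'

# set_type() swaps the main type/subtype and adds a MIME-Version header,
# keeping the existing parameters in place.
msg.set_type('multipart/alternative')
assert msg['MIME-Version'] == '1.0'
assert msg.get_boundary() == 'xyz'
```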
# future/backports/email/utils.py
# Copyright (C) 2001-2010 Python Software Foundation
# Author: Barry Warsaw
# Contact: email-sig@python.org

"""Miscellaneous utilities."""

from __future__ import unicode_literals
from __future__ import division
from __future__ import absolute_import
from future import utils
from future.builtins import bytes, int, str

__all__ = [
    'collapse_rfc2231_value',
    'decode_params',
    'decode_rfc2231',
    'encode_rfc2231',
    'formataddr',
    'formatdate',
    'format_datetime',
    'getaddresses',
    'make_msgid',
    'mktime_tz',
    'parseaddr',
    'parsedate',
    'parsedate_tz',
    'parsedate_to_datetime',
    'unquote',
    ]

import os
import re
if utils.PY2:
    re.ASCII = 0
import time
import base64
import random
import socket
from future.backports import datetime
from future.backports.urllib.parse import quote as url_quote, unquote as url_unquote
import warnings
from io import StringIO

from future.backports.email._parseaddr import quote
from future.backports.email._parseaddr import AddressList as _AddressList
from future.backports.email._parseaddr import mktime_tz

from future.backports.email._parseaddr import parsedate, parsedate_tz, _parsedate_tz

from quopri import decodestring as _qdecode

# Intrapackage imports
from future.backports.email.encoders import _bencode, _qencode
from future.backports.email.charset import Charset

COMMASPACE = ', '
EMPTYSTRING = ''
UEMPTYSTRING = ''
CRLF = '\r\n'
TICK = "'"

specialsre = re.compile(r'[][\\()<>@,:;".]')
escapesre = re.compile(r'[\\"]')

# How to figure out if we are processing strings that come from a byte
# source with undecodable characters.
_has_surrogates = re.compile(
    '([^\ud800-\udbff]|\A)[\udc00-\udfff]([^\udc00-\udfff]|\Z)').search

# How to deal with a string containing bytes before handing it to the
# application through the 'normal' interface.
def _sanitize(string):
    # Turn any escaped bytes into unicode 'unknown' char.
    original_bytes = string.encode('ascii', 'surrogateescape')
    return original_bytes.decode('ascii', 'replace')


# Helpers

def formataddr(pair, charset='utf-8'):
    """The inverse of parseaddr(), this takes a 2-tuple of the form
    (realname, email_address) and returns the string value suitable
    for an RFC 2822 From, To or Cc header.

    If the first element of pair is false, then the second element is
    returned unmodified.

    Optional charset if given is the character set that is used to encode
    realname in case realname is not ASCII safe.  Can be an instance of str or
    a Charset-like object which has a header_encode method.  Default is
    'utf-8'.
    """
    name, address = pair
    # The address MUST (per RFC) be ascii, so raise a UnicodeError if it isn't.
    address.encode('ascii')
    if name:
        try:
            name.encode('ascii')
        except UnicodeEncodeError:
            if isinstance(charset, str):
                charset = Charset(charset)
            encoded_name = charset.header_encode(name)
            return "%s <%s>" % (encoded_name, address)
        else:
            quotes = ''
            if specialsre.search(name):
                quotes = '"'
            name = escapesre.sub(r'\\\g<0>', name)
            return '%s%s%s <%s>' % (quotes, name, quotes, address)
    return address



def getaddresses(fieldvalues):
    """Return a list of (REALNAME, EMAIL) for each fieldvalue."""
    all = COMMASPACE.join(fieldvalues)
    a = _AddressList(all)
    return a.addresslist
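
# A short round trip through formataddr()/parseaddr()/getaddresses(), shown
# against the stdlib ``email.utils`` that this module backports (editor's
# example, not part of the original module):

```python
from email.utils import formataddr, getaddresses, parseaddr

# A simple realname/address pair round-trips cleanly.
assert formataddr(('Barry', 'barry@example.com')) == 'Barry <barry@example.com>'
assert parseaddr('Barry <barry@example.com>') == ('Barry', 'barry@example.com')

# Names containing specials (here, a comma) get quoted on output.
assert formataddr(('Warsaw, Barry', 'barry@example.com')) == \
    '"Warsaw, Barry" <barry@example.com>'

# getaddresses() splits a To/Cc-style field into (realname, email) pairs.
pairs = getaddresses(['A <a@example.org>, B <b@example.org>'])
assert pairs == [('A', 'a@example.org'), ('B', 'b@example.org')]
```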



ecre = re.compile(r'''
  =\?                   # literal =?
  (?P<charset>[^?]*?)   # non-greedy up to the next ? is the charset
  \?                    # literal ?
  (?P<encoding>[qb])    # either a "q" or a "b", case insensitive
  \?                    # literal ?
  (?P<atom>.*?)         # non-greedy up to the next ?= is the atom
  \?=                   # literal ?=
  ''', re.VERBOSE | re.IGNORECASE)


def _format_timetuple_and_zone(timetuple, zone):
    return '%s, %02d %s %04d %02d:%02d:%02d %s' % (
        ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun'][timetuple[6]],
        timetuple[2],
        ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
         'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'][timetuple[1] - 1],
        timetuple[0], timetuple[3], timetuple[4], timetuple[5],
        zone)

def formatdate(timeval=None, localtime=False, usegmt=False):
    """Returns a date string as specified by RFC 2822, e.g.:

    Fri, 09 Nov 2001 01:08:47 -0000

    Optional timeval if given is a floating point time value as accepted by
    gmtime() and localtime(), otherwise the current time is used.

    Optional localtime is a flag that, when True, interprets timeval and
    returns a date relative to the local timezone instead of UTC, properly
    taking daylight savings time into account.

    Optional argument usegmt means that the timezone is written out as
    an ascii string, not a numeric one (so "GMT" instead of "+0000"). This
    is needed for HTTP, and is only used when localtime==False.
    """
    # Note: we cannot use strftime() because that honors the locale and RFC
    # 2822 requires that day and month names be the English abbreviations.
    if timeval is None:
        timeval = time.time()
    if localtime:
        now = time.localtime(timeval)
        # Calculate timezone offset, based on whether the local zone has
        # daylight savings time, and whether DST is in effect.
        if time.daylight and now[-1]:
            offset = time.altzone
        else:
            offset = time.timezone
        hours, minutes = divmod(abs(offset), 3600)
        # Remember offset is in seconds west of UTC, but the timezone is in
        # minutes east of UTC, so the signs differ.
        if offset > 0:
            sign = '-'
        else:
            sign = '+'
        zone = '%s%02d%02d' % (sign, hours, minutes // 60)
    else:
        now = time.gmtime(timeval)
        # Timezone offset is always -0000
        if usegmt:
            zone = 'GMT'
        else:
            zone = '-0000'
    return _format_timetuple_and_zone(now, zone)
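
# For example, rendering the epoch via the stdlib equivalent (editor's
# example, not part of the original module):

```python
from email.utils import formatdate

# The epoch, rendered per RFC 2822; usegmt swaps "-0000" for "GMT".
assert formatdate(0) == 'Thu, 01 Jan 1970 00:00:00 -0000'
assert formatdate(0, usegmt=True) == 'Thu, 01 Jan 1970 00:00:00 GMT'
```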

def format_datetime(dt, usegmt=False):
    """Turn a datetime into a date string as specified in RFC 2822.

    If usegmt is True, dt must be an aware datetime with an offset of zero.  In
    this case 'GMT' will be rendered instead of the normal +0000 required by
    RFC2822.  This is to support HTTP headers involving date stamps.
    """
    now = dt.timetuple()
    if usegmt:
        if dt.tzinfo is None or dt.tzinfo != datetime.timezone.utc:
            raise ValueError("usegmt option requires a UTC datetime")
        zone = 'GMT'
    elif dt.tzinfo is None:
        zone = '-0000'
    else:
        zone = dt.strftime("%z")
    return _format_timetuple_and_zone(now, zone)


def make_msgid(idstring=None, domain=None):
    """Returns a string suitable for RFC 2822 compliant Message-ID, e.g:

    <20020201195627.33539.96671@nightshade.la.mastaler.com>

    Optional idstring if given is a string used to strengthen the
    uniqueness of the message id.  Optional domain if given provides the
    portion of the message id after the '@'.  It defaults to the locally
    defined hostname.
    """
    timeval = time.time()
    utcdate = time.strftime('%Y%m%d%H%M%S', time.gmtime(timeval))
    pid = os.getpid()
    randint = random.randrange(100000)
    if idstring is None:
        idstring = ''
    else:
        idstring = '.' + idstring
    if domain is None:
        domain = socket.getfqdn()
    msgid = '<%s.%s.%s%s@%s>' % (utcdate, pid, randint, idstring, domain)
    return msgid


def parsedate_to_datetime(data):
    # _parsedate_tz returns a 10-tuple whose last item is the tz offset.
    parsed = _parsedate_tz(data)
    dtuple, tz = parsed[:-1], parsed[-1]
    if tz is None:
        return datetime.datetime(*dtuple[:6])
    return datetime.datetime(*dtuple[:6],
            tzinfo=datetime.timezone(datetime.timedelta(seconds=tz)))


def parseaddr(addr):
    addrs = _AddressList(addr).addresslist
    if not addrs:
        return '', ''
    return addrs[0]


# rfc822.unquote() doesn't properly de-backslash-ify in Python pre-2.3.
def unquote(str):
    """Remove quotes from a string."""
    if len(str) > 1:
        if str.startswith('"') and str.endswith('"'):
            return str[1:-1].replace('\\\\', '\\').replace('\\"', '"')
        if str.startswith('<') and str.endswith('>'):
            return str[1:-1]
    return str



# RFC2231-related functions - parameter encoding and decoding
def decode_rfc2231(s):
    """Decode string according to RFC 2231"""
    parts = s.split(TICK, 2)
    if len(parts) <= 2:
        return None, None, s
    return parts


def encode_rfc2231(s, charset=None, language=None):
    """Encode string according to RFC 2231.

    If neither charset nor language is given, then s is returned as-is.  If
    charset is given but not language, the string is encoded using the empty
    string for language.
    """
    s = url_quote(s, safe='', encoding=charset or 'ascii')
    if charset is None and language is None:
        return s
    if language is None:
        language = ''
    return "%s'%s'%s" % (charset, language, s)


rfc2231_continuation = re.compile(r'^(?P<name>\w+)\*((?P<num>[0-9]+)\*?)?$',
    re.ASCII)

def decode_params(params):
    """Decode parameters list according to RFC 2231.

    params is a sequence of 2-tuples containing (param name, string value).
    """
    # Copy params so we don't mess with the original
    params = params[:]
    new_params = []
    # Map parameter's name to a list of continuations.  The values are a
    # 3-tuple of the continuation number, the string value, and a flag
    # specifying whether a particular segment is %-encoded.
    rfc2231_params = {}
    name, value = params.pop(0)
    new_params.append((name, value))
    while params:
        name, value = params.pop(0)
        if name.endswith('*'):
            encoded = True
        else:
            encoded = False
        value = unquote(value)
        mo = rfc2231_continuation.match(name)
        if mo:
            name, num = mo.group('name', 'num')
            if num is not None:
                num = int(num)
            rfc2231_params.setdefault(name, []).append((num, value, encoded))
        else:
            new_params.append((name, '"%s"' % quote(value)))
    if rfc2231_params:
        for name, continuations in rfc2231_params.items():
            value = []
            extended = False
            # Sort by number
            continuations.sort()
            # And now append all values in numerical order, converting
            # %-encodings for the encoded segments.  If any of the
            # continuation names ends in a *, then the entire string, after
            # decoding segments and concatenating, must have the charset and
            # language specifiers at the beginning of the string.
            for num, s, encoded in continuations:
                if encoded:
                    # Decode as "latin-1", so the characters in s directly
                    # represent the percent-encoded octet values.
                    # collapse_rfc2231_value treats this as an octet sequence.
                    s = url_unquote(s, encoding="latin-1")
                    extended = True
                value.append(s)
            value = quote(EMPTYSTRING.join(value))
            if extended:
                charset, language, value = decode_rfc2231(value)
                new_params.append((name, (charset, language, '"%s"' % value)))
            else:
                new_params.append((name, '"%s"' % value))
    return new_params

def collapse_rfc2231_value(value, errors='replace',
                           fallback_charset='us-ascii'):
    if not isinstance(value, tuple) or len(value) != 3:
        return unquote(value)
    # While value comes to us as a unicode string, we need it to be a bytes
    # object.  We do not want bytes()'s normal utf-8 decoding; we want a
    # straight interpretation of the string as character bytes.
    charset, language, text = value
    rawbytes = bytes(text, 'raw-unicode-escape')
    try:
        return str(rawbytes, charset, errors)
    except LookupError:
        # charset is not a known codec.
        return unquote(text)


#
# datetime doesn't provide a localtime function yet, so provide one.  Code
# adapted from the patch in issue 9527.  This may not be perfect, but it is
# better than not having it.
#

def localtime(dt=None, isdst=-1):
    """Return local time as an aware datetime object.

    If called without arguments, return current time.  Otherwise *dt*
    argument should be a datetime instance, and it is converted to the
    local time zone according to the system time zone database.  If *dt* is
    naive (that is, dt.tzinfo is None), it is assumed to be in local time.
    In this case, a positive or zero value for *isdst* causes localtime to
    presume initially that summer time (for example, Daylight Saving Time)
    is or is not (respectively) in effect for the specified time.  A
    negative value for *isdst* causes the localtime() function to attempt
    to divine whether summer time is in effect for the specified time.

    """
    if dt is None:
        return datetime.datetime.now(datetime.timezone.utc).astimezone()
    if dt.tzinfo is not None:
        return dt.astimezone()
    # We have a naive datetime.  Convert to a (localtime) timetuple and pass to
    # system mktime together with the isdst hint.  System mktime will return
    # seconds since epoch.
    tm = dt.timetuple()[:-1] + (isdst,)
    seconds = time.mktime(tm)
    localtm = time.localtime(seconds)
    try:
        delta = datetime.timedelta(seconds=localtm.tm_gmtoff)
        tz = datetime.timezone(delta, localtm.tm_zone)
    except AttributeError:
        # Compute UTC offset and compare with the value implied by tm_isdst.
        # If the values match, use the zone name implied by tm_isdst.
        delta = dt - datetime.datetime(*time.gmtime(seconds)[:6])
        dst = time.daylight and localtm.tm_isdst > 0
        gmtoff = -(time.altzone if dst else time.timezone)
        if delta == datetime.timedelta(seconds=gmtoff):
            tz = datetime.timezone(delta, time.tzname[dst])
        else:
            tz = datetime.timezone(delta)
    return dt.replace(tzinfo=tz)
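
# The RFC 2231 helpers above round-trip like this, shown against the stdlib
# ``email.utils`` equivalents (editor's example, not part of the original
# module):

```python
from email.utils import decode_rfc2231, encode_rfc2231

# Plain ASCII with no charset/language passes through untouched.
assert encode_rfc2231('plain.txt') == 'plain.txt'

# Non-ASCII text is %-encoded and prefixed with charset''language.
encoded = encode_rfc2231('h\u00e9llo.txt', charset='utf-8')
assert encoded == "utf-8''h%C3%A9llo.txt"

# decode_rfc2231 splits the charset'language'value triple back apart;
# the value part is still %-encoded (decode_params handles unquoting).
charset, language, text = decode_rfc2231(encoded)
assert (charset, language, text) == ('utf-8', '', 'h%C3%A9llo.txt')
```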
# future/backports/email/errors.py
# Copyright (C) 2001-2006 Python Software Foundation
# Author: Barry Warsaw
# Contact: email-sig@python.org

"""email package exception classes."""
from __future__ import unicode_literals
from __future__ import division
from __future__ import absolute_import
from future.builtins import super


class MessageError(Exception):
    """Base class for errors in the email package."""


class MessageParseError(MessageError):
    """Base class for message parsing errors."""


class HeaderParseError(MessageParseError):
    """Error while parsing headers."""


class BoundaryError(MessageParseError):
    """Couldn't find terminating boundary."""


class MultipartConversionError(MessageError, TypeError):
    """Conversion to a multipart is prohibited."""


class CharsetError(MessageError):
    """An illegal charset was given."""


# These are parsing defects which the parser was able to work around.
class MessageDefect(ValueError):
    """Base class for a message defect."""

    def __init__(self, line=None):
        if line is not None:
            super().__init__(line)
        self.line = line

class NoBoundaryInMultipartDefect(MessageDefect):
    """A message claimed to be a multipart but had no boundary parameter."""

class StartBoundaryNotFoundDefect(MessageDefect):
    """The claimed start boundary was never found."""

class CloseBoundaryNotFoundDefect(MessageDefect):
    """A start boundary was found, but not the corresponding close boundary."""

class FirstHeaderLineIsContinuationDefect(MessageDefect):
    """A message had a continuation line as its first header line."""

class MisplacedEnvelopeHeaderDefect(MessageDefect):
    """A 'Unix-from' header was found in the middle of a header block."""

class MissingHeaderBodySeparatorDefect(MessageDefect):
    """Found line with no leading whitespace and no colon before blank line."""
# XXX: backward compatibility, just in case (it was never emitted).
MalformedHeaderDefect = MissingHeaderBodySeparatorDefect

class MultipartInvariantViolationDefect(MessageDefect):
    """A message claimed to be a multipart but no subparts were found."""

class InvalidMultipartContentTransferEncodingDefect(MessageDefect):
    """An invalid content transfer encoding was set on the multipart itself."""

class UndecodableBytesDefect(MessageDefect):
    """Header contained bytes that could not be decoded"""

class InvalidBase64PaddingDefect(MessageDefect):
    """base64 encoded sequence had an incorrect length"""

class InvalidBase64CharactersDefect(MessageDefect):
    """base64 encoded sequence had characters not in base64 alphabet"""

# These errors are specific to header parsing.

class HeaderDefect(MessageDefect):
    """Base class for a header defect."""

    def __init__(self, *args, **kw):
        super().__init__(*args, **kw)

class InvalidHeaderDefect(HeaderDefect):
    """Header is not valid, message gives details."""

class HeaderMissingRequiredValue(HeaderDefect):
    """A header that must have a value had none"""

class NonPrintableDefect(HeaderDefect):
    """ASCII characters outside the ascii-printable range found"""

    def __init__(self, non_printables):
        super().__init__(non_printables)
        self.non_printables = non_printables

    def __str__(self):
        return ("the following ASCII non-printables found in header: "
            "{}".format(self.non_printables))

class ObsoleteHeaderDefect(HeaderDefect):
    """Header uses syntax declared obsolete by RFC 5322"""

class NonASCIILocalPartDefect(HeaderDefect):
    """local_part contains non-ASCII characters"""
    # This defect only occurs during unicode parsing, not when
    # parsing messages decoded from binary.
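
# The class hierarchy above can be exercised directly, here against the
# stdlib ``email.errors`` that this module backports (editor's example, not
# part of the original module):

```python
import email
from email import errors

# Hard errors form one hierarchy rooted at MessageError...
assert issubclass(errors.HeaderParseError, errors.MessageParseError)
assert issubclass(errors.MessageParseError, errors.MessageError)

# ...while defects are ValueErrors the parser records instead of raising.
assert issubclass(errors.MessageDefect, ValueError)
assert errors.MalformedHeaderDefect is errors.MissingHeaderBodySeparatorDefect

# A multipart with no boundary parameter is recorded as a defect, not an
# exception, when parsed with the default (compat32) policy.
msg = email.message_from_string('Content-Type: multipart/mixed\n\nbody\n')
assert any(isinstance(d, errors.NoBoundaryInMultipartDefect)
           for d in msg.defects)
```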
# future/backports/email/policy.py
"""This will be the home for the policy that hooks in the new
code that adds all the email6 features.
"""
from __future__ import unicode_literals
from __future__ import division
from __future__ import absolute_import
from future.builtins import super

from future.standard_library.email._policybase import (Policy, Compat32,
                                                  compat32, _extend_docstrings)
from future.standard_library.email.utils import _has_surrogates
from future.standard_library.email.headerregistry import HeaderRegistry as HeaderRegistry

__all__ = [
    'Compat32',
    'compat32',
    'Policy',
    'EmailPolicy',
    'default',
    'strict',
    'SMTP',
    'HTTP',
    ]

@_extend_docstrings
class EmailPolicy(Policy):

    """+
    PROVISIONAL

    The API extensions enabled by this policy are currently provisional.
    Refer to the documentation for details.

    This policy adds new header parsing and folding algorithms.  Instead of
    simple strings, headers are custom objects with custom attributes
    depending on the type of the field.  The folding algorithm fully
    implements RFCs 2047 and 5322.

    In addition to the settable attributes listed above that apply to
    all Policies, this policy adds the following additional attributes:

    refold_source       -- if the value for a header in the Message object
                           came from the parsing of some source, this attribute
                           indicates whether or not a generator should refold
                           that value when transforming the message back into
                           stream form.  The possible values are:

                           none  -- all source values use original folding
                           long  -- source values that have any line that is
                                    longer than max_line_length will be
                                    refolded
                           all  -- all values are refolded.

                           The default is 'long'.

    header_factory      -- a callable that takes two arguments, 'name' and
                           'value', where 'name' is a header field name and
                           'value' is an unfolded header field value, and
                           returns a string-like object that represents that
                           header.  A default header_factory is provided that
                           understands some of the RFC5322 header field types.
                           (Currently address fields and date fields have
                           special treatment, while all other fields are
                           treated as unstructured.  This list will be
                           completed before the extension is marked stable.)
    """

    refold_source = 'long'
    header_factory = HeaderRegistry()

    def __init__(self, **kw):
        # Ensure that each new instance gets a unique header factory
        # (as opposed to clones, which share the factory).
        if 'header_factory' not in kw:
            object.__setattr__(self, 'header_factory', HeaderRegistry())
        super().__init__(**kw)

    def header_max_count(self, name):
        """+
        The implementation for this class returns the max_count attribute from
        the specialized header class that would be used to construct a header
        of type 'name'.
        """
        return self.header_factory[name].max_count

    # The logic of the next three methods is chosen such that it is possible to
    # switch a Message object between a Compat32 policy and a policy derived
    # from this class and have the results stay consistent.  This allows a
    # Message object constructed with this policy to be passed to a library
    # that only handles Compat32 objects, or to receive such an object and
    # convert it to use the newer style by just changing its policy.  It is
    # also chosen because it postpones the relatively expensive full rfc5322
    # parse until as late as possible when parsing from source, since in many
    # applications only a few headers will actually be inspected.

    def header_source_parse(self, sourcelines):
        """+
        The name is parsed as everything up to the ':' and returned unmodified.
        The value is determined by stripping leading whitespace off the
        remainder of the first line, joining all subsequent lines together, and
        stripping any trailing carriage return or linefeed characters.  (This
        is the same as Compat32).

        """
        name, value = sourcelines[0].split(':', 1)
        value = value.lstrip(' \t') + ''.join(sourcelines[1:])
        return (name, value.rstrip('\r\n'))

    def header_store_parse(self, name, value):
        """+
        The name is returned unchanged.  If the input value has a 'name'
        attribute and it matches the name ignoring case, the value is returned
        unchanged.  Otherwise the name and value are passed to header_factory
        method, and the resulting custom header object is returned as the
        value.  In this case a ValueError is raised if the input value contains
        CR or LF characters.

        """
        if hasattr(value, 'name') and value.name.lower() == name.lower():
            return (name, value)
        if isinstance(value, str) and len(value.splitlines())>1:
            raise ValueError("Header values may not contain linefeed "
                             "or carriage return characters")
        return (name, self.header_factory(name, value))

    def header_fetch_parse(self, name, value):
        """+
        If the value has a 'name' attribute, it is returned unmodified.
        Otherwise the name and the value with any linesep characters removed
        are passed to the header_factory method, and the resulting custom
        header object is returned.  Any surrogateescaped bytes get turned
        into the unicode unknown-character glyph.

        """
        if hasattr(value, 'name'):
            return value
        return self.header_factory(name, ''.join(value.splitlines()))

    def fold(self, name, value):
        """+
        Header folding is controlled by the refold_source policy setting.  A
        value is considered to be a 'source value' if and only if it does not
        have a 'name' attribute (having a 'name' attribute means it is a header
        object of some sort).  If a source value needs to be refolded according
        to the policy, it is converted into a custom header object by passing
        the name and the value with any linesep characters removed to the
        header_factory method.  Folding of a custom header object is done by
        calling its fold method with the current policy.

        Source values are split into lines using splitlines.  If the value is
        not to be refolded, the lines are rejoined using the linesep from the
        policy and returned.  The exception is lines containing non-ascii
        binary data.  In that case the value is refolded regardless of the
        refold_source setting, which causes the binary data to be CTE encoded
        using the unknown-8bit charset.

        """
        return self._fold(name, value, refold_binary=True)

    def fold_binary(self, name, value):
        """+
        The same as fold if cte_type is 7bit, except that the returned value is
        bytes.

        If cte_type is 8bit, non-ASCII binary data is converted back into
        bytes.  Headers with binary data are not refolded, regardless of the
        refold_source setting, since there is no way to know whether the binary
        data consists of single byte characters or multibyte characters.

        """
        folded = self._fold(name, value, refold_binary=self.cte_type=='7bit')
        return folded.encode('ascii', 'surrogateescape')

    def _fold(self, name, value, refold_binary=False):
        if hasattr(value, 'name'):
            return value.fold(policy=self)
        maxlen = self.max_line_length if self.max_line_length else float('inf')
        lines = value.splitlines()
        refold = (self.refold_source == 'all' or
                  self.refold_source == 'long' and
                    (lines and len(lines[0])+len(name)+2 > maxlen or
                     any(len(x) > maxlen for x in lines[1:])))
        if refold or refold_binary and _has_surrogates(value):
            return self.header_factory(name, ''.join(lines)).fold(policy=self)
        return name + ': ' + self.linesep.join(lines) + self.linesep


default = EmailPolicy()
# Make the default policy use the class default header_factory
del default.header_factory
strict = default.clone(raise_on_defect=True)
SMTP = default.clone(linesep='\r\n')
HTTP = default.clone(linesep='\r\n', max_line_length=None)
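The policy instances defined above are re-exported by the public `email.policy` module, so their cloned attributes can be checked directly (a minimal sketch; assumes a standard CPython install):

```python
# Inspect the policy instances defined above via the public
# email.policy module, which re-exports them.
from email.policy import default, strict, SMTP, HTTP

assert default.linesep == '\n'
assert strict.raise_on_defect is True
assert SMTP.linesep == '\r\n'
assert HTTP.max_line_length is None

# fold() joins the header name and value using the policy's linesep:
assert default.fold('Subject', 'Hello') == 'Subject: Hello\n'
assert SMTP.fold('Subject', 'Hello') == 'Subject: Hello\r\n'
```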
# future/utils/__init__.py
"""
A selection of cross-compatible functions for Python 2 and 3.

This module exports useful functions for 2/3 compatible code:

    * bind_method: binds functions to classes
    * ``native_str_to_bytes`` and ``bytes_to_native_str``
    * ``native_str``: always equal to the native platform string object (because
      this may be shadowed by imports from future.builtins)
    * lists: lrange(), lmap(), lzip(), lfilter()
    * iterable method compatibility:
        - iteritems, iterkeys, itervalues
        - viewitems, viewkeys, viewvalues

        These use the original method if available, otherwise they use items,
        keys, values.

    * types:

        * text_type: unicode in Python 2, str in Python 3
        * string_types: basestring in Python 2, str in Python 3
        * binary_type: str in Python 2, bytes in Python 3
        * integer_types: (int, long) in Python 2, int in Python 3
        * class_types: (type, types.ClassType) in Python 2, type in Python 3

    * bchr(c):
        Take an integer and make a 1-character byte string
    * bord(c)
        Take the result of indexing on a byte string and make an integer
    * tobytes(s)
        Take a text string, a byte string, or a sequence of characters taken
        from a byte string, and make a byte string.

    * raise_from()
    * raise_with_traceback()

This module also defines these decorators:

    * ``python_2_unicode_compatible``
    * ``with_metaclass``
    * ``implements_iterator``

Some of the functions in this module come from the following sources:

    * Jinja2 (BSD licensed: see
      https://github.com/mitsuhiko/jinja2/blob/master/LICENSE)
    * Pandas compatibility module pandas.compat
    * six.py by Benjamin Peterson
    * Django
"""

import types
import sys
import numbers
import functools
import copy
import inspect


PY3 = sys.version_info[0] >= 3
PY34_PLUS = sys.version_info[0:2] >= (3, 4)
PY35_PLUS = sys.version_info[0:2] >= (3, 5)
PY36_PLUS = sys.version_info[0:2] >= (3, 6)
PY37_PLUS = sys.version_info[0:2] >= (3, 7)
PY38_PLUS = sys.version_info[0:2] >= (3, 8)
PY39_PLUS = sys.version_info[0:2] >= (3, 9)
PY2 = sys.version_info[0] == 2
PY26 = sys.version_info[0:2] == (2, 6)
PY27 = sys.version_info[0:2] == (2, 7)
PYPY = hasattr(sys, 'pypy_translation_info')


def python_2_unicode_compatible(cls):
    """
    A decorator that defines __unicode__ and __str__ methods under Python
    2. Under Python 3, this decorator is a no-op.

    To support Python 2 and 3 with a single code base, define a __str__
    method returning unicode text and apply this decorator to the class, like
    this::

    >>> from future.utils import python_2_unicode_compatible

    >>> @python_2_unicode_compatible
    ... class MyClass(object):
    ...     def __str__(self):
    ...         return u'Unicode string: \u5b54\u5b50'

    >>> a = MyClass()

    Then, after this import:

    >>> from future.builtins import str

    the following is ``True`` on both Python 3 and 2::

    >>> str(a) == a.encode('utf-8').decode('utf-8')
    True

    and, on a Unicode-enabled terminal with the right fonts, these both print the
    Chinese characters for Confucius::

    >>> print(a)
    >>> print(str(a))

    The implementation comes from django.utils.encoding.
    """
    if not PY3:
        cls.__unicode__ = cls.__str__
        cls.__str__ = lambda self: self.__unicode__().encode('utf-8')
    return cls


def with_metaclass(meta, *bases):
    """
    Function from jinja2/_compat.py. License: BSD.

    Use it like this::

        class BaseForm(object):
            pass

        class FormType(type):
            pass

        class Form(with_metaclass(FormType, BaseForm)):
            pass

    This requires a bit of explanation: the basic idea is to make a
    dummy metaclass for one level of class instantiation that replaces
    itself with the actual metaclass.  Because of internal type checks
    we also need to make sure that we downgrade the custom metaclass
    for one level to something closer to type (that's why __call__ and
    __init__ comes back from type etc.).

    This has the advantage over six.with_metaclass of not introducing
    dummy classes into the final MRO.
    """
    class metaclass(meta):
        __call__ = type.__call__
        __init__ = type.__init__
        def __new__(cls, name, this_bases, d):
            if this_bases is None:
                return type.__new__(cls, name, (), d)
            return meta(name, bases, d)
    return metaclass('temporary_class', None, {})
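The docstring's claim about the MRO can be checked with a self-contained copy of the trick (the `*Demo`/`_sketch` names are renamed duplicates used only for this illustration):

```python
# Self-contained copy of the with_metaclass() trick above, so the
# "no dummy class in the final MRO" claim can be checked directly.
class BaseFormDemo(object):
    pass

class FormTypeDemo(type):
    pass

def with_metaclass_sketch(meta, *bases):
    class metaclass(meta):
        __call__ = type.__call__
        __init__ = type.__init__
        def __new__(cls, name, this_bases, d):
            if this_bases is None:
                # First pass: create the throwaway placeholder class.
                return type.__new__(cls, name, (), d)
            # Second pass: defer to the real metaclass with the real bases.
            return meta(name, bases, d)
    return metaclass('temporary_class', None, {})

class FormDemo(with_metaclass_sketch(FormTypeDemo, BaseFormDemo)):
    pass

assert type(FormDemo) is FormTypeDemo
assert FormDemo.__mro__ == (FormDemo, BaseFormDemo, object)  # no dummy class
```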


# Definitions from pandas.compat and six.py follow:
if PY3:
    def bchr(s):
        return bytes([s])
    def bstr(s):
        if isinstance(s, str):
            return bytes(s, 'latin-1')
        else:
            return bytes(s)
    def bord(s):
        return s

    string_types = str,
    integer_types = int,
    class_types = type,
    text_type = str
    binary_type = bytes

else:
    # Python 2
    def bchr(s):
        return chr(s)
    def bstr(s):
        return str(s)
    def bord(s):
        return ord(s)

    string_types = basestring,
    integer_types = (int, long)
    class_types = (type, types.ClassType)
    text_type = unicode
    binary_type = str

###

if PY3:
    def tobytes(s):
        if isinstance(s, bytes):
            return s
        else:
            if isinstance(s, str):
                return s.encode('latin-1')
            else:
                return bytes(s)
else:
    # Python 2
    def tobytes(s):
        if isinstance(s, unicode):
            return s.encode('latin-1')
        else:
            return ''.join(s)

tobytes.__doc__ = """
    Encodes to latin-1 (whose 256 code points map one-to-one onto the
    first 256 Unicode code points; the first 128 coincide with ASCII).
    """

if PY3:
    def native_str_to_bytes(s, encoding='utf-8'):
        return s.encode(encoding)

    def bytes_to_native_str(b, encoding='utf-8'):
        return b.decode(encoding)

    def text_to_native_str(t, encoding=None):
        return t
else:
    # Python 2
    def native_str_to_bytes(s, encoding=None):
        from future.types import newbytes    # to avoid a circular import
        return newbytes(s)

    def bytes_to_native_str(b, encoding=None):
        return native(b)

    def text_to_native_str(t, encoding='ascii'):
        """
        Use this to create a Py2 native string when "from __future__ import
        unicode_literals" is in effect.
        """
        return unicode(t).encode(encoding)

native_str_to_bytes.__doc__ = """
    On Py3, returns an encoded string.
    On Py2, returns a newbytes type, ignoring the ``encoding`` argument.
    """

if PY3:
    # list-producing versions of the major Python iterating functions
    def lrange(*args, **kwargs):
        return list(range(*args, **kwargs))

    def lzip(*args, **kwargs):
        return list(zip(*args, **kwargs))

    def lmap(*args, **kwargs):
        return list(map(*args, **kwargs))

    def lfilter(*args, **kwargs):
        return list(filter(*args, **kwargs))
else:
    import __builtin__
    # Python 2-builtin ranges produce lists
    lrange = __builtin__.range
    lzip = __builtin__.zip
    lmap = __builtin__.map
    lfilter = __builtin__.filter


def isidentifier(s, dotted=False):
    '''
    A function equivalent to the str.isidentifier method on Py3
    '''
    if dotted:
        return all(isidentifier(a) for a in s.split('.'))
    if PY3:
        return s.isidentifier()
    else:
        import re
        _name_re = re.compile(r"[a-zA-Z_][a-zA-Z0-9_]*$")
        return bool(_name_re.match(s))
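On Python 3 the helper reduces to `str.isidentifier` applied per dotted component; a Py3-only sketch (`isidentifier_py3` is a hypothetical name):

```python
# Py3-only sketch of isidentifier() above: each dotted component must
# itself be a valid Python identifier.
def isidentifier_py3(s, dotted=False):
    if dotted:
        return all(isidentifier_py3(a) for a in s.split('.'))
    return s.isidentifier()

assert isidentifier_py3('foo_bar')
assert isidentifier_py3('os.path.join', dotted=True)
assert not isidentifier_py3('2fast')
assert not isidentifier_py3('a-b')
```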


def viewitems(obj, **kwargs):
    """
    Function for iterating over dictionary items with the same set-like
    behaviour on Py2.7 as on Py3.

    Passes kwargs to method."""
    func = getattr(obj, "viewitems", None)
    if not func:
        func = obj.items
    return func(**kwargs)


def viewkeys(obj, **kwargs):
    """
    Function for iterating over dictionary keys with the same set-like
    behaviour on Py2.7 as on Py3.

    Passes kwargs to method."""
    func = getattr(obj, "viewkeys", None)
    if not func:
        func = obj.keys
    return func(**kwargs)


def viewvalues(obj, **kwargs):
    """
    Function for iterating over dictionary values with the same set-like
    behaviour on Py2.7 as on Py3.

    Passes kwargs to method."""
    func = getattr(obj, "viewvalues", None)
    if not func:
        func = obj.values
    return func(**kwargs)
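On Python 3 these helpers always take the fallback branch, since dicts have no `viewitems` method; `items()` already returns a live view. A sketch with a hypothetical `_py3` name:

```python
# Py3 path of viewitems() above: no "viewitems" attribute exists, so
# the fallback to items() is taken, which already returns a view.
def viewitems_py3(obj, **kwargs):
    func = getattr(obj, "viewitems", None)
    if not func:
        func = obj.items
    return func(**kwargs)

d = {'a': 1}
items = viewitems_py3(d)
assert ('a', 1) in items
d['b'] = 2
assert ('b', 2) in items   # the view tracks later changes to the dict
```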


def iteritems(obj, **kwargs):
    """Use this only if compatibility with Python versions before 2.7 is
    required. Otherwise, prefer viewitems().
    """
    func = getattr(obj, "iteritems", None)
    if not func:
        func = obj.items
    return func(**kwargs)


def iterkeys(obj, **kwargs):
    """Use this only if compatibility with Python versions before 2.7 is
    required. Otherwise, prefer viewkeys().
    """
    func = getattr(obj, "iterkeys", None)
    if not func:
        func = obj.keys
    return func(**kwargs)


def itervalues(obj, **kwargs):
    """Use this only if compatibility with Python versions before 2.7 is
    required. Otherwise, prefer viewvalues().
    """
    func = getattr(obj, "itervalues", None)
    if not func:
        func = obj.values
    return func(**kwargs)


def bind_method(cls, name, func):
    """Bind a method to class, python 2 and python 3 compatible.

    Parameters
    ----------

    cls : type
        class to receive bound method
    name : basestring
        name of method on class instance
    func : function
        function to be bound as method

    Returns
    -------
    None
    """
    # only python 2 has an issue with bound/unbound methods
    if not PY3:
        setattr(cls, name, types.MethodType(func, None, cls))
    else:
        setattr(cls, name, func)
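On Python 3 the Py2 bound/unbound distinction is gone, so `bind_method` reduces to `setattr`; a minimal sketch (`Greeter`/`hello` are illustrative names):

```python
class Greeter(object):
    pass

def hello(self):
    return "hello from %s" % type(self).__name__

# On Python 3 bind_method() reduces to setattr(): the descriptor
# protocol binds the plain function at attribute lookup time.
setattr(Greeter, "greet", hello)
assert Greeter().greet() == "hello from Greeter"
```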


def getexception():
    return sys.exc_info()[1]


def _get_caller_globals_and_locals():
    """
    Returns the globals and locals of the calling frame.

    Is there an alternative to frame hacking here?
    """
    caller_frame = inspect.stack()[2]
    myglobals = caller_frame[0].f_globals
    mylocals = caller_frame[0].f_locals
    return myglobals, mylocals


def _repr_strip(mystring):
    """
    Returns the string without any initial or final quotes.
    """
    r = repr(mystring)
    if r.startswith("'") and r.endswith("'"):
        return r[1:-1]
    else:
        return r


if PY3:
    def raise_from(exc, cause):
        """
        Equivalent to:

            raise EXCEPTION from CAUSE

        on Python 3. (See PEP 3134).
        """
        myglobals, mylocals = _get_caller_globals_and_locals()

        # We pass the exception and cause along with other globals
        # when we exec():
        myglobals = myglobals.copy()
        myglobals['__python_future_raise_from_exc'] = exc
        myglobals['__python_future_raise_from_cause'] = cause
        execstr = "raise __python_future_raise_from_exc from __python_future_raise_from_cause"
        exec(execstr, myglobals, mylocals)

    def raise_(tp, value=None, tb=None):
        """
        A function that matches the Python 2.x ``raise`` statement.  This
        allows re-raising an exception with an explicit class, value, and
        traceback on both Python 2 and 3.
        """
        if isinstance(tp, BaseException):
            # If the first object is an instance, the type of the exception
            # is the class of the instance, the instance itself is the value,
            # and the second object must be None.
            if value is not None:
                raise TypeError("instance exception may not have a separate value")
            exc = tp
        elif isinstance(tp, type) and not issubclass(tp, BaseException):
            # If the first object is a class, it becomes the type of the
            # exception.
            raise TypeError("class must derive from BaseException, not %s" % tp.__name__)
        else:
            # The second object is used to determine the exception value: If it
            # is an instance of the class, the instance becomes the exception
            # value. If the second object is a tuple, it is used as the argument
            # list for the class constructor; if it is None, an empty argument
            # list is used, and any other object is treated as a single argument
            # to the constructor. The instance so created by calling the
            # constructor is used as the exception value.
            if isinstance(value, tp):
                exc = value
            elif isinstance(value, tuple):
                exc = tp(*value)
            elif value is None:
                exc = tp()
            else:
                exc = tp(value)

        if exc.__traceback__ is not tb:
            raise exc.with_traceback(tb)
        raise exc

    def raise_with_traceback(exc, traceback=Ellipsis):
        if traceback == Ellipsis:
            _, _, traceback = sys.exc_info()
        raise exc.with_traceback(traceback)

else:
    def raise_from(exc, cause):
        """
        Equivalent to:

            raise EXCEPTION from CAUSE

        on Python 3. (See PEP 3134).
        """
        # Is either arg an exception class (e.g. IndexError) rather than
        # an instance (e.g. IndexError('my message here'))? If so, pass
        # the name of the class undisturbed through to "raise ... from ...".
        if isinstance(exc, type) and issubclass(exc, Exception):
            e = exc()
            # exc = exc.__name__
            # execstr = "e = " + _repr_strip(exc) + "()"
            # myglobals, mylocals = _get_caller_globals_and_locals()
            # exec(execstr, myglobals, mylocals)
        else:
            e = exc
        e.__suppress_context__ = False
        if isinstance(cause, type) and issubclass(cause, Exception):
            e.__cause__ = cause()
            e.__cause__.__traceback__ = sys.exc_info()[2]
            e.__suppress_context__ = True
        elif cause is None:
            e.__cause__ = None
            e.__suppress_context__ = True
        elif isinstance(cause, BaseException):
            e.__cause__ = cause
            object.__setattr__(e.__cause__,  '__traceback__', sys.exc_info()[2])
            e.__suppress_context__ = True
        else:
            raise TypeError("exception causes must derive from BaseException")
        e.__context__ = sys.exc_info()[1]
        raise e

    exec('''
def raise_(tp, value=None, tb=None):
    raise tp, value, tb

def raise_with_traceback(exc, traceback=Ellipsis):
    if traceback == Ellipsis:
        _, _, traceback = sys.exc_info()
    raise exc, None, traceback
'''.strip())


raise_with_traceback.__doc__ = (
"""Raise exception with existing traceback.
If traceback is not passed, uses sys.exc_info() to get traceback."""
)


# Deprecated alias for backward compatibility with ``future`` versions < 0.11:
reraise = raise_
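What `raise_()`/`reraise` accomplishes on Python 3 can be written with native syntax; a sketch using the hypothetical helper name `demo_reraise`:

```python
import sys

# Re-raise a captured exception with its original traceback, i.e. the
# effect of raise_(tp, value, tb) on Python 3.
def demo_reraise():
    try:
        1 / 0
    except ZeroDivisionError:
        tp, value, tb = sys.exc_info()
    try:
        raise value.with_traceback(tb)      # i.e. raise_(tp, value, tb)
    except ZeroDivisionError as exc:
        return type(exc).__name__

assert demo_reraise() == 'ZeroDivisionError'
```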


def implements_iterator(cls):
    '''
    From jinja2/_compat.py. License: BSD.

    Use as a decorator like this::

        @implements_iterator
        class UppercasingIterator(object):
            def __init__(self, iterable):
                self._iter = iter(iterable)
            def __iter__(self):
                return self
            def __next__(self):
                return next(self._iter).upper()

    '''
    if PY3:
        return cls
    else:
        cls.next = cls.__next__
        del cls.__next__
        return cls
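On Python 3 the decorator is a no-op, so the docstring's example can be run directly (`implements_iterator_py3` is a hypothetical name for the Py3 branch):

```python
# Py3 sketch of the decorator above: a no-op, since Python 3 already
# calls __next__(); on Python 2 it would alias next() to __next__().
def implements_iterator_py3(cls):
    return cls

@implements_iterator_py3
class UppercasingIterator(object):
    def __init__(self, iterable):
        self._iter = iter(iterable)
    def __iter__(self):
        return self
    def __next__(self):
        return next(self._iter).upper()

assert list(UppercasingIterator('abc')) == ['A', 'B', 'C']
```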

if PY3:
    get_next = lambda x: x.__next__
else:
    get_next = lambda x: x.next


def encode_filename(filename):
    if PY3:
        return filename
    else:
        if isinstance(filename, unicode):
            return filename.encode('utf-8')
        return filename


def is_new_style(cls):
    """
    Python 2.7 has both new-style and old-style classes. Old-style classes can
    be pesky in some circumstances, such as when using inheritance.  Use this
    function to test for whether a class is new-style. (Python 3 only has
    new-style classes.)
    """
    return hasattr(cls, '__class__') and ('__dict__' in dir(cls)
                                          or hasattr(cls, '__slots__'))

# The native platform string and bytes types. Useful because ``str`` and
# ``bytes`` are redefined on Py2 by ``from future.builtins import *``.
native_str = str
native_bytes = bytes


def istext(obj):
    """
    Deprecated. Use::
        >>> isinstance(obj, str)
    after this import:
        >>> from future.builtins import str
    """
    return isinstance(obj, type(u''))


def isbytes(obj):
    """
    Deprecated. Use::
        >>> isinstance(obj, bytes)
    after this import:
        >>> from future.builtins import bytes
    """
    return isinstance(obj, type(b''))


def isnewbytes(obj):
    """
    Equivalent to ``type(obj).__name__ == 'newbytes'``; in other words:
    is it REALLY a newbytes instance, not a Py2 native str object?

    Note that this does not cover subclasses of newbytes, and it is not
    equivalent to ``isinstance(obj, newbytes)``.
    """
    return type(obj).__name__ == 'newbytes'


def isint(obj):
    """
    Deprecated. Tests whether an object is a Py3 ``int`` or either a Py2 ``int`` or
    ``long``.

    Instead of using this function, you can use:

        >>> from future.builtins import int
        >>> isinstance(obj, int)

    The following idiom is equivalent:

        >>> from numbers import Integral
        >>> isinstance(obj, Integral)
    """

    return isinstance(obj, numbers.Integral)


def native(obj):
    """
    On Py3, this is a no-op: native(obj) -> obj

    On Py2, returns the corresponding native Py2 types that are
    superclasses for backported objects from Py3:

    >>> from builtins import str, bytes, int

    >>> native(str(u'ABC'))
    u'ABC'
    >>> type(native(str(u'ABC')))
    unicode

    >>> native(bytes(b'ABC'))
    b'ABC'
    >>> type(native(bytes(b'ABC')))
    bytes

    >>> native(int(10**20))
    100000000000000000000L
    >>> type(native(int(10**20)))
    long

    Existing native types on Py2 will be returned unchanged:

    >>> type(native(u'ABC'))
    unicode
    """
    if hasattr(obj, '__native__'):
        return obj.__native__()
    else:
        return obj


# Implementation of exec_ is from ``six``:
if PY3:
    import builtins
    exec_ = getattr(builtins, "exec")
else:
    def exec_(code, globs=None, locs=None):
        """Execute code in a namespace."""
        if globs is None:
            frame = sys._getframe(1)
            globs = frame.f_globals
            if locs is None:
                locs = frame.f_locals
            del frame
        elif locs is None:
            locs = globs
        exec("""exec code in globs, locs""")


# Defined here for backward compatibility:
def old_div(a, b):
    """
    DEPRECATED: import ``old_div`` from ``past.utils`` instead.

    Equivalent to ``a / b`` on Python 2 without ``from __future__ import
    division``.

    TODO: generalize this to other objects (like arrays etc.)
    """
    if isinstance(a, numbers.Integral) and isinstance(b, numbers.Integral):
        return a // b
    else:
        return a / b
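The dispatch rule above can be checked in isolation (`old_div_sketch` is a hypothetical duplicate name):

```python
import numbers

# Same rule as old_div() above: floor division when both operands are
# integral, true division otherwise (classic Python 2 `/` semantics).
def old_div_sketch(a, b):
    if isinstance(a, numbers.Integral) and isinstance(b, numbers.Integral):
        return a // b
    return a / b

assert old_div_sketch(7, 2) == 3
assert old_div_sketch(7.0, 2) == 3.5
assert old_div_sketch(-7, 2) == -4   # // floors toward negative infinity
```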


def as_native_str(encoding='utf-8'):
    '''
    A decorator to turn a function or method call that returns text, i.e.
    unicode, into one that returns a native platform str.

    Use it as a decorator like this::

        from __future__ import unicode_literals

        class MyClass(object):
            @as_native_str(encoding='ascii')
            def __repr__(self):
                return next(self._iter).upper()
    '''
    if PY3:
        return lambda f: f
    else:
        def encoder(f):
            @functools.wraps(f)
            def wrapper(*args, **kwargs):
                return f(*args, **kwargs).encode(encoding=encoding)
            return wrapper
        return encoder

# listvalues and listitems definitions from Nick Coghlan's (withdrawn)
# PEP 496:
try:
    dict.iteritems
except AttributeError:
    # Python 3
    def listvalues(d):
        return list(d.values())
    def listitems(d):
        return list(d.items())
else:
    # Python 2
    def listvalues(d):
        return d.values()
    def listitems(d):
        return d.items()
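On Python 3 the first branch is taken, wrapping the view objects in `list()` to recover Py2-style lists (`_py3` names are hypothetical):

```python
# Py3 branch of the PEP 496 helpers above: items()/values() return
# views, so wrap them in list() for Py2-style list results.
def listvalues_py3(d):
    return list(d.values())

def listitems_py3(d):
    return list(d.items())

d = {'a': 1, 'b': 2}
assert listvalues_py3(d) == [1, 2]                 # insertion order (3.7+)
assert listitems_py3(d) == [('a', 1), ('b', 2)]
```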

if PY3:
    def ensure_new_type(obj):
        return obj
else:
    def ensure_new_type(obj):
        from future.types.newbytes import newbytes
        from future.types.newstr import newstr
        from future.types.newint import newint
        from future.types.newdict import newdict

        native_type = type(native(obj))

        # Upcast only if the type is already a native (non-future) type
        if issubclass(native_type, type(obj)):
            # Upcast
            if native_type == str:  # i.e. Py2 8-bit str
                return newbytes(obj)
            elif native_type == unicode:
                return newstr(obj)
            elif native_type == int:
                return newint(obj)
            elif native_type == long:
                return newint(obj)
            elif native_type == dict:
                return newdict(obj)
            else:
                return obj
        else:
            # Already a new type
            assert type(obj) in [newbytes, newstr]
            return obj


__all__ = ['PY2', 'PY26', 'PY3', 'PYPY',
           'as_native_str', 'binary_type', 'bind_method', 'bord', 'bstr',
           'bytes_to_native_str', 'class_types', 'encode_filename',
           'ensure_new_type', 'exec_', 'get_next', 'getexception',
           'implements_iterator', 'integer_types', 'is_new_style', 'isbytes',
           'isidentifier', 'isint', 'isnewbytes', 'istext', 'iteritems',
           'iterkeys', 'itervalues', 'lfilter', 'listitems', 'listvalues',
           'lmap', 'lrange', 'lzip', 'native', 'native_bytes', 'native_str',
           'native_str_to_bytes', 'old_div',
           'python_2_unicode_compatible', 'raise_',
           'raise_with_traceback', 'reraise', 'string_types',
           'text_to_native_str', 'text_type', 'tobytes', 'viewitems',
           'viewkeys', 'viewvalues', 'with_metaclass'
           ]


r�cCs t|d�odt|�vpt|d�S)a
    Python 2.7 has both new-style and old-style classes. Old-style classes can
    be pesky in some circumstances, such as when using inheritance.  Use this
    function to test for whether a class is new-style. (Python 3 only has
    new-style classes.)
    �	__class__�__dict__�	__slots__)�hasattr�dirrrrr�is_new_style#s�r�cCst|td��S)z
    Deprecated. Use::
        >>> isinstance(obj, str)
    after this import:
        >>> from future.builtins import str
    r6�r/r�rcrrr�istext3sr�cCst|td��S)z�
    Deprecated. Use::
        >>> isinstance(obj, bytes)
    after this import:
        >>> from future.builtins import bytes
    rr�r�rrr�isbytes=sr�cCst|�jdkS)a
    Equivalent to the result of ``type(obj)  == type(newbytes)``
    in other words, it is REALLY a newbytes instance, not a Py2 native str
    object?

    Note that this does not cover subclasses of newbytes, and it is not
    equivalent to isinstance(obj, newbytes)
    rD)rr r�rrr�
isnewbytesGs	r�cCst|tj�S)a_
    Deprecated. Tests whether an object is a Py3 ``int`` or either a Py2 ``int`` or
    ``long``.

    Instead of using this function, you can use:

        >>> from future.builtins import int
        >>> isinstance(obj, int)

    The following idiom is equivalent:

        >>> from numbers import Integral
        >>> isinstance(obj, Integral)
    �r/�numbers�Integralr�rrr�isintSsr�cCst|d�r|��S|SdS)aO
    On Py3, this is a no-op: native(obj) -> obj

    On Py2, returns the corresponding native Py2 types that are
    superclasses for backported objects from Py3:

    >>> from builtins import str, bytes, int

    >>> native(str(u'ABC'))
    u'ABC'
    >>> type(native(str(u'ABC')))
    unicode

    >>> native(bytes(b'ABC'))
    b'ABC'
    >>> type(native(bytes(b'ABC')))
    bytes

    >>> native(int(10**20))
    100000000000000000000L
    >>> type(native(int(10**20)))
    long

    Existing native types on Py2 will be returned unchanged:

    >>> type(native(u'ABC'))
    unicode
    �
__native__N)r�r�r�rrrrEfs
rEr�cCsB|dur*t�d�}|j}|dur&|j}~n|dur6|}td�dS)zExecute code in a namespace.Nrrzexec code in globs, locs)rs�	_getframerxryr�)�codeZglobsZlocs�framerrr�exec_�s
r�cCs,t|tj�r t|tj�r ||S||SdS)z�
    DEPRECATED: import ``old_div`` from ``past.utils`` instead.

    Equivalent to ``a / b`` on Python 2 without ``from __future__ import
    division``.

    TODO: generalize this to other objects (like arrays etc.)
    Nr�)rUr>rrr�old_div�s	r�cs trdd�S�fdd�}|SdS)a~
    A decorator to turn a function or method call that returns text, i.e.
    unicode, into one that returns a native platform str.

    Use it as a decorator like this::

        from __future__ import unicode_literals

        class MyClass(object):
            @as_native_str(encoding='ascii')
            def __repr__(self):
                return next(self._iter).upper()
    cSs|Sr(r��frrrr�rzas_native_str.<locals>.<lambda>cst�����fdd��}|S)Ncs�|i|��j�d�S)N�r:r9rI)r:r�rr�wrapper�sz/as_native_str.<locals>.encoder.<locals>.wrapper)�	functools�wraps)r�r�r�r�r�encoder�szas_native_str.<locals>.encoderN)r)r:r�rr�r�
as_native_str�sr�cCst|���Sr()rGrj�rrrr�
listvalues�sr�cCst|���Sr()rGrar�rrr�	listitems�sr�cCs|��Sr()rjr�rrrr��scCs|��Sr()rar�rrrr��scCs|Sr(rr�rrr�ensure_new_type�sr�cCs�ddlm}ddlm}ddlm}ddlm}tt	|��}t
|t|��r�|tkrZ||�S|tkrj||�S|t
krz||�S|tkr�||�S|tkr�||�S|Snt|�||fvs�J�|SdS)NrrC)�newstr)�newint)�newdict)Zfuture.types.newbytesrDZfuture.types.newstrr�Zfuture.types.newintr�Zfuture.types.newdictr�rrEr�r0r7�int�long�dict)rcrDr�r�r�Znative_typerrrr��s&)2�PY2�PY26r�PYPYr��binary_typerqr2r1r?�class_typesr�r�r��get_nextrur��
integer_typesr�r�rSr�r�r�rkrlrmrRr�r�rPrLrNrE�native_bytes�
native_strr;r�rr�r��reraise�string_typesrB�	text_typer5r^rerhr')r)r)N)N)N)rF)F)NN)NN)r)[�__doc__rorsr�r�r�rv�version_inforZ	PY34_PLUSZ	PY35_PLUSZ	PY36_PLUSZ	PY37_PLUSZ	PY38_PLUSZ	PY39_PLUSr�r�ZPY27r�r�rr'r,r1r2r0r�r�r�rr�r�r)r��
basestringr�Z	ClassTyper7r5r;r?rBrLrNrPrR�__builtin__rHrMrOrQrSr^rerhrkrlrmrqrur|r�r�r�r�r�r��stripr�r�r�r�r�r�r�r�r�r�r�rE�builtinsr`r�r�r�r��AttributeErrorr�r�r��__all__rrrr�<module>s�3
)$













&$�
	

$



PKEu\0����7future/utils/__pycache__/surrogateescape.cpython-39.pycnu�[���a

��?h��@s�dZddlZddlZddlmZdZdd�Zdd�ZejrHe	Z
d	d
�ZneZ
e	Zdd�Z
Gd
d�de�Zdd�Zdd�Zdd�Zdd�ZdZed�Zed�Ze�e�jZdd�Zedkr�dS)z�
This is Victor Stinner's pure-Python implementation of PEP 383: the "surrogateescape" error
handler of Python 3.

Source: misc/python/surrogateescape.py in https://bitbucket.org/haypo/misc
�N)�utils�surrogateescapecCstjr
|S|�d�SdS)N�unicode_escape)r�PY3�decode)�text�r�F/usr/local/lib/python3.9/site-packages/future/utils/surrogateescape.py�usr
cCstjr|�d�S|SdS)N�latin1)rr�encode)�datarrr	�bs
rcCs
t|f�S�N)�bytes)�coderrr	�<lambda>#�rcCsd|j|j|j�}z0t|t�r(t|�}nt|t�r<t|�}n|�WntyX|�Yn0||jfS)z�
    Pure Python implementation of the PEP 383: the "surrogateescape" error
    handler of Python 3. Undecodable bytes will be replaced by a Unicode
    character U+DCxx on decoding, and these are translated into the
    original bytes on encoding.
    )	�object�start�end�
isinstance�UnicodeDecodeError�replace_surrogate_decode�UnicodeEncodeError�replace_surrogate_encode�NotASurrogateError)�exc�mystring�decodedrrr	�surrogateescape_handler(s




r c@seZdZdS)rN)�__name__�
__module__�__qualname__rrrr	rCsrcCs�g}|D]r}t|�}d|kr(dks.nt�d|krBdkrZnn|�t|d��q|dkrv|�t|d��qt�qt��|�S)z�
    Returns a (unicode) string, not the more logical bytes, because the codecs
    register_error functionality expects this.
    ������i�)�ordr�append�_unichr�str�join)rr�chrrrr	rGsrcCszg}|D]d}t|t�r|}nt|�}d|kr8dkrPnn|�td|��q|dkrh|�t|��qt�qt��|�S)z$
    Returns a (unicode) string
    ��r&�)r�intr'r(r)rr*r+)Zmybytesrr,rrrr	rds
rcCs@tdkr�g}t|�D]f\}}t|�}|dkr6t|�}n:d|krJdkr\nnt|d�}ntt|||dd��|�|�qt��|�Stdk�r0g}t|�D]�\}}t|�}d	|kr�d
k�rnnFd|kr�dkr�nnt|d�}|�|�ntt|||dd��q�|�d�}|�|�q�t��|�S|�tt	�SdS)N�asciir-i��r%r&�zordinal not in range(128)zutf-8r$i��zsurrogates not allowed)
�FS_ENCODING�	enumerater'�	bytes_chrrr(rr+r�	FS_ERRORS)�fn�encoded�indexr,rZch_utf8rrr	�encodefilename}s<

�
�
r:cCs|�tt�Sr)rr3r6)r7rrr	�decodefilename�sr;r1u[abcÿ]u[abc�]cCs<tjr
dSzt�t�Wnty6t�tt�Yn0dS)zH
    Registers the surrogateescape error handler on Python 2 (only)
    N)rr�codecs�lookup_errorr6�LookupError�register_errorr rrrr	�register_surrogateescape�sr@�__main__)�__doc__r<�sys�futurerr6r
rr�chrr)r5�unichrr �	Exceptionrrrr:r;r3r7r8�lookup�namer@r!rrrr	�<module>s,	
'PKEu\
�Z��future/utils/surrogateescape.pynu�[���"""
This is Victor Stinner's pure-Python implementation of PEP 383: the "surrogateescape" error
handler of Python 3.

Source: misc/python/surrogateescape.py in https://bitbucket.org/haypo/misc
"""

# This code is released under the Python license and the BSD 2-clause license

import codecs
import sys

from future import utils


FS_ERRORS = 'surrogateescape'

#     # -- Python 2/3 compatibility -------------------------------------
#     FS_ERRORS = 'my_surrogateescape'

def u(text):
    if utils.PY3:
        return text
    else:
        return text.decode('unicode_escape')

def b(data):
    if utils.PY3:
        return data.encode('latin1')
    else:
        return data

if utils.PY3:
    _unichr = chr
    bytes_chr = lambda code: bytes((code,))
else:
    _unichr = unichr
    bytes_chr = chr

def surrogateescape_handler(exc):
    """
    Pure Python implementation of the PEP 383: the "surrogateescape" error
    handler of Python 3. Undecodable bytes will be replaced by a Unicode
    character U+DCxx on decoding, and these are translated into the
    original bytes on encoding.
    """
    mystring = exc.object[exc.start:exc.end]

    try:
        if isinstance(exc, UnicodeDecodeError):
            # mystring is a byte-string in this case
            decoded = replace_surrogate_decode(mystring)
        elif isinstance(exc, UnicodeEncodeError):
            # In the case of u'\udcc3'.encode('ascii',
            # 'this_surrogateescape_handler'), both Python 2.x and 3.x raise an
            # exception anyway after this function is called, even though I think
            # it's doing what it should. It seems that the strict encoder is called
            # to encode the unicode string that this function returns ...
            decoded = replace_surrogate_encode(mystring)
        else:
            raise exc
    except NotASurrogateError:
        raise exc
    return (decoded, exc.end)


class NotASurrogateError(Exception):
    pass


def replace_surrogate_encode(mystring):
    """
    Returns a (unicode) string, not the more logical bytes, because the codecs
    register_error functionality expects this.
    """
    decoded = []
    for ch in mystring:
        # if utils.PY3:
        #     code = ch
        # else:
        code = ord(ch)

        # The following magic comes from Py3.3's Python/codecs.c file:
        if not 0xD800 <= code <= 0xDCFF:
            # Not a surrogate. Fail with the original exception.
            raise NotASurrogateError
        # The range check above guarantees 0xD800 <= code <= 0xDCFF here.
        # Only U+DC00..U+DCFF encode an escaped byte; a lone high surrogate
        # (U+D800..U+DBFF) is not surrogateescape output, so reject it
        # explicitly rather than letting _unichr(code - 0xDC00) go negative.
        if 0xDC00 <= code <= 0xDCFF:
            decoded.append(_unichr(code - 0xDC00))
        else:
            raise NotASurrogateError
    return str().join(decoded)


def replace_surrogate_decode(mybytes):
    """
    Returns a (unicode) string
    """
    decoded = []
    for ch in mybytes:
        # We may be parsing newbytes (in which case ch is an int) or a native
        # str on Py2
        if isinstance(ch, int):
            code = ch
        else:
            code = ord(ch)
        if 0x80 <= code <= 0xFF:
            decoded.append(_unichr(0xDC00 + code))
        elif code <= 0x7F:
            decoded.append(_unichr(code))
        else:
            # # It may be a bad byte
            # # Try swallowing it.
            # continue
            # print("RAISE!")
            raise NotASurrogateError
    return str().join(decoded)


def encodefilename(fn):
    if FS_ENCODING == 'ascii':
        # ASCII encoder of Python 2 expects that the error handler returns a
        # Unicode string encodable to ASCII, whereas our surrogateescape error
        # handler has to return bytes in 0x80-0xFF range.
        encoded = []
        for index, ch in enumerate(fn):
            code = ord(ch)
            if code < 128:
                ch = bytes_chr(code)
            elif 0xDC80 <= code <= 0xDCFF:
                ch = bytes_chr(code - 0xDC00)
            else:
                raise UnicodeEncodeError(FS_ENCODING,
                    fn, index, index+1,
                    'ordinal not in range(128)')
            encoded.append(ch)
        return bytes().join(encoded)
    elif FS_ENCODING == 'utf-8':
        # UTF-8 encoder of Python 2 encodes surrogates, so U+DC80-U+DCFF
        # doesn't go through our error handler
        encoded = []
        for index, ch in enumerate(fn):
            code = ord(ch)
            if 0xD800 <= code <= 0xDFFF:
                if 0xDC80 <= code <= 0xDCFF:
                    ch = bytes_chr(code - 0xDC00)
                    encoded.append(ch)
                else:
                    raise UnicodeEncodeError(
                        FS_ENCODING,
                        fn, index, index+1, 'surrogates not allowed')
            else:
                ch_utf8 = ch.encode('utf-8')
                encoded.append(ch_utf8)
        return bytes().join(encoded)
    else:
        return fn.encode(FS_ENCODING, FS_ERRORS)

def decodefilename(fn):
    return fn.decode(FS_ENCODING, FS_ERRORS)

FS_ENCODING = 'ascii'; fn = b('[abc\xff]'); encoded = u('[abc\udcff]')
# FS_ENCODING = 'cp932'; fn = b('[abc\x81\x00]'); encoded = u('[abc\udc81\x00]')
# FS_ENCODING = 'UTF-8'; fn = b('[abc\xff]'); encoded = u('[abc\udcff]')


# normalize the filesystem encoding name.
# For example, we expect "utf-8", not "UTF8".
FS_ENCODING = codecs.lookup(FS_ENCODING).name


def register_surrogateescape():
    """
    Registers the surrogateescape error handler on Python 2 (only)
    """
    if utils.PY3:
        return
    try:
        codecs.lookup_error(FS_ERRORS)
    except LookupError:
        codecs.register_error(FS_ERRORS, surrogateescape_handler)


if __name__ == '__main__':
    pass
    # # Tests:
    # register_surrogateescape()

    # b = decodefilename(fn)
    # assert b == encoded, "%r != %r" % (b, encoded)
    # c = encodefilename(b)
    # assert c == fn, '%r != %r' % (c, fn)
    # # print("ok")
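    # A minimal, hedged round-trip sketch (an addition, not upstream code):
    # on Python 3 the built-in "surrogateescape" handler behaves like the
    # pure-Python handler above, mapping undecodable bytes 0x80-0xFF to the
    # surrogates U+DC80-U+DCFF and back.
    data = b'abc\xff'
    text = data.decode('ascii', 'surrogateescape')
    assert text == u'abc\udcff'
    assert text.encode('ascii', 'surrogateescape') == data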
PK#Eu\distro/py.typednu�[���PK#Eu\�`p���distro/__init__.pynu�[���from .distro import (
    NORMALIZED_DISTRO_ID,
    NORMALIZED_LSB_ID,
    NORMALIZED_OS_ID,
    LinuxDistribution,
    __version__,
    build_number,
    codename,
    distro_release_attr,
    distro_release_info,
    id,
    info,
    like,
    linux_distribution,
    lsb_release_attr,
    lsb_release_info,
    major_version,
    minor_version,
    name,
    os_release_attr,
    os_release_info,
    uname_attr,
    uname_info,
    version,
    version_parts,
)

__all__ = [
    "NORMALIZED_DISTRO_ID",
    "NORMALIZED_LSB_ID",
    "NORMALIZED_OS_ID",
    "LinuxDistribution",
    "build_number",
    "codename",
    "distro_release_attr",
    "distro_release_info",
    "id",
    "info",
    "like",
    "linux_distribution",
    "lsb_release_attr",
    "lsb_release_info",
    "major_version",
    "minor_version",
    "name",
    "os_release_attr",
    "os_release_info",
    "uname_attr",
    "uname_info",
    "version",
    "version_parts",
]

__version__ = __version__
PK%Eu\T�i@@distro/__main__.pynu�[���from .distro import main

if __name__ == "__main__":
    main()
PK+Eu\�n$ii*distro/__pycache__/__init__.cpython-39.pycnu�[���a

��?h��@sxddlmZmZmZmZmZmZmZmZm	Z	m
Z
mZmZm
Z
mZmZmZmZmZmZmZmZmZmZmZgd�ZeZdS)�)�NORMALIZED_DISTRO_ID�NORMALIZED_LSB_ID�NORMALIZED_OS_ID�LinuxDistribution�__version__�build_number�codename�distro_release_attr�distro_release_info�id�info�like�linux_distribution�lsb_release_attr�lsb_release_info�
major_version�
minor_version�name�os_release_attr�os_release_info�
uname_attr�
uname_info�version�
version_parts)rrrrrrr	r
rrr
rrrrrrrrrrrrN)�distrorrrrrrrr	r
rrr
rrrrrrrrrrrr�__all__�rr�9/usr/local/lib/python3.9/site-packages/distro/__init__.py�<module>shPK.Eu\y�
P�P�(distro/__pycache__/distro.cpython-39.pycnu�[���a

��?h��	@sdZddlZddlZddlZddlZddlZddlZddlZddlZddl	Z	ddl
mZmZm
Z
mZmZmZmZmZmZzddl
mZWney�eZYn0dZGdd�de�ZGdd	�d	e�Zej�d
d�Zej�dd
�ZdZddd�Zdddddd�Z ddiZ!e�"d�Z#e�"d�Z$gd�Z%dddedddddf	Z&dVe'ee(e(e(fd!�d"d#�Z)e(d$�d%d&�Z*dWe'e(d(�d)d*�Z+dXe'e'e(d+�d,d-�Z,dYe'ee(e(e(fd.�d/d0�Z-dZe'e(d.�d1d2�Z.d[e'e(d.�d3d4�Z/d\e'e(d.�d5d6�Z0e(d$�d7d8�Z1e(d$�d9d:�Z2d]e'e'ed+�d;d<�Z3e
e(e(fd$�d=d>�Z4e
e(e(fd$�d?d@�Z5e
e(e(fd$�dAdB�Z6e
e(e(fd$�dCdD�Z7e(e(dE�dFdG�Z8e(e(dE�dHdI�Z9e(e(dE�dJdK�Z:e(e(dE�dLdM�Z;zddNl<m=Z=Wn"e�y�GdOdP�dP�Z=Yn0GdQdR�dR�Z>e>�Z?dd$�dSdT�Z@eAdUk�re@�dS)^a�
The ``distro`` package (``distro`` stands for Linux Distribution) provides
information about the Linux distribution it runs on, such as a reliable
machine-readable distro ID, or version information.

It is the recommended replacement for Python's original
:py:func:`platform.linux_distribution` function, but it provides much more
functionality. An alternative implementation became necessary because Python
3.5 deprecated this function, and Python 3.8 removed it altogether. Its
predecessor function :py:func:`platform.dist` was already deprecated since
Python 2.6 and removed in Python 3.8. Still, there are many cases in which
access to OS distribution information is needed. See `Python issue 1322
<https://bugs.python.org/issue1322>`_ for more information.
�N)	�Any�Callable�Dict�Iterable�Optional�Sequence�TextIO�Tuple�Type)�	TypedDictz1.9.0c@s&eZdZUeed<eed<eed<dS)�VersionDict�major�minor�build_numberN)�__name__�
__module__�__qualname__�str�__annotations__�rr�7/usr/local/lib/python3.9/site-packages/distro/distro.pyr=s
rc@s6eZdZUeed<eed<eed<eed<eed<dS)�InfoDict�id�version�
version_parts�like�codenameN)rrrrrrrrrrrCs

rZUNIXCONFDIRz/etcZ
UNIXUSRLIBDIRz/usr/libz
os-release�oracleZopensuse)�olz
opensuse-leap�rhel)�enterpriseenterpriseas�enterpriseenterpriseserver�redhatenterpriseworkstation�redhatenterpriseserver�redhatenterprisecomputenode�redhatzA(?:[^)]*\)(.*)\()? *(?:STL )?([\d.+\-a-z]*\d) *(?:esaeler *)?(.+)z(\w+)[-_](release|version)$)zSuSE-releasezaltlinux-releasezarch-releasezbase-releasezcentos-releasezfedora-releasezgentoo-releasezmageia-releasezmandrake-releasezmandriva-releasezmandrivalinux-releasezmanjaro-releasezoracle-releasezredhat-releasez
rocky-releasez
sl-releasezslackware-version�debian_versionzlsb-releasezoem-releasezsystem-releasez
plesk-releaseziredmail-releasez
board-releaseZec2_versionT��full_distribution_name�returncCstjdtdd�t�|�S)a�
    .. deprecated:: 1.6.0

        :func:`distro.linux_distribution()` is deprecated. It should only be
        used as a compatibility shim with Python's
        :py:func:`platform.linux_distribution()`. Please use :func:`distro.id`,
        :func:`distro.version` and :func:`distro.name` instead.

    Return information about the current OS distribution as a tuple
    ``(id_name, version, codename)`` with items as follows:

    * ``id_name``:  If *full_distribution_name* is false, the result of
      :func:`distro.id`. Otherwise, the result of :func:`distro.name`.

    * ``version``:  The result of :func:`distro.version`.

    * ``codename``:  The extra item (usually in parentheses) after the
      os-release version number, or the result of :func:`distro.codename`.

    The interface of this function is compatible with the original
    :py:func:`platform.linux_distribution` function, supporting a subset of
    its parameters.

    The data it returns may not exactly be the same, because it uses more data
    sources than the original function, and that may lead to different data if
    the OS distribution is not consistent across multiple data sources it
    provides (there are indeed such distributions ...).

    Another reason for differences is the fact that the :func:`distro.id`
    method normalizes the distro ID string to a reliable machine-readable value
    for a number of popular OS distributions.
    z�distro.linux_distribution() is deprecated. It should only be used as a compatibility shim with Python's platform.linux_distribution(). Please use distro.id(), distro.version() and distro.name() instead.�)�
stacklevel)�warnings�warn�DeprecationWarning�_distro�linux_distribution)r(rrrr0�s!�r0�r)cCst��S)a�
    Return the distro ID of the current distribution, as a
    machine-readable string.

    For a number of OS distributions, the returned distro ID value is
    *reliable*, in the sense that it is documented and that it does not change
    across releases of the distribution.

    This package maintains the following reliable distro ID values:

    ==============  =========================================
    Distro ID       Distribution
    ==============  =========================================
    "ubuntu"        Ubuntu
    "debian"        Debian
    "rhel"          RedHat Enterprise Linux
    "centos"        CentOS
    "fedora"        Fedora
    "sles"          SUSE Linux Enterprise Server
    "opensuse"      openSUSE
    "amzn"          Amazon Linux
    "arch"          Arch Linux
    "buildroot"     Buildroot
    "cloudlinux"    CloudLinux OS
    "exherbo"       Exherbo Linux
    "gentoo"        GenToo Linux
    "ibm_powerkvm"  IBM PowerKVM
    "kvmibm"        KVM for IBM z Systems
    "linuxmint"     Linux Mint
    "mageia"        Mageia
    "mandriva"      Mandriva Linux
    "parallels"     Parallels
    "pidora"        Pidora
    "raspbian"      Raspbian
    "oracle"        Oracle Linux (and Oracle Enterprise Linux)
    "scientific"    Scientific Linux
    "slackware"     Slackware
    "xenserver"     XenServer
    "openbsd"       OpenBSD
    "netbsd"        NetBSD
    "freebsd"       FreeBSD
    "midnightbsd"   MidnightBSD
    "rocky"         Rocky Linux
    "aix"           AIX
    "guix"          Guix System
    "altlinux"      ALT Linux
    ==============  =========================================

    If you have a need to get distros for reliable IDs added into this set,
    or if you find that the :func:`distro.id` function returns a different
    distro ID for one of the listed distros, please create an issue in the
    `distro issue tracker`_.

    **Lookup hierarchy and transformations:**

    First, the ID is obtained from the following sources, in the specified
    order. The first available and non-empty value is used:

    * the value of the "ID" attribute of the os-release file,

    * the value of the "Distributor ID" attribute returned by the lsb_release
      command,

    * the first part of the file name of the distro release file,

    The so determined ID value then passes the following transformations,
    before it is returned by this method:

    * it is translated to lower case,

    * blanks (which should not be there anyway) are translated to underscores,

    * a normalization of the ID is performed, based upon
      `normalization tables`_. The purpose of this normalization is to ensure
      that the ID is as reliable as possible, even across incompatible changes
      in the OS distributions. A common reason for an incompatible change is
      the addition of an os-release file, or the addition of the lsb_release
      command, with ID values that differ from what was previously determined
      from the distro release file name.
    )r/rrrrrr�sQrF��prettyr)cCs
t�|�S)ak
    Return the name of the current OS distribution, as a human-readable
    string.

    If *pretty* is false, the name is returned without version or codename.
    (e.g. "CentOS Linux")

    If *pretty* is true, the version and codename are appended.
    (e.g. "CentOS Linux 7.1.1503 (Core)")

    **Lookup hierarchy:**

    The name is obtained from the following sources, in the specified order.
    The first available and non-empty value is used:

    * If *pretty* is false:

      - the value of the "NAME" attribute of the os-release file,

      - the value of the "Distributor ID" attribute returned by the lsb_release
        command,

      - the value of the "<name>" field of the distro release file.

    * If *pretty* is true:

      - the value of the "PRETTY_NAME" attribute of the os-release file,

      - the value of the "Description" attribute returned by the lsb_release
        command,

      - the value of the "<name>" field of the distro release file, appended
        with the value of the pretty version ("<version_id>" and "<codename>"
        fields) of the distro release file, if available.
    )r/�name�r3rrrr4s$r4�r3�bestr)cCst�||�S)aN
    Return the version of the current OS distribution, as a human-readable
    string.

    If *pretty* is false, the version is returned without codename (e.g.
    "7.0").

    If *pretty* is true, the codename in parenthesis is appended, if the
    codename is non-empty (e.g. "7.0 (Maipo)").

    Some distributions provide version numbers with different precisions in
    the different sources of distribution information. Examining the different
    sources in a fixed priority order does not always yield the most precise
    version (e.g. for Debian 8.2, or CentOS 7.1).

    Some other distributions may not provide this kind of information. In these
    cases, an empty string would be returned. This behavior can be observed
    with rolling releases distributions (e.g. Arch Linux).

    The *best* parameter can be used to control the approach for the returned
    version:

    If *best* is false, the first non-empty version number in priority order of
    the examined sources is returned.

    If *best* is true, the most precise version number out of all examined
    sources is returned.

    **Lookup hierarchy:**

    In all cases, the version number is obtained from the following sources.
    If *best* is false, this order represents the priority order:

    * the value of the "VERSION_ID" attribute of the os-release file,
    * the value of the "Release" attribute returned by the lsb_release
      command,
    * the version number parsed from the "<version_id>" field of the first line
      of the distro release file,
    * the version number parsed from the "PRETTY_NAME" attribute of the
      os-release file, if it follows the format of the distro release files.
    * the version number parsed from the "Description" attribute returned by
      the lsb_release command, if it follows the format of the distro release
      files.
    )r/r�r3r7rrrrFs-r�r7r)cCs
t�|�S)a�
    Return the version of the current OS distribution as a tuple
    ``(major, minor, build_number)`` with items as follows:

    * ``major``:  The result of :func:`distro.major_version`.

    * ``minor``:  The result of :func:`distro.minor_version`.

    * ``build_number``:  The result of :func:`distro.build_number`.

    For a description of the *best* parameter, see the :func:`distro.version`
    method.
    )r/r�r7rrrrvsrcCs
t�|�S)a5
    Return the major version of the current OS distribution, as a string,
    if provided.
    Otherwise, the empty string is returned. The major version is the first
    part of the dot-separated version string.

    For a description of the *best* parameter, see the :func:`distro.version`
    method.
    )r/�
major_versionr:rrrr;�s
r;cCs
t�|�S)a6
    Return the minor version of the current OS distribution, as a string,
    if provided.
    Otherwise, the empty string is returned. The minor version is the second
    part of the dot-separated version string.

    For a description of the *best* parameter, see the :func:`distro.version`
    method.
    )r/�
minor_versionr:rrrr<�s
r<cCs
t�|�S)a3
    Return the build number of the current OS distribution, as a string,
    if provided.
    Otherwise, the empty string is returned. The build number is the third part
    of the dot-separated version string.

    For a description of the *best* parameter, see the :func:`distro.version`
    method.
    )r/rr:rrrr�s
rcCst��S)a
    Return a space-separated list of distro IDs of distributions that are
    closely related to the current OS distribution in regards to packaging
    and programming interfaces, for example distributions the current
    distribution is a derivative from.

    **Lookup hierarchy:**

    This information item is only provided by the os-release file.
    For details, see the description of the "ID_LIKE" attribute in the
    `os-release man page
    <http://www.freedesktop.org/software/systemd/man/os-release.html>`_.
    )r/rrrrrr�srcCst��S)a�
    Return the codename for the release of the current OS distribution,
    as a string.

    If the distribution does not have a codename, an empty string is returned.

    Note that the returned codename is not always really a codename. For
    example, openSUSE returns "x86_64". This function does not handle such
    cases in any special way and just returns the string it finds, if any.

    **Lookup hierarchy:**

    * the codename within the "VERSION" attribute of the os-release file, if
      provided,

    * the value of the "Codename" attribute returned by the lsb_release
      command,

    * the value of the "<codename>" field of the distro release file.
    )r/rrrrrr�srcCst�||�S)a�
    Return certain machine-readable information items about the current OS
    distribution in a dictionary, as shown in the following example:

    .. sourcecode:: python

        {
            'id': 'rhel',
            'version': '7.0',
            'version_parts': {
                'major': '7',
                'minor': '0',
                'build_number': ''
            },
            'like': 'fedora',
            'codename': 'Maipo'
        }

#!/usr/bin/env python
# Copyright 2015-2021 Nir Cohen
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
The ``distro`` package (``distro`` stands for Linux Distribution) provides
information about the Linux distribution it runs on, such as a reliable
machine-readable distro ID, or version information.

It is the recommended replacement for Python's original
:py:func:`platform.linux_distribution` function, but it provides much more
functionality. An alternative implementation became necessary because Python
3.5 deprecated this function, and Python 3.8 removed it altogether. Its
predecessor function :py:func:`platform.dist` was already deprecated since
Python 2.6 and removed in Python 3.8. Still, there are many cases in which
access to OS distribution information is needed. See `Python issue 1322
<https://bugs.python.org/issue1322>`_ for more information.
"""

import argparse
import json
import logging
import os
import re
import shlex
import subprocess
import sys
import warnings
from typing import (
    Any,
    Callable,
    Dict,
    Iterable,
    Optional,
    Sequence,
    TextIO,
    Tuple,
    Type,
)

try:
    from typing import TypedDict
except ImportError:
    # Python 3.7
    TypedDict = dict

__version__ = "1.9.0"


class VersionDict(TypedDict):
    major: str
    minor: str
    build_number: str


class InfoDict(TypedDict):
    id: str
    version: str
    version_parts: VersionDict
    like: str
    codename: str


_UNIXCONFDIR = os.environ.get("UNIXCONFDIR", "/etc")
_UNIXUSRLIBDIR = os.environ.get("UNIXUSRLIBDIR", "/usr/lib")
_OS_RELEASE_BASENAME = "os-release"

#: Translation table for normalizing the "ID" attribute defined in os-release
#: files, for use by the :func:`distro.id` method.
#:
#: * Key: Value as defined in the os-release file, translated to lower case,
#:   with blanks translated to underscores.
#:
#: * Value: Normalized value.
NORMALIZED_OS_ID = {
    "ol": "oracle",  # Oracle Linux
    "opensuse-leap": "opensuse",  # Newer versions of OpenSuSE report as opensuse-leap
}

#: Translation table for normalizing the "Distributor ID" attribute returned by
#: the lsb_release command, for use by the :func:`distro.id` method.
#:
#: * Key: Value as returned by the lsb_release command, translated to lower
#:   case, with blanks translated to underscores.
#:
#: * Value: Normalized value.
NORMALIZED_LSB_ID = {
    "enterpriseenterpriseas": "oracle",  # Oracle Enterprise Linux 4
    "enterpriseenterpriseserver": "oracle",  # Oracle Linux 5
    "redhatenterpriseworkstation": "rhel",  # RHEL 6, 7 Workstation
    "redhatenterpriseserver": "rhel",  # RHEL 6, 7 Server
    "redhatenterprisecomputenode": "rhel",  # RHEL 6 ComputeNode
}

#: Translation table for normalizing the distro ID derived from the file name
#: of distro release files, for use by the :func:`distro.id` method.
#:
#: * Key: Value as derived from the file name of a distro release file,
#:   translated to lower case, with blanks translated to underscores.
#:
#: * Value: Normalized value.
NORMALIZED_DISTRO_ID = {
    "redhat": "rhel",  # RHEL 6.x, 7.x
}

# Pattern for content of distro release file (reversed)
_DISTRO_RELEASE_CONTENT_REVERSED_PATTERN = re.compile(
    r"(?:[^)]*\)(.*)\()? *(?:STL )?([\d.+\-a-z]*\d) *(?:esaeler *)?(.+)"
)
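
# Note: this pattern is matched against the *reversed* first line of a distro
# release file, which is why the literals appear mirrored ("esaeler" is
# "release" reversed, "STL" is "LTS" reversed). For a line such as
# "CentOS Linux release 7.1.1503 (Core)", reversing and matching captures,
# right to left, the codename, the version number, and the distro name.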

# Pattern for base file name of distro release file
_DISTRO_RELEASE_BASENAME_PATTERN = re.compile(r"(\w+)[-_](release|version)$")
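
# For example, the base name "centos-release" matches with groups
# ("centos", "release") and "slackware-version" with
# ("slackware", "version"); the first group supplies the distro ID.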

# Base file names to be looked up if _UNIXCONFDIR is not readable.
_DISTRO_RELEASE_BASENAMES = [
    "SuSE-release",
    "altlinux-release",
    "arch-release",
    "base-release",
    "centos-release",
    "fedora-release",
    "gentoo-release",
    "mageia-release",
    "mandrake-release",
    "mandriva-release",
    "mandrivalinux-release",
    "manjaro-release",
    "oracle-release",
    "redhat-release",
    "rocky-release",
    "sl-release",
    "slackware-version",
]

# Base file names to be ignored when searching for a distro release file
_DISTRO_RELEASE_IGNORE_BASENAMES = (
    "debian_version",
    "lsb-release",
    "oem-release",
    _OS_RELEASE_BASENAME,
    "system-release",
    "plesk-release",
    "iredmail-release",
    "board-release",
    "ec2_version",
)


def linux_distribution(full_distribution_name: bool = True) -> Tuple[str, str, str]:
    """
    .. deprecated:: 1.6.0

        :func:`distro.linux_distribution()` is deprecated. It should only be
        used as a compatibility shim with Python's
        :py:func:`platform.linux_distribution()`. Please use :func:`distro.id`,
        :func:`distro.version` and :func:`distro.name` instead.

    Return information about the current OS distribution as a tuple
    ``(id_name, version, codename)`` with items as follows:

    * ``id_name``:  If *full_distribution_name* is false, the result of
      :func:`distro.id`. Otherwise, the result of :func:`distro.name`.

    * ``version``:  The result of :func:`distro.version`.

    * ``codename``:  The extra item (usually in parentheses) after the
      os-release version number, or the result of :func:`distro.codename`.

    The interface of this function is compatible with the original
    :py:func:`platform.linux_distribution` function, supporting a subset of
    its parameters.

    The data it returns may not exactly be the same, because it uses more data
    sources than the original function, and that may lead to different data if
    the OS distribution is not consistent across multiple data sources it
    provides (there are indeed such distributions ...).

    Another reason for differences is the fact that the :func:`distro.id`
    method normalizes the distro ID string to a reliable machine-readable value
    for a number of popular OS distributions.
    """
    warnings.warn(
        "distro.linux_distribution() is deprecated. It should only be used as a "
        "compatibility shim with Python's platform.linux_distribution(). Please use "
        "distro.id(), distro.version() and distro.name() instead.",
        DeprecationWarning,
        stacklevel=2,
    )
    return _distro.linux_distribution(full_distribution_name)
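
# Illustrative usage (values here are hypothetical and depend on the host;
# the call also emits a DeprecationWarning):
#
#     >>> import distro
#     >>> distro.linux_distribution(full_distribution_name=False)
#     ('centos', '7.1.1503', 'Core')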


def id() -> str:
    """
    Return the distro ID of the current distribution, as a
    machine-readable string.

    For a number of OS distributions, the returned distro ID value is
    *reliable*, in the sense that it is documented and that it does not change
    across releases of the distribution.

    This package maintains the following reliable distro ID values:

    ==============  =========================================
    Distro ID       Distribution
    ==============  =========================================
    "ubuntu"        Ubuntu
    "debian"        Debian
    "rhel"          RedHat Enterprise Linux
    "centos"        CentOS
    "fedora"        Fedora
    "sles"          SUSE Linux Enterprise Server
    "opensuse"      openSUSE
    "amzn"          Amazon Linux
    "arch"          Arch Linux
    "buildroot"     Buildroot
    "cloudlinux"    CloudLinux OS
    "exherbo"       Exherbo Linux
    "gentoo"        GenToo Linux
    "ibm_powerkvm"  IBM PowerKVM
    "kvmibm"        KVM for IBM z Systems
    "linuxmint"     Linux Mint
    "mageia"        Mageia
    "mandriva"      Mandriva Linux
    "parallels"     Parallels
    "pidora"        Pidora
    "raspbian"      Raspbian
    "oracle"        Oracle Linux (and Oracle Enterprise Linux)
    "scientific"    Scientific Linux
    "slackware"     Slackware
    "xenserver"     XenServer
    "openbsd"       OpenBSD
    "netbsd"        NetBSD
    "freebsd"       FreeBSD
    "midnightbsd"   MidnightBSD
    "rocky"         Rocky Linux
    "aix"           AIX
    "guix"          Guix System
    "altlinux"      ALT Linux
    ==============  =========================================

    If you have a need to get distros for reliable IDs added into this set,
    or if you find that the :func:`distro.id` function returns a different
    distro ID for one of the listed distros, please create an issue in the
    `distro issue tracker`_.

    **Lookup hierarchy and transformations:**

    First, the ID is obtained from the following sources, in the specified
    order. The first available and non-empty value is used:

    * the value of the "ID" attribute of the os-release file,

    * the value of the "Distributor ID" attribute returned by the lsb_release
      command,

    * the first part of the file name of the distro release file.

    The ID value determined this way then undergoes the following
    transformations before it is returned by this method:

    * it is translated to lower case,

    * blanks (which should not be there anyway) are translated to underscores,

    * a normalization of the ID is performed, based upon
      `normalization tables`_. The purpose of this normalization is to ensure
      that the ID is as reliable as possible, even across incompatible changes
      in the OS distributions. A common reason for an incompatible change is
      the addition of an os-release file, or the addition of the lsb_release
      command, with ID values that differ from what was previously determined
      from the distro release file name.
    """
    return _distro.id()
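
# Illustrative usage (hypothetical host): on an Oracle Linux system whose
# os-release file contains ID=ol, the NORMALIZED_OS_ID table maps the raw
# value to the reliable ID:
#
#     >>> import distro
#     >>> distro.id()
#     'oracle'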


def name(pretty: bool = False) -> str:
    """
    Return the name of the current OS distribution, as a human-readable
    string.

    If *pretty* is false, the name is returned without version or codename.
    (e.g. "CentOS Linux")

    If *pretty* is true, the version and codename are appended.
    (e.g. "CentOS Linux 7.1.1503 (Core)")

    **Lookup hierarchy:**

    The name is obtained from the following sources, in the specified order.
    The first available and non-empty value is used:

    * If *pretty* is false:

      - the value of the "NAME" attribute of the os-release file,

      - the value of the "Distributor ID" attribute returned by the lsb_release
        command,

      - the value of the "<name>" field of the distro release file.

    * If *pretty* is true:

      - the value of the "PRETTY_NAME" attribute of the os-release file,

      - the value of the "Description" attribute returned by the lsb_release
        command,

      - the value of the "<name>" field of the distro release file, appended
        with the value of the pretty version ("<version_id>" and "<codename>"
        fields) of the distro release file, if available.
    """
    return _distro.name(pretty)


def version(pretty: bool = False, best: bool = False) -> str:
    """
    Return the version of the current OS distribution, as a human-readable
    string.

    If *pretty* is false, the version is returned without codename (e.g.
    "7.0").

    If *pretty* is true, the codename in parentheses is appended, if the
    codename is non-empty (e.g. "7.0 (Maipo)").

    Some distributions provide version numbers with different precisions in
    the different sources of distribution information. Examining the different
    sources in a fixed priority order does not always yield the most precise
    version (e.g. for Debian 8.2, or CentOS 7.1).

    Some other distributions may not provide this kind of information. In
    these cases, an empty string is returned. This behavior can be observed
    with rolling-release distributions (e.g. Arch Linux).

    The *best* parameter can be used to control the approach for the returned
    version:

    If *best* is false, the first non-empty version number in priority order of
    the examined sources is returned.

    If *best* is true, the most precise version number out of all examined
    sources is returned.

    **Lookup hierarchy:**

    In all cases, the version number is obtained from the following sources.
    If *best* is false, this order represents the priority order:

    * the value of the "VERSION_ID" attribute of the os-release file,
    * the value of the "Release" attribute returned by the lsb_release
      command,
    * the version number parsed from the "<version_id>" field of the first line
      of the distro release file,
    * the version number parsed from the "PRETTY_NAME" attribute of the
      os-release file, if it follows the format of the distro release files,
    * the version number parsed from the "Description" attribute returned by
      the lsb_release command, if it follows the format of the distro release
      files.
    """
    return _distro.version(pretty, best)
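
# Illustrative usage (hypothetical RHEL 7 host, matching the examples in the
# docstring above):
#
#     >>> import distro
#     >>> distro.version()
#     '7.0'
#     >>> distro.version(pretty=True)
#     '7.0 (Maipo)'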


def version_parts(best: bool = False) -> Tuple[str, str, str]:
    """
    Return the version of the current OS distribution as a tuple
    ``(major, minor, build_number)`` with items as follows:

    * ``major``:  The result of :func:`distro.major_version`.

    * ``minor``:  The result of :func:`distro.minor_version`.

    * ``build_number``:  The result of :func:`distro.build_number`.

    For a description of the *best* parameter, see the :func:`distro.version`
    method.
    """
    return _distro.version_parts(best)


def major_version(best: bool = False) -> str:
    """
    Return the major version of the current OS distribution, as a string,
    if provided.
    Otherwise, the empty string is returned. The major version is the first
    part of the dot-separated version string.

    For a description of the *best* parameter, see the :func:`distro.version`
    method.
    """
    return _distro.major_version(best)


def minor_version(best: bool = False) -> str:
    """
    Return the minor version of the current OS distribution, as a string,
    if provided.
    Otherwise, the empty string is returned. The minor version is the second
    part of the dot-separated version string.

    For a description of the *best* parameter, see the :func:`distro.version`
    method.
    """
    return _distro.minor_version(best)


def build_number(best: bool = False) -> str:
    """
    Return the build number of the current OS distribution, as a string,
    if provided.
    Otherwise, the empty string is returned. The build number is the third part
    of the dot-separated version string.

    For a description of the *best* parameter, see the :func:`distro.version`
    method.
    """
    return _distro.build_number(best)


def like() -> str:
    """
    Return a space-separated list of distro IDs of distributions that are
    closely related to the current OS distribution with regard to packaging
    and programming interfaces, for example distributions the current
    distribution is a derivative of.

    **Lookup hierarchy:**

    This information item is only provided by the os-release file.
    For details, see the description of the "ID_LIKE" attribute in the
    `os-release man page
    <http://www.freedesktop.org/software/systemd/man/os-release.html>`_.
    """
    return _distro.like()


def codename() -> str:
    """
    Return the codename for the release of the current OS distribution,
    as a string.

    If the distribution does not have a codename, an empty string is returned.

    Note that the returned codename is not always really a codename. For
    example, openSUSE returns "x86_64". This function does not handle such
    cases in any special way and just returns the string it finds, if any.

    **Lookup hierarchy:**

    * the codename within the "VERSION" attribute of the os-release file, if
      provided,

    * the value of the "Codename" attribute returned by the lsb_release
      command,

    * the value of the "<codename>" field of the distro release file.
    """
    return _distro.codename()


def info(pretty: bool = False, best: bool = False) -> InfoDict:
    """
    Return certain machine-readable information items about the current OS
    distribution in a dictionary, as shown in the following example:

    .. sourcecode:: python

        {
            'id': 'rhel',
            'version': '7.0',
            'version_parts': {
                'major': '7',
                'minor': '0',
                'build_number': ''
            },
            'like': 'fedora',
            'codename': 'Maipo'
        }

    The dictionary structure and keys are always the same, regardless of which
    information items are available in the underlying data sources. The values
    for the various keys are as follows:

    * ``id``:  The result of :func:`distro.id`.

    * ``version``:  The result of :func:`distro.version`.

    * ``version_parts -> major``:  The result of :func:`distro.major_version`.

    * ``version_parts -> minor``:  The result of :func:`distro.minor_version`.

    * ``version_parts -> build_number``:  The result of
      :func:`distro.build_number`.

    * ``like``:  The result of :func:`distro.like`.

    * ``codename``:  The result of :func:`distro.codename`.

    For a description of the *pretty* and *best* parameters, see the
    :func:`distro.version` method.
    """
    return _distro.info(pretty, best)


def os_release_info() -> Dict[str, str]:
    """
    Return a dictionary containing key-value pairs for the information items
    from the os-release file data source of the current OS distribution.

    See `os-release file`_ for details about these information items.
    """
    return _distro.os_release_info()


def lsb_release_info() -> Dict[str, str]:
    """
    Return a dictionary containing key-value pairs for the information items
    from the lsb_release command data source of the current OS distribution.

    See `lsb_release command output`_ for details about these information
    items.
    """
    return _distro.lsb_release_info()


def distro_release_info() -> Dict[str, str]:
    """
    Return a dictionary containing key-value pairs for the information items
    from the distro release file data source of the current OS distribution.

    See `distro release file`_ for details about these information items.
    """
    return _distro.distro_release_info()


def uname_info() -> Dict[str, str]:
    """
    Return a dictionary containing key-value pairs for the information items
    from the uname command data source of the current OS distribution.
    """
    return _distro.uname_info()


def os_release_attr(attribute: str) -> str:
    """
    Return a single named information item from the os-release file data source
    of the current OS distribution.

    Parameters:

    * ``attribute`` (string): Key of the information item.

    Returns:

    * (string): Value of the information item, if the item exists.
      The empty string, if the item does not exist.

    See `os-release file`_ for details about these information items.
    """
    return _distro.os_release_attr(attribute)


def lsb_release_attr(attribute: str) -> str:
    """
    Return a single named information item from the lsb_release command output
    data source of the current OS distribution.

    Parameters:

    * ``attribute`` (string): Key of the information item.

    Returns:

    * (string): Value of the information item, if the item exists.
      The empty string, if the item does not exist.

    See `lsb_release command output`_ for details about these information
    items.
    """
    return _distro.lsb_release_attr(attribute)


def distro_release_attr(attribute: str) -> str:
    """
    Return a single named information item from the distro release file
    data source of the current OS distribution.

    Parameters:

    * ``attribute`` (string): Key of the information item.

    Returns:

    * (string): Value of the information item, if the item exists.
      The empty string, if the item does not exist.

    See `distro release file`_ for details about these information items.
    """
    return _distro.distro_release_attr(attribute)


def uname_attr(attribute: str) -> str:
    """
    Return a single named information item from the uname command output
    data source of the current OS distribution.

    Parameters:

    * ``attribute`` (string): Key of the information item.

    Returns:

    * (string): Value of the information item, if the item exists.
      The empty string, if the item does not exist.
    """
    return _distro.uname_attr(attribute)


try:
    from functools import cached_property
except ImportError:
    # Python < 3.8
    class cached_property:  # type: ignore
        """A version of @property which caches the value.  On access, it calls the
        underlying function and sets the value in `__dict__` so future accesses
        will not re-call the property.
        """

        def __init__(self, f: Callable[[Any], Any]) -> None:
            self._fname = f.__name__
            self._f = f

        def __get__(self, obj: Any, owner: Type[Any]) -> Any:
            assert obj is not None, f"call {self._fname} on an instance"
            ret = obj.__dict__[self._fname] = self._f(obj)
            return ret
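The fallback class above emulates :func:`functools.cached_property`. Its caching effect can be demonstrated with the stdlib version and a hypothetical ``Demo`` class (a minimal sketch, not part of this module):

```python
from functools import cached_property  # the fallback class above emulates this

call_count = 0

class Demo:
    @cached_property
    def value(self):
        # Executed only on the first access; the result is then stored in
        # the instance __dict__ and shadows the descriptor afterwards.
        global call_count
        call_count += 1
        return 42

d = Demo()
assert d.value == 42
assert d.value == 42  # second access reads the cached attribute
print(call_count)     # the underlying function ran only once
```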


class LinuxDistribution:
    """
    Provides information about an OS distribution.

    This package creates a private module-global instance of this class with
    default initialization arguments that is used by the
    `consolidated accessor functions`_ and `single source accessor functions`_.
    By using default initialization arguments, that module-global instance
    returns data about the current OS distribution (i.e. the distro this
    package runs on).

    Normally, it is not necessary to create additional instances of this class.
    However, in situations where control is needed over the exact data sources
    that are used, instances of this class can be created with a specific
    distro release file, or a specific os-release file, or without invoking the
    lsb_release command.
    """

    def __init__(
        self,
        include_lsb: Optional[bool] = None,
        os_release_file: str = "",
        distro_release_file: str = "",
        include_uname: Optional[bool] = None,
        root_dir: Optional[str] = None,
        include_oslevel: Optional[bool] = None,
    ) -> None:
        """
        The initialization method of this class gathers information from the
        available data sources, and stores that in private instance attributes.
        Subsequent access to the information items uses these private instance
        attributes, so that the data sources are read only once.

        Parameters:

        * ``include_lsb`` (bool): Controls whether the
          `lsb_release command output`_ is included as a data source.

          If the lsb_release command is not available in the program execution
          path, the data source for the lsb_release command will be empty.

        * ``os_release_file`` (string): The path name of the
          `os-release file`_ that is to be used as a data source.

          An empty string (the default) will cause the default path name to
          be used (see `os-release file`_ for details).

          If the specified or defaulted os-release file does not exist, the
          data source for the os-release file will be empty.

        * ``distro_release_file`` (string): The path name of the
          `distro release file`_ that is to be used as a data source.

          An empty string (the default) will cause a default search algorithm
          to be used (see `distro release file`_ for details).

          If the specified distro release file does not exist, or if no default
          distro release file can be found, the data source for the distro
          release file will be empty.

        * ``include_uname`` (bool): Controls whether uname command output is
          included as a data source. If the uname command is not available in
          the program execution path, the data source for the uname command
          will be empty.

        * ``root_dir`` (string): The absolute path to the root directory to use
          to find distro-related information files. Note that ``include_*``
          parameters must not be enabled in combination with ``root_dir``.

        * ``include_oslevel`` (bool): Controls whether (AIX) oslevel command
          output is included as a data source. If the oslevel command is not
          available in the program execution path, the data source will be
          empty.

        Public instance attributes:

        * ``os_release_file`` (string): The path name of the
          `os-release file`_ that is actually used as a data source. The
          empty string if no os-release file is used as a data source.

        * ``distro_release_file`` (string): The path name of the
          `distro release file`_ that is actually used as a data source. The
          empty string if no distro release file is used as a data source.

        * ``include_lsb`` (bool): The result of the ``include_lsb`` parameter.
          This controls whether the lsb information will be loaded.

        * ``include_uname`` (bool): The result of the ``include_uname``
          parameter. This controls whether the uname information will
          be loaded.

        * ``include_oslevel`` (bool): The result of the ``include_oslevel``
          parameter. This controls whether (AIX) oslevel information will be
          loaded.

        * ``root_dir`` (string): The result of the ``root_dir`` parameter.
          The absolute path to the root directory to use to find distro-related
          information files.

        Raises:

        * :py:exc:`ValueError`: The combination of initialization parameters
          is not supported.

        * :py:exc:`OSError`: Some I/O issue with an os-release file or distro
          release file.

        * :py:exc:`UnicodeError`: A data source has unexpected characters or
          uses an unexpected encoding.
        """
        self.root_dir = root_dir
        self.etc_dir = os.path.join(root_dir, "etc") if root_dir else _UNIXCONFDIR
        self.usr_lib_dir = (
            os.path.join(root_dir, "usr/lib") if root_dir else _UNIXUSRLIBDIR
        )

        if os_release_file:
            self.os_release_file = os_release_file
        else:
            etc_dir_os_release_file = os.path.join(self.etc_dir, _OS_RELEASE_BASENAME)
            usr_lib_os_release_file = os.path.join(
                self.usr_lib_dir, _OS_RELEASE_BASENAME
            )

            # NOTE: The idea is to respect order **and** have it set
            #       at all times for API backwards compatibility.
            if os.path.isfile(etc_dir_os_release_file) or not os.path.isfile(
                usr_lib_os_release_file
            ):
                self.os_release_file = etc_dir_os_release_file
            else:
                self.os_release_file = usr_lib_os_release_file

        self.distro_release_file = distro_release_file or ""  # updated later

        is_root_dir_defined = root_dir is not None
        if is_root_dir_defined and (include_lsb or include_uname or include_oslevel):
            raise ValueError(
                "Including subprocess data sources from specific root_dir is disallowed"
                " to prevent false information"
            )
        self.include_lsb = (
            include_lsb if include_lsb is not None else not is_root_dir_defined
        )
        self.include_uname = (
            include_uname if include_uname is not None else not is_root_dir_defined
        )
        self.include_oslevel = (
            include_oslevel if include_oslevel is not None else not is_root_dir_defined
        )

    def __repr__(self) -> str:
        """Return repr of all info"""
        return (
            "LinuxDistribution("
            "os_release_file={self.os_release_file!r}, "
            "distro_release_file={self.distro_release_file!r}, "
            "include_lsb={self.include_lsb!r}, "
            "include_uname={self.include_uname!r}, "
            "include_oslevel={self.include_oslevel!r}, "
            "root_dir={self.root_dir!r}, "
            "_os_release_info={self._os_release_info!r}, "
            "_lsb_release_info={self._lsb_release_info!r}, "
            "_distro_release_info={self._distro_release_info!r}, "
            "_uname_info={self._uname_info!r}, "
            "_oslevel_info={self._oslevel_info!r})".format(self=self)
        )

    def linux_distribution(
        self, full_distribution_name: bool = True
    ) -> Tuple[str, str, str]:
        """
        Return information about the OS distribution that is compatible
        with Python's :func:`platform.linux_distribution`, supporting a subset
        of its parameters.

        For details, see :func:`distro.linux_distribution`.
        """
        return (
            self.name() if full_distribution_name else self.id(),
            self.version(),
            self._os_release_info.get("release_codename") or self.codename(),
        )

    def id(self) -> str:
        """Return the distro ID of the OS distribution, as a string.

        For details, see :func:`distro.id`.
        """

        def normalize(distro_id: str, table: Dict[str, str]) -> str:
            distro_id = distro_id.lower().replace(" ", "_")
            return table.get(distro_id, distro_id)

        distro_id = self.os_release_attr("id")
        if distro_id:
            return normalize(distro_id, NORMALIZED_OS_ID)

        distro_id = self.lsb_release_attr("distributor_id")
        if distro_id:
            return normalize(distro_id, NORMALIZED_LSB_ID)

        distro_id = self.distro_release_attr("id")
        if distro_id:
            return normalize(distro_id, NORMALIZED_DISTRO_ID)

        distro_id = self.uname_attr("id")
        if distro_id:
            return normalize(distro_id, NORMALIZED_DISTRO_ID)

        return ""

    def name(self, pretty: bool = False) -> str:
        """
        Return the name of the OS distribution, as a string.

        For details, see :func:`distro.name`.
        """
        name = (
            self.os_release_attr("name")
            or self.lsb_release_attr("distributor_id")
            or self.distro_release_attr("name")
            or self.uname_attr("name")
        )
        if pretty:
            name = self.os_release_attr("pretty_name") or self.lsb_release_attr(
                "description"
            )
            if not name:
                name = self.distro_release_attr("name") or self.uname_attr("name")
                version = self.version(pretty=True)
                if version:
                    name = f"{name} {version}"
        return name or ""

    def version(self, pretty: bool = False, best: bool = False) -> str:
        """
        Return the version of the OS distribution, as a string.

        For details, see :func:`distro.version`.
        """
        versions = [
            self.os_release_attr("version_id"),
            self.lsb_release_attr("release"),
            self.distro_release_attr("version_id"),
            self._parse_distro_release_content(self.os_release_attr("pretty_name")).get(
                "version_id", ""
            ),
            self._parse_distro_release_content(
                self.lsb_release_attr("description")
            ).get("version_id", ""),
            self.uname_attr("release"),
        ]
        if self.uname_attr("id").startswith("aix"):
            # On AIX platforms, prefer oslevel command output.
            versions.insert(0, self.oslevel_info())
        elif self.id() == "debian" or "debian" in self.like().split():
            # On Debian-like, add debian_version file content to candidates list.
            versions.append(self._debian_version)
        version = ""
        if best:
            # This algorithm uses the last version in priority order that has
            # the best precision. If the versions are not in conflict, that
            # does not matter; otherwise, using the last one instead of the
            # first one might be considered a surprise.
            for v in versions:
                if v.count(".") > version.count(".") or version == "":
                    version = v
        else:
            for v in versions:
                if v != "":
                    version = v
                    break
        if pretty and version and self.codename():
            version = f"{version} ({self.codename()})"
        return version

    def version_parts(self, best: bool = False) -> Tuple[str, str, str]:
        """
        Return the version of the OS distribution, as a tuple of version
        numbers.

        For details, see :func:`distro.version_parts`.
        """
        version_str = self.version(best=best)
        if version_str:
            version_regex = re.compile(r"(\d+)\.?(\d+)?\.?(\d+)?")
            matches = version_regex.match(version_str)
            if matches:
                major, minor, build_number = matches.groups()
                return major, minor or "", build_number or ""
        return "", "", ""

    def major_version(self, best: bool = False) -> str:
        """
        Return the major version number of the current distribution.

        For details, see :func:`distro.major_version`.
        """
        return self.version_parts(best)[0]

    def minor_version(self, best: bool = False) -> str:
        """
        Return the minor version number of the current distribution.

        For details, see :func:`distro.minor_version`.
        """
        return self.version_parts(best)[1]

    def build_number(self, best: bool = False) -> str:
        """
        Return the build number of the current distribution.

        For details, see :func:`distro.build_number`.
        """
        return self.version_parts(best)[2]

    def like(self) -> str:
        """
        Return the IDs of distributions that are like the OS distribution.

        For details, see :func:`distro.like`.
        """
        return self.os_release_attr("id_like") or ""

    def codename(self) -> str:
        """
        Return the codename of the OS distribution.

        For details, see :func:`distro.codename`.
        """
        try:
            # Handle os_release specially since distros might purposefully set
            # this to empty string to have no codename
            return self._os_release_info["codename"]
        except KeyError:
            return (
                self.lsb_release_attr("codename")
                or self.distro_release_attr("codename")
                or ""
            )

    def info(self, pretty: bool = False, best: bool = False) -> InfoDict:
        """
        Return certain machine-readable information about the OS
        distribution.

        For details, see :func:`distro.info`.
        """
        return InfoDict(
            id=self.id(),
            version=self.version(pretty, best),
            version_parts=VersionDict(
                major=self.major_version(best),
                minor=self.minor_version(best),
                build_number=self.build_number(best),
            ),
            like=self.like(),
            codename=self.codename(),
        )

    def os_release_info(self) -> Dict[str, str]:
        """
        Return a dictionary containing key-value pairs for the information
        items from the os-release file data source of the OS distribution.

        For details, see :func:`distro.os_release_info`.
        """
        return self._os_release_info

    def lsb_release_info(self) -> Dict[str, str]:
        """
        Return a dictionary containing key-value pairs for the information
        items from the lsb_release command data source of the OS
        distribution.

        For details, see :func:`distro.lsb_release_info`.
        """
        return self._lsb_release_info

    def distro_release_info(self) -> Dict[str, str]:
        """
        Return a dictionary containing key-value pairs for the information
        items from the distro release file data source of the OS
        distribution.

        For details, see :func:`distro.distro_release_info`.
        """
        return self._distro_release_info

    def uname_info(self) -> Dict[str, str]:
        """
        Return a dictionary containing key-value pairs for the information
        items from the uname command data source of the OS distribution.

        For details, see :func:`distro.uname_info`.
        """
        return self._uname_info

    def oslevel_info(self) -> str:
        """
        Return the AIX oslevel command output.
        """
        return self._oslevel_info

    def os_release_attr(self, attribute: str) -> str:
        """
        Return a single named information item from the os-release file data
        source of the OS distribution.

        For details, see :func:`distro.os_release_attr`.
        """
        return self._os_release_info.get(attribute, "")

    def lsb_release_attr(self, attribute: str) -> str:
        """
        Return a single named information item from the lsb_release command
        output data source of the OS distribution.

        For details, see :func:`distro.lsb_release_attr`.
        """
        return self._lsb_release_info.get(attribute, "")

    def distro_release_attr(self, attribute: str) -> str:
        """
        Return a single named information item from the distro release file
        data source of the OS distribution.

        For details, see :func:`distro.distro_release_attr`.
        """
        return self._distro_release_info.get(attribute, "")

    def uname_attr(self, attribute: str) -> str:
        """
        Return a single named information item from the uname command
        output data source of the OS distribution.

        For details, see :func:`distro.uname_attr`.
        """
        return self._uname_info.get(attribute, "")

    @cached_property
    def _os_release_info(self) -> Dict[str, str]:
        """
        Get the information items from the specified os-release file.

        Returns:
            A dictionary containing all information items.
        """
        if os.path.isfile(self.os_release_file):
            with open(self.os_release_file, encoding="utf-8") as release_file:
                return self._parse_os_release_content(release_file)
        return {}

    @staticmethod
    def _parse_os_release_content(lines: TextIO) -> Dict[str, str]:
        """
        Parse the lines of an os-release file.

        Parameters:

        * lines: Iterable through the lines in the os-release file.
                 Each line must be a unicode string or a UTF-8 encoded byte
                 string.

        Returns:
            A dictionary containing all information items.
        """
        props = {}
        lexer = shlex.shlex(lines, posix=True)
        lexer.whitespace_split = True

        tokens = list(lexer)
        for token in tokens:
            # At this point, all shell-like parsing has been done (i.e.
            # comments processed, quotes and backslash escape sequences
            # processed, multi-line values assembled, trailing newlines
            # stripped, etc.), so the tokens are now either:
            # * variable assignments: var=value
            # * commands or their arguments (not allowed in os-release)
            # Ignore any tokens that are not variable assignments
            if "=" in token:
                k, v = token.split("=", 1)
                props[k.lower()] = v

        if "version" in props:
            # extract release codename (if any) from version attribute
            match = re.search(r"\((\D+)\)|,\s*(\D+)", props["version"])
            if match:
                release_codename = match.group(1) or match.group(2)
                props["codename"] = props["release_codename"] = release_codename

        if "version_codename" in props:
            # os-release added a version_codename field. Use that in
            # preference to anything else. Note that some distros purposefully
            # do not have codenames; they should be setting
            # version_codename="".
            props["codename"] = props["version_codename"]
        elif "ubuntu_codename" in props:
            # Same as above but a non-standard field name used on older Ubuntus
            props["codename"] = props["ubuntu_codename"]

        return props
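    # The shlex-based tokenization used above can be tried on a small
    # os-release snippet (sample data, not read from the system):

```python
import shlex

SAMPLE = '''
NAME="Ubuntu"
VERSION="22.04 LTS (Jammy Jellyfish)"
ID=ubuntu
VERSION_CODENAME=jammy
'''

props = {}
lexer = shlex.shlex(SAMPLE, posix=True)
lexer.whitespace_split = True
for token in lexer:
    # Quotes and escapes are already resolved; keep only assignments.
    if "=" in token:
        k, v = token.split("=", 1)
        props[k.lower()] = v

print(props["name"])              # -> Ubuntu
print(props["version_codename"])  # -> jammy
```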

    @cached_property
    def _lsb_release_info(self) -> Dict[str, str]:
        """
        Get the information items from the lsb_release command output.

        Returns:
            A dictionary containing all information items.
        """
        if not self.include_lsb:
            return {}
        try:
            cmd = ("lsb_release", "-a")
            stdout = subprocess.check_output(cmd, stderr=subprocess.DEVNULL)
        # Command not found or lsb_release returned error
        except (OSError, subprocess.CalledProcessError):
            return {}
        content = self._to_str(stdout).splitlines()
        return self._parse_lsb_release_content(content)

    @staticmethod
    def _parse_lsb_release_content(lines: Iterable[str]) -> Dict[str, str]:
        """
        Parse the output of the lsb_release command.

        Parameters:

        * lines: Iterable through the lines of the lsb_release output.
                 Each line must be a unicode string or a UTF-8 encoded byte
                 string.

        Returns:
            A dictionary containing all information items.
        """
        props = {}
        for line in lines:
            kv = line.strip("\n").split(":", 1)
            if len(kv) != 2:
                # Ignore lines without colon.
                continue
            k, v = kv
            props.update({k.replace(" ", "_").lower(): v.strip()})
        return props
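    # The key normalization above can be seen on sample lsb_release output
    # (hand-written sample lines, not actual command output):

```python
SAMPLE_LINES = [
    "Distributor ID:\tUbuntu",
    "Description:\tUbuntu 22.04 LTS",
    "Release:\t22.04",
    "Codename:\tjammy",
    "no colon on this line",
]

props = {}
for line in SAMPLE_LINES:
    kv = line.strip("\n").split(":", 1)
    if len(kv) != 2:
        continue  # lines without a colon are ignored
    k, v = kv
    # Keys are lower-cased with spaces replaced; values are stripped.
    props[k.replace(" ", "_").lower()] = v.strip()

print(props["distributor_id"])  # -> Ubuntu
print(props["release"])         # -> 22.04
```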

    @cached_property
    def _uname_info(self) -> Dict[str, str]:
        if not self.include_uname:
            return {}
        try:
            cmd = ("uname", "-rs")
            stdout = subprocess.check_output(cmd, stderr=subprocess.DEVNULL)
        except OSError:
            return {}
        content = self._to_str(stdout).splitlines()
        return self._parse_uname_content(content)

    @cached_property
    def _oslevel_info(self) -> str:
        if not self.include_oslevel:
            return ""
        try:
            stdout = subprocess.check_output("oslevel", stderr=subprocess.DEVNULL)
        except (OSError, subprocess.CalledProcessError):
            return ""
        return self._to_str(stdout).strip()

    @cached_property
    def _debian_version(self) -> str:
        try:
            with open(
                os.path.join(self.etc_dir, "debian_version"), encoding="ascii"
            ) as fp:
                return fp.readline().rstrip()
        except FileNotFoundError:
            return ""

    @staticmethod
    def _parse_uname_content(lines: Sequence[str]) -> Dict[str, str]:
        if not lines:
            return {}
        props = {}
        match = re.search(r"^([^\s]+)\s+([\d\.]+)", lines[0].strip())
        if match:
            name, version = match.groups()

            # This is to prevent the Linux kernel version from
            # appearing as the 'best' version on otherwise
            # identifiable distributions.
            if name == "Linux":
                return {}
            props["id"] = name.lower()
            props["name"] = name
            props["release"] = version
        return props
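    # A standalone sketch of the same parsing, showing why a plain Linux
    # kernel line is discarded (an illustration, not the method itself):

```python
import re

def parse_uname(first_line):
    # First token: system name; second token: a dotted numeric version.
    match = re.search(r"^([^\s]+)\s+([\d\.]+)", first_line.strip())
    if not match:
        return {}
    name, version = match.groups()
    if name == "Linux":
        # A bare kernel version must not shadow real distro information.
        return {}
    return {"id": name.lower(), "name": name, "release": version}

print(parse_uname("FreeBSD 13.2"))  # -> {'id': 'freebsd', 'name': 'FreeBSD', 'release': '13.2'}
print(parse_uname("Linux 6.1.0"))   # -> {}
```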

    @staticmethod
    def _to_str(bytestring: bytes) -> str:
        encoding = sys.getfilesystemencoding()
        return bytestring.decode(encoding)

    @cached_property
    def _distro_release_info(self) -> Dict[str, str]:
        """
        Get the information items from the specified distro release file.

        Returns:
            A dictionary containing all information items.
        """
        if self.distro_release_file:
            # If it was specified, we use it and parse what we can, even if
            # its file name or content does not match the expected pattern.
            distro_info = self._parse_distro_release_file(self.distro_release_file)
            basename = os.path.basename(self.distro_release_file)
            # The file name pattern for user-specified distro release files
            # is somewhat more tolerant (compared to when searching for the
            # file), because we want to use what was specified as best as
            # possible.
            match = _DISTRO_RELEASE_BASENAME_PATTERN.match(basename)
        else:
            try:
                basenames = [
                    basename
                    for basename in os.listdir(self.etc_dir)
                    if basename not in _DISTRO_RELEASE_IGNORE_BASENAMES
                    and os.path.isfile(os.path.join(self.etc_dir, basename))
                ]
                # We sort for repeatability in cases where there are multiple
                # distro specific files; e.g. CentOS, Oracle, Enterprise all
                # containing `redhat-release` on top of their own.
                basenames.sort()
            except OSError:
                # This may occur when /etc is not readable but we can't be
                # sure about the *-release files. Check common entries of
                # /etc for information. If they turn out to not be there the
                # error is handled in `_parse_distro_release_file()`.
                basenames = _DISTRO_RELEASE_BASENAMES
            for basename in basenames:
                match = _DISTRO_RELEASE_BASENAME_PATTERN.match(basename)
                if match is None:
                    continue
                filepath = os.path.join(self.etc_dir, basename)
                distro_info = self._parse_distro_release_file(filepath)
                # The name is always present if the pattern matches.
                if "name" not in distro_info:
                    continue
                self.distro_release_file = filepath
                break
            else:  # the loop didn't "break": no candidate.
                return {}

        if match is not None:
            distro_info["id"] = match.group(1)

        # CloudLinux < 7: manually enrich info with proper id.
        if "cloudlinux" in distro_info.get("name", "").lower():
            distro_info["id"] = "cloudlinux"

        return distro_info
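
# Illustrative sketch (not part of the original module). The basename matching
# above depends on _DISTRO_RELEASE_BASENAME_PATTERN, which is defined earlier in
# this file; the pattern below is an assumption written to mirror it: an id,
# then "-" or "_", then the literal word "release" or "version".

```python
import re

# Assumed to mirror distro's _DISTRO_RELEASE_BASENAME_PATTERN.
BASENAME_PATTERN = re.compile(r"(\w+)[-_](release|version)$")

def release_file_id(basename):
    m = BASENAME_PATTERN.match(basename)
    return m.group(1) if m else None

# os-release and lsb-release would match too, which is why the search loop
# above filters basenames through an ignore list first.
print(release_file_id('centos-release'))     # centos
print(release_file_id('slackware-version'))  # slackware
print(release_file_id('passwd'))             # None
```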

    def _parse_distro_release_file(self, filepath: str) -> Dict[str, str]:
        """
        Parse a distro release file.

        Parameters:

        * filepath: Path name of the distro release file.

        Returns:
            A dictionary containing all information items.
        """
        try:
            with open(filepath, encoding="utf-8") as fp:
                # Only parse the first line. For instance, on SLES there
                # are multiple lines. We don't want them...
                return self._parse_distro_release_content(fp.readline())
        except OSError:
            # Ignore not being able to read a specific, seemingly version
            # related file.
            # See https://github.com/python-distro/distro/issues/162
            return {}

    @staticmethod
    def _parse_distro_release_content(line: str) -> Dict[str, str]:
        """
        Parse a line from a distro release file.

        Parameters:
        * line: Line from the distro release file. Must be a unicode string
                or a UTF-8 encoded byte string.

        Returns:
            A dictionary containing all information items.
        """
        matches = _DISTRO_RELEASE_CONTENT_REVERSED_PATTERN.match(line.strip()[::-1])
        distro_info = {}
        if matches:
            # regexp ensures non-None
            distro_info["name"] = matches.group(3)[::-1]
            if matches.group(2):
                distro_info["version_id"] = matches.group(2)[::-1]
            if matches.group(1):
                distro_info["codename"] = matches.group(1)[::-1]
        elif line:
            distro_info["name"] = line.strip()
        return distro_info
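
# Illustrative sketch (not part of the original module). The reversed-match
# trick above can be exercised standalone; the regex below is an assumption
# written to mirror distro's _DISTRO_RELEASE_CONTENT_REVERSED_PATTERN. The line
# is matched *reversed* so the trailing "(codename)" and the version are
# anchored no matter how many words the distro name contains.

```python
import re

REVERSED_PATTERN = re.compile(
    r"(?:[^)]*\)(.*)\()?"    # optional "(codename)", reversed
    r" *(?:STL )?"           # optional "LTS" marker, reversed
    r"([\d.+\-a-z]*\d)"      # version number, reversed
    r" *(?:esaeler *)?"      # the word "release", reversed
    r"(.+)"                  # distro name, reversed
)

def parse_release_line(line):
    m = REVERSED_PATTERN.match(line.strip()[::-1])
    info = {}
    if m:
        info['name'] = m.group(3)[::-1]
        if m.group(2):
            info['version_id'] = m.group(2)[::-1]
        if m.group(1):
            info['codename'] = m.group(1)[::-1]
    elif line:
        info['name'] = line.strip()
    return info

print(parse_release_line('CentOS Linux release 7.9.2009 (Core)'))
```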


_distro = LinuxDistribution()


def main() -> None:
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.DEBUG)
    logger.addHandler(logging.StreamHandler(sys.stdout))

    parser = argparse.ArgumentParser(description="OS distro info tool")
    parser.add_argument(
        "--json", "-j", help="Output in machine readable format", action="store_true"
    )

    parser.add_argument(
        "--root-dir",
        "-r",
        type=str,
        dest="root_dir",
        help="Path to the root filesystem directory (defaults to /)",
    )

    args = parser.parse_args()

    if args.root_dir:
        dist = LinuxDistribution(
            include_lsb=False,
            include_uname=False,
            include_oslevel=False,
            root_dir=args.root_dir,
        )
    else:
        dist = _distro

    if args.json:
        logger.info(json.dumps(dist.info(), indent=4, sort_keys=True))
    else:
        logger.info("Name: %s", dist.name(pretty=True))
        distribution_version = dist.version(pretty=True)
        logger.info("Version: %s", distribution_version)
        distribution_codename = dist.codename()
        logger.info("Codename: %s", distribution_codename)


if __name__ == "__main__":
    main()
--- agent360-1.3.1.dist-info/WHEEL ---
Wheel-Version: 1.0
Generator: setuptools (75.3.0)
Root-Is-Purelib: true
Tag: py3-none-any

--- agent360-1.3.1.dist-info/LICENSE ---
BSD Simplified License

Copyright (c) 2021, Plesk International GmbH <vincent.vanmegen@webpros.com>

All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

    * Redistributions of source code must retain the above copyright notice,
      this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above copyright notice,
      this list of conditions and the following disclaimer in the documentation
      and/or other materials provided with the distribution.
    * Neither the name of the copyright holder nor the names of its contributors
      may be used to endorse or promote products derived from this software
      without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
--- agent360-1.3.1.dist-info/RECORD ---
../../../bin/agent360,sha256=B0m6tR1wlh0Q_6RtwWYVU_xP27jdSWtguEq_0ynISEQ,216
../../../bin/hello360,sha256=RI0JCeyIXruYtOrG2QikB76dKCbpkJq_vI_uH9jL23g,218
../../../share/doc/agent360/LICENSE,sha256=rlX8l535b1VXIQ6lk-YDSV5nFQ-pkftddB61x8XTR68,1587
../../../share/doc/agent360/README.md,sha256=AVkUGuQF4v0ehLkj-fQk6XEQzhDgWBUENZBAhnDdfs8,3494
../../../share/doc/agent360/agent360-example.ini,sha256=RPmnYns6aQ0vh_QwxGmSqaFrotMD4ySW4LVf1QZWPZE,1558
agent360-1.3.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
agent360-1.3.1.dist-info/LICENSE,sha256=rlX8l535b1VXIQ6lk-YDSV5nFQ-pkftddB61x8XTR68,1587
agent360-1.3.1.dist-info/METADATA,sha256=Yy214Zx_1J_qw5noeQH5fh3g1gsYWoeyKjgQZYMrjKc,4524
agent360-1.3.1.dist-info/RECORD,,
agent360-1.3.1.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
agent360-1.3.1.dist-info/WHEEL,sha256=P9jw-gEje8ByB7_hXoICnHtVCrEwMQh-630tKvQWehc,91
agent360-1.3.1.dist-info/entry_points.txt,sha256=fm63-mUwnvIDDP9D0uK8usm7CFmIKMxZ_-OnVgMtvPs,87
agent360-1.3.1.dist-info/top_level.txt,sha256=nEiBZhAn7vjXpeOJoErGJwWdcxFTRWLKCV9erQ3uNhQ,9
agent360/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
agent360/__pycache__/__init__.cpython-39.pyc,,
agent360/__pycache__/agent360.cpython-39.pyc,,
agent360/agent360.py,sha256=7E2G_f_3_Dl0NQ-QHvj_9wusCXIu4-Yjk5p9_KjOtrI,28651
agent360/plugins/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
agent360/plugins/__pycache__/__init__.cpython-39.pyc,,
agent360/plugins/__pycache__/apt-updates.cpython-39.pyc,,
agent360/plugins/__pycache__/asterisk.cpython-39.pyc,,
agent360/plugins/__pycache__/bind.cpython-39.pyc,,
agent360/plugins/__pycache__/bird.cpython-39.pyc,,
agent360/plugins/__pycache__/bitninja.cpython-39.pyc,,
agent360/plugins/__pycache__/cloudlinux-dbgov.cpython-39.pyc,,
agent360/plugins/__pycache__/cloudlinux.cpython-39.pyc,,
agent360/plugins/__pycache__/cpanel.cpython-39.pyc,,
agent360/plugins/__pycache__/cpu.cpython-39.pyc,,
agent360/plugins/__pycache__/cpu_freq.cpython-39.pyc,,
agent360/plugins/__pycache__/dirsize.cpython-39.pyc,,
agent360/plugins/__pycache__/diskinodes.cpython-39.pyc,,
agent360/plugins/__pycache__/diskstatus-nvme.cpython-39.pyc,,
agent360/plugins/__pycache__/diskstatus.cpython-39.pyc,,
agent360/plugins/__pycache__/diskusage.cpython-39.pyc,,
agent360/plugins/__pycache__/docker.cpython-39.pyc,,
agent360/plugins/__pycache__/dovecot.cpython-39.pyc,,
agent360/plugins/__pycache__/elasticsearch.cpython-39.pyc,,
agent360/plugins/__pycache__/exim.cpython-39.pyc,,
agent360/plugins/__pycache__/fail2ban.cpython-39.pyc,,
agent360/plugins/__pycache__/gpu.cpython-39.pyc,,
agent360/plugins/__pycache__/haproxy.cpython-39.pyc,,
agent360/plugins/__pycache__/httpd.cpython-39.pyc,,
agent360/plugins/__pycache__/iostat.cpython-39.pyc,,
agent360/plugins/__pycache__/janus.cpython-39.pyc,,
agent360/plugins/__pycache__/kamailio.cpython-39.pyc,,
agent360/plugins/__pycache__/litespeed.cpython-39.pyc,,
agent360/plugins/__pycache__/loadavg.cpython-39.pyc,,
agent360/plugins/__pycache__/loggedin.cpython-39.pyc,,
agent360/plugins/__pycache__/mailq.cpython-39.pyc,,
agent360/plugins/__pycache__/mdstat.cpython-39.pyc,,
agent360/plugins/__pycache__/megacli.cpython-39.pyc,,
agent360/plugins/__pycache__/memcached.cpython-39.pyc,,
agent360/plugins/__pycache__/memory.cpython-39.pyc,,
agent360/plugins/__pycache__/minecraft.cpython-39.pyc,,
agent360/plugins/__pycache__/mongodb.cpython-39.pyc,,
agent360/plugins/__pycache__/mysql.cpython-39.pyc,,
agent360/plugins/__pycache__/network.cpython-39.pyc,,
agent360/plugins/__pycache__/nginx.cpython-39.pyc,,
agent360/plugins/__pycache__/openvpn.cpython-39.pyc,,
agent360/plugins/__pycache__/phpfpm.cpython-39.pyc,,
agent360/plugins/__pycache__/ping.cpython-39.pyc,,
agent360/plugins/__pycache__/plesk-cgroups.cpython-39.pyc,,
agent360/plugins/__pycache__/plugins.cpython-39.pyc,,
agent360/plugins/__pycache__/postfix.cpython-39.pyc,,
agent360/plugins/__pycache__/powerdns.cpython-39.pyc,,
agent360/plugins/__pycache__/process.cpython-39.pyc,,
agent360/plugins/__pycache__/proftpd.cpython-39.pyc,,
agent360/plugins/__pycache__/rabbitmq.cpython-39.pyc,,
agent360/plugins/__pycache__/redis_stat.cpython-39.pyc,,
agent360/plugins/__pycache__/sleeper.cpython-39.pyc,,
agent360/plugins/__pycache__/swap.cpython-39.pyc,,
agent360/plugins/__pycache__/system.cpython-39.pyc,,
agent360/plugins/__pycache__/tcpports.cpython-39.pyc,,
agent360/plugins/__pycache__/temp.cpython-39.pyc,,
agent360/plugins/__pycache__/unbound.cpython-39.pyc,,
agent360/plugins/__pycache__/vms.cpython-39.pyc,,
agent360/plugins/__pycache__/wp-toolkit.cpython-39.pyc,,
agent360/plugins/__pycache__/yum-updates.cpython-39.pyc,,
agent360/plugins/apt-updates.py,sha256=J_GHiJa1hIZ_Kz1EL6fU8d52uUWpMgtbrJewu8sft3I,1260
agent360/plugins/asterisk.py,sha256=ZjWqeDwvjYVeTI3siUDVh_IpNHND249aJHGRQFeDQnE,1012
agent360/plugins/bind.py,sha256=5Nrkvb5ZWWtNHLT4NitUQghad7uTZEIvXShBEZIbpQc,732
agent360/plugins/bird.py,sha256=sTfJVI1x0GuaDX2MmVgTtx9ZGjjAvEZT-yUywvDTYl4,841
agent360/plugins/bitninja.py,sha256=hJGFYFLI0GoC6t9r09FxSJKA0duw9KIYuvXLNYA2cos,1302
agent360/plugins/cloudlinux-dbgov.py,sha256=z-54qxZeDuWuFlaRZzPb6Y7KTIba4GndBpqz1OZg8ic,1066
agent360/plugins/cloudlinux.py,sha256=lVsMoGhvaxCZQ9syZ4fR8SLB2lpEH-iCHpjyHtyWGuE,1020
agent360/plugins/cpanel.py,sha256=9ZjWYyo8OO9SlLaCyiur0cVS4r8kt5ip550qB-BJVTE,1441
agent360/plugins/cpu.py,sha256=2wH_xaqxzgHPKRe0iiNBao6Xu6sWI0O6z4xDrAEAEak,2020
agent360/plugins/cpu_freq.py,sha256=PSO0Sn_u5OgesWhJ2yA1577uni8I1xxa6R9ItMPSrko,583
agent360/plugins/dirsize.py,sha256=hmlhIcdhFls-E2qqvB_-f8679pJcnftQn_6qDcGvIE4,617
agent360/plugins/diskinodes.py,sha256=0QB930_72RSETzZFyZo52eXABnv0cvVHajRZmz2-wWc,652
agent360/plugins/diskstatus-nvme.py,sha256=3yf4H33PDZGIpVz8xq3ELmHgH5eVWlOhAEuvt9EJkQg,1618
agent360/plugins/diskstatus.py,sha256=iFG4eHOLYr2ajUrvhPnqj-d64nEfQTXGbvreIBEZLbA,2005
agent360/plugins/diskusage.py,sha256=hXjAWw4OH8kXtgkLn-RjFfdUE6fJS4xi0lD9ABRupAQ,4140
agent360/plugins/docker.py,sha256=C2J8gpqfpZezd-AqgchEwW3PL1f_B2H2OhGfJJJcSn0,3442
agent360/plugins/dovecot.py,sha256=Q2eOrA2fKOcAi-YS8Fk4dGZG5Id_5P-UEijNBRLmP-c,1304
agent360/plugins/elasticsearch.py,sha256=FKhwWAgnS5w5mbmMIegwOQjFmLm4uexv7aKyHbJt4FE,4342
agent360/plugins/exim.py,sha256=G29PLSuvZJnsSH-SgpfVpjWG_xiu4mAXXbV37QelIKY,484
agent360/plugins/fail2ban.py,sha256=zWZtBGt3s5J41ap0VvjuLJiYN8S_-pou9ewIdokf7E4,831
agent360/plugins/gpu.py,sha256=skd5NVE5CSNPemgp_VQOC4zh32kAhjlTX6PD8TlnzfU,959
agent360/plugins/haproxy.py,sha256=A3jfTp3XrtARDWNhULZX237LVBRLioyPpDS9eGayaY4,4274
agent360/plugins/httpd.py,sha256=rOXffo8oourMfWhTbSiyqvkP6Y66AzaJXkNrMRgF-vM,2800
agent360/plugins/iostat.py,sha256=06vIVDraZhs1_KSSmO7M9CqY_d02wiXT3ihBbe4zVfk,5227
agent360/plugins/janus.py,sha256=kEx750HYckpq2ytOM20HxFlVWgXkUetPh3Db4Btku_0,705
agent360/plugins/kamailio.py,sha256=pdVEqCjK5un1KxaRl7o4as5LdhGQvltO7hunt1us3LU,556
agent360/plugins/litespeed.py,sha256=BPkc6IQWqn1kNn--I-m3z2SBscBIgmMbH--BRsRWPaM,3114
agent360/plugins/loadavg.py,sha256=RUcVOW0ARuWdu1z1quiEadCyhA_f1cqw0i6Qz0hCjKE,332
agent360/plugins/loggedin.py,sha256=b75Ytq4pI009wngG1TvR613vbRp6rWAFVg0leWR63wI,400
agent360/plugins/mailq.py,sha256=E8A_RxVVoQZtFW8TuRegX4mSC9SkBNnsaBCDGl9JFIo,528
agent360/plugins/mdstat.py,sha256=JsgShooY7H9gFsUHnXtNAuC88IIPFG5RtY1LRaTkTN0,1388
agent360/plugins/megacli.py,sha256=OicA97qoISTlIoqUIVh2Q713u3iVt4rBkzSxgNardc0,2787
agent360/plugins/memcached.py,sha256=GG7baBI6lyUf9ayltTj0b3bnbI1S-tlHFCLbiWbZ7l0,3165
agent360/plugins/memory.py,sha256=j7opFfwpxIlklSepaO19PEkSErWv5N20bK_0qOVBzK4,932
agent360/plugins/minecraft.py,sha256=H47Sz4b5mecv1A8Qe9DJTffSD8cwkk0afROnrZL_qEs,2410
agent360/plugins/mongodb.py,sha256=hiJC04IxeB3JeMB3vU81XJms8f43JlZQK7MLaeJcFa4,5766
agent360/plugins/mysql.py,sha256=zA7N0s0cZr2-x3Z_x0n2KhiNPzdiaGAlu_QX7ojGKvo,5256
agent360/plugins/network.py,sha256=9AwWtLKIiQvBgoVRf1Bqn77g6h-7rlzqbhtk5l18D1E,2786
agent360/plugins/nginx.py,sha256=5N813PVo4Pq55Qgyf9nPWNJwyv5Cz1KEB_RpVQ3I148,3275
agent360/plugins/openvpn.py,sha256=GnK_mBggxVugVcf0fYdmO625cYXrh9FNbWYTCa1D6jk,2223
agent360/plugins/phpfpm.py,sha256=2vAKevVETekmINjx4IUI3eeWOJuH0Vh_0LNgsh45sa0,2781
agent360/plugins/ping.py,sha256=Gcv7OTNMz0zSQG_jmuMWNQkG4Y8mSfn4tcAd8K4wWgc,3165
agent360/plugins/plesk-cgroups.py,sha256=O-sfLzwxcLVvs3k7NTmomHDj30Y60FwWsxokIDr-xjw,6330
agent360/plugins/plugins.py,sha256=uqAfswqOxJ4W3S2O29al_cc2lUw-riihkOgeiy2WhrU,2538
agent360/plugins/postfix.py,sha256=Cwt1YHi4ck17rKHPUhUvgHxi06x-ewQpuslBC91fCrU,1774
agent360/plugins/powerdns.py,sha256=eFsyyvMCfjj056DlP0ZEX72vC8zDAOauX88jQcQoYz4,4669
agent360/plugins/process.py,sha256=GlAg2MRsLn9Zhadxl6oCmHLfu7dWEGHyxuV51bSHEPg,4365
agent360/plugins/proftpd.py,sha256=E0PPEaOQxPCAZi2wkMXQdx0Mc7ud2k0zfYPxkGdzWA0,1130
agent360/plugins/rabbitmq.py,sha256=LXlhROb8MgzsJBUdbxtOTJ9wvQenk_MZOh6iQzQIOoo,3569
agent360/plugins/redis_stat.py,sha256=VwdCK8Qba0DShBpoH1it_GmRoCz7Cap3uubhQiU0FgI,5615
agent360/plugins/sleeper.py,sha256=eu1Hh1Ut95PIUVn9e80IX9HPiQZ7J5Xki05nr7Jig8g,244
agent360/plugins/swap.py,sha256=dM2tMUevsg5z427xVq3GqOofGls9UhO__nZ8ieuDuMw,364
agent360/plugins/system.py,sha256=kVNxjnEsqrHW-qYTNCHY6d11RdeqJ8RXPp1a-lXPehs,5811
agent360/plugins/tcpports.py,sha256=j1K94S3y40M8W8pAjX0JAswax8xvgTqgmr-ZzG1XSEM,1110
agent360/plugins/temp.py,sha256=tEdkXbq4J8Po6YfwJtjVgnXaaVfKv0norua4Kcr1kaw,1480
agent360/plugins/unbound.py,sha256=4uwzh5R0vdcQIsmq6xrQrU007svcKAGo6C7AwBQa1Lw,3316
agent360/plugins/vms.py,sha256=ybUqoznYgxtCTcFCifhwiTWpzBZb9e0PuaFR1qOLNjc,6352
agent360/plugins/wp-toolkit.py,sha256=3ZPaD1q_K2gD_VziTvnqhVV39VrlaP10XaC9UQF5mvY,1845
agent360/plugins/yum-updates.py,sha256=5U22i0LIa6FfXJnhJK1u3FpM4VQQbmfTLQi6MntSrug,810
--- agent360-1.3.1.dist-info/METADATA ---
Metadata-Version: 2.1
Name: agent360
Version: 1.3.1
Summary: 360 agent
Home-page: https://github.com/plesk/agent360
Author: 360
Author-email: 360support@webpros.com
Maintainer: 360
Maintainer-email: 360support@webpros.com
License: BSD-3-Clause
Keywords: 360 system monitoring agent
Classifier: Development Status :: 5 - Production/Stable
Classifier: Environment :: No Input/Output (Daemon)
Classifier: Intended Audience :: System Administrators
Classifier: License :: OSI Approved :: BSD License
Classifier: Natural Language :: English
Classifier: Operating System :: POSIX :: Linux
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3.4
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Topic :: System :: Monitoring
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: psutil
Requires-Dist: netifaces
Requires-Dist: configparser
Requires-Dist: future
Requires-Dist: distro
Requires-Dist: certifi

# Agent360

360 Monitoring ([360monitoring.com](https://360monitoring.com)) is a web service that monitors and displays statistics of
your server performance.

Agent360 is OS-agnostic software compatible with Python 3.7 and 3.8.
It has been optimized for low CPU consumption and comes with an
extendable set of useful plugins.

[![Build Status](https://github.com/plesk/agent360/workflows/Agent360-Test-And-Deploy/badge.svg?branch=master)](https://github.com/plesk/agent360/actions/workflows/test-and-deploy.yml)

## Documentation

You can find the full documentation including the feature complete REST API at [docs.360monitoring.com](https://docs.360monitoring.com/docs) and [docs.360monitoring.com/docs/api](https://docs.360monitoring.com/docs/api).

## Automatic Installation (All Linux Distributions)

You can install the default configuration of Agent360 on all Linux distributions with a single command.

1. Connect to your server via SSH.

2. Find your USERTOKEN. To do so, [go to the servers page](https://monitoring.platform360.io/servers/overview) and then click the "Add server" button.

3. Run the following command:

    ```sh
    wget -q -N https://monitoring.platform360.io/agent360.sh && bash agent360.sh USERTOKEN
    ```

## Automatic Installation (Windows)

Download the [setup](https://github.com/plesk/agent360/releases) and install it on your Windows server.

The installer will ask for your USERTOKEN which you can get [from the servers page](https://monitoring.platform360.io/servers/overview).

## Manual Installation

To customize installation options, install Agent360 manually.

1. Connect to your server via SSH.
2. Run the following command, which differs depending on your server platform:

    - Debian GNU/Linux:

        ```sh
        apt-get install python3-dev python3-setuptools python3-pip
        pip3 install agent360
        wget -O /etc/agent360.ini https://monitoring.platform360.io/agent360.ini
        ```

    - Fedora/CentOS version 6 or earlier (python 2.7):

        ```sh
        yum install python-devel python-setuptools gcc
        easy_install agent360 netifaces psutil
        wget -O /etc/agent360.ini https://monitoring.platform360.io/agent360.ini
        ```

    - Fedora/CentOS version 7 and later (python 3):

        ```sh
        yum install python36-devel python36 gcc
        pip3 install agent360
        wget -O /etc/agent360.ini https://monitoring.platform360.io/agent360.ini
        ```

3. Find your USERTOKEN. To do so, [go to the servers page](https://monitoring.platform360.io/servers/overview) and then click the "Add server" button. You need this token to generate a server id.

4. Run the following command (USERTOKEN is the one you got during the previous step):

    ```sh
    agent360 hello USERTOKEN /etc/agent360-token.ini
    ```

5. Create a systemd service at `/etc/systemd/system/agent360.service` by adding the following:

    ```ini
    [Unit]
    Description=Agent360

    [Service]
    ExecStart=/usr/local/bin/agent360
    User=agent360

    [Install]
    WantedBy=multi-user.target
    ```

6. Run the following command:

    ```sh
    chmod 644 /etc/systemd/system/agent360.service
    systemctl daemon-reload
    systemctl enable agent360
    systemctl start agent360
    ```

## Building Windows setup

Prerequisite: [InnoSetup](https://jrsoftware.org/isdl.php) is used as the installer; the build script assumes it is installed in the default location.

Run `php windows/build.php` to create the setup file.
--- agent360-1.3.1.dist-info/REQUESTED ---
--- agent360-1.3.1.dist-info/top_level.txt ---
agent360
--- agent360-1.3.1.dist-info/entry_points.txt ---
[console_scripts]
agent360 = agent360.agent360:main
hello360 = agent360.agent360:hello
--- agent360-1.3.1.dist-info/INSTALLER ---
pip
--- agent360/agent360.py ---
#!/usr/bin/env python
# -*- coding: utf-8; tab-width: 4; indent-tabs: nil; -*-
# by Al Nikolov <roottoorfieuorg@gmail.com>
from __future__ import print_function
import bz2
import sys
if sys.version_info >= (3,):
    try:
        from past.builtins import basestring
    except ImportError:
        basestring = str
    import configparser
    import http.client
    from queue import Queue, Empty
    import io
else:
    import ConfigParser
    import httplib
    import StringIO
    from Queue import Queue, Empty

if sys.version_info >= (3,4):
    import importlib.util
else:
    import imp

import glob
import certifi
import ssl

try:
    import json
except ImportError:
    import simplejson as json
import logging
import os
import pickle
import signal
import socket
import subprocess
import threading
import time
import types
from optparse import OptionParser

try:
    from urllib.parse import urlparse, urlencode
    from urllib.request import urlopen, Request
    from urllib.error import HTTPError
except ImportError:
    from urlparse import urlparse
    from urllib import urlencode
    from urllib2 import urlopen, Request, HTTPError

__version__ = '1.3.1'
__FILEABSDIRNAME__ = os.path.dirname(os.path.abspath(__file__))

ini_files = (
    os.path.join('/etc', 'agent360.ini'),
    os.path.join('/etc', 'agent360-custom.ini'),
    os.path.join('/etc', 'agent360-token.ini'),
    os.path.join(os.path.dirname(__FILEABSDIRNAME__), 'agent360.ini'),
    os.path.join(os.path.dirname(__FILEABSDIRNAME__), 'agent360-custom.ini'),
    os.path.join(os.path.dirname(__FILEABSDIRNAME__), 'agent360-token.ini'),
    os.path.abspath('agent360.ini'),
    os.path.abspath('agent360-custom.ini'),
    os.path.abspath('agent360-token.ini'),
)

if os.name == 'nt':
    ini_files = (
        os.path.join(__FILEABSDIRNAME__, '..', 'config', 'agent360.ini'),
        os.path.join(__FILEABSDIRNAME__, '..', 'config', 'agent360-custom.ini'),
        os.path.join(__FILEABSDIRNAME__, '..', 'config', 'agent360-token.ini'),
    )
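
# Illustrative sketch (not part of the original module). The ini_files tuple
# above relies on RawConfigParser.read() accepting a sequence: files are
# applied in order, so later files (e.g. agent360-token.ini) override earlier
# ones, and missing paths are silently skipped. The temp files below are
# hypothetical stand-ins for /etc/agent360.ini and /etc/agent360-token.ini.

```python
import configparser
import os
import tempfile

base = tempfile.NamedTemporaryFile('w', suffix='.ini', delete=False)
base.write('[agent]\nserver=default\n')
base.close()
token = tempfile.NamedTemporaryFile('w', suffix='.ini', delete=False)
token.write('[agent]\nserver=override\n')
token.close()

cfg = configparser.RawConfigParser()
# read() returns the list of files it actually parsed; missing paths are skipped.
loaded = cfg.read([base.name, token.name, '/nonexistent/agent360.ini'])
print(cfg.get('agent', 'server'))  # override

os.unlink(base.name)
os.unlink(token.name)
```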

def info():
    '''
    Return string with info about agent360:
        - version
        - plugins enabled
        - absolute path to plugin directory
        - server id from configuration file
    '''
    agent = Agent(dry_instance=True)
    plugins_path = agent._get_plugins_path()

    plugins_enabled = agent._get_plugins(state='enabled')

    return '\n'.join((
        'Version: %s' % __version__,
        'Plugins enabled: %s' % ', '.join(plugins_enabled),
        'Plugins directory: %s' % plugins_path,
        'Server: %s' % agent.config.get('agent', 'server'),
    ))


def hello(proto='https'):
    parser = OptionParser()
    parser.add_option("-t", "--tags", help="Comma-separated list of tags")
    parser.add_option("-a", "--automon", type=int, default=0, help="Enable/disable automatic monitoring of hosted websites")

    (options, args) = parser.parse_args()

    if not args:
        parser.error('USERTOKEN argument is required')

    user_id = args[0]
    agent = Agent(dry_instance=True)

    if len(args) > 1:
        token_filename = args[1]
    else:
        token_filename = os.path.join(__FILEABSDIRNAME__, 'agent360-token.ini')

    if len(args) > 2:
        unique_id = args[2]
    else:
        unique_id = ''

    if options.tags is None:
        tags = ''
    else:
        tags = options.tags

    if options.automon == 1:
        domains = ','.join(_get_domains())
    else:
        domains = ''

    if '_' in user_id:
        server_id = user_id.split('_')[1]
        user_id = user_id.split('_')[0]
    else:
        try:
            hostname = os.uname()[1]
        except AttributeError:
            hostname = socket.getfqdn()
        server_id = urlopen(
            proto + '://' + agent.config.get('data', 'hello_api_host') + '/hello',
            data=urlencode({
                    'user': user_id,
                    'hostname': hostname,
                    'unique_id': unique_id,
                    'tags': tags,
                    'domains': domains,
            }).encode("utf-8")
           ).read().decode()

    if len(server_id) == 24:
        print('Got server_id: %s' % server_id)
        with open(token_filename, 'w') as fp:
            fp.write('[DEFAULT]\nuser=%s\nserver=%s\n' % (user_id, server_id))
    else:
        print('Could not retrieve server_id: %s' % server_id)

def _get_apache_domains():
    domains = []

    try:
        output = subprocess.check_output(['apachectl', '-S'])

        for line in output.decode().splitlines():
            if 'namevhost' not in line:
                continue

            cols = line.strip().split(' ')
            domains.append(cols[3])
    except FileNotFoundError:
        pass

    return domains

def _get_nginx_domains():
    domains = []

    try:
        output = subprocess.check_output(['nginx', '-T'])

        for line in output.decode().splitlines():
            if 'server_name' not in line:
                continue

            cols = line.strip().split(' ')

            if len(cols) == 2:
                domain = cols[1].replace(';', '').replace('"', '')
                domains.append(domain)
    except FileNotFoundError:
        pass

    return domains
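
# Illustrative sketch (not part of the original module). The server_name
# extraction above can be exercised against a canned config dump instead of a
# live `nginx -T` call; the sample config below is hypothetical.

```python
sample = '''
server {
    server_name example.com;
    listen 80;
}
'''

domains = []
for line in sample.splitlines():
    if 'server_name' not in line:
        continue
    cols = line.strip().split(' ')
    # Only the simple single-name form is handled, exactly as in the
    # function above; multi-name server_name lines are skipped.
    if len(cols) == 2:
        domains.append(cols[1].replace(';', '').replace('"', ''))

print(domains)  # ['example.com']
```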

def _get_domains():
    domains = []

    try:
        json_str = subprocess.check_output(['whmapi1', '--output=jsonpretty', 'get_domain_info'])
        response = json.loads(json_str)

        for domain in response['data']['domains']:
            domains.append(domain['domain'])
    except FileNotFoundError:
        try:
            output = subprocess.check_output(['plesk', 'bin', 'domain', '--list'])

            for domain in output.decode().splitlines():
                domains.append(domain)
        except FileNotFoundError:
            for domain in list(set(_get_apache_domains() + _get_nginx_domains())):
                if '.' not in domain:
                    continue

                if domain.endswith('.localdomain'):
                    continue

                if domain.endswith('.localhost'):
                    continue

                if domain.endswith('.local'):
                    continue

                domains.append(domain)

    return domains

def count_domains():
    print(len(_get_domains()))

def _plugin_name(plugin):
    if isinstance(plugin, basestring):
        basename = os.path.basename(plugin)
        return os.path.splitext(basename)[0]
    else:
        return plugin.__name__


def test_plugins(plugins=None):
    '''
    Test the specified plugins and print their data output after a single check.
    If the plugins list is empty, test all enabled plugins.
    '''
    agent = Agent(dry_instance=True)
    plugins_path = agent._get_plugins_path()
    if plugins_path not in sys.path:
        sys.path.insert(0, plugins_path)

    if not plugins:
        plugins = agent._get_plugins(state='enabled')
        print('Check all enabled plugins: %s' % ', '.join(plugins))

    for plugin_name in plugins:
        print('%s:' % plugin_name)

        try:
            if sys.version_info >= (3,4):
                spec = importlib.util.find_spec(plugin_name)
            else:
                fp, pathname, description = imp.find_module(plugin_name)
        except Exception as e:
            print('Find error:', e)
            continue

        try:
            if sys.version_info >= (3,4):
                module = importlib.util.module_from_spec(spec)
                spec.loader.exec_module(module)
            else:
                module = imp.load_module(plugin_name, fp, pathname, description)
        except Exception as e:
            print('Load error:', e)
            continue
        finally:
            if sys.version_info < (3,4):
                # Since we may exit via an exception, close fp explicitly.
                if fp:
                    fp.close()

        try:
            payload = module.Plugin().run(agent.config)
            print(json.dumps(payload, indent=4, sort_keys=True))
        except Exception as e:
            print('Execution error:', e)
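
# Illustrative sketch (not part of the original module). The
# find_spec/module_from_spec/exec_module sequence used above (Python >= 3.4)
# can be demonstrated with a hypothetical throwaway plugin dropped into a
# temporary plugins directory.

```python
import importlib.util
import os
import sys
import tempfile
import textwrap

plugdir = tempfile.mkdtemp()
with open(os.path.join(plugdir, 'demo_plugin.py'), 'w') as f:
    f.write(textwrap.dedent('''
        class Plugin:
            def run(self, config):
                return {'ok': True}
    '''))

sys.path.insert(0, plugdir)
importlib.invalidate_caches()  # the directory was created after startup

spec = importlib.util.find_spec('demo_plugin')   # locate the module on sys.path
module = importlib.util.module_from_spec(spec)   # create the module object
spec.loader.exec_module(module)                  # execute its body
print(module.Plugin().run(None))  # {'ok': True}
```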


class Agent:
    execute = Queue()
    metrics = Queue()
    data = Queue()
    cemetery = Queue()
    shutdown = False

    def __init__(self, dry_instance=False):
        '''
        Initialize internal structures
        '''
        self._config_init()

        # Cache for plugins so they can store values related to previous checks
        self.plugins_cache = {}

        if dry_instance:
            return

        self._logging_init()
        self._plugins_init()
        self._data_worker_init()
        self._dump_config()

    def _get_plugins_path(self):
        if os.name == 'nt':
            return os.path.expandvars(self.config.get('agent', 'plugins'))
        else:
            return self.config.get('agent', 'plugins')


    def _config_init(self):
        '''
        Initialize configuration object
        '''
        defaults = {
            'max_data_span': 60,
            'max_data_age': 60 * 10,
            'logging_level': logging.INFO,
            'threads': 100,
            'ttl': 60,
            'interval': 60,
            'plugins': os.path.join(__FILEABSDIRNAME__, 'plugins'),
            'enabled': 'no',
            'subprocess': 'no',
            'user': '',
            'server': '',
            'api_host': 'ingest.monitoring360.io',
            'hello_api_host': 'api.monitoring360.io',
            'api_path': '/v2/server/poll',
            'log_file': '/var/log/agent360.log',
            'log_file_mode': 'a',
            'max_cached_collections': 10,
        }
        sections = [
            'agent',
            'execution',
            'data',
        ]
        if sys.version_info >= (3,):
            config = configparser.RawConfigParser(defaults)
        else:
            config = ConfigParser.RawConfigParser(defaults)
        config.read(ini_files)
        self.config = config
        for section in sections:
            self._config_section_create(section)
            if section == 'data':
                self.config.set(section, 'interval', 1)
            if section == 'agent':
                self.config.set(section, 'interval', .5)

    def _config_section_create(self, section):
        '''
        Create an additional section in the configuration object
        if it does not exist
        '''
        if not self.config.has_section(section):
            self.config.add_section(section)

    def _logging_init(self):
        '''
        Initialize the logging facility
        '''
        level = self.config.getint('agent', 'logging_level')

        if os.name == 'nt':
            log_file = os.path.expandvars(self.config.get('agent', 'log_file'))
        else:
            log_file = self.config.get('agent', 'log_file')

        log_file_mode = self.config.get('agent', 'log_file_mode')
        if log_file_mode in ('w', 'a'):
            pass
        elif log_file_mode == 'truncate':
            log_file_mode = 'w'
        elif log_file_mode == 'append':
            log_file_mode = 'a'
        else:
            log_file_mode = 'a'

        if log_file == '-':
            logging.basicConfig(level=level)  # Log to sys.stderr by default
        else:
            try:
                logging.basicConfig(filename=log_file, filemode=log_file_mode, level=level, format="%(asctime)-15s  %(levelname)s    %(message)s")
            except IOError as e:
                logging.basicConfig(level=level)
                logging.info('IOError: %s', e)
                logging.info('Drop logging to stderr')

        logging.info('Agent logging_level %i', level)

    def _plugins_init(self):
        '''
        Discover the plugins
        '''
        logging.info('_plugins_init')
        plugins_path = self._get_plugins_path()
        filenames = glob.glob(os.path.join(plugins_path, '*.py'))
        if plugins_path not in sys.path:
            sys.path.insert(0, plugins_path)
        self.schedule = {}
        for filename in filenames:
            name = _plugin_name(filename)
            if name == 'plugins':
                continue
            self._config_section_create(name)
            if self.config.getboolean(name, 'enabled'):
                if self.config.getboolean(name, 'subprocess'):
                    self.schedule[filename] = 0
                else:
                    if sys.version_info >= (3,4):
                        spec = importlib.util.find_spec(name)
                    else:
                        fp, pathname, description = imp.find_module(name)

                    try:
                        if sys.version_info >= (3,4):
                            module = importlib.util.module_from_spec(spec)
                            spec.loader.exec_module(module)
                        else:
                            module = imp.load_module(name, fp, pathname, description)
                    except Exception:
                        module = None
                        logging.error('import_plugin_exception:%s', str(sys.exc_info()[0]))
                    finally:
                        if sys.version_info < (3,4):
                            # Since we may exit via an exception, close fp explicitly.
                            if fp:
                                fp.close()

                    if module:
                        self.schedule[module] = 0
                    else:
                        logging.error('import_plugin:%s', name)

    def _subprocess_execution(self, task):
        '''
        Execute /task/ in a subprocess
        '''
        process = subprocess.Popen((sys.executable, task),
            stdout=subprocess.PIPE, stderr=subprocess.PIPE,
            universal_newlines=True)
        logging.debug('%s:process:%i', threading.currentThread(), process.pid)
        interval = self.config.getint('execution', 'interval')
        name = _plugin_name(task)
        ttl = self.config.getint(name, 'ttl')
        ticks = ttl / interval or 1
        process.poll()
        while process.returncode is None and ticks > 0:
            logging.debug('%s:tick:%i', threading.currentThread(), ticks)
            time.sleep(interval)
            ticks -= 1
            process.poll()
        if process.returncode is None:
            logging.error('%s:kill:%i', threading.currentThread(), process.pid)
            os.kill(process.pid, signal.SIGTERM)
        stdout, stderr = process.communicate()
        if process.returncode != 0 or stderr:
            logging.error('%s:%s:%s:%s', threading.currentThread(),
                task, process.returncode, stderr)
        if stdout:
            ret = pickle.loads(stdout)
        else:
            ret = None
        return ret

    def _execution(self):
        '''
        Take queued execution requests, execute plugins and queue the results
        '''
        while True:
            if self.shutdown:
                logging.info('%s:shutdown', threading.currentThread())
                break
            logging.debug('%s:exec_queue:%i', threading.currentThread(), self.execute.qsize())
            try:
                task = self.execute.get_nowait()
            except Empty:
                break
            logging.debug('%s:task:%s', threading.currentThread(), task)
            name = _plugin_name(task)
            try:
                interval = self.config.get(name, 'interval')
            except Exception:
                interval = 60
            ts = time.time()
            if isinstance(task, basestring):
                payload = self._subprocess_execution(task)
            else:
                try:
                    # Setup cache for plugin instance
                    # if name not in self.plugins_cache.iterkeys():
                    #     self.plugins_cache[name] = []
                    self.plugins_cache.update({
                        name: self.plugins_cache.get(name, [])
                    })

                    plugin = task.Plugin(agent_cache=self.plugins_cache[name])
                    payload = plugin.run(self.config)
                except Exception:
                    logging.exception('plugin_exception')
                    payload = {'exception': str(sys.exc_info()[0])}
            self.metrics.put({
                'ts': ts,
                'task': task,
                'name': name,
                'interval': interval,
                'payload': payload,
            })
        self.cemetery.put(threading.currentThread())
        self.hire.release()


    def _data(self):
        '''
        Collect queued data, send it in batches and clean up as needed
        '''
        logging.info('%s', threading.currentThread())
        api_host = self.config.get('data', 'api_host')
        api_path = self.config.get('data', 'api_path')
        max_age = self.config.getint('agent', 'max_data_age')
        max_span = self.config.getint('agent', 'max_data_span')
        server = self.config.get('agent', 'server')
        user = self.config.get('agent', 'user')
        interval = self.config.getint('data', 'interval')
        max_cached_collections = self.config.getint('agent', 'max_cached_collections')
        cached_collections = []
        collection = []
        initial_data = True
        while True:
            if initial_data:
                max_span = 10
            else:
                max_span = self.config.getint('agent', 'max_data_span')
            loop_ts = time.time()
            if self.shutdown:
                logging.info('%s:shutdown', threading.currentThread())
                break
            logging.debug('%s:data_queue:%i:collection:%i',
                threading.currentThread(), self.data.qsize(), len(collection))
            while self.data.qsize():
                try:
                    collection.append(self.data.get_nowait())
                except Exception as e:
                    logging.error('Data queue error: %s' % e)
            if collection:
                first_ts = min((e['ts'] for e in collection))
                last_ts = max((e['ts'] for e in collection))
                now = time.time()
                send = False
                if last_ts - first_ts >= max_span:
                    logging.debug('Max data span')
                    send = True
                    clean = False
                elif now - first_ts >= max_age:
                    logging.warning('Max data age')
                    send = True
                    clean = True
                if send:
                    initial_data = False
                    headers = {
                        "Content-type": "application/json",
                        "Authorization": "ApiKey %s:%s" % (user, server),
                    }
                    logging.debug('collection: %s',
                        json.dumps(collection, indent=2, sort_keys=True))
                    if not (server and user):
                        logging.warning('Empty server or user, nowhere to send.')
                        clean = True
                    else:

                        try:
                            ctx = ssl.create_default_context(cafile=certifi.where())

                            if sys.version_info >= (3,):
                                connection = http.client.HTTPSConnection(api_host, context=ctx, timeout=15)
                            else:
                                connection = httplib.HTTPSConnection(api_host, context=ctx, timeout=15)

                            # Trying to send cached collections if any
                            if cached_collections:
                                logging.info('Sending cached collections: %i', len(cached_collections))
                                while cached_collections:
                                    connection.request('PUT', '%s?version=%s' % (api_path, __version__),
                                            cached_collections[0],
                                            headers=headers)
                                    response = connection.getresponse()
                                    response.read()
                                    if response.status == 200:
                                        del cached_collections[0]  # Remove just sent collection
                                        logging.debug('Successful response: %s', response.status)
                                    else:
                                        raise ValueError('Unsuccessful response: %s' % response.status)
                                logging.info('All cached collections sent')

                            # Send recent collection (reuse existing connection)
                            connection.request('PUT', '%s?version=%s' % (api_path, __version__),
                                    bz2.compress(str(json.dumps(collection)+"\n").encode()),
                                    headers=headers)
                            response = connection.getresponse()
                            response.read()

                            if response.status == 200:
                                logging.debug('Successful response: %s', response.status)
                                clean = True
                            else:
                                raise ValueError('Unsuccessful response: %s' % response.status)
                        except Exception as e:
                            logging.error('Failed to submit collection: %s' % e)

                            # Store recent collection in cached_collections if send failed
                            if max_cached_collections > 0:
                                if len(cached_collections) >= max_cached_collections:
                                    del cached_collections[0]  # Remove oldest collection
                                    logging.info('Reach max_cached_collections (%s): oldest cached collection dropped',
                                        max_cached_collections)
                                logging.info('Cache current collection to resend next time')
                                cached_collections.append(bz2.compress(str(json.dumps(collection)+"\n").encode()))
                                collection = []
                        finally:
                            # connection may be unbound if the TLS context failed
                            if 'connection' in locals():
                                connection.close()
                    if clean:
                        collection = []
            sleep_interval = interval - (time.time() - loop_ts)
            if sleep_interval > 0:
                time.sleep(sleep_interval)

    def _data_worker_init(self):
        '''
        Initialize data worker thread
        '''
        logging.info('_data_worker_init')
        threading.Thread(target=self._data).start()

    def _dump_config(self):
        '''
        Dump the configuration object to the log
        '''
        if sys.version_info >= (3,):
            buf = io.StringIO()
        else:
            buf = StringIO.StringIO()

        self.config.write(buf)
        logging.info('Config: %s', buf.getvalue())

    def _get_plugins(self, state='enabled'):
        '''
        Return a list of plugin names filtered by state
        '''
        plugins_path = self._get_plugins_path()
        plugins = []
        for filename in glob.glob(os.path.join(plugins_path, '*.py')):
            plugin_name = _plugin_name(filename)
            if plugin_name == 'plugins':
                continue
            self._config_section_create(plugin_name)

            if state == 'enabled':
                if self.config.getboolean(plugin_name, 'enabled'):
                    plugins.append(plugin_name)
            elif state == 'disabled':
                if not self.config.getboolean(plugin_name, 'enabled'):
                    plugins.append(plugin_name)

        return plugins


    def _rip(self):
        '''
        Join with dead workers
        Workaround for https://bugs.python.org/issue37788
        '''
        logging.debug('cemetery:%i', self.cemetery.qsize())
        while True:
            try:
                thread = self.cemetery.get_nowait()
            except Empty:
                break
            logging.debug('joining:%s', thread)
            thread.join()


    def run(self):
        '''
        Start all the worker threads
        '''
        logging.info('Agent main loop')
        interval = self.config.getfloat('agent', 'interval')
        self.hire = threading.Semaphore(
            self.config.getint('execution', 'threads'))
        try:
            while True:
                self._rip()
                now = time.time()
                logging.debug('%i threads', threading.activeCount())
                while self.metrics.qsize():
                    metrics = self.metrics.get_nowait()
                    name = metrics['name']
                    logging.debug('metrics:%s', name)
                    plugin = metrics.get('task')
                    if plugin:
                        self.schedule[plugin] = \
                            int(now) + self.config.getint(name, 'interval')
                        if isinstance(plugin, types.ModuleType):
                            metrics['task'] = plugin.__file__
                    self.data.put(metrics)
                execute = [
                    what
                    for what, when in self.schedule.items()
                    if when <= now
                ]
                for name in execute:
                    logging.debug('scheduling:%s', name)
                    del self.schedule[name]
                    self.execute.put(name)
                    if self.hire.acquire(False):
                        try:
                            thread = threading.Thread(target=self._execution)
                            thread.start()
                            logging.debug('new_execution_worker_thread:%s', thread)
                        except Exception as e:
                            logging.warning('Can not start new thread: %s', e)
                    else:
                        logging.warning('threads_capped')
                        self.metrics.put({
                            'ts': now,
                            'name': 'agent_internal',
                            'payload': {
                                'threads_capping':
                                    self.config.getint('execution', 'threads')}
                        })
                sleep_interval = .5-(time.time()-now)
                if sleep_interval > 0:
                    time.sleep(sleep_interval)
                else:
                    logging.warning('not enough time to start worker threads')
                    time.sleep(.1)

        except KeyboardInterrupt:
            logging.warning(sys.exc_info()[0])
            logging.info('Shutting down')
            self._rip()
            wait_for = True
            while wait_for:
                all_threads = threading.enumerate()
                logging.info('Remaining threads: %s', all_threads)
                wait_for = [
                    thread for thread in all_threads
                    if not thread.isDaemon() and
                    not isinstance(thread, threading._MainThread)
                ]
                if not wait_for:
                    logging.info('Bye!')
                    sys.exit(0)
                self.shutdown = True
                logging.info('Waiting for %i threads to exit', len(wait_for))
                for thread in wait_for:
                    logging.info('Joining with %s/%f', thread, interval)
                    thread.join(interval)
        except Exception as e:
            logging.error('Worker error: %s' % e)


def main():
    if len(sys.argv) > 1:
        if sys.argv[1].startswith('--'):
            sys.argv[1] = sys.argv[1][2:]

        if sys.argv[1] == 'help':
            print('\n'.join((
                'Run without options to run agent.',
                'Acceptable options (leading -- is optional):',
                '    help, info, version, hello, insecure-hello, count-domains, test',
            )))
            sys.exit()
        elif sys.argv[1] == 'info':
            print(info())
            sys.exit()
        elif sys.argv[1] == 'version':
            print(__version__)
            sys.exit()
        elif sys.argv[1] == 'hello':
            del sys.argv[1]
            sys.exit(hello())
        elif sys.argv[1] == 'count-domains':
            del sys.argv[1]
            sys.exit(count_domains())
        elif sys.argv[1] == 'insecure-hello':
            del sys.argv[1]
            sys.exit(hello(proto='http'))
        elif sys.argv[1] == 'test':
            sys.exit(test_plugins(sys.argv[2:]))
        else:
            print('Invalid option:', sys.argv[1], file=sys.stderr)
            sys.exit(1)
    else:
        Agent().run()


if __name__ == '__main__':
    main()
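The send/clean decision in `Agent._data` above combines two thresholds: `max_data_span` (flush once the batch covers enough wall-clock time) and `max_data_age` (give up on data that is too old). A minimal sketch of that rule as a pure function, hypothetical and not part of agent360:

```python
# Hypothetical helper mirroring the batching rule in Agent._data above.
def should_send(first_ts, last_ts, now, max_span, max_age):
    """Return (send, clean) for a collection of timestamped metrics."""
    if last_ts - first_ts >= max_span:
        return True, False   # normal flush; keep the data if sending fails
    if now - first_ts >= max_age:
        return True, True    # too old: drop it whether or not the send works
    return False, False      # keep accumulating
```

Note the asymmetry: a span-triggered flush keeps the collection on failure (it gets cached and resent later), while an age-triggered flush discards it regardless of the outcome.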

# ---- agent360/__init__.py (empty) ----

# ---- agent360/plugins/ping.py ----
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import re
from subprocess import Popen, PIPE, CalledProcessError
import sys
import plugins


def _get_match_groups(ping_output, regex):
    match = regex.search(ping_output)
    if not match:
        return False
    else:
        return match.groups()


def system_command(Command, newlines=True):
    Output = ""
    Error = ""
    try:
        # universal_newlines=True makes communicate() return str on Python 3
        proc = Popen(Command.split(), stdout=PIPE, universal_newlines=True)
        Output = proc.communicate()[0]
    except Exception:
        pass

    if Output:
        if newlines is True:
            Stdout = Output.split("\n")
        else:
            Stdout = Output
    else:
        Stdout = []
    if Error:
        Stderr = Error.split("\n")
    else:
        Stderr = []

    return (Stdout, Stderr)


def collect_ping(hostname):
    if sys.platform.startswith('linux') or sys.platform.startswith('freebsd'):
    #if sys.platform == "linux" or sys.platform == "linux2":
        response = str(system_command("ping -W 5 -c 1 " + hostname, False)[0])
        try:
            matcher = re.compile(r'(\d+\.\d+)/(\d+\.\d+)/(\d+\.\d+)/(\d+\.\d+)')
            minping, avgping, maxping, jitter = _get_match_groups(response, matcher)
            response = avgping
        except Exception:
            #response = 9999
            response = -1
    elif sys.platform == "darwin":
        response = str(system_command("ping -c 1 " + hostname, False)[0])
        # matcher = re.compile(r'min/avg/max/stddev = (\d+)/(\d+)/(\d+)/(\d+) ms')
        # min, avg, max, stddev = _get_match_groups(response, matcher)
        matcher = re.compile(r'(\d+\.\d+)/(\d+\.\d+)/(\d+\.\d+)/(\d+\.\d+)')
        matched = _get_match_groups(response, matcher)
        if matched is False:
            #response = 0
            response = -1
        else:
            minping, avgping, maxping, jitter = matched
            response = avgping
    elif sys.platform == "win32":
        #response = 0
        response = -1
        try:
            ping = Popen(["ping", "-n", "1", hostname], stdout=PIPE, stderr=PIPE)
            out, error = ping.communicate()
            if out:
                try:
                    # communicate() returns bytes, so match with a bytes pattern
                    response = int(re.findall(br"Average = (\d+)", out)[0].decode())
                except Exception:
                    pass
            else:
                #response = 0
                response = -1
        except CalledProcessError:
            pass
        if response == -1:
            try:
                rxresponse = re.findall(br" + .+ = [0-9]{1,9}ms, .+ = [0-9]{1,9}ms, .+ = (\d+)ms", out)
                response = rxresponse[0].decode()
            except Exception:
                pass
    else:
        #response = float(system_command("ping -W -c 1 " + hostname))
        response = -1
    return {'avgping': response, 'host': hostname}


class Plugin(plugins.BasePlugin):
    __name__ = 'ping'

    def run(self, config):
        data = {}
        my_hosts = config.get('ping', 'hosts').split(',')
        data['ping'] = []
        for host in my_hosts:
            data['ping'].append(collect_ping(host))
        return data['ping']


if __name__ == '__main__':
    Plugin().execute()
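On Linux the plugin above extracts the average RTT from the `rtt min/avg/max/mdev` summary line of `ping` output. A self-contained sketch of just that parsing step (hypothetical helper, same regex shape as `collect_ping`):

```python
import re

# Hypothetical helper: pull the average RTT (ms) out of Linux/BSD ping output.
RTT_RE = re.compile(r'(\d+\.\d+)/(\d+\.\d+)/(\d+\.\d+)/(\d+\.\d+)')

def parse_avg_rtt(ping_output):
    match = RTT_RE.search(ping_output)
    if not match:
        return -1  # mirror the plugin's "no reply" sentinel
    minping, avgping, maxping, jitter = match.groups()
    return float(avgping)
```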

# ---- agent360/plugins/redis_stat.py ----
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import plugins
import redis

### Uncomment/Comment the Attribute Names to be monitored
METRICS = {
# Server section
    #"redis_version": "redis version",
    #"redis_git_sha1": "redis git sha1",
    #"redis_git_dirty": "redis git dirty", 
    #"redis_build_id": "redis build id", 
    #"redis_mode": "redis mode", 
    #"os": "os",    
    #"arch_bits": "arch bits", 
    #"multiplexing_api": "multiplexing api", 
    #"gcc_version": "gcc version",
    #"process_id": "process id",
    #"run_id": "run id", 
    #"tcp_port": "tcp port",
    "uptime_in_seconds": "uptime", 
    #"uptime_in_days": "uptime in days", 
    #"hz": "hz", 
    #"lru_clock": "lru clock", 
    #"executable": "redis path",
    #"config_file": "config file",

# Clients section
    "connected_clients": "connected clients",
    #"client_longest_output_list": "client longest output list", 
    #"client_biggest_input_buf": "client biggest input buf", 
    #"blocked_clients": "blocked clients", 

# Memory section
    "used_memory": "used memory", 
    #"used_memory_human": "used memory human", 
    #"used_memory_rss": "used memory rss",
    #"used_memory_rss_human": "used memory rss human",
    #"used_memory_peak": "used memory peak", 
    "used_memory_peak_human": "used memory peak human", 
    #"total_system_memory": "total system memory",
    #"total_system_memory_human": "total system memory human",
    #"used_memory_lua": "used memory lua",
    #"used_memory_lua_human": "used memory lua human",
    "maxmemory": "maxmemory",
    #"maxmemory_human": "maxmemory human",
    "maxmemory_policy": "maxmemory policy",
    #"mem_fragmentation_ratio": "mem fragmentation ratio",
    #"mem_allocator": "mem allocator",

# Persistence section     
    #"loading": "loading",
    #"rdb_changes_since_last_save": "rdb changes since last save",
    #"rdb_bgsave_in_progress": "rdb bgsave in progress",
    #"rdb_last_save_time": "rdb last save time",
    #"rdb_last_bgsave_status": "rdb last bgsave status",
    #"rdb_last_bgsave_time_sec": "rdb last bgsave time sec",
    #"rdb_current_bgsave_time_sec": "rdb current bgsave time sec",
    #"aof_enabled": "aof enabled",
    #"aof_rewrite_in_progress": "aof rewrite in progress",
    #"aof_rewrite_scheduled": "aof rewrite scheduled",    
    #"aof_last_rewrite_time_sec": "aof last rewrite time",
    #"aof_current_rewrite_time_sec": "aof current rewrite time",
    #"aof_last_bgrewrite_status": "aof last bgrewrite status",
    #"aof_last_write_status": "aof last write status",
    #"aof_current_size": "aof current size",
    #"aof_base_size": "aof base size",
    #"aof_pending_rewrite": "aof pending rewrite",
    #"aof_buffer_length": "aof buffer length",
    #"aof_rewrite_buffer_length": "aof rewrite buffer length",
    #"aof_pending_bio_fsync": "aof pending bio fsync",
    #"aof_delayed_fsync": "aof delayed fsync",

# Stats section
    #"total_connections_received": "total connections received",
    "total_commands_processed": "total commands processed",
    #"instantaneous_ops_per_sec": "instantaneous ops per sec",
    "total_net_input_bytes": "total net input bytes",
    "total_net_output_bytes": "total net output bytes",
    #"instantaneous_input_kbps": "instantaneous input kbps",
    #"instantaneous_output_kbps": "instantaneous output kbps",
    #"rejected_connections": "rejected connections",
    #"sync_full": "sync full",
    #"sync_partial_ok": "sync partial ok",
    #"sync_partial_err": "sync partial err",
    "expired_keys": "expired keys",
    "evicted_keys": "evicted keys",
    "keyspace_hits": "keyspace hits",
    "keyspace_misses": "keyspace misses",
    #"pubsub_channels": "pubsub channels",
    #"pubsub_patterns": "pubsub patterns",
    #"latest_fork_usec": "latest fork usec",
    #"migrate_cached_sockets": "migrate cached sockets",

# Replication section
    #"role": "role",
    #"connected_slaves": "connected slaves",
    #"master_repl_offset": "master repl offset",
    #"repl_backlog_active": "repl backlog active",
    #"repl_backlog_size": "repl backlog size",
    #"repl_backlog_first_byte_offset": "repl backlog first byte offset",
    #"repl_backlog_histlen": "repl backlog histlen",

# CPU section
    #"used_cpu_sys": "used cpu sys",
    #"used_cpu_user": "used cpu user",
    #"used_cpu_sys_children": "used cpu sys children",
    #"used_cpu_user_children": "used cpu user children",    

# Cluster section
    "cluster_enabled": "cluster enabled"
}

class Plugin(plugins.BasePlugin):
    __name__ = 'redis_stat'

    def run(self, config):
        data = {}
        stats = None
        try:
            redis_host = config.get(__name__, 'host')
        except Exception:
            redis_host = '127.0.0.1'
        try:
            redis_port = config.get(__name__, 'port')
        except Exception:
            redis_port = '6379'
        try:
            redis_db = config.get(__name__, 'db')
        except Exception:
            redis_db = '0'
        try:
            redis_password = config.get(__name__, 'password')
        except Exception:
            redis_password = ''

        try:
            redis_connection = redis.StrictRedis(host=redis_host, port=redis_port, db=redis_db, password=redis_password)
            stats = redis_connection.info()
        except Exception:
            data['status'] = 0
            data['msg'] = 'Connection Error'
            return data

        for name, value in stats.items():
            if name in METRICS:
                data[METRICS[name]] = value
        return data

if __name__ == '__main__':
    Plugin().execute()

# ---- agent360/plugins/docker.py ----
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import plugins
import time


class Plugin(plugins.BasePlugin):
    __name__ = 'docker'

    def run(self, config):
        '''
        Docker monitoring, needs sudo access!
        Instructions at:
        https://docs.360monitoring.com/docs/docker-plugin
        '''
        containers = {}
        last_value = {}
        prev_cache = self.get_agent_cache()  # Get absolute values from previous check
        try:
            lines = [s.split(' / ') for s in os.popen('sudo docker stats --no-stream --no-trunc --format "{{.CPUPerc}} / {{.Name}} / {{.ID}} / {{.MemUsage}} / {{.NetIO}} / {{.BlockIO}} / {{.MemPerc}}"').read().splitlines()]
            for row in lines:
                container = {}
                container['cpu'] = row[0].strip('%')
                name = row[1]
                container_id = row[2]
                container['mem_usage_bytes'] = self.computerReadable(row[3])
                container['mem_total_bytes'] = self.computerReadable(row[4])
                container['net_in_bytes'] = self.absolute_to_per_second('%s_%s' % (name, 'net_in_bytes'), self.computerReadable(row[5]), prev_cache)
                container['net_out_bytes'] = self.absolute_to_per_second('%s_%s' % (name, 'net_out_bytes'), self.computerReadable(row[6]), prev_cache)
                container['disk_in_bytes'] = self.absolute_to_per_second('%s_%s' % (name, 'disk_in_bytes'), self.computerReadable(row[7]), prev_cache)
                container['disk_out_bytes'] = self.absolute_to_per_second('%s_%s' % (name, 'disk_out_bytes'), self.computerReadable(row[8]), prev_cache)
                container['mem_pct'] = row[9].strip('%')

                last_value['%s_%s' % (name, 'mem_usage_bytes')] = self.computerReadable(row[3])
                last_value['%s_%s' % (name, 'net_in_bytes')] = self.computerReadable(row[5])
                last_value['%s_%s' % (name, 'net_out_bytes')] = self.computerReadable(row[6])
                last_value['%s_%s' % (name, 'disk_in_bytes')] = self.computerReadable(row[7])
                last_value['%s_%s' % (name, 'disk_out_bytes')] = self.computerReadable(row[8])
                containers[name] = container
        except Exception as e:
            return str(e)
        containers['containers'] = len(containers)
        last_value['ts'] = time.time()
        self.set_agent_cache(last_value)

        return containers

    def computerReadable(self, value):
        if value[-3:] == 'KiB':
            return float(value[:-3])*1024
        elif value[-3:] == 'MiB':
            return float(value[:-3])*1024*1024
        elif value[-3:] == 'GiB':
            return float(value[:-3])*1024*1024*1024
        elif value[-3:] == 'TiB':
            return float(value[:-3])*1024*1024*1024*1024
        elif value[-3:] == 'PiB':
            return float(value[:-3])*1024*1024*1024*1024*1024
        elif value[-2:] == 'kB':
            return float(value[:-2])*1024
        elif value[-2:] == 'MB':
            return float(value[:-2])*1024*1024
        elif value[-2:] == 'GB':
            return float(value[:-2])*1024*1024*1024
        elif value[-2:] == 'TB':
            return float(value[:-2])*1024*1024*1024*1024
        elif value[-2:] == 'PB':
            return float(value[:-2])*1024*1024*1024*1024*1024
        elif value[-1:] == 'B':
            return float(value[:-1])

if __name__ == '__main__':
    Plugin().execute()
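`computerReadable` above maps the unit suffixes printed by `docker stats` to byte counts, deliberately treating the decimal suffixes (kB, MB, ...) as powers of 1024 too. The same conversion, table-driven, as a hypothetical standalone sketch:

```python
# Hypothetical table-driven equivalent of Plugin.computerReadable above.
# Order matters: longer suffixes ('KiB') must be checked before 'B'.
_SUFFIXES = [
    ('KiB', 1024), ('MiB', 1024**2), ('GiB', 1024**3),
    ('TiB', 1024**4), ('PiB', 1024**5),
    ('kB', 1024), ('MB', 1024**2), ('GB', 1024**3),
    ('TB', 1024**4), ('PB', 1024**5),
    ('B', 1),
]

def to_bytes(value):
    for suffix, factor in _SUFFIXES:
        if value.endswith(suffix):
            return float(value[:-len(suffix)]) * factor
    return None  # unknown suffix, like the original's implicit None
```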

# ---- agent360/plugins/phpfpm.py ----
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#from past.builtins import basestring    # pip install future
try:
    from urllib.parse import urlparse, urlencode
    from urllib.request import urlopen, Request
    from urllib.error import HTTPError
except ImportError:
    from urlparse import urlparse
    from urllib import urlencode
    from urllib2 import urlopen, Request, HTTPError
import sys
import time
import plugins
import json


class Plugin(plugins.BasePlugin):
    __name__ = 'phpfpm'

    def run(self, config):
        '''
        php-fpm status page metrics
        '''
        def ascii_encode_dict(data):
            ascii_encode = lambda x: x.encode('ascii') if isinstance(x, unicode) else x
            return dict(map(ascii_encode, pair) for pair in data.items())

        results = dict()
        next_cache = dict()
        my_pools = config.get(__name__, 'status_page_url').split(',')
        prev_cache = self.get_agent_cache()  # Get absolute values from previous check
        for pool in my_pools:
            request = Request(pool)
            raw_response = urlopen(request)

            try:
                data = raw_response.read().decode('utf-8')
#                pprint.pprint(data)
                if sys.version_info >= (3,):
                    j = json.loads(data)
                else:
                    j = json.loads(data, object_hook=ascii_encode_dict)
                results[j['pool']] = {}
                next_cache['%s_ts' % j['pool']] = time.time()
                for k, v in j.items():
                    results[j['pool']][k.replace(" ", "_")] = v

                next_cache['%s_accepted_conn' % j['pool']] = int(results[j['pool']]['accepted_conn'])
            except Exception as e:
                return e

            try:
                if next_cache['%s_accepted_conn' % j['pool']] >= prev_cache['%s_accepted_conn' % j['pool']]:
                    results[j['pool']]['accepted_conn_per_second'] = \
                        (next_cache['%s_accepted_conn' % j['pool']] - prev_cache['%s_accepted_conn' % j['pool']]) / \
                        (next_cache['%s_ts' % j['pool']] - prev_cache['%s_ts' % j['pool']])
                else:  # Was restarted after previous caching
                    results[j['pool']]['accepted_conn_per_second'] = \
                        next_cache['%s_accepted_conn' % j['pool']] / \
                        (next_cache['%s_ts' % j['pool']] - prev_cache['%s_ts' % j['pool']])
            except KeyError:  # No cache yet, can't calculate
                results[j['pool']]['accepted_conn_per_second'] = 0.0

        # Cache absolute values for next check calculations
        self.set_agent_cache(next_cache)

        return results


if __name__ == '__main__':
    Plugin().execute()

# ==== agent360/plugins/asterisk.py ====
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import plugins
import subprocess


class Plugin(plugins.BasePlugin):
    __name__ = 'asterisk'

    def run(self, config):
        ip = config.get(__name__, 'sbcip')
        p = subprocess.Popen("sudo asterisk -rx 'core show calls' | grep 'active' | cut -f1 -d ' '", stdout=subprocess.PIPE, shell=True)
        p = p.communicate()[0].decode('utf-8').replace("\n", "")
        incoming = subprocess.Popen("sudo asterisk -rx 'core show channels verbose' | cut -c1-15 | grep 'pstn_' | wc -l", stdout=subprocess.PIPE, shell=True)
        incoming = incoming.communicate()[0].decode('utf-8').replace("\n", "")

        devices = subprocess.Popen("sudo asterisk -rx 'sip show peers' | grep '%s' | wc -l" % (ip), stdout=subprocess.PIPE, shell=True)
        devices = devices.communicate()[0].decode('utf-8').replace("\n", "")

        res = { "calls": p, "incomingcalls": incoming, "devices": devices }
        return res

if __name__ == '__main__':
    Plugin().execute()

# ==== agent360/plugins/kamailio.py ====
#!/usr/bin/env python
# -*- coding: utf-8 -*- 
import plugins
import subprocess

### You need to add `agent360 ALL=(ALL) NOPASSWD: /usr/sbin/kamctl` to /etc/sudoers in order for this to work
class Plugin(plugins.BasePlugin):
    __name__ = 'kamailio'

    def run(self, *unused):
        p = subprocess.Popen("sudo kamctl ul show | grep AOR | wc -l", stdout=subprocess.PIPE, shell=True)
        p = p.communicate()[0].decode('utf-8').replace("\n", "")
        p = { "devices_online": p }
        return p

if __name__ == '__main__':
    Plugin().execute()

# ==== agent360/plugins/plugins.py ====
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import pickle
import time
import sys
if sys.version_info >= (3,):
    import configparser
else:
    import ConfigParser


class BasePlugin:
    '''
    Abstract class for plugins
    '''
    __name__ = ''

    def __init__(self, agent_cache=[]):
        if isinstance(agent_cache, list):
            self.agent_cache = agent_cache
        else:
            raise TypeError('agent_cache must be a list')

        # if not self.__name__:
        #     self.__name__ = os.path.splitext(os.path.basename(__file__))[0]

    def run(self, config=None):
        '''
        Virtual method for running the plugin
        '''
        pass

    def execute(self):
        '''
        Execution wrapper for the plugin
        argv[1]: ini_file
        '''
        config = None
        if len(sys.argv) > 1:
            if sys.version_info >= (3,):
                config = configparser.RawConfigParser()
            else:
                config = ConfigParser.RawConfigParser()
            config.read(sys.argv[1])
        pickle.dump(self.run(config), sys.stdout.buffer)

    def get_agent_cache(self):
        '''
        Return agent cached value for this specific plugin.
        '''
        try:
            return self.agent_cache[0]
        except Exception:
            return {}

    def set_agent_cache(self, cache):
        '''
        Set the agent cache value previously passed to this plugin instance.
        To enable caching, an existing agent_cache list has to be passed
        to the Plugin on initialization.
        Minimally it should be list().
        The agent can only see changes to element zero of agent_cache, so
        do not manually override self.agent_cache, otherwise the cache will not be saved!

        If self.agent_cache is not a list an appropriate exception will be raised.
        '''
        try:
            self.agent_cache[0] = cache
        except IndexError:
            self.agent_cache.append(cache)

    def absolute_to_per_second(self, key, val, prev_cache):
        try:
            if val >= prev_cache[key]:
                value = \
                    (val - prev_cache[key]) / \
                    (time.time() - prev_cache['ts'])
            else:  # previous cached value should not be higher than current value (service was restarted?)
                value = val / \
                    (time.time() - prev_cache['ts'])
        except Exception:  # No cache yet, can't calculate
            value = 0
        return value
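absolute_to_per_second above converts monotonically increasing counters into per-second rates, using the raw value when the counter went backwards (service restart) and 0 when there is no cache yet. A standalone sketch of the same math, with hypothetical cache contents and no BasePlugin dependency:

```python
import time

def absolute_to_per_second(key, val, prev_cache, now=None):
    # Same logic as BasePlugin.absolute_to_per_second: delta / elapsed seconds.
    # A smaller current value means the service restarted, so divide the raw
    # value instead; any missing cache key yields 0.
    now = time.time() if now is None else now
    try:
        if val >= prev_cache[key]:
            return (val - prev_cache[key]) / (now - prev_cache['ts'])
        return val / (now - prev_cache['ts'])
    except Exception:
        return 0

# 500 new bytes over 10 seconds -> 50.0 bytes/s
prev = {'bytes_sent': 1000, 'ts': 100.0}
print(absolute_to_per_second('bytes_sent', 1500, prev, now=110.0))  # 50.0
```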

# ==== agent360/plugins/network.py ====
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import psutil
import plugins
import time

class Plugin(plugins.BasePlugin):
    __name__ = 'network'

    def run(self, config):
        '''
        Network monitoring plugin.
        To only enable certain interfaces add below [network]:
        interfaces = eth1,eth3,...
        '''

        absolute = dict()
        absolute['ts'] = time.time()
        prev_cache = self.get_agent_cache()  # Get absolute values from previous check

        try:
            enabled_interfaces = config.get('network', 'interfaces').split(',')
        except Exception:
            enabled_interfaces = False

        returndata = {}
        interfaces = psutil.net_io_counters(pernic=True)
        for interface, stats in interfaces.items():
            if enabled_interfaces is not False:
                if interface not in enabled_interfaces:
                    continue
            prev_cache.setdefault(interface, {})
            absolute[interface] = {}
            absolute[interface]['ts'] = time.time()
            absolute[interface]['bytes_sent'] = stats.bytes_sent
            absolute[interface]['bytes_recv'] = stats.bytes_recv
            absolute[interface]['packets_sent'] = stats.packets_sent
            absolute[interface]['packets_recv'] = stats.packets_recv
            absolute[interface]['errin'] = stats.errin
            absolute[interface]['errout'] = stats.errout
            absolute[interface]['dropin'] = stats.dropin
            absolute[interface]['dropout'] = stats.dropout
            returndata[interface] = {}
            returndata[interface]['bytes_sent'] = self.absolute_to_per_second('bytes_sent', stats.bytes_sent, prev_cache[interface])
            returndata[interface]['bytes_recv'] = self.absolute_to_per_second('bytes_recv', stats.bytes_recv, prev_cache[interface])
            returndata[interface]['packets_sent'] = self.absolute_to_per_second('packets_sent', stats.packets_sent, prev_cache[interface])
            returndata[interface]['packets_recv'] = self.absolute_to_per_second('packets_recv', stats.packets_recv, prev_cache[interface])
            returndata[interface]['errin'] = self.absolute_to_per_second('errin', stats.errin, prev_cache[interface])
            returndata[interface]['errout'] = self.absolute_to_per_second('errout', stats.errout, prev_cache[interface])
            returndata[interface]['dropin'] = self.absolute_to_per_second('dropin', stats.dropin, prev_cache[interface])
            returndata[interface]['dropout'] = self.absolute_to_per_second('dropout', stats.dropout, prev_cache[interface])
        self.set_agent_cache(absolute)
        return returndata


if __name__ == '__main__':
    Plugin().execute()

# ==== agent360/plugins/dovecot.py ====
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import subprocess
import plugins
import re

class Plugin(plugins.BasePlugin):
    __name__ = 'dovecot'

    def run(self, config):
        '''
        Returns active dovecot IMAP and POP3 sessions and the current version.
        Sudo permission to access the doveadm and dovecot commands is required.

        Example config for /etc/agent360.ini:
        [dovecot]
        enabled = yes
        '''
        data = {}
        output = os.popen('sudo doveadm who').read()
        output2 = os.popen('sudo dovecot --version').read()
        output3 = os.popen('sudo dovecot --hostdomain').read()
        imapsum = 0
        pop3sum = 0

        for row in output.split("\n"):
            if re.search(r'.*(imap|pop3).*', row):
                imapr = re.search(r' +([0-9]+) +imap +', row, re.IGNORECASE)
                popr = re.search(r' +([0-9]+) +pop3 +', row, re.IGNORECASE)
                if imapr is not None: imapsum = imapsum + int(imapr.group(1))
                if popr is not None: pop3sum = pop3sum + int(popr.group(1))


        data['imap'] = imapsum
        data['pop3'] = pop3sum
        data['meta'] = {'version': output2.strip(), 'hostdomain': output3.strip()}


        return data

if __name__ == '__main__':
    Plugin().execute()
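The dovecot plugin above sums sessions per protocol with regexes over `doveadm who` output. A standalone sketch of the same counting; the sample rows are fabricated, not real doveadm output:

```python
import re

# fabricated `doveadm who`-style rows: username, session count, protocol, ...
output = (
    "alice 2 imap (pid) (ip)\n"
    "bob 1 pop3 (pid) (ip)\n"
    "carol 3 imap (pid) (ip)\n"
)

imapsum = pop3sum = 0
for row in output.split("\n"):
    # capture the session-count column that precedes the protocol name
    imapr = re.search(r' +([0-9]+) +imap +', row, re.IGNORECASE)
    popr = re.search(r' +([0-9]+) +pop3 +', row, re.IGNORECASE)
    if imapr is not None:
        imapsum += int(imapr.group(1))
    if popr is not None:
        pop3sum += int(popr.group(1))

print(imapsum, pop3sum)  # 5 1
```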

# ==== agent360/plugins/plesk-cgroups.py ====
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import glob
import pathlib
import pwd
import re
import time

import plugins


class Plugin(plugins.BasePlugin):
    __name__ = 'plesk-cgroups'

    def run(self, *unused):
        accounting = {}
        cache = self.get_agent_cache()
        uid_re = re.compile(r'\d+')

        # This silently depends on cgroup mounted virtual file systems
        # and SystemD user slices

        sysfs_prefix = '/sys/fs/cgroup/'
        sysfs_suffix = 'user.slice/user-*.slice'
        for user_slice in (
            glob.glob(sysfs_prefix + sysfs_suffix) +  # cgroup v2
            glob.glob(sysfs_prefix + 'systemd/' + sysfs_suffix)  # cgroup v1
        ):

            accounting[user_slice] = {}
            if not user_slice in cache:
                cache[user_slice] = {}

            accounting[user_slice]['uid'] = uid = \
                int(uid_re.search(user_slice).group())

            # Resolve uid to username
            try:
                accounting[user_slice]['username'] = pwd.getpwuid(uid).pw_name
            except KeyError:
                # There's no guarantee that the uid has a name
                pass

            # This silently depends on the corresponding CGroup controllers
            # being individually enabled

            # Memory accounting
            try:
                # cgroup v2
                with pathlib.Path(user_slice, 'memory.current').open() as f:
                    accounting[user_slice]['memory.current'] = \
                        int(f.read().strip())
            except FileNotFoundError:
                try:
                    # cgroup v1
                    with pathlib.Path(
                        sysfs_prefix,
                        'memory',
                        'user.slice',
                        'user-{}.slice'.format(uid),
                        'memory.usage_in_bytes'
                    ).open() as f:
                        accounting[user_slice]['memory.usage_in_bytes'] = \
                            int(f.read().strip())
                except FileNotFoundError:
                    pass

            # IO accounting
            try:
                # cgroup v2
                with pathlib.Path(user_slice, 'io.stat').open() as f:
                    accounting[user_slice]['io.stat'] = a = {}
                    if not 'io.stat' in cache[user_slice]:
                        cache[user_slice]['io.stat'] = c = {}
                    for line in f.readlines():
                        devnum, metrics = line.split(maxsplit=1)
                        if not devnum in a:
                            a[devnum] = {}
                            a[devnum]['ts'] = time.time()
                        if not devnum in c:
                            c[devnum] = {}
                        for kv in metrics.split():
                            k, v = kv.split('=')
                            a[devnum][k] = self.absolute_to_per_second(
                                k, int(v), c[devnum]
                            )
                            c[devnum][k] = int(v)
            except FileNotFoundError:
                try:
                    # cgroup v1
                    with pathlib.Path(
                        sysfs_prefix,
                        'blkio',
                        'user.slice',
                        'user-{}.slice'.format(uid),
                        'blkio.throttle.io_service_bytes'
                    ).open() as f:
                        accounting[user_slice]\
                            ['blkio.throttle.io_service_bytes'] = a = {}
                        if not 'blkio.throttle.io_service_bytes' \
                            in cache[user_slice]:
                            cache[user_slice]\
                                ['blkio.throttle.io_service_bytes'] = c = {}
                        for line in f.readlines():
                            try:
                                devnum, k, v = line.split()
                            except ValueError:
                                # The last line is "Total";
                                # it has only 2 values and is skipped
                                continue
                            if not devnum in a:
                                a[devnum] = {}
                                a[devnum]['ts'] = time.time()
                            if not devnum in c:
                                c[devnum] = {}
                            a[devnum][k] = self.absolute_to_per_second(
                                k, int(v), c[devnum]
                            )
                            c[devnum][k] = int(v)
                except FileNotFoundError:
                    pass

            # CPU accounting
            try:
                # cgroup v2
                with pathlib.Path(user_slice, 'cpu.stat').open() as f:
                    accounting[user_slice]['cpu.stat'] = a = {}
                    if not 'cpu.stat' in cache[user_slice]:
                        cache[user_slice]['cpu.stat'] = c = {}
                    a['ts'] = time.time()
                    for line in f.readlines():
                        k, v = line.split()
                        a[k] = self.absolute_to_per_second(k, int(v), c)
                        c[k] = int(v)
            except FileNotFoundError:
                try:
                    # cgroup v1
                    with pathlib.Path(
                        sysfs_prefix,
                        'cpu',
                        'user.slice',
                        'user-{}.slice'.format(uid),
                        'cpuacct.stat'
                    ).open() as f:
                        accounting[user_slice]['cpuacct.stat'] = a = {}
                        if not 'cpuacct.stat' in cache[user_slice]:
                            cache[user_slice]['cpuacct.stat'] = c = {}
                        a['ts'] = time.time()
                        for line in f.readlines():
                            k, v = line.split()
                            a[k] = self.absolute_to_per_second(k, int(v), c)
                            c[k] = int(v)
                except FileNotFoundError:
                    pass

        self.set_agent_cache(cache)
        return accounting


if __name__ == '__main__':
    Plugin().execute()

# ==== agent360/plugins/gpu.py ====
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import plugins
import sys

class Plugin(plugins.BasePlugin):
    __name__ = 'gpu'

    def run(self, *unused):
        '''
        Experimental plugin used to collect GPU load from OpenHardwareMonitor (Windows)
        '''
        data = {}

        if sys.platform == "win32":
            try:
                import wmi
            except ImportError:
                return 'wmi module not installed.'
            try:
                w = wmi.WMI(namespace=r"root\OpenHardwareMonitor")
                temperature_infos = w.Sensor()
                for sensor in temperature_infos:
                    if sensor.SensorType==u'Load' and sensor.Name==u'GPU Core':
                        data[sensor.Parent.replace('/','-').strip('-')] = sensor.Value
            except:
                return 'Could not fetch GPU Load data from OpenHardwareMonitor.'

        return data


if __name__ == '__main__':
    Plugin().execute()

# ==== agent360/plugins/unbound.py ====
#!/usr/bin/env python
import plugins
import subprocess

# Needs: agent360 ALL=(ALL) NOPASSWD: /usr/sbin/unbound-control


class Plugin(plugins.BasePlugin):

    __name__ = 'unbound'

    floatKeys = [ '.avg', '.median', '.now', '.up', '.elapsed' ]

    rate_metrics = [
        "num.answer.bogus",
        "num.answer.rcode",
        "num.answer.secure",
        "num.cachehits",
        "num.cachemiss",
        "num.dnscrypt.cert",
        "num.dnscrypt.cleartext",
        "num.dnscrypt.crypted",
        "num.dnscrypt.malformed",
        "num.prefetch",
        "num.queries",
        "num.queries_ip_ratelimited",
        "num.query.aggressive",
        "num.query.authzone.down",
        "num.query.authzone.up",
        "num.query.class",
        "num.query.dnscrypt.replay",
        "num.query.dnscrypt.shared_secret.cachemiss",
        "num.query.edns",
        "num.query.flags",
        "num.query.ipv6",
        "num.query.opcode",
        "num.query.ratelimited",
        "num.query.subnet",
        "num.query.subnet_cache",
        "num.query.tcp",
        "num.query.tcpout",
        "num.query.tls",
        "num.query.tls.resume",
        "num.query.type",
        "num.recursivereplies",
        "num.rrset.bogus",
        "num.zero_ttl",
        "requestlist.exceeded",
        "requestlist.overwritten",
        "unwanted.queries",
        "unwanted.replies",
    ]

    gauge_metrics = [
        "dnscrypt_nonce.cache.count",
        "dnscrypt_shared_secret.cache.count",
        "infra.cache.count",
        "key.cache.count",
        "mem.cache.dnscrypt_nonce",
        "mem.cache.dnscrypt_shared_secret",
        "mem.cache.message",
        "mem.cache.rrset",
        "mem.mod.iterator",
        "mem.mod.validator",
        "mem.streamwait",
        "msg.cache.count",
        "recursion.time.avg",
        "recursion.time.median",
        "requestlist.avg",
        "requestlist.current.all",
        "requestlist.current.user",
        "requestlist.max",
        "rrset.cache.count",
        "tcpusage",
        "time.elapsed",
        "time.now",
        "time.up",
    ]

    by_tag_labels = [
        "num.answer.rcode",
        "num.query.aggressive",
        "num.query.class",
        "num.query.edns",
        "num.query.flags",
        "num.query.opcode",
        "num.query.type",
    ]

    def get_stats(self):
        cmd = 'sudo /usr/sbin/unbound-control stats'
        try:
            output = subprocess.check_output(cmd, stderr=subprocess.STDOUT, shell=True).decode('utf-8')
        except subprocess.CalledProcessError:
            # command failed or sudo not configured; run() then returns False
            return None

        return output


    def parse_stat(self, stat):
        # split each "key=value" line; float keys keep decimals, the rest are counters
        stats = {t[0]: t[2] for line in stat.splitlines() for t in [line.partition('=')]}
        for key, value in stats.items():
            if key.endswith(tuple(self.floatKeys)):
                stats[key] = float(value)
            else:
                stats[key] = int(value)
        return stats

    def run(self, *unused):

        resdata = self.get_stats()
        final = {}

        if resdata is None:
            return False
        else:
            final = self.parse_stat(resdata)

        return final

if __name__ == '__main__':

    Plugin().execute()
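parse_stat above relies on str.partition('=') to split unbound-control stats lines. A minimal standalone sketch of that technique; the sample lines are illustrative, not captured from a real server:

```python
# keys ending in these suffixes carry float values; everything else is a counter
float_suffixes = ('.avg', '.median', '.now', '.up', '.elapsed')

sample = "total.num.queries=42\ntime.up=3.5\nthread0.num.cachehits=40"

# partition('=') yields (key, '=', value); build the dict in one pass
stats = {t[0]: t[2] for line in sample.splitlines() for t in [line.partition('=')]}
stats = {k: float(v) if k.endswith(float_suffixes) else int(v)
         for k, v in stats.items()}

print(stats)  # {'total.num.queries': 42, 'time.up': 3.5, 'thread0.num.cachehits': 40}
```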

# ==== agent360/plugins/powerdns.py ====
#!/usr/bin/env python
# -*- coding: utf-8 -*-
try:
    from urllib.request import urlopen, Request
except ImportError:
    from urllib2 import urlopen, Request
import time
import plugins
import json

class Plugin(plugins.BasePlugin):
    __name__ = 'powerdns'

    def run(self, config):
        '''
        Experimental plugin for PowerDNS authoritative server. Might also work with PowerDNS recursor,
        but it may need extra delta_keys / absolute_keys.
        Add to /etc/agent360.ini:
        [powerdns]
        enabled=yes
        statistics_url=http://localhost:8081/api/v1/servers/localhost/statistics
        api_key=changeme
        ;ca_file=
        ;ca_path=
        ;timeout=10
        '''
        # Create request to configured URL
        request = Request(config.get(__name__, 'statistics_url'), headers={'X-API-Key': config.get(__name__, 'api_key')})
        # defaults
        timeout = 10
        results = dict()
        raw_response = None
        # next / previous cached metrics (for calculating deltas)
        next_cache = dict()
        prev_cache = self.get_agent_cache()
        # use timeout from config if specified
        if config.has_option(__name__, 'timeout'):
            timeout = int(config.get(__name__, 'timeout'))
        # create response based on configuration
        if config.has_option(__name__, 'ca_file'):
            raw_response = urlopen(request, timeout=timeout, cafile=config.get(__name__, 'ca_file'))
        elif config.has_option(__name__, 'ca_path'):
            raw_response = urlopen(request, timeout=timeout, capath=config.get(__name__, 'ca_path'))
        else:
            raw_response = urlopen(request, timeout=timeout)
        # set next_cache timestamp
        next_cache['ts'] = time.time()
        # parse raw response as JSON
        try:
            stats = json.loads(raw_response.read())
        except Exception:
            return False
        # keys for which we should calculate the delta
        delta_keys = (
            'corrupt-packets',
            'deferred-cache-inserts',
            'deferred-cache-lookup',
            'deferred-packetcache-inserts',
            'deferred-packetcache-lookup',
            'dnsupdate-answers',
            'dnsupdate-changes',
            'dnsupdate-queries',
            'dnsupdate-refused',
            'incoming-notifications',
            'overload-drops',
            'packetcache-hit',
            'packetcache-miss',
            'query-cache-hit',
            'query-cache-miss',
            'rd-queries',
            'recursing-answers',
            'recursing-questions',
            'recursion-unanswered',
            'servfail-packets',
            'signatures',
            'sys-msec',
            'tcp-answers',
            'tcp-answers-bytes',
            'tcp-queries',
            'tcp4-answers',
            'tcp4-answers-bytes',
            'tcp4-queries',
            'tcp6-answers',
            'tcp6-answers-bytes',
            'tcp6-queries',
            'timedout-packets',
            'udp-answers',
            'udp-answers-bytes',
            'udp-do-queries',
            'udp-in-errors',
            'udp-noport-errors',
            'udp-queries',
            'udp-recvbuf-errors',
            'udp-sndbuf-errors',
            'udp4-answers',
            'udp4-answers-bytes',
            'udp4-queries',
            'udp6-answers',
            'udp6-answers-bytes',
            'udp6-queries',
            'user-msec'
        )

        # keys for which we should store the absolute value:
        absolute_keys = (
            'key-cache-size',
            'latency',
            'fd-usage',
            'meta-cache-size',
            'open-tcp-connections',
            'packetcache-size',
            'qsize-q',
            'query-cache-size',
            'real-memory-usage',
            'security-status',
            'signature-cache-size',
            'uptime'
        )
        data = dict()
        for stat in stats:
            if 'name' in stat and 'value' in stat and 'type' in stat:
                if stat['type'] == 'StatisticItem':
                    if stat['name'] in delta_keys:
                        results[stat['name']] = self.absolute_to_per_second(stat['name'], float(stat['value']), prev_cache)
                        data[stat['name']] = float(stat['value'])
                    elif stat['name'] in absolute_keys:
                        results[stat['name']] = float(stat['value'])
        data['ts'] = time.time()
        self.set_agent_cache(data)
        return results

if __name__ == '__main__':
    Plugin().execute()
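The powerdns run() above sorts each StatisticItem into delta (rate) keys or absolute keys. A standalone sketch of that dispatch over fabricated API items (key lists shortened, values made up):

```python
# shortened versions of the plugin's key tuples
delta_keys = ('udp-queries', 'tcp-queries')
absolute_keys = ('uptime', 'latency')

# shaped like PowerDNS /statistics response items
stats = [
    {'type': 'StatisticItem', 'name': 'udp-queries', 'value': '1200'},
    {'type': 'StatisticItem', 'name': 'uptime', 'value': '86400'},
    {'type': 'RingStatisticItem', 'name': 'queries', 'value': '0'},  # not a plain item, skipped
]

results = {}
for stat in stats:
    if stat.get('type') != 'StatisticItem':
        continue
    if stat['name'] in delta_keys:
        # the real plugin feeds delta keys through absolute_to_per_second
        results[stat['name']] = float(stat['value'])
    elif stat['name'] in absolute_keys:
        results[stat['name']] = float(stat['value'])

print(results)  # {'udp-queries': 1200.0, 'uptime': 86400.0}
```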

# ==== agent360/plugins/proftpd.py ====
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import plugins
import json

class Plugin(plugins.BasePlugin):
    __name__ = 'proftpd'

    def run(self, config):
        '''
        Current active ProFTPD sessions
        '''
        data = {}
        result = os.popen('/bin/ftpwho -o json').read()
        cnt = 0
        uploading = 0
        idle = 0
        rawdata=json.loads(result)
        for item in rawdata['connections']:
            if item['pid']:
                cnt += 1
            try:
                if item['uploading'] is True:
                    uploading += 1
            except Exception:
                pass
            try:
                if item['idling'] is True:
                    idle += 1
            except Exception:
                pass
        data['connections'] = cnt
        data['uploading'] = uploading
        data['idle'] = idle
        data['server_type'] = rawdata['server']['server_type']
        data['pid'] = "PID " + str(rawdata['server']['pid'])
        updt = rawdata['server']['started_ms']
        data['uptime'] = {'ms': updt}
        return data

if __name__ == '__main__':
    Plugin().execute()

# ==== agent360/plugins/swap.py ====
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import psutil
import plugins


class Plugin(plugins.BasePlugin):
    __name__ = 'swap'

    def run(self, *unused):
        swap = {}
        mem = psutil.swap_memory()
        for name in mem._fields:
            swap[name] = getattr(mem, name)
        return swap


if __name__ == '__main__':
    Plugin().execute()
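The swap plugin copies every field of the psutil namedtuple into a plain dict via `_fields`. The same pattern works for any namedtuple, and `_asdict()` is the stdlib shortcut; this sketch uses a hand-made tuple rather than real psutil data:

```python
from collections import namedtuple

# stand-in for psutil.swap_memory()'s result type (illustrative fields only)
Swap = namedtuple('Swap', ['total', 'used', 'free', 'percent'])
mem = Swap(total=1024, used=256, free=768, percent=25.0)

# field-by-field copy, exactly as the plugin does
swap = {name: getattr(mem, name) for name in mem._fields}
assert swap == dict(mem._asdict())  # equivalent stdlib shortcut

print(swap)  # {'total': 1024, 'used': 256, 'free': 768, 'percent': 25.0}
```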

# ==== agent360/plugins/cloudlinux-dbgov.py ====
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import plugins
import json


class Plugin(plugins.BasePlugin):
    __name__ = 'cloudlinux-dbgov'

    def run(self, config):
        '''
        Beta plugin to monitor cloudlinux db governor users
        Requires sudo access to lveinfo (whereis lveinfo) add to /etc/sudoers:
        agent360 ALL=(ALL) NOPASSWD: /REPLACE/PATH/TO/lveinfo

        To enable add to /etc/agent360.ini:
        [cloudlinux-dbgov]
        enabled = yes
        '''
        data = os.popen('sudo lveinfo --dbgov --period 5m -o cpu --limit 20 --json').read()
        results = {}

        try:
            data = json.loads(data)
        except Exception:
            return "Could not load lveinfo dbgov data"

        if data['success'] != 1:
            return "Failed to load lveinfo dbgov"

        for line in data['data']:
            username = line['USER']
            del line['USER']
            results[username] = line

        return results

if __name__ == '__main__':
    Plugin().execute()

# ==== agent360/plugins/rabbitmq.py ====
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import time
import plugins
import json
import requests
from requests.auth import HTTPBasicAuth


class Plugin(plugins.BasePlugin):
    __name__ = 'rabbitmq'

    def run(self, config):
        '''
        rabbitmq status page metrics
        '''
        def ascii_encode_dict(data):
            ascii_encode = lambda x: x.encode('ascii') if isinstance(x, unicode) else x
            return dict(map(ascii_encode, pair) for pair in data.items())

        results = dict()
        next_cache = dict()

        try:
            username = config.get('rabbitmq', 'username')
            password = config.get('rabbitmq', 'password')
            user_pass = (username, password)
        except Exception:
            user_pass = False

        request = requests.get(config.get('rabbitmq', 'status_page_url'), auth=user_pass)

        if request.status_code == 401 and user_pass is not False:
            request = requests.get(config.get('rabbitmq', 'status_page_url'), auth=HTTPBasicAuth(username, password))

        if request.status_code == 200:
            try:
                j = request.json()
            except Exception as e:
                return e
        else:
            return "Could not load status page: {}".format(request.text)

        next_cache['ts'] = time.time()
        prev_cache = self.get_agent_cache()  # Get absolute values from previous check
        try:
            prev_cache['message_stats']
        except Exception:
            prev_cache['message_stats'] = {}
        next_cache['message_stats'] = j
        next_cache['message_stats']['ts'] = time.time()
        results['published'] = self.absolute_to_per_second('published', j['message_stats']['publish'], prev_cache['message_stats'])
        results['published_total'] = j['message_stats']['publish']
        results['ack'] = self.absolute_to_per_second('ack', j['message_stats']['ack'], prev_cache['message_stats'])
        results['ack_total'] = j['message_stats']['ack']
        results['deliver_get'] = self.absolute_to_per_second('deliver_get', j['message_stats']['deliver_get'], prev_cache['message_stats'])
        results['deliver_get_total'] = j['message_stats']['deliver_get']
        results['redeliver'] = self.absolute_to_per_second('redeliver', j['message_stats']['redeliver'], prev_cache['message_stats'])
        results['redeliver_total'] = j['message_stats']['redeliver']
        results['deliver'] = self.absolute_to_per_second('deliver', j['message_stats']['deliver'], prev_cache['message_stats'])
        results['deliver_total'] = j['message_stats']['deliver']

        results['messages'] = j['queue_totals']['messages']
        results['messages_ready'] = j['queue_totals']['messages_ready']
        results['messages_unacknowledged'] = j['queue_totals']['messages_unacknowledged']

        results['listeners'] = len(j['listeners'])

        results['consumers'] = j['object_totals']['consumers']
        results['queues'] = j['object_totals']['queues']
        results['exchanges'] = j['object_totals']['exchanges']
        results['connections'] = j['object_totals']['connections']
        results['channels'] = j['object_totals']['channels']

        self.set_agent_cache(next_cache)

        return results


if __name__ == '__main__':
    Plugin().execute()

# ==== agent360/plugins/__init__.py ==== (empty file)

# ==== agent360/plugins/bind.py ====
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import plugins
import json

class Plugin(plugins.BasePlugin):
    __name__ = 'bind'

    def run(self, config):
        '''
        Monitoring bind nameserver

        You must have the bind statistics-channels configured
        and you must have jq installed for json processing.
        
        Usage for /etc/agent360.ini:
        [bind]
        enabled = yes
        port = 8053
        '''
        bport = config.get('bind', 'port')
        result = os.popen('curl -j http://localhost:' + str(bport) + '/json 2>/dev/null | jq ".qtypes * .rcodes * .nsstats"').read()
        data = json.loads(result)
        return data

if __name__ == '__main__':
    Plugin().execute()

# ==== agent360/plugins/cpu_freq.py ====
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import psutil
import plugins

class Plugin(plugins.BasePlugin):
    __name__ = 'cpu_freq'

    def run(self, *unused):
        results = {}
        # one entry per core: current frequency in MHz
        for cpu_number, cpu in enumerate(psutil.cpu_freq(percpu=True)):
            results[cpu_number] = cpu.current
        return results


if __name__ == '__main__':
    Plugin().execute()
# ---- agent360/plugins/memcached.py ----
import plugins
import struct
import time
import memcache

class Plugin(plugins.BasePlugin):
    __name__ = 'memcached'

    def run(self, config):
        '''
        pip install python-memcached
        add to /etc/agent360.ini
        [memcached]
        enabled=yes
        host=127.0.0.1
        port=11211
        '''
        prev_cache = self.get_agent_cache()  # Get absolute values from previous check
        try:
            socket = config.get('memcached', 'socket')
        except Exception:
            socket = False
        try:
            # Connect
            if socket is False:
                mc = memcache.Client(['%s:%s' % (config.get('memcached', 'host'), config.get('memcached', 'port'))], debug=0)
            else:
                mc = memcache.Client(['unix:/%s' % socket], debug=0)
        except Exception:
            return "Could not connect to memcached"

        non_delta = (
            'accepting_conns',
            'bytes',
            'uptime',
            'total_items',
            'total_connections',
            'time_in_listen_disabled_us',
            'threads',
            'rusage_user',
            'rusage_system',
            'reserved_fds',
            'pointer_size',
            'malloc_fails',
            'lrutail_reflocked',
            'listen_disabled_num',
            'limit_maxbytes',
            'hash_power_level',
            'hash_bytes',
            'curr_items',
            'curr_connections',
            'connection_structures',
            'conn_yields',
            'reclaimed'
        )
        delta_keys = (
            'auth_cmds',
            'auth_errors',
            'bytes_read',
            'bytes_written',
            'touch_misses',
            'touch_hits',
            'incr_misses',
            'incr_hits',
            'cas_misses',
            'cas_badval',
            'get_misses',
            'get_hits',
            'expired_unfetched',
            'evictions',
            'evicted_unfetched',
            'delete_misses',
            'delete_hits',
            'decr_misses',
            'decr_hits',
            'crawler_reclaimed',
            'crawler_items_checked',
            'cmd_touch',
            'cmd_get',
            'cmd_set',
            'cmd_flush',
            'cmd_misses',
            'cmd_badval',
            'cmd_hits'
        )

        results = {}
        data = {}
        try:
            result = mc.get_stats()
            for key_value, value in result[0][1].items():
                key = key_value.lower().strip()
                if key in non_delta:
                    results[key] = float(value)
                elif key in delta_keys:
                    value = float(value)
                    results[key] = self.absolute_to_per_second(key, float(value), prev_cache)
                    data[key] = float(value)
                else:
                    pass
        except Exception:
            return 'Could not fetch memcached stats'

        data['ts'] = time.time()
        self.set_agent_cache(data)
        return results


if __name__ == '__main__':
    Plugin().execute()
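Several plugins in this package rely on `BasePlugin.absolute_to_per_second` to turn monotonically increasing counters (memcached `get_hits`, HAProxy `bin`/`bout`, …) into rates. A minimal standalone sketch of that pattern — the real method's signature and cache handling in agent360 may differ; this version takes the timestamp explicitly:

```python
def absolute_to_per_second(key, value, prev_cache, now):
    """Convert an absolute counter to a per-second rate using the previous sample.

    Returns 0.0 when there is no usable cache entry or when the counter
    went backwards (e.g. the monitored daemon restarted).
    """
    try:
        elapsed = now - prev_cache['ts']
        delta = value - prev_cache[key]
    except KeyError:
        return 0.0
    if elapsed <= 0 or delta < 0:
        return 0.0
    return delta / elapsed
```

For example, `absolute_to_per_second('get_hits', 150.0, {'ts': 100.0, 'get_hits': 50.0}, 110.0)` yields `10.0`: 100 hits over 10 seconds.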
# ---- agent360/plugins/process.py ----
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import psutil
import plugins
import sys
import re

class Plugin(plugins.BasePlugin):
    __name__ = 'process'

    def sanitize_command_line(self, cmdline):
        # Check if cmdline starts with a file path and separate it
        match = re.match(r'^(\S+)(\s+.*)?$', cmdline)
        if match:
            initial_path = match.group(1)
            remaining_cmdline = match.group(2) or ""
        else:
            initial_path = ""
            remaining_cmdline = cmdline

        # Redact sensitive information in the remaining command line (case-insensitive)
        remaining_cmdline = re.sub(r'(/[^ ]+)+', '/***', remaining_cmdline, flags=re.IGNORECASE)
        remaining_cmdline = re.sub(r'(--(?:password|pass|pwd|token|secret|key|api-key|access-key|secret-key|client-secret|auth-key|auth-token)\s+\S+)', '--***', remaining_cmdline, flags=re.IGNORECASE)
        remaining_cmdline = re.sub(r'(-p\s+\S+)', '-p ***', remaining_cmdline, flags=re.IGNORECASE)
        remaining_cmdline = re.sub(r'\b(?:password|pass|pwd|token|secret|key|api_key|access_key|client_secret|auth_key|auth_token)=\S+', '***', remaining_cmdline, flags=re.IGNORECASE)
        remaining_cmdline = re.sub(r'\b(?:[0-9]{1,3}\.){3}[0-9]{1,3}\b', '***', remaining_cmdline, flags=re.IGNORECASE)
        remaining_cmdline = re.sub(r'\b(?:[a-fA-F0-9:]+:+)+[a-fA-F0-9]+\b', '***', remaining_cmdline, flags=re.IGNORECASE)
        remaining_cmdline = re.sub(r'(--port\s+\d+)', '--port ***', remaining_cmdline, flags=re.IGNORECASE)
        remaining_cmdline = re.sub(r'\b(?:DB_PASS|DB_USER|AWS_SECRET_ACCESS_KEY|AWS_ACCESS_KEY_ID|SECRET_KEY|TOKEN|PASSWORD|USERNAME|API_KEY|PRIVATE_KEY|SSH_KEY|SSL_CERTIFICATE|SSL_KEY)\b=\S+', '***', remaining_cmdline, flags=re.IGNORECASE)
        remaining_cmdline = re.sub(r'\b(root|admin|cpanelsolr|user\d*)\b', '***', remaining_cmdline, flags=re.IGNORECASE)
        remaining_cmdline = re.sub(r'(\S+\.(pem|crt|key|cert|csr|pfx|p12|ovpn|enc|asc|gpg))', '***', remaining_cmdline, flags=re.IGNORECASE)
        remaining_cmdline = re.sub(r'\b(?:id_rsa|id_dsa|id_ecdsa|id_ed25519|known_hosts|authorized_keys|credentials|.env|docker-compose.yml)\b', '***', remaining_cmdline, flags=re.IGNORECASE)
        remaining_cmdline = re.sub(r'\b(?:jdbc|mysql|postgres|mongodb|redis|amqp|http|https|ftp|sftp|s3):\/\/\S+', '***', remaining_cmdline, flags=re.IGNORECASE)
        remaining_cmdline = re.sub(r'\b(?:https?|ftp):\/\/(?:\S+\:\S+@)?(?:[a-zA-Z0-9.-]+\.\S+)', '***', remaining_cmdline, flags=re.IGNORECASE)
        remaining_cmdline = re.sub(r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b', '***', remaining_cmdline, flags=re.IGNORECASE)

        # Combine the initial path and the sanitized command line, then limit length
        sanitized_cmdline = (initial_path + remaining_cmdline).strip()
        if len(sanitized_cmdline) > 256:
            sanitized_cmdline = sanitized_cmdline[:253] + '...'

        return sanitized_cmdline

    def run(self, *unused):
        process = []
        for proc in psutil.process_iter():
            try:
                pinfo = proc.as_dict(attrs=[
                    'pid', 'name', 'ppid', 'exe', 'cmdline', 'username',
                    'cpu_percent', 'memory_percent', 'io_counters'
                ])

                try:
                    # Sanitize and format the command line
                    pinfo['cmdline'] = self.sanitize_command_line(' '.join(pinfo['cmdline']).strip())
                except:
                    pass
                if sys.version_info < (3,):
                    pinfo['cmdline'] = unicode(pinfo['cmdline'], sys.getdefaultencoding(), errors="replace").strip()
                    pinfo['name'] = unicode(pinfo['name'], sys.getdefaultencoding(), errors="replace")
                    pinfo['username'] = unicode(pinfo['username'], sys.getdefaultencoding(), errors="replace")
                    try:
                        pinfo['exe'] = unicode(pinfo['exe'], sys.getdefaultencoding(), errors="replace")
                    except TypeError:
                        pass
            except psutil.NoSuchProcess:
                pass
            except psutil.AccessDenied:
                pass
            except:
                pass
            else:
                process.append(pinfo)
        return process

if __name__ == '__main__':
    Plugin().execute()
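The sanitizer above chains more than a dozen substitutions, which obscures the core idea. A simplified sketch with the patterns reduced to two illustrative rules (the real plugin covers many more token types, paths, IPs, and URLs):

```python
import re

def redact(cmdline):
    """Reduced demonstration of the redaction approach used in sanitize_command_line."""
    # flag-style secrets: --password/--token/--secret followed by a value
    cmdline = re.sub(r'--(?:password|token|secret)\s+\S+', '--***',
                     cmdline, flags=re.IGNORECASE)
    # key=value style secrets
    cmdline = re.sub(r'\b(?:password|token|secret)=\S+', '***',
                     cmdline, flags=re.IGNORECASE)
    return cmdline
```

`redact('mysqld --password hunter2 --port 3306')` returns `'mysqld --*** --port 3306'`: the value is consumed together with the flag so it never reaches the monitoring backend.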
# ---- agent360/plugins/postfix.py ----
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import subprocess
import plugins
import re

class Plugin(plugins.BasePlugin):
    __name__ = 'postfix'

    def run(self, config):
        '''
        Monitoring of the Postfix MTA log and optionally the Postfix version and the mailqueue
        Dependency: Pflogsumm log analyzer, sudo access

        Example config for /etc/agent360.ini:
        [postfix]
        enabled = yes
        log = /var/log/mail.log
        pflogsumm = /usr/sbin/pflogsumm
        version = true
        queue = true
        '''
        data = {}
        maillog = config.get('postfix', 'log')
        pflbin = config.get('postfix', 'pflogsumm')
        pversion = config.get('postfix', 'version')
        mqueue = config.get('postfix', 'queue')

        output = os.popen('sudo ' + pflbin + ' -d today --detail 0 ' + maillog).read()

        for row in output.split("\n"):
            if re.search(r' +[0-9]+ +[a-z]{1}[a-z- ]+[a-z]', row):
                stat = re.findall(r'[a-z]{1}[a-z- ]+[a-z]', row)[0]
                num = re.findall(r'\b[0-9]*\b', row)[0]
                data[stat] = int(num)


        if pversion == "true":
            data['meta'] = {'version': 'Postfix ' + os.popen('sudo postconf -d | grep mail_version -m 1 | egrep -o "[0-9.]+"').read().rstrip()}

        if mqueue == "true":
            mqcommand = os.popen('sudo mailq | tail -n 1').read()
            if "empty" in mqcommand:
                data['queue'] = {'mails': 0, 'size': 0}
            else:
                data['queue'] = {'mails': int(re.findall(r'[0-9]+ Request', mqcommand)[0].replace(" Request", "")), 'size': int(re.findall(r'-- [0-9]+', mqcommand)[0].replace("-- ", ""))}

        return data

if __name__ == '__main__':
    Plugin().execute()
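The two regexes in `run()` pull the statistic name and its count out of each pflogsumm summary row. On a representative line they behave as below (the sample line is fabricated; note the count pattern here uses `[0-9]+` rather than the plugin's `[0-9]*`, which can also produce empty matches):

```python
import re

row = '     180  received'
if re.search(r' +[0-9]+ +[a-z]{1}[a-z- ]+[a-z]', row):
    # first lowercase run is the statistic name, first digit run is the count
    stat = re.findall(r'[a-z]{1}[a-z- ]+[a-z]', row)[0]
    num = int(re.findall(r'\b[0-9]+\b', row)[0])
```

This yields `stat == 'received'` and `num == 180` for the sample row.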
# ---- agent360/plugins/memory.py ----
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import psutil
import plugins
import os

class Plugin(plugins.BasePlugin):
    __name__ = 'memory'

    def run(self, *unused):
        memory = {}
        memory['buffers'] = 0
        mem = psutil.virtual_memory()
        for name in mem._fields:
            memory[name] = getattr(mem, name)

        if (memory['available'] == 0 or memory['buffers'] == 0) and os.name != 'nt':
            tot_m, used_m, free_m, sha_m, buf_m, cac_m, ava_m = map(int, os.popen('free -b -w').readlines()[1].split(':', 1)[1].split())
            memory['percent'] = 100-(((free_m+buf_m+cac_m)*100)/tot_m)
            memory['available'] = ava_m
            memory['buffers'] = buf_m
            memory['cached'] = cac_m
            memory['total'] = tot_m
            memory['used'] = used_m
            memory['shared'] = sha_m

        return memory

if __name__ == '__main__':
    Plugin().execute()
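The `free -b -w` fallback parses the second output line positionally after stripping the `Mem:` label. A self-contained sketch with a fabricated sample line (the byte values are illustrative):

```python
mem_line = "Mem:   16000000000 8000000000 2000000000 500000000 1000000000 4500000000 7000000000"
# strip the "Mem:" label, then unpack total/used/free/shared/buffers/cache/available
tot_m, used_m, free_m, sha_m, buf_m, cac_m, ava_m = map(int, mem_line.split(':', 1)[1].split())
# usage percentage counts buffers and cache as reclaimable, i.e. not "used"
percent = 100 - (((free_m + buf_m + cac_m) * 100) / tot_m)
```

With these numbers `percent` comes out to 53.125: 7.5 GB of the 16 GB is free or reclaimable.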
# ---- agent360/plugins/haproxy.py ----
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import time
import plugins
import csv
import requests


class Plugin(plugins.BasePlugin):
    __name__ = 'haproxy'

    def run(self, config):
            results = dict()
            next_cache = dict()
            try:
                username = config.get('haproxy', 'username')
                password = config.get('haproxy', 'password')
                user_pass = (username, password)
            except Exception:
                user_pass = False

            status_page_url = config.get('haproxy', 'status_page_url')
            if ";csv" not in status_page_url:
                status_page_url = status_page_url + ";csv"

            request = requests.get(status_page_url, auth=user_pass)
            next_cache['ts'] = time.time()
            prev_cache = self.get_agent_cache()  # Get absolute values from previous check
            if request.status_code == 200:
                response = request.text.split("\n")
            else:
                return "Could not load haproxy status page: {}".format(request.text)

            non_delta = (
            'qcur',
            'qmax',
            'scur',
            'smax',
            'slim',
            'stot',
            'weight',
            # 'act',
            # 'bck',
            # 'chkfail',
            # 'chkdown',
            # 'lastchg',
            # 'downtime',
            'qlimit',
            # 'pid',
            # 'iid',
            # 'sid',
            'throttle',
            'lbtot',
            'tracked',
            # 'type',
            'rate',
            'rate_lim',
            'rate_max',
            # 'check_status',
            # 'check_code',
            # 'check_duration',
            'hanafail',
            'req_rate',
            'req_rate_max',
            'req_tot',
            # 'lastsess',
            # 'last_chk',
            # 'last_agt',
            # 'qtime',
            # 'ctime',
            # 'rtime',
            # 'ttime',
            # 'agent_status',
            # 'agent_code',
            # 'agent_duration',
            # 'agent_health',
            'conn_rate',
            'conn_rate_max',
            'conn_tot',
            # 'intercepted'
            )

            delta = (
            'bin',
            'bout',
            'cli_abrt',
            'srv_abrt',
            'intercepted',
            'hrsp_1xx',
            'hrsp_2xx',
            'hrsp_3xx',
            'hrsp_4xx',
            'check_rise',
            'check_fall',
            'check_health',
            'agent_rise',
            'agent_fall',
            'hrsp_5xx',
            'comp_in',
            'comp_out',
            'comp_byp',
            'comp_rsp',
            'hrsp_other',
            'dcon',
            'dreq',
            'dresp',
            'ereq',
            'econ',
            'eresp',
            'wretr',
            'wredis',
            'dses'
            )
            csv_reader = csv.DictReader(response)
            data = dict()
            constructors = [str, float]
            for row in csv_reader:
                svkey = row["# pxname"] + "/" + row["svname"]
                results[svkey] = {}
                data[svkey] = {}
                try:
                    prev_cache[svkey]['ts'] = prev_cache['ts']
                except KeyError:
                    prev_cache[svkey] = {}

                for k, v in row.items():
                    for c in constructors:
                        try:
                            v = c(v)
                        except ValueError:
                            pass
                    if k in non_delta:
                        results[svkey][k] = v
                    elif k in delta and type(v) is not str:
                        results[svkey][k] = self.absolute_to_per_second(k, float(v), prev_cache[svkey])
                        data[svkey][k] = float(v)

            data['ts'] = time.time()
            self.set_agent_cache(data)

            return results


if __name__ == '__main__':
    Plugin().execute()
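The CSV export from the HAProxy stats endpoint starts its header with the literal `# pxname`, which is why the `DictReader` keys above include the `# ` prefix, and why every value arrives as a string (hence the `constructors` conversion loop). A small demonstration with an abbreviated, fabricated stats export:

```python
import csv
import io

# abbreviated stats export; a real one has ~80 columns
sample = "# pxname,svname,scur,bin\nwww,BACKEND,3,1024\n"
reader = csv.DictReader(io.StringIO(sample))
# key each row by proxy/server, as the plugin does
rows = {r["# pxname"] + "/" + r["svname"]: r for r in reader}
```

`rows['www/BACKEND']['scur']` is the string `'3'`, not the integer 3 — conversion is the caller's job.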
# ---- agent360/plugins/cloudlinux.py ----
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import plugins
import json


class Plugin(plugins.BasePlugin):
    __name__ = 'cloudlinux'

    def run(self, config):
        '''
        Beta plugin to monitor cloudlinux users
        Requires sudo access to lveinfo (whereis lveinfo) add to /etc/sudoers:
        agent360 ALL=(ALL) NOPASSWD: /REPLACE/PATH/TO/lveinfo

        To enable add to /etc/agent360.ini:
        [cloudlinux]
        enabled = yes
        '''
        data = os.popen('sudo lveinfo -d --period 5m -o cpu_avg -l 20 --json').read()
        results = {}

        try:
            data = json.loads(data)
        except Exception:
            return "Could not load lveinfo data"

        if data['success'] != 1:
            return "Failed to load lveinfo"

        for line in data['data']:
            username = line['ID']
            del line['ID']
            results[username] = line

        return results

if __name__ == '__main__':
    Plugin().execute()
# ---- agent360/plugins/vms.py ----
from __future__ import print_function
import re, sys, os
import libvirt
import libxml2
import time
import plugins
import psutil

class Plugin(plugins.BasePlugin):
    __name__ = 'vms'

    def run(self, config):
        '''
        Using the libvirt API to fetch statistics from guests
        running KVM, QEMU, Xen, Virtuozzo, VMWare ESX, LXC,
        BHyve and more
        '''
        results = {}
        last_value = {}
        prev_cache = self.get_agent_cache()  # Get absolute values from previous check
        uri = os.getenv("uri", "qemu:///system")
        values = self.fetch_values(uri)

        deltas = {}
        for key, value in values.items():
            deltas[key] = {}
            for subkey, subvalue in value.items():
                if subkey in ('mem_bytes', 'soft_limit_bytes', 'min_guarantee_bytes', 'hard_limit_bytes'):
                    deltas[key][subkey] = value[subkey]
                else:
                    deltas[key][subkey] = self.absolute_to_per_second('%s_%s' % (key, subkey), float(subvalue), prev_cache)
                    last_value['%s_%s' % (key, subkey)] = float(value[subkey])
        last_value['ts'] = time.time()
        self.set_agent_cache(last_value)
        return deltas

    def canon(self, name):
        return re.sub(r"[^a-zA-Z0-9_]", "_", name)

    def get_ifaces(self, dom):
        xml = dom.XMLDesc(0)
        doc = None
        try:
            doc = libxml2.parseDoc(xml)
        except:
            return []
        ctx = doc.xpathNewContext()
        ifaces = []
        try:
            ret = ctx.xpathEval("/domain/devices/interface")
            for node in ret:
                devdst = None
                for child in node.children:
                    if child.name == "target":
                        devdst = child.prop("dev")
                if devdst is None:
                    continue
                ifaces.append(devdst)
        finally:
            if ctx is not None:
                ctx.xpathFreeContext()
            if doc is not None:
                doc.freeDoc()
        return ifaces

    def get_memtune(self, dom):
        memtune = { 'min_guarantee': 0, 'soft_limit': 0, 'hard_limit': 0 }
        xml = dom.XMLDesc(0)

        try:
            doc = libxml2.parseDoc(xml)
        except Exception:
            # callers index the result as a dict, so return the defaults on parse failure
            return memtune

        ctx = doc.xpathNewContext()
        try:
            for key in memtune:
                ret = ctx.xpathEval("/domain/memtune/%s" % key)
                try:
                    for child in ret[0].children:
                        memtune[key] = int(child.content)
                        break
                except IndexError:
                        # key not found in xml
                        pass
        finally:
            if ctx is not None:
                ctx.xpathFreeContext()
            if doc is not None:
                doc.freeDoc()
        return memtune

    def fetch_values(self, uri):
        conn = libvirt.openReadOnly(uri)
        ids = conn.listDomainsID()
        results = {}
        for id in ids:
            data = {}
            data['net_rx_bytes'] = 0
            data['net_tx_bytes'] = 0
            try:
                dom = conn.lookupByID(id)
                name = dom.name()
            except libvirt.libvirtError as err:
                print("Id: %s: %s" % (id, err), file=sys.stderr)
                continue
            if name == "Domain-0":
                continue
            ifaces = self.get_ifaces(dom)
            for iface in ifaces:
                try:
                    stats = dom.interfaceStats(iface)
                    data['net_rx_bytes'] += stats[0]
                    data['net_tx_bytes'] += stats[4]
                except:
                    print("Cannot get ifstats for '%s' on '%s'" % (iface, name), file=sys.stderr)

            cputime = float(dom.info()[4])
            cputime_percentage = 1.0e-7 * cputime
            data['cpu'] = cputime_percentage
            try:
                data['cpu_percentage'] = cputime_percentage / psutil.cpu_count()
            except Exception as e:
                pass

            maxmem, mem = dom.info()[1:3]
            mem *= 1024
            maxmem *= 1024
            data['mem_bytes'] = mem
            memtune = self.get_memtune(dom)
            data['min_guarantee_bytes'] = memtune['min_guarantee'] * 1024
            data['hard_limit_bytes'] = memtune['hard_limit'] * 1024
            data['soft_limit_bytes'] = memtune['soft_limit'] * 1024

            data['disk_rd_bytes'] = 0
            data['disk_wr_bytes'] = 0
            data['disk_wr_req'] = 0
            data['disk_rd_req'] = 0
            try:
                dom = conn.lookupByID(id)
                name = dom.name()
            except libvirt.libvirtError as err:
                print("Id: %s: %s" % (id, err), file=sys.stderr)
                continue
            if name == "Domain-0":
                continue
            disks = self.get_disks(dom)
            for disk in disks:
                try:
                    rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(disk)
                    data['disk_rd_bytes'] += rd_bytes
                    data['disk_wr_bytes'] += wr_bytes
                    data['disk_rd_req'] += rd_req
                    data['disk_wr_req'] += wr_req
                except TypeError:
                    print("Cannot get blockstats for '%s' on '%s'" % (disk, name), file=sys.stderr)

            results[self.canon(name)] = data
        return results

    def get_disks(self, dom):
        xml = dom.XMLDesc(0)
        doc = None
        try:
            doc = libxml2.parseDoc(xml)
        except:
            return []
        ctx = doc.xpathNewContext()
        disks = []
        try:
            ret = ctx.xpathEval("/domain/devices/disk")
            for node in ret:
                devdst = None
                for child in node.children:
                    if child.name == "target":
                        devdst = child.prop("dev")
                if devdst is None:
                    continue
                disks.append(devdst)
        finally:
            if ctx is not None:
                ctx.xpathFreeContext()
            if doc is not None:
                doc.freeDoc()
        return disks


if __name__ == '__main__':
    Plugin().execute()
# ---- agent360/plugins/diskinodes.py ----
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import plugins

class Plugin(plugins.BasePlugin):
    __name__ = 'diskinodes'

    def run(self, config):
        disk = {}
        try:
            df_output_lines = [s.split() for s in os.popen("df -Pli").read().splitlines()]
            del df_output_lines[0]
            for row in df_output_lines:
                if row[0] == 'tmpfs':
                    continue
                disk[row[0]] = {'total': int(row[1]), 'used': int(row[2]), 'free': int(row[3]), 'percent': row[4][:-1]}
        except:
            pass

        return disk


if __name__ == '__main__':
    Plugin().execute()
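The `df -Pli` output is parsed purely positionally: drop the header row, skip tmpfs, and index the remaining columns. The same logic on a fabricated sample (column values are illustrative):

```python
df_output = """Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda1 655360 120000 535360 19% /
tmpfs 100000 1 99999 1% /dev/shm
"""
disk = {}
for row in [s.split() for s in df_output.splitlines()][1:]:
    if row[0] == 'tmpfs':
        continue
    # columns: filesystem, inodes, used, free, use%; '%' stripped from the last
    disk[row[0]] = {'total': int(row[1]), 'used': int(row[2]),
                    'free': int(row[3]), 'percent': row[4][:-1]}
```

The tmpfs row is dropped and `percent` stays a string with the `%` sign removed, matching the plugin's output shape.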
# ---- agent360/plugins/openvpn.py ----
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import plugins
import time
from openvpn_status import parse_status


class Plugin(plugins.BasePlugin):
    __name__ = 'openvpn'

    def run(self, config):
        '''
        OpenVPN monitoring, needs access to openvpn-status.log file.
        pip install openvpn-status
        or
        pip3 install openvpn-status

        In /etc/agent360.ini to enable put:
        [openvpn]
        enabled = yes
        status_path = /etc/openvpn/openvpn-status.log

        test the plugin by running: sudo -u agent360 agent360 test OpenVPN

        If you are having permission issues try to run the agent as root user:
        https://docs.360monitoring.com/docs/run-the-monitoring-agent-as-the-root-user
        '''
        openvpn_clients = {}
        last_value = {}
        prev_cache = self.get_agent_cache()  # Get absolute values from previous check

        try:
            with open(config.get('openvpn', 'status_path')) as logfile:
                status = parse_status(logfile.read())
        except Exception as e:
            return e

        try:
            openvpn_clients['containers'] = len(status.client_list.items())
            for key, client in status.client_list.items():
                client.common_name = client.common_name.replace('.', '-')
                openvpn_clients[client.common_name] = {}
                bytes_out = int(client.bytes_sent)
                bytes_in = int(client.bytes_received)
                openvpn_clients[client.common_name]['net_out_bytes'] = self.absolute_to_per_second('%s_%s' % (client.common_name, 'net_out_bytes'), bytes_out, prev_cache)
                openvpn_clients[client.common_name]['net_in_bytes'] = self.absolute_to_per_second('%s_%s' % (client.common_name, 'net_in_bytes'), bytes_in, prev_cache)

                last_value['%s_%s' % (client.common_name, 'net_in_bytes')] = bytes_in
                last_value['%s_%s' % (client.common_name, 'net_out_bytes')] = bytes_out
        except Exception as e:
            return e

        last_value['ts'] = time.time()
        self.set_agent_cache(last_value)

        return openvpn_clients

if __name__ == '__main__':
    Plugin().execute()
# ---- agent360/plugins/nginx.py ----
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# import psutil
try:
    from urllib.parse import urlparse, urlencode
    from urllib.request import urlopen, Request
    from urllib.error import HTTPError
except ImportError:
    from urlparse import urlparse
    from urllib import urlencode
    from urllib2 import urlopen, Request, HTTPError
import time
import plugins


class Plugin(plugins.BasePlugin):
    __name__ = 'nginx'

    def run(self, config):
        '''
        Provides the following metrics (example):
            "accepts": 588462,
            "accepts_per_second": 0.0,
            "active_connections": 192,
            "handled": 588462,
            "handled_per_second": 0.0,
            "reading": 0,
            "requests": 9637106,
            "requests_per_second": 0.0,
            "waiting": 189,
            "writing": 3

        requests, accepts, handled are values since the start of nginx.
        *_per_second values calculated from them using cached values from previous call.
        '''

        try:
            results = dict()
            next_cache = dict()
            # request = urllib2.Request(config.get('nginx', 'status_page_url'))
            # raw_response = urllib2.urlopen(request)
            next_cache['ts'] = time.time()
            prev_cache = self.get_agent_cache()  # Get absolute values from previous check
            # response = raw_response.readlines()
            request = Request(config.get('nginx', 'status_page_url'))
            response = urlopen(request).read().decode('utf-8').split("\n")
            # Active connections: N
            # active_connections = response[0].split(':')[1].strip()
            active_connections = response[0].split()[-1]
            results['active_connections'] = int(active_connections)

            # server accepts handled requests
            keys = response[1].split()[1:]
            values = response[2].split()
            for key, value in zip(keys, values):
                next_cache[key] = int(value)
                results[key] = next_cache[key]  # Keep absolute values in results
                try:
                    if next_cache[key] >= prev_cache[key]:
                        results['%s_per_second' % key] = \
                            (next_cache[key] - prev_cache[key]) / \
                            (next_cache['ts'] - prev_cache['ts'])
                    else:  # Nginx was restarted after previous caching
                        results['%s_per_second' % key] = \
                            next_cache[key] / \
                            (next_cache['ts'] - prev_cache['ts'])
                except KeyError:  # No cache yet, can't calculate
                    results['%s_per_second' % key] = 0.0

            # Reading: X Writing: Y Waiting: Z
            keys = response[3].split()[0::2]
            keys = [entry.strip(':').lower() for entry in keys]
            values = response[3].split()[1::2]
            for key, value in zip(keys, values):
                results[key] = int(value)

            # Cache absolute values for next check calculations
            self.set_agent_cache(next_cache)

            return results
        except Exception:
            return False


if __name__ == '__main__':
    Plugin().execute()
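The four lines of an nginx stub_status response are parsed by position: last token of line 0, zipped header/value rows for the counters, and alternating key/value tokens on line 3. The same slicing applied to a fabricated response:

```python
# fabricated stub_status response
status = """Active connections: 291
server accepts handled requests
 16630948 16630948 31070465
Reading: 6 Writing: 179 Waiting: 106
"""
lines = status.split("\n")
active_connections = int(lines[0].split()[-1])
# line 1 names the counters, line 2 carries their values
counters = dict(zip(lines[1].split()[1:], map(int, lines[2].split())))
# line 3 alternates "Key:" and value tokens
states = dict(zip([k.strip(':').lower() for k in lines[3].split()[0::2]],
                  map(int, lines[3].split()[1::2])))
```

This produces `counters['accepts']`, `counters['handled']`, `counters['requests']` and `states['reading']`/`['writing']`/`['waiting']`, the exact keys the plugin reports.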
# ---- agent360/plugins/mdstat.py ----
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import plugins
import json


class Plugin(plugins.BasePlugin):
    __name__ = 'mdstat'

    def run(self, config):
        '''
        Monitor software raid status using mdadm:
        pip install mdstat

        Requires sudo access to mdjson add to /etc/sudoers:
        agent360 ALL=(ALL) NOPASSWD: /usr/local/bin/mdjson
        '''
        data = os.popen('mdjson').read()
        results = {}

        try:
            data = json.loads(data)
        except Exception:
            return "Could not load mdstat data"

        for key, value in data['devices'].items():
            device = {}
            device['active'] = 1 if value['active'] is True else 0
            device['read_only'] = 1 if value['read_only'] is not False else 0
            device['resync'] = 1 if value['resync'] is not None else 0
            device['faulty'] = 0
            for disk, diskvalue in value['disks'].items():
                if diskvalue['faulty'] is not False:
                    device['faulty'] += 1
            results[key] = device
        return results

if __name__ == '__main__':
    Plugin().execute()
# ---- agent360/plugins/fail2ban.py ----
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import subprocess
import plugins
import json

class Plugin(plugins.BasePlugin):
    __name__ = 'fail2ban'

    def run(self, config):
        '''
        Monitor currently banned IPs; specify the fail2ban jails to monitor in /etc/agent360.ini

        Example:
        [fail2ban]
        enabled = yes
        jail = sshd

        Nota bene: agent360 requires sudo permission to access fail2ban-client
        '''

        data = {}
        jail = config.get('fail2ban', 'jail').split(',')

        for nom in jail:
            data[nom] = {'count': os.popen('sudo /bin/fail2ban-client status '+ nom +' | egrep -i "Currently banned:.*"  | egrep -o "[0-9.]+"').read().rstrip()}

        return data

if __name__ == '__main__':
    Plugin().execute()
# ---- agent360/plugins/__pycache__/megacli.cpython-39.pyc ----
# (compiled CPython 3.9 bytecode; binary content not reproduced)
# ---- agent360/plugins/__pycache__/iostat.cpython-39.pyc ----
# (compiled CPython 3.9 bytecode; binary content not reproduced)
# ---- agent360/plugins/__pycache__/phpfpm.cpython-39.pyc ----
# (compiled CPython 3.9 bytecode; binary content not reproduced; archive truncated here)
mZmZmZYn0ddlZddlZddl
Z
ddlZGdd	�d	e
j�Zed
kr�e���dS)�)�urlparse�	urlencode)�urlopen�Request)�	HTTPError)r)r)rrrNc@seZdZdZdd�ZdS)�PluginZphpfpmcCs�dd�}t�}t�}|�td��d�}|��}|D�]�}t|�}t|�}	z�|	���d�}
t	j
dkrnt�|
�}ntj|
|d�}i||d<t
�
�|d	|d<|��D] \}}
|
||d|�d
d�<q�t||dd�|d
|d<Wn.t�y}z|WYd}~Sd}~00z�|d
|d|d
|dk�r�|d
|d|d
|d|d	|d|d	|d||dd<n<|d
|d|d	|d|d	|d||dd<Wq2t�y�d||dd<Yq20q2|�|�|S)z-
        php-fpm status page metrics
        cs"dd��t�fdd�|��D��S)NcSst|t�r|�d�S|S)N�ascii)�
isinstance�unicode�encode)�x�r
�A/usr/local/lib/python3.9/site-packages/agent360/plugins/phpfpm.py�<lambda>�z7Plugin.run.<locals>.ascii_encode_dict.<locals>.<lambda>c3s|]}t�|�VqdS)N)�map)�.0�pair��ascii_encoder
r�	<genexpr>rz8Plugin.run.<locals>.ascii_encode_dict.<locals>.<genexpr>)�dict�items)�datar
rr�ascii_encode_dictsz%Plugin.run.<locals>.ascii_encode_dictZstatus_page_url�,zutf-8)�)�object_hook�poolz%s_ts� �_Z
accepted_connz%s_accepted_connNZaccepted_conn_per_secondg)r�get�__name__�splitZget_agent_cacherr�read�decode�sys�version_info�json�loads�timer�replace�int�	Exception�KeyErrorZset_agent_cache)�self�configr�resultsZ
next_cacheZmy_poolsZ
prev_cacher�requestZraw_responser�j�k�v�er
r
r�runsD

$"����
z
Plugin.runN)r"�
__module__�__qualname__r7r
r
r
rrsr�__main__)�urllib.parserr�urllib.requestrr�urllib.errorr�ImportError�urllib�urllib2r&r*Zpluginsr(Z
BasePluginrr"�executer
r
r
r�<module>s5PK�Eu\3�c�``3agent360/plugins/__pycache__/haproxy.cpython-39.pycnu�[���a

��?h��@sHddlZddlZddlZddlZGdd�dej�ZedkrDe���dS)�Nc@seZdZdZdd�ZdS)�Plugin�haproxycCsNt�}t�}z$|�dd�}|�dd�}||f}Wnd}Yn0|�dd�}d|vr^|d}tj||d�}t��|d<|��}	|jd	ur�|j�d
�}
nd�|j�Sd}d
}t	�
|
�}
t�}ttg}|
D�]h}i||dd|d<i||dd|d<z$|	d|	|dd|dd<Wn,t
�yNi|	|dd|d<Yn0|��D]�\}}|D](}z||�}Wnt�y�Yn0�qd||v�r�|||dd|d|<nx||v�rXt|�tu�rX|�|t|�|	|dd|d�||dd|d|<t|�||dd|d|<n�qXq�t��|d<|�|�|S)Nr�username�passwordF�status_page_urlz;csv)�auth�ts���
z&Could not load haproxy status page: {})ZqcurZqmaxZscurZsmaxZslimZstotZweightZqlimitZthrottleZlbtotZtrackedZrateZrate_limZrate_maxZhanafailZreq_rateZreq_rate_maxZreq_totZ	conn_rateZ
conn_rate_maxZconn_tot)�binZboutZcli_abrtZsrv_abrtZinterceptedZhrsp_1xxZhrsp_2xxZhrsp_3xxZhrsp_4xxZ
check_riseZ
check_fallZcheck_healthZ
agent_riseZ
agent_fallZhrsp_5xxZcomp_inZcomp_outZcomp_bypZcomp_rspZ
hrsp_otherZdconZdreqZdrespZereqZeconZerespZwretrZwredisZdsesz# pxname�/Zsvname)�dict�get�requests�timeZget_agent_cache�status_code�text�split�format�csv�
DictReader�str�float�KeyError�items�
ValueError�typeZabsolute_to_per_secondZset_agent_cache)�self�config�resultsZ
next_cacherr�	user_passr�requestZ
prev_cache�responseZ	non_delta�deltaZ
csv_reader�data�constructors�row�k�v�c�r*�B/usr/local/lib/python3.9/site-packages/agent360/plugins/haproxy.py�runsX

1

$

>"
z
Plugin.runN)�__name__�
__module__�__qualname__r,r*r*r*r+r	sr�__main__)rZpluginsrrZ
BasePluginrr-�executer*r*r*r+�<module>s	PK�Eu\�����6agent360/plugins/__pycache__/diskinodes.cpython-39.pycnu�[���a

��?h��@s8ddlZddlZGdd�dej�Zedkr4e���dS)�Nc@seZdZdZdd�ZdS)�PluginZ
diskinodescCs�i}zxdd�t�d�����D�}|d=|D]L}|ddkr>q,t|d�t|d�t|d�|d	dd
�d�||d<q,WnYn0|S)NcSsg|]}|���qS�)�split)�.0�srr�E/usr/local/lib/python3.9/site-packages/agent360/plugins/diskinodes.py�
<listcomp>�zPlugin.run.<locals>.<listcomp>zdf -PlirZtmpfs�������)�total�used�free�percent)�os�popen�read�
splitlines�int)�self�configZdiskZdf_output_lines�rowrrr�run	s@z
Plugin.runN)�__name__�
__module__�__qualname__rrrrrrsr�__main__)rZpluginsZ
BasePluginrr�executerrrr�<module>sPK�Eu\�YL�^^6agent360/plugins/__pycache__/wp-toolkit.cpython-39.pycnu�[���a

��?h5�@s8ddlZddlZGdd�dej�Zedkr4e���dS)�Nc@s eZdZdZdZdZdd�ZdS)�Plugin�
wp-toolkitz
WP ToolkitzPUnified plugin for gathering metrics for WP Toolkit servers on cPanel and Plesk.cCs�d}tj�d�rd}ntj�d�r&d}i}|dkr�tt�d|d����|d<tt�d|d	����|d
<tt�d|d����|d<tt�d|d
����|d<tt�d|d����|d<|SdSdS)a�
        Grabbing some basic information from your cPanel or Plesk server
        If you are using Plesk:
        add to /etc/sudoers the following line:
        agent360 ALL=(ALL) NOPASSWD: /usr/sbin/plesk

        For cPanel:
        agent360 ALL=(ALL) NOPASSWD: /usr/local/bin/wp-toolkit



        test by running:
        sudo -u agent360 agent360 test wp-toolkit
        Add to /etc/agent360.ini:
        [wp-toolkit]
        enabled = yes
        interval = 3600
        �z/var/cpanel/usersrz
/opt/pleskzplesk ext wp-toolkitzsudo -n z3 --list | grep -v "Main Domain ID" | grep . | wc -lzWordPress Websitesz  --list | grep "Working" | wc -lzWordPress Websites - Alivez% --list | grep "Outdated WP" |  wc -lzWordPress Websites - Outdatedz& --list | grep "Outdated PHP" |  wc -lz!WordPress Websites - Outdated PHPz --list | grep "Broken" | wc -lzWordPress Websites - Brokenz!Neither cPanel nor Plesk detectedN)�os�path�isdir�int�popen�read)�self�config�command�data�r�E/usr/local/lib/python3.9/site-packages/agent360/plugins/wp-toolkit.py�runsz
Plugin.runN)�__name__�
__module__�__qualname__�	__title__�__description__rrrrrrsr�__main__)rZpluginsZ
BasePluginrr�executerrrr�<module>s)PK�Eu\��r�mm1agent360/plugins/__pycache__/mysql.cpython-39.pycnu�[���a

��?h��@s@ddlZddlZddlZGdd�dej�Zedkr<e���dS)�Nc@seZdZdZdd�ZdS)�Plugin�mysqlc
Cs�|��}i}zt|�dd��|d<Wnty>d|d<Yn0z|�dd�|d<Wnd|d<Yn0z|�dd�|d<Wnd	|d<Yn0z|�dd
�|d
<Wn|�dd�|d<Yn0z|�dd
�|d<Wnd|d<Yn0tjfi|��}|��}|�d�|��}d}d}t	�}	t	�}
t
tg}|D]�\}}
|���
�}|D](}z||
�}
Wnt�yvYn0�qR||v�r�|
|	|<n<||v�r:t|
�t
u�r:|�|t|
�|�|	|<t|
�|
|<n�q:|�tjj�}|�d�|��}d}|du�rt	�}|��D]�\}}
|���
�}|dk�r<|
dk�r8dnd}
|dk�rX|
dk�rTdnd}
|D](}z||
�}
Wnt�y�Yn0�q\||v�rt|
�t
u�r|
|	|<n�q|��t��|
d<|�|
�|	S)z&
        MySQL metrics plugin
        r�porti��username�user�root�password�passwd��host�socketZunix_socketZdatabase�dbzSHOW GLOBAL STATUS;)�max_used_connections�
open_files�open_tables�qcache_free_blocks�qcache_free_memory�qcache_total_blocks�slave_open_temp_tables�threads_cached�threads_connected�threads_runningZuptime)7Zaborted_clientsZaborted_connectsZbinlog_cache_disk_useZbinlog_cache_useZbytes_receivedZ
bytes_sentZ
com_deleteZcom_delete_multiZ
com_insertZcom_insert_selectZcom_loadZcom_replaceZcom_replace_selectZ
com_selectZ
com_updateZcom_update_multi�connectionsZcreated_tmp_disk_tablesZcreated_tmp_filesZcreated_tmp_tablesZ	key_readsZkey_read_requestsZ
key_writesZkey_write_requestsrrrZ
opened_tablesrrZqcache_hitsZqcache_insertsZqcache_lowmem_prunesZqcache_not_cachedZqcache_queries_in_cacherZ	questionsZselect_full_joinZselect_full_range_joinZselect_rangeZselect_range_checkZselect_scanrZslave_retried_transactionsZslow_launch_threadsZslow_queriesZ
sort_rangeZ	sort_rowsZ	sort_scanZtable_locks_immediateZtable_locks_waitedrrZthreads_createdrzSHOW SLAVE STATUS)Zslave_io_stateZmaster_hostZseconds_behind_masterZread_master_log_posZ
relay_log_pos�slave_io_running�slave_sql_runningZ
last_errorZexec_master_log_posZrelay_log_spaceZslave_sql_running_stateZmaster_retry_countNrZYes�rr�ts)Zget_agent_cache�int�get�
ValueError�MySQLdb�connect�cursor�executeZfetchall�dict�str�float�lower�strip�typeZabsolute_to_per_secondZcursorsZ
DictCursorZfetchone�items�close�timeZset_agent_cache)�self�configZ
prev_cache�authr
r"Zquery_resultZ	non_deltaZ
delta_keys�results�data�constructors�key�value�cZquery_result_slaveZnon_delta_slave�r6�@/usr/local/lib/python3.9/site-packages/agent360/plugins/mysql.py�run
s�

:









z
Plugin.runN)�__name__�
__module__�__qualname__r8r6r6r6r7rsr�__main__)r,r ZpluginsZ
BasePluginrr9r#r6r6r6r7�<module>s'PK�Eu\tlT��4agent360/plugins/__pycache__/__init__.cpython-39.pycnu�[���a

��?h�@sdS)N�rrr�C/usr/local/lib/python3.9/site-packages/agent360/plugins/__init__.py�<module>�PK�Eu\�d�=	=	1agent360/plugins/__pycache__/nginx.cpython-39.pycnu�[���a

��?h��@s�z0ddlmZmZddlmZmZddlmZWn>eynddlmZddl	mZddl
mZmZmZYn0ddlZddlZGdd	�d	ej
�Zed
kr�e���dS)�)�urlparse�	urlencode)�urlopen�Request)�	HTTPError)r)r)rrrNc@seZdZdZdd�ZdS)�Plugin�nginxc	Cs��z�t�}t�}t��|d<|��}t|�dd��}t|����d��d�}|d��d}t	|�|d<|d	��d	d
�}|d��}	t
||	�D]�\}
}t	|�||
<||
||
<z^||
||
kr�||
||
|d|d|d|
<n ||
|d|d|d|
<Wq�t�y0d
|d|
<Yq�0q�|d��dd
d�}dd�|D�}|d��d	d
d�}	t
||	�D]\}
}t	|�||
<�qx|�|�|WSt
�y�YdS0d
S)a(
        Provides the following metrics (example):
            "accepts": 588462,
            "accepts_per_second": 0.0,
            "active_connections": 192,
            "handled": 588462,
            "handled_per_second": 0.0,
            "reading": 0,
            "requests": 9637106,
            "requests_per_second": 0.0,
            "waiting": 189,
            "writing": 3

        requests, accepts, handled are values since the start of nginx.
        *_per_second values calculated from them using cached values from previous call.
        �tsrZstatus_page_urlzutf-8�
r����active_connections�N�z
%s_per_secondg�cSsg|]}|�d����qS)�:)�strip�lower)�.0�entry�r�@/usr/local/lib/python3.9/site-packages/agent360/plugins/nginx.py�
<listcomp>H�zPlugin.run.<locals>.<listcomp>F)�dict�timeZget_agent_cacher�getr�read�decode�split�int�zip�KeyErrorZset_agent_cache�	Exception)�self�config�resultsZ
next_cacheZ
prev_cache�request�responser�keys�values�key�valuerrr�runsF����
z
Plugin.runN)�__name__�
__module__�__qualname__r,rrrrrsr�__main__)�urllib.parserr�urllib.requestrr�urllib.errorr�ImportError�urllib�urllib2rZpluginsZ
BasePluginrr-�executerrrr�<module>sEPK�Eu\(ʠ�aa4agent360/plugins/__pycache__/tcpports.cpython-39.pycnu�[���a

��?hV�@s8ddlZddlZGdd�dej�Zedkr4e���dS)�Nc@seZdZdZdd�ZdS)�PluginZtcpportsc	Cshdd�}t�}|�td��d�}t|�td��}|D].}|�d�\}}t|�}d||||�i||<q4|S)z/
        Checks if TCP ports are open.
        cSsRt�tjtj�}|�|�z|�||f�|��WdStjyLYdS0dS)N�r)�socket�AF_INET�SOCK_STREAM�
settimeout�connect�close�error)�host�port�timeout�sock�r�C/usr/local/lib/python3.9/site-packages/agent360/plugins/tcpports.py�is_port_opens
z Plugin.run.<locals>.is_port_open�
host_ports�,r
�:�	available)�dict�get�__name__�split�float�int)	�self�configr�resultsrr
�	host_portrrrrr�run
s
z
Plugin.runN)r�
__module__�__qualname__r rrrrrsr�__main__)rZpluginsZ
BasePluginrr�executerrrr�<module>sPK�Eu\#��}}3agent360/plugins/__pycache__/openvpn.cpython-39.pycnu�[���a

��?h��@sLddlZddlZddlZddlmZGdd�dej�ZedkrHe���dS)�N)�parse_statusc@seZdZdZdd�ZdS)�Plugin�openvpnc
Cs�i}i}|��}z@t|�dd���}t|���}Wd�n1sD0YWn(tyx}z|WYd}~Sd}~00z�t|j���|d<|j��D]�\}}	|	j	�
dd�|	_	i||	j	<t|	j�}
t|	j
�}|�d|	j	df|
|�||	j	d<|�d|	j	d	f||�||	j	d	<||d|	j	d	f<|
|d|	j	df<q�Wn*t�yd}z|WYd}~Sd}~00t��|d
<|�|�|S)a
        OpenVPN monitoring, needs access to openvpn-status.log file.
        pip install openvpn-status
        or
        pip3 install openvpn-status

        In /etc/agent360.ini to enable put:
        [openvpn]
        enabled = yes
        status_path = /etc/openvpn/openvpn-status.log

        test the plugin by running: sudo -u agent360 agent360 test OpenVPN

        If you are having permission issues try to run the agent as root user:
        https://docs.360monitoring.com/docs/run-the-monitoring-agent-as-the-root-user
        rZstatus_pathNZ
containers�.�-z%s_%sZ
net_out_bytesZnet_in_bytes�ts)Zget_agent_cache�open�getr�read�	Exception�lenZclient_list�itemsZcommon_name�replace�intZ
bytes_sentZbytes_receivedZabsolute_to_per_second�timeZset_agent_cache)�self�configZopenvpn_clients�
last_valueZ
prev_cache�logfile�status�e�key�clientZ	bytes_outZbytes_in�r�B/usr/local/lib/python3.9/site-packages/agent360/plugins/openvpn.py�runs0.


""
z
Plugin.runN)�__name__�
__module__�__qualname__rrrrrr	sr�__main__)	�osZpluginsrZopenvpn_statusrZ
BasePluginrr�executerrrr�<module>s2PK�Eu\%�I??4agent360/plugins/__pycache__/asterisk.cpython-39.pycnu�[���a

��?h��@s8ddlZddlZGdd�dej�Zedkr4e���dS)�Nc@seZdZdZdd�ZdS)�PluginZasteriskcCs�|�td�}tjdtjdd�}|��d�d��dd�}tjd	tjdd�}|��d�d��dd�}tjd
|tjdd�}|��d�d��dd�}|||d�}|S)NZsbcipzDsudo asterisk -rx 'core show calls' | grep 'active' | cut -f1 -d ' 'T)�stdout�shellrzutf-8�
�zRsudo asterisk -rx 'core show channels verbose' | cut -c1-15 | grep 'pstn_' | wc -lz6sudo asterisk -rx 'sip show peers' | grep '%s' | wc -l)ZcallsZ
incomingcalls�devices)�get�__name__�
subprocess�Popen�PIPE�communicate�decode�replace)�self�config�ip�p�incomingr�res�r�C/usr/local/lib/python3.9/site-packages/agent360/plugins/asterisk.py�run
sz
Plugin.runN)r	�
__module__�__qualname__rrrrrrsr�__main__)Zpluginsr
Z
BasePluginrr	�executerrrr�<module>sPK�Eu\��
���1agent360/plugins/__pycache__/janus.cpython-39.pycnu�[���a

��?h��@s8ddlZddlZGdd�dej�Zedkr4e���dS)�Nc@seZdZdZdd�ZdS)�Plugin�januscCsL|�dd�}tjd|dtjdd�}|��d�d��d	d
�}d|i}|S)Nr�adminpwz�curl -s -H "Accept: application/json" -H "Content-type: application/json" -X POST -d '{ "janus": "list_sessions", "transaction": "324", "admin_secret": "zC" }' http://localhost:7088/admin | awk 'NR>=5' | head -n -2 | wc -lT)�stdout�shellrzutf-8�
�Zjanus_sessions)�get�
subprocess�Popen�PIPE�communicate�decode�replace)�self�configr�p�res�r�@/usr/local/lib/python3.9/site-packages/agent360/plugins/janus.py�run
s
z
Plugin.runN)�__name__�
__module__�__qualname__rrrrrrsr�__main__)Zpluginsr
Z
BasePluginrr�executerrrr�<module>s
PK�Eu\��f��6agent360/plugins/__pycache__/cloudlinux.cpython-39.pycnu�[���a

��?h��@s@ddlZddlZddlZGdd�dej�Zedkr<e���dS)�Nc@seZdZdZdd�ZdS)�Plugin�
cloudlinuxcCsrt�d���}i}zt�|�}Wnty4YdS0|ddurFdSi}|dD]}|d}|d=|||<qR|S)a
        Beta plugin to monitor cloudlinux users
        Requires sudo access to lveinfo (whereis lveinfo) add to /etc/sudoers:
        agent360 ALL=(ALL) NOPASSWD: /REPLACE/PATH/TO/lveinfo

        To enable add to /etc/agent360.ini:
        [cloudlinux]
        enabled = yes
        z3sudo lveinfo -d --period 5m -o cpu_avg -l 20 --jsonzCould not load lveinfo data�success�zFailed to load lveinfo�dataZID)�os�popen�read�json�loads�	Exception)�self�configr�results�line�username�r�E/usr/local/lib/python3.9/site-packages/agent360/plugins/cloudlinux.py�runs

z
Plugin.runN)�__name__�
__module__�__qualname__rrrrrrsr�__main__)rZpluginsr
Z
BasePluginrr�executerrrr�<module>s
!PK�Eu\=ߘ�QQ2agent360/plugins/__pycache__/cpanel.cpython-39.pycnu�[���a

��?h��@sHddlZddlZddlZddlZGdd�dej�ZedkrDe���dS)�Nc@s eZdZdZdd�Zdd�ZdS)�PluginZcpanelcCs\|��}dddddd�}t�d|�s2t�dd	|�}d
d�|��D�\}}tt|�||�S)N�iii@l)�B�K�M�G�T� z([KMGT])z \1cSsg|]}|���qS�)�strip)�.0�stringr
r
�A/usr/local/lib/python3.9/site-packages/agent360/plugins/cpanel.py�
<listcomp>�z#Plugin.to_bytes.<locals>.<listcomp>)�upper�re�match�sub�split�int�float)�self�sizeZunits�number�unitr
r
r�to_bytesszPlugin.to_bytesc	Cspt�gd��}i}t�|�}|ddD]B}|�|d�|d|d|d|d|d	|d
d�||d<q(|S)
z�
        Plugin to collect cpanel user accounts
        To enable add to /etc/agent360.ini:
        [cpanel]
        enabled = yes
        )Zwhmapi1z--output=jsonprettyZ	listaccts�dataZacctZdiskused�
inodesused�	is_locked�
has_backup�outgoing_mail_hold�outgoing_mail_suspended�	suspended)Zdiskused_bytesrrr r!r"r#�user)�
subprocess�check_output�json�loadsr)r�configr�resultsZaccounts�accountr
r
r�runs
�	z
Plugin.runN)�__name__�
__module__�__qualname__rr,r
r
r
rrs	r�__main__)Zpluginsr%r'rZ
BasePluginrr-�executer
r
r
r�<module>s%PK�Eu\��?�}}3agent360/plugins/__pycache__/dirsize.cpython-39.pycnu�[���a

��?hi�@sHddlZddlZddlZddlZGdd�dej�ZedkrDe���dS)�Nc@seZdZdZdd�ZdS)�Plugin�dirsizecCsLi}|�dd��d�}|D],}dt�d�|�����dd���i||<q|S)zq
        Monitor total directory sizes, specify the directories you want to monitor in /etc/agent360.ini
        r�dirs�,�byteszdu -sbc {} | grep total�total�)�get�split�os�popen�format�read�replace�rstrip)�self�config�dataZmy_dirs�dir�r�B/usr/local/lib/python3.9/site-packages/agent360/plugins/dirsize.py�runs
*z
Plugin.runN)�__name__�
__module__�__qualname__rrrrrrsr�__main__)r�
subprocessZplugins�jsonZ
BasePluginrr�executerrrr�<module>sPK�Eu\�5ۛrr0agent360/plugins/__pycache__/temp.cpython-39.pycnu�[���a

��?h��@s@ddlZddlZddlZGdd�dej�Zedkr<e���dS)�Nc@seZdZdZdd�ZdS)�Plugin�tempcGs�i}tjdkr�zddl}WnYdS0zH|jdd�}|��}|D](}|jdkrD|j||j�dd	��	d	�<qD|WSYd
S0t
td�s�dSzt��}WnYd
S0|�
�D]6\}}	|	D](}
|
d}|
ddkr�|}|
d||<q�q�|S)z�
        expirimental plugin used to collect temperature from system sensors
        plugin can be tested by running agent360 test temp
        �win32rNzwmi module not installed.zroot\OpenHardwareMonitor)�	namespaceZTemperature�/�-z:Could not fetch temperature data from OpenHardwareMonitor.�sensors_temperatureszplatform not supportedzcan't read any temperature��)�sys�platform�wmiZWMIZSensorZ
SensorType�ValueZParent�replace�strip�hasattr�psutilr�items)�selfZunused�datar
�wZtemperature_infosZsensorZtempsZdevicer�value�type�r�?/usr/local/lib/python3.9/site-packages/agent360/plugins/temp.py�run
s8


z
Plugin.runN)�__name__�
__module__�__qualname__rrrrrrsr�__main__)ZpluginsrrZ
BasePluginrr�executerrrr�<module>s
*PK�Eu\�߷
��4agent360/plugins/__pycache__/kamailio.cpython-39.pycnu�[���a

��?h,�@s8ddlZddlZGdd�dej�Zedkr4e���dS)�Nc@seZdZdZdd�ZdS)�PluginZasteriskcGs8tjdtjdd�}|��d�d��dd�}d|i}|S)	Nz&sudo kamctl ul show | grep AOR | wc -lT)�stdout�shellrzutf-8�
�Zdevices_online)�
subprocess�Popen�PIPE�communicate�decode�replace)�selfZunused�p�r�C/usr/local/lib/python3.9/site-packages/agent360/plugins/kamailio.py�run
sz
Plugin.runN)�__name__�
__module__�__qualname__rrrrrrsr�__main__)ZpluginsrZ
BasePluginrr�executerrrr�<module>s	PK�Eu\�!X���0agent360/plugins/__pycache__/bind.cpython-39.pycnu�[���a

��?h��@s@ddlZddlZddlZGdd�dej�Zedkr<e���dS)�Nc@seZdZdZdd�ZdS)�Plugin�bindcCs4|�dd�}t�dt|�d���}t�|�}|S)a
        Monitoring bind nameserver

        You must have the bind statistics-channels configured
        and you must have jq installed for json processing.
        
        Usage for /etc/agent360.ini:
        [bind]
        enabled = yes
        port = 8053
        r�portzcurl -j http://localhost:z5/json 2>/dev/null | jq ".qtypes * .rcodes * .nsstats")�get�os�popen�str�read�json�loads)�self�configZbport�result�data�r�?/usr/local/lib/python3.9/site-packages/agent360/plugins/bind.py�run
s
z
Plugin.runN)�__name__�
__module__�__qualname__rrrrrrsr�__main__)rZpluginsr
Z
BasePluginrr�executerrrr�<module>s
PK�Eu\��|���1agent360/plugins/__pycache__/mailq.cpython-39.pycnu�[���a

��?h�@s8ddlZddlZGdd�dej�Zedkr4e���dS)�Nc@seZdZdZdd�ZdS)�PluginZmailqcGsDddl}|�d�}|��}i}t|�dkr4d|d<nt|�|d<|S)NrzDsudo /usr/bin/mailq | /usr/bin/tail -n1 | /usr/bin/gawk '{print $5}'�Z
queue_size)�os�popen�read�len�int)�selfZunusedr�stream�retval�results�r
�@/usr/local/lib/python3.9/site-packages/agent360/plugins/mailq.py�runs

z
Plugin.runN)�__name__�
__module__�__qualname__rr
r
r
rrsr�__main__)rZpluginsZ
BasePluginrr�executer
r
r
r�<module>s
PK�Eu\�o�^��4agent360/plugins/__pycache__/powerdns.cpython-39.pycnu�[���a

��?h=�@s�zddlmZmZWn"ey6ddlmZmZYn0ddlZddlZddlZddlZGdd�dej�Z	e
dkr|e	���dS)�)�urlopen�RequestNc@seZdZdZdd�ZdS)�PluginZpowerdnsc
Cs�tj|�td�dd|�td�id�}d}t�}d}t�}|��}|�td�r^t|�td��}|�td	�r�tj|||�td	�d
�}n4|�td�r�tj|||�td�d�}ntj||d
�}t	�	�|d<zt
�|���}Wnt
y�YdS0d}	d}
t�}|D]�}d|v�r�d|v�r�d|v�r�|ddk�r�|d|	v�rt|�|dt|d�|�||d<t|d�||d<n"|d|
v�r�t|d�||d<�q�t	�	�|d<|�|�|S)a�
        Experimental plugin for PowerDNS authoritative server. Might also work with PowerDNS recursor,
        but it may need extra delta_keys / absolute_keys.
        Add to /etc/agent360.ini:
        [powerdns]
        enabled=yes
        statistics_url=http://localhost:8081/api/v1/servers/localhost/statistics
        api_key=changeme
        ;ca_file=
        ;ca_path=
        ;timeout=10
        Zstatistics_urlz	X-API-Keyz%sZapi_key)�headers�
N�timeoutZca_file)r�cafileZca_path)r�capath)r�tsF)/zcorrupt-packetszdeferred-cache-insertszdeferred-cache-lookupzdeferred-packetcache-insertszdeferred-packetcache-lookupzdnsupdate-answerszdnsupdate-changeszdnsupdate-querieszdnsupdate-refusedzincoming-notificationszoverload-dropszpacketcache-hitzpacketcache-misszquery-cache-hitzquery-cache-missz
rd-querieszrecursing-answerszrecursing-questionszrecursion-unansweredzservfail-packetsZ
signatureszsys-msecztcp-answersztcp-answers-bytesztcp-queriesztcp4-answersztcp4-answers-bytesztcp4-queriesztcp6-answersztcp6-answers-bytesztcp6-queriesztimedout-packetszudp-answerszudp-answers-byteszudp-do-queriesz
udp-in-errorszudp-noport-errorszudp-querieszudp-recvbuf-errorszudp-sndbuf-errorszudp4-answerszudp4-answers-byteszudp4-querieszudp6-answerszudp6-answers-byteszudp6-queriesz	user-msec)zkey-cache-sizeZlatencyzfd-usagezmeta-cache-sizezopen-tcp-connectionszpacketcache-sizezqsize-qzquery-cache-sizezreal-memory-usagezsecurity-statuszsignature-cache-sizeZuptime�name�value�typeZ
StatisticItem)�urllib2r�get�__name__�dictZget_agent_cache�
has_option�intr�time�json�loads�read�	ExceptionZabsolute_to_per_second�floatZset_agent_cache)
�self�config�requestr�resultsZraw_responseZ
next_cacheZ
prev_cache�statsZ
delta_keysZ
absolute_keys�data�stat�r!�C/usr/local/lib/python3.9/site-packages/agent360/plugins/powerdns.py�runs@&3"
z
Plugin.runN)r�
__module__�__qualname__r#r!r!r!r"rsr�__main__)�urllib.requestrr�ImportErrorrrZpluginsrZ
BasePluginrr�executer!r!r!r"�<module>syPK�Eu\��z`��6agent360/plugins/__pycache__/diskstatus.cpython-39.pycnu�[���a

��?h��@sPddlZddlZddlZddlZddlZGdd�dej�ZedkrLe���dS)�Nc@seZdZdZdd�ZdS)�PluginZ
diskstatuscCs�i}z.tjdtjtjdd���d����}d}Wn*ty\}zd}WYd}~dSd}~00|du�r�|D�]}z�|�d�d�d	�d
}t�	d�
|�d�d������}d}	|d�d
�ddkr�d}	i||<d}
|D]p}|dd�dk�r�d}
q�|
du�rq�t�
dd|���}|�d�}t|�dkr�|d|||d���dd�<q�|	||d<Wqlt�y�}zt|�WYd}~qld}~00ql|S)a
        Monitor nvme or smart disk status.
        For NVME drives use the diskstatus-nvme plugin
        for smart status install smartmontools (apt-get/yum install smartmontools)
        This plugin requires the agent to be run under the root user.
        zsmartctl --scanT)�stdout�stderr�shellrFNz+Could not fetch smartctl status information� �/�zsmartctl -A -H {}�z: �ZPASSED�zID#z +�	Z_celsius��status)�
subprocess�Popen�PIPE�communicate�decode�
splitlines�	Exception�split�os�popen�format�read�re�sub�strip�len�lower�replace�print)�self�config�resultsZdevlistZsmartctl�e�rowZdisk_idZ
disk_statsZsmart_status�start�stats�r)�E/usr/local/lib/python3.9/site-packages/agent360/plugins/diskstatus.py�runs>&

"

" z
Plugin.runN)�__name__�
__module__�__qualname__r+r)r)r)r*r	sr�__main__)	rrZplugins�jsonrZ
BasePluginrr,�executer)r)r)r*�<module>s-PK�Eu\�+�}}7agent360/plugins/__pycache__/apt-updates.cpython-39.pycnu�[���a

��?h��@s8ddlZddlZGdd�dej�Zedkr4e���dS)�Nc@seZdZdZdd�ZdS)�Plugin�apt-updatescCs�i}tt�d����|d<tt�d����|d<z>|�dd�}tj�d�r\|dkr\d	|d
<n|dkrld|d
<Wnty�Yn0|S)a�
        ubuntu/debian updates available from apt-get
        add to /etc/sudoers the following line:
        agent360 ALL=(ALL) NOPASSWD: /usr/bin/apt-get

        test by running:
        sudo -u agent360 agent360 test apt-updates

        Add to /etc/agent360.ini:
        [apt-updates]
        enabled = yes
        interval = 3600

        Optionally check if a reboot is required:
        checkreboot = true
        z>sudo -n apt-get upgrade -s | grep Inst | grep security | wc -l�securityzAsudo -n apt-get upgrade -s | grep Inst | grep -v security | wc -l�otherr�checkrebootz/var/run/reboot-required�trueZYeszReboot RequiredZNo)�int�os�popen�read�get�path�exists�	Exception)�self�config�datar�r�F/usr/local/lib/python3.9/site-packages/agent360/plugins/apt-updates.py�run	s
z
Plugin.runN)�__name__�
__module__�__qualname__rrrrrrsr�__main__)r	ZpluginsZ
BasePluginrr�executerrrr�<module>s!PK�Eu\�Nr/agent360/plugins/__pycache__/cpu.cpython-39.pycnu�[���a

��?h��@s@ddlZddlZddlZGdd�dej�Zedkr<e���dS)�Nc@seZdZdZdd�ZdS)�Plugin�cpucGs�|��}i}t��|d<i}t��}z|dWnpty�tjdd�}d}t��|d<|D]2}|d}i||<|jD]}	t||	�|||	<qxq^t�d�Yn0tjdd�}d}|D]�}|d}i||<i||<|jD]�}	t||	�|||	<zt��|d}
WnYq�Yn0|
dk�r q�t||	�|||	}|dk�rDd}||
d|||	<|||	dk�rvd|||	<|||	dkr�d|||	<q�q�|�	|�|S)	N�tsT)Zpercpu����g�?r�d)
Zget_agent_cache�time�psutilZ	cpu_stats�KeyErrorZ	cpu_times�_fields�getattr�sleepZset_agent_cache)�selfZunusedZ
prev_cacheZ
next_cache�resultsZ
data_stats�data�
cpu_numberr�keyZ
time_deltaZcpu_time_delta�r�>/usr/local/lib/python3.9/site-packages/agent360/plugins/cpu.py�run
sR





z
Plugin.runN)�__name__�
__module__�__qualname__rrrrrrsr�__main__)r	ZpluginsrZ
BasePluginrr�executerrrr�<module>s
3PK�Eu\z`�3t
t
2agent360/plugins/__pycache__/docker.cpython-39.pycnu�[���a

��?hr
�@s@ddlZddlZddlZGdd�dej�Zedkr<e���dS)�Nc@s eZdZdZdd�Zdd�ZdS)�PluginZdockerc
Csi}i}|��}�z�dd�t�d�����D�}|D�]z}i}|d�d�|d<|d}|d}	|�|d	�|d
<|�|d�|d<|�d
|df|�|d�|�|d<|�d
|df|�|d�|�|d<|�d
|df|�|d�|�|d<|�d
|df|�|d�|�|d<|d�d�|d<|�|d	�|d
|d
f<|�|d�|d
|df<|�|d�|d
|df<|�|d�|d
|df<|�|d�|d
|df<|||<q4Wn,t�y�}
z|
j	WYd}
~
Sd}
~
00t
|�|d<t��|d<|�|�|S)z�
        Docker monitoring, needs sudo access!
        Instructions at:
        https://docs.360monitoring.com/docs/docker-plugin
        cSsg|]}|�d��qS)z / )�split)�.0�s�r�A/usr/local/lib/python3.9/site-packages/agent360/plugins/docker.py�
<listcomp>�zPlugin.run.<locals>.<listcomp>z�sudo docker stats --no-stream --no-trunc --format "{{.CPUPerc}} / {{.Name}} / {{.ID}} / {{.MemUsage}} / {{.NetIO}} / {{.BlockIO}} / {{.MemPerc}}"r�%�cpu���Zmem_usage_bytes�Zmem_total_bytesz%s_%sZnet_in_bytes�Z
net_out_bytes�Z
disk_in_bytes�Zdisk_out_bytes��	Zmem_pctN�
containers�ts)
Zget_agent_cache�os�popen�read�
splitlines�strip�computerReadableZabsolute_to_per_second�	Exception�message�len�timeZset_agent_cache)�self�configr�
last_valueZ
prev_cache�lines�row�	container�nameZcontainer_id�errr�runs:
$$$$
z
Plugin.runcCs�|dd�dkr$t|dd��dS|dd�dkrLt|dd��ddS|dd�dkrxt|dd��dddS|dd�dkr�t|dd��ddddS|dd�dkr�t|dd��dddddS|dd�d	k�rt|dd��dS|dd�d
k�r,t|dd��ddS|dd�dk�rZt|dd��dddS|dd�dk�r�t|dd��ddddS|dd�d
k�r�t|dd��dddddS|dd�dk�r�t|dd��SdS)N���ZKiBiZMiBZGiBZTiBZPiB���ZkBZMBZGBZTBZPB����B)�float)r!�valuerrrr1s, $ $zPlugin.computerReadableN)�__name__�
__module__�__qualname__r)rrrrrrs&r�__main__)rZpluginsr Z
BasePluginrr0�executerrrr�<module>s
APK�Eu\��X��
�
5agent360/plugins/__pycache__/diskusage.cpython-39.pycnu�[���a

��?h,�@sHddlZddlZddlZddlZGdd�dej�ZedkrDe���dS)�Nc@seZdZdZdd�ZdS)�Plugin�	diskusagec
Cs.i}g|d<t�d�D]�}d}gd�}|D]&}||jvsL||jvsL||jvr*d}q*|dkr\qtjdkr|d|jvs|jdkr|qzDt�|j�}i}||d<|j	D]}	t
||	�||	<q�|d�|�WqYq0qz|�d	d
�}
Wnd}
Yn0t
|d�dk�s|
d
k�r�z�g|d<dd�t�d�����D�}|d=|D]r}|ddk�rT�q>|d�|d|dddgt|d�dt|d�dt|d�d|ddd�d���q>WnYn0z|�d	d�}
Wnd}
Yn0|
d
k�r�z�dd�t�d�����D�}|D]�}i}|d|d<t|ddd��|d<t|ddd��|d <t|d|d �|d!<|d!t|d�d"|d#<|d�|d|dddg|d|d!|d |d#d���qWn,t�y�}z|jWYd}~Sd}~00z|�d	d$�}Wnd}Yn0|d
k�r*z�d%d�t�d&�����D�}|D]�}i}|d|d<t|ddd��|d<t|ddd��|d <t|d|d �|d!<|d!t|d�d"|d#<|d�|d|dd$dg|d|d!|d |d#d���qDWn,t�y(}z|jWYd}~Sd}~00|S)'Nz	df-psutilFT)z/loopz/snapZsquashfszcagefs-skeleton�ntZcdrom��infor�force_df�nor�yescSsg|]}|���qS���split��.0�sr
r
�D/usr/local/lib/python3.9/site-packages/agent360/plugins/diskusage.py�
<listcomp>3�zPlugin.run.<locals>.<listcomp>zdf -PlZtmpfs��i������)r�total�used�free�percentZzfscSsg|]}|�d��qS�z, rr
r
r
rrCrzzfs list -Hp -t volumeZvg_nameZvg_size�Zvg_freeZvg_used�dZ
vg_percentageZlvmcSsg|]}|�d��qSrrr
r
r
rr[rz6sudo vgs --all --units b --noheadings --separator ', ')�psutilZdisk_partitionsZdeviceZ
mountpointZfstype�os�name�opts�
disk_usage�_fields�getattr�append�get�len�popen�read�
splitlines�int�float�	Exception�message)�self�configZdisk�partZ
valid_partZignored_partitions�ignore�usageZdiskdata�keyrZdf_output_lines�rowZ	zfs_stats�lines�v�eZ	lvm_statsr
r
r�run
s�


b

B

Bz
Plugin.runN)�__name__�
__module__�__qualname__r;r
r
r
rr	sr�__main__)r!r Zplugins�jsonZ
BasePluginrr<�executer
r
r
r�<module>sbPK�Eu\ �H�2agent360/plugins/__pycache__/memory.cpython-39.pycnu�[���a

��?h��@s@ddlZddlZddlZGdd�dej�Zedkr<e���dS)�Nc@seZdZdZdd�ZdS)�Plugin�memorycGs�i}d|d<t��}|jD]}t||�||<q|ddksF|ddkr�tjdkr�ttt�d��	�d�
dd�d�
��\}}}}}	}
}d||	|
d||d	<||d<|	|d<|
|d
<||d<||d<||d
<|S)Nr�buffers�	available�ntz
free -b -w��:�d�percent�cached�total�usedZshared)�psutilZvirtual_memory�_fields�getattr�os�name�map�int�popen�	readlines�split)�selfZunusedrZmemrZtot_mZused_mZfree_mZsha_mZbuf_mZcac_mZava_m�r�A/usr/local/lib/python3.9/site-packages/agent360/plugins/memory.py�run
s
"6z
Plugin.runN)�__name__�
__module__�__qualname__rrrrrrsr�__main__)rZpluginsrZ
BasePluginrr�executerrrr�<module>s
PK�Eu\_���,,3agent360/plugins/__pycache__/sleeper.cpython-39.pycnu�[���a

��?h��@s8ddlZddlZGdd�dej�Zedkr4e���dS)�Nc@seZdZdZdd�ZdS)�PluginZsleepercGst�d�dS)Ni�Q)�time�sleep)�selfZunused�r�B/usr/local/lib/python3.9/site-packages/agent360/plugins/sleeper.py�run
sz
Plugin.runN)�__name__�
__module__�__qualname__rrrrrrsr�__main__)rZpluginsZ
BasePluginrr	rrrrr�<module>sPK�Eu\�����/agent360/plugins/__pycache__/vms.cpython-39.pycnu�[���a

��?h��@stddlmZddlZddlZddlZddlZddlZddlZddlZddl	Z	Gdd�dej
�Zedkrpe��
�dS)�)�print_functionNc@s@eZdZdZdd�Zdd�Zdd�Zdd	�Zd
d�Zdd
�ZdS)�PluginZvmscCs�i}i}|��}t�dd�}|�|�}i}|��D]�\}}	i||<|	��D]t\}
}|
dksr|
dksr|
dksr|
dkr�|	|
|||
<qJ|�d||
ft|�|�|||
<t|	|
�|d||
f<qJq2t��|d<|�|�|S)	z�
        Using the libvirt API to fetch statistics from guests
        running KVM, QEMU, Xen, Virtuozzo, VMWare ESX, LXC,
        BHyve and more
        �urizqemu:///system�	mem_bytes�soft_limit_bytes�min_guarantee_bytes�hard_limit_bytesz%s_%s�ts)	Zget_agent_cache�os�getenv�fetch_values�itemsZabsolute_to_per_second�float�timeZset_agent_cache)�self�config�results�
last_valueZ
prev_cacher�valuesZdeltas�key�value�subkeyZsubvalue�r�>/usr/local/lib/python3.9/site-packages/agent360/plugins/vms.py�runs 
 "
z
Plugin.runcCst�dd|�S)Nz
[^a-zA-Z0-9_]�_)�re�sub)r�namerrr�canon%szPlugin.canonc
Cs�|�d�}d}zt�|�}WngYS0|��}g}zp|�d�}|D]<}d}|jD]}	|	jdkrX|	�d�}qX|dkr|qJ|�|�qJW|dkr�|�	�|dkr�|�
�n"|dkr�|�	�|dkr�|�
�0|S)Nrz/domain/devices/interface�target�dev��XMLDesc�libxml2�parseDoc�xpathNewContext�	xpathEval�childrenr�prop�append�xpathFreeContext�freeDoc)
r�dom�xml�doc�ctx�ifaces�ret�node�devdst�childrrr�
get_ifaces(s6





�
zPlugin.get_ifacesc		Cs�dddd�}|�d�}zt�|�}WngYS0|��}zv|D]L}|�d|�}z&|djD]}t|j�||<qzqbWqDty�YqD0qDW|dkr�|�	�|dkr�|�
�n"|dkr�|�	�|dkr�|�
�0|S)Nr)�
min_guarantee�
soft_limit�
hard_limitz/domain/memtune/%s)r#r$r%r&r'r(�int�content�
IndexErrorr+r,)	rr-�memtuner.r/r0rr2r5rrr�get_memtuneBs2




�
zPlugin.get_memtunecCs
t�|�}|��}i}|D�]�}i}d|d<d|d<z|�|�}|��}WnHtjy�}	z.td||	ftjd�WYd}	~	qWYd}	~	n
d}	~	00|dkr�q|�	|�}
|
D]^}z6|�
|�}|d|d7<|d|d7<Wq�ttj?d||ffYq�0q�t|��d�}
d	|
}||d
<z|t
��|d<Wn&t�yf}zWYd}~n
d}~00|��dd
�\}}|d9}|d9}||d<|�|�}|dd|d<|dd|d<|dd|d<d|d<d|d<d|d<d|d<z|�|�}|��}WnJtj�yN}	z.td||	ftjd�WYd}	~	qWYd}	~	n
d}	~	00|dk�r\q|�|�}|D]�}zX|�|�\}}}}}|d|7<|d|7<|d|7<|d|7<Wn*t�y�ttj?d||ffYn0�qj|||�|�<q|S)NrZnet_rx_bytesZnet_tx_bytesz
Id: %s: %s)�filezDomain-0�z#Cannot get ifstats for '%s' on '%s'gH�����z>�cpuZcpu_percentage��irr7rr9rr8rZ
disk_rd_bytesZ
disk_wr_bytesZdisk_wr_reqZdisk_rd_reqz&Cannot get blockstats for '%s' on '%s')�libvirtZopenReadOnlyZ
listDomainsIDZ
lookupByIDrZlibvirtError�print�sys�stderrr6ZinterfaceStatsr�info�psutil�	cpu_count�	Exceptionr>�	get_disksZ
blockStats�	TypeErrorr)rr�conn�idsr�id�datar-r�errr1Ziface�statsZcputimeZcputime_percentage�eZmaxmemZmemr=�disksZdiskZrd_reqZrd_bytesZwr_reqZwr_bytesZerrsrrrr]s|


"



"

 zPlugin.fetch_valuesc
Cs�|�d�}d}zt�|�}WngYS0|��}g}zp|�d�}|D]<}d}|jD]}	|	jdkrX|	�d�}qX|dkr|qJ|�|�qJW|dkr�|�	�|dkr�|�
�n"|dkr�|�	�|dkr�|�
�0|S)Nrz/domain/devices/diskr r!r")
rr-r.r/r0rUr2r3r4r5rrrrL�s6





�
zPlugin.get_disksN)	�__name__�
__module__�__qualname__rrr6r>rrLrrrrr	sDr�__main__)�
__future__rrrFr
rDr$rZpluginsrIZ
BasePluginrrV�executerrrr�<module>s4PKFu\��|���2agent360/plugins/__pycache__/mdstat.cpython-39.pycnu�[���a

��?hl�@s@ddlZddlZddlZGdd�dej�Zedkr<e���dS)�Nc@seZdZdZdd�ZdS)�PluginZmdstatc	Cs�t�d���}i}zt�|�}Wnty4YdS0|d��D]�\}}i}|ddurdd|d<nd|d<|dd	ur�d|d<nd|d<|d
dur�d|d
<nd|d
<d|d<|d
��D]$\}}|dd	ur�|dd|d<q�|||<qB|S)z�
        Monitor software raid status using mdadm:
        pip install mdstat

        Requires sudo access to mdjson add to /etc/sudoers:
        agent360 ALL=(ALL) NOPASSWD: /usr/local/bin/mdjson
        ZmdjsonzCould not load mdstat dataZdevices�activeTr�Z	read_onlyFZresyncNZfaulty�disks)�os�popen�read�json�loads�	Exception�items)	�self�config�data�results�key�valueZdeviceZdiskZ	diskvalue�r�A/usr/local/lib/python3.9/site-packages/agent360/plugins/mdstat.py�runs.



z
Plugin.runN)�__name__�
__module__�__qualname__rrrrrrsr�__main__)rZpluginsr	Z
BasePluginrr�executerrrr�<module>s
(PKFu\�p
20agent360/plugins/__pycache__/exim.cpython-39.pycnu�[���a

��?h��@s8ddlZddlZGdd�dej�Zedkr4e���dS)�Nc@seZdZdZdd�ZdS)�PluginZeximcCsi}tt�d����|d<|S)z�
        exim mail queue monitoring, needs sudo access!
        Instructions at:
        https://docs.360monitoring.com/docs/exim-queue-size-plugin
        zsudo exim -bpcZ
queue_size)�int�os�popen�read)�self�config�data�r
�?/usr/local/lib/python3.9/site-packages/agent360/plugins/exim.py�run	sz
Plugin.runN)�__name__�
__module__�__qualname__rr
r
r
rrsr�__main__)rZpluginsZ
BasePluginrr
�executer
r
r
r�<module>s
PKFu\ꟍ0ss0agent360/plugins/__pycache__/swap.cpython-39.pycnu�[���a

��?hl�@s8ddlZddlZGdd�dej�Zedkr4e���dS)�Nc@seZdZdZdd�ZdS)�Plugin�swapcGs*i}t��}|jD]}t||�||<q|S)N)�psutilZswap_memory�_fields�getattr)�selfZunusedrZmem�name�r	�?/usr/local/lib/python3.9/site-packages/agent360/plugins/swap.py�run
s

z
Plugin.runN)�__name__�
__module__�__qualname__rr	r	r	r
rsr�__main__)rZpluginsZ
BasePluginrr�executer	r	r	r
�<module>sPKFu\(��B&&0agent360/plugins/__pycache__/bird.cpython-39.pycnu�[���a

��?hI�@s8ddlZddlZGdd�dej�Zedkr4e���dS)�Nc@seZdZdZdd�ZdS)�PluginZbirdcCsvi}tt�d����|d<tt�d����|d<tt�d����|d<tt�d����|d<tt�d	����|d
<|S)z;
        Monitor status of bgp sessions using bird
        z0sudo birdc show proto | /bin/grep -c EstablishedZestablishedz,sudo birdc show proto | /bin/grep -c Connect�connectz+sudo birdc show proto | /bin/grep -c Active�activez?sudo birdc show proto | /bin/grep -c "Connection reset by peer"Z
conn_reset_bpz9sudo birdc show proto | /bin/grep -c "Hold timer expired"Z
hold_timer)�int�os�popen�read)�self�config�data�r�?/usr/local/lib/python3.9/site-packages/agent360/plugins/bird.py�run	sz
Plugin.runN)�__name__�
__module__�__qualname__rrrrr
rsr�__main__)rZpluginsZ
BasePluginrr�executerrrr
�<module>sPK
Fu\�CPoo3agent360/plugins/__pycache__/network.cpython-39.pycnu�[���a

��?h�
�@s@ddlZddlZddlZGdd�dej�Zedkr<e���dS)�Nc@seZdZdZdd�ZdS)�Plugin�networkc	Cst�}t��|d<|��}z|�dd��d�}Wnd}Yn0i}tjdd�}|��D�]�\}}|durv||vrvqZz||Wni||<Yn0i||<t��||d<|j||d<|j	||d	<|j
||d
<|j||d<|j||d<|j
||d
<|j||d<|j||d<i||<|�d|j||�||d<|�d	|j	||�||d	<|�d
|j
||�||d
<|�d|j||�||d<|�d|j||�||d<|�d
|j
||�||d
<|�d|j||�||d<|�d|j||�||d<qZ|�|�|S)z�
        Network monitoring plugin.
        To only enable certain interfaces add below [network]:
        interfaces = eth1,eth3,...
        �tsr�
interfaces�,FT)Zpernic�
bytes_sent�
bytes_recv�packets_sent�packets_recv�errin�errout�dropin�dropout)�dict�timeZget_agent_cache�get�split�psutilZnet_io_counters�itemsrrr	r
rrr
rZabsolute_to_per_secondZset_agent_cache)	�self�config�absoluteZ
prev_cacheZenabled_interfacesZ
returndatarZ	interface�stats�r�B/usr/local/lib/python3.9/site-packages/agent360/plugins/network.py�run
sL

z
Plugin.runN)�__name__�
__module__�__qualname__rrrrrrsr�__main__)rZpluginsrZ
BasePluginrr�executerrrr�<module>s
4PKFu\]�
�
9agent360/plugins/__pycache__/plesk-cgroups.cpython-39.pycnu�[���a

��?h��@sXddlZddlZddlZddlZddlZddlZGdd�dej�ZedkrTe��	�dS)�Nc@seZdZdZdd�ZdS)�Pluginz
plesk-cgroupscGs\i}|��}t�d�}d}d}t�||�t�|d|�D�]}i||<||vr\i||<t|�|����||d<}zt�|�j	||d<Wnt
y�Yn0zLt�|d��
��(}	t|	�����||d<Wd�n1s�0YWn�t�yvzZt�|dd	d
�|�d��
��(}	t|	�����||d<Wd�n1�sP0YWnt�ypYn0Yn0�zt�|d��
���}	i||d<}
d||v�r�i||d<}|	��D]�}|jd
d�\}
}|
|
v�r�i|
|
<t��|
|
d<|
|v�ri||
<|��D]B}|�d�\}}|�|t|�||
�|
|
|<t|�||
|<�q�q�Wd�n1�sr0YW�n6t�y��z
t�|dd	d
�|�d��
���}	i||d<}
d||v�r�i||d<}|	��D]�}z|��\}
}}Wnt�yYn0|
|
v�r6i|
|
<t��|
|
d<|
|v�rHi||
<|�|t|�||
�|
|
|<t|�||
|<�q�Wd�n1�s�0YWnt�y�Yn0Yn0z�t�|d��
���}	i||d<}
d||v�r�i||d<}t��|
d<|	��D]4}|��\}}|�|t|�|�|
|<t|�||<�qWd�n1�sZ0YWq>t�yJz�t�|dd	d
�|�d��
���}	i||d<}
d||v�r�i||d<}t��|
d<|	��D]4}|��\}}|�|t|�|�|
|<t|�||<�q�Wd�n1�s$0YWnt�yDYn0Yq>0q>|�|�|S)Nz\d+z/sys/fs/cgroup/zuser.slice/user-*.slicezsystemd/�uid�usernamezmemory.currentZmemoryz
user.slicez
user-{}.slicezmemory.usage_in_byteszio.stat�)�maxsplit�ts�=Zblkiozblkio.throttle.io_service_byteszcpu.stat�cpuzcpuacct.stat)Zget_agent_cache�re�compile�glob�int�search�group�pwd�getpwuid�pw_name�KeyError�pathlib�Path�open�read�strip�FileNotFoundError�format�	readlines�split�timeZabsolute_to_per_second�
ValueErrorZset_agent_cache)�selfZunusedZ
accounting�cacheZuid_reZsysfs_prefixZsysfs_suffixZ
user_slicer�f�a�c�lineZdevnumZmetrics�kv�k�v�r(�H/usr/local/lib/python3.9/site-packages/agent360/plugins/plesk-cgroups.py�runs�
����,��.

�>������

�84�4
z
Plugin.runN)�__name__�
__module__�__qualname__r*r(r(r(r)r
sr�__main__)
rrrr
rZpluginsZ
BasePluginrr+�executer(r(r(r)�<module>sPKFu\8�c��9agent360/plugins/__pycache__/elasticsearch.cpython-39.pycnu�[���a

��?h��@s�z0ddlmZmZddlmZmZddlmZWn>eynddlmZddl	mZddl
mZmZmZYn0ddlZddlZddl
Z
ddlZGdd	�d	ej�Zed
kr�e���dS)�)�urlparse�	urlencode)�urlopen�Request)�	HTTPError)r)r)rrrNc@seZdZdZdd�ZdS)�Plugin�
elasticsearchc
s4dd�}t�}t�}t�|�dd��}t�|�}t��|d<|��}d�fdd	�	�z"�tj|�	�|d
�dd�}Wnt
y�Yd
S0d}	i}
ttg}|�
�D]v\}}
|����}|D]$}z||
�}
Wq�ty�Yq�0q�||	vr�t|
�tur�|�|t|
�|�||<t|
�|
|<q�q�t��|
d<|�|
�|S)z�
        experimental monitoring plugin for elasticsearch
        Add to /etc/agent360.ini:
        [elasticsearch]
        enabled = yes
        status_page_url = http://127.0.0.1:9200/_stats
        cs"dd��t�fdd�|��D��S)NcSst|t�r|�d�S|S)N�ascii)�
isinstance�unicode�encode)�x�r�H/usr/local/lib/python3.9/site-packages/agent360/plugins/elasticsearch.py�<lambda>�z7Plugin.run.<locals>.ascii_encode_dict.<locals>.<lambda>c3s|]}t�|�VqdS)N)�map)�.0�pair��ascii_encoderr�	<genexpr>rz8Plugin.run.<locals>.ascii_encode_dict.<locals>.<genexpr>)�dict�items)�datarrr�ascii_encode_dictsz%Plugin.run.<locals>.ascii_encode_dictrZstatus_page_url�ts��_csfg}|��D]P\}}|r$|||n|}t|tj�rN|��|||d����q|�||f�qt|�S)N)�sep)rr
�collections�MutableMapping�extend�appendr)�dZ
parent_keyrr�k�vZnew_key��flattenrrr('szPlugin.run.<locals>.flatten)�object_hookZ_all�totalF)/Zget_time_in_millis�indexing_index_time_in_millisZflush_total_time_in_millisZindexing_delete_time_in_millisr+Z indexing_throttle_time_in_millisZ#merges_total_stopped_time_in_millisZ%merges_total_throttled_time_in_millisZmerges_total_time_in_millisZ recovery_throttle_time_in_millisZrefresh_total_time_in_millisZsearch_fetch_time_in_millisZsearch_query_time_in_millisZsearch_scroll_time_in_millisZsearch_suggest_time_in_millisZwarmer_total_time_in_millisZ
docs_countZdocs_deletedZflush_totalZget_exists_totalZget_missing_totalZ	get_totalZindexing_delete_totalZindexing_index_totalZindexing_noop_update_totalZmerges_totalZmerges_total_docsZ#merges_total_auto_throttle_in_bytesZquery_cache_cache_countZquery_cache_cache_sizeZquery_cache_evictionsZquery_cache_hit_countZquery_cache_miss_countZquery_cache_total_countZ
refresh_totalZrequest_cache_hit_countZrequest_cache_miss_countZsearch_fetch_totalZsearch_open_contextsZsearch_query_totalZsearch_scroll_totalZsearch_suggest_totalZsegments_countZ%segments_max_unsafe_auto_id_timestampZwarmer_totalZget_exists_time_in_millisZget_missing_time_in_millis)rr)r�urllib2r�getr�timeZget_agent_cache�json�loads�read�	Exception�str�floatr�lower�strip�
ValueError�typeZabsolute_to_per_secondZset_agent_cache)�self�configr�resultsZ
next_cache�requestZraw_responseZ
prev_cache�jZ
delta_keysr�constructors�key�value�crr'r�runs:	
	"2
z
Plugin.runN)�__name__�
__module__�__qualname__rBrrrrrsr�__main__)�urllib.parserr�urllib.requestrr�urllib.errorr�ImportError�urllibr,r.Zpluginsr/r Z
BasePluginrrC�executerrrr�<module>smPKFu\=B��BB7agent360/plugins/__pycache__/yum-updates.cpython-39.pycnu�[���a

��?h*�@s8ddlZddlZGdd�dej�Zedkr4e���dS)�Nc@seZdZdZdd�ZdS)�Pluginzyum-updatescCs4i}tt�d����|d<tt�d����|d<|S)aT
        updates for RHEL-based OS available from yum
        add to /etc/sudoers the following line:
        agent360 ALL=(ALL) NOPASSWD: /usr/bin/yum

        test by running:
        sudo -u agent360 agent360 test yum-updates

        Add to /etc/agent360.ini:
        [yum-updates]
        enabled = yes
        interval = 3600
        z:yum -q list updates --security | grep -v Available | wc -l�securityz/yum -q list updates | grep -v Available | wc -l�all)�int�os�popen�read)�self�config�data�r�F/usr/local/lib/python3.9/site-packages/agent360/plugins/yum-updates.py�run	sz
Plugin.runN)�__name__�
__module__�__qualname__rrrrr
rsr�__main__)rZpluginsZ
BasePluginrr�executerrrr
�<module>sPKFu\�7z���5agent360/plugins/__pycache__/minecraft.cpython-39.pycnu�[���a

��?hj	�@sHddlZddlZddlZddlZGdd�dej�ZedkrDe���dS)�Nc@s8eZdZdZdd�Zdd�Zdd�Zdd	�Zd
d�ZdS)
�Plugin�	minecraftcCsh|�dd��d�}i}|D�]F}z�t�tjtj�}|�d�d}t|�d�d�}|�||f�|�|�d|�|�	d��|�
|�d	��|�|�d
��|�|�|�|�|�|�}d}	t|	�|kr�|	|�
d�7}	q�|��WnYn0i}
z8t�|	�d��d
}t|d�|
d<t|d�|
d<Wnd|
d<d|
d<Yn0|
|t|�dd��<q|S)z�
        Fetch the amount of active and max players
        add to /etc/agent360.ini
        [minecraft]
        enabled=yes
        hosts=127.0.0.1:8000,127.0.0.2:8000...
        r�hosts�,�:r�z�utf8���i�playersZonline�max�.�-)�get�split�socket�AF_INET�SOCK_STREAM�int�connect�send�	pack_data�encode�	pack_port�
unpack_varint�len�recv�close�json�loads�decode�str�replace)�self�configZmy_hosts�resultZconnection_string�s�hostname�port�l�d�resultsr�r-�D/usr/local/lib/python3.9/site-packages/agent360/plugins/minecraft.py�run	s:	
.


z
Plugin.runcCsDd}td�D]2}t|�d��}||d@d|>O}|d@sq@q|S)Nr�r���)�range�ordr)r$r'r+�i�br-r-r.r<szPlugin.unpack_varintcCsDd}|d@}|dL}|t�d||dkr*dndB�7}|dkrq@q|S)Nrr1r2�Brr3��struct�pack)r$r+�or7r-r-r.�pack_varintEs zPlugin.pack_varintcCs|�t|��|S)N)r=r)r$r+r-r-r.rOszPlugin.pack_datacCst�d|�S)Nz>Hr9)r$r6r-r-r.rRszPlugin.pack_portN)�__name__�
__module__�__qualname__r/rr=rrr-r-r-r.rs3	
r�__main__)Zpluginsrr:rZ
BasePluginrr>�executer-r-r-r.�<module>sOPKFu\y����<agent360/plugins/__pycache__/cloudlinux-dbgov.cpython-39.pycnu�[���a

��?h*�@s@ddlZddlZddlZGdd�dej�Zedkr<e���dS)�Nc@seZdZdZdd�ZdS)�Pluginzcloudlinux-dbgovcCsrt�d���}i}zt�|�}Wnty4YdS0|ddurFdSi}|dD]}|d}|d=|||<qR|S)a0
        Beta plugin to monitor cloudlinux db governor users
        Requires sudo access to lveinfo (whereis lveinfo) add to /etc/sudoers:
        agent360 ALL=(ALL) NOPASSWD: /REPLACE/PATH/TO/lveinfo

        To enable add to /etc/agent360.ini:
        [cloudlinux-dbgov]
        enabled = yes
        z9sudo lveinfo --dbgov --period 5m -o cpu --limit 20 --jsonz!Could not load lveinfo dbgov data�success�zFailed to load lveinfo dbgov�data�USER)�os�popen�read�json�loads�	Exception)�self�configr�results�line�username�r�K/usr/local/lib/python3.9/site-packages/agent360/plugins/cloudlinux-dbgov.py�runs

z
Plugin.runN)�__name__�
__module__�__qualname__rrrrrrsr�__main__)rZpluginsr
Z
BasePluginrr�executerrrr�<module>s
!PKFu\�
��f	f	1agent360/plugins/__pycache__/httpd.cpython-39.pycnu�[���a

��?h�
�@s�z0ddlmZmZddlmZmZddlmZWn>eynddlmZddl	mZddl
mZmZmZYn0ddlZddlZddl
Z
Gdd	�d	ej�Zed
kr�e���dS)�)�urlparse�	urlencode)�urlopen�Request)�	HTTPError)r)r)rrrNc@seZdZdZdd�ZdS)�Plugin�httpdc
Cs�i}t�}t��|d<|��}z&t|�dd��}t|����d�}Wn&tyj}zWYd}~dSd}~00t	�
d�}i}dd	�}	|�d
�D�]�}
|
r�|�|
�}|r�|�
d�}|�
d�}
|d
ks�|dks�|dks�|dks�|dks�|dks�|dks�|dks�|dks�|dks�|dks�|dk�rq�|dk�rH|�|t|
�|�|d<t|
�|d<|dk�rt|	|
�D]}|d||d<�qZq�|
||<q�|�|�|S)z2
        Apache/httpd status page metrics
        �tsrZstatus_page_urlzutf-8NFz^([A-Za-z ]+):\s+(.+)$cSs�g}|�d|�d�f�|�d|�d�f�|�d|�d�f�|�d|�d�f�|�d	|�d
�f�|�d|�d�f�|�d
|�d�f�|�d|�d�f�|�d|�d�f�|S)N�IdleWorkers�_ZReadingWorkers�RZWritingWorkers�WZKeepaliveWorkers�KZ
DnsWorkers�DZClosingWorkers�CZLoggingWorkers�LZFinishingWorkers�GZCleanupWorkers�I)�append�count)Zsb�ret�r�@/usr/local/lib/python3.9/site-packages/agent360/plugins/httpd.py�parse_score_board%sz%Plugin.run.<locals>.parse_score_board�
��r
zServer BuiltZCurrentTimeZRestartTimeZServerUptimeZCPULoadZCPUUserZ	CPUSystemZCPUChildrenUserZCPUChildrenSystemZ	ReqPerSeczTotal AccessesZrequests_per_secondZ
Scoreboardr)�dict�timeZget_agent_cacher�getr�read�decode�	Exception�re�compile�split�match�groupZabsolute_to_per_second�intZset_agent_cache)�self�configZ
prev_cacheZ
next_cache�request�data�e�exp�resultsr�line�m�k�vZsb_kvrrr�runs^



���������



z
Plugin.runN)�__name__�
__module__�__qualname__r4rrrrrsr�__main__)�urllib.parserr�urllib.requestrr�urllib.errorr�ImportError�urllib�urllib2rZpluginsr#Z
BasePluginrr5�executerrrr�<module>s?PKFu\z=�SS4agent360/plugins/__pycache__/fail2ban.cpython-39.pycnu�[���a

��?h?�@sHddlZddlZddlZddlZGdd�dej�ZedkrDe���dS)�Nc@seZdZdZdd�ZdS)�Plugin�fail2bancCsFi}|�dd��d�}|D]&}dt�d|d�����i||<q|S)a#
        Monitor currently banned IP's, specify the fail2ban jail you want to monitor in /etc/agent360.ini
        
        Example:
        [fail2ban]
        enabled = yes
        jail = sshd
        
        Nota bene: agent360 requires sudo permission to access fail2ban-client 
        r�jail�,�countz!sudo /bin/fail2ban-client status z7 | egrep -i "Currently banned:.*"  | egrep -o "[0-9.]+")�get�split�os�popen�read�rstrip)�self�config�datarZnom�r�C/usr/local/lib/python3.9/site-packages/agent360/plugins/fail2ban.py�runs
$z
Plugin.runN)�__name__�
__module__�__qualname__rrrrrrsr�__main__)r	�
subprocessZplugins�jsonZ
BasePluginrr�executerrrr�<module>sPK Fu\�+|�ZZ3agent360/plugins/__pycache__/dovecot.cpython-39.pycnu�[���a

��?h�@sHddlZddlZddlZddlZGdd�dej�ZedkrDe���dS)�Nc@seZdZdZdd�ZdS)�PluginZdovecotcCs�i}t�d���}t�d���}t�d���}d}d}|�d�D]d}t�d|�r@t�d|tj�}	t�d|tj�}
|	d	ur�|t|	�d
��}|
d	ur@|t|
�d
��}q@||d<||d<|�	�|�	�d
�|d<|S)z�
        Returns active dovecot IMAP and POP3 session and the current version.
        Sudo permission to acces doveadm and dovecot commands are required.

        Exampel config for /etc/agent360.ini:
        [dovecot]
        enabled = yes
        zsudo doveadm whozsudo dovecot --versionzsudo dovecot --hostdomainr�
z.*(imap|pop3).*z +([0-9]+) +imap +z +([0-9]+) +pop3 +N��imapZpop3)�versionZ
hostdomain�meta)
�os�popen�read�split�re�search�
IGNORECASE�int�group�strip)�self�config�data�outputZoutput2Zoutput3ZimapsumZpop3sum�rowZimaprZpopr�r�B/usr/local/lib/python3.9/site-packages/agent360/plugins/dovecot.py�runs 	z
Plugin.runN)�__name__�
__module__�__qualname__rrrrrrsr�__main__)r�
subprocessZpluginsrZ
BasePluginrr�executerrrr�<module>s"PK#Fu\^e�&		0agent360/plugins/__pycache__/ping.cpython-39.pycnu�[���a

��?h]�@snddlZddlmZmZmZddlZddlZdd�Zd
dd�Zdd	�Z	Gd
d�dej
�Zedkrje��
�dS)�N)�Popen�PIPE�CalledProcessErrorcCs|�|�}|sdS|��SdS)NF)�search�groups)Zping_output�regex�match�r	�?/usr/local/lib/python3.9/site-packages/agent360/plugins/ping.py�_get_match_groups	s
rTcCszd}d}z t|��td�}|��d}Wnty:Yn0|rZ|durT|�d�}q^|}ng}|rn|�d�}ng}||fS)N�)�stdoutrTz\n�
)r�splitr�communicate�	Exception)�Command�newlinesZOutput�Error�procZStdoutZStderrr	r	r
�system_commands rcCs�tj�d�stj�d�rnttd|d�d�}z$t�d�}t||�\}}}}|}Wntyhd}Yn0�ntjdkr�ttd	|d�d�}t�d�}t||�}|dur�d}n|\}}}}|}n�tjd
k�r�d}z`t	ddd
|gt
t
d�}|��\}	}
|	�r,ztt�
d|	�d�}Wnt�y(Yn0nd}Wnt�yFYn0|dk�r�zt�
d|	�}|d��}Wnt�y�Yn0nd}||d�S)N�linuxZfreebsdzping -W 5 -c 1 Frz'(\d+.\d+)/(\d+.\d+)/(\d+.\d+)/(\d+.\d+)����darwinz
ping -c 1 �win32�pingz-nz1 )r
�stderrzAverage = (\d+)s: + .+ = [0-9]{1,9}ms, .+ = [0-9]{1,9}ms, .+ = (\d+){1,9}ms)�avgping�host)�sys�platform�
startswith�strr�re�compilerrrrr�int�findallr�decode)�hostname�responseZmatcherZminpingrZmaxpingZjitter�matchedr�out�errorZ
rxresponser	r	r
�collect_ping)sJ




r-c@seZdZdZdd�ZdS)�PluginrcCsBi}|�dd��d�}g|d<|D]}|d�t|��q"|dS)Nr�hosts�,)�getr�appendr-)�self�config�dataZmy_hostsrr	r	r
�run_sz
Plugin.runN)�__name__�
__module__�__qualname__r6r	r	r	r
r.\sr.�__main__)T)r#�
subprocessrrrrZpluginsrrr-Z
BasePluginr.r7�executer	r	r	r
�<module>s
3PK%Fu\ź[J��3agent360/plugins/__pycache__/postfix.cpython-39.pycnu�[���a

��?h��@sHddlZddlZddlZddlZGdd�dej�ZedkrDe���dS)�Nc@seZdZdZdd�ZdS)�Plugin�postfixcCs&i}|�dd�}|�dd�}|�dd�}|�dd�}t�d|d|���}|�d�D]<}t�d	|�rXt�d
|�d}	t�d|�d}
t|
�||	<qX|d
kr�ddt�d����	�i|d<|d
k�r"t�d���}d|vr�ddd�|d<n6t�d|�d�
dd�t�d|�d�
dd�d�|d<|S)ak
        Monitoring of the Postfix MTA log and optionally the Postfix version and the mailqueue
        Dependency: Pflogsumm log analyzer, sudo access

        Exampel config for /etc/agent360.ini:
        [postfix]
        enabled = yes
        log = /var/log/mail.log
        pflogsumm = /usr/sbin/pflogsumm
        version = true
        queue = true
        r�logZ	pflogsumm�version�queuezsudo z -d today --detail 0 �
z +[0-9]+ +[a-z]{1}[a-z- ]+[a-z]z[a-z]{1}[a-z- ]+[a-z]rz
\b[0-9]*\b�truezPostfix z>sudo postconf -d | grep mail_version -m 1 | egrep -o "[0-9.]+"�metazsudo mailq | tail -n 1�empty)Zmails�sizez[0-9]+ Requestz Request�z	-- [0-9]+z-- )�get�os�popen�read�split�re�search�findall�int�rstrip�replace)�self�config�dataZmaillogZpflbinZpversionZmqueue�output�row�stat�numZ	mqcommand�r�B/usr/local/lib/python3.9/site-packages/agent360/plugins/postfix.py�runs&

6z
Plugin.runN)�__name__�
__module__�__qualname__r!rrrr rsr�__main__)r�
subprocessZpluginsrZ
BasePluginrr"�executerrrr �<module>s+PK(Fu\(�؟XX3agent360/plugins/__pycache__/proftpd.cpython-39.pycnu�[���a

��?hj�@s@ddlZddlZddlZGdd�dej�Zedkr<e���dS)�Nc@seZdZdZdd�ZdS)�PluginZproftpdc
	Cs�i}t�d���}d}d}d}t�|�}|dD]l}|drD|d7}z|ddkrZ|d7}WntynYn0z|ddkr�|d7}Wq0ty�Yq00q0||d<||d<||d	<|d
d|d<dt|d
d�|d<|d
d
}	d|	i|d<|S)z2
        Current acitive ProFTPD sessions
        z/bin/ftpwho -o jsonr�connections�pid��	uploadingTZidling�idle�serverZserver_typezPID Z
started_ms�msZuptime)�os�popen�read�json�loads�	Exception�str)
�self�config�data�resultZcntrr�rawdata�itemZupdt�r�B/usr/local/lib/python3.9/site-packages/agent360/plugins/proftpd.py�run
s0
z
Plugin.runN)�__name__�
__module__�__qualname__rrrrrrsr�__main__)r
Zpluginsr
Z
BasePluginrr�executerrrr�<module>s
 PK+Fu\�-���4agent360/plugins/__pycache__/bitninja.cpython-39.pycnu�[���a

��?h�@sDddlZddlZddlZdZGdd�dej�Zedkr@e���dS)�Nz&/usr/sbin/bitninjacli --stats --minifyc@seZdZdZdd�ZdS)�PluginZbitninjacCspz|�td�}Wnt}Yn0t�d|�}|��}|d}|�d�}|dkrlt�||d��	dd��SdS)aj
        Collects metrics from the BitNinja Linux Agent.

        For this plugin to work, at least BitNinja version 2.38.8 is required, and
        the following configurations must be applied to it:

        /etc/bitninja/System/config.ini:
        ```ini
        [statistics]
        enableIntegration = 1
        ```

        This will allow agent360 to access the statistics.

        Then after a BitNinja restart, the plugin can be tested by running:
        sudo -u agent360 agent360 test bitninja

        Add to /etc/agent360.ini:
        ```ini
        [bitninja]
        enabled = true
        ```
        ZstatCommandzsudo r�{N�
�)
�get�__name__�DEFAULT_STAT_COMMAND�os�popen�	readlines�find�json�loads�replace)�self�config�command�stream�resultZ	firstLineZ
startIndex�r�C/usr/local/lib/python3.9/site-packages/agent360/plugins/bitninja.py�runs

z
Plugin.runN)r�
__module__�__qualname__rrrrrrsr�__main__)Zpluginsr	r
rZ
BasePluginrr�executerrrr�<module>s)PK-Fu\Efo��3agent360/plugins/__pycache__/mongodb.cpython-39.pycnu�[���a

��?h��@sDddlZddlZddlmZGdd�dej�Zedkr@e���dS)�N)�MongoClientc@seZdZdZdd�ZdS)�Plugin�mongodbc
Cs�t|�dd��}|j}|�d�}|��}i}i}z4|dddurFdnd|d	<t|dd
�|d<WnYn0z�|dd
|d<|dd|d<|dd|d<|dd|d<|dd|d<|dd|d<|dd|d<|dd|d<|dd|d<WnYn0|dd |d!<|dd"|d#<|dd$|d%<|d&d'|d(<|d&d)|d*<|d&d+|d,<|d&d-|d.<|d&d/|d0<|d&d1|d2<|d3d4d5|d6<|d3d4d7|d8<|d3d9d5|d:<|d3d9d7|d;<|d3d<d5|d=<|d3d<d7|d><|d?d@dA|dB<|d?d@dC|dD<|d?d@dE|dF<|d?dGdA|dH<|d?dGdC|dI<|d?dGdE|dJ<|dKdL|dM<|dKdN|dO<|dKdP|dQ<|dKdR|dS<|dKdT|dU<zZ|dV��D]H\}}	|	��D]4\}
}|��D] \}}
|
|dW�|��|
|�<�q�q�q�WnYn0zd|dXd'|dY<|dXd)|dZ<|dXd+|d[<|dXd-|d\<|dXd/|d]<|dXd1|d^<Wnt	�y�Yn0|��D]\}}	|�
||	|�||<�q�z@|d6|d8|d_<|d=|d>|d`<|d:|d;|da<WnYn0|}t��|db<|�|�|dcdd|de<|dcdf|dg<|dcdh|di<|dcdjdu�r�dnd|dk<|S)lz$
        Mongodb monitoring
        rZconnection_stringZserverStatus�replZismasterFr�Z	isprimary�hosts�membersZtransactionsZretriedCommandsCountz!transactions-retriedCommandsCountZretriedStatementsCountz#transactions-retriedStatementsCountZ transactionsCollectionWriteCountz-transactions-transactionsCollectionWriteCountZtotalAbortedztransactions-totalAbortedZtotalCommittedztransactions-totalCommittedZtotalStartedztransactions-totalStartedZ
currentActiveztransactions-currentActiveZcurrentInactiveztransactions-currentInactiveZcurrentOpenztransactions-currentOpen�connectionsZtotalCreatedzconnections.totalCreated�	availablezconnections.available�currentzconnections.currentZ
opcounters�commandzopcounters.command�deletezopcounters.deleteZgetmorezopcounters.getmore�insertzopcounters.insert�queryzopcounters.query�updatezopcounters.updateZopLatencies�commandsZlatencyzopLatencies.commands.latency�opszopLatencies.commands.opsZreadszopLatencies.reads.latencyzopLatencies.reads.opsZwriteszopLatencies.writes.latencyzopLatencies.writes.opsZ
globalLockZcurrentQueue�totalzglobalLock.currentQueue.totalZreaderszglobalLock.currentQueue.readersZwriterszglobalLock.currentQueue.writersZ
activeClientszglobalLock.activeClients.totalz globalLock.activeClients.readersz globalLock.activeClients.writersZasserts�msgzasserts.msgZregularzasserts.regularZ	rolloverszasserts.rollovers�userzasserts.user�warningzasserts.warning�lockszlocks-{}-{}-{}ZopcountersReplzopcountersRepl.commandzopcountersRepl.deletezopcountersRepl.getmorezopcountersRepl.insertzopcountersRepl.queryzopcountersRepl.updatezopLatencies.commandszopLatencies.writeszopLatencies.reads�tsZmemZresidentzmem.resident�bitszmem.bitsZvirtualzmem.virtual�	supportedz
mem.supported)
r�getZadminrZget_agent_cache�len�items�format�lower�KeyErrorZabsolute_to_per_second�timeZset_agent_cache)�self�config�client�db�
statisticsZ
prev_cache�data�results�key�val�key2Zval2Zkey3Zval3Z
next_cache�r,�B/usr/local/lib/python3.9/site-packages/agent360/plugins/mongodb.py�run
s�
&
z
Plugin.runN)�__name__�
__module__�__qualname__r.r,r,r,r-rsr�__main__)r!ZpluginsZpymongorZ
BasePluginrr/�executer,r,r,r-�<module>s
kPK0Fu\ZJPncc3agent360/plugins/__pycache__/loadavg.cpython-39.pycnu�[���a

��?hL�@s@ddlZddlZddlZGdd�dej�Zedkr<e���dS)�Nc@seZdZdZdd�ZdS)�PluginZloadavgcGstjdkrdSt��SdS)N�win32)�sys�platform�os�
getloadavg)�selfZunused�r	�B/usr/local/lib/python3.9/site-packages/agent360/plugins/loadavg.py�run
s
z
Plugin.runN)�__name__�
__module__�__qualname__rr	r	r	r
rsr�__main__)rZpluginsrZ
BasePluginrr�executer	r	r	r
�<module>s

PK2Fu\8�+���5agent360/plugins/__pycache__/litespeed.cpython-39.pycnu�[���a

��?h*�@sPddlZddlZddlZddlZddlZGdd�dej�ZedkrLe���dS)�Nc@seZdZdZdd�ZdS)�Plugin�	litespeedc
Cs~i}i}d}|��}t�d|�dd�|�dd�|�dd�|�dd�f���}|��D]�}t�d|�}|durV|�d	�rVd
}z||�d	�Wn t	y�i||�d	�<Yn0|�
dd��
|�d
�d��d�}	|	D]v}|�d����d�}
z*||�d	�|
d
t
|
d	�7<Wq�t	�yHt
|
d	�||�d	�|
d
<Yq�0q�qVd}|d
u�rd|��D]�\}}
z|d||d<Wnt	�y�i||<Yn0i||<|
��D]�\}}|dk�r�|�||||�||d<|dk�r�|�||||�||d<|dk�r"|�||||�||d<|dk�rF|�||||�||d<||v�r�||||<�q��qdt��|d<|�|�|S)NFz9curl -s -i -k -u %s:%s 'https://%s:%s/status?rpt=summary'r�username�password�host�portzREQ_RATE \[(.*)\]�T�
�rz, �:)Z
SSL_BPS_INZBPS_OUTZMAXSSL_CONNZ	PLAINCONNZBPS_INZSSLCONNZAVAILSSLZIDLECONNZSSL_BPS_OUTZ	AVAILCONNZMAXCONNZREQ_PROCESSING�tsZTOT_REQSZRPSZTOTAL_STATIC_HITSZ
STATIC_RPSZTOTAL_PUB_CACHE_HITSZ
PUB_CACHE_RPSZTOTAL_PRIVATE_CACHE_HITSZPRIVATE_CACHE_RPS)Zget_agent_cache�os�popen�get�read�
splitlines�re�search�group�KeyError�replace�split�strip�float�itemsZabsolute_to_per_second�timeZset_agent_cache)�self�config�result�results�dataZ
prev_cache�response�line�test�lines�keyvalZmetricsZvhost�
statistics�key�value�r)�D/usr/local/lib/python3.9/site-packages/agent360/plugins/litespeed.py�runsT: *(






z
Plugin.runN)�__name__�
__module__�__qualname__r+r)r)r)r*r	s
r�__main__)	Zpluginsr
rr�base64Z
BasePluginrr,�executer)r)r)r*�<module>sLPK5Fu\F�u[��5agent360/plugins/__pycache__/memcached.cpython-39.pycnu�[���a

��?h]�@sHddlZddlZddlZddlZGdd�dej�ZedkrDe���dS)�Nc@seZdZdZdd�ZdS)�Plugin�	memcachedc
CsJ|��}z|�dd�}Wnd}Yn0zJ|dur^tjd|�dd�|�dd�fgdd�}ntjd	|gdd�}WnYd
S0d}d}i}i}z�|��}	t|	dd
�D]n\}
}|	dd
|}|����}
|
|vr�t|�||
<q�|
|vr�t|�}|�	|
t|�|�||
<t|�||
<q�q�WnYdS0t
�
�|d<|�|�|S)z�
        pip install python-memcached
        add to /etc/agent360.ini
        [memcached]
        enabled=yes
        host=127.0.0.1
        port=11211
        r�socketFz%s:%s�host�portr)�debugzunix:/%szCould not connect to memcached)Zaccepting_conns�bytesZuptimeZtotal_itemsZtotal_connectionsZtime_in_listen_disabled_us�threadsZrusage_userZ
rusage_systemZreserved_fdsZpointer_sizeZmalloc_failsZlrutail_reflockedZlisten_disabled_numZlimit_maxbytesZhash_power_levelZ
hash_bytesZ
curr_itemsZcurr_connectionsZconnection_structuresZconn_yieldsZ	reclaimed)Z	auth_cmdsZauth_errors�
bytes_readZ
bytes_writtenZtouch_missesZ
touch_hitsZincr_misses�	incr_hitsZ
cas_missesZ
cas_badvalrZ
get_missesZget_hitsZexpired_unfetchedZ	evictionsZevicted_unfetchedZ
delete_missesZdelete_hitsZdecr_missesZ	decr_hitsZcrawler_reclaimedZcrawler_items_checkedZ	cmd_touchZcmd_getZcmd_setZ	cmd_flushZ
cmd_missesZ
cmd_badvalZcmd_hits�zCould not fetch memcached stats�ts)Zget_agent_cache�get�memcacheZClientZ	get_stats�	enumerate�lower�strip�floatZabsolute_to_per_second�timeZset_agent_cache)
�self�configZ
prev_cacherZmcZ	non_deltaZ
delta_keys�results�data�result�key�	key_value�value�r�D/usr/local/lib/python3.9/site-packages/agent360/plugins/memcached.py�run	s@	
* 
z
Plugin.runN)�__name__�
__module__�__qualname__rrrrrrsr�__main__)Zplugins�structrrZ
BasePluginrr �executerrrr�<module>siPK7Fu\q*�$
$
3agent360/plugins/__pycache__/plugins.cpython-39.pycnu�[���a

��?h�	�@sFddlZddlZddlZejdkr,ddlZnddlZGdd�d�ZdS)�N��c@sJeZdZdZdZgfdd�Zddd�Zdd	�Zd
d�Zdd
�Zdd�Z	dS)�
BasePluginz$
    Abstract class for plugins
    �cCst|t�r||_ntd��dS)Nz#Type of agent_cache have to be list)�
isinstance�list�agent_cache�	TypeError)�selfr�r�B/usr/local/lib/python3.9/site-packages/agent360/plugins/plugins.py�__init__s
zBasePlugin.__init__NcCsdS)z7
        Virtual method for running the plugin
        Nr�r
�configrrr�runszBasePlugin.runcCs\d}ttj�dkrBtjdkr(t�t�}n
t�t�}|�tjd�t	�
|�|�tjj
�dS)zL
        Execution wrapper for the plugin
        argv[1]: ini_file
        N�r)�len�sys�argv�version_info�configparser�RawConfigParser�defaults�ConfigParser�read�pickle�dumpr�stdout�bufferrrrr�execute!s

zBasePlugin.executecCs(z|jdWSty"iYS0dS)zE
        Return agent cached value for this specific plugin.
        rN)r�	Exception)r
rrr�get_agent_cache/szBasePlugin.get_agent_cachecCs2z||jd<Wnty,|j�|�Yn0dS)a�
        Set agent cache value previously passed to this plugin instance.
        To enable caching existing agent_cache list have to be passed
        to Plugin on initialization.
        Minimally it should be list().
        Agent will be able to see only changes in zero element of agent_cache, so
        do not manually override self.agent_cache, othervice cache will not be saved!

        If self.agent_cache is not a list appropriate exception will be raised.
        rN)r�
IndexError�append)r
�cacherrr�set_agent_cache8szBasePlugin.set_agent_cachecCs^zB|||kr,|||t��|d}n|t��|d}WntyXd}Yn0|S)N�tsr)�timer )r
�key�valZ
prev_cache�valuerrr�absolute_to_per_secondHs
���
z!BasePlugin.absolute_to_per_second)N)
�__name__�
__module__�__qualname__�__doc__r
rrr!r%r+rrrrrs	
	r)rr'rrrrrrrrr�<module>s

PK:Fu\�����;agent360/plugins/__pycache__/diskstatus-nvme.cpython-39.pycnu�[���a

��?hR�@sPddlZddlZddlZddlZddlZGdd�dej�ZedkrLe���dS)�Nc@seZdZdZdd�ZdS)�Pluginzdiskstatus-nvmec	Csi}z>tjdtjtjdd���d}t�|�d��}|dd}WntyZd}YdS0|du�r�|dD]�}i}t�	d	�
|d
����}zt�|�}Wnty�Yn0|��D].\}	}
|	�
d�r�t|
dd�||	<q�|
||	<q�|||d
�d
d�<qn|S)z�
        Monitor nvme disk status
        For NVME drives install nvme-cli (https://github.com/linux-nvme/nvme-cli#distro-support)
        This plugin requires the agent to be run under the root user.
        z nvme --list --output-format=jsonT)�stdout�stderr�shellrzutf-8ZDevicesFz'Could not fetch nvme status informationz&nvme smart-log {} --output-format=jsonZ
DevicePathZtemperaturegfffffq@z/dev/�)�
subprocess�Popen�PIPE�communicate�json�loads�decode�	Exception�os�popen�format�read�items�
startswith�round�replace)�self�config�results�dataZnvme�valueZdeviceZ	disk_dataZ	data_diskZdisk_keyZ
# [binary __pycache__ entries omitted: diskstatus-nvme, unbound, process,
#  system, rabbitmq, loggedin, redis_stat, cpu_freq, gpu (.cpython-39.pyc)]

# agent360/plugins/diskusage.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import psutil
import plugins
import json


class Plugin(plugins.BasePlugin):
    __name__ = 'diskusage'


    def run(self, config):
        disk = {}
        disk['df-psutil'] = []

        for part in psutil.disk_partitions(False):
            valid_part = True
            ignored_partitions = ['/loop', '/snap', 'squashfs', 'cagefs-skeleton']

            for ignore in ignored_partitions:
                if ignore in part.device or ignore in part.mountpoint or ignore in part.fstype:
                    valid_part = False
            if not valid_part:
                continue
            
            if os.name == 'nt':
                if 'cdrom' in part.opts or part.fstype == '':
                    # skip cd-rom drives with no disk in it; they may raise
                    # ENOENT, pop-up a Windows GUI error for a non-ready
                    # partition or just hang.
                    continue
            try:
                usage = psutil.disk_usage(part.mountpoint)
                diskdata = {}
                diskdata['info'] = part
                for key in usage._fields:
                    diskdata[key] = getattr(usage, key)
                disk['df-psutil'].append(diskdata)
            except:
                pass

        try:
            force_df = config.get('diskusage', 'force_df')
        except:
            force_df = 'no'

        if len(disk['df-psutil']) == 0 or force_df == 'yes':
            try:
                disk['df-psutil'] = []
                df_output_lines = [s.split() for s in os.popen("df -Pl").read().splitlines()]
                del df_output_lines[0]
                for row in df_output_lines:
                    if row[0] == 'tmpfs':
                        continue
                    disk['df-psutil'].append({'info': [row[0], row[5],'',''], 'total': int(row[1])*1024, 'used': int(row[2])*1024, 'free': int(row[3])*1024, 'percent': row[4][:-1]})
            except:
                pass

        try:
            zfs_stats = config.get('diskusage', 'zfs')
        except:
            zfs_stats = 'no'

        if zfs_stats == 'yes':
            try:
                lines = [s.split(', ') for s in os.popen("zfs list -Hp -t volume").read().splitlines()]
                for row in lines:
                    v = {}
                    v['vg_name'] = row[0]
                    v['vg_size'] = int(row[5][:-1])
                    v['vg_free'] = int(row[6][:-1])
                    v['vg_used'] = int(v['vg_size']-v['vg_free'])
                    v['vg_percentage'] = (v['vg_used']/float(v['vg_size']))*100
                    disk['df-psutil'].append({'info': [v['vg_name'], v['vg_name'], 'zfs', False], 'total': v['vg_size'], 'used': v['vg_used'], 'free': v['vg_free'], 'percent': v['vg_percentage']})
            except Exception as e:
                return str(e)

        try:
            lvm_stats = config.get('diskusage', 'lvm')
        except:
            lvm_stats = 'no'


        # For LVM volume group monitoring, requires sudo access to vgs
        # add vgs to /etc/sudoers
        # agent360 ALL=(ALL) NOPASSWD: /usr/sbin/vgs
        # set lvm = yes right under enabled = yes in /etc/agent360.ini
        if lvm_stats == 'yes':
            try:
                lines = [s.split(', ') for s in os.popen("sudo vgs --all --units b --noheadings --separator ', '").read().splitlines()]
                for row in lines:
                    v = {}
                    v['vg_name'] = row[0]
                    v['vg_size'] = int(row[5][:-1])
                    v['vg_free'] = int(row[6][:-1])
                    v['vg_used'] = int(v['vg_size']-v['vg_free'])
                    v['vg_percentage'] = (v['vg_used']/float(v['vg_size']))*100
                    disk['df-psutil'].append({'info': [v['vg_name'], v['vg_name'], 'lvm', False], 'total': v['vg_size'], 'used': v['vg_used'], 'free': v['vg_free'], 'percent': v['vg_percentage']})
            except Exception as e:
                return str(e)


        return disk


if __name__ == '__main__':
    Plugin().execute()
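The `df -Pl` fallback above splits each output line on whitespace and converts the 1024-byte block counts to bytes. A minimal standalone sketch of that parsing, run against a hypothetical `df -Pl` output (not captured from a real host):

```python
# Sketch of the "df -Pl" fallback parsing used in diskusage.py.
# The sample output below is hypothetical.
sample = """Filesystem 1024-blocks Used Available Capacity Mounted on
/dev/sda1 41152736 10238476 28800892 27% /
tmpfs 4096000 0 4096000 0% /dev/shm"""

rows = [line.split() for line in sample.splitlines()][1:]  # drop the header row
entries = []
for row in rows:
    if row[0] == 'tmpfs':              # skip tmpfs, as the plugin does
        continue
    entries.append({
        'info': [row[0], row[5], '', ''],
        'total': int(row[1]) * 1024,   # df -P reports 1024-byte blocks
        'used': int(row[2]) * 1024,
        'free': int(row[3]) * 1024,
        'percent': row[4][:-1],        # strip the trailing '%'
    })
```

The `[:-1]` mirrors the plugin's handling of the `Capacity` column, which `df -P` always renders with a percent sign.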
# agent360/plugins/megacli.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import plugins
import json


class Plugin(plugins.BasePlugin):
    __name__ = 'megacli'

    def run(self, config):
        disk = {}
        try:
            df_output_lines = os.popen("megacli -LDInfo -Lall -aALL").read().splitlines()
            data = {}
            for line in df_output_lines:

                if line.startswith('Virtual Drive'):
                    delim = line.find('(')
                    offset = line.find(':')
                    data['virtualdisk_id'] = int(line[offset+1:delim].strip())
                if line.startswith('Name'):
                    offset = line.find(':')
                    data['name'] = line[offset+1:].strip()
                elif line.startswith('RAID Level'):
                    offset = line.find(':')
                    data['raid_level'] = line[offset+1:].strip()
                elif line.startswith('Size'):
                    offset = line.find(':')
                    data['size'] = line[offset+1:].strip()
                elif line.startswith('State'):
                    offset = line.find(':')
                    data['state'] = line[offset+1:].strip()
                elif line.startswith('Strip Size'):
                    delim = line.find(' KB')
                    offset = line.find(':')
                    data['stripe_size'] = line[offset+1:delim].strip()
                elif line.startswith('Number Of Drives'):
                    offset = line.find(':')
                    data['number_of_drives'] = int(line[offset+1:].strip())
                elif line.startswith('Span Depth'):
                    offset = line.find(':')
                    data['span_depth'] = int(line[offset+1:].strip())
                elif line.startswith('Default Cache Policy'):
                    offset = line.find(':')
                    data['default_cache_policy'] = line[offset+1:].strip()
                elif line.startswith('Current Cache Policy'):
                    offset = line.find(':')
                    data['current_cache_policy'] = line[offset+1:].strip()
                elif line.startswith('Current Access Policy'):
                    offset = line.find(':')
                    data['access_policy'] = line[offset+1:].strip()
                elif line.startswith('Disk Cache Policy'):
                    offset = line.find(':')
                    data['disk_cache_policy'] = line[offset+1:].strip()
                elif line.startswith('Encryption'):
                    offset = line.find(':')
                    data['encryption'] = line[offset+1:].strip()

            if 'virtualdisk_id' in data:
                disk[data['virtualdisk_id']] = data
        except Exception as e:
            return str(e)

        return disk


if __name__ == '__main__':
    Plugin().execute()
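The field extraction above repeatedly slices each line between the first `:` and, for some fields, a closing delimiter such as `(`. A small sketch of that slicing on hypothetical `megacli -LDInfo` output lines:

```python
# Sketch of the line slicing used in megacli.py; sample lines are hypothetical.
vd_line = "Virtual Drive: 0 (Target Id: 0)"
delim = vd_line.find('(')
offset = vd_line.find(':')
virtualdisk_id = int(vd_line[offset + 1:delim].strip())  # text between ':' and '('

name_line = "Name                :VD0"
name = name_line[name_line.find(':') + 1:].strip()       # everything after ':'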
# agent360/plugins/bird.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import plugins

class Plugin(plugins.BasePlugin):
    __name__ = 'bird'

    def run(self, config):
        '''
        Monitor status of bgp sessions using bird
        '''
        data = {}
        data['established'] = int(os.popen('sudo birdc show proto | /bin/grep -c Established').read())
        data['connect'] = int(os.popen('sudo birdc show proto | /bin/grep -c Connect').read())
        data['active'] = int(os.popen('sudo birdc show proto | /bin/grep -c Active').read())
        data['conn_reset_bp'] = int(os.popen('sudo birdc show proto | /bin/grep -c "Connection reset by peer"').read())
        data['hold_timer'] = int(os.popen('sudo birdc show proto | /bin/grep -c "Hold timer expired"').read())
        return data

if __name__ == '__main__':
    Plugin().execute()
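The plugin shells out to `grep -c` five times over the same `birdc show proto` output; counting the state keywords in plain Python is equivalent. A sketch against a hypothetical `birdc` output (real output varies by bird version):

```python
# Sketch of the BGP session-state counting done in bird.py.
# The sample output is hypothetical.
output = """name     proto    table    state  since       info
bgp1     BGP      master   up     10:00:00    Established
bgp2     BGP      master   start  10:01:00    Active
bgp3     BGP      master   start  10:02:00    Connect"""

lines = output.splitlines()
data = {
    'established': sum('Established' in l for l in lines),
    'active': sum('Active' in l for l in lines),
    'connect': sum('Connect' in l for l in lines),
}
```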
# agent360/plugins/loadavg.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import plugins
import sys

class Plugin(plugins.BasePlugin):
    __name__ = 'loadavg'

    def run(self, *unused):
        if sys.platform == 'win32':
            return None
        else:
            return os.getloadavg()


if __name__ == '__main__':
    Plugin().execute()
# agent360/plugins/apt-updates.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import plugins

class Plugin(plugins.BasePlugin):
    __name__ = 'apt-updates'

    def run(self, config):
        '''
        ubuntu/debian updates available from apt-get
        add to /etc/sudoers the following line:
        agent360 ALL=(ALL) NOPASSWD: /usr/bin/apt-get

        test by running:
        sudo -u agent360 agent360 test apt-updates

        Add to /etc/agent360.ini:
        [apt-updates]
        enabled = yes
        interval = 3600

        Optionally check if a reboot is required:
        checkreboot = true
        '''
        data = {}
        data['security'] = int(os.popen('sudo -n apt-get upgrade -s | grep Inst | grep security | wc -l').read())
        data['other'] = int(os.popen('sudo -n apt-get upgrade -s | grep Inst | grep -v security | wc -l').read())
        try:
            checkreboot = config.get('apt-updates', 'checkreboot')
            if os.path.exists('/var/run/reboot-required') and checkreboot == "true":
                data['Reboot Required'] = 'Yes'
            elif checkreboot == "true":
                data['Reboot Required'] = 'No'
        except Exception:
            pass
        return data

if __name__ == '__main__':
    Plugin().execute()
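The two pipelines above count `Inst` lines from an `apt-get upgrade -s` dry run, split into security and non-security updates. A sketch of that split on a hypothetical simulation output (this uses `startswith('Inst')` where the plugin greps for `Inst` anywhere in the line):

```python
# Sketch of the update counting done in apt-updates.py; sample output is hypothetical.
simulated = """Inst libssl3 [3.0.2-0ubuntu1] (3.0.2-0ubuntu1.1 Ubuntu:22.04/jammy-security)
Inst vim [2:8.2.3995-1ubuntu2] (2:8.2.3995-1ubuntu2.1 Ubuntu:22.04/jammy-updates)
Conf libssl3 (3.0.2-0ubuntu1.1 Ubuntu:22.04/jammy-security)"""

lines = simulated.splitlines()
security = sum(1 for l in lines if l.startswith('Inst') and 'security' in l)
other = sum(1 for l in lines if l.startswith('Inst') and 'security' not in l)
```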
# agent360/plugins/loggedin.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import plugins

class Plugin(plugins.BasePlugin):
    __name__ = 'loggedin'

    def run(self, config):
        '''
        Returns the number of users currently logged in.
        '''
        data = {}
        data['sessions'] = int(os.popen('/bin/users | wc -w').read())
        return data

if __name__ == '__main__':
    Plugin().execute()
# agent360/plugins/minecraft.py
import plugins
import socket
import struct
import json

class Plugin(plugins.BasePlugin):
    __name__ = 'minecraft'

    def run(self, config):
        '''
        Fetch the amount of active and max players
        add to /etc/agent360.ini
        [minecraft]
        enabled=yes
        hosts=127.0.0.1:8000,127.0.0.2:8000...
        '''

        my_hosts = config.get('minecraft', 'hosts').split(',')
        result = {}
        for connection_string in my_hosts:
            try:
                # Connect
                s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                hostname = connection_string.split(':')[0]
                port = int(connection_string.split(':')[1])
                s.connect((hostname, port))

                # Send handshake + status request
                s.send(self.pack_data(b"\x00\x00" + self.pack_data(hostname.encode('utf8')) + self.pack_port(port) + b"\x01"))
                s.send(self.pack_data(b"\x00"))

                # Read response
                self.unpack_varint(s)     # Packet length
                self.unpack_varint(s)     # Packet ID
                l = self.unpack_varint(s) # String length

                d = b""
                while len(d) < l:
                    d += s.recv(1024)

                # Close our socket
                s.close()
            except:
                pass

            results = {}

            try:
                players = json.loads(d.decode('utf8'))['players']
                results['online'] = int(players['online'])
                results['max'] = int(players['max'])
            except:
                results['online'] = 0
                results['max'] = 0
            result[str(connection_string.replace('.', '-'))] = results

        return result


    def unpack_varint(self, s):
        d = 0
        for i in range(5):
            b = ord(s.recv(1))
            d |= (b & 0x7F) << 7*i
            if not b & 0x80:
                break
        return d

    def pack_varint(self, d):
        o = b""
        while True:
            b = d & 0x7F
            d >>= 7
            o += struct.pack("B", b | (0x80 if d > 0 else 0))
            if d == 0:
                break
        return o

    def pack_data(self, d):
        return self.pack_varint(len(d)) + d

    def pack_port(self, i):
        return struct.pack('>H', i)

if __name__ == '__main__':
    Plugin().execute()
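The handshake framing above relies on the Minecraft protocol's VarInt encoding: seven payload bits per byte, with the high bit set on every byte except the last. A self-contained sketch of the encode/decode pair (standalone functions mirroring `pack_varint` and `unpack_varint`, reading from an in-memory stream instead of a socket):

```python
import io
import struct

def pack_varint(d):
    # Little-endian base-128: emit 7 bits per byte, MSB set while more remain.
    o = b""
    while True:
        b = d & 0x7F
        d >>= 7
        o += struct.pack("B", b | (0x80 if d > 0 else 0))
        if d == 0:
            break
    return o

def unpack_varint(stream):
    # Accumulate up to 5 bytes, shifting each 7-bit group into place.
    d = 0
    for i in range(5):
        b = stream.read(1)[0]
        d |= (b & 0x7F) << (7 * i)
        if not b & 0x80:
            break
    return d

encoded = pack_varint(300)                     # two bytes: 0xAC 0x02
decoded = unpack_varint(io.BytesIO(encoded))   # round-trips back to 300
```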
# agent360/plugins/httpd.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
try:
    from urllib.parse import urlparse, urlencode
    from urllib.request import urlopen, Request
    from urllib.error import HTTPError
except ImportError:
    from urlparse import urlparse
    from urllib import urlencode
    from urllib2 import urlopen, Request, HTTPError
import time
import plugins
import re


class Plugin(plugins.BasePlugin):
    __name__ = 'httpd'

    def run(self, config):
        '''
        Apache/httpd status page metrics
        '''

        prev_cache = {}
        next_cache = dict()
        next_cache['ts'] = time.time()
        prev_cache = self.get_agent_cache()  # Get absolute values from previous check

        try:
            request = Request(config.get('httpd', 'status_page_url'))
            data = urlopen(request).read().decode('utf-8')
        except Exception as e:
            return False

        exp = re.compile(r'^([A-Za-z ]+):\s+(.+)$')
        results = {}
        def parse_score_board(sb):

            ret = []

            ret.append(('IdleWorkers', sb.count('_')))
            ret.append(('ReadingWorkers', sb.count('R')))
            ret.append(('WritingWorkers', sb.count('W')))
            ret.append(('KeepaliveWorkers', sb.count('K')))
            ret.append(('DnsWorkers', sb.count('D')))
            ret.append(('ClosingWorkers', sb.count('C')))
            ret.append(('LoggingWorkers', sb.count('L')))
            ret.append(('FinishingWorkers', sb.count('G')))
            ret.append(('CleanupWorkers', sb.count('I')))

            return ret
        for line in data.split('\n'):
            if line:
                m = exp.match(line)
                if m:
                    k = m.group(1)
                    v = m.group(2)

                    # Ignore the following values
                    if k == 'IdleWorkers' or k == 'Server Built' \
                            or k == 'CurrentTime' or k == 'RestartTime' or k == 'ServerUptime' \
                            or k == 'CPULoad' or k == 'CPUUser' or k == 'CPUSystem' \
                            or k == 'CPUChildrenUser' or k == 'CPUChildrenSystem' \
                            or k == 'ReqPerSec':
                        continue

                    if k == 'Total Accesses':
                        results['requests_per_second'] = self.absolute_to_per_second(k, int(v), prev_cache)
                        next_cache['Total Accesses'] = int(v)

                    if k == 'Scoreboard':
                        for sb_kv in parse_score_board(v):
                            results[sb_kv[0]] = sb_kv[1]
                    else:
                        results[k] = v
        self.set_agent_cache(next_cache)
        return results

if __name__ == '__main__':
    Plugin().execute()
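The status page is parsed line by line with a `key: value` regex, and the `Scoreboard` value is tallied by counting worker-state characters. A sketch of both steps on a hypothetical `mod_status` fragment:

```python
import re

# Hypothetical mod_status output; the regex mirrors the one in httpd.py.
data = "Total Accesses: 1234\nBusyWorkers: 3\nScoreboard: __RRWW_K_C"
exp = re.compile(r'^([A-Za-z ]+):\s+(.+)$')

results = {}
for line in data.split('\n'):
    m = exp.match(line)
    if m:
        results[m.group(1)] = m.group(2)

# Each scoreboard character encodes one worker slot's state.
sb = results['Scoreboard']
workers = {
    'idle': sb.count('_'),
    'reading': sb.count('R'),
    'writing': sb.count('W'),
}
```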
# agent360/plugins/mailq.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import plugins

class Plugin(plugins.BasePlugin):
    __name__ = 'mailq'
    def run(self, *unused):
        stream = os.popen("sudo /usr/bin/mailq | /usr/bin/tail -n1 | /usr/bin/gawk '{print $5}'")
        retval = stream.read()
        results = {}
        if len(retval) == 1:
            results['queue_size'] = 0
        else:
            results['queue_size'] = int(retval)
        return results

if __name__ == '__main__':
    Plugin().execute()
# agent360/plugins/elasticsearch.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
try:
    from urllib.parse import urlparse, urlencode
    from urllib.request import urlopen, Request
    from urllib.error import HTTPError
except ImportError:
    from urlparse import urlparse
    from urllib import urlencode
    from urllib2 import urlopen, Request, HTTPError
import time
import plugins
import json
import collections


class Plugin(plugins.BasePlugin):
    __name__ = 'elasticsearch'

    def run(self, config):
        '''
        experimental monitoring plugin for elasticsearch
        Add to /etc/agent360.ini:
        [elasticsearch]
        enabled = yes
        status_page_url = http://127.0.0.1:9200/_stats
        '''

        def ascii_encode_dict(data):
            # Python 2 shim: on Python 3, json.loads already returns str, so pass through.
            if str is bytes:  # Python 2 only
                ascii_encode = lambda x: x.encode('ascii') if isinstance(x, unicode) else x  # noqa: F821
                return dict(map(ascii_encode, pair) for pair in data.items())
            return data

        results = dict()
        next_cache = dict()
        request = Request(config.get('elasticsearch', 'status_page_url'))
        raw_response = urlopen(request)
        next_cache['ts'] = time.time()
        prev_cache = self.get_agent_cache()  # Get absolute values from previous check
        def flatten(d, parent_key='', sep='_'):
            items = []
            for k, v in d.items():
                new_key = parent_key + sep + k if parent_key else k
                if isinstance(v, collections.abc.MutableMapping):
                    items.extend(flatten(v, new_key, sep=sep).items())
                else:
                    items.append((new_key, v))
            return dict(items)
        try:
            j = flatten(json.loads(raw_response.read(), object_hook=ascii_encode_dict)['_all']['total'])
        except Exception:
            return False


        delta_keys = (
            'get_time_in_millis',
            'indexing_index_time_in_millis',
            'flush_total_time_in_millis',
            'indexing_delete_time_in_millis',
            'indexing_index_time_in_millis',
            'indexing_throttle_time_in_millis',
            'merges_total_stopped_time_in_millis',
            'merges_total_throttled_time_in_millis',
            'merges_total_time_in_millis',
            'recovery_throttle_time_in_millis',
            'refresh_total_time_in_millis',
            'search_fetch_time_in_millis',
            'search_query_time_in_millis',
            'search_scroll_time_in_millis',
            'search_suggest_time_in_millis',
            'warmer_total_time_in_millis',
            'docs_count',
            'docs_deleted',
            'flush_total',
            'get_exists_total',
            'get_missing_total',
            'get_total',
            'indexing_delete_total',
            'indexing_index_total',
            'indexing_noop_update_total',
            'merges_total',
            'merges_total_docs',
            'merges_total_auto_throttle_in_bytes',
            'query_cache_cache_count',
            'query_cache_cache_size',
            'query_cache_evictions',
            'query_cache_hit_count',
            'query_cache_miss_count',
            'query_cache_total_count',
            'refresh_total',
            'request_cache_hit_count',
            'request_cache_miss_count',
            'search_fetch_total',
            'search_open_contexts',
            'search_query_total',
            'search_scroll_total',
            'search_suggest_total',
            'segments_count',
            'segments_max_unsafe_auto_id_timestamp',
            'warmer_total',
            'get_exists_time_in_millis',
            'get_missing_time_in_millis'
        )

        data = {}
        constructors = [str, float]
        for key, value in j.items():
            key = key.lower().strip()
            for c in constructors:
                try:
                    value = c(value)
                except ValueError:
                    pass
            if key in delta_keys and type(value) is not str:
                j[key] = self.absolute_to_per_second(key, float(value), prev_cache)
                data[key] = float(value)
            else:
                pass

        data['ts'] = time.time()
        # Cache absolute values for next check calculations
        self.set_agent_cache(data)

        return j


if __name__ == '__main__':
    Plugin().execute()
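The `flatten` helper above turns the nested `_all.total` stats document into a single-level dict whose keys are the nested paths joined with `_`. A standalone sketch of the same recursion:

```python
from collections.abc import MutableMapping

def flatten(d, parent_key='', sep='_'):
    # Recursively walk nested dicts, joining key paths with `sep`.
    items = []
    for k, v in d.items():
        new_key = parent_key + sep + k if parent_key else k
        if isinstance(v, MutableMapping):
            items.extend(flatten(v, new_key, sep=sep).items())
        else:
            items.append((new_key, v))
    return dict(items)

nested = {'search': {'query_total': 10, 'fetch': {'time_in_millis': 5}}}
flat = flatten(nested)
```

This is how `search.query_total` in the Elasticsearch response becomes the `search_query_total` metric key.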
# agent360/plugins/bitninja.py
#!/usr/bin/env python
import plugins
import os
import json

DEFAULT_STAT_COMMAND = "/usr/sbin/bitninjacli --stats --minify"

class Plugin(plugins.BasePlugin):
    __name__ = 'bitninja'

    def run(self, config):
        '''
        Collects metrics from the BitNinja Linux Agent.

        For this plugin to work, at least BitNinja version 2.38.8 is required, and
        the following configurations must be applied to it:

        /etc/bitninja/System/config.ini:
        ```ini
        [statistics]
        enableIntegration = 1
        ```

        This will allow agent360 to access the statistics.

        Then after a BitNinja restart, the plugin can be tested by running:
        sudo -u agent360 agent360 test bitninja

        Add to /etc/agent360.ini:
        ```ini
        [bitninja]
        enabled = true
        ```
        '''
        try:
            command = config.get(__name__, "statCommand")
        except:
            command = DEFAULT_STAT_COMMAND

        stream = os.popen("sudo " + command)
        result = stream.readlines()
        firstLine = result[0]
        startIndex = firstLine.find('{')
        if startIndex >= 0:
            return json.loads(firstLine[startIndex::].replace("\n", ""))

        return None

if __name__ == '__main__':
    Plugin().execute()
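The plugin tolerates non-JSON text before the payload by discarding everything up to the first `{` on the first output line. A sketch of that extraction on a hypothetical `bitninjacli --stats --minify` line:

```python
import json

# Sketch of the JSON extraction in bitninja.py; the sample line is hypothetical.
first_line = 'OK {"mode": "log-only", "blocked": 12}\n'
start = first_line.find('{')
stats = json.loads(first_line[start:].replace("\n", "")) if start >= 0 else None
```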
# agent360/plugins/mysql.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import time
import MySQLdb
import plugins

class Plugin(plugins.BasePlugin):
    __name__ = 'mysql'

    def run(self, config):
        '''
        MySQL metrics plugin
        '''

        prev_cache = self.get_agent_cache()  # Get absolute values from previous check
        auth = {}
        try:
            auth['port'] = int(config.get('mysql', 'port'))
        except Exception:  # option missing or not an integer
            auth['port'] = 3306
        try:
            auth['user'] = config.get('mysql', 'username')
        except Exception:
            auth['user'] = 'root'
        try:
            auth['passwd'] = config.get('mysql', 'password')
        except Exception:
            auth['passwd'] = ''
        try:
            auth['host'] = config.get('mysql', 'host')
        except Exception:  # no host configured; fall back to a unix socket
            auth['unix_socket'] = config.get('mysql', 'socket')
        try:
            auth['db'] = config.get('mysql', 'database')
        except Exception:
            auth['db'] = 'mysql'

        db = MySQLdb.connect(**auth)
        cursor = db.cursor()
        cursor.execute("SHOW GLOBAL STATUS;")
        query_result = cursor.fetchall()
        non_delta = (
            'max_used_connections',
            'open_files',
            'open_tables',
            'qcache_free_blocks',
            'qcache_free_memory',
            'qcache_total_blocks',
            'slave_open_temp_tables',
            'threads_cached',
            'threads_connected',
            'threads_running',
            'uptime'
        )
        delta_keys = (
            'aborted_clients',
            'aborted_connects',
            'binlog_cache_disk_use',
            'binlog_cache_use',
            'bytes_received',
            'bytes_sent',
            'com_delete',
            'com_delete_multi',
            'com_insert',
            'com_insert_select',
            'com_load',
            'com_replace',
            'com_replace_select',
            'com_select',
            'com_update',
            'com_update_multi',
            'connections',
            'created_tmp_disk_tables',
            'created_tmp_files',
            'created_tmp_tables',
            'key_reads',
            'key_read_requests',
            'key_writes',
            'key_write_requests',
            'max_used_connections',
            'open_files',
            'open_tables',
            'opened_tables',
            'qcache_free_blocks',
            'qcache_free_memory',
            'qcache_hits',
            'qcache_inserts',
            'qcache_lowmem_prunes',
            'qcache_not_cached',
            'qcache_queries_in_cache',
            'qcache_total_blocks',
            'questions',
            'select_full_join',
            'select_full_range_join',
            'select_range',
            'select_range_check',
            'select_scan',
            'slave_open_temp_tables',
            'slave_retried_transactions',
            'slow_launch_threads',
            'slow_queries',
            'sort_range',
            'sort_rows',
            'sort_scan',
            'table_locks_immediate',
            'table_locks_waited',
            'threads_cached',
            'threads_connected',
            'threads_created',
            'threads_running'
        )

        results = dict()
        data = dict()
        constructors = [str, float]
        for key, value in query_result:
            key = key.lower().strip()
            for c in constructors:
                try:
                    value = c(value)
                except ValueError:
                    pass
            if key in non_delta:
                results[key] = value
            elif key in delta_keys and type(value) is not str:
                results[key] = self.absolute_to_per_second(key, float(value), prev_cache)
                data[key] = float(value)
            else:
                pass

        cursor = db.cursor(MySQLdb.cursors.DictCursor)
        cursor.execute('SHOW SLAVE STATUS')
        query_result_slave = cursor.fetchone()
        non_delta_slave = (
            'slave_io_state',
            'master_host',
            'seconds_behind_master',
            'read_master_log_pos',
            'relay_log_pos',
            'slave_io_running',
            'slave_sql_running',
            'last_error',
            'exec_master_log_pos',
            'relay_log_space',
            'slave_sql_running_state',
            'master_retry_count'
        )
        if query_result_slave is None:
            query_result_slave = dict()
        for key, value in query_result_slave.items():
            key = key.lower().strip()
            if key == 'slave_sql_running':
                value = 1 if value == 'Yes' else 0
            if key == 'slave_io_running':
                value = 1 if value == 'Yes' else 0

            for c in constructors:
                try:
                    value = c(value)
                except ValueError:
                    pass
            if key in non_delta_slave and type(value) is not str:
                results[key] = value
            else:
                pass

        db.close()
        data['ts'] = time.time()
        self.set_agent_cache(data)
        return results


if __name__ == '__main__':
    Plugin().execute()
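The counters in delta_keys are converted with BasePlugin.absolute_to_per_second; a hedged sketch of what such a rate conversion does (the real agent360 implementation may differ in details such as counter-reset handling):

```python
import time

def absolute_to_per_second(key, value, prev_cache, now=None):
    # Turn a cumulative counter into a per-second rate using the previous
    # sample stored in the agent cache under `key`, with its timestamp in 'ts'.
    now = time.time() if now is None else now
    prev_value = prev_cache.get(key)
    prev_ts = prev_cache.get('ts')
    if prev_value is None or prev_ts is None:
        return 0.0  # first run: no baseline yet
    elapsed = now - prev_ts
    if elapsed <= 0 or value < prev_value:
        return 0.0  # clock skew or counter reset
    return (value - prev_value) / elapsed
```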
# ===== agent360/plugins/system.py =====
#!/usr/bin/env python
# -*- coding: utf-8 -*-
try:
    import netifaces
except ImportError:
    netifaces = None
import os
import platform
from subprocess import Popen, PIPE
import sys
import time
import psutil
import plugins
try:
    import distro
except ImportError:
    distro = None

def systemCommand(Command, newlines=True):
    Output = ""
    Error = ""
    try:
        proc = Popen(Command.split(), stdout=PIPE)
        # communicate() returns bytes; decode so the string splits below work
        Output = proc.communicate()[0].decode(errors='replace')
    except Exception:
        pass

    if Output:
        if newlines is True:
            Stdout = Output.split("\n")
        else:
            Stdout = Output
    else:
        Stdout = []
    if Error:
        Stderr = Error.split("\n")
    else:
        Stderr = []

    return (Stdout, Stderr)


def linux_hardware_memory():
    block_size = 0
    try:
        with open("/sys/devices/system/memory/block_size_bytes", "r") as f:
            block_size = int(f.readline().strip(), 16)

        memory = 0
        with os.scandir("/sys/devices/system/memory/") as it:
            for entry in it:
                if not entry.name.startswith("memory"):
                    continue
                with open(entry.path + "/state", "r") as f:
                    if "online" != f.readline().strip():
                        continue
                    else:
                        memory += block_size

        return memory
    except Exception:
        return 0


def ip_addresses():
    ip_list = {}
    ip_list['v4'] = {}
    ip_list['v6'] = {}
    if netifaces is None:
        return ip_list
    for interface in netifaces.interfaces():
        link = netifaces.ifaddresses(interface)
        if netifaces.AF_INET in link:
            if interface not in ip_list['v4']:
                ip_list['v4'][interface] = []
            ip_list['v4'][interface].append(link[netifaces.AF_INET])
        if netifaces.AF_INET6 in link:
            if interface not in ip_list['v6']:
                ip_list['v6'][interface] = []
            ip_list['v6'][interface].append(link[netifaces.AF_INET6])
    return ip_list


class Plugin(plugins.BasePlugin):
    __name__ = 'system'

    def run(self, *unused):
        systeminfo = {}
        cpu = {}
        cpu['brand'] = "Unknown CPU"
        cpu['count'] = 0
        if os.path.isfile("/proc/cpuinfo"):
            with open('/proc/cpuinfo') as f:
                for line in f:
                    # Ignore the blank line separating the information between
                    # details about two processing units
                    if line.strip():
                        key = line.rstrip('\n').split(':')[0].strip()
                        if key in ("model name", "Processor"):
                            cpu['brand'] = line.rstrip('\n').split(':')[1].strip()
                        if key == "processor":
                            cpu['count'] = line.rstrip('\n').split(':')[1].strip()
        if cpu['brand'] == "Unknown CPU":
            f = os.popen('lscpu').read().split('\n')
            if f:
                for line in f:
                    # Ignore the blank line separating the information between
                    # details about two processing units
                    if line.strip():
                        if "Model name" == line.rstrip('\n').split(':')[0].strip():
                            cpu['brand'] = line.rstrip('\n').split(':')[1].strip()
                        if "Processor" == line.rstrip('\n').split(':')[0].strip():
                            cpu['brand'] = line.rstrip('\n').split(':')[1].strip()
                        if "CPU(s)" == line.rstrip('\n').split(':')[0].strip():
                            cpu['count'] = line.rstrip('\n').split(':')[1].strip()
        mem = psutil.virtual_memory().total
        if sys.platform == "linux" or sys.platform == "linux2":
            hw_mem = linux_hardware_memory()
            if hw_mem != 0:
                mem = hw_mem

            if distro is None:
                # platform.linux_distribution() was removed in Python 3.8;
                # install the 'distro' package on newer interpreters
                systeminfo['os'] = str(' '.join(platform.linux_distribution()))
            else:
                systeminfo['os'] = str(' '.join(distro.linux_distribution(full_distribution_name=True)))
        elif sys.platform == "darwin":
            systeminfo['os'] = "Mac OS %s" % platform.mac_ver()[0]
            cpu['brand'] = str(systemCommand('sysctl machdep.cpu.brand_string', False)[0]).split(': ')[1]
            #cpu['count'] = systemCommand('sysctl hw.ncpu')
        elif sys.platform == "freebsd10" or sys.platform == "freebsd11":
            systeminfo['os'] = "FreeBSD %s" % platform.release()
            cpu['brand'] = str(systemCommand('sysctl hw.model', False)[0]).split(': ')[1]
            cpu['count'] = systemCommand('sysctl hw.ncpu')
        elif sys.platform == "win32":
            # https://learn.microsoft.com/en-us/windows/release-health/windows11-release-information
            if sys.getwindowsversion().build >= 22000:
                systeminfo['os'] = "{} {}".format(platform.uname()[0], 11)
            else:
                systeminfo['os'] = "{} {}".format(platform.uname()[0], platform.uname()[2])
        systeminfo['cpu'] = cpu['brand']
        systeminfo['cores'] = cpu['count']
        systeminfo['memory'] = mem
        systeminfo['psutil'] = '.'.join(map(str, psutil.version_info))
        systeminfo['python_version'] = sys.version
        systeminfo['platform'] = platform.platform()
        systeminfo['uptime'] = int(time.time()-psutil.boot_time())
        systeminfo['ip_addresses'] = ip_addresses()
        systeminfo['hostname'] = platform.node()

        return systeminfo


if __name__ == '__main__':
    Plugin().execute()
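The /proc/cpuinfo scan above boils down to key/value parsing; a minimal standalone sketch (parse_cpuinfo is a hypothetical helper, and it counts processor stanzas rather than keeping the last zero-based index as run() does):

```python
def parse_cpuinfo(text):
    # Extract CPU brand and logical CPU count from /proc/cpuinfo-style
    # 'key : value' lines.
    brand, count = "Unknown CPU", 0
    for line in text.splitlines():
        if ':' not in line:
            continue  # blank separator between processor stanzas
        key, _, value = line.partition(':')
        key, value = key.strip(), value.strip()
        if key in ('model name', 'Processor'):
            brand = value
        elif key == 'processor':
            count += 1  # one 'processor' stanza per logical CPU
    return brand, count
```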
# ===== agent360/plugins/dirsize.py =====
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import subprocess
import plugins
import json

class Plugin(plugins.BasePlugin):
    __name__ = 'dirsize'

    def run(self, config):
        '''
        Monitor total directory sizes, specify the directories you want to monitor in /etc/agent360.ini
        '''

        data = {}
        my_dirs = config.get('dirsize', 'dirs').split(',')

        for directory in my_dirs:
            directory = directory.strip()
            data[directory] = {'bytes': os.popen('du -sbc {} | grep total'.format(directory)).read().replace('total', '').rstrip()}

        return data


if __name__ == '__main__':
    Plugin().execute()
# ===== agent360/plugins/yum-updates.py =====
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import plugins

class Plugin(plugins.BasePlugin):
    __name__ = 'yum-updates'

    def run(self, config):
        '''
        Reports available updates for RHEL-based systems via yum.
        Add the following line to /etc/sudoers:
        agent360 ALL=(ALL) NOPASSWD: /usr/bin/yum

        test by running:
        sudo -u agent360 agent360 test yum-updates

        Add to /etc/agent360.ini:
        [yum-updates]
        enabled = yes
        interval = 3600
        '''
        data = {}
        data['security'] = int(os.popen('yum -q list updates --security | grep -v Available | wc -l').read())
        data['all'] = int(os.popen('yum -q list updates | grep -v Available | wc -l').read())
        return data

if __name__ == '__main__':
    Plugin().execute()
# ===== agent360/plugins/cpanel.py =====
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import plugins
import subprocess
import json
import re

class Plugin(plugins.BasePlugin):
    __name__ = 'cpanel'

    def to_bytes(self, size):
        size = size.upper()
        units = {"B": 1, "K": 2 ** 10, "M": 2 ** 20, "G": 2 ** 30, "T": 2 ** 40}

        if ' ' not in size:  # insert a space between number and unit, e.g. '1.5G' -> '1.5 G'
            size = re.sub(r'([KMGT])', r' \1', size)
        number, unit = [string.strip() for string in size.split()]
        return int(float(number) * units[unit])

    def run(self, config):
        '''
        Plugin to collect cpanel user accounts
        To enable add to /etc/agent360.ini:
        [cpanel]
        enabled = yes
        '''

        data = subprocess.check_output(['whmapi1', '--output=jsonpretty',  'listaccts'])

        results = {}
        accounts = json.loads(data)
        for account in accounts['data']['acct']:
            results[account['user']] = {
                'diskused_bytes': self.to_bytes(account['diskused']),
                'inodesused': account['inodesused'],
                'is_locked': account['is_locked'],
                'has_backup': account['has_backup'],
                'outgoing_mail_hold': account['outgoing_mail_hold'],
                'outgoing_mail_suspended': account['outgoing_mail_suspended'],
                'suspended': account['suspended'],
            }
        return results


if __name__ == '__main__':
    Plugin().execute()
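to_bytes can be exercised standalone; the sketch below repeats its logic with the space check made explicit:

```python
import re

def to_bytes(size):
    # Convert cPanel-style sizes ('512K', '1.5G', '100 M') to bytes.
    units = {"B": 1, "K": 2 ** 10, "M": 2 ** 20, "G": 2 ** 30, "T": 2 ** 40}
    size = size.upper()
    if ' ' not in size:
        # Separate the number from the unit so split() yields two fields.
        size = re.sub(r'([BKMGT])', r' \1', size)
    number, unit = [part.strip() for part in size.split()]
    return int(float(number) * units[unit])
```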
# ===== agent360/plugins/litespeed.py =====
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import plugins
import os
import time
import re
import base64

class Plugin(plugins.BasePlugin):
    __name__ = 'litespeed'

    '''
    Litespeed monitoring plugin. Add the following section to /etc/agent360.ini

    [litespeed]
    enabled=yes
    host=localhost
    port=7080
    username=admin
    password=pass
    '''

    def run(self, config):
        result = {}
        results = {}
        data = False
        prev_cache = self.get_agent_cache()  # Get absolute values from previous check

        response = os.popen("curl -s -i -k -u %s:%s 'https://%s:%s/status?rpt=summary'"% (config.get('litespeed', 'username'), config.get('litespeed', 'password'), config.get('litespeed', 'host'),config.get('litespeed', 'port'))).read()

        for line in response.splitlines():
            test = re.search(r'REQ_RATE \[(.*)\]', line)
            if test is not None and test.group(1):
                data = True
                try:
                    result[test.group(1)]
                except KeyError:
                    result[test.group(1)] = {}
                lines = line.replace('\n', '').replace(test.group(0), '').split(', ')
                for line in lines:
                    keyval = line.strip(':').strip().split(':')
                    try:
                        result[test.group(1)][keyval[0]] += float(keyval[1])
                    except KeyError:
                        result[test.group(1)][keyval[0]] = float(keyval[1])

        metrics = (
                'SSL_BPS_IN',
                'BPS_OUT',
                'MAXSSL_CONN',
                'PLAINCONN',
                'BPS_IN',
                'SSLCONN',
                'AVAILSSL',
                'IDLECONN',
                'SSL_BPS_OUT',
                'AVAILCONN',
                'MAXCONN',
                'REQ_PROCESSING'
        )

        if data is True:
            for vhost, statistics in result.items():
                try:
                    prev_cache[vhost]['ts'] = prev_cache['ts']
                except KeyError:
                    prev_cache[vhost] = {}
                results[vhost] = {}
                for key, value in statistics.items():
                    if key == 'TOT_REQS':
                        results[vhost]['RPS'] = self.absolute_to_per_second(key, value, prev_cache[vhost])
                    if key == 'TOTAL_STATIC_HITS':
                        results[vhost]['STATIC_RPS'] = self.absolute_to_per_second(key, value, prev_cache[vhost])
                    if key == 'TOTAL_PUB_CACHE_HITS':
                        results[vhost]['PUB_CACHE_RPS'] = self.absolute_to_per_second(key, value, prev_cache[vhost])
                    if key == 'TOTAL_PRIVATE_CACHE_HITS':
                        results[vhost]['PRIVATE_CACHE_RPS'] = self.absolute_to_per_second(key, value, prev_cache[vhost])
                    if key in metrics:
                        results[vhost][key] = value

        result['ts'] = time.time()
        self.set_agent_cache(result)
        return results

if __name__ == '__main__':
    Plugin().execute()
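The per-vhost summary parsing in run() can be isolated; a sketch assuming the line format implied by the regex ('REQ_RATE [vhost]: KEY: value, KEY: value, ...'):

```python
import re

def parse_summary_line(line):
    # Return (vhost, {metric: value}) for a LiteSpeed summary line,
    # or None when the line is not a REQ_RATE record.
    m = re.search(r'REQ_RATE \[(.*)\]', line)
    if m is None or not m.group(1):
        return None
    metrics = {}
    for part in line.replace(m.group(0), '').split(', '):
        key, _, val = part.strip(' :').partition(':')
        metrics[key.strip()] = float(val)
    return m.group(1), metrics
```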
# ===== agent360/plugins/temp.py =====
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import plugins
import psutil
import sys

class Plugin(plugins.BasePlugin):
    __name__ = 'temp'

    def run(self, *unused):
        '''
        Experimental plugin used to collect temperature from system sensors.
        The plugin can be tested by running: agent360 test temp
        '''
        data = {}

        if sys.platform == "win32":
            try:
                import wmi
            except ImportError:
                return 'wmi module not installed.'

            try:
                w = wmi.WMI(namespace=r"root\OpenHardwareMonitor")
                temperature_infos = w.Sensor()
                for sensor in temperature_infos:
                    if sensor.SensorType==u'Temperature':
                        data[sensor.Parent.replace('/','-').strip('-')] = sensor.Value
                return data
            except Exception:
                return 'Could not fetch temperature data from OpenHardwareMonitor.'
        if not hasattr(psutil, "sensors_temperatures"):
            return "platform not supported"

        try:
            temps = psutil.sensors_temperatures()
        except Exception:
            return "can't read any temperature"

        for device, temp in temps.items():
            for value in temp:
                # fall back to the device name when the sensor label is empty
                sensor_type = value[0] if value[0] else device
                data[sensor_type] = value[1]
        return data


if __name__ == '__main__':
    Plugin().execute()
# ===== agent360/plugins/iostat.py =====
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#

import os
import signal
import subprocess
import sys
import psutil
import plugins
import time

def diskstats_parse(dev=None):
    file_path = '/proc/diskstats'
    result = {}

    if not os.path.isfile("/proc/diskstats"):
        return False

    # ref: http://lxr.osuosl.org/source/Documentation/iostats.txt
    columns_disk = ['m', 'mm', 'dev', 'reads', 'rd_mrg', 'rd_sectors',
                    'ms_reading', 'writes', 'wr_mrg', 'wr_sectors',
                    'ms_writing', 'cur_ios', 'ms_doing_io', 'ms_weighted']
    # For kernel 4.18+
    columns_disk_418 = ['m', 'mm', 'dev', 'reads', 'rd_mrg', 'rd_sectors',
                    'ms_reading', 'writes', 'wr_mrg', 'wr_sectors',
                    'ms_writing', 'cur_ios', 'ms_doing_io', 'ms_weighted',
                    'discards', 'discards_merged', 'discarded_sectors',
                    'discarded_time']
    # for kernel 5.5+
    columns_disk_55 = ['m', 'mm', 'dev', 'reads', 'rd_mrg', 'rd_sectors',
                    'ms_reading', 'writes', 'wr_mrg', 'wr_sectors',
                    'ms_writing', 'cur_ios', 'ms_doing_io', 'ms_weighted',
                    'discards', 'discards_merged', 'discarded_sectors',
                    'discarded_time', 'flush', 'flush_time']

    columns_partition = ['m', 'mm', 'dev', 'reads', 'rd_sectors', 'writes', 'wr_sectors']

    lines = open(file_path, 'r').readlines()
    for line in lines:
        if line == '':
            continue
        split = line.split()
        if len(split) == len(columns_disk_55):
            columns = columns_disk_55
        elif len(split) == len(columns_disk_418):
            columns = columns_disk_418
        elif len(split) == len(columns_disk):
            columns = columns_disk
        elif len(split) == len(columns_partition):
            columns = columns_partition
        else:
            # No match
            continue

        data = dict(zip(columns, split))

        if data['dev'][:3] == 'nvm' and data['dev'][-2:-1] == 'n':
            pass
        elif data['dev'][-1:].isdigit() is True:
            continue

        if "loop" in data['dev'] or "ram" in data['dev']:
            continue

        if dev is not None and dev != data['dev']:
            continue
        for key in data:
            if key != 'dev':
                data[key] = int(data[key])
        result[data['dev']] = data

    return result


class Plugin(plugins.BasePlugin):
    __name__ = 'iostat'

    def run(self, *unused):
        delta_keys = (
            'reads',
            'writes',
            'wr_sectors',
            'rd_sectors',
            'ms_reading',
            'rd_mrg',
            'wr_mrg',
            'ms_weighted',
            'ms_doing_io',
            'ms_writing',
            'discarded_sectors',
            'discarded_time',
            'flush',
            'flush_time',
            'discards'
        )
        next_cache = {}
        next_cache['ts'] = time.time()
        prev_cache = self.get_agent_cache()  # Get absolute values from previous check
        disks = diskstats_parse()
        if not disks:  # /proc/diskstats missing or unparsable; fall back to psutil
            results = {}
            try:
                diskdata = psutil.disk_io_counters(perdisk=True)
                for device, values in diskdata.items():
                    device_stats = {}
                    for key_value in values._fields:
                        device_stats[key_value] = getattr(values, key_value)
                    results[device] = device_stats
            except Exception as e:
                results = str(e)  # Exception.message no longer exists in Python 3
        else:
            results = {}
            for device, values in disks.items():
                device_stats = {}
                next_cache[device] = {}
                next_cache[device]['ts'] = time.time()
                prev_cache.setdefault(device, {})
                for key_value, value in values.items():
                    if key_value in delta_keys:
                        try:
                            device_stats[key_value] = self.absolute_to_per_second(key_value, value, prev_cache[device])
                        except Exception:
                            pass
                        next_cache[device][key_value] = value
                    else:
                        device_stats[key_value] = value
                try:
                    device_stats['avgrq-sz'] = (device_stats['wr_sectors']+device_stats['rd_sectors']) / (device_stats['reads']+device_stats['writes'])
                except Exception:  # missing keys or zero transactions
                    device_stats['avgrq-sz'] = 0
                try:
                    device_stats['tps'] = device_stats['reads']+device_stats['writes']
                except Exception:
                    device_stats['tps'] = 0
                try:
                    device_stats['usage'] = (100 * device_stats['ms_doing_io']) / (1000 * (next_cache['ts'] - prev_cache['ts']))
                except Exception:  # first run: no previous timestamp cached
                    device_stats['usage'] = 0

                results[device] = device_stats

        self.set_agent_cache(next_cache)
        return results


if __name__ == '__main__':
    Plugin().execute()
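The column-zip parsing and the derived tps/avgrq-sz values can be demonstrated on a synthetic diskstats row (all numbers invented, pre-4.18 column layout):

```python
# Pre-4.18 /proc/diskstats column names, as in diskstats_parse() above.
columns = ['m', 'mm', 'dev', 'reads', 'rd_mrg', 'rd_sectors',
           'ms_reading', 'writes', 'wr_mrg', 'wr_sectors',
           'ms_writing', 'cur_ios', 'ms_doing_io', 'ms_weighted']

def parse_row(line):
    # Zip one whitespace-separated row against the column names and
    # convert everything except the device name to int.
    data = dict(zip(columns, line.split()))
    return {k: (v if k == 'dev' else int(v)) for k, v in data.items()}

def derived(stats):
    # Same arithmetic as the try/except blocks in run().
    tps = stats['reads'] + stats['writes']
    avgrq_sz = (stats['wr_sectors'] + stats['rd_sectors']) / tps if tps else 0
    return {'tps': tps, 'avgrq-sz': avgrq_sz}
```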
# ===== agent360/plugins/mongodb.py =====
#!/usr/bin/env python
import time
import plugins
from pymongo import MongoClient

class Plugin(plugins.BasePlugin):
    __name__ = 'mongodb'


    def run(self, config):
        """
        Mongodb monitoring
        """

        client = MongoClient(config.get('mongodb', 'connection_string'))
        db = client.admin
        statistics = db.command("serverStatus")
        prev_cache = self.get_agent_cache()  # Get absolute values from previous check
        data = {}
        results = {}
        # replication status
        try:
            results['isprimary'] = 0 if statistics['repl']['ismaster'] is False else 1
            results['members'] = len(statistics['repl']['hosts'])
        except KeyError:  # not running as part of a replica set
            pass

        # transactions stats, available in v3.6.3 and up
        try:
            data['transactions-retriedCommandsCount'] = statistics['transactions']['retriedCommandsCount']
            data['transactions-retriedStatementsCount'] = statistics['transactions']['retriedStatementsCount']
            data['transactions-transactionsCollectionWriteCount'] = statistics['transactions']['transactionsCollectionWriteCount']
            data['transactions-totalAborted'] = statistics['transactions']['totalAborted']
            data['transactions-totalCommitted'] = statistics['transactions']['totalCommitted']
            data['transactions-totalStarted'] = statistics['transactions']['totalStarted']
            results['transactions-currentActive'] = statistics['transactions']['currentActive']
            results['transactions-currentInactive'] = statistics['transactions']['currentInactive']
            results['transactions-currentOpen'] = statistics['transactions']['currentOpen']
        except KeyError:  # transaction stats require MongoDB 3.6.3+
            pass

        data['connections.totalCreated'] = statistics['connections']['totalCreated']
        results['connections.available'] = statistics['connections']['available']
        results['connections.current'] = statistics['connections']['current']
        data['opcounters.command'] = statistics['opcounters']['command']
        data['opcounters.delete'] = statistics['opcounters']['delete']
        data['opcounters.getmore'] = statistics['opcounters']['getmore']
        data['opcounters.insert'] = statistics['opcounters']['insert']
        data['opcounters.query'] = statistics['opcounters']['query']
        data['opcounters.update'] = statistics['opcounters']['update']

        data['opLatencies.commands.latency'] = statistics['opLatencies']['commands']['latency']
        data['opLatencies.commands.ops'] = statistics['opLatencies']['commands']['ops']
        data['opLatencies.reads.latency'] = statistics['opLatencies']['reads']['latency']
        data['opLatencies.reads.ops'] = statistics['opLatencies']['reads']['ops']
        data['opLatencies.writes.latency'] = statistics['opLatencies']['writes']['latency']
        data['opLatencies.writes.ops'] = statistics['opLatencies']['writes']['ops']

        data['globalLock.currentQueue.total'] = statistics['globalLock']['currentQueue']['total']
        data['globalLock.currentQueue.readers'] = statistics['globalLock']['currentQueue']['readers']
        data['globalLock.currentQueue.writers'] = statistics['globalLock']['currentQueue']['writers']

        data['globalLock.activeClients.total'] = statistics['globalLock']['activeClients']['total']
        data['globalLock.activeClients.readers'] = statistics['globalLock']['activeClients']['readers']
        data['globalLock.activeClients.writers'] = statistics['globalLock']['activeClients']['writers']

        data['asserts.msg'] = statistics['asserts']['msg']
        data['asserts.regular'] = statistics['asserts']['regular']
        data['asserts.rollovers'] = statistics['asserts']['rollovers']
        data['asserts.user'] = statistics['asserts']['user']
        data['asserts.warning'] = statistics['asserts']['warning']

        # deadlock stats
        try:
            for key, val in statistics['locks'].items():
                for key2, val2 in val.items():
                    for key3, val3 in val2.items():
                        data['locks-{}-{}-{}'.format(key.lower(), key2, key3)] = val3
        except Exception:
            pass

        try:
            data['opcountersRepl.command'] = statistics['opcountersRepl']['command']
            data['opcountersRepl.delete'] = statistics['opcountersRepl']['delete']
            data['opcountersRepl.getmore'] = statistics['opcountersRepl']['getmore']
            data['opcountersRepl.insert'] = statistics['opcountersRepl']['insert']
            data['opcountersRepl.query'] = statistics['opcountersRepl']['query']
            data['opcountersRepl.update'] = statistics['opcountersRepl']['update']
        except KeyError:
            pass

        for key, val in data.items():
            results[key] = self.absolute_to_per_second(key, val, prev_cache)

        try:
            results['opLatencies.commands'] = results['opLatencies.commands.latency']/results['opLatencies.commands.ops']
            results['opLatencies.writes'] = results['opLatencies.writes.latency']/results['opLatencies.writes.ops']
            results['opLatencies.reads'] = results['opLatencies.reads.latency']/results['opLatencies.reads.ops']
        except (KeyError, ZeroDivisionError):
            pass

        next_cache = data
        next_cache['ts'] = time.time()
        self.set_agent_cache(next_cache)
        results['mem.resident'] = statistics['mem']['resident']
        results['mem.bits'] = statistics['mem']['bits']
        results['mem.virtual'] = statistics['mem']['virtual']
        results['mem.supported'] = 0 if statistics['mem']['supported'] is False else 1

        return results


if __name__ == '__main__':
    Plugin().execute()
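The triple loop over statistics['locks'] is a dict flattening; a standalone sketch with an invented sample document:

```python
def flatten_locks(locks):
    # Flatten the nested serverStatus 'locks' document into
    # 'locks-<scope>-<section>-<mode>' keys, mirroring the loop in run().
    out = {}
    for k1, v1 in locks.items():
        for k2, v2 in v1.items():
            for k3, v3 in v2.items():
                out['locks-{}-{}-{}'.format(k1.lower(), k2, k3)] = v3
    return out
```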
# ===== agent360/plugins/tcpports.py =====
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import socket
import plugins


class Plugin(plugins.BasePlugin):
    __name__ = 'tcpports'

    def run(self, config):
        '''
        Checks if TCP ports are open.
        '''
        def is_port_open(host, port, timeout):
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            sock.settimeout(timeout)  # Timeout for the connection attempt
            try:
                sock.connect((host, port))
                sock.close()
                return 1
            except socket.error:
                return 0

        results = dict()

        # Parse the config for the host-port pairs to check
        host_ports = config.get(__name__, 'host_ports').split(',')
        timeout = float(config.get(__name__, 'timeout'))  # Timeout for the connection attempt

        for host_port in host_ports:
            host, port = host_port.split(':')
            port = int(port)
            results[host_port] = {'available': is_port_open(host, port, timeout)}

        return results


if __name__ == '__main__':
    Plugin().execute()
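host_ports entries are split naively on ':'; a hedged sketch of a slightly more tolerant parser (a hypothetical helper, not part of the plugin — rpartition keeps any earlier colons, such as those in a bracketed IPv6 literal, inside the host part):

```python
def parse_host_port(spec):
    # 'example.com:443' -> ('example.com', 443); only the last ':'
    # separates the port, so '[::1]:22' parses correctly too.
    host, _, port = spec.rpartition(':')
    return host, int(port)
```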
# ===== agent360/plugins/janus.py =====
#!/usr/bin/env python
# -*- coding: utf-8 -*- 
import plugins
import subprocess


class Plugin(plugins.BasePlugin):
    __name__ = 'janus'

    def run(self, config):
        adminpw = config.get('janus', 'adminpw')
        p = subprocess.Popen("curl -s -H \"Accept: application/json\" -H \"Content-type: application/json\" -X POST -d '{ \"janus\": \"list_sessions\", \"transaction\": \"324\", \"admin_secret\": \""+adminpw+"\" }' http://localhost:7088/admin | awk 'NR>=5' | head -n -2 | wc -l", stdout=subprocess.PIPE, shell=True)
        p = p.communicate()[0].decode('utf-8').replace("\n", "")
        res = { "janus_sessions": p }
        return res

if __name__ == '__main__':
    Plugin().execute()
# ===== agent360/plugins/diskstatus-nvme.py =====
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import subprocess
import plugins
import json
import re

class Plugin(plugins.BasePlugin):
    __name__ = 'diskstatus-nvme'

    def run(self, config):
        '''
        Monitor nvme disk status
        For NVME drives install nvme-cli (https://github.com/linux-nvme/nvme-cli#distro-support)
        This plugin requires the agent to be run under the root user.
        '''
        results = {}
        try:
            data = subprocess.Popen('nvme --list --output-format=json', stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True).communicate()[0]
            data = json.loads(data.decode('utf-8'))
            data['Devices']  # raises KeyError if the listing is unusable
            nvme = True
        except Exception:
            return "Could not fetch nvme status information"

        if nvme is True:
            for value in data['Devices']:
                device = {}
                disk_data = os.popen('nvme smart-log {} --output-format=json'.format(value['DevicePath'])).read()
                try:
                    data_disk = json.loads(disk_data)
                except Exception:
                    continue  # skip devices whose smart-log output is not valid JSON

                for disk_key, disk_value in data_disk.items():
                    if disk_key.startswith('temperature'):
                        device[disk_key] = round(disk_value-273.15, 0) # kelvin to celsius
                    else:
                        device[disk_key] = disk_value
                results[value['DevicePath'].replace('/dev/', '')] = device
        return results


if __name__ == '__main__':
    Plugin().execute()
# --- agent360/plugins/wp-toolkit.py ---
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import plugins

class Plugin(plugins.BasePlugin):
    __name__ = 'wp-toolkit'
    __title__ = 'WP Toolkit'
    __description__ = 'Unified plugin for gathering metrics for WP Toolkit servers on cPanel and Plesk.'

    def run(self, config):
        '''
        Grabbing some basic information from your cPanel or Plesk server
        If you are using Plesk:
        add to /etc/sudoers the following line:
        agent360 ALL=(ALL) NOPASSWD: /usr/sbin/plesk

        For cPanel:
        agent360 ALL=(ALL) NOPASSWD: /usr/local/bin/wp-toolkit

        test by running:
        sudo -u agent360 agent360 test wp-toolkit
        Add to /etc/agent360.ini:
        [wp-toolkit]
        enabled = yes
        interval = 3600
        '''
        command = ''
        if os.path.isdir("/var/cpanel/users"):
            command = 'wp-toolkit'
        elif os.path.isdir("/opt/plesk"):
            command = 'plesk ext wp-toolkit'
        data = {}
        if command != '':
            data['WordPress Websites'] = int(os.popen('sudo -n ' + command + ' --list | grep -v "Main Domain ID" | grep . | wc -l').read())
            data['WordPress Websites - Alive'] = int(os.popen('sudo -n ' + command + ' --list | grep "Working" | wc -l').read())
            data['WordPress Websites - Outdated'] = int(os.popen('sudo -n ' + command + ' --list | grep "Outdated WP" |  wc -l').read())
            data['WordPress Websites - Outdated PHP'] = int(os.popen('sudo -n ' + command + ' --list | grep "Outdated PHP" |  wc -l').read())
            data['WordPress Websites - Broken'] = int(os.popen('sudo -n ' + command + ' --list | grep "Broken" | wc -l').read())
            return data
        else:
            return "Neither cPanel nor Plesk detected"


if __name__ == '__main__':
    Plugin().execute()
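The five `sudo … --list` invocations above each rerun wp-toolkit; the same counters can be derived from a single listing. A sketch (`count_wp_sites` is a hypothetical helper; the status keywords come from the greps above, but the exact `--list` line format is otherwise an assumption):

```python
def count_wp_sites(listing):
    """Derive all plugin counters from one `--list` output string."""
    # Mirror the greps above: drop the header row and blank lines.
    lines = [l for l in listing.splitlines()
             if l.strip() and 'Main Domain ID' not in l]
    return {
        'WordPress Websites': len(lines),
        'WordPress Websites - Alive': sum('Working' in l for l in lines),
        'WordPress Websites - Outdated': sum('Outdated WP' in l for l in lines),
        'WordPress Websites - Outdated PHP': sum('Outdated PHP' in l for l in lines),
        'WordPress Websites - Broken': sum('Broken' in l for l in lines),
    }
```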
# --- agent360/plugins/sleeper.py ---
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import time
import plugins


class Plugin(plugins.BasePlugin):
    __name__ = 'sleeper'

    def run(self, *unused):
        time.sleep(60 * 60 * 24)


if __name__ == '__main__':
    Plugin().execute()  # execute(), for consistency with the other plugins
# --- agent360/plugins/exim.py ---
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import plugins

class Plugin(plugins.BasePlugin):
    __name__ = 'exim'

    def run(self, config):
        '''
        exim mail queue monitoring, needs sudo access!
        Instructions at:
        https://docs.360monitoring.com/docs/exim-queue-size-plugin
        '''
        data = {}
        data['queue_size'] = int(os.popen('sudo exim -bpc').read())
        return data

if __name__ == '__main__':
    Plugin().execute()
# --- agent360/plugins/diskstatus.py ---
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import subprocess
import plugins
import json
import re

class Plugin(plugins.BasePlugin):
    __name__ = 'diskstatus'

    def run(self, config):
        '''
        Monitor nvme or smart disk status.
        For NVME drives use the diskstatus-nvme plugin
        for smart status install smartmontools (apt-get/yum install smartmontools)
        This plugin requires the agent to be run under the root user.
        '''
        results = {}

        try:
            devlist = subprocess.Popen('smartctl --scan', stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True).communicate()[0].decode().splitlines()
            smartctl = True
        except Exception as e:
            smartctl = False
            return "Could not fetch smartctl status information"

        if smartctl is True:
            for row in devlist:
                try:
                    disk_id = row.split(' ')[0].split('/')[2]
                    disk_stats = os.popen('smartctl -A -H {}'.format(row.split(' ')[0])).read().splitlines()
                    # the position of the overall-health line varies between
                    # smartctl versions, so scan for it instead of indexing
                    smart_status = 0
                    for line in disk_stats:
                        if 'overall-health' in line and line.rstrip().endswith('PASSED'):
                            smart_status = 1
                            break
                    results[disk_id] = {}
                    start = False
                    for stats in disk_stats:
                        if stats[0:3] == 'ID#':
                            start = True
                            continue
                        if start is False:
                            continue
                        stats = re.sub(' +', ' ', stats).strip()
                        stats = stats.split(' ')
                        if len(stats) > 9:
                            results[disk_id][stats[1].lower().replace('_celsius','')] = stats[9]
                    results[disk_id]["status"] = smart_status
                except Exception as e:
                    print(e)
        return results


if __name__ == '__main__':
    Plugin().execute()
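The attribute parsing inside the loop above — skip until the `ID#` header, collapse runs of spaces, take column 1 as the attribute name and column 9 as the raw value — can be isolated and tested on its own. A sketch (`parse_smart_table` is a hypothetical helper, assuming typical `smartctl -A` table output):

```python
import re

def parse_smart_table(lines):
    """Parse the attribute table from `smartctl -A` output into
    {attribute_name: raw_value}, mirroring the column handling above."""
    results = {}
    start = False
    for line in lines:
        if line[0:3] == 'ID#':  # attribute table header
            start = True
            continue
        if not start:
            continue
        cols = re.sub(' +', ' ', line).strip().split(' ')
        if len(cols) > 9:
            results[cols[1].lower().replace('_celsius', '')] = cols[9]
    return results
```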
# --- agent360/plugins/cpu.py ---
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import psutil
import plugins
import time

class Plugin(plugins.BasePlugin):
    __name__ = 'cpu'

    def run(self, *unused):
        prev_cache = self.get_agent_cache()  # Get absolute values from previous check
        next_cache = {}
        next_cache['ts'] = time.time()
        results = {}
        data_stats = psutil.cpu_stats()

        # if there's no previous cache, build the first baseline value
        # so we don't have all 0% values, should happen only the first time
        try:
            prev_cache['ts']
        except KeyError:
            data = psutil.cpu_times(percpu=True)
            cpu_number = -1
            prev_cache['ts'] = time.time()
            for cpu in data:
                cpu_number = cpu_number + 1
                prev_cache[cpu_number] = {}
                for key in cpu._fields:
                    prev_cache[cpu_number][key] = getattr(cpu, key)
            time.sleep(0.5)

        data = psutil.cpu_times(percpu=True)
        cpu_number = -1
        for cpu in data:
            cpu_number = cpu_number+1
            results[cpu_number] = {}
            next_cache[cpu_number] = {}
            for key in cpu._fields:
                next_cache[cpu_number][key] = getattr(cpu, key)
                try:
                    time_delta = time.time() - prev_cache['ts']
                except KeyError:
                    continue
                if time_delta <= 0:
                    continue
                cpu_time_delta = getattr(cpu, key) - prev_cache[cpu_number][key]
                if cpu_time_delta < 0:
                    cpu_time_delta = 0
                results[cpu_number][key] = cpu_time_delta / time_delta * 100
                if results[cpu_number][key] > 100:
                    results[cpu_number][key] = 100
                if results[cpu_number][key] < 0:
                    results[cpu_number][key] = 0
        self.set_agent_cache(next_cache)
        return results


if __name__ == '__main__':
    Plugin().execute()
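The inner loop above turns two cumulative `psutil.cpu_times()` snapshots into per-field busy percentages: each field's delta divided by the wall-clock delta, clamped to the 0–100 range. The core arithmetic, factored out as a sketch (`cpu_percentages` is a hypothetical helper, not part of the plugin):

```python
def cpu_percentages(prev, curr, time_delta):
    """Turn two cumulative cpu_times namedtuples into percentages,
    clamped to [0, 100] as in the plugin above."""
    out = {}
    for key in curr._fields:
        delta = max(getattr(curr, key) - getattr(prev, key), 0)
        out[key] = min(max(delta / time_delta * 100, 0), 100)
    return out
```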
# --- agent360/__pycache__/__init__.cpython-39.pyc: compiled bytecode (binary), omitted ---
# --- agent360/__pycache__/agent360.cpython-39.pyc: compiled bytecode (binary), omitted ---

# --- libpasteurize/fixes/fix_unpacking.py ---
u"""
Fixer for:
(a,)* *b (,c)* [,] = s
for (a,)* *b (,c)* [,] in d: ...
"""

from lib2to3 import fixer_base
from itertools import count
from lib2to3.fixer_util import (Assign, Comma, Call, Newline, Name,
                                Number, token, syms, Node, Leaf)
from libfuturize.fixer_util import indentation, suitify, commatize
# from libfuturize.fixer_util import Assign, Comma, Call, Newline, Name, Number, indentation, suitify, commatize, token, syms, Node, Leaf

def assignment_source(num_pre, num_post, LISTNAME, ITERNAME):
    u"""
    Accepts num_pre and num_post, which are counts of values
    before and after the starg (not including the starg)
    Returns a source fit for Assign() from fixer_util
    """
    children = []
    try:
        pre = unicode(num_pre)
        post = unicode(num_post)
    except NameError:
        pre = str(num_pre)
        post = str(num_post)
    # This code builds the assignment source from lib2to3 tree primitives.
    # It's not very readable, but it seems like the most correct way to do it.
    if num_pre > 0:
        pre_part = Node(syms.power, [Name(LISTNAME), Node(syms.trailer, [Leaf(token.LSQB, u"["), Node(syms.subscript, [Leaf(token.COLON, u":"), Number(pre)]), Leaf(token.RSQB, u"]")])])
        children.append(pre_part)
        children.append(Leaf(token.PLUS, u"+", prefix=u" "))
    main_part = Node(syms.power, [Leaf(token.LSQB, u"[", prefix=u" "), Name(LISTNAME), Node(syms.trailer, [Leaf(token.LSQB, u"["), Node(syms.subscript, [Number(pre) if num_pre > 0 else Leaf(1, u""), Leaf(token.COLON, u":"), Node(syms.factor, [Leaf(token.MINUS, u"-"), Number(post)]) if num_post > 0 else Leaf(1, u"")]), Leaf(token.RSQB, u"]"), Leaf(token.RSQB, u"]")])])
    children.append(main_part)
    if num_post > 0:
        children.append(Leaf(token.PLUS, u"+", prefix=u" "))
        post_part = Node(syms.power, [Name(LISTNAME, prefix=u" "), Node(syms.trailer, [Leaf(token.LSQB, u"["), Node(syms.subscript, [Node(syms.factor, [Leaf(token.MINUS, u"-"), Number(post)]), Leaf(token.COLON, u":")]), Leaf(token.RSQB, u"]")])])
        children.append(post_part)
    source = Node(syms.arith_expr, children)
    return source

class FixUnpacking(fixer_base.BaseFix):

    PATTERN = u"""
    expl=expr_stmt< testlist_star_expr<
        pre=(any ',')*
            star_expr< '*' name=NAME >
        post=(',' any)* [','] > '=' source=any > |
    impl=for_stmt< 'for' lst=exprlist<
        pre=(any ',')*
            star_expr< '*' name=NAME >
        post=(',' any)* [','] > 'in' it=any ':' suite=any>"""

    def fix_explicit_context(self, node, results):
        pre, name, post, source = (results.get(n) for n in (u"pre", u"name", u"post", u"source"))
        pre = [n.clone() for n in pre if n.type == token.NAME]
        name.prefix = u" "
        post = [n.clone() for n in post if n.type == token.NAME]
        target = [n.clone() for n in commatize(pre + [name.clone()] + post)]
        # to make the special-case fix for "*z, = ..." correct with the least
        # amount of modification, make the left-side into a guaranteed tuple
        target.append(Comma())
        source.prefix = u""
        setup_line = Assign(Name(self.LISTNAME), Call(Name(u"list"), [source.clone()]))
        power_line = Assign(target, assignment_source(len(pre), len(post), self.LISTNAME, self.ITERNAME))
        return setup_line, power_line

    def fix_implicit_context(self, node, results):
        u"""
        Only example of the implicit context is
        a for loop, so only fix that.
        """
        pre, name, post, it = (results.get(n) for n in (u"pre", u"name", u"post", u"it"))
        pre = [n.clone() for n in pre if n.type == token.NAME]
        name.prefix = u" "
        post = [n.clone() for n in post if n.type == token.NAME]
        target = [n.clone() for n in commatize(pre + [name.clone()] + post)]
        # to make the special-case fix for "*z, = ..." correct with the least
        # amount of modification, make the left-side into a guaranteed tuple
        target.append(Comma())
        source = it.clone()
        source.prefix = u""
        setup_line = Assign(Name(self.LISTNAME), Call(Name(u"list"), [Name(self.ITERNAME)]))
        power_line = Assign(target, assignment_source(len(pre), len(post), self.LISTNAME, self.ITERNAME))
        return setup_line, power_line

    def transform(self, node, results):
        u"""
        a,b,c,d,e,f,*g,h,i = range(100) changes to
        _3to2list = list(range(100))
        a,b,c,d,e,f,g,h,i, = _3to2list[:6] + [_3to2list[6:-2]] + _3to2list[-2:]

        and

        for a,b,*c,d,e in iter_of_iters: do_stuff changes to
        for _3to2iter in iter_of_iters:
            _3to2list = list(_3to2iter)
            a,b,c,d,e, = _3to2list[:2] + [_3to2list[2:-2]] + _3to2list[-2:]
            do_stuff
        """
        self.LISTNAME = self.new_name(u"_3to2list")
        self.ITERNAME = self.new_name(u"_3to2iter")
        expl, impl = results.get(u"expl"), results.get(u"impl")
        if expl is not None:
            setup_line, power_line = self.fix_explicit_context(node, results)
            setup_line.prefix = expl.prefix
            power_line.prefix = indentation(expl.parent)
            setup_line.append_child(Newline())
            parent = node.parent
            i = node.remove()
            parent.insert_child(i, power_line)
            parent.insert_child(i, setup_line)
        elif impl is not None:
            setup_line, power_line = self.fix_implicit_context(node, results)
            suitify(node)
            suite = [k for k in node.children if k.type == syms.suite][0]
            setup_line.prefix = u""
            power_line.prefix = suite.children[1].value
            suite.children[2].prefix = indentation(suite.children[2])
            suite.insert_child(2, Newline())
            suite.insert_child(2, power_line)
            suite.insert_child(2, Newline())
            suite.insert_child(2, setup_line)
            results.get(u"lst").replace(Name(self.ITERNAME, prefix=u" "))
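The slice arithmetic this fixer emits (illustrated in the `transform` docstring) can be verified in plain Python: the pre names take a head slice, the starred name takes the middle as a list, and the post names take a tail slice.

```python
# Equivalent of rewriting  a, b, *c, d, e = range(10)  without star syntax:
seq = list(range(10))
pre, post = 2, 2  # names before and after the starred target
rewritten = seq[:pre] + [seq[pre:-post]] + seq[-post:]
a, b, c, d, e = rewritten
assert (a, b) == (0, 1)
assert c == [2, 3, 4, 5, 6, 7]
assert (d, e) == (8, 9)
```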
# --- libpasteurize/fixes/fix_division.py ---
u"""
Fixer for division: from __future__ import division if needed
"""

from lib2to3 import fixer_base
from libfuturize.fixer_util import token, future_import

def match_division(node):
    u"""
    __future__.division redefines the meaning of a single slash for division,
    so we match that and only that.
    """
    slash = token.SLASH
    return node.type == slash and not node.next_sibling.type == slash and \
                                  not node.prev_sibling.type == slash

class FixDivision(fixer_base.BaseFix):
    run_order = 4    # this seems to be ignored?

    def match(self, node):
        u"""
        Since the tree needs to be fixed once and only once if and only if it
        matches, then we can start discarding matches after we make the first.
        """
        return match_division(node)

    def transform(self, node, results):
        future_import(u"division", node)
# --- libpasteurize/fixes/fix_printfunction.py ---
u"""
Fixer for print: from __future__ import print_function.
"""

from lib2to3 import fixer_base
from libfuturize.fixer_util import future_import

class FixPrintfunction(fixer_base.BaseFix):

    # explicit = True

    PATTERN = u"""
              power< 'print' trailer < '(' any* ')' > any* >
              """

    def transform(self, node, results):
        future_import(u"print_function", node)
# --- libpasteurize/fixes/fix_features.py ---
u"""
Warn about features that are not present in Python 2.5, giving a message that
points to the earliest version of Python 2.x (or 3.x, if none) that supports it
"""

from .feature_base import Feature, Features
from lib2to3 import fixer_base

FEATURES = [
   #(FeatureName,
   #    FeaturePattern,
   # FeatureMinVersion,
   #),
    (u"memoryview",
        u"power < 'memoryview' trailer < '(' any* ')' > any* >",
     u"2.7",
    ),
    (u"numbers",
        u"""import_from< 'from' 'numbers' 'import' any* > |
           import_name< 'import' ('numbers' dotted_as_names< any* 'numbers' any* >) >""",
     u"2.6",
    ),
    (u"abc",
        u"""import_name< 'import' ('abc' dotted_as_names< any* 'abc' any* >) > |
           import_from< 'from' 'abc' 'import' any* >""",
     u"2.6",
    ),
    (u"io",
        u"""import_name< 'import' ('io' dotted_as_names< any* 'io' any* >) > |
           import_from< 'from' 'io' 'import' any* >""",
     u"2.6",
    ),
    (u"bin",
        u"power< 'bin' trailer< '(' any* ')' > any* >",
     u"2.6",
    ),
    (u"formatting",
        u"power< any trailer< '.' 'format' > trailer< '(' any* ')' > >",
     u"2.6",
    ),
    (u"nonlocal",
        u"global_stmt< 'nonlocal' any* >",
     u"3.0",
    ),
    (u"with_traceback",
        u"trailer< '.' 'with_traceback' >",
     u"3.0",
    ),
]

class FixFeatures(fixer_base.BaseFix):

    run_order = 9 # Wait until all other fixers have run to check for these

    # To avoid spamming, we only want to warn for each feature once.
    features_warned = set()

    # Build features from the list above
    features = Features([Feature(name, pattern, version) for \
                                name, pattern, version in FEATURES])

    PATTERN = features.PATTERN

    def match(self, node):
        to_ret = super(FixFeatures, self).match(node)
        # We want the mapping only to tell us the node's specific information.
        try:
            del to_ret[u'node']
        except Exception:
            # We want it to delete the 'node' from the results
            # if it's there, so we don't care if it fails for normal reasons.
            pass
        return to_ret

    def transform(self, node, results):
        for feature_name in results:
            if feature_name in self.features_warned:
                continue
            else:
                curr_feature = self.features[feature_name]
                if curr_feature.version >= u"3":
                    fail = self.cannot_convert
                else:
                    fail = self.warning
                fail(node, reason=curr_feature.message_text())
                self.features_warned.add(feature_name)
# --- libpasteurize/fixes/fix_getcwd.py ---
u"""
Fixer for os.getcwd() -> os.getcwdu().
Also warns about "from os import getcwd", suggesting the above form.
"""

from lib2to3 import fixer_base
from lib2to3.fixer_util import Name

class FixGetcwd(fixer_base.BaseFix):

    PATTERN = u"""
              power< 'os' trailer< dot='.' name='getcwd' > any* >
              |
              import_from< 'from' 'os' 'import' bad='getcwd' >
              """

    def transform(self, node, results):
        if u"name" in results:
            name = results[u"name"]
            name.replace(Name(u"getcwdu", prefix=name.prefix))
        elif u"bad" in results:
            # Can't convert to getcwdu and then expect to catch every use.
            self.cannot_convert(node, u"import os, use os.getcwd() instead.")
            return
        else:
            raise ValueError(u"For some reason, the pattern matcher failed.")
# --- libpasteurize/fixes/fix_add_all__future__imports.py ---
"""
Fixer for adding:

    from __future__ import absolute_import
    from __future__ import division
    from __future__ import print_function
    from __future__ import unicode_literals

This is done when converting from Py3 to both Py3/Py2.
"""

from lib2to3 import fixer_base
from libfuturize.fixer_util import future_import

class FixAddAllFutureImports(fixer_base.BaseFix):
    BM_compatible = True
    PATTERN = "file_input"
    run_order = 1

    def transform(self, node, results):
        future_import(u"absolute_import", node)
        future_import(u"division", node)
        future_import(u"print_function", node)
        future_import(u"unicode_literals", node)
# --- libpasteurize/fixes/__init__.py ---
import sys
from lib2to3 import refactor

# The original set of these fixes comes from lib3to2 (https://bitbucket.org/amentajo/lib3to2):
fix_names = set([
                 'libpasteurize.fixes.fix_add_all__future__imports',  # from __future__ import absolute_import etc. on separate lines
                 'libpasteurize.fixes.fix_add_future_standard_library_import',  # we force adding this import for now, even if it doesn't seem necessary to the fix_future_standard_library fixer, for ease of testing
                 # 'libfuturize.fixes.fix_order___future__imports',  # consolidates to a single line to simplify testing -- UNFINISHED
                 'libpasteurize.fixes.fix_future_builtins',   # adds "from future.builtins import *"
                 'libfuturize.fixes.fix_future_standard_library', # adds "from future import standard_library"

                 'libpasteurize.fixes.fix_annotations',
                 # 'libpasteurize.fixes.fix_bitlength',  # ints have this in Py2.7
                 # 'libpasteurize.fixes.fix_bool',    # need a decorator or Mixin
                 # 'libpasteurize.fixes.fix_bytes',   # leave bytes as bytes
                 # 'libpasteurize.fixes.fix_classdecorator',  # available in
                 # Py2.6+
                 # 'libpasteurize.fixes.fix_collections', hmmm ...
                 # 'libpasteurize.fixes.fix_dctsetcomp',  # avail in Py27
                 'libpasteurize.fixes.fix_division',   # yes
                 # 'libpasteurize.fixes.fix_except',   # avail in Py2.6+
                 # 'libpasteurize.fixes.fix_features',  # ?
                 'libpasteurize.fixes.fix_fullargspec',
                 # 'libpasteurize.fixes.fix_funcattrs',
                 'libpasteurize.fixes.fix_getcwd',
                 'libpasteurize.fixes.fix_imports',   # adds "from future import standard_library"
                 'libpasteurize.fixes.fix_imports2',
                 # 'libpasteurize.fixes.fix_input',
                 # 'libpasteurize.fixes.fix_int',
                 # 'libpasteurize.fixes.fix_intern',
                 # 'libpasteurize.fixes.fix_itertools',
                 'libpasteurize.fixes.fix_kwargs',   # yes, we want this
                 # 'libpasteurize.fixes.fix_memoryview',
                 # 'libpasteurize.fixes.fix_metaclass',  # write a custom handler for
                 # this
                 # 'libpasteurize.fixes.fix_methodattrs',  # __func__ and __self__ seem to be defined on Py2.7 already
                 'libpasteurize.fixes.fix_newstyle',   # yes, we want this: explicit inheritance from object. Without new-style classes in Py2, super() will break etc.
                 # 'libpasteurize.fixes.fix_next',   # use a decorator for this
                 # 'libpasteurize.fixes.fix_numliterals',   # prob not
                 # 'libpasteurize.fixes.fix_open',   # huh?
                 # 'libpasteurize.fixes.fix_print',  # no way
                 'libpasteurize.fixes.fix_printfunction',  # adds __future__ import print_function
                 # 'libpasteurize.fixes.fix_raise_',   # TODO: get this working!

                 # 'libpasteurize.fixes.fix_range',  # nope
                 # 'libpasteurize.fixes.fix_reduce',
                 # 'libpasteurize.fixes.fix_setliteral',
                 # 'libpasteurize.fixes.fix_str',
                 # 'libpasteurize.fixes.fix_super',  # maybe, if our magic super() isn't robust enough
                 'libpasteurize.fixes.fix_throw',   # yes, if Py3 supports it
                 # 'libpasteurize.fixes.fix_unittest',
                 'libpasteurize.fixes.fix_unpacking',  # yes, this is useful
                 # 'libpasteurize.fixes.fix_with'      # way out of date
                ])
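The dotted paths in `fix_names` follow lib2to3's convention: the last component is `fix_<name>`, and the bare `<name>` is what refactoring tools use when enabling or disabling individual fixers. A small sketch of that mapping (the helper `short_names` is hypothetical):

```python
fix_names = {
    'libpasteurize.fixes.fix_getcwd',
    'libpasteurize.fixes.fix_newstyle',
    'libfuturize.fixes.fix_future_standard_library',
}

def short_names(dotted):
    # Map 'pkg.fixes.fix_getcwd' -> 'getcwd', the short form lib2to3
    # uses to refer to individual fixers by name.
    return sorted(name.rsplit('.', 1)[1][len('fix_'):] for name in dotted)

print(short_names(fix_names))
# ['future_standard_library', 'getcwd', 'newstyle']
```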
# ==== libpasteurize/fixes/fix_raise.py ====
u"""Fixer for 'raise E(V).with_traceback(T)' -> 'raise E, V, T'"""

from lib2to3 import fixer_base
from lib2to3.fixer_util import Comma, Node, Leaf, token, syms

class FixRaise(fixer_base.BaseFix):

    PATTERN = u"""
    raise_stmt< 'raise' (power< name=any [trailer< '(' val=any* ')' >]
        [trailer< '.' 'with_traceback' > trailer< '(' trc=any ')' >] > | any) ['from' chain=any] >"""

    def transform(self, node, results):
        name, val, trc = (results.get(u"name"), results.get(u"val"), results.get(u"trc"))
        chain = results.get(u"chain")
        if chain is not None:
            self.warning(node, u"explicit exception chaining is not supported in Python 2")
            chain.prev_sibling.remove()
            chain.remove()
        if trc is not None:
            val = val[0] if val else Leaf(token.NAME, u"None")
            val.prefix = trc.prefix = u" "
            kids = [Leaf(token.NAME, u"raise"), name.clone(), Comma(),
                    val.clone(), Comma(), trc.clone()]
            raise_stmt = Node(syms.raise_stmt, kids)
            node.replace(raise_stmt)
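Since the target form `raise E, V, T` is Python 2 syntax, the rewrite can only be demonstrated at the source-text level in Python 3. A deliberately simplified regex sketch of the same transformation for the easy case (the real fixer works on the parse tree and also strips `from` chaining; `rewrite_raise` is illustrative only):

```python
import re

_PAT = re.compile(r"raise\s+(\w+)\((.*?)\)\.with_traceback\((\w+)\)")

def rewrite_raise(source):
    # 'raise E(V).with_traceback(T)' -> 'raise E, V, T' for the simple
    # single-argument case; an empty argument list becomes None, mirroring
    # the fixer's val handling above.
    def repl(m):
        exc, val, trc = m.group(1), m.group(2) or "None", m.group(3)
        return "raise %s, %s, %s" % (exc, val, trc)
    return _PAT.sub(repl, source)

print(rewrite_raise("raise KeyError(msg).with_traceback(tb)"))
# raise KeyError, msg, tb
```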
# ==== libpasteurize/fixes/fix_newstyle.py ====
u"""
Fixer for "class Foo: ..." -> "class Foo(object): ..."
"""

from lib2to3 import fixer_base
from lib2to3.fixer_util import LParen, RParen, Name

from libfuturize.fixer_util import touch_import_top


def insert_object(node, idx):
    node.insert_child(idx, RParen())
    node.insert_child(idx, Name(u"object"))
    node.insert_child(idx, LParen())

class FixNewstyle(fixer_base.BaseFix):

    # Match:
    #   class Blah:
    # and:
    #   class Blah():

    PATTERN = u"classdef< 'class' NAME ['(' ')'] colon=':' any >"

    def transform(self, node, results):
        colon = results[u"colon"]
        idx = node.children.index(colon)
        if (node.children[idx-2].value == '(' and
            node.children[idx-1].value == ')'):
            del node.children[idx-2:idx]
            idx -= 2
        insert_object(node, idx)
        touch_import_top(u'builtins', 'object', node)
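For the common case, the class-header rewrite can be sketched with a regex over source text (illustrative only; the real fixer edits the parse tree and additionally adds `from builtins import object` at the top of the module):

```python
import re

def make_newstyle(source):
    # 'class Foo:' or 'class Foo():' -> 'class Foo(object):'.
    # Classes that already name a base are left alone.
    return re.sub(r"\bclass\s+(\w+)\s*(?:\(\s*\))?\s*:",
                  r"class \1(object):", source)

print(make_newstyle("class Foo:\n    pass\n"))
```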
# ==== libpasteurize/fixes/fix_throw.py ====
u"""Fixer for 'g.throw(E(V).with_traceback(T))' -> 'g.throw(E, V, T)'"""

from lib2to3 import fixer_base
from lib2to3.pytree import Node, Leaf
from lib2to3.pgen2 import token
from lib2to3.fixer_util import Comma

class FixThrow(fixer_base.BaseFix):

    PATTERN = u"""
    power< any trailer< '.' 'throw' >
        trailer< '(' args=power< exc=any trailer< '(' val=any* ')' >
        trailer< '.' 'with_traceback' > trailer< '(' trc=any ')' > > ')' > >
    """

    def transform(self, node, results):
        syms = self.syms
        exc, val, trc = (results[u"exc"], results[u"val"], results[u"trc"])
        val = val[0] if val else Leaf(token.NAME, u"None")
        val.prefix = trc.prefix = u" "
        kids = [exc.clone(), Comma(), val.clone(), Comma(), trc.clone()]
        args = results[u"args"]
        args.children = kids
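As with the raise fixer, the simple case of the throw rewrite can be sketched at the string level (the real fixer rewrites the parse tree, so nested or keyword arguments are also handled; `rewrite_throw` is an illustrative helper, not part of the package):

```python
import re

_THROW = re.compile(r"\.throw\((\w+)\((.*?)\)\.with_traceback\((\w+)\)\)")

def rewrite_throw(source):
    # 'g.throw(E(V).with_traceback(T))' -> 'g.throw(E, V, T)'; an empty
    # argument list becomes None, mirroring the val handling above.
    def repl(m):
        exc, val, trc = m.group(1), m.group(2) or "None", m.group(3)
        return ".throw(%s, %s, %s)" % (exc, val, trc)
    return _THROW.sub(repl, source)

print(rewrite_throw("g.throw(StopIteration(v).with_traceback(tb))"))
# g.throw(StopIteration, v, tb)
```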
# ==== libpasteurize/fixes/__pycache__/ (binary payload removed) ====
# The remainder of this archive listing is compiled CPython 3.9 bytecode
# (*.cpython-39.pyc files), not readable source. Entries present:
#   fix_future_builtins, fix_raise_, fix_features,
#   fix_add_all__future__imports, __init__, fix_division,
#   fix_add_all_future_builtins, fix_imports, fix_newstyle, fix_throw,
#   fix_printfunction, fix_next, fix_raise, fix_annotations, fix_kwargs,
#   fix_add_future_standard_library_import, fix_getcwd, feature_base,
#   fix_memoryview, fix_metaclass, fix_unpacking,
#   fix_imports2 ("Fixer for complicated imports", truncated at end of chunk)
�)�
fixer_base)�Name�String�
FromImport�Newline�Comma��touch_import_top)TZACTIVE�ALLZANCHORZARCZBASELINEZBEVELZBOTHZBOTTOMZBROWSEZBUTTZCASCADEZCENTERZCHARZCHECKBUTTONZCHORDZCOMMANDZCURRENTZDISABLEDZDOTBOX�EZENDZEWZ	EXCEPTIONZEXTENDED�FALSEZFIRSTZFLATZGROOVEZHIDDENZ
HORIZONTALZINSERTZINSIDEZLAST�LEFTZMITERZMOVETOZMULTIPLE�NZNE�NO�NONE�NORMALZNSZNSEWZNUMERICZNWZOFF�ONZOUTSIDEZPAGESZPIESLICEZ
PROJECTINGZRADIOBUTTONZRAISEDZREADABLEZRIDGE�RIGHTZROUND�SZSCROLLZSEZSELZ	SEL_FIRSTZSEL_LASTZ	SEPARATORZSINGLEZSOLIDZSUNKENZSWZStringTypesZTOP�TRUEZ
TclVersionZ	TkVersionZ	UNDERLINE�UNITSZVERTICAL�WZWORDZWRITABLE�X�YZYESZwantobjects)"�AbstractBasicAuthHandler�AbstractDigestAuthHandler�AbstractHTTPHandler�BaseHandler�CacheFTPHandler�
FTPHandler�FileHandler�HTTPBasicAuthHandler�HTTPCookieProcessor�HTTPDefaultErrorHandler�HTTPDigestAuthHandler�	HTTPError�HTTPErrorProcessor�HTTPHandler�HTTPPasswordMgr�HTTPPasswordMgrWithDefaultRealm�HTTPRedirectHandler�HTTPSHandler�OpenerDirector�ProxyBasicAuthHandler�ProxyDigestAuthHandler�ProxyHandler�Request�StringIO�URLError�UnknownHandler�
addinfourl�build_opener�install_opener�parse_http_list�parse_keqv_listZrandombytes�request_host�urlopen)�ContentTooShortError�FancyURLopener�	URLopenerZbasejoin�	ftperrors�
getproxies�getproxies_environment�	localhost�pathname2url�quote�
quote_plus�	splitattr�	splithost�
splitnport�splitpasswd�	splitport�
splitquery�splittag�	splittype�	splituser�
splitvalue�thishost�unquote�unquote_plus�unwrap�url2pathname�
urlcleanup�	urlencoder:�urlretrieve)�parse_qs�	parse_qsl�	urldefrag�urljoin�urlparse�urlsplit�
urlunparse�
urlunsplit)ZndbmZgnuZdumb)�error�open)�whichdb)ZBaseHTTPRequestHandlerZ
HTTPServer)�CGIHTTPRequestHandler)�SimpleHTTPRequestHandler)�
FileDialog�LoadFileDialog�SaveFileDialogZdialogstates�test)Z	DirectoryZOpenZSaveAsZ_DialogZaskdirectoryZaskopenfileZaskopenfilenameZaskopenfilenamesZaskopenfilesZ
asksaveasfileZasksaveasfilename)�SimpleDialog)ZaskfloatZ
askintegerZ	askstring�Dialog)ZCGIXMLRPCRequestHandlerZSimpleXMLRPCDispatcherZSimpleXMLRPCRequestHandler�SimpleXMLRPCServerZlist_public_methodsZremove_duplicatesZresolve_dotted_attribute)�DocCGIXMLRPCRequestHandler�DocXMLRPCRequestHandler�DocXMLRPCServer�
ServerHTMLDoc�XMLRPCDocGenerator)�urllib2�urllibr[�dbm�anydbmra�BaseHTTPServer�
CGIHTTPServer�SimpleHTTPServerrd�tkFileDialogrh�tkSimpleDialogrjrm)rprq)rprqr[)rsra)rurvrt)rwrd)rxrh)rmrj)zurllib.requestzurllib.errorzurllib.parsezdbm.__init__zhttp.serverztkinter.filedialogztkinter.simpledialogz
xmlrpc.serverz	name='%s'z	attr='%s'z
using='%s'z$dotted_name=dotted_name< %s '.' %s >z?pow=power< %s trailer< '.' %s > trailer< '.' using=any > any* >z-pow=power< %s trailer< '.' using=any > any* >z�from_import=import_from< 'from' %s 'import' (import_as_name< using=any 'as' renamed=any> | in_list=import_as_names< using=any* > | using='*' | using=NAME) >zSname_import=import_name< 'import' (%s | in_list=dotted_as_names< imp_list=any* >) >z8name_import_rename=dotted_as_name< %s 'as' renamed=any >z�from_import_rename=import_from< 'from' %s 'import' (%s | import_as_name< %s 'as' renamed=any > | in_list=import_as_names< any* (%s | import_as_name< %s 'as' renamed=any >) any* >) >cCsNdd�tD�}dd�dd�|D��}|d7}|d�dd�|D��d7}|S)zI
    Builds a pattern for all toplevel names
    (urllib, http, etc)
    cSsg|]}|�d��qS)�.)�split��.0�mod�r~�J/usr/local/lib/python3.9/site-packages/libpasteurize/fixes/fix_imports2.py�
<listcomp>��z*all_modules_subpattern.<locals>.<listcomp>z( z | cSs(g|] }tt|dt|df�qS)r�)�dotted_name�simple_name�simple_attrr{r~r~rr��s�
�cSs$g|]}|ddkrt|d�qS)r��__init__r)r�r{r~r~rr��r�z ))�MAPPING�join)Znames_dot_attrs�retr~r~r�all_modules_subpattern�s�
r�c	cs�tt�V|��D]�\}}|�d�\}}t|}t|}t||f}t|Vt||fV|dkrzt|Vt	|Vt
|Vt|||||fVqdS)z�
    mapping1: A dict mapping py3k modules to all possible py2k replacements
    mapping2: A dict mapping py2k modules to the things they do
    This builds a HUGE pattern to match all ways that things can be imported
    ryr�N)�from_importr��itemsrzr�r�r��name_import�
power_twoname�
power_onename�name_import_rename�from_import_rename)	Zmapping1Zmapping2Zpy3kZpy2k�name�attrZs_nameZs_attrZd_namer~r~r�build_import_pattern�s



r�c@s(eZdZdZd�eee��Zdd�Z	dS)�FixImports2�z | 
cCstdd|�dS)N�futureZstandard_libraryr)�self�node�resultsr~r~r�	transform�szFixImports2.transformN)
�__name__�
__module__�__qualname__Z	run_orderr�r�r��
PY2MODULESZPATTERNr�r~r~r~rr��sr�N)�__doc__Zlib2to3rZlib2to3.fixer_utilrrrrrZlibfuturize.fixer_utilr	Z
TK_BASE_NAMESr�r�r�r�Zsimple_usingr�r�r�r�r�r�r�r�r�ZBaseFixr�r~r~r~r�<module>sT

�=�
PKGu\���JJ>libpasteurize/fixes/__pycache__/fix_fullargspec.cpython-39.pycnu�[���a

��?h��@s6dZddlmZddlmZdZGdd�dej�ZdS)z(
Fixer for getfullargspec -> getargspec
�)�
fixer_base)�Namez_some of the values returned by getfullargspec are not valid in Python 2 and have no equivalent.c@seZdZdZdd�ZdS)�FixFullargspecz'getfullargspec'cCs|�|t�td|jd�S)N�
getargspec)�prefix)�warning�warn_msgrr)�self�node�results�r�M/usr/local/lib/python3.9/site-packages/libpasteurize/fixes/fix_fullargspec.py�	transformszFixFullargspec.transformN)�__name__�
__module__�__qualname__ZPATTERNrrrrr
r
srN)�__doc__Zlib2to3rZlib2to3.fixer_utilrrZBaseFixrrrrr
�<module>sPKGu\-�B��$libpasteurize/fixes/fix_metaclass.pynu�[���u"""
Fixer for (metaclass=X) -> __metaclass__ = X
Some semantics (see PEP 3115) may be altered in the translation."""

from lib2to3 import fixer_base
from lib2to3.fixer_util import Name, syms, Node, Leaf, Newline, find_root
from lib2to3.pygram import token
from libfuturize.fixer_util import indentation, suitify
# from ..fixer_util import Name, syms, Node, Leaf, Newline, find_root, indentation, suitify

def has_metaclass(parent):
    results = None
    for node in parent.children:
        kids = node.children
        if node.type == syms.argument:
            if kids[0] == Leaf(token.NAME, u"metaclass") and \
                kids[1] == Leaf(token.EQUAL, u"=") and \
                kids[2]:
                #Hack to avoid "class X(=):" with this case.
                results = [node] + kids
                break
        elif node.type == syms.arglist:
            # Argument list... loop through it looking for:
            # Node(*, [*, Leaf(token.NAME, u"metaclass"), Leaf(token.EQUAL, u"="), Leaf(*, *)]
            for child in node.children:
                if results: break
                if child.type == token.COMMA:
                    #Store the last comma, which precedes the metaclass
                    comma = child
                elif type(child) == Node:
                    meta = equal = name = None
                    for arg in child.children:
                        if arg == Leaf(token.NAME, u"metaclass"):
                            #We have the (metaclass) part
                            meta = arg
                        elif meta and arg == Leaf(token.EQUAL, u"="):
                            #We have the (metaclass=) part
                            equal = arg
                        elif meta and equal:
                            #Here we go, we have (metaclass=X)
                            name = arg
                            results = (comma, meta, equal, name)
                            break
    return results


class FixMetaclass(fixer_base.BaseFix):

    PATTERN = u"""
    classdef<any*>
    """

    def transform(self, node, results):
        meta_results = has_metaclass(node)
        if not meta_results: return
        for meta in meta_results:
            meta.remove()
        target = Leaf(token.NAME, u"__metaclass__")
        equal = Leaf(token.EQUAL, u"=", prefix=u" ")
        # meta is the last item in what was returned by has_metaclass(): name
        name = meta
        name.prefix = u" "
        stmt_node = Node(syms.atom, [target, equal, name])

        suitify(node)
        for item in node.children:
            if item.type == syms.suite:
                for stmt in item.children:
                    if stmt.type == token.INDENT:
                        # Insert, in reverse order, the statement, a newline,
                        # and an indent right after the first indented line
                        loc = item.children.index(stmt) + 1
                        # Keep consistent indentation form
                        ident = Leaf(token.INDENT, stmt.value)
                        item.insert_child(loc, ident)
                        item.insert_child(loc, Newline())
                        item.insert_child(loc, stmt_node)
                        break
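The fixer above rewrites the PEP 3115 `metaclass=X` keyword into the old Python 2 `__metaclass__` class attribute. The semantic equivalence it relies on (under Python 3, where only the keyword form exists) can be illustrated with a minimal sketch; `Meta` and `X` are illustrative names, not part of the archive:

```python
class Meta(type):
    # A metaclass that tags every class it creates.
    def __new__(mcs, name, bases, ns):
        ns["marked"] = True
        return super().__new__(mcs, name, bases, ns)

# Python 3 spelling the fixer matches: class X(metaclass=Meta)
# Python 2 spelling it emits:         class X: __metaclass__ = Meta
class X(metaclass=Meta):
    pass

assert X.marked is True
```

As the module docstring warns, the translation is not perfectly faithful: PEP 3115 metaclasses can observe the class body as it executes, while the `__metaclass__` attribute is only consulted after the body has run.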
# File: libpasteurize/fixes/fix_imports2.py
u"""
Fixer for complicated imports
"""

from lib2to3 import fixer_base
from lib2to3.fixer_util import Name, String, FromImport, Newline, Comma
from libfuturize.fixer_util import touch_import_top


TK_BASE_NAMES = (u'ACTIVE', u'ALL', u'ANCHOR', u'ARC',u'BASELINE', u'BEVEL', u'BOTH',
                 u'BOTTOM', u'BROWSE', u'BUTT', u'CASCADE', u'CENTER', u'CHAR',
                 u'CHECKBUTTON', u'CHORD', u'COMMAND', u'CURRENT', u'DISABLED',
                 u'DOTBOX', u'E', u'END', u'EW', u'EXCEPTION', u'EXTENDED', u'FALSE',
                 u'FIRST', u'FLAT', u'GROOVE', u'HIDDEN', u'HORIZONTAL', u'INSERT',
                 u'INSIDE', u'LAST', u'LEFT', u'MITER', u'MOVETO', u'MULTIPLE', u'N',
                 u'NE', u'NO', u'NONE', u'NORMAL', u'NS', u'NSEW', u'NUMERIC', u'NW',
                 u'OFF', u'ON', u'OUTSIDE', u'PAGES', u'PIESLICE', u'PROJECTING',
                 u'RADIOBUTTON', u'RAISED', u'READABLE', u'RIDGE', u'RIGHT',
                 u'ROUND', u'S', u'SCROLL', u'SE', u'SEL', u'SEL_FIRST', u'SEL_LAST',
                 u'SEPARATOR', u'SINGLE', u'SOLID', u'SUNKEN', u'SW', u'StringTypes',
                 u'TOP', u'TRUE', u'TclVersion', u'TkVersion', u'UNDERLINE',
                 u'UNITS', u'VERTICAL', u'W', u'WORD', u'WRITABLE', u'X', u'Y', u'YES',
                 u'wantobjects')

PY2MODULES = {
              u'urllib2' : (
                  u'AbstractBasicAuthHandler', u'AbstractDigestAuthHandler',
                  u'AbstractHTTPHandler', u'BaseHandler', u'CacheFTPHandler',
                  u'FTPHandler', u'FileHandler', u'HTTPBasicAuthHandler',
                  u'HTTPCookieProcessor', u'HTTPDefaultErrorHandler',
                  u'HTTPDigestAuthHandler', u'HTTPError', u'HTTPErrorProcessor',
                  u'HTTPHandler', u'HTTPPasswordMgr',
                  u'HTTPPasswordMgrWithDefaultRealm', u'HTTPRedirectHandler',
                  u'HTTPSHandler', u'OpenerDirector', u'ProxyBasicAuthHandler',
                  u'ProxyDigestAuthHandler', u'ProxyHandler', u'Request',
                  u'StringIO', u'URLError', u'UnknownHandler', u'addinfourl',
                  u'build_opener', u'install_opener', u'parse_http_list',
                  u'parse_keqv_list', u'randombytes', u'request_host', u'urlopen'),
              u'urllib' : (
                  u'ContentTooShortError', u'FancyURLopener',u'URLopener',
                  u'basejoin', u'ftperrors', u'getproxies',
                  u'getproxies_environment', u'localhost', u'pathname2url',
                  u'quote', u'quote_plus', u'splitattr', u'splithost',
                  u'splitnport', u'splitpasswd', u'splitport', u'splitquery',
                  u'splittag', u'splittype', u'splituser', u'splitvalue',
                  u'thishost', u'unquote', u'unquote_plus', u'unwrap',
                  u'url2pathname', u'urlcleanup', u'urlencode', u'urlopen',
                  u'urlretrieve',),
              u'urlparse' : (
                  u'parse_qs', u'parse_qsl', u'urldefrag', u'urljoin',
                  u'urlparse', u'urlsplit', u'urlunparse', u'urlunsplit'),
              u'dbm' : (
                  u'ndbm', u'gnu', u'dumb'),
              u'anydbm' : (
                  u'error', u'open'),
              u'whichdb' : (
                  u'whichdb',),
              u'BaseHTTPServer' : (
                  u'BaseHTTPRequestHandler', u'HTTPServer'),
              u'CGIHTTPServer' : (
                  u'CGIHTTPRequestHandler',),
              u'SimpleHTTPServer' : (
                  u'SimpleHTTPRequestHandler',),
              u'FileDialog' : TK_BASE_NAMES + (
                  u'FileDialog', u'LoadFileDialog', u'SaveFileDialog',
                  u'dialogstates', u'test'),
              u'tkFileDialog' : (
                  u'Directory', u'Open', u'SaveAs', u'_Dialog', u'askdirectory',
                  u'askopenfile', u'askopenfilename', u'askopenfilenames',
                  u'askopenfiles', u'asksaveasfile', u'asksaveasfilename'),
              u'SimpleDialog' : TK_BASE_NAMES + (
                  u'SimpleDialog',),
              u'tkSimpleDialog' : TK_BASE_NAMES + (
                  u'askfloat', u'askinteger', u'askstring', u'Dialog'),
              u'SimpleXMLRPCServer' : (
                  u'CGIXMLRPCRequestHandler', u'SimpleXMLRPCDispatcher',
                  u'SimpleXMLRPCRequestHandler', u'SimpleXMLRPCServer',
                  u'list_public_methods', u'remove_duplicates',
                  u'resolve_dotted_attribute'),
              u'DocXMLRPCServer' : (
                  u'DocCGIXMLRPCRequestHandler', u'DocXMLRPCRequestHandler',
                  u'DocXMLRPCServer', u'ServerHTMLDoc',u'XMLRPCDocGenerator'),
                }

MAPPING = { u'urllib.request' :
                (u'urllib2', u'urllib'),
            u'urllib.error' :
                (u'urllib2', u'urllib'),
            u'urllib.parse' :
                (u'urllib2', u'urllib', u'urlparse'),
            u'dbm.__init__' :
                (u'anydbm', u'whichdb'),
            u'http.server' :
                (u'CGIHTTPServer', u'SimpleHTTPServer', u'BaseHTTPServer'),
            u'tkinter.filedialog' :
                (u'tkFileDialog', u'FileDialog'),
            u'tkinter.simpledialog' :
                (u'tkSimpleDialog', u'SimpleDialog'),
            u'xmlrpc.server' :
                (u'DocXMLRPCServer', u'SimpleXMLRPCServer'),
            }

# helps match 'http', as in 'from http.server import ...'
simple_name = u"name='%s'"
# helps match 'server', as in 'from http.server import ...'
simple_attr = u"attr='%s'"
# helps match 'HTTPServer', as in 'from http.server import HTTPServer'
simple_using = u"using='%s'"
# helps match 'urllib.request', as in 'import urllib.request'
dotted_name = u"dotted_name=dotted_name< %s '.' %s >"
# helps match 'http.server', as in 'http.server.HTTPServer(...)'
power_twoname = u"pow=power< %s trailer< '.' %s > trailer< '.' using=any > any* >"
# helps match 'dbm.whichdb', as in 'dbm.whichdb(...)'
power_onename = u"pow=power< %s trailer< '.' using=any > any* >"
# helps match 'from http.server import HTTPServer'
# also helps match 'from http.server import HTTPServer, SimpleHTTPRequestHandler'
# also helps match 'from http.server import *'
from_import = u"from_import=import_from< 'from' %s 'import' (import_as_name< using=any 'as' renamed=any> | in_list=import_as_names< using=any* > | using='*' | using=NAME) >"
# helps match 'import urllib.request'
name_import = u"name_import=import_name< 'import' (%s | in_list=dotted_as_names< imp_list=any* >) >"

#############
# WON'T FIX #
#############

# helps match 'import urllib.request as name'
name_import_rename = u"name_import_rename=dotted_as_name< %s 'as' renamed=any >"
# helps match 'from http import server'
from_import_rename = u"from_import_rename=import_from< 'from' %s 'import' (%s | import_as_name< %s 'as' renamed=any > | in_list=import_as_names< any* (%s | import_as_name< %s 'as' renamed=any >) any* >) >"


def all_modules_subpattern():
    u"""
    Builds a pattern for all toplevel names
    (urllib, http, etc)
    """
    names_dot_attrs = [mod.split(u".") for mod in MAPPING]
    ret = u"( " + u" | ".join([dotted_name % (simple_name % (mod[0]),
                                            simple_attr % (mod[1])) for mod in names_dot_attrs])
    ret += u" | "
    ret += u" | ".join([simple_name % (mod[0]) for mod in names_dot_attrs if mod[1] == u"__init__"]) + u" )"
    return ret
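The `%`-formatting scheme used above composes lib2to3 match patterns out of the small template strings defined earlier in this file. A minimal sketch of one composition step (the template strings are copied from this module; `urllib`/`request` are just example values):

```python
# Template fragments as defined in fix_imports2.py
simple_name = u"name='%s'"
simple_attr = u"attr='%s'"
dotted_name = u"dotted_name=dotted_name< %s '.' %s >"

# Compose the pattern that matches the dotted module name 'urllib.request'
pat = dotted_name % (simple_name % u"urllib", simple_attr % u"request")
assert pat == "dotted_name=dotted_name< name='urllib' '.' attr='request' >"
```

`all_modules_subpattern` simply joins one such fragment per entry of `MAPPING` with `" | "` into a single alternation.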


def build_import_pattern(mapping1, mapping2):
    u"""
    mapping1: A dict mapping py3k modules to all possible py2k replacements
    mapping2: A dict mapping py2k modules to the things they do
    This builds a HUGE pattern to match all ways that things can be imported
    """
    # py3k: urllib.request, py2k: ('urllib2', 'urllib')
    yield from_import % (all_modules_subpattern())
    for py3k, py2k in mapping1.items():
        name, attr = py3k.split(u'.')
        s_name = simple_name % (name)
        s_attr = simple_attr % (attr)
        d_name = dotted_name % (s_name, s_attr)
        yield name_import % (d_name)
        yield power_twoname % (s_name, s_attr)
        if attr == u'__init__':
            yield name_import % (s_name)
            yield power_onename % (s_name)
        yield name_import_rename % (d_name)
        yield from_import_rename % (s_name, s_attr, s_attr, s_attr, s_attr)


class FixImports2(fixer_base.BaseFix):

    run_order = 4

    PATTERN = u" | \n".join(build_import_pattern(MAPPING, PY2MODULES))

    def transform(self, node, results):
        touch_import_top(u'future', u'standard_library', node)
# File: libpasteurize/fixes/fix_imports.py
u"""
Fixer for standard library imports renamed in Python 3
"""

from lib2to3 import fixer_base
from lib2to3.fixer_util import Name, is_probably_builtin, Newline, does_tree_import
from lib2to3.pygram import python_symbols as syms
from lib2to3.pgen2 import token
from lib2to3.pytree import Node, Leaf

from libfuturize.fixer_util import touch_import_top
# from ..fixer_util import NameImport

# used in simple_mapping_to_pattern()
MAPPING = {u"reprlib": u"repr",
           u"winreg": u"_winreg",
           u"configparser": u"ConfigParser",
           u"copyreg": u"copy_reg",
           u"multiprocessing.SimpleQueue": u"multiprocessing.queues.SimpleQueue",
           u"queue": u"Queue",
           u"socketserver": u"SocketServer",
           u"_markupbase": u"markupbase",
           u"test.support": u"test.test_support",
           u"dbm.bsd": u"dbhash",
           u"dbm.ndbm": u"dbm",
           u"dbm.dumb": u"dumbdbm",
           u"dbm.gnu": u"gdbm",
           u"html.parser": u"HTMLParser",
           u"html.entities": u"htmlentitydefs",
           u"http.client": u"httplib",
           u"http.cookies": u"Cookie",
           u"http.cookiejar": u"cookielib",
#          "tkinter": "Tkinter",
           u"tkinter.dialog": u"Dialog",
           u"tkinter._fix": u"FixTk",
           u"tkinter.scrolledtext": u"ScrolledText",
           u"tkinter.tix": u"Tix",
           u"tkinter.constants": u"Tkconstants",
           u"tkinter.dnd": u"Tkdnd",
           u"tkinter.__init__": u"Tkinter",
           u"tkinter.colorchooser": u"tkColorChooser",
           u"tkinter.commondialog": u"tkCommonDialog",
           u"tkinter.font": u"tkFont",
           u"tkinter.ttk": u"ttk",
           u"tkinter.messagebox": u"tkMessageBox",
           u"tkinter.turtle": u"turtle",
           u"urllib.robotparser": u"robotparser",
           u"xmlrpc.client": u"xmlrpclib",
           u"builtins": u"__builtin__",
}

# generic strings to help build patterns
# these variables mean (with http.client.HTTPConnection as an example):
# name = http
# attr = client
# used = HTTPConnection
# fmt_name is a formatted subpattern (simple_name_match or dotted_name_match)

# helps match 'queue', as in 'from queue import ...'
simple_name_match = u"name='%s'"
# helps match 'client', to be used if client has been imported from http
subname_match = u"attr='%s'"
# helps match 'http.client', as in 'import http.client'
dotted_name_match = u"dotted_name=dotted_name< %s '.' %s >"
# helps match 'queue', as in 'queue.Queue(...)'
power_onename_match = u"%s"
# helps match 'http.client', as in 'http.client.HTTPConnection(...)'
power_twoname_match = u"power< %s trailer< '.' %s > any* >"
# helps match 'client.HTTPConnection', if 'client' has been imported from http
power_subname_match = u"power< %s any* >"
# helps match 'from http.client import HTTPConnection'
from_import_match = u"from_import=import_from< 'from' %s 'import' imported=any >"
# helps match 'from http import client'
from_import_submod_match = u"from_import_submod=import_from< 'from' %s 'import' (%s | import_as_name< %s 'as' renamed=any > | import_as_names< any* (%s | import_as_name< %s 'as' renamed=any >) any* > ) >"
# helps match 'import urllib.request'
name_import_match = u"name_import=import_name< 'import' %s > | name_import=import_name< 'import' dotted_as_name< %s 'as' renamed=any > >"
# helps match 'import http.client, winreg'
multiple_name_import_match = u"name_import=import_name< 'import' dotted_as_names< names=any* > >"

def all_patterns(name):
    u"""
    Accepts a string and returns a pattern of possible patterns involving that name
    Called by simple_mapping_to_pattern for each name in the mapping it receives.
    """

    # i_ denotes an import-like node
    # u_ denotes a node that appears to be a usage of the name
    if u'.' in name:
        name, attr = name.split(u'.', 1)
        simple_name = simple_name_match % (name)
        simple_attr = subname_match % (attr)
        dotted_name = dotted_name_match % (simple_name, simple_attr)
        i_from = from_import_match % (dotted_name)
        i_from_submod = from_import_submod_match % (simple_name, simple_attr, simple_attr, simple_attr, simple_attr)
        i_name = name_import_match % (dotted_name, dotted_name)
        u_name = power_twoname_match % (simple_name, simple_attr)
        u_subname = power_subname_match % (simple_attr)
        return u' | \n'.join((i_name, i_from, i_from_submod, u_name, u_subname))
    else:
        simple_name = simple_name_match % (name)
        i_name = name_import_match % (simple_name, simple_name)
        i_from = from_import_match % (simple_name)
        u_name = power_onename_match % (simple_name)
        return u' | \n'.join((i_name, i_from, u_name))
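For a name without a dot, `all_patterns` joins just three alternatives. A self-contained sketch of that branch, with the two template strings copied from this module and `queue` as the example name:

```python
# Templates as defined in fix_imports.py
simple_name_match = u"name='%s'"
name_import_match = (u"name_import=import_name< 'import' %s > | "
                     u"name_import=import_name< 'import' dotted_as_name< %s 'as' renamed=any > >")

simple_name = simple_name_match % u"queue"
i_name = name_import_match % (simple_name, simple_name)

# Both the plain 'import queue' and the 'import queue as q' branch are present
assert "name='queue'" in i_name
assert i_name.count("import_name") == 2
```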


class FixImports(fixer_base.BaseFix):

    PATTERN = u' | \n'.join([all_patterns(name) for name in MAPPING])
    PATTERN = u' | \n'.join((PATTERN, multiple_name_import_match))

    def transform(self, node, results):
        touch_import_top(u'future', u'standard_library', node)
# File: libpasteurize/fixes/fix_fullargspec.py
u"""
Fixer for getfullargspec -> getargspec
"""

from lib2to3 import fixer_base
from lib2to3.fixer_util import Name

warn_msg = u"some of the values returned by getfullargspec are not valid in Python 2 and have no equivalent."

class FixFullargspec(fixer_base.BaseFix):

    PATTERN = u"'getfullargspec'"

    def transform(self, node, results):
        self.warning(node, warn_msg)
        return Name(u"getargspec", prefix=node.prefix)
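The warning this fixer emits exists because `inspect.getfullargspec` reports Python 3-only information (keyword-only arguments, annotations) that the legacy `getargspec` cannot represent. A small illustration (the function `f` is an illustrative example, not part of the archive):

```python
import inspect

def f(a, b=1, *args, kw_only=None, **kw):
    pass

spec = inspect.getfullargspec(f)
assert spec.args == ["a", "b"]
# These fields have no equivalent in the old getargspec result:
assert spec.kwonlyargs == ["kw_only"]
assert spec.annotations == {}
```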
# File: libpasteurize/fixes/fix_annotations.py
u"""
Fixer to remove function annotations
"""

from lib2to3 import fixer_base
from lib2to3.pgen2 import token
from lib2to3.fixer_util import syms

warning_text = u"Removing function annotations completely."

def param_without_annotations(node):
    return node.children[0]

class FixAnnotations(fixer_base.BaseFix):

    warned = False

    def warn_once(self, node, reason):
        if not self.warned:
            self.warned = True
            self.warning(node, reason=reason)

    PATTERN = u"""
              funcdef< 'def' any parameters< '(' [params=any] ')' > ['->' ret=any] ':' any* >
              """

    def transform(self, node, results):
        u"""
        This just strips annotations from the funcdef completely.
        """
        params = results.get(u"params")
        ret = results.get(u"ret")
        if ret is not None:
            assert ret.prev_sibling.type == token.RARROW, u"Invalid return annotation"
            self.warn_once(node, reason=warning_text)
            ret.prev_sibling.remove()
            ret.remove()
        if params is None: return
        if params.type == syms.typedargslist:
            # more than one param in a typedargslist
            for param in params.children:
                if param.type == syms.tname:
                    self.warn_once(node, reason=warning_text)
                    param.replace(param_without_annotations(param))
        elif params.type == syms.tname:
            # one param
            self.warn_once(node, reason=warning_text)
            params.replace(param_without_annotations(params))
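The fixer above strips annotations at the lib2to3 parse-tree level. An analogous transformation can be sketched with the stdlib `ast` module (this is an illustration of the same idea, not how `FixAnnotations` is implemented; requires Python 3.9+ for `ast.unparse`):

```python
import ast

src = "def f(x: int, y: str = 'a') -> bool:\n    return True\n"
tree = ast.parse(src)
fn = tree.body[0]
fn.returns = None                  # drop the '-> bool' return annotation
for arg in fn.args.args:
    arg.annotation = None          # drop ': int' and ': str'
stripped = ast.unparse(tree)
assert "->" not in stripped and "int" not in stripped
```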
# File: libpasteurize/fixes/fix_kwargs.py
u"""
Fixer for Python 3 function parameter syntax
This fixer is rather sensitive to incorrect py3k syntax.
"""

# Note: "relevant" parameters are parameters following the first STAR in the list.

from lib2to3 import fixer_base
from lib2to3.fixer_util import token, String, Newline, Comma, Name
from libfuturize.fixer_util import indentation, suitify, DoubleStar

_assign_template = u"%(name)s = %(kwargs)s['%(name)s']; del %(kwargs)s['%(name)s']"
_if_template = u"if '%(name)s' in %(kwargs)s: %(assign)s"
_else_template = u"else: %(name)s = %(default)s"
_kwargs_default_name = u"_3to2kwargs"

def gen_params(raw_params):
    u"""
    Generator that yields tuples of (name, default_value) for each parameter in the list
    If no default is given, default_value is None (not Leaf(token.NAME, 'None'))
    """
    assert raw_params[0].type == token.STAR and len(raw_params) > 2
    curr_idx = 2 # the first place a keyword-only parameter name can be is index 2
    max_idx = len(raw_params)
    while curr_idx < max_idx:
        curr_item = raw_params[curr_idx]
        prev_item = curr_item.prev_sibling
        if curr_item.type != token.NAME:
            curr_idx += 1
            continue
        if prev_item is not None and prev_item.type == token.DOUBLESTAR:
            break
        name = curr_item.value
        nxt = curr_item.next_sibling
        if nxt is not None and nxt.type == token.EQUAL:
            default_value = nxt.next_sibling
            curr_idx += 2
        else:
            default_value = None
        yield (name, default_value)
        curr_idx += 1

def remove_params(raw_params, kwargs_default=_kwargs_default_name):
    u"""
    Removes all keyword-only args from the params list and a bare star, if any.
    Does not add the kwargs dict if needed.
    Returns True if more action is needed, False if not
    (more action is needed if no kwargs dict exists)
    """
    assert raw_params[0].type == token.STAR
    if raw_params[1].type == token.COMMA:
        raw_params[0].remove()
        raw_params[1].remove()
        kw_params = raw_params[2:]
    else:
        kw_params = raw_params[3:]
    for param in kw_params:
        if param.type != token.DOUBLESTAR:
            param.remove()
        else:
            return False
    else:
        return True

def needs_fixing(raw_params, kwargs_default=_kwargs_default_name):
    u"""
    Returns string with the name of the kwargs dict if the params after the first star need fixing
    Otherwise returns empty string
    """
    found_kwargs = False
    needs_fix = False

    for t in raw_params[2:]:
        if t.type == token.COMMA:
            # Commas are irrelevant at this stage.
            continue
        elif t.type == token.NAME and not found_kwargs:
            # Keyword-only argument: definitely need to fix.
            needs_fix = True
        elif t.type == token.NAME and found_kwargs:
            # Return 'foobar' of **foobar, if needed.
            return t.value if needs_fix else u''
        elif t.type == token.DOUBLESTAR:
            # Found the '**' of **foobar.
            found_kwargs = True
    else:
        # Never found **foobar.  Return a synthetic name, if needed.
        return kwargs_default if needs_fix else u''

class FixKwargs(fixer_base.BaseFix):

    run_order = 7 # Run after function annotations are removed

    PATTERN = u"funcdef< 'def' NAME parameters< '(' arglist=typedargslist< params=any* > ')' > ':' suite=any >"

    def transform(self, node, results):
        params_rawlist = results[u"params"]
        for i, item in enumerate(params_rawlist):
            if item.type == token.STAR:
                params_rawlist = params_rawlist[i:]
                break
        else:
            return
        # params is guaranteed to be a list starting with *.
        # if fixing is needed, there will be at least 3 items in this list:
        # [STAR, COMMA, NAME] is the minimum that we need to worry about.
        new_kwargs = needs_fixing(params_rawlist)
        # new_kwargs is the name of the kwargs dictionary.
        if not new_kwargs:
            return
        suitify(node)

        # At this point, params_rawlist is guaranteed to be a list
        # beginning with a star that includes at least one keyword-only param
        # e.g., [STAR, NAME, COMMA, NAME, COMMA, DOUBLESTAR, NAME] or
        # [STAR, COMMA, NAME], or [STAR, COMMA, NAME, COMMA, DOUBLESTAR, NAME]

        # Anatomy of a funcdef: ['def', 'name', parameters, ':', suite]
        # Anatomy of that suite: [NEWLINE, INDENT, first_stmt, all_other_stmts]
        # We need to insert our new stuff before the first_stmt and change the
        # first_stmt's prefix.

        suite = node.children[4]
        first_stmt = suite.children[2]
        ident = indentation(first_stmt)

        for name, default_value in gen_params(params_rawlist):
            if default_value is None:
                suite.insert_child(2, Newline())
                suite.insert_child(2, String(_assign_template %{u'name':name, u'kwargs':new_kwargs}, prefix=ident))
            else:
                suite.insert_child(2, Newline())
                suite.insert_child(2, String(_else_template %{u'name':name, u'default':default_value}, prefix=ident))
                suite.insert_child(2, Newline())
                suite.insert_child(2, String(_if_template %{u'assign':_assign_template %{u'name':name, u'kwargs':new_kwargs}, u'name':name, u'kwargs':new_kwargs}, prefix=ident))
        first_stmt.prefix = ident
        suite.children[2].prefix = u""

        # Now, we need to fix up the list of params.

        must_add_kwargs = remove_params(params_rawlist)
        if must_add_kwargs:
            arglist = results[u'arglist']
            if len(arglist.children) > 0 and arglist.children[-1].type != token.COMMA:
                arglist.append_child(Comma())
            arglist.append_child(DoubleStar(prefix=u" "))
            arglist.append_child(Name(new_kwargs))
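The insert_child calls above emit, for each keyword-only parameter, either a bare pop (no default) or an if/else pair (with default). A hand-written sketch of the Python 2 compatible output this produces — the function, its parameters, and the kwargs dict name `_3to2kwargs` are all illustrative; the real dict name is whatever `needs_fixing()` chose:

```python
# Hypothetical output shape for:  def f(a, *, b, c=2): ...
def f(a, **_3to2kwargs):
    if 'c' in _3to2kwargs:            # _if_template branch
        c = _3to2kwargs.pop('c')      # _assign_template
    else:                             # _else_template branch
        c = 2
    b = _3to2kwargs.pop('b')          # no default: bare _assign_template
    return (a, b, c)

assert f(1, b=2) == (1, 2, 2)
assert f(1, b=2, c=9) == (1, 2, 9)
```

Omitting a required keyword-only argument now surfaces as a `KeyError` from `pop` rather than a `TypeError`, which is the known behavioral difference of this downgrade.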

File: libpasteurize/fixes/fix_add_all_future_builtins.py
"""
For the ``future`` package.

Adds this import line::

    from builtins import (ascii, bytes, chr, dict, filter, hex, input,
                          int, list, map, next, object, oct, open, pow,
                          range, round, str, super, zip)

to a module, irrespective of whether each definition is used.

Adds these imports after any other imports (in an initial block of them).
"""

from __future__ import unicode_literals

from lib2to3 import fixer_base

from libfuturize.fixer_util import touch_import_top


class FixAddAllFutureBuiltins(fixer_base.BaseFix):
    BM_compatible = True
    PATTERN = "file_input"
    run_order = 1

    def transform(self, node, results):
        # import_str = """(ascii, bytes, chr, dict, filter, hex, input,
        #                      int, list, map, next, object, oct, open, pow,
        #                      range, round, str, super, zip)"""
        touch_import_top(u'builtins', '*', node)

        # builtins = """ascii bytes chr dict filter hex input
        #                      int list map next object oct open pow
        #                      range round str super zip"""
        # for builtin in sorted(builtins.split(), reverse=True):
        #     touch_import_top(u'builtins', builtin, node)
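This fixer's work is done by `touch_import_top`, which operates on a lib2to3 parse tree. A plain-text sketch of the same placement rule — append the `builtins` import at the end of the module's initial block of imports — with an illustrative helper name:

```python
# Toy model of the placement rule only; the real fixer edits a parse tree.
def add_builtins_import(source):
    """Insert 'from builtins import *' after the initial import block."""
    lines = source.splitlines()
    insert_at = 0
    for i, line in enumerate(lines):
        stripped = line.strip()
        if stripped.startswith(("import ", "from ")) or not stripped:
            insert_at = i + 1      # still inside the leading import block
        else:
            break                  # first real statement: stop scanning
    lines.insert(insert_at, "from builtins import *")
    return "\n".join(lines)
```

For example, `add_builtins_import("import os\n\nx = 1")` places the new import after `import os` and the blank line, before `x = 1`.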

File: libpasteurize/fixes/fix_memoryview.py
u"""
Fixer for memoryview(s) -> buffer(s).
Explicit because some memoryview methods are invalid on buffer objects.
"""

from lib2to3 import fixer_base
from lib2to3.fixer_util import Name


class FixMemoryview(fixer_base.BaseFix):

    explicit = True # User must specify that they want this.

    PATTERN = u"""
              power< name='memoryview' trailer< '(' [any] ')' >
              rest=any* >
              """

    def transform(self, node, results):
        name = results[u"name"]
        name.replace(Name(u"buffer", prefix=name.prefix))
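The `power< ... trailer< '(' [any] ')' > >` pattern above means only call sites are renamed, not arbitrary occurrences of the word. A text-level stand-in that approximates the same restriction with a lookahead (the real fixer matches the parse tree, not text):

```python
import re

# Illustrative approximation of the fixer: rename memoryview(...) calls
# to buffer(...), leaving non-call uses of the name untouched.
def fix_memoryview_calls(source):
    return re.sub(r"\bmemoryview(?=\()", "buffer", source)
```

So `memoryview(b'x')` becomes `buffer(b'x')`, while an unrelated name such as `memoryviews` is left alone.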

File: libpasteurize/fixes/fix_next.py
u"""
Fixer for:
it.__next__() -> it.next().
next(it) -> it.next().
"""

from lib2to3.pgen2 import token
from lib2to3.pygram import python_symbols as syms
from lib2to3 import fixer_base
from lib2to3.fixer_util import Name, Call, find_binding, Attr

bind_warning = u"Calls to builtin next() possibly shadowed by global binding"


class FixNext(fixer_base.BaseFix):

    PATTERN = u"""
    power< base=any+ trailer< '.' attr='__next__' > any* >
    |
    power< head='next' trailer< '(' arg=any ')' > any* >
    |
    classdef< 'class' base=any+ ':'
              suite< any*
                     funcdef< 'def'
                              attr='__next__'
                              parameters< '(' NAME ')' > any+ >
                     any* > >
    """

    def transform(self, node, results):
        assert results

        base = results.get(u"base")
        attr = results.get(u"attr")
        head = results.get(u"head")
        arg_ = results.get(u"arg")
        if arg_:
            arg = arg_.clone()
            # str(), not unicode(): this fixer runs under Python 3, where
            # the bare name unicode() raises NameError.
            head.replace(Attr(Name(str(arg), prefix=head.prefix),
                              Name(u"next")))
            arg_.remove()
        elif base:
            attr.replace(Name(u"next", prefix=attr.prefix))
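The rewrite is safe because a Python 2 style iterator exposes `.next()`, and the fixer turns both Python 3 spellings into that form. A toy iterator (illustrative, not part of the fixer) showing all three spellings the PATTERN above matches:

```python
class Counter(object):
    """Iterator supporting both the Py2 and Py3 protocol names."""
    def __init__(self, n):
        self.i, self.n = 0, n
    def __iter__(self):
        return self
    def __next__(self):          # Py3 protocol; calls to this get rewritten
        if self.i >= self.n:
            raise StopIteration
        self.i += 1
        return self.i
    next = __next__              # Py2 protocol name the fixer targets

it = Counter(2)
assert next(it) == 1        # fixer rewrites this to it.next()
assert it.__next__() == 2   # fixer rewrites this to it.next()
```

The third PATTERN alternative (the `classdef<...>` branch) matches a class defining `__next__` so the fixer can handle the method definition itself, which `next = __next__` emulates here.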

File: libpasteurize/fixes/fix_future_builtins.py
"""
Adds this import line:

    from builtins import XYZ

for each of the functions XYZ that is used in the module.
"""

from __future__ import unicode_literals

from lib2to3 import fixer_base
from lib2to3.pygram import python_symbols as syms
from lib2to3.fixer_util import Name, Call, in_special_context

from libfuturize.fixer_util import touch_import_top

# All builtins are:
#     from future.builtins.iterators import (filter, map, zip)
#     from future.builtins.misc import (ascii, chr, hex, input, isinstance, oct, open, round, super)
#     from future.types import (bytes, dict, int, range, str)
# We don't need isinstance any more.

replaced_builtins = '''filter map zip
                       ascii chr hex input next oct open round super
                       bytes dict int range str'''.split()

expression = '|'.join(["name='{0}'".format(name) for name in replaced_builtins])


class FixFutureBuiltins(fixer_base.BaseFix):
    BM_compatible = True
    run_order = 9

    # Currently we only match uses as a function. This doesn't match e.g.:
    #     if isinstance(s, str):
    #         ...
    PATTERN = """
              power<
                 ({0}) trailer< '(' args=[any] ')' >
              rest=any* >
              """.format(expression)

    def transform(self, node, results):
        name = results["name"]
        touch_import_top(u'builtins', name.value, node)
        # name.replace(Name(u"input", prefix=name.prefix))
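The alternation that parametrises PATTERN above is plain string formatting; reproduced standalone here to show its shape (the names are copied from the module, the surrounding test scaffolding is new):

```python
# Build the lib2to3 alternation "name='filter'|name='map'|..." exactly as
# the module does, from the whitespace-separated builtin list.
replaced_builtins = '''filter map zip
                       ascii chr hex input next oct open round super
                       bytes dict int range str'''.split()

expression = '|'.join("name='{0}'".format(name) for name in replaced_builtins)
```

Substituted into `power< ({0}) trailer< '(' args=[any] ')' > rest=any* >`, each alternative binds the matched leaf to `name`, which `transform` then reads from `results`.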

File: libpasteurize/fixes/fix_raise_.py
u"""Fixer for
              raise E(V).with_traceback(T)
    to:
              from future.utils import raise_
              ...
              raise_(E, V, T)

TODO: FIXME!!

"""

from lib2to3 import fixer_base
from lib2to3.fixer_util import Comma, Node, Leaf, token, syms

class FixRaise(fixer_base.BaseFix):

    PATTERN = u"""
    raise_stmt< 'raise' (power< name=any [trailer< '(' val=any* ')' >]
        [trailer< '.' 'with_traceback' > trailer< '(' trc=any ')' >] > | any) ['from' chain=any] >"""

    def transform(self, node, results):
        # FIXME: this transform is unfinished (see the TODO in the module
        # docstring); fail with a clear error rather than a NameError on a
        # bare FIXME name.
        raise NotImplementedError(u"fix_raise_ is not yet implemented")
        name, val, trc = (results.get(u"name"), results.get(u"val"), results.get(u"trc"))
        chain = results.get(u"chain")
        if chain is not None:
            self.warning(node, u"explicit exception chaining is not supported in Python 2")
            chain.prev_sibling.remove()
            chain.remove()
        if trc is not None:
            val = val[0] if val else Leaf(token.NAME, u"None")
            val.prefix = trc.prefix = u" "
            kids = [Leaf(token.NAME, u"raise"), name.clone(), Comma(),
                    val.clone(), Comma(), trc.clone()]
            raise_stmt = Node(syms.raise_stmt, kids)
            node.replace(raise_stmt)
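The target of this rewrite is `future.utils.raise_`. A hedged Python 3 sketch of what that helper does with the `(E, V, T)` triple the fixer emits (the real helper also covers Python 2 specific cases such as string values and already-instantiated exceptions with tracebacks):

```python
# Minimal illustrative reimplementation, assuming Python 3 semantics.
def raise_(tp, value=None, tb=None):
    """Raise tp(value) with traceback tb, mirroring Py2's three-arg raise."""
    if value is None:
        value = tp()
    elif not isinstance(value, BaseException):
        value = tp(value)
    if tb is not None:
        raise value.with_traceback(tb)
    raise value
```

With this, `raise E(V).with_traceback(T)` and `raise_(E, V, T)` raise equivalent exceptions, which is why the fixer can substitute one for the other.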

File: libpasteurize/fixes/feature_base.py
u"""
Base classes for features that are backwards-incompatible.

Usage:
features = Features()
features.add(Feature("py3k_feature", "power< 'py3k' any* >", "2.7"))
PATTERN = features.PATTERN
"""

pattern_unformatted = u"%s=%s" # name=pattern, for dict lookups
message_unformatted = u"""
%s is only supported in Python %s and above."""

class Feature(object):
    u"""
    A feature has a name, a pattern, and the minimum version of Python 2.x
    required to use it (or of 3.x, if there is no backwards-compatible
    2.x version).
    """
    def __init__(self, name, PATTERN, version):
        self.name = name
        self._pattern = PATTERN
        self.version = version

    def message_text(self):
        u"""
        Format the above text with the name and minimum version required.
        """
        return message_unformatted % (self.name, self.version)

class Features(set):
    u"""
    A set of features that generates a pattern for the features it contains.
    This set will act like a mapping in that we map names to patterns.
    """
    mapping = {}

    def update_mapping(self):
        u"""
        Called every time we care about the mapping of names to features.
        """
        self.mapping = {f.name: f for f in self}

    @property
    def PATTERN(self):
        u"""
        Uses the mapping of names to features to return a PATTERN suitable
        for using the lib2to3 patcomp.
        """
        self.update_mapping()
        return u" |\n".join(pattern_unformatted % (f.name, f._pattern)
                            for f in self)

    def __getitem__(self, key):
        u"""
        Implement a simple mapping to get patterns from names.
        """
        return self.mapping[key]
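A standalone mock-up of the contract the classes above implement: each feature contributes a `name=pattern` clause, and `PATTERN` joins the clauses with `" |\n"` so lib2to3's patcomp sees one big alternation with named bindings. The second feature entry below is hypothetical:

```python
pattern_unformatted = u"%s=%s"   # name=pattern, as in the module above

features = [("py3k_feature", "power< 'py3k' any* >"),
            ("mat_mul", "term< any '@' any >")]   # hypothetical example
PATTERN = " |\n".join(pattern_unformatted % (n, p) for n, p in features)
```

When the combined pattern matches, the name of the binding present in `results` tells the fixer which feature was used, so it can look the feature up (via `Features.__getitem__`) and report its minimum version.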

File: libpasteurize/fixes/fix_add_future_standard_library_import.py
"""
For the ``future`` package.

Adds this import line:

    from future import standard_library

after any __future__ imports but before any other imports. Doesn't actually
change the imports to Py3 style.
"""

from lib2to3 import fixer_base
from libfuturize.fixer_util import touch_import_top

class FixAddFutureStandardLibraryImport(fixer_base.BaseFix):
    BM_compatible = True
    PATTERN = "file_input"
    run_order = 8

    def transform(self, node, results):
        # TODO: add a blank line between any __future__ imports and this?
        touch_import_top(u'future', u'standard_library', node)
        # TODO: also add standard_library.install_hooks()
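The docstring's placement rule — after any `__future__` imports but before everything else — can be modelled on plain text (the real `touch_import_top` works on the parse tree; the helper name here is illustrative):

```python
# Toy model of the placement rule stated in the docstring above.
def add_standard_library_import(source):
    lines = source.splitlines()
    insert_at = 0
    for i, line in enumerate(lines):
        if line.strip().startswith("from __future__ import"):
            insert_at = i + 1      # keep moving past the __future__ block
    lines.insert(insert_at, "from future import standard_library")
    return "\n".join(lines)
```

A module with no `__future__` imports gets the new line at the very top, which still satisfies "before any other imports".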

File: libpasteurize/__init__.py
# empty to make this a package
File: libpasteurize/__pycache__/__init__.cpython-39.pyc (compiled bytecode, omitted)
File: libpasteurize/__pycache__/main.cpython-39.pyc (compiled bytecode, omitted)

File: libpasteurize/main.py
"""
pasteurize: automatic conversion of Python 3 code to clean 2/3 code
===================================================================

``pasteurize`` attempts to convert existing Python 3 code into source-compatible
Python 2 and 3 code.

Use it like this on Python 3 code:

  $ pasteurize --verbose mypython3script.py

This removes any Py3-only syntax (e.g. new metaclasses) and adds these
import lines:

    from __future__ import absolute_import
    from __future__ import division
    from __future__ import print_function
    from __future__ import unicode_literals
    from future import standard_library
    standard_library.install_hooks()
    from builtins import *

To write changes to the files, use the -w flag.

It also adds any other wrappers needed for Py2/3 compatibility.

Note that separate stages are not available (or needed) when converting from
Python 3 with ``pasteurize`` as they are when converting from Python 2 with
``futurize``.

The --all-imports option forces adding all ``__future__`` imports,
``builtins`` imports, and standard library aliases, even if they don't
seem necessary for the current state of each module. (This can simplify
testing, and can reduce the need to think about Py2 compatibility when editing
the code further.)

"""

from __future__ import (absolute_import, print_function, unicode_literals)

import sys
import logging
import optparse
from lib2to3.main import warn, StdoutRefactoringTool
from lib2to3 import refactor

from future import __version__
from libpasteurize.fixes import fix_names


def main(args=None):
    """Main program.

    Returns a suggested exit status (0, 1, 2).
    """
    # Set up option parser
    parser = optparse.OptionParser(usage="pasteurize [options] file|dir ...")
    parser.add_option("-V", "--version", action="store_true",
                      help="Report the version number of pasteurize")
    parser.add_option("-a", "--all-imports", action="store_true",
                      help="Adds all __future__ and future imports to each module")
    parser.add_option("-f", "--fix", action="append", default=[],
                      help="Each FIX specifies a transformation; default: all")
    parser.add_option("-j", "--processes", action="store", default=1,
                      type="int", help="Run 2to3 concurrently")
    parser.add_option("-x", "--nofix", action="append", default=[],
                      help="Prevent a fixer from being run.")
    parser.add_option("-l", "--list-fixes", action="store_true",
                      help="List available transformations")
    # parser.add_option("-p", "--print-function", action="store_true",
    #                   help="Modify the grammar so that print() is a function")
    parser.add_option("-v", "--verbose", action="store_true",
                      help="More verbose logging")
    parser.add_option("--no-diffs", action="store_true",
                      help="Don't show diffs of the refactoring")
    parser.add_option("-w", "--write", action="store_true",
                      help="Write back modified files")
    parser.add_option("-n", "--nobackups", action="store_true", default=False,
                      help="Don't write backups for modified files.")

    # Parse command line arguments
    refactor_stdin = False
    flags = {}
    options, args = parser.parse_args(args)
    fixer_pkg = 'libpasteurize.fixes'
    avail_fixes = fix_names
    flags["print_function"] = True

    if not options.write and options.no_diffs:
        warn("not writing files and not printing diffs; that's not very useful")
    if not options.write and options.nobackups:
        parser.error("Can't use -n without -w")
    if options.version:
        print(__version__)
        return 0
    if options.list_fixes:
        print("Available transformations for the -f/--fix option:")
        for fixname in sorted(avail_fixes):
            print(fixname)
        if not args:
            return 0
    if not args:
        print("At least one file or directory argument required.",
              file=sys.stderr)
        print("Use --help to show usage.", file=sys.stderr)
        return 2
    if "-" in args:
        refactor_stdin = True
        if options.write:
            print("Can't write to stdin.", file=sys.stderr)
            return 2

    # Set up logging handler
    level = logging.DEBUG if options.verbose else logging.INFO
    logging.basicConfig(format='%(name)s: %(message)s', level=level)

    unwanted_fixes = set()
    for fix in options.nofix:
        if ".fix_" in fix:
            unwanted_fixes.add(fix)
        else:
            # Infer the full module name for the fixer.
            # First ensure that no names clash (e.g.
            # lib2to3.fixes.fix_blah and libfuturize.fixes.fix_blah):
            found = [f for f in avail_fixes
                     if f.endswith('fix_{0}'.format(fix))]
            if len(found) > 1:
                print("Ambiguous fixer name. Choose a fully qualified "
                      "module name instead from these:\n" +
                      "\n".join("  " + myf for myf in found),
                      file=sys.stderr)
                return 2
            elif len(found) == 0:
                print("Unknown fixer. Use --list-fixes or -l for a list.",
                      file=sys.stderr)
                return 2
            unwanted_fixes.add(found[0])

    extra_fixes = set()
    if options.all_imports:
        prefix = 'libpasteurize.fixes.'
        extra_fixes.add(prefix + 'fix_add_all__future__imports')
        extra_fixes.add(prefix + 'fix_add_future_standard_library_import')
        extra_fixes.add(prefix + 'fix_add_all_future_builtins')

    explicit = set()
    if options.fix:
        all_present = False
        for fix in options.fix:
            if fix == 'all':
                all_present = True
            else:
                if ".fix_" in fix:
                    explicit.add(fix)
                else:
                    # Infer the full module name for the fixer.
                    # First ensure that no names clash (e.g.
                    # lib2to3.fixes.fix_blah and libpasteurize.fixes.fix_blah):
                    found = [f for f in avail_fixes
                             if f.endswith('fix_{0}'.format(fix))]
                    if len(found) > 1:
                        print("Ambiguous fixer name. Choose a fully qualified "
                              "module name instead from these:\n" +
                              "\n".join("  " + myf for myf in found),
                              file=sys.stderr)
                        return 2
                    elif len(found) == 0:
                        print("Unknown fixer. Use --list-fixes or -l for a list.",
                              file=sys.stderr)
                        return 2
                    explicit.add(found[0])
        if len(explicit & unwanted_fixes) > 0:
            print("Conflicting usage: the following fixers have been "
                  "simultaneously requested and disallowed:\n" +
                  "\n".join("  " + myf for myf in (explicit & unwanted_fixes)),
                  file=sys.stderr)
            return 2
        requested = avail_fixes.union(explicit) if all_present else explicit
    else:
        requested = avail_fixes.union(explicit)

    # Parenthesised: "-" binds tighter than "|", so without the parentheses
    # the unwanted fixes would only be subtracted from extra_fixes.
    fixer_names = (requested | extra_fixes) - unwanted_fixes

    # Initialize the refactoring tool
    rt = StdoutRefactoringTool(sorted(fixer_names), flags, set(),
                               options.nobackups, not options.no_diffs)

    # Refactor all files and directories passed as arguments
    if not rt.errors:
        if refactor_stdin:
            rt.refactor_stdin()
        else:
            try:
                rt.refactor(args, options.write, None,
                            options.processes)
            except refactor.MultiprocessingUnsupported:
                assert options.processes > 1
                print("Sorry, -j isn't supported on this platform.",
                      file=sys.stderr)
                return 1
        rt.summarize()

    # Return error status (0 if rt.errors is zero)
    return int(bool(rt.errors))
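The short-name resolution that `main()` performs twice (for `--nofix` and `--fix`) can be isolated into a small function: a bare name like `next` is expanded against the available fixers, a match in more than one package is ambiguous, and no match is an error. The `resolve` helper below is illustrative; the fixer module names are real libpasteurize ones:

```python
avail_fixes = {"libpasteurize.fixes.fix_next",
               "libpasteurize.fixes.fix_memoryview",
               "libpasteurize.fixes.fix_raise_"}

def resolve(fix, avail):
    """Expand a short fixer name, mirroring main()'s inference loop."""
    if ".fix_" in fix:                      # already fully qualified
        return fix
    found = [f for f in avail if f.endswith("fix_{0}".format(fix))]
    if len(found) > 1:
        raise ValueError("ambiguous fixer name: %s" % fix)
    if not found:
        raise ValueError("unknown fixer: %s" % fix)
    return found[0]
```

For example, `resolve("next", avail_fixes)` yields `"libpasteurize.fixes.fix_next"`, while a fully qualified name passes through unchanged.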
3i�'future/builtins/__pycache__/newround.cpython-39.pycnu�[���PK�Au\5����2��'future/builtins/__pycache__/newnext.cpython-39.pycnu�[���PK�Au\��Ï6��'future/builtins/__pycache__/new_min_max.cpython-39.pycnu�[���PK�Au\���		p�'future/builtins/newsuper.pynu�[���PK�Au\�H��vv��'future/builtins/newround.pynu�[���PK�Au\�D���*��'future/__pycache__/__init__.cpython-39.pycnu�[���PK�Au\Nw��**�(future/types/newopen.pynu�[���PK�Au\l��9�=�=E
(future/types/newstr.pynu�[���PK�Au\��]��H(future/types/__init__.pynu�[���PK�Au\#6���c(future/types/newrange.pynu�[���PK�Au\�����x(future/types/newmemoryview.pynu�[���PK�Au\�����{(future/types/newlist.pynu�[���PK�Au\�pU�^4^4P�(future/types/newint.pynu�[���PK�Au\E[�]���(future/types/newdict.pynu�[���PKBu\�
P^^0�(future/types/__pycache__/__init__.cpython-39.pycnu�[���PKBu\��r�))/��(future/types/__pycache__/newopen.cpython-39.pycnu�[���PKBu\�<��/\�(future/types/__pycache__/newlist.cpython-39.pycnu�[���PK	Bu\^�Ty11.��(future/types/__pycache__/newint.cpython-39.pycnu�[���PKBu\h/�P
P
10)future/types/__pycache__/newobject.cpython-39.pycnu�[���PKBu\%����5�')future/types/__pycache__/newmemoryview.cpython-39.pycnu�[���PKBu\�7�d�	�	/,)future/types/__pycache__/newdict.cpython-39.pycnu�[���PKBu\�U`��06)future/types/__pycache__/newrange.cpython-39.pycnu�[���PKBu\w��/�8�8.SN)future/types/__pycache__/newstr.cpython-39.pycnu�[���PKBu\�JB�7�70w�)future/types/__pycache__/newbytes.cpython-39.pycnu�[���PKBu\���?�?¿)future/types/newbytes.pynu�[���PKBu\��l�

��)future/types/newobject.pynu�[���PK%Bu\��>��m�m# 
*future/standard_library/__init__.pynu�[���PK*Bu\��um�H�H;{*future/standard_library/__pycache__/__init__.cpython-39.pycnu�[���PK1Bu\�W�����*future/moves/_markupbase.pynu�[���PK7Bu\���Q����*future/moves/http/client.pynu�[���PK9Bu\�h�����*future/moves/http/cookiejar.pynu�[���PK<Bu\+�͉GG��*future/moves/http/__init__.pynu�[���PKCBu\�=�PP6z�*future/moves/http/__pycache__/cookiejar.cpython-39.pycnu�[���PKEBu\C^Dl��50�*future/moves/http/__pycache__/__init__.cpython-39.pycnu�[���PKHBu\��883|�*future/moves/http/__pycache__/client.cpython-39.pycnu�[���PKJBu\NPLff4�*future/moves/http/__pycache__/cookies.cpython-39.pycnu�[���PKMBu\)�R�##3��*future/moves/http/__pycache__/server.cpython-39.pycnu�[���PKPBu\�����g�*future/moves/http/cookies.pynu�[���PKRBu\m�[�^^��*future/moves/http/server.pynu�[���PKUBu\��Q	��E�*future/moves/copyreg.pynu�[���PKZBu\��NƱ�B�*future/moves/html/entities.pynu�[���PK\Bu\�ù���@�*future/moves/html/__init__.pynu�[���PKcBu\�|�WCC5��*future/moves/html/__pycache__/__init__.cpython-39.pycnu�[���PKfBu\���SS5-�*future/moves/html/__pycache__/entities.cpython-39.pycnu�[���PKiBu\\��KK3��*future/moves/html/__pycache__/parser.cpython-39.pycnu�[���PKkBu\^�R@����*future/moves/html/parser.pynu�[���PKpBu\+<b�DD!��*future/moves/tkinter/constants.pynu�[���PKsBu\�U��55�*future/moves/tkinter/font.pynu�[���PKuBu\�3�GG"��*future/moves/tkinter/messagebox.pynu�[���PKxBu\%{0v224�*future/moves/tkinter/dnd.pynu�[���PK{Bu\9��..��*future/moves/tkinter/ttk.pynu�[���PK}Bu\���ll *�*future/moves/tkinter/__init__.pynu�[���PK�Bu\���77��*future/moves/tkinter/dialog.pynu�[���PK�Bu\	
ǚ��6k�*future/moves/tkinter/__pycache__/dialog.cpython-39.pycnu�[���PK�Bu\�"`��<��*future/moves/tkinter/__pycache__/commondialog.cpython-39.pycnu�[���PK�Bu\��,��8��*future/moves/tkinter/__pycache__/__init__.cpython-39.pycnu�[���PK�Bu\�����4�*future/moves/tkinter/__pycache__/font.cpython-39.pycnu�[���PK�Bu\dtO���:�*future/moves/tkinter/__pycache__/messagebox.cpython-39.pycnu�[���PK�Bu\���c��<�*future/moves/tkinter/__pycache__/colorchooser.cpython-39.pycnu�[���PK�Bu\�����3I�*future/moves/tkinter/__pycache__/tix.cpython-39.pycnu�[���PK�Bu\�5����3Q�*future/moves/tkinter/__pycache__/ttk.cpython-39.pycnu�[���PK�Bu\�&�PP:Y+future/moves/tkinter/__pycache__/filedialog.cpython-39.pycnu�[���PK�Bu\����3+future/moves/tkinter/__pycache__/dnd.cpython-39.pycnu�[���PK�Bu\5þ��9+future/moves/tkinter/__pycache__/constants.cpython-39.pycnu�[���PK�Bu\�ml��<I+future/moves/tkinter/__pycache__/scrolledtext.cpython-39.pycnu�[���PK�Bu\P�[$��<~
+future/moves/tkinter/__pycache__/simpledialog.cpython-39.pycnu�[���PK�Bu\�Ka;II$�+future/moves/tkinter/scrolledtext.pynu�[���PK�Bu\�"��II$P+future/moves/tkinter/simpledialog.pynu�[���PK�Bu\�ޝMM$�+future/moves/tkinter/commondialog.pynu�[���PK�Bu\hu'%..�+future/moves/tkinter/tix.pynu�[���PK�Bu\ްs"+future/moves/tkinter/filedialog.pynu�[���PK�Bu\{<N\MM$o+future/moves/tkinter/colorchooser.pynu�[���PK�Bu\)�	��+future/moves/queue.pynu�[���PK�Bu\5�K���+future/moves/subprocess.pynu�[���PK�Bu\]�)�:+future/moves/builtins.pynu�[���PK�Bu\'������+future/moves/__init__.pynu�[���PK�Bu\�b����+future/moves/dbm/gnu.pynu�[���PK�Bu\��Y����+future/moves/dbm/ndbm.pynu�[���PK�Bu\������+future/moves/dbm/__init__.pynu�[���PK�Bu\�k���4�+future/moves/dbm/__pycache__/__init__.cpython-39.pycnu�[���PK�Bu\�|Eu>>0�!+future/moves/dbm/__pycache__/ndbm.cpython-39.pycnu�[���PK�Bu\��)�==/�#+future/moves/dbm/__pycache__/gnu.cpython-39.pycnu�[���PK�Bu\	�'�BB0)%+future/moves/dbm/__pycache__/dumb.cpython-39.pycnu�[���PK�Bu\]
E����&+future/moves/dbm/dumb.pynu�[���PK�Bu\���6���'+future/moves/winreg.pynu�[���PK�Bu\^CZے��(+future/moves/configparser.pynu�[���PK�Bu\��3N���)+future/moves/pickle.pynu�[���PK�Bu\�=���2�*+future/moves/__pycache__/subprocess.cpython-39.pycnu�[���PK�Bu\�8n??/�,+future/moves/__pycache__/_thread.cpython-39.pycnu�[���PK�Bu\�L��``3o.+future/moves/__pycache__/collections.cpython-39.pycnu�[���PK�Bu\�I�hh021+future/moves/__pycache__/__init__.cpython-39.pycnu�[���PK�Bu\���À�5�2+future/moves/__pycache__/_dummy_thread.cpython-39.pycnu�[���PK�Bu\֙/�RR1�4+future/moves/__pycache__/itertools.cpython-39.pycnu�[���PK�Bu\&��n33+�6+future/moves/__pycache__/sys.cpython-39.pycnu�[���PKCu\���::- 8+future/moves/__pycache__/queue.cpython-39.pycnu�[���PKCu\�L��OO4�9+future/moves/__pycache__/socketserver.cpython-39.pycnu�[���PKCu\ ���KK3j;+future/moves/__pycache__/_markupbase.cpython-39.pycnu�[���PKCu\;�E==/=+future/moves/__pycache__/reprlib.cpython-39.pycnu�[���PK
Cu\�ok554�>+future/moves/__pycache__/configparser.cpython-39.pycnu�[���PK
Cu\���>>.M@+future/moves/__pycache__/winreg.cpython-39.pycnu�[���PKCu\��@uu7�A+future/moves/__pycache__/multiprocessing.cpython-39.pycnu�[���PKCu\�S�{aa0�C+future/moves/__pycache__/builtins.cpython-39.pycnu�[���PKCu\^,�qq.�E+future/moves/__pycache__/pickle.cpython-39.pycnu�[���PKCu\��n�yy/UG+future/moves/__pycache__/copyreg.cpython-39.pycnu�[���PK!Cu\w�
���-I+future/moves/multiprocessing.pynu�[���PK#Cu\�,���;J+future/moves/collections.pynu�[���PK&Cu\��\\'L+future/moves/_dummy_thread.pynu�[���PK+Cu\�
	���M+future/moves/xmlrpc/client.pynu�[���PK/Cu\�N+future/moves/xmlrpc/__init__.pynu�[���PK2Cu\@֟���7�N+future/moves/xmlrpc/__pycache__/__init__.cpython-39.pycnu�[���PK4Cu\�ז445P+future/moves/xmlrpc/__pycache__/client.cpython-39.pycnu�[���PK7Cu\���n445�Q+future/moves/xmlrpc/__pycache__/server.cpython-39.pycnu�[���PK9Cu\k\�!��5S+future/moves/xmlrpc/server.pynu�[���PK<Cu\��
��T+future/moves/sys.pynu�[���PKCCu\�}�VV�T+future/moves/urllib/response.pynu�[���PKECu\�U�jnn}V+future/moves/urllib/__init__.pynu�[���PKKCu\>�$$6:W+future/moves/urllib/__pycache__/request.cpython-39.pycnu�[���PKMCu\�`�g4�[+future/moves/urllib/__pycache__/error.cpython-39.pycnu�[���PKPCu\���BZZ:3^+future/moves/urllib/__pycache__/robotparser.cpython-39.pycnu�[���PKRCu\�V".7�_+future/moves/urllib/__pycache__/__init__.cpython-39.pycnu�[���PKUCu\�g�9��4ya+future/moves/urllib/__pycache__/parse.cpython-39.pycnu�[���PKWCu\9<����7�d+future/moves/urllib/__pycache__/response.cpython-39.pycnu�[���PKZCu\7�<���g+future/moves/urllib/error.pynu�[���PK]Cu\�s6�Bi+future/moves/urllib/parse.pynu�[���PKaCu\���
�
�m+future/moves/urllib/request.pynu�[���PKdCu\|���"�{+future/moves/urllib/robotparser.pynu�[���PKfCu\�_����|+future/moves/itertools.pynu�[���PKkCu\�U�jnn�}+future/moves/test/__init__.pynu�[���PKnCu\�����@~+future/moves/test/support.pynu�[���PKsCu\��H�5p�+future/moves/test/__pycache__/__init__.cpython-39.pycnu�[���PKvCu\d�5a��4�+future/moves/test/__pycache__/support.cpython-39.pycnu�[���PKyCu\Z9�p���+future/moves/socketserver.pynu�[���PK{Cu\��>��݅+future/moves/_thread.pynu�[���PK�Cu\�65���dž+future/moves/reprlib.pynu�[���PK�Cu\"vDW?W?��+future/backports/_markupbase.pynu�[���PK�Cu\Sq"l'�'�U�+future/backports/misc.pynu�[���PK�Cu\2����H,future/backports/http/client.pynu�[���PK�Cu\3�2�@+@+"-future/backports/http/cookiejar.pynu�[���PK�Cu\!�..future/backports/http/__init__.pynu�[���PK�Cu\af��z�z�:�..future/backports/http/__pycache__/cookiejar.cpython-39.pycnu�[���PK�Cu\�NTs��9�/future/backports/http/__pycache__/__init__.cpython-39.pycnu�[���PK�Cu\Ko�
x
x7�/future/backports/http/__pycache__/client.cpython-39.pycnu�[���PK�Cu\C��.�?�?8I{/future/backports/http/__pycache__/cookies.cpython-39.pycnu�[���PK�Cu\#@ACz�z�7e�/future/backports/http/__pycache__/server.cpython-39.pycnu�[���PK�Cu\���2MTMT FB0future/backports/http/cookies.pynu�[���PK�Cu\��Vӱӱ�0future/backports/http/server.pynu�[���PK�Cu\\�m�&�&!I1future/backports/html/entities.pynu�[���PK�Cu\Qޅ��!�o2future/backports/html/__init__.pynu�[���PK�Cu\���e��9�s2future/backports/html/__pycache__/__init__.cpython-39.pycnu�[���PK�Cu\�L��(�(�9Mx2future/backports/html/__pycache__/entities.cpython-39.pycnu�[���PK�Cu\Q	*5*57�>3future/backports/html/__pycache__/parser.cpython-39.pycnu�[���PK�Cu\�FS�:M:Mot3future/backports/html/parser.pynu�[���PK�Cu\�H�g��3future/backports/__init__.pynu�[���PK�Cu\X�h~/=/=V�3future/backports/socket.pynu�[���PK�Cu\'( gg4�4future/backports/__pycache__/__init__.cpython-39.pycnu�[���PK�Cu\lu1���4�4future/backports/__pycache__/datetime.cpython-39.pycnu�[���PK�Cu\�{�7�72��4future/backports/__pycache__/socket.cpython-39.pycnu�[���PK�Cu\�]��p�p0��4future/backports/__pycache__/misc.cpython-39.pycnu�[���PK�Cu\��lr�V�V8�n5future/backports/__pycache__/socketserver.cpython-39.pycnu�[���PK�Cu\B��u�$�$7^�5future/backports/__pycache__/_markupbase.cpython-39.pycnu�[���PK�Cu\��-L��:��5future/backports/__pycache__/total_ordering.cpython-39.pycnu�[���PK�Cu\+&<<��"��5future/backports/total_ordering.pynu�[���PK�Cu\���J/�/�!��5future/backports/xmlrpc/client.pynu�[���PK�Cu\���I&&#4�6future/backports/xmlrpc/__init__.pynu�[���PK�Cu\���C��;��6future/backports/xmlrpc/__pycache__/__init__.cpython-39.pycnu�[���PK�Cu\>�D���9��6future/backports/xmlrpc/__pycache__/client.cpython-39.pycnu�[���PK�Cu\���t�t9?7future/backports/xmlrpc/__pycache__/server.cpython-39.pycnu�[���PK�Cu\	�_�����!�7future/backports/xmlrpc/server.pynu�[���PK�Cu\����ll
#F8future/backports/urllib/response.pynu�[���PK�Cu\#�R8future/backports/urllib/__init__.pynu�[���PKDu\���cc:$S8future/backports/urllib/__pycache__/request.cpython-39.pycnu�[���PKDu\_S}�e
e
8�c9future/backports/urllib/__pycache__/error.cpython-39.pycnu�[���PKDu\^����>�n9future/backports/urllib/__pycache__/robotparser.cpython-39.pycnu�[���PKDu\���ڥ�;�9future/backports/urllib/__pycache__/__init__.cpython-39.pycnu�[���PK
Du\���!�p�p8�9future/backports/urllib/__pycache__/parse.cpython-39.pycnu�[���PK
Du\�U�@@;-�9future/backports/urllib/__pycache__/response.cpython-39.pycnu�[���PKDu\+ܓ�
�
 �:future/backports/urllib/error.pynu�[���PKDu\~LD�ЋЋ �:future/backports/urllib/parse.pynu�[���PKDu\�Pxx"�:future/backports/urllib/request.pynu�[���PKDu\����&I<future/backports/urllib/robotparser.pynu�[���PKDu\P��!'!'p3<future/backports/datetime.pynu�[���PK#Du\���"�Z=future/backports/test/keycert2.pemnu�[���PK&Du\��g��2b=future/backports/test/dh512.pemnu�[���PK(Du\����!d=future/backports/test/__init__.pynu�[���PK+Du\��2�cc"le=future/backports/test/ssl_cert.pemnu�[���PK.Du\̺v	
	
3!i=future/backports/test/https_svn_python_org_root.pemnu�[���PK0Du\���;��!�s=future/backports/test/ssl_key.pemnu�[���PK3Du\�C�� rw=future/backports/test/support.pynu�[���PK5Du\��� ��>future/backports/test/pystone.pynu�[���PK:Du\d�|��9	�>future/backports/test/__pycache__/__init__.cpython-39.pycnu�[���PK=Du\#(�SS8%�>future/backports/test/__pycache__/pystone.cpython-39.pycnu�[���PKADu\�������8�>future/backports/test/__pycache__/support.cpython-39.pycnu�[���PKDDu\��E���<�?future/backports/test/__pycache__/ssl_servers.cpython-39.pycnu�[���PKIDu\��|())$*�?future/backports/test/ssl_servers.pynu�[���PKLDu\'�q� �  ��?future/backports/test/sha256.pemnu�[���PKNDu\���;;&��?future/backports/test/nullbytecert.pemnu�[���PKQDu\��|G�� @future/backports/test/nokia.pemnu�[���PKSDu\�,\��!�@future/backports/test/keycert.pemnu�[���PKVDu\":@future/backports/test/nullcert.pemnu�[���PKVDu\<�&&(�@future/backports/test/keycert.passwd.pemnu�[���PKYDu\�<����(
@future/backports/test/ssl_key.passwd.pemnu�[���PK[Du\0,5��!%"@future/backports/test/badcert.pemnu�[���PK`Du\�
��rr �)@future/backports/test/badkey.pemnu�[���PKbDu\��t�^�^ �2@future/backports/socketserver.pynu�[���PKgDu\dd����.�@future/backports/email/_header_value_parser.pynu�[���PKjDu\7L��DD!@+Bfuture/backports/email/charset.pynu�[���PKlDu\髠��X�X$�oBfuture/backports/email/feedparser.pynu�[���PKoDu\��@L@L#��Bfuture/backports/email/generator.pynu�[���PKrDu\������"gCfuture/backports/email/__init__.pynu�[���PKtDu\�'�7979%�Cfuture/backports/email/_policybase.pynu�[���PKwDu\��0�,	,	#"XCfuture/backports/email/iterators.pynu�[���PKyDu\)��� � (�aCfuture/backports/email/_encoded_words.pynu�[���PK|Du\���C�C$�Cfuture/backports/email/_parseaddr.pynu�[���PK�Du\P�U�
�
"5�Cfuture/backports/email/encoders.pynu�[���PK�Du\���fBfB8w�Cfuture/backports/email/__pycache__/header.cpython-39.pycnu�[���PK�Du\�����:EDfuture/backports/email/__pycache__/__init__.cpython-39.pycnu�[���PK�Du\+��mm;�Dfuture/backports/email/__pycache__/iterators.cpython-39.pycnu�[���PK�Du\y�
��R�R@l&Dfuture/backports/email/__pycache__/headerregistry.cpython-39.pycnu�[���PK�Du\�yI��@�yDfuture/backports/email/__pycache__/_encoded_words.cpython-39.pycnu�[���PK�Du\=ި�99=�Dfuture/backports/email/__pycache__/_policybase.cpython-39.pycnu�[���PK�Du\�L�
� � 8��Dfuture/backports/email/__pycache__/policy.cpython-39.pycnu�[���PK�Du\UKj^�)�)<��Dfuture/backports/email/__pycache__/feedparser.cpython-39.pycnu�[���PK�Du\�{�/�
�
<�Efuture/backports/email/__pycache__/base64mime.cpython-39.pycnu�[���PK�Du\4t_N$N$<�$Efuture/backports/email/__pycache__/quoprimime.cpython-39.pycnu�[���PK�Du\����-�-;cIEfuture/backports/email/__pycache__/generator.cpython-39.pycnu�[���PK�Du\��g��o�o9�wEfuture/backports/email/__pycache__/message.cpython-39.pycnu�[���PK�Du\���X.X.9��Efuture/backports/email/__pycache__/charset.cpython-39.pycnu�[���PK�Du\�H�rjj:�Ffuture/backports/email/__pycache__/encoders.cpython-39.pycnu�[���PK�Du\��N�x(x(7wFfuture/backports/email/__pycache__/utils.cpython-39.pycnu�[���PK�Du\�ɐ��8VHFfuture/backports/email/__pycache__/parser.cpython-39.pycnu�[���PK�Du\�m}^11<l`Ffuture/backports/email/__pycache__/_parseaddr.cpython-39.pycnu�[���PK�Du\�n^r8�Ffuture/backports/email/__pycache__/errors.cpython-39.pycnu�[���PK�Du\�}�8�8Fj�Ffuture/backports/email/__pycache__/_header_value_parser.cpython-39.pycnu�[���PK�Du\)��!�*�*$��Gfuture/backports/email/quoprimime.pynu�[���PK�Du\Wxl�� �
Hfuture/backports/email/parser.pynu�[���PK�Du\�';�P�P(�"Hfuture/backports/email/headerregistry.pynu�[���PK�Du\ӌ��_�_ �sHfuture/backports/email/header.pynu�[���PK�Du\��Ա�$��Hfuture/backports/email/base64mime.pynu�[���PK�Du\6ޜ�@@+��Hfuture/backports/email/mime/nonmultipart.pynu�[���PK�Du\'&�Hfuture/backports/email/mime/__init__.pynu�[���PK�Du\ԁyy*}�Hfuture/backports/email/mime/application.pynu�[���PK�Du\��?�
�
<P�Hfuture/backports/email/mime/__pycache__/audio.cpython-39.pycnu�[���PK�Du\��a;��?��Hfuture/backports/email/mime/__pycache__/__init__.cpython-39.pycnu�[���PK�Du\��~jj@��Hfuture/backports/email/mime/__pycache__/multipart.cpython-39.pycnu�[���PK�Du\�d�LLB��Hfuture/backports/email/mime/__pycache__/application.cpython-39.pycnu�[���PK�Du\�X�MM;EIfuture/backports/email/mime/__pycache__/base.cpython-39.pycnu�[���PK�Du\^th��C�
Ifuture/backports/email/mime/__pycache__/nonmultipart.cpython-39.pycnu�[���PK�Du\��q	��>)Ifuture/backports/email/mime/__pycache__/message.cpython-39.pycnu�[���PK�Du\,���		<:Ifuture/backports/email/mime/__pycache__/image.cpython-39.pycnu�[���PK�Du\�[����;�Ifuture/backports/email/mime/__pycache__/text.cpython-39.pycnu�[���PK�Du\v��#�#Ifuture/backports/email/mime/text.pynu�[���PK�Du\R�t��(:*Ifuture/backports/email/mime/multipart.pynu�[���PK�Du\��֕�&51Ifuture/backports/email/mime/message.pynu�[���PK�Du\�s��
�
$ 7Ifuture/backports/email/mime/audio.pynu�[���PK�Du\�G�kk#sBIfuture/backports/email/mime/base.pynu�[���PK�Du\�6�ss$1FIfuture/backports/email/mime/image.pynu�[���PK�Du\��d�����!�MIfuture/backports/email/message.pynu�[���PK�Du\-��/�7�7��Ifuture/backports/email/utils.pynu�[���PKEu\!��{`` �Jfuture/backports/email/errors.pynu�[���PKEu\ӥ�w"w" �Jfuture/backports/email/policy.pynu�[���PKEu\:���U�UrAJfuture/utils/__init__.pynu�[���PKEu\��ާO�O0��Jfuture/utils/__pycache__/__init__.cpython-39.pycnu�[���PKEu\0����7��Jfuture/utils/__pycache__/surrogateescape.cpython-39.pycnu�[���PKEu\
�Z���Jfuture/utils/surrogateescape.pynu�[���PK#Eu\�Kdistro/py.typednu�[���PK#Eu\�`p���7Kdistro/__init__.pynu�[���PK%Eu\T�i@@NKdistro/__main__.pynu�[���PK+Eu\�n$ii*�Kdistro/__pycache__/__init__.cpython-39.pycnu�[���PK.Eu\y�
P�P�(�Kdistro/__pycache__/distro.cpython-39.pycnu�[���PK0Eu\�2���*;�Kdistro/__pycache__/__main__.cpython-39.pycnu�[���PK3Eu\J:mk��r�Kdistro/distro.pynu�[���PK8Eu\�+[[�Lagent360-1.3.1.dist-info/WHEELnu�[���PK;Eu\�fc�33 q�Lagent360-1.3.1.dist-info/LICENSEnu�[���PK=Eu\#X�D&D&�Lagent360-1.3.1.dist-info/RECORDnu�[���PKBEu\����!��Lagent360-1.3.1.dist-info/METADATAnu�[���PKDEu\"��Lagent360-1.3.1.dist-info/REQUESTEDnu�[���PKDEu\r�m		&ֿLagent360-1.3.1.dist-info/top_level.txtnu�[���PKGEu\ ��WW)5�Lagent360-1.3.1.dist-info/entry_points.txtnu�[���PKIEu\���"�Lagent360-1.3.1.dist-info/INSTALLERnu�[���PKNEu\�e@�o�o;�Lagent360/agent360.pynu�[���PKQEu\j1Magent360/__init__.pynu�[���PKTEu\�ǎ]]�1Magent360/plugins/ping.pynu�[���PKVEu\z8��S>Magent360/plugins/redis_stat.pynu�[���PKYEu\gԒr
r
�TMagent360/plugins/docker.pynu�[���PK[Eu\���I�
�
LbMagent360/plugins/phpfpm.pynu�[���PK`Eu\��p_��smMagent360/plugins/asterisk.pynu�[���PKbEu\��#,,�qMagent360/plugins/kamailio.pynu�[���PKeEu\�q��	�	+tMagent360/plugins/plugins.pynu�[���PKhEu\<'�
�
`~Magent360/plugins/network.pynu�[���PKkEu\9����Magent360/plugins/dovecot.pynu�[���PKmEu\4�<���!�Magent360/plugins/plesk-cgroups.pynu�[���PKpEu\�ܓ����Magent360/plugins/gpu.pynu�[���PKrEu\a���Magent360/plugins/unbound.pynu�[���PKuEu\��.==@�Magent360/plugins/powerdns.pynu�[���PKxEu\�h)pjj��Magent360/plugins/proftpd.pynu�[���PKzEu\�M�ll~�Magent360/plugins/swap.pynu�[���PK}Eu\�%�**$2�Magent360/plugins/cloudlinux-dbgov.pynu�[���PK�Eu\����
�
��Magent360/plugins/rabbitmq.pynu�[���PK�Eu\��Magent360/plugins/__init__.pynu�[���PK�Eu\�h����9�Magent360/plugins/bind.pynu�[���PK�Eu\&:0�GG]�Magent360/plugins/cpu_freq.pynu�[���PK�Eu\��3]]��Magent360/plugins/memcached.pynu�[���PK�Eu\�F�?

��Magent360/plugins/process.pynu�[���PK�Eu\ϣYi���Nagent360/plugins/postfix.pynu�[���PK�Eu\�����+Nagent360/plugins/memory.pynu�[���PK�Eu\�;����Nagent360/plugins/haproxy.pynu�[���PK�Eu\`�4��%Nagent360/plugins/cloudlinux.pynu�[���PK�Eu\;l�	��`)Nagent360/plugins/vms.pynu�[���PK�Eu\Y/@G��wBNagent360/plugins/diskinodes.pynu�[���PK�Eu\���˯�QENagent360/plugins/openvpn.pynu�[���PK�Eu\�\����KNNagent360/plugins/nginx.pynu�[���PK�Eu\����ll_[Nagent360/plugins/mdstat.pynu�[���PK�Eu\��??aNagent360/plugins/fail2ban.pynu�[���PK�Eu\"22��3�dNagent360/plugins/__pycache__/megacli.cpython-39.pycnu�[���PK�Eu\1oFF2�lNagent360/plugins/__pycache__/iostat.cpython-39.pycnu�[���PK�Eu\��/{	{	2-xNagent360/plugins/__pycache__/phpfpm.cpython-39.pycnu�[���PK�Eu\3�c�``3
�Nagent360/plugins/__pycache__/haproxy.cpython-39.pycnu�[���PK�Eu\�����6͊Nagent360/plugins/__pycache__/diskinodes.cpython-39.pycnu�[���PK�Eu\�YL�^^6�Nagent360/plugins/__pycache__/wp-toolkit.cpython-39.pycnu�[���PK�Eu\��r�mm1ƖNagent360/plugins/__pycache__/mysql.cpython-39.pycnu�[���PK�Eu\tlT��4��Nagent360/plugins/__pycache__/__init__.cpython-39.pycnu�[���PK�Eu\�d�=	=	1��Nagent360/plugins/__pycache__/nginx.cpython-39.pycnu�[���PK�Eu\(ʠ�aa44�Nagent360/plugins/__pycache__/tcpports.cpython-39.pycnu�[���PK�Eu\#��}}3��Nagent360/plugins/__pycache__/openvpn.cpython-39.pycnu�[���PK�Eu\%�I??4ٺNagent360/plugins/__pycache__/asterisk.cpython-39.pycnu�[���PK�Eu\��
���1|�Nagent360/plugins/__pycache__/janus.cpython-39.pycnu�[���PK�Eu\��f��6��Nagent360/plugins/__pycache__/cloudlinux.cpython-39.pycnu�[���PK�Eu\=ߘ�QQ2��Nagent360/plugins/__pycache__/cpanel.cpython-39.pycnu�[���PK�Eu\��?�}}3\�Nagent360/plugins/__pycache__/dirsize.cpython-39.pycnu�[���PK�Eu\�5ۛrr0<�Nagent360/plugins/__pycache__/temp.cpython-39.pycnu�[���PK�Eu\�߷
��4�Nagent360/plugins/__pycache__/kamailio.cpython-39.pycnu�[���PK�Eu\�!X���0W�Nagent360/plugins/__pycache__/bind.cpython-39.pycnu�[���PK�Eu\��|���1��Nagent360/plugins/__pycache__/mailq.cpython-39.pycnu�[���PK�Eu\�o�^��4
�Nagent360/plugins/__pycache__/powerdns.cpython-39.pycnu�[���PK�Eu\��z`��6<�Nagent360/plugins/__pycache__/diskstatus.cpython-39.pycnu�[���PK�Eu\�+�}}7\�Nagent360/plugins/__pycache__/apt-updates.cpython-39.pycnu�[���PK�Eu\�Nr/@�Nagent360/plugins/__pycache__/cpu.cpython-39.pycnu�[���PK�Eu\z`�3t
t
2�Oagent360/plugins/__pycache__/docker.cpython-39.pycnu�[���PK�Eu\��X��
�
5z
Oagent360/plugins/__pycache__/diskusage.cpython-39.pycnu�[���PK�Eu\ �H�2�Oagent360/plugins/__pycache__/memory.cpython-39.pycnu�[���PK�Eu\_���,,3Oagent360/plugins/__pycache__/sleeper.cpython-39.pycnu�[���PK�Eu\�����/�Oagent360/plugins/__pycache__/vms.cpython-39.pycnu�[���PKFu\��|���2�2Oagent360/plugins/__pycache__/mdstat.cpython-39.pycnu�[���PKFu\�p
20�7Oagent360/plugins/__pycache__/exim.cpython-39.pycnu�[���PKFu\ꟍ0ss0!;Oagent360/plugins/__pycache__/swap.cpython-39.pycnu�[���PKFu\(��B&&0�=Oagent360/plugins/__pycache__/bird.cpython-39.pycnu�[���PK
Fu\�CPoo3zBOagent360/plugins/__pycache__/network.cpython-39.pycnu�[���PKFu\]�
�
9LIOagent360/plugins/__pycache__/plesk-cgroups.cpython-39.pycnu�[���PKFu\8�c��9�TOagent360/plugins/__pycache__/elasticsearch.cpython-39.pycnu�[���PKFu\=B��BB7�cOagent360/plugins/__pycache__/yum-updates.cpython-39.pycnu�[���PKFu\�7z���5�hOagent360/plugins/__pycache__/minecraft.cpython-39.pycnu�[���PKFu\y����<�qOagent360/plugins/__pycache__/cloudlinux-dbgov.cpython-39.pycnu�[���PKFu\�
��f	f	1
wOagent360/plugins/__pycache__/httpd.cpython-39.pycnu�[���PKFu\z=�SS4рOagent360/plugins/__pycache__/fail2ban.cpython-39.pycnu�[���PK Fu\�+|�ZZ3��Oagent360/plugins/__pycache__/dovecot.cpython-39.pycnu�[���PK#Fu\^e�&		0E�Oagent360/plugins/__pycache__/ping.cpython-39.pycnu�[���PK%Fu\ź[J��3��Oagent360/plugins/__pycache__/postfix.cpython-39.pycnu�[���PK(Fu\(�؟XX3�Oagent360/plugins/__pycache__/proftpd.cpython-39.pycnu�[���PK+Fu\�-���4��Oagent360/plugins/__pycache__/bitninja.cpython-39.pycnu�[���PK-Fu\Efo��3�Oagent360/plugins/__pycache__/mongodb.cpython-39.pycnu�[���PK0Fu\ZJPncc3ζOagent360/plugins/__pycache__/loadavg.cpython-39.pycnu�[���PK2Fu\8�+���5��Oagent360/plugins/__pycache__/litespeed.cpython-39.pycnu�[���PK5Fu\F�u[��5��Oagent360/plugins/__pycache__/memcached.cpython-39.pycnu�[���PK7Fu\q*�$
$
3��Oagent360/plugins/__pycache__/plugins.cpython-39.pycnu�[���PK:Fu\�����;"�Oagent360/plugins/__pycache__/diskstatus-nvme.cpython-39.pycnu�[���PK<Fu\�C��3y�Oagent360/plugins/__pycache__/unbound.cpython-39.pycnu�[���PKDFu\"��
�
3��Oagent360/plugins/__pycache__/process.cpython-39.pycnu�[���PKFFu\�e332M�Oagent360/plugins/__pycache__/system.cpython-39.pycnu�[���PKIFu\��^^4�Pagent360/plugins/__pycache__/rabbitmq.cpython-39.pycnu�[���PKKFu\+��ױ�4�
Pagent360/plugins/__pycache__/loggedin.cpython-39.pycnu�[���PKNFu\�4���6�Pagent360/plugins/__pycache__/redis_stat.cpython-39.pycnu�[���PKPFu\�7���4�Pagent360/plugins/__pycache__/cpu_freq.cpython-39.pycnu�[���PKSFu\}5�--/)Pagent360/plugins/__pycache__/gpu.cpython-39.pycnu�[���PKXFu\��G!,,�Pagent360/plugins/diskusage.pynu�[���PK[Fu\�����
�
.0Pagent360/plugins/megacli.pynu�[���PK]Fu\u��II\;Pagent360/plugins/bird.pynu�[���PKbFu\wQ=�LL�>Pagent360/plugins/loadavg.pynu�[���PKdFu\3������@Pagent360/plugins/apt-updates.pynu�[���PKgFu\QfF'���EPagent360/plugins/loggedin.pynu�[���PKiFu\�� (j	j	�GPagent360/plugins/minecraft.pynu�[���PKlFu\?���
�
RQPagent360/plugins/httpd.pynu�[���PKoFu\��a��\Pagent360/plugins/mailq.pynu�[���PKqFu\[�H	��!�^Pagent360/plugins/elasticsearch.pynu�[���PKtFu\L�̆+pPagent360/plugins/bitninja.pynu�[���PKvFu\蔱_���uPagent360/plugins/mysql.pynu�[���PKyFu\��`���^�Pagent360/plugins/system.pynu�[���PK{Fu\���ii[�Pagent360/plugins/dirsize.pynu�[���PK�Fu\�=�**�Pagent360/plugins/yum-updates.pynu�[���PK�Fu\��Ԭ����Pagent360/plugins/cpanel.pynu�[���PK�Fu\(.�**s�Pagent360/plugins/litespeed.pynu�[���PK�Fu\Q}2~���Pagent360/plugins/temp.pynu�[���PK�Fu\�S�bkk��Pagent360/plugins/iostat.pynu�[���PK�Fu\ݢ�M����Pagent360/plugins/mongodb.pynu�[���PK�Fu\�Q�VV��Pagent360/plugins/tcpports.pynu�[���PK�Fu\BE���"�Pagent360/plugins/janus.pynu�[���PK�Fu\��3�RR#,�Pagent360/plugins/diskstatus-nvme.pynu�[���PK�Fu\6��55�Pagent360/plugins/wp-toolkit.pynu�[���PK�Fu\S�@��TQagent360/plugins/sleeper.pynu�[���PK�Fu\El����Qagent360/plugins/exim.pynu�[���PK�Fu\?'�b���Qagent360/plugins/diskstatus.pynu�[���PK�Fu\�$�����Qagent360/plugins/cpu.pynu�[���PK�Fu\G�c���,
Qagent360/__pycache__/__init__.cpython-39.pycnu�[���PK�Fu\��S@�I�I,�Qagent360/__pycache__/agent360.cpython-39.pycnu�[���PK�Fu\��IR��$9`Qlibpasteurize/fixes/fix_unpacking.pynu�[���PK�Fu\�{���#&xQlibpasteurize/fixes/fix_division.pynu�[���PK�Fu\�g��(|Qlibpasteurize/fixes/fix_printfunction.pynu�[���PK�Fu\�}'�s
s
#�}Qlibpasteurize/fixes/fix_features.pynu�[���PK�Fu\�5�ii!��Qlibpasteurize/fixes/fix_getcwd.pynu�[���PK�Fu\a����3j�Qlibpasteurize/fixes/fix_add_all__future__imports.pynu�[���PK�Fu\ k�4��q�Qlibpasteurize/fixes/__init__.pynu�[���PK�Fu\mg_KK G�Qlibpasteurize/fixes/fix_raise.pynu�[���PK�Fu\�xx#�Qlibpasteurize/fixes/fix_newstyle.pynu�[���PK�Fu\��CC ��Qlibpasteurize/fixes/fix_throw.pynu�[���PK�Fu\T����B@�Qlibpasteurize/fixes/__pycache__/fix_future_builtins.cpython-39.pycnu�[���PK�Fu\C����9��Qlibpasteurize/fixes/__pycache__/fix_raise_.cpython-39.pycnu�[���PK�Fu\`��+��;��Qlibpasteurize/fixes/__pycache__/fix_features.cpython-39.pycnu�[���PK�Fu\Y��.K�Qlibpasteurize/fixes/__pycache__/fix_add_all__future__imports.cpython-39.pycnu�[���PK�Fu\�6u�887z�Qlibpasteurize/fixes/__pycache__/__init__.cpython-39.pycnu�[���PK�Fu\��<99;�Qlibpasteurize/fixes/__pycache__/fix_division.cpython-39.pycnu�[���PK�Fu\���=��J��Qlibpasteurize/fixes/__pycache__/fix_add_all_future_builtins.cpython-39.pycnu�[���PK�Fu\$�JJ:��Qlibpasteurize/fixes/__pycache__/fix_imports.cpython-39.pycnu�[���PK�Fu\WY���;��Qlibpasteurize/fixes/__pycache__/fix_newstyle.cpython-39.pycnu�[���PK�Fu\�`����8��Qlibpasteurize/fixes/__pycache__/fix_throw.cpython-39.pycnu�[���PK�Fu\�!�@��Qlibpasteurize/fixes/__pycache__/fix_printfunction.cpython-39.pycnu�[���PK�Fu\��LY��7%�Qlibpasteurize/fixes/__pycache__/fix_next.cpython-39.pycnu�[���PK�Fu\�BE..8��Qlibpasteurize/fixes/__pycache__/fix_raise.cpython-39.pycnu�[���PKGu\��k��>�Qlibpasteurize/fixes/__pycache__/fix_annotations.cpython-39.pycnu�[���PKGu\ �Ʊ��9Rlibpasteurize/fixes/__pycache__/fix_kwargs.cpython-39.pycnu�[���PKGu\������URlibpasteurize/fixes/__pycache__/fix_add_future_standard_library_import.cpython-39.pycnu�[���PKGu\�ƹ PP9rRlibpasteurize/fixes/__pycache__/fix_getcwd.cpython-39.pycnu�[���PKGu\$޹�@
@
;+Rlibpasteurize/fixes/__pycache__/feature_base.cpython-39.pycnu�[���PK
Gu\�h?��=�$Rlibpasteurize/fixes/__pycache__/fix_memoryview.cpython-39.pycnu�[���PKGu\Z�q���<�(Rlibpasteurize/fixes/__pycache__/fix_metaclass.cpython-39.pycnu�[���PKGu\@7]�OO<1Rlibpasteurize/fixes/__pycache__/fix_unpacking.cpython-39.pycnu�[���PKGu\�oƺqq;�GRlibpasteurize/fixes/__pycache__/fix_imports2.cpython-39.pycnu�[���PKGu\���JJ>�_Rlibpasteurize/fixes/__pycache__/fix_fullargspec.cpython-39.pycnu�[���PKGu\-�B��$`cRlibpasteurize/fixes/fix_metaclass.pynu�[���PK!Gu\j�hS�!�!#ppRlibpasteurize/fixes/fix_imports2.pynu�[���PK$Gu\C7YV��"G�Rlibpasteurize/fixes/fix_imports.pynu�[���PK'Gu\4<ǻ��&;�Rlibpasteurize/fixes/fix_fullargspec.pynu�[���PK)Gu\��1�--&G�Rlibpasteurize/fixes/fix_annotations.pynu�[���PK,Gu\VX@�gg!ʮRlibpasteurize/fixes/fix_kwargs.pynu�[���PK.Gu\n`����2��Rlibpasteurize/fixes/fix_add_all_future_builtins.pynu�[���PK1Gu\#�''%��Rlibpasteurize/fixes/fix_memoryview.pynu�[���PK3Gu\���U�Rlibpasteurize/fixes/fix_next.pynu�[���PK6Gu\%P���*u�Rlibpasteurize/fixes/fix_future_builtins.pynu�[���PK8Gu\w�y��!y�Rlibpasteurize/fixes/fix_raise_.pynu�[���PK;Gu\F1}��#��Rlibpasteurize/fixes/feature_base.pynu�[���PK@Gu\�bXї�=��Rlibpasteurize/fixes/fix_add_future_standard_library_import.pynu�[���PKEGu\4R��Rlibpasteurize/__init__.pynu�[���PKJGu\Ÿ!A��1
�Rlibpasteurize/__pycache__/__init__.cpython-39.pycnu�[���PKLGu\��^a��-	�Rlibpasteurize/__pycache__/main.cpython-39.pycnu�[���PKOGu\^���`Slibpasteurize/main.pynu�[���PK����!S