File: /usr/lib/python2.7/dist-packages/mercurial/worker.pyc
from __future__ import absolute_import

import errno
import os
import signal
import sys
import threading

from .i18n import _
from . import error


def countcpus():
    '''try to count the number of CPUs on the system'''

    # posix
    try:
        n = int(os.sysconf('SC_NPROCESSORS_ONLN'))
        if n > 0:
            return n
    except (AttributeError, ValueError):
        pass

    # windows
    try:
        n = int(os.environ['NUMBER_OF_PROCESSORS'])
        if n > 0:
            return n
    except (KeyError, ValueError):
        pass

    return 1


def _numworkers(ui):
    s = ui.config('worker', 'numcpus')
    if s:
        try:
            n = int(s)
            if n >= 1:
                return n
        except ValueError:
            raise error.Abort(_('number of cpus must be an integer'))
    return min(max(countcpus(), 4), 32)
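The clamp on the final line bounds the default worker count when no `worker.numcpus` override is configured. Assuming the 4 and 32 limits read out of the bytecode's constant table are right, the mapping from detected CPUs to workers looks like this stand-alone sketch:

```python
def clampworkers(ncpus):
    # hypothetical stand-alone rendering of _numworkers' final line;
    # the 4/32 bounds are an assumption read from the constant table
    return min(max(ncpus, 4), 32)

# a single-core box still gets 4 workers; a 64-core box is capped at 32
table = [clampworkers(n) for n in (1, 4, 16, 64)]
```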
if os.name == 'posix':
    _startupcost = 0.01
else:
    _startupcost = 1e30


def worthwhile(ui, costperop, nops):
    '''try to determine whether the benefit of multiple processes can
    outweigh the cost of starting them'''
    linear = costperop * nops
    workers = _numworkers(ui)
    benefit = linear - (_startupcost * workers + linear / workers)
    return benefit >= 0.15
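worthwhile() compares the cost of doing all nops operations serially against the forked alternative (per-worker startup overhead plus the work divided across workers). A pure-Python sketch of that cost model, with the worker count passed in directly instead of read from a ui object:

```python
_startupcost = 0.01  # the POSIX value from the module above

def benefit(costperop, nops, workers):
    # parallel benefit = serial cost - (startup overhead + parallel cost)
    linear = costperop * nops
    return linear - (_startupcost * workers + linear / workers)

# 1000 ops at ~1ms each across 4 workers comfortably clears the 0.15 bar;
# 10 ops do not, so worker() would run them inline instead of forking
big = benefit(0.001, 1000, 4)
small = benefit(0.001, 10, 4)
```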
def worker(ui, costperarg, func, staticargs, args):
    '''run a function, possibly in parallel in multiple worker
    processes.

    returns a progress iterator

    costperarg - cost of a single task

    func - function to run

    staticargs - arguments to pass to every invocation of the function

    args - arguments to split into chunks, to pass to individual
    workers
    '''
    if worthwhile(ui, costperarg, len(args)):
        return _platformworker(ui, func, staticargs, args)
    return func(*staticargs + (args,))


def _posixworker(ui, func, staticargs, args):
    rfd, wfd = os.pipe()
    workers = _numworkers(ui)
    oldhandler = signal.getsignal(signal.SIGINT)
    signal.signal(signal.SIGINT, signal.SIG_IGN)
    pids, problem = [], [0]
    for pargs in partition(args, workers):
        pid = os.fork()
        if pid == 0:
            signal.signal(signal.SIGINT, oldhandler)
            try:
                os.close(rfd)
                for i, item in func(*(staticargs + (pargs,))):
                    os.write(wfd, '%d %s\n' % (i, item))
                os._exit(0)
            except KeyboardInterrupt:
                os._exit(255)
        pids.append(pid)
    pids.reverse()
    os.close(wfd)
    fp = os.fdopen(rfd, 'rb', 0)

    def killworkers():
        # if one worker bails, there's no good reason to wait for the rest
        for p in pids:
            try:
                os.kill(p, signal.SIGTERM)
            except OSError as err:
                if err.errno != errno.ESRCH:
                    raise

    def waitforworkers():
        for _pid in pids:
            st = _exitstatus(os.wait()[1])
            if st and not problem[0]:
                problem[0] = st
                killworkers()

    t = threading.Thread(target=waitforworkers)
    t.start()

    def cleanup():
        signal.signal(signal.SIGINT, oldhandler)
        t.join()
        status = problem[0]
        if status:
            if status < 0:
                os.kill(os.getpid(), -status)
            sys.exit(status)

    try:
        for line in fp:
            l = line.split(' ', 1)
            yield int(l[0]), l[1][:-1]
    except:  # re-raises
        killworkers()
        cleanup()
        raise
    cleanup()
def _posixexitstatus(code):
    '''convert a posix exit status into the same form returned by
    os.spawnv

    returns None if the process was stopped instead of exiting'''
    if os.WIFEXITED(code):
        return os.WEXITSTATUS(code)
    elif os.WIFSIGNALED(code):
        return -os.WTERMSIG(code)
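The packed wait() status this function unpacks can be observed with two throwaway children, one exiting normally and one dying by signal (POSIX only); the helper below mirrors the same exit-code/negated-signal shape:

```python
import os
import signal

def exitstatus(code):
    # same shape as _posixexitstatus: exit code, or negated signal number
    if os.WIFEXITED(code):
        return os.WEXITSTATUS(code)
    elif os.WIFSIGNALED(code):
        return -os.WTERMSIG(code)

pid = os.fork()
if pid == 0:
    os._exit(3)                            # normal exit with status 3
exited = exitstatus(os.waitpid(pid, 0)[1])

pid = os.fork()
if pid == 0:
    os.kill(os.getpid(), signal.SIGKILL)   # die by signal
    os._exit(0)                            # not reached
signaled = exitstatus(os.waitpid(pid, 0)[1])
```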
def partition(lst, nslices):
    '''partition a list into N slices of roughly equal size

    The current strategy takes every Nth element from the input. If
    we ever write workers that need to preserve grouping in input
    we should consider allowing callers to specify a partition strategy.

    mpm is not a fan of this partitioning strategy when files are involved.
    In his words:

        Single-threaded Mercurial makes a point of creating and visiting
        files in a fixed order (alphabetical). When creating files in order,
        a typical filesystem is likely to allocate them on nearby regions on
        disk. Thus, when revisiting in the same order, locality is maximized
        and various forms of OS and disk-level caching and read-ahead get a
        chance to work.

        This effect can be quite significant on spinning disks. I discovered it
        circa Mercurial v0.4 when revlogs were named by hashes of filenames.
        Tarring a repo and copying it to another disk effectively randomized
        the revlog ordering on disk by sorting the revlogs by hash and suddenly
        performance of my kernel checkout benchmark dropped by ~10x because the
        "working set" of sectors visited no longer fit in the drive's cache and
        the workload switched from streaming to random I/O.

        What we should really be doing is have workers read filenames from an
        ordered queue. This preserves locality and also keeps any worker from
        getting more than one file out of balance.
    '''
    for i in range(nslices):
        yield lst[i::nslices]
if os.name == 'posix':
    _platformworker = _posixworker
    _exitstatus = _posixexitstatus
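When worthwhile() says no, worker() simply calls func(*staticargs + (args,)) in-process and returns whatever func yields; _posixworker then writes each yielded pair down the pipe as a '%d %s' record. A sketch of that calling convention with hypothetical stand-ins (none of these names come from worker.py itself):

```python
def double(prefix, items):
    # plays the role of 'func': static arguments first, then the args
    # chunk; yields (unit-of-progress, result) pairs as worker() expects
    for x in items:
        yield 1, '%s%d' % (prefix, x * 2)

# the inline path of worker(): func(*staticargs + (args,))
staticargs, args = ('r',), [1, 2, 3]
results = list(double(*staticargs + (args,)))
```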