[Python-checkins] cpython (merge default -> default): commit merge
skip.montanaro python-checkins at python.org
Sat Mar 19 19:08:05 CET 2011
http://hg.python.org/cpython/rev/64eeb4cd4b56
changeset:   68683:64eeb4cd4b56
parent:      68682:c63d7374b89a
parent:      68678:dfceb98767c0
user:        Skip Montanaro <skip at pobox.com>
date:        Sat Mar 19 09:15:28 2011 -0500
summary:     commit merge
files:
diff --git a/Doc/includes/sqlite3/shared_cache.py b/Doc/includes/sqlite3/shared_cache.py
--- a/Doc/includes/sqlite3/shared_cache.py
+++ b/Doc/includes/sqlite3/shared_cache.py
@@ -1,6 +1,6 @@
import sqlite3
# The shared cache is only available in SQLite versions 3.3.3 or later
-# See the SQLite documentaton for details.
+# See the SQLite documentation for details.
sqlite3.enable_shared_cache(True)
diff --git a/Doc/library/os.rst b/Doc/library/os.rst
--- a/Doc/library/os.rst
+++ b/Doc/library/os.rst
@@ -743,6 +743,17 @@
.. versionadded:: 3.3
+.. function:: fexecve(fd, args, env)
+
+ Execute the program specified by a file descriptor fd with arguments given
+ by args and environment given by env, replacing the current process.
+ args and env are given as in :func:`execve`.
+
+ Availability: Unix.
+
+ .. versionadded:: 3.3
+
+
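A minimal sketch of how fexecve() can be used, mirroring the test added further down in this patch (Unix only; note the call replaces the current process, so it is normally run in a forked child):

    import os, sys

    fd = os.open(sys.executable, os.O_RDONLY)
    pid = os.fork()
    if pid == 0:
        # In the child: exec the program behind the already-open descriptor.
        os.fexecve(fd, [sys.executable, "-c", "print('hello from fexecve')"], os.environ)
    else:
        os.close(fd)
        os.wait()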
.. function:: fpathconf(fd, name)
Return system configuration information relevant to an open file. name
@@ -819,6 +830,45 @@
.. versionadded:: 3.3
+.. function:: futimens(fd, (atime_sec, atime_nsec), (mtime_sec, mtime_nsec))
+ futimens(fd, None, None)
+
+ Updates the timestamps of a file specified by the file descriptor fd, with
+ nanosecond precision.
+ The second form sets atime and mtime to the current time.
+ If atime_nsec or mtime_nsec is specified as :data:`UTIME_NOW`, the corresponding
+ timestamp is updated to the current time.
+ If atime_nsec or mtime_nsec is specified as :data:`UTIME_OMIT`, the corresponding
+ timestamp is not updated.
+
+ Availability: Unix.
+
+ .. versionadded:: 3.3
+
+
+.. data:: UTIME_NOW
+ UTIME_OMIT
+
+ Flags used with :func:`futimens` to specify that the timestamp must be
+ updated either to the current time or not updated at all.
+
+ Availability: Unix.
+
+ .. versionadded:: 3.3
+
+
+.. function:: futimes(fd, (atime, mtime))
+ futimes(fd, None)
+
+ Set the access and modified time of the file specified by the file
+ descriptor fd to the given values. If the second form is used, set the
+ access and modified times to the current time.
+
+ Availability: Unix.
+
+ .. versionadded:: 3.3
+
+
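A rough sketch of the two calls as documented in this hunk (Unix only; the file name is a placeholder, and on later Python releases the same effect is obtained through os.utime()):

    import os, time

    fd = os.open("example.txt", os.O_RDWR | os.O_CREAT)  # placeholder file name
    try:
        now = time.time()
        # futimens() takes (seconds, nanoseconds) pairs for atime and mtime.
        os.futimens(fd, (int(now), 0), (int(now), 0))
        # futimes(fd, None) sets both timestamps to the current time.
        os.futimes(fd, None)
    finally:
        os.close(fd)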
.. function:: isatty(fd)
Return ``True`` if the file descriptor fd is open and connected to a
@@ -841,6 +891,30 @@
.. versionadded:: 3.3
+.. function:: lockf(fd, cmd, len)
+
+ Apply, test or remove a POSIX lock on an open file descriptor.
+ fd is an open file descriptor.
+ cmd specifies the command to use - one of :data:`F_LOCK`, :data:`F_TLOCK`,
+ :data:`F_ULOCK` or :data:`F_TEST`.
+ len specifies the section of the file to lock.
+
+ Availability: Unix.
+
+ .. versionadded:: 3.3
+
+
+.. data:: F_LOCK
+ F_TLOCK
+ F_ULOCK
+ F_TEST
+
+ Flags that specify what action :func:`lockf` will take.
+
+ Availability: Unix.
+
+ .. versionadded:: 3.3
+
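A small usage sketch for lockf(), following the test added later in this patch (Unix only; the file name is a placeholder):

    import os

    fd = os.open("example.dat", os.O_WRONLY | os.O_CREAT)
    try:
        os.write(fd, b"test")
        os.lseek(fd, 0, os.SEEK_SET)
        os.lockf(fd, os.F_LOCK, 4)   # lock the first four bytes
        # ... the locked section is now protected from other processes ...
        os.lockf(fd, os.F_ULOCK, 4)  # release the lock
    finally:
        os.close(fd)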
.. function:: lseek(fd, pos, how)
Set the current position of file descriptor fd to position pos, modified
@@ -945,6 +1019,66 @@
Availability: Unix, Windows.
+.. function:: posix_fallocate(fd, offset, len)
+
+ Ensures that enough disk space is allocated for the file specified by fd
+ starting from offset and continuing for len bytes.
+
+ Availability: Unix.
+
+ .. versionadded:: 3.3
+
+
+.. function:: posix_fadvise(fd, offset, len, advice)
+
+ Announces an intention to access data in a specific pattern thus allowing
+ the kernel to make optimizations.
+ The advice applies to the region of the file specified by fd starting at
+ offset and continuing for len bytes.
+ advice is one of :data:`POSIX_FADV_NORMAL`, :data:`POSIX_FADV_SEQUENTIAL`,
+ :data:`POSIX_FADV_RANDOM`, :data:`POSIX_FADV_NOREUSE`,
+ :data:`POSIX_FADV_WILLNEED` or :data:`POSIX_FADV_DONTNEED`.
+
+ Availability: Unix.
+
+ .. versionadded:: 3.3
+
+
+.. data:: POSIX_FADV_NORMAL
+ POSIX_FADV_SEQUENTIAL
+ POSIX_FADV_RANDOM
+ POSIX_FADV_NOREUSE
+ POSIX_FADV_WILLNEED
+ POSIX_FADV_DONTNEED
+
+ Flags that can be used in advice in :func:`posix_fadvise` that specify
+ the access pattern that is likely to be used.
+
+ Availability: Unix.
+
+ .. versionadded:: 3.3
+
+
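A brief sketch combining the two calls above (Unix only; the file name is a placeholder, and posix_fallocate() may fail on file systems that do not support it, as the test below notes for ZFS):

    import os

    fd = os.open("big.log", os.O_RDWR | os.O_CREAT)
    try:
        # Reserve 1 MiB of disk space up front ...
        os.posix_fallocate(fd, 0, 1024 * 1024)
        # ... then hint that the file will be read sequentially
        # (len=0 means "to the end of the file").
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_SEQUENTIAL)
    finally:
        os.close(fd)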
+.. function:: pread(fd, buffersize, offset)
+
+ Read from a file descriptor, fd, at a position of offset. It will read up
+ to buffersize number of bytes. The file offset remains unchanged.
+
+ Availability: Unix.
+
+ .. versionadded:: 3.3
+
+
+.. function:: pwrite(fd, string, offset)
+
+ Write string to a file descriptor, fd, from offset, leaving the file
+ offset unchanged.
+
+ Availability: Unix.
+
+ .. versionadded:: 3.3
+
+
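A short sketch showing that pread()/pwrite() leave the regular file offset untouched (Unix only; the file name is a placeholder):

    import os

    fd = os.open("data.bin", os.O_RDWR | os.O_CREAT)
    try:
        os.write(fd, b"test")
        os.lseek(fd, 0, os.SEEK_SET)
        print(os.pread(fd, 2, 1))   # b'es' -- read 2 bytes at offset 1
        os.pwrite(fd, b"xx", 1)     # write at offset 1
        print(os.read(fd, 4))       # b'txxt' -- the file offset never moved
    finally:
        os.close(fd)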
.. function:: read(fd, n)
Read at most n bytes from file descriptor fd. Return a bytestring containing the
@@ -1038,6 +1172,17 @@
.. versionadded:: 3.3
+.. function:: readv(fd, buffers)
+
+ Read from a file descriptor into a number of writable buffers. buffers is
+ an arbitrary sequence of writable buffers. Returns the total number of bytes
+ read.
+
+ Availability: Unix.
+
+ .. versionadded:: 3.3
+
+
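A scatter-read sketch for readv() (Unix only; the file name is a placeholder):

    import os

    fd = os.open("data.bin", os.O_RDWR | os.O_CREAT)
    try:
        os.write(fd, b"0123456789")
        os.lseek(fd, 0, os.SEEK_SET)
        buf1, buf2 = bytearray(4), bytearray(4)
        # Fill both writable buffers with a single system call.
        nread = os.readv(fd, [buf1, buf2])
        print(nread, bytes(buf1), bytes(buf2))  # 8 b'0123' b'4567'
    finally:
        os.close(fd)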
.. function:: tcgetpgrp(fd)
Return the process group associated with the terminal given by fd (an open
@@ -1111,6 +1256,17 @@
:meth:`~file.write` method.
+.. function:: writev(fd, buffers)
+
+ Write the contents of buffers to file descriptor fd, where buffers
+ is an arbitrary sequence of buffers.
+ Returns the total number of bytes written.
+
+ Availability: Unix.
+
+ .. versionadded:: 3.3
+
+
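A gather-write sketch for writev(), mirroring the test added below (Unix only; the file name is a placeholder):

    import os

    fd = os.open("out.bin", os.O_WRONLY | os.O_CREAT)
    try:
        # Write three separate buffers with one system call.
        nwritten = os.writev(fd, [b"test1", b"tt2", b"t3"])
        print(nwritten)  # 10
    finally:
        os.close(fd)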
.. _open-constants:
``open()`` flag constants
@@ -1384,6 +1540,17 @@
Added support for Windows 6.0 (Vista) symbolic links.
+.. function:: lutimes(path, (atime, mtime))
+ lutimes(path, None)
+
+ Like :func:`utime`, but if path is a symbolic link, it is not
+ dereferenced.
+
+ Availability: Unix.
+
+ .. versionadded:: 3.3
+
+
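A sketch of lutimes() on a symbolic link, using the signature documented here (Unix only; both names are placeholders, and later Python releases expose this as os.utime(..., follow_symlinks=False)):

    import os, time

    os.symlink("target.txt", "link.txt")  # placeholder names
    now = time.time()
    # Update the timestamps of the link itself, not of target.txt.
    os.lutimes("link.txt", (now, now))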
.. function:: mkfifo(path[, mode])
Create a FIFO (a named pipe) named path with numeric mode mode. The
@@ -1727,6 +1894,25 @@
Added support for Windows 6.0 (Vista) symbolic links.
+.. function:: sync()
+
+ Force write of everything to disk.
+
+ Availability: Unix.
+
+ .. versionadded:: 3.3
+
+
+.. function:: truncate(path, length)
+
+ Truncate the file corresponding to path, so that it is at most
+ length bytes in size.
+
+ Availability: Unix.
+
+ .. versionadded:: 3.3
+
+
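A short sketch of the two new path-based helpers (Unix only; the file name is a placeholder):

    import os

    with open("example.txt", "w") as fp:  # placeholder file name
        fp.write("some test data")

    os.truncate("example.txt", 4)  # keep only the first four bytes
    os.sync()                      # flush pending writes for all filesystems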
.. function:: unlink(path)
Remove (delete) the file path. This is the same function as
@@ -2306,6 +2492,58 @@
Availability: Unix.
+.. function:: waitid(idtype, id, options)
+
+ Wait for the completion of one or more child processes.
+ idtype can be :data:`P_PID`, :data:`P_PGID` or :data:`P_ALL`.
+ id specifies the pid to wait on.
+ options is constructed from the ORing of one or more of :data:`WEXITED`,
+ :data:`WSTOPPED` or :data:`WCONTINUED` and additionally may be ORed with
+ :data:`WNOHANG` or :data:`WNOWAIT`. The return value is an object
+ representing the data contained in the :c:type:`siginfo_t` structure, namely:
+ :attr:`si_pid`, :attr:`si_uid`, :attr:`si_signo`, :attr:`si_status`,
+ :attr:`si_code` or ``None`` if :data:`WNOHANG` is specified and there are no
+ children in a waitable state.
+
+ Availability: Unix.
+
+ .. versionadded:: 3.3
+
+.. data:: P_PID
+ P_PGID
+ P_ALL
+
+ These are the possible values for idtype in :func:`waitid`. They affect
+ how id is interpreted.
+
+ Availability: Unix.
+
+ .. versionadded:: 3.3
+
+.. data:: WEXITED
+ WSTOPPED
+ WNOWAIT
+
+ Flags that can be used in options in :func:`waitid` that specify what
+ child signal to wait for.
+
+ Availability: Unix.
+
+ .. versionadded:: 3.3
+
+
+.. data:: CLD_EXITED
+ CLD_DUMPED
+ CLD_TRAPPED
+ CLD_CONTINUED
+
+ These are the possible values for :attr:`si_code` in the result returned by
+ :func:`waitid`.
+
+ Availability: Unix.
+
+ .. versionadded:: 3.3
+
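A minimal waitid() sketch along the lines of the test added below (Unix only):

    import os, sys

    pid = os.fork()
    if pid == 0:
        os.execv(sys.executable, [sys.executable, "-c", "pass"])
    else:
        # Block until the child exits, then inspect the siginfo_t-like result.
        res = os.waitid(os.P_PID, pid, os.WEXITED)
        print(res.si_pid, res.si_code, res.si_status)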
.. function:: waitpid(pid, options)
diff --git a/Doc/library/subprocess.rst b/Doc/library/subprocess.rst
--- a/Doc/library/subprocess.rst
+++ b/Doc/library/subprocess.rst
@@ -125,12 +125,14 @@
stdin, stdout and stderr specify the executed programs' standard input,
standard output and standard error file handles, respectively. Valid values
- are :data:`PIPE`, an existing file descriptor (a positive integer), an
- existing :term:`file object`, and ``None``. :data:`PIPE` indicates that a
- new pipe to the child should be created. With ``None``, no redirection will
- occur; the child's file handles will be inherited from the parent. Additionally,
- stderr can be :data:`STDOUT`, which indicates that the stderr data from the
- applications should be captured into the same file handle as for stdout.
+ are :data:`PIPE`, :data:`DEVNULL`, an existing file descriptor (a positive
+ integer), an existing :term:`file object`, and ``None``. :data:`PIPE`
+ indicates that a new pipe to the child should be created. :data:`DEVNULL`
+ indicates that the special file :data:`os.devnull` will be used. With ``None``,
+ no redirection will occur; the child's file handles will be inherited from
+ the parent. Additionally, stderr can be :data:`STDOUT`, which indicates
+ that the stderr data from the applications should be captured into the same
+ file handle as for stdout.
If preexec_fn is set to a callable object, this object will be called in the
child process just before the child is executed.
@@ -229,6 +231,15 @@
Added context manager support.
+.. data:: DEVNULL
+
+ Special value that can be used as the stdin, stdout or stderr argument
+ to :class:`Popen` and indicates that the special file :data:`os.devnull`
+ will be used.
+
+ .. versionadded:: 3.3
+
+
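A quick sketch of the new constant in use (the command is an arbitrary Unix example):

    import subprocess

    # Discard the command's output instead of inheriting the parent's handles.
    subprocess.check_call(["ls", "-l"],
                          stdout=subprocess.DEVNULL,
                          stderr=subprocess.DEVNULL)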
.. data:: PIPE
Special value that can be used as the stdin, stdout or stderr argument
@@ -387,7 +398,7 @@
:func:`call` and :meth:`Popen.communicate` will raise :exc:`TimeoutExpired` if
the timeout expires before the process exits.
-Exceptions defined in this module all inherit from :ext:`SubprocessError`.
+Exceptions defined in this module all inherit from :exc:`SubprocessError`.
.. versionadded:: 3.3
The :exc:`SubprocessError` base class was added.
diff --git a/Include/abstract.h b/Include/abstract.h
--- a/Include/abstract.h
+++ b/Include/abstract.h
@@ -468,7 +468,7 @@
arbitrary data.
0 is returned on success. buffer and buffer_len are only
- set in case no error occurrs. Otherwise, -1 is returned and
+ set in case no error occurs. Otherwise, -1 is returned and
an exception set.
*/
@@ -482,7 +482,7 @@
writable memory location in buffer of size buffer_len.
0 is returned on success. buffer and buffer_len are only
- set in case no error occurrs. Otherwise, -1 is returned and
+ set in case no error occurs. Otherwise, -1 is returned and
an exception set.
*/
diff --git a/Include/pymacconfig.h b/Include/pymacconfig.h
--- a/Include/pymacconfig.h
+++ b/Include/pymacconfig.h
@@ -61,7 +61,7 @@
# endif
# if defined(LP64)
- /* MacOSX 10.4 (the first release to suppport 64-bit code
+ /* MacOSX 10.4 (the first release to support 64-bit code
* at all) only supports 64-bit in the UNIX layer.
* Therefore surpress the toolbox-glue in 64-bit mode.
*/
diff --git a/Lib/binhex.py b/Lib/binhex.py
--- a/Lib/binhex.py
+++ b/Lib/binhex.py
@@ -52,14 +52,13 @@
def getfileinfo(name):
finfo = FInfo()
- fp = io.open(name, 'rb')
- # Quick check for textfile
- data = fp.read(512)
- if 0 not in data:
- finfo.Type = 'TEXT'
- fp.seek(0, 2)
- dsize = fp.tell()
- fp.close()
+ with io.open(name, 'rb') as fp:
+ # Quick check for textfile
+ data = fp.read(512)
+ if 0 not in data:
+ finfo.Type = 'TEXT'
+ fp.seek(0, 2)
+ dsize = fp.tell()
dir, file = os.path.split(name)
file = file.replace(':', '-', 1)
return file, finfo, dsize, 0
@@ -140,19 +139,26 @@
class BinHex:
def init(self, name_finfo_dlen_rlen, ofp):
name, finfo, dlen, rlen = name_finfo_dlen_rlen
+ close_on_error = False
if isinstance(ofp, str):
ofname = ofp
ofp = io.open(ofname, 'wb')
- ofp.write(b'(This file must be converted with BinHex 4.0)\r\r:')
- hqxer = _Hqxcoderengine(ofp)
- self.ofp = _Rlecoderengine(hqxer)
- self.crc = 0
- if finfo is None:
- finfo = FInfo()
- self.dlen = dlen
- self.rlen = rlen
- self._writeinfo(name, finfo)
- self.state = _DID_HEADER
+ close_on_error = True
+ try:
+ ofp.write(b'(This file must be converted with BinHex 4.0)\r\r:')
+ hqxer = _Hqxcoderengine(ofp)
+ self.ofp = _Rlecoderengine(hqxer)
+ self.crc = 0
+ if finfo is None:
+ finfo = FInfo()
+ self.dlen = dlen
+ self.rlen = rlen
+ self._writeinfo(name, finfo)
+ self.state = _DID_HEADER
+ except:
+ if close_on_error:
+ ofp.close()
+ raise
def _writeinfo(self, name, finfo):
nl = len(name)
diff --git a/Lib/csv.py b/Lib/csv.py
--- a/Lib/csv.py
+++ b/Lib/csv.py
@@ -284,7 +284,7 @@
an all or nothing approach, so we allow for small variations in this
number.
1) build a table of the frequency of each character on every line.
- 2) build a table of freqencies of this frequency (meta-frequency?),
+ 2) build a table of frequencies of this frequency (meta-frequency?),
e.g. 'x occurred 5 times in 10 rows, 6 times in 1000 rows,
7 times in 2 rows'
3) use the mode of the meta-frequency to determine the /expected/
diff --git a/Lib/ctypes/test/test_arrays.py b/Lib/ctypes/test/test_arrays.py
--- a/Lib/ctypes/test/test_arrays.py
+++ b/Lib/ctypes/test/test_arrays.py
@@ -37,7 +37,7 @@
values = [ia[i] for i in range(len(init))]
self.assertEqual(values, [0] * len(init))
- # Too many in itializers should be caught
+ # Too many initializers should be caught
self.assertRaises(IndexError, int_array, range(alen2))
CharArray = ARRAY(c_char, 3)
diff --git a/Lib/ctypes/test/test_init.py b/Lib/ctypes/test/test_init.py
--- a/Lib/ctypes/test/test_init.py
+++ b/Lib/ctypes/test/test_init.py
@@ -27,7 +27,7 @@
self.assertEqual((y.x.a, y.x.b), (0, 0))
self.assertEqual(y.x.new_was_called, False)
- # But explicitely creating an X structure calls new and init, of course.
+ # But explicitly creating an X structure calls new and init, of course.
x = X()
self.assertEqual((x.a, x.b), (9, 12))
self.assertEqual(x.new_was_called, True)
diff --git a/Lib/ctypes/test/test_numbers.py b/Lib/ctypes/test/test_numbers.py
--- a/Lib/ctypes/test/test_numbers.py
+++ b/Lib/ctypes/test/test_numbers.py
@@ -157,7 +157,7 @@
def test_int_from_address(self):
from array import array
for t in signed_types + unsigned_types:
- # the array module doesn't suppport all format codes
+ # the array module doesn't support all format codes
# (no 'q' or 'Q')
try:
array(t.type)
diff --git a/Lib/ctypes/test/test_win32.py b/Lib/ctypes/test/test_win32.py
--- a/Lib/ctypes/test/test_win32.py
+++ b/Lib/ctypes/test/test_win32.py
@@ -17,7 +17,7 @@
# ValueError: Procedure probably called with not enough arguments (4 bytes missing)
self.assertRaises(ValueError, IsWindow)
- # This one should succeeed...
+ # This one should succeed...
self.assertEqual(0, IsWindow(0))
# ValueError: Procedure probably called with too many arguments (8 bytes in excess)
diff --git a/Lib/difflib.py b/Lib/difflib.py
--- a/Lib/difflib.py
+++ b/Lib/difflib.py
@@ -1719,7 +1719,7 @@
line = line.replace(' ','\0')
# expand tabs into spaces
line = line.expandtabs(self._tabsize)
- # relace spaces from expanded tabs back into tab characters
+ # replace spaces from expanded tabs back into tab characters
# (we'll replace them with markup after we do differencing)
line = line.replace(' ','\t')
return line.replace('\0',' ').rstrip('\n')
diff --git a/Lib/distutils/cmd.py b/Lib/distutils/cmd.py
--- a/Lib/distutils/cmd.py
+++ b/Lib/distutils/cmd.py
@@ -359,7 +359,7 @@
not self.force, dry_run=self.dry_run)
def move_file (self, src, dst, level=1):
- """Move a file respectin dry-run flag."""
+ """Move a file respecting dry-run flag."""
return file_util.move_file(src, dst, dry_run=self.dry_run)
def spawn(self, cmd, search_path=1, level=1):
diff --git a/Lib/distutils/cygwinccompiler.py b/Lib/distutils/cygwinccompiler.py
--- a/Lib/distutils/cygwinccompiler.py
+++ b/Lib/distutils/cygwinccompiler.py
@@ -155,7 +155,7 @@
self.dll_libraries = get_msvcr()
def _compile(self, obj, src, ext, cc_args, extra_postargs, pp_opts):
- """Compiles the source by spawing GCC and windres if needed."""
+ """Compiles the source by spawning GCC and windres if needed."""
if ext == '.rc' or ext == '.res':
# gcc needs '.res' and '.rc' compiled to object files !!!
try:
diff --git a/Lib/distutils/tests/test_clean.py b/Lib/distutils/tests/test_clean.py
--- a/Lib/distutils/tests/test_clean.py
+++ b/Lib/distutils/tests/test_clean.py
@@ -39,7 +39,7 @@
self.assertTrue(not os.path.exists(path),
'%s was not removed' % path)
- # let's run the command again (should spit warnings but suceed)
+ # let's run the command again (should spit warnings but succeed)
cmd.all = 1
cmd.ensure_finalized()
cmd.run()
diff --git a/Lib/distutils/tests/test_install.py b/Lib/distutils/tests/test_install.py
--- a/Lib/distutils/tests/test_install.py
+++ b/Lib/distutils/tests/test_install.py
@@ -62,7 +62,7 @@
if sys.version < '2.6':
return
- # preparing the environement for the test
+ # preparing the environment for the test
self.old_user_base = site.USER_BASE
self.old_user_site = site.USER_SITE
self.tmpdir = self.mkdtemp()
diff --git a/Lib/distutils/tests/test_sdist.py b/Lib/distutils/tests/test_sdist.py
--- a/Lib/distutils/tests/test_sdist.py
+++ b/Lib/distutils/tests/test_sdist.py
@@ -311,7 +311,7 @@
# adding a file
self.write_file((self.tmp_dir, 'somecode', 'doc2.txt'), '#')
- # make sure build_py is reinitinialized, like a fresh run
+ # make sure build_py is reinitialized, like a fresh run
build_py = dist.get_command_obj('build_py')
build_py.finalized = False
build_py.ensure_finalized()
diff --git a/Lib/doctest.py b/Lib/doctest.py
--- a/Lib/doctest.py
+++ b/Lib/doctest.py
@@ -1211,7 +1211,7 @@
# Process each example.
for examplenum, example in enumerate(test.examples):
- # If REPORT_ONLY_FIRST_FAILURE is set, then supress
+ # If REPORT_ONLY_FIRST_FAILURE is set, then suppress
# reporting after the first failure.
quiet = (self.optionflags & REPORT_ONLY_FIRST_FAILURE and
failures > 0)
@@ -2135,7 +2135,7 @@
caller can catch the errors and initiate post-mortem debugging.
The DocTestCase provides a debug method that raises
- UnexpectedException errors if there is an unexepcted
+ UnexpectedException errors if there is an unexpected
exception:
>>> test = DocTestParser().get_doctest('>>> raise KeyError\n42',
diff --git a/Lib/email/encoders.py b/Lib/email/encoders.py
--- a/Lib/email/encoders.py
+++ b/Lib/email/encoders.py
@@ -12,7 +12,7 @@
]
-from base64 import b64encode as _bencode
+from base64 import encodebytes as _bencode
from quopri import encodestring as _encodestring
diff --git a/Lib/email/header.py b/Lib/email/header.py
--- a/Lib/email/header.py
+++ b/Lib/email/header.py
@@ -47,7 +47,7 @@
# For use with .match()
fcre = re.compile(r'[\041-\176]+:$')
-# Find a header embeded in a putative header value. Used to check for
+# Find a header embedded in a putative header value. Used to check for
# header injection attack.
_embeded_header = re.compile(r'\n[^ \t]+:')
@@ -314,7 +314,7 @@
self._continuation_ws, splitchars)
for string, charset in self._chunks:
lines = string.splitlines()
- formatter.feed(lines[0], charset)
+ formatter.feed(lines[0] if lines else '', charset)
for line in lines[1:]:
formatter.newline()
if charset.header_encoding is not None:
diff --git a/Lib/email/message.py b/Lib/email/message.py
--- a/Lib/email/message.py
+++ b/Lib/email/message.py
@@ -48,9 +48,9 @@
def _splitparam(param):
# Split header parameters. BAW: this may be too simple. It isn't
# strictly RFC 2045 (section 5.1) compliant, but it catches most headers
- # found in the wild. We may eventually need a full fledged parser
- # eventually.
- a, sep, b = param.partition(';')
+ # found in the wild. We may eventually need a full fledged parser.
+ # RDM: we might have a Header here; for now just stringify it.
+ a, sep, b = str(param).partition(';')
if not sep:
return a.strip(), None
return a.strip(), b.strip()
@@ -90,6 +90,8 @@
return param
def _parseparam(s):
+ # RDM This might be a Header, so for now stringify it.
+ s = ';' + str(s)
plist = []
while s[:1] == ';':
s = s[1:]
@@ -240,7 +242,8 @@
if i is not None and not isinstance(self._payload, list):
raise TypeError('Expected list, got %s' % type(self._payload))
payload = self._payload
- cte = self.get('content-transfer-encoding', '').lower()
+ # cte might be a Header, so for now stringify it.
+ cte = str(self.get('content-transfer-encoding', '')).lower()
# payload may be bytes here.
if isinstance(payload, str):
if _has_surrogates(payload):
@@ -561,7 +564,7 @@
if value is missing:
return failobj
params = []
- for p in _parseparam(';' + value):
+ for p in _parseparam(value):
try:
name, val = p.split('=', 1)
name = name.strip()
diff --git a/Lib/email/test/test_email.py b/Lib/email/test/test_email.py
--- a/Lib/email/test/test_email.py
+++ b/Lib/email/test/test_email.py
@@ -573,9 +573,18 @@
msg['Dummy'] = 'dummy\nX-Injected-Header: test'
self.assertRaises(errors.HeaderParseError, msg.as_string)
Test the email.encoders module
class TestEncoders(unittest.TestCase):
+
+ def test_EncodersEncode_base64(self):
+ with openfile('PyBanner048.gif', 'rb') as fp:
+ bindata = fp.read()
+ mimed = email.mime.image.MIMEImage(bindata)
+ base64ed = mimed.get_payload()
+ # the transfer-encoded body lines should all be <=76 characters
+ lines = base64ed.split('\n')
+ self.assertLessEqual(max([ len(x) for x in lines ]), 76)
+
def test_encode_empty_payload(self):
eq = self.assertEqual
msg = Message()
@@ -1141,10 +1150,11 @@
def test_body(self):
eq = self.assertEqual
- bytes = b'\xfa\xfb\xfc\xfd\xfe\xff'
- msg = MIMEApplication(bytes)
- eq(msg.get_payload(), '+vv8/f7/')
- eq(msg.get_payload(decode=True), bytes)
+ bytesdata = b'\xfa\xfb\xfc\xfd\xfe\xff'
+ msg = MIMEApplication(bytesdata)
+ # whitespace in the cte encoded block is RFC-irrelevant.
+ eq(msg.get_payload().strip(), '+vv8/f7/')
+ eq(msg.get_payload(decode=True), bytesdata)
@@ -2992,6 +3002,58 @@
['foo at bar.com',
'g\uFFFD\uFFFDst'])
+ def test_get_content_type_with_8bit(self):
+ msg = email.message_from_bytes(textwrap.dedent("""
+ Content-Type: text/pl\xA7in; charset=utf-8
+ """).encode('latin-1'))
+ self.assertEqual(msg.get_content_type(), "text/pl\uFFFDin")
+ self.assertEqual(msg.get_content_maintype(), "text")
+ self.assertEqual(msg.get_content_subtype(), "pl\uFFFDin")
+
+ def test_get_params_with_8bit(self):
+ msg = email.message_from_bytes(
+ 'X-Header: foo=\xa7ne; b\xa7r=two; baz=three\n'.encode('latin-1'))
+ self.assertEqual(msg.get_params(header='x-header'),
+ [('foo', '\uFFFDne'), ('b\uFFFDr', 'two'), ('baz', 'three')])
+ self.assertEqual(msg.get_param('Foo', header='x-header'), '\uFFFdne')
+ # XXX: someday you might be able to get 'b\xa7r', for now you can't.
+ self.assertEqual(msg.get_param('b\xa7r', header='x-header'), None)
+
+ def test_get_rfc2231_params_with_8bit(self):
+ msg = email.message_from_bytes(textwrap.dedent("""
+ Content-Type: text/plain; charset=us-ascii;
+ title*=us-ascii'en'This%20is%20not%20f\xa7n"""
+ ).encode('latin-1'))
+ self.assertEqual(msg.get_param('title'),
+ ('us-ascii', 'en', 'This is not f\uFFFDn'))
+
+ def test_set_rfc2231_params_with_8bit(self):
+ msg = email.message_from_bytes(textwrap.dedent("""
+ Content-Type: text/plain; charset=us-ascii;
+ title*=us-ascii'en'This%20is%20not%20f\xa7n"""
+ ).encode('latin-1'))
+ msg.set_param('title', 'test')
+ self.assertEqual(msg.get_param('title'), 'test')
+
+ def test_del_rfc2231_params_with_8bit(self):
+ msg = email.message_from_bytes(textwrap.dedent("""
+ Content-Type: text/plain; charset=us-ascii;
+ title*=us-ascii'en'This%20is%20not%20f\xa7n"""
+ ).encode('latin-1'))
+ msg.del_param('title')
+ self.assertEqual(msg.get_param('title'), None)
+ self.assertEqual(msg.get_content_maintype(), 'text')
+
+ def test_get_payload_with_8bit_cte_header(self):
+ msg = email.message_from_bytes(textwrap.dedent("""
+ Content-Transfer-Encoding: b\xa7se64
+ Content-Type: text/plain; charset=latin-1
+
+ payload
+ """).encode('latin-1'))
+ self.assertEqual(msg.get_payload(), 'payload\n')
+ self.assertEqual(msg.get_payload(decode=True), b'payload\n')
+
non_latin_bin_msg = textwrap.dedent("""
From: foo at bar.com
To: báz
@@ -3695,6 +3757,13 @@
h = Header('文', charset='shift_jis')
self.assertEqual(h.encode(), '=?iso-2022-jp?b?GyRCSjgbKEI=?=')
+ def test_flatten_header_with_no_value(self):
+ # Issue 11401 (regression from email 4.x) Note that the space after
+ # the header doesn't reflect the input, but this is also the way
+ # email 4.x behaved. At some point it would be nice to fix that.
+ msg = email.message_from_string("EmptyHeader:")
+ self.assertEqual(str(msg), "EmptyHeader: \n\n")
+
# Test RFC 2231 header parameters (en/de)coding
diff --git a/Lib/functools.py b/Lib/functools.py
--- a/Lib/functools.py
+++ b/Lib/functools.py
@@ -140,7 +140,7 @@
tuple=tuple, sorted=sorted, len=len, KeyError=KeyError):
hits = misses = 0
- kwd_mark = object() # separates positional and keyword args
+ kwd_mark = (object(),) # separates positional and keyword args
lock = Lock() # needed because ordereddicts aren't threadsafe
if maxsize is None:
@@ -151,7 +151,7 @@
nonlocal hits, misses
key = args
if kwds:
- key += (kwd_mark,) + tuple(sorted(kwds.items()))
+ key += kwd_mark + tuple(sorted(kwds.items()))
try:
result = cache[key]
hits += 1
@@ -170,7 +170,7 @@
nonlocal hits, misses
key = args
if kwds:
- key += (kwd_mark,) + tuple(sorted(kwds.items()))
+ key += kwd_mark + tuple(sorted(kwds.items()))
try:
with lock:
result = cache[key]
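The hunk above turns kwd_mark into a one-element tuple so the cache key stays a flat tuple; a standalone sketch of that key-building idea (names here are illustrative, not the stdlib's internals):

    kwd_mark = (object(),)  # unique sentinel separating positional from keyword args

    def make_key(args, kwds):
        key = args
        if kwds:
            key += kwd_mark + tuple(sorted(kwds.items()))
        return key

    print(make_key((1, 2), {"x": 3}))  # (1, 2, <object ...>, ('x', 3))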
diff --git a/Lib/http/server.py b/Lib/http/server.py
--- a/Lib/http/server.py
+++ b/Lib/http/server.py
@@ -103,15 +103,20 @@
# Default error message template
DEFAULT_ERROR_MESSAGE = """
-
-Error response
-
-
-Error response
-
Error code %(code)d. -
Message: %(message)s. -
Error code explanation: %(code)s = %(explain)s. -
+ + + + + Error response + + +Error response
+Error code: %(code)d
+Message: %(message)s.
+Error code explanation: %(code)s - %(explain)s.
+ + """ DEFAULT_ERROR_CONTENT_TYPE = "text/html;charset=utf-8" diff --git a/Lib/idlelib/FormatParagraph.py b/Lib/idlelib/FormatParagraph.py --- a/Lib/idlelib/FormatParagraph.py +++ b/Lib/idlelib/FormatParagraph.py @@ -54,7 +54,7 @@ # If the block ends in a \n, we dont want the comment # prefix inserted after it. (Im not sure it makes sense to # reformat a comment block that isnt made of complete - # lines, but whatever!) Can't think of a clean soltution, + # lines, but whatever!) Can't think of a clean solution, # so we hack away block_suffix = "" if not newdata[-1]: diff --git a/Lib/idlelib/extend.txt b/Lib/idlelib/extend.txt --- a/Lib/idlelib/extend.txt +++ b/Lib/idlelib/extend.txt @@ -18,7 +18,7 @@ An IDLE extension class is instantiated with a single argument,editwin', an EditorWindow instance. The extension cannot assume much -about this argument, but it is guarateed to have the following instance +about this argument, but it is guaranteed to have the following instance variables: text a Text instance (a widget) diff --git a/Lib/idlelib/macosxSupport.py b/Lib/idlelib/macosxSupport.py --- a/Lib/idlelib/macosxSupport.py +++ b/Lib/idlelib/macosxSupport.py @@ -53,8 +53,8 @@ def addOpenEventSupport(root, flist): """ - This ensures that the application will respont to open AppleEvents, which - makes is feaseable to use IDLE as the default application for python files. + This ensures that the application will respond to open AppleEvents, which + makes is feasible to use IDLE as the default application for python files. """ def doOpenFile(*args): for fn in args: diff --git a/Lib/lib2to3/fixes/fix_metaclass.py b/Lib/lib2to3/fixes/fix_metaclass.py --- a/Lib/lib2to3/fixes/fix_metaclass.py +++ b/Lib/lib2to3/fixes/fix_metaclass.py @@ -48,7 +48,7 @@ """ for node in cls_node.children: if node.type == syms.suite: - # already in the prefered format, do nothing + # already in the preferred format, do nothing return # !%@#! oneliners have no suite node, we have to fake one up diff --git a/Lib/lib2to3/pgen2/conv.py b/Lib/lib2to3/pgen2/conv.py --- a/Lib/lib2to3/pgen2/conv.py +++ b/Lib/lib2to3/pgen2/conv.py @@ -51,7 +51,7 @@ self.finish_off() def parse_graminit_h(self, filename): - """Parse the .h file writen by pgen. (Internal) + """Parse the .h file written by pgen. (Internal) This file is a sequence of #define statements defining the nonterminals of the grammar as numbers. We build two tables @@ -82,7 +82,7 @@ return True def parse_graminit_c(self, filename): - """Parse the .c file writen by pgen. (Internal) + """Parse the .c file written by pgen. (Internal) The file looks as follows. 
The first two lines are always this: diff --git a/Lib/lib2to3/pytree.py b/Lib/lib2to3/pytree.py --- a/Lib/lib2to3/pytree.py +++ b/Lib/lib2to3/pytree.py @@ -658,8 +658,8 @@ content: optional sequence of subsequences of patterns; if absent, matches one node; if present, each subsequence is an alternative [*] - min: optinal minumum number of times to match, default 0 - max: optional maximum number of times tro match, default HUGE + min: optional minimum number of times to match, default 0 + max: optional maximum number of times to match, default HUGE name: optional name assigned to this match [*] Thus, if content is [[a, b, c], [d, e], [f, g, h]] this is diff --git a/Lib/lib2to3/tests/data/py2_test_grammar.py b/Lib/lib2to3/tests/data/py2_test_grammar.py --- a/Lib/lib2to3/tests/data/py2_test_grammar.py +++ b/Lib/lib2to3/tests/data/py2_test_grammar.py @@ -316,7 +316,7 @@ ### simple_stmt: small_stmt (';' small_stmt)* [';'] x = 1; pass; del x def foo(): - # verify statments that end with semi-colons + # verify statements that end with semi-colons x = 1; pass; del x; foo() diff --git a/Lib/lib2to3/tests/data/py3_test_grammar.py b/Lib/lib2to3/tests/data/py3_test_grammar.py --- a/Lib/lib2to3/tests/data/py3_test_grammar.py +++ b/Lib/lib2to3/tests/data/py3_test_grammar.py @@ -356,7 +356,7 @@ ### simple_stmt: small_stmt (';' small_stmt)* [';'] x = 1; pass; del x def foo(): - # verify statments that end with semi-colons + # verify statements that end with semi-colons x = 1; pass; del x; foo() diff --git a/Lib/multiprocessing/__init__.py b/Lib/multiprocessing/__init__.py --- a/Lib/multiprocessing/__init__.py +++ b/Lib/multiprocessing/__init__.py @@ -115,8 +115,11 @@ except (ValueError, KeyError): num = 0 elif 'bsd' in sys.platform or sys.platform == 'darwin': + comm = '/sbin/sysctl -n hw.ncpu' + if sys.platform == 'darwin': + comm = '/usr' + comm try: - with os.popen('sysctl -n hw.ncpu') as p: + with os.popen(comm) as p: num = int(p.read()) except ValueError: num = 0 diff --git a/Lib/ntpath.py b/Lib/ntpath.py --- a/Lib/ntpath.py +++ b/Lib/ntpath.py @@ -405,7 +405,7 @@ # - $varname is accepted. # - %varname% is accepted. # - varnames can be made out of letters, digits and the characters '_-' -# (though is not verifed in the ${varname} and %varname% cases) +# (though is not verified in the ${varname} and %varname% cases) # XXX With COMMAND.COM you can use any characters in a variable name, # XXX except '^|<>='. diff --git a/Lib/pickletools.py b/Lib/pickletools.py --- a/Lib/pickletools.py +++ b/Lib/pickletools.py @@ -1406,7 +1406,7 @@ proto=0, doc="""Read an object from the memo and push it on the stack. - The index of the memo object to push is given by the newline-teriminated + The index of the memo object to push is given by the newline-terminated decimal string following. BINGET and LONG_BINGET are space-optimized versions. """), diff --git a/Lib/platform.py b/Lib/platform.py --- a/Lib/platform.py +++ b/Lib/platform.py @@ -417,7 +417,7 @@ info = pipe.read() if pipe.close(): raise os.error('command failed') - # XXX How can I supress shell errors from being written + # XXX How can I suppress shell errors from being written # to stderr ? except os.error as why: #print 'Command %s failed: %s' % (cmd,why) diff --git a/Lib/pydoc.py b/Lib/pydoc.py --- a/Lib/pydoc.py +++ b/Lib/pydoc.py @@ -168,11 +168,11 @@ def visiblename(name, all=None): """Decide whether to show documentation on a variable.""" # Certain special names are redundant. 
- _hidden_names = ('__builtins__', '__doc__', '__file__', '__path__', + if name in {'__builtins__', '__doc__', '__file__', '__path__', '__module__', '__name__', '__slots__', '__package__', '__cached__', '__author__', '__credits__', '__date__', - '__version__') - if name in _hidden_names: return 0 + '__version__'}: + return 0 # Private names are hidden, but special names are displayed. if name.startswith('__') and name.endswith('__'): return 1 if all is not None: diff --git a/Lib/shutil.py b/Lib/shutil.py --- a/Lib/shutil.py +++ b/Lib/shutil.py @@ -737,8 +737,8 @@ except KeyError: raise ValueError("Unknown unpack format '{0}'".format(format)) - func = format_info[0] - func(filename, extract_dir, **dict(format_info[1])) + func = format_info[1] + func(filename, extract_dir, **dict(format_info[2])) else: # we need to look at the registered unpackers supported extensions format = _find_unpack_format(filename) diff --git a/Lib/subprocess.py b/Lib/subprocess.py --- a/Lib/subprocess.py +++ b/Lib/subprocess.py @@ -371,8 +371,9 @@ """This exception is raised when the timeout expires while waiting for a child process. """ - def __init__(self, cmd, output=None): + def __init__(self, cmd, timeout, output=None): self.cmd = cmd + self.timeout = timeout self.output = output def __str__(self): @@ -431,7 +432,7 @@ return fds __all__ = ["Popen", "PIPE", "STDOUT", "call", "check_call", "getstatusoutput", - "getoutput", "check_output", "CalledProcessError"] + "getoutput", "check_output", "CalledProcessError", "DEVNULL"] if mswindows: from _subprocess import CREATE_NEW_CONSOLE, CREATE_NEW_PROCESS_GROUP @@ -456,6 +457,7 @@ PIPE = -1 STDOUT = -2 +DEVNULL = -3 def _eintr_retry_call(func, *args): @@ -532,7 +534,7 @@ except TimeoutExpired: process.kill() output, unused_err = process.communicate() - raise TimeoutExpired(process.args, output=output) + raise TimeoutExpired(process.args, timeout, output=output) retcode = process.poll() if retcode: raise CalledProcessError(retcode, process.args, output=output) @@ -800,6 +802,10 @@ # Child is still running, keep us alive until we can wait on it. _active.append(self) + def _get_devnull(self): + if not hasattr(self, '_devnull'): + self._devnull = os.open(os.devnull, os.O_RDWR) + return self._devnull def communicate(self, input=None, timeout=None): """Interact with process: Send data to stdin. 
Read data from @@ -839,7 +845,7 @@ return (stdout, stderr) try: - stdout, stderr = self._communicate(input, endtime) + stdout, stderr = self._communicate(input, endtime, timeout) finally: self._communication_started = True @@ -860,12 +866,12 @@ return endtime - time.time() - def _check_timeout(self, endtime): + def _check_timeout(self, endtime, orig_timeout): """Convenience for checking if a timeout has expired.""" if endtime is None: return if time.time() > endtime: - raise TimeoutExpired(self.args) + raise TimeoutExpired(self.args, orig_timeout) if mswindows: @@ -889,6 +895,8 @@ p2cread, _ = _subprocess.CreatePipe(None, 0) elif stdin == PIPE: p2cread, p2cwrite = _subprocess.CreatePipe(None, 0) + elif stdin == DEVNULL: + p2cread = msvcrt.get_osfhandle(self._get_devnull()) elif isinstance(stdin, int): p2cread = msvcrt.get_osfhandle(stdin) else: @@ -902,6 +910,8 @@ _, c2pwrite = _subprocess.CreatePipe(None, 0) elif stdout == PIPE: c2pread, c2pwrite = _subprocess.CreatePipe(None, 0) + elif stdout == DEVNULL: + c2pwrite = msvcrt.get_osfhandle(self._get_devnull()) elif isinstance(stdout, int): c2pwrite = msvcrt.get_osfhandle(stdout) else: @@ -917,6 +927,8 @@ errread, errwrite = _subprocess.CreatePipe(None, 0) elif stderr == STDOUT: errwrite = c2pwrite + elif stderr == DEVNULL: + errwrite = msvcrt.get_osfhandle(self._get_devnull()) elif isinstance(stderr, int): errwrite = msvcrt.get_osfhandle(stderr) else: @@ -1010,7 +1022,7 @@ except pywintypes.error as e: # Translate pywintypes.error to WindowsError, which is # a subclass of OSError. FIXME: We should really - # translate errno using _sys_errlist (or simliar), but + # translate errno using _sys_errlist (or similar), but # how can this be done from Python? raise WindowsError(*e.args) finally: @@ -1026,6 +1038,8 @@ c2pwrite.Close() if errwrite != -1: errwrite.Close() + if hasattr(self, '_devnull'): + os.close(self._devnull) # Retain the process handle, but close the thread handle self._child_created = True @@ -1050,9 +1064,11 @@ return self.returncode - def wait(self, timeout=None): + def wait(self, timeout=None, endtime=None): """Wait for child process to terminate. Returns returncode attribute.""" + if endtime is not None: + timeout = self._remaining_time(endtime) if timeout is None: timeout = _subprocess.INFINITE else: @@ -1060,7 +1076,7 @@ if self.returncode is None: result = _subprocess.WaitForSingleObject(self._handle, timeout) if result == _subprocess.WAIT_TIMEOUT: - raise TimeoutExpired(self.args) + raise TimeoutExpired(self.args, timeout) self.returncode = _subprocess.GetExitCodeProcess(self._handle) return self.returncode @@ -1070,7 +1086,7 @@ fh.close() - def _communicate(self, input, endtime): + def _communicate(self, input, endtime, orig_timeout): # Start reader threads feeding into a list hanging off of this # object, unless they've already been started. 
if self.stdout and not hasattr(self, "_stdout_buff"): @@ -1159,6 +1175,8 @@ pass elif stdin == PIPE: p2cread, p2cwrite = _create_pipe() + elif stdin == DEVNULL: + p2cread = self._get_devnull() elif isinstance(stdin, int): p2cread = stdin else: @@ -1169,6 +1187,8 @@ pass elif stdout == PIPE: c2pread, c2pwrite = _create_pipe() + elif stdout == DEVNULL: + c2pwrite = self._get_devnull() elif isinstance(stdout, int): c2pwrite = stdout else: @@ -1181,6 +1201,8 @@ errread, errwrite = _create_pipe() elif stderr == STDOUT: errwrite = c2pwrite + elif stderr == DEVNULL: + errwrite = self._get_devnull() elif isinstance(stderr, int): errwrite = stderr else: @@ -1374,6 +1396,8 @@ os.close(c2pwrite) if errwrite != -1 and errread != -1: os.close(errwrite) + if hasattr(self, '_devnull'): + os.close(self._devnull) # Wait for exec to fail or succeed; possibly raising an # exception (limited in size) @@ -1468,13 +1492,18 @@ def wait(self, timeout=None, endtime=None): """Wait for child process to terminate. Returns returncode attribute.""" - # If timeout was passed but not endtime, compute endtime in terms of - # timeout. - if endtime is None and timeout is not None: - endtime = time.time() + timeout if self.returncode is not None: return self.returncode - elif endtime is not None: + + # endtime is preferred to timeout. timeout is only used for + # printing. + if endtime is not None or timeout is not None: + if endtime is None: + endtime = time.time() + timeout + elif timeout is None: + timeout = self._remaining_time(endtime) + + if endtime is not None: # Enter a busy loop if we have a timeout. This busy loop was # cribbed from Lib/threading.py in Thread.wait() at r71065. delay = 0.0005 # 500 us -> initial delay of 1 ms @@ -1486,7 +1515,7 @@ break remaining = self._remaining_time(endtime) if remaining <= 0: - raise TimeoutExpired(self.args) + raise TimeoutExpired(self.args, timeout) delay = min(delay * 2, remaining, .05) time.sleep(delay) elif self.returncode is None: @@ -1495,7 +1524,7 @@ return self.returncode - def _communicate(self, input, endtime): + def _communicate(self, input, endtime, orig_timeout): if self.stdin and not self._communication_started: # Flush stdio buffer. This might block, if the user has # been writing to .stdin in an uncontrolled fashion. @@ -1504,9 +1533,11 @@ self.stdin.close() if _has_poll: - stdout, stderr = self._communicate_with_poll(input, endtime) + stdout, stderr = self._communicate_with_poll(input, endtime, + orig_timeout) else: - stdout, stderr = self._communicate_with_select(input, endtime) + stdout, stderr = self._communicate_with_select(input, endtime, + orig_timeout) self.wait(timeout=self._remaining_time(endtime)) @@ -1529,7 +1560,7 @@ return (stdout, stderr) - def _communicate_with_poll(self, input, endtime): + def _communicate_with_poll(self, input, endtime, orig_timeout): stdout = None # Return stderr = None # Return @@ -1580,7 +1611,7 @@ if e.args[0] == errno.EINTR: continue raise - self._check_timeout(endtime) + self._check_timeout(endtime, orig_timeout) # XXX Rewrite these to use non-blocking I/O on the # file objects; they are no longer using C stdio! @@ -1604,7 +1635,7 @@ return (stdout, stderr) - def _communicate_with_select(self, input, endtime): + def _communicate_with_select(self, input, endtime, orig_timeout): if not self._communication_started: self._read_set = [] self._write_set = [] @@ -1646,9 +1677,9 @@ # According to the docs, returning three empty lists indicates # that the timeout expired. 
if not (rlist or wlist or xlist): - raise TimeoutExpired(self.args) + raise TimeoutExpired(self.args, orig_timeout) # We also check what time it is ourselves for good measure. - self._check_timeout(endtime) + self._check_timeout(endtime, orig_timeout) # XXX Rewrite these to use non-blocking I/O on the # file objects; they are no longer using C stdio! diff --git a/Lib/test/crashers/README b/Lib/test/crashers/README --- a/Lib/test/crashers/README +++ b/Lib/test/crashers/README @@ -14,3 +14,7 @@ Once the crash is fixed, the test case should be moved into an appropriate test (even if it was originally from the test suite). This ensures the regression doesn't happen again. And if it does, it should be easier to track down. + +Also see Lib/test_crashers.py which exercises the crashers in this directory. +In particular, make sure to add any new infinite loop crashers to the black +list so it doesn't try to run them. diff --git a/Lib/test/crashers/compiler_recursion.py b/Lib/test/crashers/compiler_recursion.py --- a/Lib/test/crashers/compiler_recursion.py +++ b/Lib/test/crashers/compiler_recursion.py @@ -9,5 +9,5 @@ # e.g. '1*'*10**5+'1' will die in compiler_visit_expr # The exact limit to destroy the stack will vary by platform -# but 100k should do the trick most places -compile('()'*10**5, '?', 'exec') +# but 10M should do the trick even with huge stack allocations +compile('()'*10**7, '?', 'exec') diff --git a/Lib/test/datetimetester.py b/Lib/test/datetimetester.py --- a/Lib/test/datetimetester.py +++ b/Lib/test/datetimetester.py @@ -3414,7 +3414,7 @@ self.assertEqual(dt, there_and_back) # Because we have a redundant spelling when DST begins, there is - # (unforunately) an hour when DST ends that can't be spelled at all in + # (unfortunately) an hour when DST ends that can't be spelled at all in # local time. When DST ends, the clock jumps from 1:59 back to 1:00 # again. The hour 1:MM DST has no spelling then: 1:MM is taken to be # standard time. 1:MM DST == 0:MM EST, but 0:MM is taken to be diff --git a/Lib/test/pyclbr_input.py b/Lib/test/pyclbr_input.py --- a/Lib/test/pyclbr_input.py +++ b/Lib/test/pyclbr_input.py @@ -19,7 +19,7 @@ # XXX: This causes test_pyclbr.py to fail, but only because the # introspection-based is_method() code in the test can't - # distinguish between this and a geniune method function like m(). + # distinguish between this and a genuine method function like m(). # The pyclbr.py module gets this right as it parses the text. # #f = f diff --git a/Lib/test/test_binhex.py b/Lib/test/test_binhex.py --- a/Lib/test/test_binhex.py +++ b/Lib/test/test_binhex.py @@ -15,10 +15,12 @@ def setUp(self): self.fname1 = support.TESTFN + "1" self.fname2 = support.TESTFN + "2" + self.fname3 = support.TESTFN + "very_long_filename__very_long_filename__very_long_filename__very_long_filename__" def tearDown(self): support.unlink(self.fname1) support.unlink(self.fname2) + support.unlink(self.fname3) DATA = b'Jack is my hero' @@ -37,6 +39,15 @@ self.assertEqual(self.DATA, finish) + def test_binhex_error_on_long_filename(self): + """ + The testcase fails if no exception is raised when a filename parameter provided to binhex.binhex() + is too long, or if the exception raised in binhex.binhex() is not an instance of binhex.Error. 
+ """ + f3 = open(self.fname3, 'wb') + f3.close() + + self.assertRaises(binhex.Error, binhex.binhex, self.fname3, self.fname2) def test_main(): support.run_unittest(BinHexTestCase) diff --git a/Lib/test/test_capi.py b/Lib/test/test_capi.py --- a/Lib/test/test_capi.py +++ b/Lib/test/test_capi.py @@ -127,7 +127,7 @@ context.event.set() def test_pendingcalls_non_threaded(self): - #again, just using the main thread, likely they will all be dispathced at + #again, just using the main thread, likely they will all be dispatched at #once. It is ok to ask for too many, because we loop until we find a slot. #the loop can be interrupted to dispatch. #there are only 32 dispatch slots, so we go for twice that! diff --git a/Lib/test/test_crashers.py b/Lib/test/test_crashers.py new file mode 100644 --- /dev/null +++ b/Lib/test/test_crashers.py @@ -0,0 +1,37 @@ +# Tests that the crashers in the Lib/test/crashers directory actually +# do crash the interpreter as expected +# +# If a crasher is fixed, it should be moved elsewhere in the test suite to +# ensure it continues to work correctly. + +import unittest +import glob +import os.path +import test.support +from test.script_helper import assert_python_failure + +CRASHER_DIR = os.path.join(os.path.dirname(__file__), "crashers") +CRASHER_FILES = os.path.join(CRASHER_DIR, "*.py") + +infinite_loops = ["infinite_loop_re.py", "nasty_eq_vs_dict.py"] + +class CrasherTest(unittest.TestCase): + + @test.support.cpython_only + def test_crashers_crash(self): + for fname in glob.glob(CRASHER_FILES): + if os.path.basename(fname) in infinite_loops: + continue + # Some "crashers" only trigger an exception rather than a + # segfault. Consider that an acceptable outcome. + if test.support.verbose: + print("Checking crasher:", fname) + assert_python_failure(fname) + + +def test_main(): + test.support.run_unittest(CrasherTest) + test.support.reap_children() + +if __name__ == "__main__": + test_main() diff --git a/Lib/test/test_decimal.py b/Lib/test/test_decimal.py --- a/Lib/test/test_decimal.py +++ b/Lib/test/test_decimal.py @@ -228,7 +228,7 @@ try: t = self.eval_line(line) except DecimalException as exception: - #Exception raised where there shoudn't have been one. + #Exception raised where there shouldn't have been one. self.fail('Exception "'+exception.__class__.__name__ + '" raised on line '+line) return diff --git a/Lib/test/test_descr.py b/Lib/test/test_descr.py --- a/Lib/test/test_descr.py +++ b/Lib/test/test_descr.py @@ -3967,7 +3967,7 @@ except TypeError: pass else: - self.fail("Carlo Verre __setattr__ suceeded!") + self.fail("Carlo Verre __setattr__ succeeded!") try: object.__delattr__(str, "lower") except TypeError: diff --git a/Lib/test/test_doctest.py b/Lib/test/test_doctest.py --- a/Lib/test/test_doctest.py +++ b/Lib/test/test_doctest.py @@ -1297,7 +1297,7 @@ ? + ++ ^ TestResults(failed=1, attempted=1) -The REPORT_ONLY_FIRST_FAILURE supresses result output after the first +The REPORT_ONLY_FIRST_FAILURE suppresses result output after the first failing example: >>> def f(x): @@ -1327,7 +1327,7 @@ 2 TestResults(failed=3, attempted=5) -However, output from
report_startis not supressed: +However, output from
report_start is not suppressed: >>> doctest.DocTestRunner(verbose=True, optionflags=flags).run(test) ... # doctest: +ELLIPSIS @@ -2278,7 +2278,7 @@ >>> doctest.master = None # Reset master. (Note: we'll be clearing doctest.master after each call to -
doctest.testfile, to supress warnings about multiple tests with the +
doctest.testfile, to suppress warnings about multiple tests with the same name.) Globals may be specified with the
globsand
extraglobsparameters: @@ -2314,7 +2314,7 @@ TestResults(failed=0, attempted=2) >>> doctest.master = None # Reset master. -Verbosity can be increased with the optional
verboseparemter: +Verbosity can be increased with the optional
verboseparameter: >>> doctest.testfile('test_doctest.txt', globs=globs, verbose=True) Trying: @@ -2351,7 +2351,7 @@ TestResults(failed=1, attempted=2) >>> doctest.master = None # Reset master. -The summary report may be supressed with the optional
report+The summary report may be suppressed with the optional
report`
parameter:
>>> doctest.testfile('test_doctest.txt', report=False)
diff --git a/Lib/test/test_extcall.py b/Lib/test/test_extcall.py
--- a/Lib/test/test_extcall.py
+++ b/Lib/test/test_extcall.py
@@ -228,7 +228,7 @@
>>> Foo.method(1, [2, 3])
5
-A PyCFunction that takes only positional parameters shoud allow an
+A PyCFunction that takes only positional parameters should allow an
empty keyword dictionary to pass without a complaint, but raise a
TypeError if te dictionary is not empty
diff --git a/Lib/test/test_fileinput.py b/Lib/test/test_fileinput.py
--- a/Lib/test/test_fileinput.py
+++ b/Lib/test/test_fileinput.py
@@ -8,11 +8,15 @@
import fileinput
import collections
import gzip
-import bz2
import types
import codecs
import unittest
+try:
+ import bz2
+except ImportError:
+ bz2 = None
+
from io import StringIO
from fileinput import FileInput, hook_encoded
@@ -765,6 +769,7 @@
self.assertEqual(self.fake_open.invocation_count, 1)
self.assertEqual(self.fake_open.last_invocation, (("test.gz", 3), {}))
+ @unittest.skipUnless(bz2, "Requires bz2")
def test_bz2_ext_fake(self):
original_open = bz2.BZ2File
bz2.BZ2File = self.fake_open
diff --git a/Lib/test/test_float.py b/Lib/test/test_float.py
--- a/Lib/test/test_float.py
+++ b/Lib/test/test_float.py
@@ -67,7 +67,7 @@
def test_float_with_comma(self):
# set locale to something that doesn't use '.' for the decimal point
# float must not accept the locale specific decimal point but
- # it still has to accept the normal python syntac
+ # it still has to accept the normal python syntax
import locale
if not locale.localeconv()['decimal_point'] == ',':
return
@@ -189,7 +189,7 @@
def assertEqualAndEqualSign(self, a, b):
# fail unless a == b and a and b have the same sign bit;
# the only difference from assertEqual is that this test
- # distingishes -0.0 and 0.0.
+ # distinguishes -0.0 and 0.0.
self.assertEqual((a, copysign(1.0, a)), (b, copysign(1.0, b)))
@support.requires_IEEE_754
diff --git a/Lib/test/test_grammar.py b/Lib/test/test_grammar.py
--- a/Lib/test/test_grammar.py
+++ b/Lib/test/test_grammar.py
@@ -350,7 +350,7 @@
### simple_stmt: small_stmt (';' small_stmt) [';']
x = 1; pass; del x
def foo():
- # verify statments that end with semi-colons
+ # verify statements that end with semi-colons
x = 1; pass; del x;
foo()
diff --git a/Lib/test/test_httpservers.py b/Lib/test/test_httpservers.py
--- a/Lib/test/test_httpservers.py
+++ b/Lib/test/test_httpservers.py
@@ -462,7 +462,7 @@
return False
class BaseHTTPRequestHandlerTestCase(unittest.TestCase):
- """Test the functionaility of the BaseHTTPServer.
+ """Test the functionality of the BaseHTTPServer.
Test the support for the Expect 100-continue header.
"""
diff --git a/Lib/test/test_import.py b/Lib/test/test_import.py
--- a/Lib/test/test_import.py
+++ b/Lib/test/test_import.py
@@ -283,8 +283,6 @@
self.skipTest('path is not encodable to {}'.format(encoding))
with self.assertRaises(ImportError) as c:
import(path)
- self.assertEqual("Import by filename is not supported.",
- c.exception.args[0])
def test_import_in_del_does_not_crash(self):
# Issue 4236
diff --git a/Lib/test/test_iterlen.py b/Lib/test/test_iterlen.py
--- a/Lib/test/test_iterlen.py
+++ b/Lib/test/test_iterlen.py
@@ -20,11 +20,11 @@
Some containers become temporarily immutable during iteration. This includes
dicts, sets, and collections.deque. Their implementation is equally simple
-though they need to permantently set their length to zero whenever there is
+though they need to permanently set their length to zero whenever there is
an attempt to iterate after a length mutation.
The situation slightly more involved whenever an object allows length mutation
-during iteration. Lists and sequence iterators are dynanamically updatable.
+during iteration. Lists and sequence iterators are dynamically updatable.
So, if a list is extended during iteration, the iterator will continue through
the new items. If it shrinks to a point before the most recent iteration,
then no further items are available and the length is reported at zero.
diff --git a/Lib/test/test_itertools.py b/Lib/test/test_itertools.py
--- a/Lib/test/test_itertools.py
+++ b/Lib/test/test_itertools.py
@@ -1526,7 +1526,7 @@
... return chain(iterable, repeat(None))
>>> def ncycles(iterable, n):
-... "Returns the seqeuence elements n times"
+... "Returns the sequence elements n times"
... return chain(*repeat(iterable, n))
>>> def dotproduct(vec1, vec2):
diff --git a/Lib/test/test_marshal.py b/Lib/test/test_marshal.py
--- a/Lib/test/test_marshal.py
+++ b/Lib/test/test_marshal.py
@@ -194,7 +194,7 @@
# >>> type(loads(dumps(Int())))
# <type 'int'>
for typ in (int, float, complex, tuple, list, dict, set, frozenset):
- # Note: str sublclasses are not tested because they get handled
+ # Note: str subclasses are not tested because they get handled
# by marshal's routines for objects supporting the buffer API.
subtyp = type('subtyp', (typ,), {})
self.assertRaises(ValueError, marshal.dumps, subtyp())
diff --git a/Lib/test/test_math.py b/Lib/test/test_math.py
--- a/Lib/test/test_math.py
+++ b/Lib/test/test_math.py
@@ -820,7 +820,7 @@
# the following tests have been commented out since they don't
# really belong here: the implementation of ** for floats is
- # independent of the implemention of math.pow
+ # independent of the implementation of math.pow
#self.assertEqual(1NAN, 1)
#self.assertEqual(1INF, 1)
#self.assertEqual(1**NINF, 1)
diff --git a/Lib/test/test_mmap.py b/Lib/test/test_mmap.py
--- a/Lib/test/test_mmap.py
+++ b/Lib/test/test_mmap.py
@@ -594,7 +594,7 @@
m2.close()
m1.close()
- # Test differnt tag
+ # Test different tag
m1 = mmap.mmap(-1, len(data1), tagname="foo")
m1[:] = data1
m2 = mmap.mmap(-1, len(data2), tagname="boo")
diff --git a/Lib/test/test_multiprocessing.py b/Lib/test/test_multiprocessing.py
--- a/Lib/test/test_multiprocessing.py
+++ b/Lib/test/test_multiprocessing.py
@@ -795,7 +795,7 @@
event = self.Event()
wait = TimingWrapper(event.wait)
- # Removed temporaily, due to API shear, this does not
+ # Removed temporarily, due to API shear, this does not
# work with threading._Event objects. is_set == isSet
self.assertEqual(event.is_set(), False)
@@ -1765,7 +1765,7 @@
util.Finalize(None, conn.send, args=('STOP',), exitpriority=-100)
- # call mutliprocessing's cleanup function then exit process without
+ # call multiprocessing's cleanup function then exit process without
# garbage collecting locals
util._exit_function()
conn.close()
diff --git a/Lib/test/test_pkg.py b/Lib/test/test_pkg.py
--- a/Lib/test/test_pkg.py
+++ b/Lib/test/test_pkg.py
@@ -56,7 +56,7 @@
if self.root: # Only clean if the test was actually run
cleanout(self.root)
- # delete all modules concerning the tested hiearchy
+ # delete all modules concerning the tested hierarchy
if self.pkgname:
modules = [name for name in sys.modules
if self.pkgname in name.split('.')]
diff --git a/Lib/test/test_posix.py b/Lib/test/test_posix.py
--- a/Lib/test/test_posix.py
+++ b/Lib/test/test_posix.py
@@ -37,7 +37,7 @@
NO_ARG_FUNCTIONS = [ "ctermid", "getcwd", "getcwdb", "uname",
"times", "getloadavg",
"getegid", "geteuid", "getgid", "getgroups",
- "getpid", "getpgrp", "getppid", "getuid",
+ "getpid", "getpgrp", "getppid", "getuid", "sync",
]
for name in NO_ARG_FUNCTIONS:
@@ -132,6 +132,156 @@
finally:
fp.close()
+ @unittest.skipUnless(hasattr(posix, 'truncate'), "test needs posix.truncate()")
+ def test_truncate(self):
+ with open(support.TESTFN, 'w') as fp:
+ fp.write('test')
+ fp.flush()
+ posix.truncate(support.TESTFN, 0)
+
+ @unittest.skipUnless(hasattr(posix, 'fexecve'), "test needs posix.fexecve()")
+ @unittest.skipUnless(hasattr(os, 'fork'), "test needs os.fork()")
+ @unittest.skipUnless(hasattr(os, 'wait'), "test needs os.wait()")
+ def test_fexecve(self):
+ fp = os.open(sys.executable, os.O_RDONLY)
+ try:
+ pid = os.fork()
+ if pid == 0:
+ os.chdir(os.path.split(sys.executable)[0])
+ posix.fexecve(fp, [sys.executable, '-c', 'pass'], os.environ)
+ else:
+ self.assertEqual(os.wait(), (pid, 0))
+ finally:
+ os.close(fp)
+
+ @unittest.skipUnless(hasattr(posix, 'waitid'), "test needs posix.waitid()")
+ @unittest.skipUnless(hasattr(os, 'fork'), "test needs os.fork()")
+ def test_waitid(self):
+ pid = os.fork()
+ if pid == 0:
+ os.chdir(os.path.split(sys.executable)[0])
+ posix.execve(sys.executable, [sys.executable, '-c', 'pass'], os.environ)
+ else:
+ res = posix.waitid(posix.P_PID, pid, posix.WEXITED)
+ self.assertEqual(pid, res.si_pid)
+
+ @unittest.skipUnless(hasattr(posix, 'lockf'), "test needs posix.lockf()")
+ def test_lockf(self):
+ fd = os.open(support.TESTFN, os.O_WRONLY | os.O_CREAT)
+ try:
+ os.write(fd, b'test')
+ os.lseek(fd, 0, os.SEEK_SET)
+ posix.lockf(fd, posix.F_LOCK, 4)
+ # section is locked
+ posix.lockf(fd, posix.F_ULOCK, 4)
+ finally:
+ os.close(fd)
+
+ @unittest.skipUnless(hasattr(posix, 'pread'), "test needs posix.pread()")
+ def test_pread(self):
+ fd = os.open(support.TESTFN, os.O_RDWR | os.O_CREAT)
+ try:
+ os.write(fd, b'test')
+ os.lseek(fd, 0, os.SEEK_SET)
+ self.assertEqual(b'es', posix.pread(fd, 2, 1))
+ # the first pread() shoudn't disturb the file offset
+ self.assertEqual(b'te', posix.read(fd, 2))
+ finally:
+ os.close(fd)
+
+ @unittest.skipUnless(hasattr(posix, 'pwrite'), "test needs posix.pwrite()")
+ def test_pwrite(self):
+ fd = os.open(support.TESTFN, os.O_RDWR | os.O_CREAT)
+ try:
+ os.write(fd, b'test')
+ os.lseek(fd, 0, os.SEEK_SET)
+ posix.pwrite(fd, b'xx', 1)
+ self.assertEqual(b'txxt', posix.read(fd, 4))
+ finally:
+ os.close(fd)
+
+ @unittest.skipUnless(hasattr(posix, 'posix_fallocate'),
+ "test needs posix.posix_fallocate()")
+ def test_posix_fallocate(self):
+ fd = os.open(support.TESTFN, os.O_WRONLY | os.O_CREAT)
+ try:
+ posix.posix_fallocate(fd, 0, 10)
+ except OSError as inst:
+ # issue10812, ZFS doesn't appear to support posix_fallocate,
+ # so skip Solaris-based since they are likely to have ZFS.
+ if inst.errno != errno.EINVAL or not sys.platform.startswith("sunos"):
+ raise
+ finally:
+ os.close(fd)
+
+ @unittest.skipUnless(hasattr(posix, 'posix_fadvise'),
+ "test needs posix.posix_fadvise()")
+ def test_posix_fadvise(self):
+ fd = os.open(support.TESTFN, os.O_RDONLY)
+ try:
+ posix.posix_fadvise(fd, 0, 0, posix.POSIX_FADV_WILLNEED)
+ finally:
+ os.close(fd)
+
+ @unittest.skipUnless(hasattr(posix, 'futimes'), "test needs posix.futimes()")
+ def test_futimes(self):
+ now = time.time()
+ fd = os.open(support.TESTFN, os.O_RDONLY)
+ try:
+ posix.futimes(fd, None)
+ self.assertRaises(TypeError, posix.futimes, fd, (None, None))
+ self.assertRaises(TypeError, posix.futimes, fd, (now, None))
+ self.assertRaises(TypeError, posix.futimes, fd, (None, now))
+ posix.futimes(fd, (int(now), int(now)))
+ posix.futimes(fd, (now, now))
+ finally:
+ os.close(fd)
+
+ @unittest.skipUnless(hasattr(posix, 'lutimes'), "test needs posix.lutimes()")
+ def test_lutimes(self):
+ now = time.time()
+ posix.lutimes(support.TESTFN, None)
+ self.assertRaises(TypeError, posix.lutimes, support.TESTFN, (None, None))
+ self.assertRaises(TypeError, posix.lutimes, support.TESTFN, (now, None))
+ self.assertRaises(TypeError, posix.lutimes, support.TESTFN, (None, now))
+ posix.lutimes(support.TESTFN, (int(now), int(now)))
+ posix.lutimes(support.TESTFN, (now, now))
+
+ @unittest.skipUnless(hasattr(posix, 'futimens'), "test needs posix.futimens()")
+ def test_futimens(self):
+ now = time.time()
+ fd = os.open(support.TESTFN, os.O_RDONLY)
+ try:
+ self.assertRaises(TypeError, posix.futimens, fd, (None, None), (None, None))
+ self.assertRaises(TypeError, posix.futimens, fd, (now, 0), None)
+ self.assertRaises(TypeError, posix.futimens, fd, None, (now, 0))
+ posix.futimens(fd, (int(now), int((now - int(now)) * 1e9)),
+ (int(now), int((now - int(now)) * 1e9)))
+ finally:
+ os.close(fd)
+
+ @unittest.skipUnless(hasattr(posix, 'writev'), "test needs posix.writev()")
+ def test_writev(self):
+ fd = os.open(support.TESTFN, os.O_RDWR | os.O_CREAT)
+ try:
+ os.writev(fd, (b'test1', b'tt2', b't3'))
+ os.lseek(fd, 0, os.SEEK_SET)
+ self.assertEqual(b'test1tt2t3', posix.read(fd, 10))
+ finally:
+ os.close(fd)
+
+ @unittest.skipUnless(hasattr(posix, 'readv'), "test needs posix.readv()")
+ def test_readv(self):
+ fd = os.open(support.TESTFN, os.O_RDWR | os.O_CREAT)
+ try:
+ os.write(fd, b'test1tt2t3')
+ os.lseek(fd, 0, os.SEEK_SET)
+ buf = [bytearray(i) for i in [5, 3, 2]]
+ self.assertEqual(posix.readv(fd, buf), 10)
+ self.assertEqual([b'test1', b'tt2', b't3'], [bytes(i) for i in buf])
+ finally:
+ os.close(fd)
+
def test_dup(self):
if hasattr(posix, 'dup'):
fp = open(support.TESTFN)
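
The new waitid() wrapper exercised by test_waitid above can be driven directly from os; a minimal sketch, assuming a Unix build of 3.3 where os.fork and os.waitid are available (the child command is illustrative only):

    import os, sys

    pid = os.fork()
    if pid == 0:
        # child: replace this process with a trivial interpreter run
        os.execv(sys.executable, [sys.executable, '-c', 'pass'])
    else:
        # parent: wait for the child and inspect the siginfo-style result
        res = os.waitid(os.P_PID, pid, os.WEXITED)
        assert res.si_pid == pid
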
diff --git a/Lib/test/test_posixpath.py b/Lib/test/test_posixpath.py
--- a/Lib/test/test_posixpath.py
+++ b/Lib/test/test_posixpath.py
@@ -6,6 +6,11 @@
import sys
from posixpath import realpath, abspath, dirname, basename
+try:
+ import posix
+except ImportError:
+ posix = None
+
# An absolute path to a temporary filename for testing. We can't rely on TESTFN
# being an absolute path, so we need this.
@@ -150,6 +155,7 @@
def test_islink(self):
self.assertIs(posixpath.islink(support.TESTFN + "1"), False)
+ self.assertIs(posixpath.lexists(support.TESTFN + "2"), False)
f = open(support.TESTFN + "1", "wb")
try:
f.write(b"foo")
@@ -225,6 +231,44 @@
def test_ismount(self):
self.assertIs(posixpath.ismount("/"), True)
+ self.assertIs(posixpath.ismount(b"/"), True)
+
+ def test_ismount_non_existent(self):
+ # Non-existent mountpoint.
+ self.assertIs(posixpath.ismount(ABSTFN), False)
+ try:
+ os.mkdir(ABSTFN)
+ self.assertIs(posixpath.ismount(ABSTFN), False)
+ finally:
+ safe_rmdir(ABSTFN)
+
+ @unittest.skipUnless(support.can_symlink(),
+ "Test requires symlink support")
+ def test_ismount_symlinks(self):
+ # Symlinks are never mountpoints.
+ try:
+ os.symlink("/", ABSTFN)
+ self.assertIs(posixpath.ismount(ABSTFN), False)
+ finally:
+ os.unlink(ABSTFN)
+
+ @unittest.skipIf(posix is None, "Test requires posix module")
+ def test_ismount_different_device(self):
+ # Simulate the path being on a different device from its parent by
+ # mocking out st_dev.
+ save_lstat = os.lstat
+ def fake_lstat(path):
+ st_ino = 0
+ st_dev = 0
+ if path == ABSTFN:
+ st_dev = 1
+ st_ino = 1
+ return posix.stat_result((0, st_ino, st_dev, 0, 0, 0, 0, 0, 0, 0))
+ try:
+ os.lstat = fake_lstat
+ self.assertIs(posixpath.ismount(ABSTFN), True)
+ finally:
+ os.lstat = save_lstat
def test_expanduser(self):
self.assertEqual(posixpath.expanduser("foo"), "foo")
@@ -254,6 +298,10 @@
with support.EnvironmentVarGuard() as env:
env['HOME'] = '/'
self.assertEqual(posixpath.expanduser("
diff --git a/Lib/test/test_subprocess.py b/Lib/test/test_subprocess.py
--- a/Lib/test/test_subprocess.py
+++ b/Lib/test/test_subprocess.py
def test_wait_timeout(self):
p = subprocess.Popen([sys.executable,
"-c", "import time; time.sleep(0.1)"])
self.assertRaises(subprocess.TimeoutExpired, p.wait, timeout=0.01)
self.assertEqual(p.wait(timeout=2), 0)
with self.assertRaises(subprocess.TimeoutExpired) as c:
p.wait(timeout=0.01)
self.assertIn("0.01", str(c.exception)) # For coverage of __str__.
# Some heavily loaded buildbots (sparc Debian 3.x) require this much
# time to start.
self.assertEqual(p.wait(timeout=3), 0)
def test_invalid_bufsize(self):
# an invalid type of the bufsize argument should raise
diff --git a/Lib/test/test_sundry.py b/Lib/test/test_sundry.py
--- a/Lib/test/test_sundry.py
+++ b/Lib/test/test_sundry.py
@@ -54,7 +54,6 @@
import py_compile
import sndhdr
import tabnanny
- import timeit
try:
import tty # not available on Windows
except ImportError:
diff --git a/Lib/test/test_syntax.py b/Lib/test/test_syntax.py
--- a/Lib/test/test_syntax.py
+++ b/Lib/test/test_syntax.py
@@ -237,7 +237,7 @@
Test continue in finally in weird combinations.
-continue in for loop under finally shouuld be ok.
+continue in for loop under finally should be ok.
>>> def test():
... try:
diff --git a/Lib/test/test_sys.py b/Lib/test/test_sys.py
--- a/Lib/test/test_sys.py
+++ b/Lib/test/test_sys.py
@@ -492,7 +492,7 @@
# provide too much opportunity for insane things to happen.
# We don't want them in the interned dict and if they aren't
# actually interned, we don't want to create the appearance
- # that they are by allowing intern() to succeeed.
+ # that they are by allowing intern() to succeed.
class S(str):
def __hash__(self):
return 123
diff --git a/Lib/test/test_threading.py b/Lib/test/test_threading.py
--- a/Lib/test/test_threading.py
+++ b/Lib/test/test_threading.py
@@ -578,7 +578,7 @@
# This acquires the lock and then waits until the child has forked
# before returning, which will release the lock soon after. If
# someone else tries to fix this test case by acquiring this lock
- # before forking instead of reseting it, the test case will
+ # before forking instead of resetting it, the test case will
# deadlock when it shouldn't.
condition = w._block
orig_acquire = condition.acquire
diff --git a/Lib/test/test_timeit.py b/Lib/test/test_timeit.py
new file mode 100644
--- /dev/null
+++ b/Lib/test/test_timeit.py
@@ -0,0 +1,305 @@
+import timeit
+import unittest
+import sys
+import io
+import time
+from textwrap import dedent
+
+from test.support import run_unittest
+from test.support import captured_stdout
+from test.support import captured_stderr
+
+# timeit's default number of iterations.
+DEFAULT_NUMBER = 1000000
+
+# timeit's default number of repetitions.
+DEFAULT_REPEAT = 3
+
+# XXX: some tests are commented out that would improve the coverage but take a
+# long time to run because they test the default number of loops, which is
+# large. The tests could be enabled if there was a way to override the default
+# number of loops during testing, but this would require changing the signature
+# of some functions that use the default as a default argument.
+
+class FakeTimer:
+ BASE_TIME = 42.0
+ def __init__(self, seconds_per_increment=1.0):
+ self.count = 0
+ self.setup_calls = 0
+ self.seconds_per_increment=seconds_per_increment
+ timeit._fake_timer = self
+
+ def __call__(self):
+ return self.BASE_TIME + self.count * self.seconds_per_increment
+
+ def inc(self):
+ self.count += 1
+
+ def setup(self):
+ self.setup_calls += 1
+
+ def wrap_timer(self, timer):
+ """Records 'timer' and returns self as callable timer."""
+ self.saved_timer = timer
+ return self
+
+class TestTimeit(unittest.TestCase):
+
+ def tearDown(self):
+ try:
+ del timeit._fake_timer
+ except AttributeError:
+ pass
+
+ def test_reindent_empty(self):
+ self.assertEqual(timeit.reindent("", 0), "")
+ self.assertEqual(timeit.reindent("", 4), "")
+
+ def test_reindent_single(self):
+ self.assertEqual(timeit.reindent("pass", 0), "pass")
+ self.assertEqual(timeit.reindent("pass", 4), "pass")
+
+ def test_reindent_multi_empty(self):
+ self.assertEqual(timeit.reindent("\n\n", 0), "\n\n")
+ self.assertEqual(timeit.reindent("\n\n", 4), "\n \n ")
+
+ def test_reindent_multi(self):
+ self.assertEqual(timeit.reindent(
+ "print()\npass\nbreak", 0),
+ "print()\npass\nbreak")
+ self.assertEqual(timeit.reindent(
+ "print()\npass\nbreak", 4),
+ "print()\n pass\n break")
+
+ def test_timer_invalid_stmt(self):
+ self.assertRaises(ValueError, timeit.Timer, stmt=None)
+
+ def test_timer_invalid_setup(self):
+ self.assertRaises(ValueError, timeit.Timer, setup=None)
+
+ fake_setup = "import timeit; timeit._fake_timer.setup()"
+ fake_stmt = "import timeit; timeit._fake_timer.inc()"
+
+ def fake_callable_setup(self):
+ self.fake_timer.setup()
+
+ def fake_callable_stmt(self):
+ self.fake_timer.inc()
+
+ def timeit(self, stmt, setup, number=None):
+ self.fake_timer = FakeTimer()
+ t = timeit.Timer(stmt=stmt, setup=setup, timer=self.fake_timer)
+ kwargs = {}
+ if number is None:
+ number = DEFAULT_NUMBER
+ else:
+ kwargs['number'] = number
+ delta_time = t.timeit(**kwargs)
+ self.assertEqual(self.fake_timer.setup_calls, 1)
+ self.assertEqual(self.fake_timer.count, number)
+ self.assertEqual(delta_time, number)
+
+ # Takes too long to run in debug build.
+ #def test_timeit_default_iters(self):
+ # self.timeit(self.fake_stmt, self.fake_setup)
+
+ def test_timeit_zero_iters(self):
+ self.timeit(self.fake_stmt, self.fake_setup, number=0)
+
+ def test_timeit_few_iters(self):
+ self.timeit(self.fake_stmt, self.fake_setup, number=3)
+
+ def test_timeit_callable_stmt(self):
+ self.timeit(self.fake_callable_stmt, self.fake_setup, number=3)
+
+ def test_timeit_callable_stmt_and_setup(self):
+ self.timeit(self.fake_callable_stmt,
+ self.fake_callable_setup, number=3)
+
+ # Takes too long to run in debug build.
+ #def test_timeit_function(self):
+ # delta_time = timeit.timeit(self.fake_stmt, self.fake_setup,
+ # timer=FakeTimer())
+ # self.assertEqual(delta_time, DEFAULT_NUMBER)
+
+ def test_timeit_function_zero_iters(self):
+ delta_time = timeit.timeit(self.fake_stmt, self.fake_setup, number=0,
+ timer=FakeTimer())
+ self.assertEqual(delta_time, 0)
+
+ def repeat(self, stmt, setup, repeat=None, number=None):
+ self.fake_timer = FakeTimer()
+ t = timeit.Timer(stmt=stmt, setup=setup, timer=self.fake_timer)
+ kwargs = {}
+ if repeat is None:
+ repeat = DEFAULT_REPEAT
+ else:
+ kwargs['repeat'] = repeat
+ if number is None:
+ number = DEFAULT_NUMBER
+ else:
+ kwargs['number'] = number
+ delta_times = t.repeat(**kwargs)
+ self.assertEqual(self.fake_timer.setup_calls, repeat)
+ self.assertEqual(self.fake_timer.count, repeat * number)
+ self.assertEqual(delta_times, repeat * [float(number)])
+
+ # Takes too long to run in debug build.
+ #def test_repeat_default(self):
+ # self.repeat(self.fake_stmt, self.fake_setup)
+
+ def test_repeat_zero_reps(self):
+ self.repeat(self.fake_stmt, self.fake_setup, repeat=0)
+
+ def test_repeat_zero_iters(self):
+ self.repeat(self.fake_stmt, self.fake_setup, number=0)
+
+ def test_repeat_few_reps_and_iters(self):
+ self.repeat(self.fake_stmt, self.fake_setup, repeat=3, number=5)
+
+ def test_repeat_callable_stmt(self):
+ self.repeat(self.fake_callable_stmt, self.fake_setup,
+ repeat=3, number=5)
+
+ def test_repeat_callable_stmt_and_setup(self):
+ self.repeat(self.fake_callable_stmt, self.fake_callable_setup,
+ repeat=3, number=5)
+
+ # Takes too long to run in debug build.
+ #def test_repeat_function(self):
+ # delta_times = timeit.repeat(self.fake_stmt, self.fake_setup,
+ # timer=FakeTimer())
+ # self.assertEqual(delta_times, DEFAULT_REPEAT * [float(DEFAULT_NUMBER)])
+
+ def test_repeat_function_zero_reps(self):
+ delta_times = timeit.repeat(self.fake_stmt, self.fake_setup, repeat=0,
+ timer=FakeTimer())
+ self.assertEqual(delta_times, [])
+
+ def test_repeat_function_zero_iters(self):
+ delta_times = timeit.repeat(self.fake_stmt, self.fake_setup, number=0,
+ timer=FakeTimer())
+ self.assertEqual(delta_times, DEFAULT_REPEAT * [0.0])
+
+ def assert_exc_string(self, exc_string, expected_exc_name):
+ exc_lines = exc_string.splitlines()
+ self.assertGreater(len(exc_lines), 2)
+ self.assertTrue(exc_lines[0].startswith('Traceback'))
+ self.assertTrue(exc_lines[-1].startswith(expected_exc_name))
+
+ def test_print_exc(self):
+ s = io.StringIO()
+ t = timeit.Timer("1/0")
+ try:
+ t.timeit()
+ except:
+ t.print_exc(s)
+ self.assert_exc_string(s.getvalue(), 'ZeroDivisionError')
+
+ MAIN_DEFAULT_OUTPUT = "10 loops, best of 3: 1 sec per loop\n"
+
+ def run_main(self, seconds_per_increment=1.0, switches=None, timer=None):
+ if timer is None:
+ timer = FakeTimer(seconds_per_increment=seconds_per_increment)
+ if switches is None:
+ args = []
+ else:
+ args = switches[:]
+ args.append(self.fake_stmt)
+ # timeit.main() modifies sys.path, so save and restore it.
+ orig_sys_path = sys.path[:]
+ with captured_stdout() as s:
+ timeit.main(args=args, _wrap_timer=timer.wrap_timer)
+ sys.path[:] = orig_sys_path[:]
+ return s.getvalue()
+
+ def test_main_bad_switch(self):
+ s = self.run_main(switches=['--bad-switch'])
+ self.assertEqual(s, dedent("""
+ option --bad-switch not recognized
+ use -h/--help for command line help
+ """))
+
+ def test_main_seconds(self):
+ s = self.run_main(seconds_per_increment=5.5)
+ self.assertEqual(s, "10 loops, best of 3: 5.5 sec per loop\n")
+
+ def test_main_milliseconds(self):
+ s = self.run_main(seconds_per_increment=0.0055)
+ self.assertEqual(s, "100 loops, best of 3: 5.5 msec per loop\n")
+
+ def test_main_microseconds(self):
+ s = self.run_main(seconds_per_increment=0.0000025, switches=['-n100'])
+ self.assertEqual(s, "100 loops, best of 3: 2.5 usec per loop\n")
+
+ def test_main_fixed_iters(self):
+ s = self.run_main(seconds_per_increment=2.0, switches=['-n35'])
+ self.assertEqual(s, "35 loops, best of 3: 2 sec per loop\n")
+
+ def test_main_setup(self):
+ s = self.run_main(seconds_per_increment=2.0,
+ switches=['-n35', '-s', 'print("CustomSetup")'])
+ self.assertEqual(s, "CustomSetup\n" * 3 +
+ "35 loops, best of 3: 2 sec per loop\n")
+
+ def test_main_fixed_reps(self):
+ s = self.run_main(seconds_per_increment=60.0, switches=['-r9'])
+ self.assertEqual(s, "10 loops, best of 9: 60 sec per loop\n")
+
+ def test_main_negative_reps(self):
+ s = self.run_main(seconds_per_increment=60.0, switches=['-r-5'])
+ self.assertEqual(s, "10 loops, best of 1: 60 sec per loop\n")
+
+ def test_main_help(self):
+ s = self.run_main(switches=['-h'])
+ # Note: It's not clear that the trailing space was intended as part of
+ # the help text, but since it's there, check for it.
+ self.assertEqual(s, timeit.__doc__ + ' ')
+
+ def test_main_using_time(self):
+ fake_timer = FakeTimer()
+ s = self.run_main(switches=['-t'], timer=fake_timer)
+ self.assertEqual(s, self.MAIN_DEFAULT_OUTPUT)
+ self.assertIs(fake_timer.saved_timer, time.time)
+
+ def test_main_using_clock(self):
+ fake_timer = FakeTimer()
+ s = self.run_main(switches=['-c'], timer=fake_timer)
+ self.assertEqual(s, self.MAIN_DEFAULT_OUTPUT)
+ self.assertIs(fake_timer.saved_timer, time.clock)
+
+ def test_main_verbose(self):
+ s = self.run_main(switches=['-v'])
+ self.assertEqual(s, dedent("""
+ 10 loops -> 10 secs
+ raw times: 10 10 10
+ 10 loops, best of 3: 1 sec per loop
+ """))
+
+ def test_main_very_verbose(self):
+ s = self.run_main(seconds_per_increment=0.000050, switches=['-vv'])
+ self.assertEqual(s, dedent("""
+ 10 loops -> 0.0005 secs
+ 100 loops -> 0.005 secs
+ 1000 loops -> 0.05 secs
+ 10000 loops -> 0.5 secs
+ raw times: 0.5 0.5 0.5
+ 10000 loops, best of 3: 50 usec per loop
+ """))
+
+ def test_main_exception(self):
+ with captured_stderr() as error_stringio:
+ s = self.run_main(switches=['1/0'])
+ self.assert_exc_string(error_stringio.getvalue(), 'ZeroDivisionError')
+
+ def test_main_exception_fixed_reps(self):
+ with captured_stderr() as error_stringio:
+ s = self.run_main(switches=['-n1', '1/0'])
+ self.assert_exc_string(error_stringio.getvalue(), 'ZeroDivisionError')
+
+
+def test_main():
+ run_unittest(TestTimeit)
+
+if __name__ == '__main__':
+ test_main()
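
The FakeTimer pattern above also works against the public Timer API alone: any callable can be passed as the timer and the timed statement can advance it. A minimal sketch, assuming it runs as a script so __main__ holds the fake clock (FakeClock is made up for illustration):

    import timeit

    class FakeClock:
        """Deterministic clock advanced by the timed statement."""
        count = 0
        def __call__(self):
            return float(FakeClock.count)

    # The statement bumps the counter once per loop, so the reported time
    # equals the number of iterations: 5.0 fake seconds here.
    t = timeit.Timer(stmt="__main__.FakeClock.count += 1",
                     setup="import __main__", timer=FakeClock())
    print(t.timeit(number=5))
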
diff --git a/Lib/test/test_trace.py b/Lib/test/test_trace.py
--- a/Lib/test/test_trace.py
+++ b/Lib/test/test_trace.py
@@ -209,7 +209,7 @@
(self.my_py_filename, firstlineno + 4): 1,
}
- # When used through 'run', some other spurios counts are produced, like
+ # When used through 'run', some other spurious counts are produced, like
# the settrace of threading, which we ignore, just making sure that the
# counts fo traced_func_loop were right.
#
diff --git a/Lib/test/test_urllib.py b/Lib/test/test_urllib.py
--- a/Lib/test/test_urllib.py
+++ b/Lib/test/test_urllib.py
@@ -91,7 +91,7 @@
"did not return the expected text")
def test_close(self):
- # Test close() by calling it hear and then having it be called again
+ # Test close() by calling it here and then having it be called again
# by the tearDown() method for the test
self.returned_obj.close()
@@ -174,6 +174,11 @@
finally:
self.unfakehttp()
+ def test_willclose(self):
+ self.fakehttp(b"HTTP/1.1 200 OK\r\n\r\nHello!")
+ resp = urlopen("http://www.python.org")
+ self.assertTrue(resp.fp.will_close)
+
def test_read_0_9(self):
# "0.9" response accepted (but not "simple responses" without
# a status line)
@@ -1021,7 +1026,7 @@
# Just commented them out.
# Can't really tell why keep failing in windows and sparc.
-# Everywhere else they work ok, but on those machines, someteimes
+# Everywhere else they work ok, but on those machines, sometimes
# fail in one of the tests, sometimes in other. I have a linux, and
# the tests go ok.
# If anybody has one of the problematic enviroments, please help!
diff --git a/Lib/test/test_warnings.py b/Lib/test/test_warnings.py
--- a/Lib/test/test_warnings.py
+++ b/Lib/test/test_warnings.py
@@ -332,7 +332,7 @@
sys.argv = argv
def test_warn_explicit_type_errors(self):
- # warn_explicit() shoud error out gracefully if it is given objects
+ # warn_explicit() should error out gracefully if it is given objects
# of the wrong types.
# lineno is expected to be an integer.
self.assertRaises(TypeError, self.module.warn_explicit,
diff --git a/Lib/timeit.py b/Lib/timeit.py
--- a/Lib/timeit.py
+++ b/Lib/timeit.py
@@ -232,10 +232,10 @@
"""Convenience function to create Timer object and call repeat method."""
return Timer(stmt, setup, timer).repeat(repeat, number)
-def main(args=None):
+def main(args=None, *, _wrap_timer=None):
"""Main program, used when run as a script.
- The optional argument specifies the command line to be parsed,
+ The optional 'args' argument specifies the command line to be parsed,
defaulting to sys.argv[1:].
The return value is an exit code to be passed to sys.exit(); it
@@ -244,6 +244,10 @@
When an exception happens during timing, a traceback is printed to
stderr and the return value is 1. Exceptions at other times
(including the template compilation) are not caught.
+
+ '_wrap_timer' is an internal interface used for unit testing. If it
+ is not None, it must be a callable that accepts a timer function
+ and returns another timer function (used for unit testing).
"""
if args is None:
args = sys.argv[1:]
@@ -289,6 +293,8 @@
# directory)
import os
sys.path.insert(0, os.curdir)
+ if _wrap_timer is not None:
+ timer = _wrap_timer(timer)
t = Timer(stmt, setup, timer)
if number == 0:
# determine number so that 0.2 <= total time < 2.0
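
The _wrap_timer hook added above is an internal testing interface; a sketch of how it threads a wrapper around whichever timer main() selects (the wrapper below is illustrative only):

    import timeit

    def wrap_timer(timer):
        # Observe the timer main() chose, then hand it back unchanged.
        print("timer in use:", timer)
        return timer

    # Roughly what test_timeit does: run the command-line entry point with a
    # fixed loop count while peeking at the selected timer.
    timeit.main(args=['-n10', 'pass'], _wrap_timer=wrap_timer)
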
diff --git a/Lib/tkinter/test/test_ttk/test_functions.py b/Lib/tkinter/test/test_ttk/test_functions.py
--- a/Lib/tkinter/test/test_ttk/test_functions.py
+++ b/Lib/tkinter/test/test_ttk/test_functions.py
@@ -135,7 +135,7 @@
# minimum acceptable for image type
self.assertEqual(ttk._format_elemcreate('image', False, 'test'),
("test ", ()))
- # specifiyng a state spec
+ # specifying a state spec
self.assertEqual(ttk._format_elemcreate('image', False, 'test',
('', 'a')), ("test {} a", ()))
# state spec with multiple states
diff --git a/Lib/tkinter/tix.py b/Lib/tkinter/tix.py
--- a/Lib/tkinter/tix.py
+++ b/Lib/tkinter/tix.py
@@ -171,7 +171,7 @@
return self.tk.call('tix', 'getimage', name)
def tix_option_get(self, name):
- """Gets the options manitained by the Tix
+ """Gets the options maintained by the Tix
scheme mechanism. Available options include:
active_bg active_fg bg
@@ -576,7 +576,7 @@
class ComboBox(TixWidget):
"""ComboBox - an Entry field with a dropdown menu. The user can select a
- choice by either typing in the entry subwdget or selecting from the
+ choice by either typing in the entry subwidget or selecting from the
listbox subwidget.
Subwidget Class
@@ -869,7 +869,7 @@
"""HList - Hierarchy display widget can be used to display any data
that have a hierarchical structure, for example, file system directory
trees. The list entries are indented and connected by branch lines
- according to their places in the hierachy.
+ according to their places in the hierarchy.
Subwidgets - None"""
@@ -1519,7 +1519,7 @@
self.tk.call(self._w, 'selection', 'set', first, last)
class Tree(TixWidget):
- """Tree - The tixTree widget can be used to display hierachical
+ """Tree - The tixTree widget can be used to display hierarchical
data in a tree form. The user can adjust
the view of the tree by opening or closing parts of the tree."""
diff --git a/Lib/tkinter/ttk.py b/Lib/tkinter/ttk.py
--- a/Lib/tkinter/ttk.py
+++ b/Lib/tkinter/ttk.py
@@ -707,7 +707,7 @@
textvariable, values, width
"""
# The "values" option may need special formatting, so leave to
- # _format_optdict the responsability to format it
+ # _format_optdict the responsibility to format it
if "values" in kw:
kw["values"] = _format_optdict({'v': kw["values"]})[1]
diff --git a/Lib/turtle.py b/Lib/turtle.py
--- a/Lib/turtle.py
+++ b/Lib/turtle.py
@@ -1488,7 +1488,7 @@
Optional arguments:
canvwidth -- positive integer, new width of canvas in pixels
canvheight -- positive integer, new height of canvas in pixels
- bg -- colorstring or color-tupel, new backgroundcolor
+ bg -- colorstring or color-tuple, new backgroundcolor
If no arguments are given, return current (canvaswidth, canvasheight)
Do not alter the drawing window. To observe hidden parts of
@@ -3242,9 +3242,9 @@
fill="", width=ps)
# Turtle now at position old,
self._position = old
- ## if undo is done during crating a polygon, the last vertex
- ## will be deleted. if the polygon is entirel deleted,
- ## creatigPoly will be set to False.
+ ## if undo is done during creating a polygon, the last vertex
+ ## will be deleted. if the polygon is entirely deleted,
+ ## creatingPoly will be set to False.
## Polygons created before the last one will not be affected by undo()
if self._creatingPoly:
if len(self._poly) > 0:
@@ -3796,7 +3796,7 @@
class Turtle(RawTurtle):
- """RawTurtle auto-crating (scrolled) canvas.
+ """RawTurtle auto-creating (scrolled) canvas.
When a Turtle object is created or a function derived from some
Turtle method is called a TurtleScreen object is automatically created.
@@ -3836,7 +3836,7 @@
filename -- a string, used as filename
default value is turtle_docstringdict
- Has to be called explicitely, (not used by the turtle-graphics classes)
+ Has to be called explicitly, (not used by the turtle-graphics classes)
The docstring dictionary will be written to the Python script .py
It is intended to serve as a template for translation of the docstrings
into different languages.
diff --git a/Lib/turtledemo/bytedesign.py b/Lib/turtledemo/bytedesign.py
--- a/Lib/turtledemo/bytedesign.py
+++ b/Lib/turtledemo/bytedesign.py
@@ -4,7 +4,7 @@
tdemo_bytedesign.py
An example adapted from the example-suite
-of PythonCard's turtle graphcis.
+of PythonCard's turtle graphics.
It's based on an article in BYTE magazine
Problem Solving with Logo: Using Turtle
diff --git a/Lib/unittest/result.py b/Lib/unittest/result.py
--- a/Lib/unittest/result.py
+++ b/Lib/unittest/result.py
@@ -59,6 +59,9 @@
"Called when the given test is about to be run"
self.testsRun += 1
self._mirrorOutput = False
+ self._setupStdout()
+
+ def _setupStdout(self):
if self.buffer:
if self._stderr_buffer is None:
self._stderr_buffer = io.StringIO()
@@ -74,6 +77,10 @@
def stopTest(self, test):
"""Called when the given test has been run"""
+ self._restoreStdout()
+ self._mirrorOutput = False
+
+ def _restoreStdout(self):
if self.buffer:
if self._mirrorOutput:
output = sys.stdout.getvalue()
@@ -93,7 +100,6 @@
self._stdout_buffer.truncate()
self._stderr_buffer.seek(0)
self._stderr_buffer.truncate()
- self._mirrorOutput = False
def stopTestRun(self):
"""Called once after all tests are executed.
diff --git a/Lib/unittest/suite.py b/Lib/unittest/suite.py
--- a/Lib/unittest/suite.py
+++ b/Lib/unittest/suite.py
@@ -8,6 +8,11 @@
__unittest = True
+def _call_if_exists(parent, attr):
+ func = getattr(parent, attr, lambda: None)
+ func()
+
+
class BaseTestSuite(object):
"""A simple test suite that doesn't provide class or module shared fixtures.
"""
@@ -133,6 +138,7 @@
setUpClass = getattr(currentClass, 'setUpClass', None)
if setUpClass is not None:
+ _call_if_exists(result, '_setupStdout')
try:
setUpClass()
except Exception as e:
@@ -142,6 +148,8 @@
className = util.strclass(currentClass)
errorName = 'setUpClass (%s)' % className
self._addClassOrModuleLevelException(result, e, errorName)
+ finally:
+ _call_if_exists(result, '_restoreStdout')
def _get_previous_module(self, result):
previousModule = None
@@ -167,6 +175,7 @@
return
setUpModule = getattr(module, 'setUpModule', None)
if setUpModule is not None:
+ _call_if_exists(result, '_setupStdout')
try:
setUpModule()
except Exception as e:
@@ -175,6 +184,8 @@
result._moduleSetUpFailed = True
errorName = 'setUpModule (%s)' % currentModule
self._addClassOrModuleLevelException(result, e, errorName)
+ finally:
+ _call_if_exists(result, '_restoreStdout')
def _addClassOrModuleLevelException(self, result, exception, errorName):
error = _ErrorHolder(errorName)
@@ -198,6 +209,7 @@
tearDownModule = getattr(module, 'tearDownModule', None)
if tearDownModule is not None:
+ _call_if_exists(result, '_setupStdout')
try:
tearDownModule()
except Exception as e:
@@ -205,6 +217,8 @@
raise
errorName = 'tearDownModule (%s)' % previousModule
self._addClassOrModuleLevelException(result, e, errorName)
+ finally:
+ _call_if_exists(result, '_restoreStdout')
def _tearDownPreviousClass(self, test, result):
previousClass = getattr(result, '_previousTestClass', None)
@@ -220,6 +234,7 @@
tearDownClass = getattr(previousClass, 'tearDownClass', None)
if tearDownClass is not None:
+ _call_if_exists(result, '_setupStdout')
try:
tearDownClass()
except Exception as e:
@@ -228,7 +243,8 @@
className = util.strclass(previousClass)
errorName = 'tearDownClass (%s)' % className
self._addClassOrModuleLevelException(result, e, errorName)
finally:
_call_if_exists(result, '_restoreStdout')
class _ErrorHolder(object):
diff --git a/Lib/unittest/test/test_result.py b/Lib/unittest/test/test_result.py
--- a/Lib/unittest/test/test_result.py
+++ b/Lib/unittest/test/test_result.py
@@ -497,5 +497,72 @@
self.assertEqual(result._original_stderr.getvalue(), expectedErrMessage)
self.assertMultiLineEqual(message, expectedFullMessage)
+ def testBufferSetupClass(self):
+ result = unittest.TestResult()
+ result.buffer = True
+
+ class Foo(unittest.TestCase):
+ @classmethod
+ def setUpClass(cls):
+ 1/0
+ def test_foo(self):
+ pass
+ suite = unittest.TestSuite([Foo('test_foo')])
+ suite(result)
+ self.assertEqual(len(result.errors), 1)
+
+ def testBufferTearDownClass(self):
+ result = unittest.TestResult()
+ result.buffer = True
+
+ class Foo(unittest.TestCase):
+ @classmethod
+ def tearDownClass(cls):
+ 1/0
+ def test_foo(self):
+ pass
+ suite = unittest.TestSuite([Foo('test_foo')])
+ suite(result)
+ self.assertEqual(len(result.errors), 1)
+
+ def testBufferSetUpModule(self):
+ result = unittest.TestResult()
+ result.buffer = True
+
+ class Foo(unittest.TestCase):
+ def test_foo(self):
+ pass
+ class Module(object):
+ @staticmethod
+ def setUpModule():
+ 1/0
+
+ Foo.__module__ = 'Module'
+ sys.modules['Module'] = Module
+ self.addCleanup(sys.modules.pop, 'Module')
+ suite = unittest.TestSuite([Foo('test_foo')])
+ suite(result)
+ self.assertEqual(len(result.errors), 1)
+
+ def testBufferTearDownModule(self):
+ result = unittest.TestResult()
+ result.buffer = True
+
+ class Foo(unittest.TestCase):
+ def test_foo(self):
+ pass
+ class Module(object):
+ @staticmethod
+ def tearDownModule():
+ 1/0
+
+ Foo.__module__ = 'Module'
+ sys.modules['Module'] = Module
+ self.addCleanup(sys.modules.pop, 'Module')
+ suite = unittest.TestSuite([Foo('test_foo')])
+ suite(result)
+ self.assertEqual(len(result.errors), 1)
+
+
if __name__ == '__main__':
unittest.main()
diff --git a/Lib/urllib/request.py b/Lib/urllib/request.py
--- a/Lib/urllib/request.py
+++ b/Lib/urllib/request.py
@@ -1657,6 +1657,12 @@
headers["Authorization"] = "Basic %s" % auth
if realhost:
headers["Host"] = realhost
+
+ # Add Connection:close as we don't support persistent connections yet.
+ # This helps in closing the socket and avoiding ResourceWarning
+
+ headers["Connection"] = "close"
+
for header, value in self.addheaders:
headers[header] = value
diff --git a/Lib/xml/dom/minicompat.py b/Lib/xml/dom/minicompat.py
--- a/Lib/xml/dom/minicompat.py
+++ b/Lib/xml/dom/minicompat.py
@@ -6,7 +6,7 @@
#
# NodeList -- lightest possible NodeList implementation
#
-# EmptyNodeList -- lightest possible NodeList that is guarateed to
+# EmptyNodeList -- lightest possible NodeList that is guaranteed to
# remain empty (immutable)
#
# StringTypes -- tuple of defined string types
diff --git a/Lib/xml/dom/minidom.py b/Lib/xml/dom/minidom.py
--- a/Lib/xml/dom/minidom.py
+++ b/Lib/xml/dom/minidom.py
@@ -1905,7 +1905,7 @@
e._call_user_data_handler(operation, n, entity)
else:
# Note the cloning of Document and DocumentType nodes is
- # implemenetation specific. minidom handles those cases
+ # implementation specific. minidom handles those cases
# directly in the cloneNode() methods.
raise xml.dom.NotSupportedErr("Cannot clone node %s" % repr(node))
diff --git a/Lib/xmlrpc/server.py b/Lib/xmlrpc/server.py
--- a/Lib/xmlrpc/server.py
+++ b/Lib/xmlrpc/server.py
@@ -240,7 +240,7 @@
marshalled data.
For backwards compatibility, a dispatch function can be provided as an
argument (see comment in SimpleXMLRPCRequestHandler.do_POST) but overriding the
- existing method through subclassing is the prefered means
+ existing method through subclassing is the preferred means
of changing method dispatch behavior.
"""
diff --git a/Mac/BuildScript/build-installer.py b/Mac/BuildScript/build-installer.py
--- a/Mac/BuildScript/build-installer.py
+++ b/Mac/BuildScript/build-installer.py
@@ -362,7 +362,7 @@
def runCommand(commandline):
"""
- Run a command and raise RuntimeError if it fails. Output is surpressed
+ Run a command and raise RuntimeError if it fails. Output is suppressed
unless the command fails.
"""
fd = os.popen(commandline, 'r')
diff --git a/Misc/ACKS b/Misc/ACKS
--- a/Misc/ACKS
+++ b/Misc/ACKS
@@ -227,6 +227,7 @@
Ismail Donmez
Marcos Donolo
Dima Dorfman
+Yves Dorfsman
Cesar Douady
Dean Draayer
Fred L. Drake, Jr.
@@ -450,6 +451,7 @@
Peter van Kampen
Rafe Kaplan
Jacob Kaplan-Moss
+Arkady Koplyarov
Lou Kates
Hiroaki Kawai
Sebastien Keim
diff --git a/Misc/NEWS b/Misc/NEWS
--- a/Misc/NEWS
+++ b/Misc/NEWS
@@ -10,6 +10,9 @@
Core and Builtins
+- Issue #11320: fix bogus memory management in Modules/getpath.c, leading to
+ a possible crash when calling Py_SetPath().
+
- _ast.__version__ is now a Mercurial integer and hex revision.
- Issue #11432: A bug was introduced in subprocess.Popen on posix systems with
@@ -72,8 +75,37 @@
Library
+- Issue #5421: Fix misleading error message when one of socket.sendto()'s
+ arguments has the wrong type. Patch by Nikita Vetoshkin.
+
+- Issue #10812: Add some extra posix functions to the os module.
+
+- Issue #10979: unittest stdout buffering now works with class and module
+ setup and teardown.
+
+- Issue #11577: fix ResourceWarning triggered by improved binhex test coverage
+
+- Issue #11243: fix the parameter querying methods of Message to work if
+ the headers contain un-encoded non-ASCII data.
+
+- Issue #11401: fix handling of headers with no value; this fixes a regression
+ relative to Python2 and the result is now the same as it was in Python2.
+
+- Issue #9298: base64 bodies weren't being folded to line lengths less than 78,
+ which was a regression relative to Python2. Unlike Python2, the last line
+ of the folded body now ends with a carriage return.
+
+- Issue #11560: shutil.unpack_archive now correctly handles the format
+ parameter. Patch by Evan Dandrea.
+
+- Issue #5870: Add subprocess.DEVNULL
constant.
+
- Issue #11133: fix two cases where inspect.getattr_static can trigger code
- execution. Patch by Daniel Urban.
+ execution. Patch by Andreas Stührk.
+
+- Issue #11569: use absolute path to the sysctl command in multiprocessing to
+ ensure that it will be found regardless of the shell PATH. This ensures
+ that multiprocessing.cpu_count works on default installs of MacOSX.
- Issue #11501: disutils.archive_utils.make_zipfile no longer fails if zlib is
not installed. Instead, the zipfile.ZIP_STORED compression is used to create
@@ -223,9 +255,25 @@
Tests
+- Issue #11577: improve test coverage of binhex.py. Patch by Arkady Koplyarov.
+
+- New test_crashers added to exercise the scripts in the Lib/test/crashers
+ directory and confirm they fail as expected
+
+- Issue #11578: added test for the timeit module. Patch Michael Henry.
+
+- Issue #11503: improve test coverage of posixpath.py. Patch by Evan Dandrea.
+
+- Issue #11505: improves test coverage of string.py. Patch by Alicia
+ Arlen.
+
+- Issue #11548: Improve test coverage of the shutil module. Patch by
+ Evan Dandrea.
+
- Issue #11554: Reactivated test_email_codecs.
-- Issue #11505: improves test coverage of string.py
+- Issue #11505: improves test coverage of string.py. Patch by Alicia
+ Arlen
- Issue #11490: test_subprocess:test_leaking_fds_on_error no longer gives a
false positive if the last directory in the path is inaccessible.
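
The unittest entry above (Issue #10979) is easiest to see with a tiny script: with result.buffer enabled, output printed from setUpClass is now routed through the buffering machinery instead of leaking to the real stdout. A sketch only; the class name is made up:

    import unittest

    class NoisySetup(unittest.TestCase):
        @classmethod
        def setUpClass(cls):
            print("chatter from setUpClass")   # buffered, not shown on success
        def test_nothing(self):
            pass

    result = unittest.TestResult()
    result.buffer = True
    suite = unittest.TestSuite([NoisySetup('test_nothing')])
    suite(result)
    print(result.errors)   # [] -- and the chatter never reached the real stdout
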
diff --git a/Modules/_ctypes/_ctypes.c b/Modules/_ctypes/_ctypes.c
--- a/Modules/_ctypes/_ctypes.c
+++ b/Modules/_ctypes/_ctypes.c
@@ -3317,7 +3317,7 @@
/* XXX XXX This would allow to pass additional options. For COM
method implementations, we would probably want different
behaviour than in 'normal' callback functions: return a HRESULT if
- an exception occurrs in the callback, and print the traceback not
+ an exception occurs in the callback, and print the traceback not
only on the console, but also to OutputDebugString() or something
like that.
*/
diff --git a/Modules/_ctypes/callbacks.c b/Modules/_ctypes/callbacks.c
--- a/Modules/_ctypes/callbacks.c
+++ b/Modules/_ctypes/callbacks.c
@@ -202,7 +202,7 @@
/* XXX XXX XX
We have the problem that c_byte or c_short have dict->size of
1 resp. 4, but these parameters are pushed as sizeof(int) bytes.
- BTW, the same problem occurrs when they are pushed as parameters
+ BTW, the same problem occurs when they are pushed as parameters
*/
} else if (dict) {
/* Hm, shouldn't we use PyCData_AtAddress() or something like that instead? */
diff --git a/Modules/_ctypes/callproc.c b/Modules/_ctypes/callproc.c
--- a/Modules/_ctypes/callproc.c
+++ b/Modules/_ctypes/callproc.c
@@ -29,7 +29,7 @@
4. _ctypes_callproc is then called with the 'callargs' tuple. _ctypes_callproc first
allocates two arrays. The first is an array of 'struct argument' items, the
- second array has 'void *' entried.
+ second array has 'void *' entries.
5. If 'converters' are present (converters is a sequence of argtypes'
from_param methods), for each item in 'callargs' converter is called and the
diff --git a/Modules/_functoolsmodule.c b/Modules/_functoolsmodule.c
--- a/Modules/_functoolsmodule.c
+++ b/Modules/_functoolsmodule.c
@@ -242,7 +242,7 @@
__reduce__ by itself doesn't support getting kwargs in the unpickle
operation so we define a __setstate__ that replaces all the information
about the partial. If we only replaced part of it someone would use
- it as a hook to do stange things.
+ it as a hook to do strange things.
*/
static PyObject *
diff --git a/Modules/_io/iobase.c b/Modules/_io/iobase.c
--- a/Modules/_io/iobase.c
+++ b/Modules/_io/iobase.c
@@ -50,7 +50,7 @@
"stream.\n"
"\n"
"IOBase also supports the :keyword:with
statement. In this example,\n"
- "fp is closed after the suite of the with statment is complete:\n"
+ "fp is closed after the suite of the with statement is complete:\n"
"\n"
"with open('spam.txt', 'r') as fp:\n"
" fp.write('Spam and eggs!')\n");
diff --git a/Modules/_io/stringio.c b/Modules/_io/stringio.c
--- a/Modules/_io/stringio.c
+++ b/Modules/_io/stringio.c
@@ -157,7 +157,7 @@
0 lo string_size hi
| |<---used--->|<----------available----------->|
| | <--to pad-->|<---to write---> |
- 0 buf positon
+ 0 buf position
*/
memset(self->buf + self->string_size, '\0',
diff --git a/Modules/_pickle.c b/Modules/_pickle.c
--- a/Modules/_pickle.c
+++ b/Modules/_pickle.c
@@ -6259,7 +6259,7 @@
goto error;
if (!PyDict_CheckExact(name_mapping_3to2)) {
PyErr_Format(PyExc_RuntimeError,
- "_compat_pickle.REVERSE_NAME_MAPPING shouldbe a dict, "
+ "_compat_pickle.REVERSE_NAME_MAPPING should be a dict, "
"not %.200s", Py_TYPE(name_mapping_3to2)->tp_name);
goto error;
}
diff --git a/Modules/_sqlite/connection.h b/Modules/_sqlite/connection.h
--- a/Modules/_sqlite/connection.h
+++ b/Modules/_sqlite/connection.h
@@ -55,7 +55,7 @@
/* None for autocommit, otherwise a PyString with the isolation level */
PyObject* isolation_level;
- /* NULL for autocommit, otherwise a string with the BEGIN statment; will be
+ /* NULL for autocommit, otherwise a string with the BEGIN statement; will be
* freed in connection destructor */
char* begin_statement;
diff --git a/Modules/cmathmodule.c b/Modules/cmathmodule.c
--- a/Modules/cmathmodule.c
+++ b/Modules/cmathmodule.c
@@ -23,7 +23,7 @@
/*
CM_LARGE_DOUBLE is used to avoid spurious overflow in the sqrt, log,
inverse trig and inverse hyperbolic trig functions. Its log is used in the
- evaluation of exp, cos, cosh, sin, sinh, tan, and tanh to avoid unecessary
+ evaluation of exp, cos, cosh, sin, sinh, tan, and tanh to avoid unnecessary
overflow.
*/
diff --git a/Modules/getpath.c b/Modules/getpath.c
--- a/Modules/getpath.c
+++ b/Modules/getpath.c
@@ -134,6 +134,7 @@
static wchar_t exec_prefix[MAXPATHLEN+1];
static wchar_t progpath[MAXPATHLEN+1];
static wchar_t *module_search_path = NULL;
+static int module_search_path_malloced = 0;
static wchar_t *lib_python = L"lib/python" VERSION;
static void
@@ -634,7 +635,6 @@
bufsz += wcslen(zip_path) + 1;
bufsz += wcslen(exec_prefix) + 1;
- /* This is the only malloc call in this file */
buf = (wchar_t *)PyMem_Malloc(bufsz*sizeof(wchar_t));
if (buf == NULL) {
@@ -687,6 +687,7 @@
/* And publish the results */
module_search_path = buf;
+ module_search_path_malloced = 1;
}
/* Reduce prefix and exec_prefix to their essence,
@@ -726,15 +727,18 @@
Py_SetPath(const wchar_t *path)
{
if (module_search_path != NULL) {
- free(module_search_path);
+ if (module_search_path_malloced)
+ PyMem_Free(module_search_path);
module_search_path = NULL;
+ module_search_path_malloced = 0;
}
if (path != NULL) {
extern wchar_t *Py_GetProgramName(void);
wchar_t *prog = Py_GetProgramName();
wcsncpy(progpath, prog, MAXPATHLEN);
exec_prefix[0] = prefix[0] = L'\0';
- module_search_path = malloc((wcslen(path) + 1) * sizeof(wchar_t));
+ module_search_path = PyMem_Malloc((wcslen(path) + 1) * sizeof(wchar_t));
+ module_search_path_malloced = 1;
if (module_search_path != NULL)
wcscpy(module_search_path, path);
}
diff --git a/Modules/posixmodule.c b/Modules/posixmodule.c
--- a/Modules/posixmodule.c
+++ b/Modules/posixmodule.c
@@ -59,6 +59,10 @@
#include "osdefs.h"
#endif
+#ifdef HAVE_SYS_UIO_H
+#include <sys/uio.h>
+#endif
+
#ifdef HAVE_SYS_TYPES_H
#include <sys/types.h>
#endif /* HAVE_SYS_TYPES_H */
@@ -103,10 +107,6 @@
#ifdef HAVE_SYS_SOCKET_H
#include <sys/socket.h>
#endif
-#ifdef HAVE_SYS_UIO_H
-#include <sys/uio.h>
-#endif
#endif
/* Various compilers have only certain posix functions */
@@ -1503,6 +1503,33 @@
10
};
+#if defined(HAVE_WAITID) && !defined(__APPLE__)
+PyDoc_STRVAR(waitid_result__doc__,
+"waitid_result: Result from waitid.\n\n
+This object may be accessed either as a tuple of\n
+ (si_pid, si_uid, si_signo, si_status, si_code),\n
+or via the attributes si_pid, si_uid, and so on.\n
+\n
+See os.waitid for more information.");
+
+static PyStructSequence_Field waitid_result_fields[] = {
+ {"si_pid", },
+ {"si_uid", },
+ {"si_signo", },
+ {"si_status", },
+ {"si_code", },
+ {0}
+};
+
+static PyStructSequence_Desc waitid_result_desc = {
+ "waitid_result", / name /
+ waitid_result__doc__, / doc /
+ waitid_result_fields,
+ 5
+};
+static PyTypeObject WaitidResultType;
+#endif
+
static int initialized;
static PyTypeObject StatResultType;
static PyTypeObject StatVFSResultType;
@@ -2102,6 +2129,21 @@
}
#endif /* HAVE_FSYNC */
+#ifdef HAVE_SYNC
+PyDoc_STRVAR(posix_sync__doc__,
+"sync()\n\n
+Force write of everything to disk.");
+
+static PyObject *
+posix_sync(PyObject *self, PyObject *noargs)
+{
+ Py_BEGIN_ALLOW_THREADS
+ sync();
+ Py_END_ALLOW_THREADS
+ Py_RETURN_NONE;
+}
+#endif
+
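
At the Python level this surfaces as os.sync(); a trivial Unix-only sketch (the file name is illustrative):

    import os

    with open("journal.log", "a") as fp:
        fp.write("checkpoint\n")
        fp.flush()
    os.sync()    # ask the kernel to flush all filesystem buffers to disk
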
#ifdef HAVE_FDATASYNC
#ifdef __hpux
@@ -3488,6 +3530,167 @@
#endif /* MS_WINDOWS */
}
+#ifdef HAVE_FUTIMES
+PyDoc_STRVAR(posix_futimes__doc__,
+"futimes(fd, (atime, mtime))\n
+futimes(fd, None)\n\n
+Set the access and modified time of the file specified by the file\n
+descriptor fd to the given values. If the second form is used, set the\n
+access and modified times to the current time.");
+
+static PyObject *
+posix_futimes(PyObject *self, PyObject *args)
+{
+ int res, fd;
+ PyObject *arg;
+ struct timeval buf[2];
+ long ausec, musec;
+
+ if (!PyArg_ParseTuple(args, "iO:futimes", &fd, &arg))
+ return NULL;
+
+ if (arg == Py_None) {
+ /* optional time values not given */
+ Py_BEGIN_ALLOW_THREADS
+ res = futimes(fd, NULL);
+ Py_END_ALLOW_THREADS
+ }
+ else if (!PyTuple_Check(arg) || PyTuple_Size(arg) != 2) {
+ PyErr_SetString(PyExc_TypeError,
+ "futimes() arg 2 must be a tuple (atime, mtime)");
+ return NULL;
+ }
+ else {
+ if (extract_time(PyTuple_GET_ITEM(arg, 0),
+ &(buf[0].tv_sec), &ausec) == -1) {
+ return NULL;
+ }
+ if (extract_time(PyTuple_GET_ITEM(arg, 1),
+ &(buf[1].tv_sec), &musec) == -1) {
+ return NULL;
+ }
+ buf[0].tv_usec = ausec;
+ buf[1].tv_usec = musec;
+ Py_BEGIN_ALLOW_THREADS
+ res = futimes(fd, buf);
+ Py_END_ALLOW_THREADS
+ }
+ if (res < 0)
+ return posix_error();
+ Py_RETURN_NONE;
+}
+#endif
+
+#ifdef HAVE_LUTIMES
+PyDoc_STRVAR(posix_lutimes__doc__,
+"lutimes(path, (atime, mtime))\n
+lutimes(path, None)\n\n
+Like utime(), but if path is a symbolic link, it is not dereferenced.");
+
+static PyObject *
+posix_lutimes(PyObject *self, PyObject *args)
+{
+ PyObject *opath, *arg;
+ const char *path;
+ int res;
+ struct timeval buf[2];
+ long ausec, musec;
+
+ if (!PyArg_ParseTuple(args, "O&O:lutimes",
+ PyUnicode_FSConverter, &opath, &arg))
+ return NULL;
+ path = PyBytes_AsString(opath);
+ if (arg == Py_None) {
+ /* optional time values not given */
+ Py_BEGIN_ALLOW_THREADS
+ res = lutimes(path, NULL);
+ Py_END_ALLOW_THREADS
+ }
+ else if (!PyTuple_Check(arg) || PyTuple_Size(arg) != 2) {
+ PyErr_SetString(PyExc_TypeError,
+ "lutimes() arg 2 must be a tuple (atime, mtime)");
+ Py_DECREF(opath);
+ return NULL;
+ }
+ else {
+ if (extract_time(PyTuple_GET_ITEM(arg, 0),
+ &(buf[0].tv_sec), &ausec) == -1) {
+ Py_DECREF(opath);
+ return NULL;
+ }
+ if (extract_time(PyTuple_GET_ITEM(arg, 1),
+ &(buf[1].tv_sec), &musec) == -1) {
+ Py_DECREF(opath);
+ return NULL;
+ }
+ buf[0].tv_usec = ausec;
+ buf[1].tv_usec = musec;
+ Py_BEGIN_ALLOW_THREADS
+ res = lutimes(path, buf);
+ Py_END_ALLOW_THREADS
+ }
+ Py_DECREF(opath);
+ if (res < 0)
+ return posix_error();
+ Py_RETURN_NONE;
+}
+#endif
+
+#ifdef HAVE_FUTIMENS
+PyDoc_STRVAR(posix_futimens__doc__,
+"futimens(fd, (atime_sec, atime_nsec), (mtime_sec, mtime_nsec))\n
+futimens(fd, None, None)\n\n
+Updates the timestamps of a file specified by the file descriptor fd, with\n
+nanosecond precision.\n
+The second form sets atime and mtime to the current time.\n
+If *_nsec is specified as UTIME_NOW, the timestamp is updated to the\n
+current time.\n
+If *_nsec is specified as UTIME_OMIT, the timestamp is not updated.");
+
+static PyObject *
+posix_futimens(PyObject *self, PyObject *args)
+{
+ int res, fd;
+ PyObject *atime, *mtime;
+ struct timespec buf[2];
+
+ if (!PyArg_ParseTuple(args, "iOO:futimens",
+ &fd, &atime, &mtime))
+ return NULL;
+ if (atime == Py_None && mtime == Py_None) {
+ /* optional time values not given */
+ Py_BEGIN_ALLOW_THREADS
+ res = futimens(fd, NULL);
+ Py_END_ALLOW_THREADS
+ }
+ else if (!PyTuple_Check(atime) || PyTuple_Size(atime) != 2) {
+ PyErr_SetString(PyExc_TypeError,
+ "futimens() arg 2 must be a tuple (atime_sec, atime_nsec)");
+ return NULL;
+ }
+ else if (!PyTuple_Check(mtime) || PyTuple_Size(mtime) != 2) {
+ PyErr_SetString(PyExc_TypeError,
+ "futimens() arg 3 must be a tuple (mtime_sec, mtime_nsec)");
+ return NULL;
+ }
+ else {
+ if (!PyArg_ParseTuple(atime, "ll:futimens",
+ &(buf[0].tv_sec), &(buf[0].tv_nsec))) {
+ return NULL;
+ }
+ if (!PyArg_ParseTuple(mtime, "ll:futimens",
+ &(buf[1].tv_sec), &(buf[1].tv_nsec))) {
+ return NULL;
+ }
+ Py_BEGIN_ALLOW_THREADS
+ res = futimens(fd, buf);
+ Py_END_ALLOW_THREADS
+ }
+ if (res < 0)
+ return posix_error();
+ Py_RETURN_NONE;
+}
+#endif
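
From Python these surface as os.futimes()/os.futimens(), matching the Doc/library/os.rst entries in this changeset; a hedged Unix-only sketch (the file name is illustrative):

    import os, time

    fd = os.open("somefile", os.O_RDONLY)
    try:
        os.futimes(fd, None)                        # touch: atime/mtime = now
        now = time.time()
        os.futimes(fd, (now, now))                  # explicit float seconds
        sec, nsec = int(now), int((now - int(now)) * 1e9)
        os.futimens(fd, (sec, nsec), (sec, nsec))   # nanosecond precision
    finally:
        os.close(fd)
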
/* Process operations */
@@ -3532,79 +3735,7 @@
}
#endif
-#ifdef HAVE_EXECV
-PyDoc_STRVAR(posix_execv__doc__,
-"execv(path, args)\n\n
-Execute an executable path with arguments, replacing current process.\n
-\n
- path: path of executable file\n
- args: tuple or list of strings");
-static PyObject *
-posix_execv(PyObject *self, PyObject *args)
-{
- PyObject *opath;
- char *path;
- PyObject *argv;
- char **argvlist;
- Py_ssize_t i, argc;
- PyObject *(*getitem)(PyObject *, Py_ssize_t);
- /* execv has two arguments: (path, argv), where
argv is a list or tuple of strings. */
- if (!PyArg_ParseTuple(args, "O&O:execv",
PyUnicode_FSConverter,
&opath, &argv))
return NULL;
- path = PyBytes_AsString(opath);
- if (PyList_Check(argv)) {
argc = PyList_Size(argv);
getitem = PyList_GetItem;
- }
- else if (PyTuple_Check(argv)) {
argc = PyTuple_Size(argv);
getitem = PyTuple_GetItem;
- }
- else {
PyErr_SetString(PyExc_TypeError, "execv() arg 2 must be a tuple or list");
Py_DECREF(opath);
return NULL;
- }
- if (argc < 1) {
PyErr_SetString(PyExc_ValueError, "execv() arg 2 must not be empty");
Py_DECREF(opath);
return NULL;
- }
- argvlist = PyMem_NEW(char *, argc+1);
- if (argvlist == NULL) {
Py_DECREF(opath);
return PyErr_NoMemory();
- }
- for (i = 0; i < argc; i++) {
if (!fsconvert_strdup((*getitem)(argv, i),
&argvlist[i])) {
free_string_array(argvlist, i);
PyErr_SetString(PyExc_TypeError,
"execv() arg 2 must contain only strings");
Py_DECREF(opath);
return NULL;
}
- }
- argvlist[argc] = NULL;
- execv(path, argvlist);
- /* If we get here it's definitely an error */
- free_string_array(argvlist, argc);
- Py_DECREF(opath);
- return posix_error();
-}
+#if defined(HAVE_EXECV) || defined (HAVE_FEXECVE)
static char**
parse_envlist(PyObject* env, Py_ssize_t *envc_ptr)
{
@@ -3686,6 +3817,87 @@
return NULL;
}
+static char**
+parse_arglist(PyObject* argv, Py_ssize_t *argc)
+{
+ int i;
+ char **argvlist = PyMem_NEW(char *, *argc+1);
+ if (argvlist == NULL) {
+ PyErr_NoMemory();
+ return NULL;
+ }
+ for (i = 0; i < *argc; i++) {
+ PyObject* item = PySequence_ITEM(argv, i);
+ if (item == NULL)
+ goto fail;
+ if (!fsconvert_strdup(item, &argvlist[i])) {
+ Py_DECREF(item);
+ goto fail;
+ }
+ Py_DECREF(item);
+ }
+ argvlist[*argc] = NULL;
+ return argvlist;
+fail:
+ *argc = i;
+ free_string_array(argvlist, *argc);
+ return NULL;
+}
+#endif
+
+#ifdef HAVE_EXECV
+PyDoc_STRVAR(posix_execv__doc__,
+"execv(path, args)\n\n
+Execute an executable path with arguments, replacing current process.\n
+\n
+ path: path of executable file\n
+ args: tuple or list of strings");
+
+static PyObject *
+posix_execv(PyObject *self, PyObject *args)
+{
+ PyObject *opath;
+ char *path;
+ PyObject *argv;
+ char **argvlist;
+ Py_ssize_t argc;
+
+ /* execv has two arguments: (path, argv), where
+ argv is a list or tuple of strings. */
+
+ if (!PyArg_ParseTuple(args, "O&O:execv",
+ PyUnicode_FSConverter,
+ &opath, &argv))
+ return NULL;
+ path = PyBytes_AsString(opath);
+ if (!PyList_Check(argv) && !PyTuple_Check(argv)) {
+ PyErr_SetString(PyExc_TypeError,
+ "execv() arg 2 must be a tuple or list");
+ Py_DECREF(opath);
+ return NULL;
+ }
+ argc = PySequence_Size(argv);
+ if (argc < 1) {
+ PyErr_SetString(PyExc_ValueError, "execv() arg 2 must not be empty");
+ Py_DECREF(opath);
+ return NULL;
+ }
+
+ argvlist = parse_arglist(argv, &argc);
+ if (argvlist == NULL) {
+ Py_DECREF(opath);
+ return NULL;
+ }
+
+ execv(path, argvlist);
+
+ /* If we get here it's definitely an error */
+
+ free_string_array(argvlist, argc);
+ Py_DECREF(opath);
+ return posix_error();
+}
+
PyDoc_STRVAR(posix_execve__doc__,
"execve(path, args, env)\n\n
Execute a path with arguments and environment, replacing current process.\n
@@ -3702,9 +3914,7 @@
PyObject *argv, *env;
char **argvlist;
char **envlist;
- Py_ssize_t i, argc, envc;
- PyObject *(*getitem)(PyObject *, Py_ssize_t);
- Py_ssize_t lastarg = 0;
+ Py_ssize_t argc, envc;
/* execve has three arguments: (path, argv, env), where
argv is a list or tuple of strings and env is a dictionary
@@ -3715,40 +3925,22 @@
&opath, &argv, &env))
return NULL;
path = PyBytes_AsString(opath);
- if (PyList_Check(argv)) {
- argc = PyList_Size(argv);
- getitem = PyList_GetItem;
- }
- else if (PyTuple_Check(argv)) {
- argc = PyTuple_Size(argv);
- getitem = PyTuple_GetItem;
- }
- else {
+ if (!PyList_Check(argv) && !PyTuple_Check(argv)) {
PyErr_SetString(PyExc_TypeError,
"execve() arg 2 must be a tuple or list");
goto fail_0;
}
+ argc = PySequence_Size(argv);
if (!PyMapping_Check(env)) {
PyErr_SetString(PyExc_TypeError,
"execve() arg 3 must be a mapping object");
goto fail_0;
}
- argvlist = PyMem_NEW(char *, argc+1);
+ argvlist = parse_arglist(argv, &argc);
if (argvlist == NULL) {
- PyErr_NoMemory();
goto fail_0;
}
- for (i = 0; i < argc; i++) {
- if (!fsconvert_strdup((*getitem)(argv, i),
- &argvlist[i]))
- {
- lastarg = i;
- goto fail_1;
- }
- }
- lastarg = argc;
- argvlist[argc] = NULL;
envlist = parse_envlist(env, &envc);
if (envlist == NULL)
@@ -3764,13 +3956,69 @@
PyMem_DEL(envlist[envc]);
PyMem_DEL(envlist);
fail_1:
- free_string_array(argvlist, lastarg);
+ free_string_array(argvlist, argc);
fail_0:
Py_DECREF(opath);
return NULL;
}
#endif /* HAVE_EXECV */
+#ifdef HAVE_FEXECVE
+PyDoc_STRVAR(posix_fexecve__doc__,
+"fexecve(fd, args, env)\n\n
+Execute the program specified by a file descriptor with arguments and\n
+environment, replacing the current process.\n
+\n
+ fd: file descriptor of executable\n
+ args: tuple or list of arguments\n
+ env: dictionary of strings mapping to strings");
+
+static PyObject *
+posix_fexecve(PyObject *self, PyObject *args)
+{
+ int fd;
+ PyObject *argv, *env;
+ char **argvlist;
+ char **envlist;
+ Py_ssize_t argc, envc;
+
+ if (!PyArg_ParseTuple(args, "iOO:fexecve",
+ &fd, &argv, &env))
+ return NULL;
+ if (!PyList_Check(argv) && !PyTuple_Check(argv)) {
+ PyErr_SetString(PyExc_TypeError,
+ "fexecve() arg 2 must be a tuple or list");
+ return NULL;
+ }
+ argc = PySequence_Size(argv);
+ if (!PyMapping_Check(env)) {
+ PyErr_SetString(PyExc_TypeError,
+ "fexecve() arg 3 must be a mapping object");
+ return NULL;
+ }
+
+ argvlist = parse_arglist(argv, &argc);
+ if (argvlist == NULL)
+ return NULL;
+
+ envlist = parse_envlist(env, &envc);
+ if (envlist == NULL)
+ goto fail;
+
+ fexecve(fd, argvlist, envlist);
+
+ /* If we get here it's definitely an error */
+
+ (void) posix_error();
+
+ while (--envc >= 0)
+ PyMem_DEL(envlist[envc]);
+ PyMem_DEL(envlist);
+ fail:
+ free_string_array(argvlist, argc);
+ return NULL;
+}
+#endif /* HAVE_FEXECVE */
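
A Python-level usage sketch for the new fexecve() wrapper, mirroring the test added earlier in this changeset (Unix-only):

    import os, sys

    fd = os.open(sys.executable, os.O_RDONLY)
    pid = os.fork()
    if pid == 0:
        # child: execute the program behind the descriptor, replacing this process
        os.fexecve(fd, [sys.executable, '-c', 'pass'], os.environ)
    else:
        os.close(fd)
        os.waitpid(pid, 0)
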
#ifdef HAVE_SPAWNV
PyDoc_STRVAR(posix_spawnv__doc__,
@@ -4337,6 +4585,7 @@
}
#endif
+
#ifdef HAVE_GETEGID
PyDoc_STRVAR(posix_getegid__doc__,
"getegid() -> egid\n\n
@@ -5127,6 +5376,55 @@
}
#endif /* HAVE_WAIT4 */
+#if defined(HAVE_WAITID) && !defined(__APPLE__)
+PyDoc_STRVAR(posix_waitid__doc__,
+"waitid(idtype, id, options) -> waitid_result\n\n
+Wait for the completion of one or more child processes.\n\n
+idtype can be P_PID, P_PGID or P_ALL.\n
+id specifies the pid to wait on.\n
+options is constructed from the ORing of one or more of WEXITED, WSTOPPED\n
+or WCONTINUED and additionally may be ORed with WNOHANG or WNOWAIT.\n
+Returns either waitid_result or None if WNOHANG is specified and there are\n
+no children in a waitable state.");
+
+static PyObject *
+posix_waitid(PyObject *self, PyObject *args)
+{
+ PyObject *result;
+ idtype_t idtype;
+ id_t id;
+ int options, res;
+ siginfo_t si;
+ si.si_pid = 0;
+ if (!PyArg_ParseTuple(args, "i" Py_PARSE_PID "i:waitid", &idtype, &id, &options))
+ return NULL;
+ Py_BEGIN_ALLOW_THREADS
+ res = waitid(idtype, id, &si, options);
+ Py_END_ALLOW_THREADS
+ if (res == -1)
+ return posix_error();
+
+ if (si.si_pid == 0)
+ Py_RETURN_NONE;
+
+ result = PyStructSequence_New(&WaitidResultType);
+ if (!result)
+ return NULL;
+
+ PyStructSequence_SET_ITEM(result, 0, PyLong_FromPid(si.si_pid));
+ PyStructSequence_SET_ITEM(result, 1, PyLong_FromPid(si.si_uid));
+ PyStructSequence_SET_ITEM(result, 2, PyLong_FromLong((long)(si.si_signo)));
+ PyStructSequence_SET_ITEM(result, 3, PyLong_FromLong((long)(si.si_status)));
+ PyStructSequence_SET_ITEM(result, 4, PyLong_FromLong((long)(si.si_code)));
+ if (PyErr_Occurred()) {
+ Py_DECREF(result);
+ return NULL;
+ }
+
+ return result;
+}
+#endif
+
#ifdef HAVE_WAITPID
PyDoc_STRVAR(posix_waitpid__doc__,
"waitpid(pid, options) -> (pid, status)\n\n
@@ -5742,6 +6040,35 @@
return Py_None;
}
+#ifdef HAVE_LOCKF
+PyDoc_STRVAR(posix_lockf__doc__,
+"lockf(fd, cmd, len)\n\n
+Apply, test or remove a POSIX lock on an open file descriptor.\n\n
+fd is an open file descriptor.\n
+cmd specifies the command to use - one of F_LOCK, F_TLOCK, F_ULOCK or\n
+F_TEST.\n
+len specifies the section of the file to lock.");
+
+static PyObject *
+posix_lockf(PyObject *self, PyObject *args)
+{
+ int fd, cmd, res;
+ off_t len;
+ if (!PyArg_ParseTuple(args, "iiO&:lockf",
+ &fd, &cmd, _parse_off_t, &len))
+ return NULL;
+
+ Py_BEGIN_ALLOW_THREADS
+ res = lockf(fd, cmd, len);
+ Py_END_ALLOW_THREADS
+
+ if (res < 0)
+ return posix_error();
+
+ Py_RETURN_NONE;
+}
+#endif
+
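
And the corresponding Python-level use of lockf(), again mirroring the new test (Unix-only; the file name is illustrative):

    import os

    fd = os.open("lockdemo.txt", os.O_WRONLY | os.O_CREAT)
    try:
        os.write(fd, b'data')
        os.lseek(fd, 0, os.SEEK_SET)
        os.lockf(fd, os.F_LOCK, 4)     # lock the first 4 bytes
        # ... critical section ...
        os.lockf(fd, os.F_ULOCK, 4)    # release the same section
    finally:
        os.close(fd)
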
PyDoc_STRVAR(posix_lseek__doc__,
"lseek(fd, pos, how) -> newpos\n\n
@@ -5831,6 +6158,140 @@
return buffer;
}
+#if (defined(HAVE_SENDFILE) && (defined(__FreeBSD__) || defined(__DragonFly__)
+ || defined(__APPLE__))) || defined(HAVE_READV) || defined(HAVE_WRITEV)
+static Py_ssize_t
+iov_setup(struct iovec **iov, Py_buffer **buf, PyObject *seq, int cnt, int type)
+{
+ int i, j;
+ Py_ssize_t blen, total = 0;
+
+ *iov = PyMem_New(struct iovec, cnt);
+ if (*iov == NULL) {
+ PyErr_NoMemory();
+ return total;
+ }
+
+ *buf = PyMem_New(Py_buffer, cnt);
+ if (*buf == NULL) {
+ PyMem_Del(*iov);
+ PyErr_NoMemory();
+ return total;
+ }
+
+ for (i = 0; i < cnt; i++) {
+ PyObject *item = PySequence_GetItem(seq, i);
+ if (item == NULL)
+ goto fail;
+ if (PyObject_GetBuffer(item, &(*buf)[i], type) == -1) {
+ Py_DECREF(item);
+ goto fail;
+ }
+ Py_DECREF(item);
+ (*iov)[i].iov_base = (*buf)[i].buf;
+ blen = (*buf)[i].len;
+ (*iov)[i].iov_len = blen;
+ total += blen;
+ }
+ return total;
+
+fail:
+ PyMem_Del(*iov);
+ for (j = 0; j < i; j++) {
+ PyBuffer_Release(&(*buf)[j]);
+ }
+ PyMem_Del(*buf);
+ return 0;
+}
+
+static void
+iov_cleanup(struct iovec *iov, Py_buffer *buf, int cnt)
+{
+ int i;
+ PyMem_Del(iov);
+ for (i = 0; i < cnt; i++) {
+ PyBuffer_Release(&buf[i]);
+ }
+ PyMem_Del(buf);
+}
+#endif
+
+#ifdef HAVE_READV
+PyDoc_STRVAR(posix_readv__doc__,
+"readv(fd, buffers) -> bytesread\n\n
+Read from a file descriptor into a number of writable buffers. buffers\n
+is an arbitrary sequence of writable buffers.\n
+Returns the total number of bytes read.");
+
+static PyObject *
+posix_readv(PyObject *self, PyObject *args)
+{
+ int fd, cnt;
+ Py_ssize_t n;
+ PyObject *seq;
+ struct iovec *iov;
+ Py_buffer *buf;
+
+ if (!PyArg_ParseTuple(args, "iO:readv", &fd, &seq))
+ return NULL;
+ if (!PySequence_Check(seq)) {
+ PyErr_SetString(PyExc_TypeError,
+ "readv() arg 2 must be a sequence");
+ return NULL;
+ }
+ cnt = PySequence_Size(seq);
+
+ if (!iov_setup(&iov, &buf, seq, cnt, PyBUF_WRITABLE))
+ return NULL;
+
+ Py_BEGIN_ALLOW_THREADS
+ n = readv(fd, iov, cnt);
+ Py_END_ALLOW_THREADS
+
+ iov_cleanup(iov, buf, cnt);
+ return PyLong_FromSsize_t(n);
+}
+#endif
+
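A minimal sketch of os.readv() usage, assuming HAVE_READV; the buffers must be writable (bytearray here) and the return value is the total number of bytes scattered across them:

    import os

    fd = os.open("spool.dat", os.O_RDONLY)
    head, rest = bytearray(16), bytearray(64)
    n = os.readv(fd, [head, rest])       # fills head first, then rest
    os.close(fd)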
+#ifdef HAVE_PREAD
+PyDoc_STRVAR(posix_pread__doc__,
+"pread(fd, buffersize, offset) -> string\n\n
+Read from a file descriptor, fd, at a position of offset. It will read up\n
+to buffersize number of bytes. The file offset remains unchanged.");
+
+static PyObject *
+posix_pread(PyObject *self, PyObject *args)
+{
+ int fd, size;
+ off_t offset;
+ Py_ssize_t n;
+ PyObject *buffer;
+ if (!PyArg_ParseTuple(args, "iiO&:pread", &fd, &size, _parse_off_t, &offset))
+ return NULL;
+
+ if (size < 0) {
+ errno = EINVAL;
+ return posix_error();
+ }
+ buffer = PyBytes_FromStringAndSize((char *)NULL, size);
+ if (buffer == NULL)
+ return NULL;
+ if (!_PyVerify_fd(fd)) {
+ Py_DECREF(buffer);
+ return posix_error();
+ }
+ Py_BEGIN_ALLOW_THREADS
+ n = pread(fd, PyBytes_AS_STRING(buffer), size, offset);
+ Py_END_ALLOW_THREADS
+ if (n < 0) {
+ Py_DECREF(buffer);
+ return posix_error();
+ }
+ if (n != size)
+ _PyBytes_Resize(&buffer, n);
+ return buffer;
+}
+#endif
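A minimal sketch of os.pread() usage, assuming HAVE_PREAD; unlike os.read() it does not move the file offset:

    import os

    fd = os.open("spool.dat", os.O_RDONLY)
    chunk = os.pread(fd, 64, 128)        # up to 64 bytes starting at offset 128
    print(os.lseek(fd, 0, os.SEEK_CUR))  # still 0: the offset was not touched
    os.close(fd)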
PyDoc_STRVAR(posix_write__doc__,
"write(fd, string) -> byteswritten\n\n
@@ -5866,57 +6327,6 @@
}
#ifdef HAVE_SENDFILE
-#if defined(__FreeBSD__) || defined(__DragonFly__) || defined(__APPLE__)
-static Py_ssize_t
-iov_setup(struct iovec **iov, Py_buffer **buf, PyObject *seq, int cnt, int type)
-{
- int i, j;
- Py_ssize_t blen, total = 0;
- *iov = PyMem_New(struct iovec, cnt);
- if (*iov == NULL) {
-        PyErr_NoMemory();
-        return total;
-    }
-    *buf = PyMem_New(Py_buffer, cnt);
-    if (*buf == NULL) {
-        PyMem_Del(*iov);
-        PyErr_NoMemory();
-        return total;
-    }
-    for (i = 0; i < cnt; i++) {
-        if (PyObject_GetBuffer(PySequence_GetItem(seq, i),
-                &(*buf)[i], type) == -1) {
-            PyMem_Del(*iov);
-            for (j = 0; j < i; j++) {
-                PyBuffer_Release(&(*buf)[j]);
-            }
-            PyMem_Del(*buf);
-            total = 0;
-            return total;
-        }
-        (*iov)[i].iov_base = (*buf)[i].buf;
-        blen = (*buf)[i].len;
-        (*iov)[i].iov_len = blen;
-        total += blen;
-    }
-    return total;
-}
-
-static void
-iov_cleanup(struct iovec *iov, Py_buffer *buf, int cnt)
-{
-    int i;
-    PyMem_Del(iov);
-    for (i = 0; i < cnt; i++) {
-        PyBuffer_Release(&buf[i]);
-    }
-    PyMem_Del(buf);
-}
-#endif
PyDoc_STRVAR(posix_sendfile__doc__,
"sendfile(out, in, offset, nbytes) -> byteswritten\n
sendfile(out, in, offset, nbytes, headers=None, trailers=None, flags=0)\n
@@ -6150,6 +6560,73 @@
}
#endif /* HAVE_PIPE */
+#ifdef HAVE_WRITEV
+PyDoc_STRVAR(posix_writev__doc__,
+"writev(fd, buffers) -> byteswritten\n\n
+Write the contents of buffers to a file descriptor, where buffers is an\n
+arbitrary sequence of buffers.\n
+Returns the total bytes written.");
+
+static PyObject *
+posix_writev(PyObject *self, PyObject *args)
+{
+ int fd, cnt;
+ Py_ssize_t res;
+ PyObject *seq;
+ struct iovec *iov;
+ Py_buffer *buf;
+ if (!PyArg_ParseTuple(args, "iO:writev", &fd, &seq))
+ return NULL;
+ if (!PySequence_Check(seq)) {
+ PyErr_SetString(PyExc_TypeError,
+ "writev() arg 2 must be a sequence");
+ return NULL;
+ }
+ cnt = PySequence_Size(seq);
+
+ if (!iov_setup(&iov, &buf, seq, cnt, PyBUF_SIMPLE)) {
+ return NULL;
+ }
+
+ Py_BEGIN_ALLOW_THREADS
+ res = writev(fd, iov, cnt);
+ Py_END_ALLOW_THREADS
+
+ iov_cleanup(iov, buf, cnt);
+ return PyLong_FromSsize_t(res);
+}
+#endif
+
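A minimal sketch of os.writev() usage, assuming HAVE_WRITEV; the sequence of buffers is gathered into a single write:

    import os

    fd = os.open("spool.dat", os.O_WRONLY | os.O_CREAT)
    written = os.writev(fd, [b"header ", b"body ", b"trailer\n"])
    os.close(fd)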
+#ifdef HAVE_PWRITE
+PyDoc_STRVAR(posix_pwrite__doc__,
+"pwrite(fd, string, offset) -> byteswritten\n\n
+Write string to a file descriptor, fd, from offset, leaving the file\n
+offset unchanged.");
+
+static PyObject *
+posix_pwrite(PyObject *self, PyObject *args)
+{
+ Py_buffer pbuf;
+ int fd;
+ off_t offset;
+ Py_ssize_t size;
+
+ if (!PyArg_ParseTuple(args, "iyO&:pwrite", &fd, &pbuf, parse_off_t, &offset))
+ return NULL;
+
+    if (!_PyVerify_fd(fd)) {
+ PyBuffer_Release(&pbuf);
+ return posix_error();
+ }
+ Py_BEGIN_ALLOW_THREADS
+ size = pwrite(fd, pbuf.buf, (size_t)pbuf.len, offset);
+ Py_END_ALLOW_THREADS
+ PyBuffer_Release(&pbuf);
+ if (size < 0)
+ return posix_error();
+ return PyLong_FromSsize_t(size);
+}
+#endif
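A minimal sketch of os.pwrite() usage, assuming HAVE_PWRITE; like pread() it leaves the file offset unchanged:

    import os

    fd = os.open("spool.dat", os.O_RDWR | os.O_CREAT)
    os.pwrite(fd, b"patched", 32)        # write at byte 32 without seeking
    os.close(fd)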
#ifdef HAVE_MKFIFO
PyDoc_STRVAR(posix_mkfifo__doc__,
@@ -6266,18 +6743,8 @@
int fd;
off_t length;
int res;
-    PyObject *lenobj;
-
-    if (!PyArg_ParseTuple(args, "iO:ftruncate", &fd, &lenobj))
-        return NULL;
-
-#if !defined(HAVE_LARGEFILE_SUPPORT)
-    length = PyLong_AsLong(lenobj);
-#else
-    length = PyLong_Check(lenobj) ?
-        PyLong_AsLongLong(lenobj) : PyLong_AsLong(lenobj);
-#endif
-    if (PyErr_Occurred())
-        return NULL;
+    if (!PyArg_ParseTuple(args, "iO&:ftruncate", &fd, _parse_off_t, &length))
+        return NULL;
 
     Py_BEGIN_ALLOW_THREADS
@@ -6290,6 +6757,93 @@
 }
 #endif
+#ifdef HAVE_TRUNCATE
+PyDoc_STRVAR(posix_truncate__doc__,
+"truncate(path, length)\n\n
+Truncate the file given by path to length bytes.");
+
+static PyObject *
+posix_truncate(PyObject *self, PyObject *args)
+{
+    PyObject *opath;
+    const char *path;
+    off_t length;
+    int res;
+
+    if (!PyArg_ParseTuple(args, "O&O&:truncate",
+            PyUnicode_FSConverter, &opath, _parse_off_t, &length))
+        return NULL;
+    path = PyBytes_AsString(opath);
+
+    Py_BEGIN_ALLOW_THREADS
+    res = truncate(path, length);
+    Py_END_ALLOW_THREADS
+    Py_DECREF(opath);
+    if (res < 0)
+        return posix_error();
+    Py_RETURN_NONE;
+}
+#endif
+
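A minimal sketch of os.truncate() usage, assuming HAVE_TRUNCATE; the path is illustrative:

    import os

    os.truncate("spool.dat", 4096)       # cut (or zero-extend) the file to 4096 bytes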
+#ifdef HAVE_POSIX_FALLOCATE
+PyDoc_STRVAR(posix_posix_fallocate__doc__,
+"posix_fallocate(fd, offset, len)\n\n
+Ensures that enough disk space is allocated for the file specified by fd\n
+starting from offset and continuing for len bytes.");
+
+static PyObject *
+posix_posix_fallocate(PyObject *self, PyObject *args)
+{
+    off_t len, offset;
+    int res, fd;
+
+    if (!PyArg_ParseTuple(args, "iO&O&:posix_fallocate",
+            &fd, _parse_off_t, &offset, _parse_off_t, &len))
+        return NULL;
+
+    Py_BEGIN_ALLOW_THREADS
+    res = posix_fallocate(fd, offset, len);
+    Py_END_ALLOW_THREADS
+    if (res != 0) {
+        errno = res;
+        return posix_error();
+    }
+    Py_RETURN_NONE;
+}
+#endif
+
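A minimal sketch of os.posix_fallocate() usage, assuming HAVE_POSIX_FALLOCATE; reserving the blocks up front means later writes into that range cannot fail for lack of disk space:

    import os

    fd = os.open("spool.dat", os.O_RDWR | os.O_CREAT)
    os.posix_fallocate(fd, 0, 1024 * 1024)   # pre-allocate 1 MiB on disk
    os.close(fd)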
+#ifdef HAVE_POSIX_FADVISE
+PyDoc_STRVAR(posix_posix_fadvise__doc__,
+"posix_fadvise(fd, offset, len, advice)\n\n
+Announces an intention to access data in a specific pattern thus allowing\n
+the kernel to make optimizations.\n
+The advice applies to the region of the file specified by fd starting at\n
+offset and continuing for len bytes.\n
+advice is one of POSIX_FADV_NORMAL, POSIX_FADV_SEQUENTIAL,\n
+POSIX_FADV_RANDOM, POSIX_FADV_NOREUSE, POSIX_FADV_WILLNEED or\n
+POSIX_FADV_DONTNEED.");
+
+static PyObject *
+posix_posix_fadvise(PyObject *self, PyObject *args)
+{
+    off_t len, offset;
+    int res, fd, advice;
+
+    if (!PyArg_ParseTuple(args, "iO&O&i:posix_fadvise",
+            &fd, _parse_off_t, &offset, _parse_off_t, &len, &advice))
+        return NULL;
+
+    Py_BEGIN_ALLOW_THREADS
+    res = posix_fadvise(fd, offset, len, advice);
+    Py_END_ALLOW_THREADS
+    if (res != 0) {
+        errno = res;
+        return posix_error();
+    }
+    Py_RETURN_NONE;
+}
+#endif
+
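A minimal sketch of os.posix_fadvise() usage, assuming HAVE_POSIX_FADVISE and the POSIX_FADV_* constants exported further down; len=0 applies the advice through end of file:

    import os

    fd = os.open("spool.dat", os.O_RDONLY)
    os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_SEQUENTIAL)   # hint: one sequential scan
    data = os.read(fd, 1 << 20)
    os.close(fd)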
#ifdef HAVE_PUTENV
PyDoc_STRVAR(posix_putenv__doc__,
"putenv(key, value)\n\n
@@ -8725,6 +9279,15 @@
{"unlink", posix_unlink, METH_VARARGS, posix_unlink__doc__},
{"remove", posix_unlink, METH_VARARGS, posix_remove__doc__},
{"utime", posix_utime, METH_VARARGS, posix_utime__doc__},
+#ifdef HAVE_FUTIMES
+    {"futimes", posix_futimes, METH_VARARGS, posix_futimes__doc__},
+#endif
+#ifdef HAVE_LUTIMES
+    {"lutimes", posix_lutimes, METH_VARARGS, posix_lutimes__doc__},
+#endif
+#ifdef HAVE_FUTIMENS
+    {"futimens", posix_futimens, METH_VARARGS, posix_futimens__doc__},
+#endif
 #ifdef HAVE_TIMES
     {"times", posix_times, METH_NOARGS, posix_times__doc__},
 #endif /* HAVE_TIMES */
@@ -8733,6 +9296,9 @@
     {"execv", posix_execv, METH_VARARGS, posix_execv__doc__},
     {"execve", posix_execve, METH_VARARGS, posix_execve__doc__},
 #endif /* HAVE_EXECV */
+#ifdef HAVE_FEXECVE
+    {"fexecve", posix_fexecve, METH_VARARGS, posix_fexecve__doc__},
+#endif
 #ifdef HAVE_SPAWNV
     {"spawnv", posix_spawnv, METH_VARARGS, posix_spawnv__doc__},
     {"spawnve", posix_spawnve, METH_VARARGS, posix_spawnve__doc__},
@@ -8831,6 +9397,9 @@
 #ifdef HAVE_WAIT4
     {"wait4", posix_wait4, METH_VARARGS, posix_wait4__doc__},
 #endif /* HAVE_WAIT4 */
+#if defined(HAVE_WAITID) && !defined(__APPLE__)
+    {"waitid", posix_waitid, METH_VARARGS, posix_waitid__doc__},
+#endif
 #if defined(HAVE_WAITPID) || defined(HAVE_CWAIT)
     {"waitpid", posix_waitpid, METH_VARARGS, posix_waitpid__doc__},
 #endif /* HAVE_WAITPID */
@@ -8855,9 +9424,24 @@
     {"device_encoding", device_encoding, METH_VARARGS, device_encoding__doc__},
     {"dup", posix_dup, METH_VARARGS, posix_dup__doc__},
     {"dup2", posix_dup2, METH_VARARGS, posix_dup2__doc__},
+#ifdef HAVE_LOCKF
+    {"lockf", posix_lockf, METH_VARARGS, posix_lockf__doc__},
+#endif
     {"lseek", posix_lseek, METH_VARARGS, posix_lseek__doc__},
     {"read", posix_read, METH_VARARGS, posix_read__doc__},
+#ifdef HAVE_READV
+    {"readv", posix_readv, METH_VARARGS, posix_readv__doc__},
+#endif
+#ifdef HAVE_PREAD
+    {"pread", posix_pread, METH_VARARGS, posix_pread__doc__},
+#endif
     {"write", posix_write, METH_VARARGS, posix_write__doc__},
+#ifdef HAVE_WRITEV
+    {"writev", posix_writev, METH_VARARGS, posix_writev__doc__},
+#endif
+#ifdef HAVE_PWRITE
+    {"pwrite", posix_pwrite, METH_VARARGS, posix_pwrite__doc__},
+#endif
 #ifdef HAVE_SENDFILE
     {"sendfile", (PyCFunction)posix_sendfile, METH_VARARGS | METH_KEYWORDS,
                  posix_sendfile__doc__},
@@ -8881,6 +9465,15 @@
 #ifdef HAVE_FTRUNCATE
     {"ftruncate", posix_ftruncate, METH_VARARGS, posix_ftruncate__doc__},
 #endif
+#ifdef HAVE_TRUNCATE
+    {"truncate", posix_truncate, METH_VARARGS, posix_truncate__doc__},
+#endif
+#ifdef HAVE_POSIX_FALLOCATE
+    {"posix_fallocate", posix_posix_fallocate, METH_VARARGS, posix_posix_fallocate__doc__},
+#endif
+#ifdef HAVE_POSIX_FADVISE
+    {"posix_fadvise", posix_posix_fadvise, METH_VARARGS, posix_posix_fadvise__doc__},
+#endif
 #ifdef HAVE_PUTENV
     {"putenv", posix_putenv, METH_VARARGS, posix_putenv__doc__},
 #endif
@@ -8894,6 +9487,9 @@
 #ifdef HAVE_FSYNC
     {"fsync", posix_fsync, METH_O, posix_fsync__doc__},
 #endif
+#ifdef HAVE_SYNC
+    {"sync", posix_sync, METH_NOARGS, posix_sync__doc__},
+#endif
 #ifdef HAVE_FDATASYNC
     {"fdatasync", posix_fdatasync, METH_O, posix_fdatasync__doc__},
 #endif
@@ -9342,6 +9938,76 @@
     if (ins(d, "SF_SYNC", (long)SF_SYNC)) return -1;
 #endif
+    /* constants for posix_fadvise */
+#ifdef POSIX_FADV_NORMAL
+    if (ins(d, "POSIX_FADV_NORMAL", (long)POSIX_FADV_NORMAL)) return -1;
+#endif
+#ifdef POSIX_FADV_SEQUENTIAL
+    if (ins(d, "POSIX_FADV_SEQUENTIAL", (long)POSIX_FADV_SEQUENTIAL)) return -1;
+#endif
+#ifdef POSIX_FADV_RANDOM
+    if (ins(d, "POSIX_FADV_RANDOM", (long)POSIX_FADV_RANDOM)) return -1;
+#endif
+#ifdef POSIX_FADV_NOREUSE
+    if (ins(d, "POSIX_FADV_NOREUSE", (long)POSIX_FADV_NOREUSE)) return -1;
+#endif
+#ifdef POSIX_FADV_WILLNEED
+    if (ins(d, "POSIX_FADV_WILLNEED", (long)POSIX_FADV_WILLNEED)) return -1;
+#endif
+#ifdef POSIX_FADV_DONTNEED
+    if (ins(d, "POSIX_FADV_DONTNEED", (long)POSIX_FADV_DONTNEED)) return -1;
+#endif
+
+    /* constants for waitid */
+#if defined(HAVE_SYS_WAIT_H) && defined(HAVE_WAITID)
+    if (ins(d, "P_PID", (long)P_PID)) return -1;
+    if (ins(d, "P_PGID", (long)P_PGID)) return -1;
+    if (ins(d, "P_ALL", (long)P_ALL)) return -1;
+#endif
+#ifdef WEXITED
+    if (ins(d, "WEXITED", (long)WEXITED)) return -1;
+#endif
+#ifdef WNOWAIT
+    if (ins(d, "WNOWAIT", (long)WNOWAIT)) return -1;
+#endif
+#ifdef WSTOPPED
+    if (ins(d, "WSTOPPED", (long)WSTOPPED)) return -1;
+#endif
+#ifdef CLD_EXITED
+    if (ins(d, "CLD_EXITED", (long)CLD_EXITED)) return -1;
+#endif
+#ifdef CLD_DUMPED
+    if (ins(d, "CLD_DUMPED", (long)CLD_DUMPED)) return -1;
+#endif
+#ifdef CLD_TRAPPED
+    if (ins(d, "CLD_TRAPPED", (long)CLD_TRAPPED)) return -1;
+#endif
+#ifdef CLD_CONTINUED
+    if (ins(d, "CLD_CONTINUED", (long)CLD_CONTINUED)) return -1;
+#endif
+
+    /* constants for lockf */
+#ifdef F_LOCK
+    if (ins(d, "F_LOCK", (long)F_LOCK)) return -1;
+#endif
+#ifdef F_TLOCK
+    if (ins(d, "F_TLOCK", (long)F_TLOCK)) return -1;
+#endif
+#ifdef F_ULOCK
+    if (ins(d, "F_ULOCK", (long)F_ULOCK)) return -1;
+#endif
+#ifdef F_TEST
+    if (ins(d, "F_TEST", (long)F_TEST)) return -1;
+#endif
+
+    /* constants for futimens */
+#ifdef UTIME_NOW
+    if (ins(d, "UTIME_NOW", (long)UTIME_NOW)) return -1;
+#endif
+#ifdef UTIME_OMIT
+    if (ins(d, "UTIME_OMIT", (long)UTIME_OMIT)) return -1;
+#endif
+
 #ifdef HAVE_SPAWNV
 #if defined(PYOS_OS2) && defined(PYCC_GCC)
     if (ins(d, "P_WAIT", (long)P_WAIT)) return -1;
@@ -9441,6 +10107,11 @@
 #endif
if (!initialized) {
+#if defined(HAVE_WAITID) && !defined(__APPLE__)
+        waitid_result_desc.name = MODNAME ".waitid_result";
+        PyStructSequence_InitType(&WaitidResultType, &waitid_result_desc);
+#endif
+
         stat_result_desc.name = MODNAME ".stat_result";
         stat_result_desc.fields[7].name = PyStructSequence_UnnamedField;
         stat_result_desc.fields[8].name = PyStructSequence_UnnamedField;
@@ -9461,6 +10132,10 @@
 #endif
 #endif
     }
+#if defined(HAVE_WAITID) && !defined(__APPLE__)
+    Py_INCREF((PyObject*) &WaitidResultType);
+    PyModule_AddObject(m, "waitid_result", (PyObject*) &WaitidResultType);
+#endif
     Py_INCREF((PyObject*) &StatResultType);
     PyModule_AddObject(m, "stat_result", (PyObject*) &StatResultType);
     Py_INCREF((PyObject*) &StatVFSResultType);
diff --git a/Modules/socketmodule.c b/Modules/socketmodule.c
--- a/Modules/socketmodule.c
+++ b/Modules/socketmodule.c
@@ -2242,7 +2242,7 @@
  * This is the guts of the recv() and recv_into() methods, which reads into a
  * char buffer.  If you have any inc/dec ref to do to the objects that contain
  * the buffer, do it in the caller.  This function returns the number of bytes
- * succesfully read.  If there was an error, it returns -1.  Note that it is
+ * successfully read.  If there was an error, it returns -1.  Note that it is
  * also possible that we return a number of bytes smaller than the request
  * bytes. */
@@ -2446,7 +2446,7 @@
  * This is the guts of the recvfrom() and recvfrom_into() methods, which reads
  * into a char buffer.  If you have any inc/def ref to do to the objects that
  * contain the buffer, do it in the caller.  This function returns the number
- * of bytes succesfully read.  If there was an error, it returns -1.  Note
+ * of bytes successfully read.  If there was an error, it returns -1.  Note
  * that it is also possible that we return a number of bytes smaller than the
  * request bytes.
@@ -2541,9 +2541,9 @@
     if (outlen != recvlen) {
         /* We did not read as many bytes as we anticipated, resize the
-           string if possible and be succesful. */
+           string if possible and be successful. */
         if (_PyBytes_Resize(&buf, outlen) < 0)
-            /* Oopsy, not so succesful after all. */
+            /* Oopsy, not so successful after all. */
             goto finally;
     }
@@ -2747,17 +2747,28 @@
     Py_buffer pbuf;
     PyObject *addro;
     char *buf;
-    Py_ssize_t len;
+    Py_ssize_t len, arglen;
     sock_addr_t addrbuf;
     int addrlen, n = -1, flags, timeout;
 
     flags = 0;
-    if (!PyArg_ParseTuple(args, "y*O:sendto", &pbuf, &addro)) {
-        PyErr_Clear();
-        if (!PyArg_ParseTuple(args, "y*iO:sendto",
-                              &pbuf, &flags, &addro))
-            return NULL;
-    }
+    arglen = PyTuple_Size(args);
+    switch (arglen) {
+        case 2:
+            PyArg_ParseTuple(args, "y*O:sendto", &pbuf, &addro);
+            break;
+        case 3:
+            PyArg_ParseTuple(args, "y*iO:sendto",
+                             &pbuf, &flags, &addro);
+            break;
+        default:
+            PyErr_Format(PyExc_TypeError,
+                         "sendto() takes 2 or 3 arguments (%d given)",
+                         arglen);
+    }
+    if (PyErr_Occurred())
+        return NULL;
 
     buf = pbuf.buf;
     len = pbuf.len;
@@ -4372,7 +4383,7 @@
return 0; /* Failure */
#else
-    /* No need to initialise sockets with GCC/EMX */
+    /* No need to initialize sockets with GCC/EMX */
     return 1; /* Success */
 #endif
 }
@@ -4406,7 +4417,7 @@
    "socket.py" which implements some additional functionality.
    The import of "_socket" may fail with an ImportError exception if
    os-specific initialization fails.  On Windows, this does WINSOCK
-   initialization.  When WINSOCK is initialized succesfully, a call to
+   initialization.  When WINSOCK is initialized successfully, a call to
    WSACleanup() is scheduled to be made at exit time.  */
diff --git a/Modules/timemodule.c b/Modules/timemodule.c
--- a/Modules/timemodule.c
+++ b/Modules/timemodule.c
@@ -697,7 +697,7 @@
     buf.tm_wday = -1;  /* sentinel; original value ignored */
     tt = mktime(&buf);
     /* Return value of -1 does not necessarily mean an error, but tm_wday
-     * cannot remain set to -1 if mktime succedded. */
+     * cannot remain set to -1 if mktime succeeded. */
     if (tt == (time_t)(-1) && buf.tm_wday == -1) {
         PyErr_SetString(PyExc_OverflowError,
                         "mktime argument out of range");
diff --git a/Modules/zipimport.c b/Modules/zipimport.c
--- a/Modules/zipimport.c
+++ b/Modules/zipimport.c
@@ -1120,7 +1120,7 @@
 }
 
 /* Given a path to a .pyc or .pyo file in the archive, return the
-   modifictaion time of the matching .py file, or 0 if no source
+   modification time of the matching .py file, or 0 if no source
    is available. */
 static time_t
 get_mtime_of_source(ZipImporter *self, char *path)
diff --git a/Objects/dictobject.c b/Objects/dictobject.c
--- a/Objects/dictobject.c
+++ b/Objects/dictobject.c
@@ -2080,7 +2080,7 @@
     assert(d->ma_table == NULL && d->ma_fill == 0 && d->ma_used == 0);
     INIT_NONZERO_DICT_SLOTS(d);
     d->ma_lookup = lookdict_unicode;
-    /* The object has been implicitely tracked by tp_alloc */
+    /* The object has been implicitly tracked by tp_alloc */
     if (type == &PyDict_Type)
         _PyObject_GC_UNTRACK(d);
 #ifdef SHOW_CONVERSION_COUNTS
diff --git a/Objects/listobject.c b/Objects/listobject.c
--- a/Objects/listobject.c
+++ b/Objects/listobject.c
@@ -11,7 +11,7 @@
 /* Ensure ob_item has room for at least newsize elements, and set
  * ob_size to newsize.  If newsize > ob_size on entry, the content
  * of the new slots at exit is undefined heap trash; it's the caller's
- * responsiblity to overwrite them with sane values.
+ * responsibility to overwrite them with sane values.
  * The number of allocated elements may grow, shrink, or stay the same.
  * Failure is impossible if newsize <= self.allocated on entry, although
  * that partly relies on an assumption that the system realloc() never
diff --git a/Objects/longobject.c b/Objects/longobject.c
--- a/Objects/longobject.c
+++ b/Objects/longobject.c
@@ -709,7 +709,7 @@
     is_signed = pendbyte >= 0x80;
 
     /* Compute numsignificantbytes.  This consists of finding the most
-       significant byte.  Leading 0 bytes are insignficant if the number
+       significant byte.  Leading 0 bytes are insignificant if the number
        is positive, and leading 0xff bytes if negative. */
     {
         size_t i;
diff --git a/Objects/typeobject.c b/Objects/typeobject.c
--- a/Objects/typeobject.c
+++ b/Objects/typeobject.c
@@ -1008,7 +1008,7 @@
        self has a refcount of 0, and if gc ever gets its hands on it
        (which can happen if any weakref callback gets invoked), it
        looks like trash to gc too, and gc also tries to delete self
-       then.  But we're already deleting self.  Double dealloction is
+       then.  But we're already deleting self.  Double deallocation is
        a subtle disaster.
 
        Q. Why the bizarre (net-zero) manipulation of
@@ -5955,7 +5955,7 @@
    slots compete for the same descriptor (for example both sq_item and
    mp_subscript generate a __getitem__ descriptor).
 
-   In the latter case, the first slotdef entry encoutered wins.  Since
+   In the latter case, the first slotdef entry encountered wins.  Since
    slotdef entries are sorted by the offset of the slot in the
    PyHeapTypeObject, this gives us some control over disambiguating
    between competing slots: the members of PyHeapTypeObject are listed
diff --git a/PC/bdist_wininst/extract.c b/PC/bdist_wininst/extract.c
--- a/PC/bdist_wininst/extract.c
+++ b/PC/bdist_wininst/extract.c
@@ -54,7 +54,7 @@
     return TRUE;
 }
 
-/* XXX Should better explicitely specify
+/* XXX Should better explicitly specify
    uncomp_size and file_times instead of pfhdr! */
 char *map_new_file(DWORD flags, char *filename,
@@ -164,7 +164,7 @@
     zstream.avail_out = uncomp_size;
 
     /* Apparently an undocumented feature of zlib: Set windowsize
-       to negative values to supress the gzip header and be compatible with
+       to negative values to suppress the gzip header and be compatible with
        zip! */
     result = TRUE;
     if (Z_OK != (x = inflateInit2(&zstream, -15))) {
diff --git a/PC/bdist_wininst/install.c b/PC/bdist_wininst/install.c
--- a/PC/bdist_wininst/install.c
+++ b/PC/bdist_wininst/install.c
@@ -148,7 +148,7 @@
    the permissions of the current user. */
 HKEY hkey_root = (HKEY)-1;
-BOOL success;               /* Installation successfull? */
+BOOL success;               /* Installation successful? */
 char *failure_reason = NULL;
 
 HANDLE hBitmap;
@@ -797,7 +797,7 @@
     tempname = tempnam(NULL, NULL);
 
     // We use a static CRT while the Python version we load uses
-    // the CRT from one of various possibile DLLs. As a result we
+    // the CRT from one of various possible DLLs. As a result we
     // need to redirect the standard handles using the API rather
     // than the CRT.
     redirected = CreateFile(
diff --git a/PC/os2emx/dlfcn.c b/PC/os2emx/dlfcn.c
--- a/PC/os2emx/dlfcn.c
+++ b/PC/os2emx/dlfcn.c
@@ -188,7 +188,7 @@
     return NULL;
 }
 
-/* free dynamicaly-linked library */
+/* free dynamically-linked library */
 int dlclose(void *handle)
 {
     int rc;
diff --git a/PC/os2emx/dlfcn.h b/PC/os2emx/dlfcn.h
--- a/PC/os2emx/dlfcn.h
+++ b/PC/os2emx/dlfcn.h
@@ -42,7 +42,7 @@
 /* return a pointer to the `symbol' in DLL */
 void *dlsym(void *handle, char *symbol);
 
-/* free dynamicaly-linked library */
+/* free dynamically-linked library */
 int dlclose(void *handle);
 /* return a string describing last occurred dl error */
diff --git a/Python/ceval.c b/Python/ceval.c
--- a/Python/ceval.c
+++ b/Python/ceval.c
@@ -26,7 +26,7 @@
 
 typedef unsigned long long uint64;
 
-/* PowerPC suppport.
+/* PowerPC support.
    "ppc" appears to be the preprocessor definition to detect on OS X, whereas
    "powerpc" appears to be the correct one for Linux with GCC
 */
@@ -1266,7 +1266,7 @@
         if (_Py_atomic_load_relaxed(&eval_breaker)) {
             if (*next_instr == SETUP_FINALLY) {
                 /* Make the last opcode before
-                   a try: finally: block uninterruptable. */
+                   a try: finally: block uninterruptible. */
                 goto fast_next_opcode;
             }
             tstate->tick_counter++;
diff --git a/Python/pystate.c b/Python/pystate.c
--- a/Python/pystate.c
+++ b/Python/pystate.c
@@ -512,7 +512,7 @@
     /* for i in all interpreters:
      *     for t in all of i's thread states:
      *          if t's frame isn't NULL, map t's id to its frame
-     * Because these lists can mutute even when the GIL is held, we
+     * Because these lists can mutate even when the GIL is held, we
      * need to grab head_mutex for the duration.
      */
     HEAD_LOCK();
diff --git a/Python/thread.c b/Python/thread.c
--- a/Python/thread.c
+++ b/Python/thread.c
@@ -40,7 +40,7 @@
#endif
 /* Check if we're running on HP-UX and _SC_THREADS is defined. If so, then
-   enough of the Posix threads package is implimented to support python
+   enough of the Posix threads package is implemented to support python
    threads.
 
    This is valid for HP-UX 11.23 running on an ia64 system. If needed, add
diff --git a/Tools/freeze/checkextensions_win32.py b/Tools/freeze/checkextensions_win32.py
--- a/Tools/freeze/checkextensions_win32.py
+++ b/Tools/freeze/checkextensions_win32.py
@@ -7,7 +7,7 @@
 we get it just right, a specific freeze application may have specific
 compiler options anyway (eg, to enable or disable specific functionality)
 
-So my basic stragtegy is:
+So my basic strategy is:
 
 * Have some Windows INI files which "describe" one or more extension
   modules. (Freeze comes with a default one for all known modules - but
   you can specify
diff --git a/Tools/scripts/fixcid.py b/Tools/scripts/fixcid.py
--- a/Tools/scripts/fixcid.py
+++ b/Tools/scripts/fixcid.py
@@ -188,7 +188,7 @@
     except os.error as msg:
         err(filename + ': rename failed (' + str(msg) + ')\n')
         return 1
-    # Return succes
+    # Return success
     return 0
 
 # Tokenizing ANSI C (partly)
diff --git a/Tools/unicode/makeunicodedata.py b/Tools/unicode/makeunicodedata.py
--- a/Tools/unicode/makeunicodedata.py
+++ b/Tools/unicode/makeunicodedata.py
@@ -1002,7 +1002,7 @@
                 poly = size + poly
                 break
         else:
-            raise AssertionError("ran out of polynominals")
+            raise AssertionError("ran out of polynomials")
 
         print(size, "slots in hash table")
diff --git a/configure b/configure
--- a/configure
+++ b/configure
@@ -776,8 +776,7 @@
 LDFLAGS
 LIBS
 CPPFLAGS
-CPP
-CPPFLAGS'
+CPP'
 
 # Initialize some variables set by options.
@@ -9251,19 +9250,21 @@
 # checks for library functions
 for ac_func in alarm accept4 setitimer getitimer bind_textdomain_codeset chown \
  clock confstr ctermid execv faccessat fchmod fchmodat fchown fchownat \
- fdopendir fork fpathconf fstatat ftime ftruncate futimesat \
+ fexecve fdopendir fork fpathconf fstatat ftime ftruncate futimesat \
+ futimens futimes \
  gai_strerror getgroups getlogin getloadavg getpeername getpgid getpid \
  getpriority getresuid getresgid getpwent getspnam getspent getsid getwd \
- initgroups kill killpg lchmod lchown linkat lstat mbrtowc mkdirat mkfifo \
+ initgroups kill killpg lchmod lchown lockf linkat lstat lutimes mbrtowc mkdirat mkfifo \
  mkfifoat mknod mknodat mktime mremap nice openat pathconf pause plock poll \
- pthread_init putenv readlink readlinkat realpath renameat \
+ posix_fallocate posix_fadvise pread \
+ pthread_init putenv pwrite readlink readlinkat readv realpath renameat \
  select sem_open sem_timedwait sem_getvalue sem_unlink sendfile setegid seteuid \
  setgid sethostname \
  setlocale setregid setreuid setresuid setresgid setsid setpgid setpgrp setpriority setuid setvbuf \
- sigaction siginterrupt sigrelse snprintf strftime strlcpy symlinkat \
+ sigaction siginterrupt sigrelse snprintf strftime strlcpy symlinkat sync \
  sysconf tcgetpgrp tcsetpgrp tempnam timegm times tmpfile tmpnam tmpnam_r \
- truncate uname unlinkat unsetenv utimensat utimes waitpid wait3 wait4 \
- wcscoll wcsftime wcsxfrm _getpty
+ truncate uname unlinkat unsetenv utimensat utimes waitid waitpid wait3 wait4 \
+ wcscoll wcsftime wcsxfrm writev _getpty
 do :
   as_ac_var=`$as_echo "ac_cv_func_$ac_func" | $as_tr_sh`
 ac_fn_c_check_func "$LINENO" "$ac_func" "$as_ac_var"
diff --git a/configure.in b/configure.in
--- a/configure.in
+++ b/configure.in
@@ -2496,19 +2496,21 @@
 # checks for library functions
 AC_CHECK_FUNCS(alarm accept4 setitimer getitimer bind_textdomain_codeset chown \
  clock confstr ctermid execv faccessat fchmod fchmodat fchown fchownat \
- fdopendir fork fpathconf fstatat ftime ftruncate futimesat \
+ fexecve fdopendir fork fpathconf fstatat ftime ftruncate futimesat \
+ futimens futimes \
  gai_strerror getgroups getlogin getloadavg getpeername getpgid getpid \
  getpriority getresuid getresgid getpwent getspnam getspent getsid getwd \
- initgroups kill killpg lchmod lchown linkat lstat mbrtowc mkdirat mkfifo \
+ initgroups kill killpg lchmod lchown lockf linkat lstat lutimes mbrtowc mkdirat mkfifo \
  mkfifoat mknod mknodat mktime mremap nice openat pathconf pause plock poll \
- pthread_init putenv readlink readlinkat realpath renameat \
+ posix_fallocate posix_fadvise pread \
+ pthread_init putenv pwrite readlink readlinkat readv realpath renameat \
  select sem_open sem_timedwait sem_getvalue sem_unlink sendfile setegid seteuid \
  setgid sethostname \
  setlocale setregid setreuid setresuid setresgid setsid setpgid setpgrp setpriority setuid setvbuf \
- sigaction siginterrupt sigrelse snprintf strftime strlcpy symlinkat \
+ sigaction siginterrupt sigrelse snprintf strftime strlcpy symlinkat sync \
  sysconf tcgetpgrp tcsetpgrp tempnam timegm times tmpfile tmpnam tmpnam_r \
- truncate uname unlinkat unsetenv utimensat utimes waitpid wait3 wait4 \
- wcscoll wcsftime wcsxfrm _getpty)
+ truncate uname unlinkat unsetenv utimensat utimes waitid waitpid wait3 wait4 \
+ wcscoll wcsftime wcsxfrm writev _getpty)
 
 # For some functions, having a definition is not sufficient, since
 # we want to take their address.
diff --git a/pyconfig.h.in b/pyconfig.h.in
--- a/pyconfig.h.in
+++ b/pyconfig.h.in
@@ -232,6 +232,9 @@
 /* Define to 1 if you have the `fdopendir' function. */
 #undef HAVE_FDOPENDIR
 
+/* Define to 1 if you have the `fexecve' function. */
+#undef HAVE_FEXECVE
+
 /* Define to 1 if you have the `finite' function. */
 #undef HAVE_FINITE
 
@@ -274,6 +277,12 @@
 /* Define to 1 if you have the `ftruncate' function. */
 #undef HAVE_FTRUNCATE
 
+/* Define to 1 if you have the `futimens' function. */
+#undef HAVE_FUTIMENS
+
+/* Define to 1 if you have the `futimes' function. */
+#undef HAVE_FUTIMES
+
 /* Define to 1 if you have the `futimesat' function. */
 #undef HAVE_FUTIMESAT
 
@@ -461,6 +470,9 @@
 /* Define to 1 if you have the <linux/tipc.h> header file. */
 #undef HAVE_LINUX_TIPC_H
 
+/* Define to 1 if you have the `lockf' function. */
+#undef HAVE_LOCKF
+
 /* Define to 1 if you have the `log1p' function. */
 #undef HAVE_LOG1P
 
@@ -473,6 +485,9 @@
 /* Define to 1 if you have the `lstat' function. */
 #undef HAVE_LSTAT
 
+/* Define to 1 if you have the `lutimes' function. */
+#undef HAVE_LUTIMES
+
 /* Define this if you have the makedev macro. */
 #undef HAVE_MAKEDEV
 
@@ -545,6 +560,15 @@
 /* Define to 1 if you have the <poll.h> header file. */
 #undef HAVE_POLL_H
 
+/* Define to 1 if you have the `posix_fadvise' function. */
+#undef HAVE_POSIX_FADVISE
+
+/* Define to 1 if you have the `posix_fallocate' function. */
+#undef HAVE_POSIX_FALLOCATE
+
+/* Define to 1 if you have the `pread' function. */
+#undef HAVE_PREAD
+
 /* Define to 1 if you have the <process.h> header file. */
 #undef HAVE_PROCESS_H
 
@@ -569,12 +593,18 @@
 /* Define to 1 if you have the `putenv' function. */
 #undef HAVE_PUTENV
 
+/* Define to 1 if you have the `pwrite' function. */
+#undef HAVE_PWRITE
+
 /* Define to 1 if you have the `readlink' function. */
 #undef HAVE_READLINK
 
 /* Define to 1 if you have the `readlinkat' function. */
 #undef HAVE_READLINKAT
 
+/* Define to 1 if you have the `readv' function. */
+#undef HAVE_READV
+
 /* Define to 1 if you have the `realpath' function. */
 #undef HAVE_REALPATH
 
@@ -775,6 +805,9 @@
 /* Define to 1 if you have the `symlinkat' function. */
 #undef HAVE_SYMLINKAT
 
+/* Define to 1 if you have the `sync' function. */
+#undef HAVE_SYNC
+
 /* Define to 1 if you have the `sysconf' function. */
 #undef HAVE_SYSCONF
 
@@ -952,6 +985,9 @@
 /* Define to 1 if you have the `wait4' function. */
 #undef HAVE_WAIT4
 
+/* Define to 1 if you have the `waitid' function. */
+#undef HAVE_WAITID
+
 /* Define to 1 if you have the `waitpid' function. */
 #undef HAVE_WAITPID
 
@@ -971,6 +1007,9 @@
    */
 #undef HAVE_WORKING_TZSET
 
+/* Define to 1 if you have the `writev' function. */
+#undef HAVE_WRITEV
+
 /* Define if the zlib library has inflateCopy */
 #undef HAVE_ZLIB_COPY
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -1017,8 +1017,8 @@
         if sys.platform == 'darwin':
             # In every directory on the search path search for a dynamic
             # library and then a static library, instead of first looking
-            # for dynamic libraries on the entiry path.
-            # This way a staticly linked custom sqlite gets picked up
+            # for dynamic libraries on the entire path.
+            # This way a statically linked custom sqlite gets picked up
             # before the dynamic library in /usr/lib.
             sqlite_extra_link_args = ('-Wl,-search_paths_first',)
         else:
-- Repository URL: http://hg.python.org/cpython