UPDATE: I have created a PyPI package for Python 3 and tornado 4.0+ that you can install with pip: https://pypi.python.org/pypi/tornadostreamform
I know an answer has already been accepted, but I had the same problem, and I can offer a complete module - let's call it post_streamer - for Python 3 that parses any request's stream into parts without using too much memory.
#!/usr/bin/env python3
"""Post data streamer for tornadoweb 4.0"""
import os
import re
import tempfile


class SizeLimitError(Exception):
    pass


class PostDataStreamer:
    """Parse a stream of multipart/form-data.

    Useful for request handlers decorated with tornado.web.stream_request_body"""
    SEP = b"\r\n"
    LSEP = len(SEP)
    PAT_HEADERVALUE = re.compile(r"""([^:]+):\s+([^\s;]+)(.*)""")
    PAT_HEADERPARAMS = re.compile(r""";\s*([^=]+)=\"(.*?)\"(.*)""")

    # Encoding for the header values. Only header names and parameters
    # will be decoded. Streamed data will remain binary.
    # This is required because multipart/form-data headers cannot
    # be parsed without a valid encoding.
    header_encoding = "UTF-8"

    def __init__(self, total, tmpdir=None):
        self.buf = b""
        self.dlen = None
        self.delimiter = None
        self.in_data = False
        self.headers = []
        self.parts = []
        self.total = total
        self.received = 0
        self.tmpdir = tmpdir

    def _get_raw_header(self, data):
        idx = data.find(self.SEP)
        if idx >= 0:
            return (data[:idx], data[idx + self.LSEP:])
        else:
            return (None, data)

    def receive(self, chunk):
        self.received += len(chunk)
        self.on_progress()
        self.buf += chunk
        if not self.delimiter:
            self.delimiter, self.buf = self._get_raw_header(self.buf)
            if self.delimiter:
                self.delimiter += self.SEP
                self.dlen = len(self.delimiter)
            elif len(self.buf) > 1000:
                raise Exception("Cannot find multipart delimiter")
            else:
                return
        while True:
            if self.in_data:
                if len(self.buf) > 3 * self.dlen:
                    idx = self.buf.find(self.SEP + self.delimiter)
                    if idx >= 0:
                        self.feed_part(self.buf[:idx])
                        self.end_part()
                        self.buf = self.buf[idx + len(self.SEP + self.delimiter):]
                        self.in_data = False
                    else:
                        limit = len(self.buf) - 2 * self.dlen
                        self.feed_part(self.buf[:limit])
                        self.buf = self.buf[limit:]
                        return
                else:
                    return
            if not self.in_data:
                while True:
                    header, self.buf = self._get_raw_header(self.buf)
                    if header == b"":
                        assert self.delimiter
                        self.in_data = True
                        self.begin_part(self.headers)
                        self.headers = []
                        break
                    elif header:
                        self.headers.append(self.parse_header(header))
                    else:
                        # Header is None: not enough data yet.
                        return

    def parse_header(self, header):
        header = header.decode(self.header_encoding)
        res = self.PAT_HEADERVALUE.match(header)
        if res:
            name, value, tail = res.groups()
            params = {}
            hdr = {"name": name, "value": value, "params": params}
            while True:
                res = self.PAT_HEADERPARAMS.match(tail)
                if not res:
                    break
                fname, fvalue, tail = res.groups()
                params[fname] = fvalue
            return hdr
        else:
            return {"value": header}

    def begin_part(self, headers):
        """Internal method called when a new part is started."""
        self.fout = tempfile.NamedTemporaryFile(dir=self.tmpdir, delete=False)
        self.part = {
            "headers": headers,
            "size": 0,
            "tmpfile": self.fout,
        }
        self.parts.append(self.part)

    def feed_part(self, data):
        """Internal method called when content is added to the current part."""
        self.fout.write(data)
        self.part["size"] += len(data)

    def end_part(self):
        """Internal method called when receiving the current part has finished."""
        # Do not close the file here, so that it can be read back later.
        pass

    def finish_receive(self):
        """Call this after the last receive() call.
        You MUST call this before using the parts."""
        if self.in_data:
            idx = self.buf.rfind(self.SEP + self.delimiter[:-2])
            if idx > 0:
                self.feed_part(self.buf[:idx])
            self.end_part()

    def release_parts(self):
        """Call this to remove the temporary files."""
        for part in self.parts:
            part["tmpfile"].close()
            os.unlink(part["tmpfile"].name)

    def get_part_payload(self, part):
        """Return the contents of a part.
        Warning: do not use this for big files!"""
        fsource = part["tmpfile"]
        fsource.seek(0)
        return fsource.read()

    def get_part_ct_params(self, part):
        """Get content-disposition parameters.
        If there is no content-disposition header, return an empty dict."""
        for header in part["headers"]:
            if header.get("name", "").lower().strip() == "content-disposition":
                return header.get("params", {})
        return {}

    def get_part_ct_param(self, part, pname, defval=None):
        """Get a content-disposition parameter of a part.
        @param part: The part
        @param pname: Name of the parameter, case insensitive
        @param defval: Value to return when not found.
        """
        ct_params = self.get_part_ct_params(part)
        for name in ct_params:
            if name.lower().strip() == pname:
                return ct_params[name]
        return defval

    def get_part_name(self, part):
        """Get the name of a part.
        When not given, returns None."""
        return self.get_part_ct_param(part, "name", None)

    def get_parts_by_name(self, pname):
        """Get parts by name.
        @param pname: Name of the part. This is case sensitive!
        Attention! A form may have posted multiple values for the same
        name, so the return value of this method is a list of parts!"""
        res = []
        for part in self.parts:
            if self.get_part_name(part) == pname:
                res.append(part)
        return res

    def get_values(self, fnames, size_limit=10 * 1024):
        """Return a dictionary of values for the given field names.
        @param fnames: A list of field names.
        @param size_limit: Maximum size of the value of a single field.
            If a field's size exceeds this, SizeLimitError is raised.
        Warning: do not use this for big file values.
        Warning: a form may have posted multiple values for a field name.
        This method returns the first available value for that name.
        To get all values, use the get_parts_by_name method.
        Tip: use get_nonfile_names() to get a list of field names
        that are not originally files.
        """
        res = {}
        for fname in fnames:
            parts = self.get_parts_by_name(fname)
            if not parts:
                raise KeyError("No such field: %s" % fname)
            size = parts[0]["size"]
            if size > size_limit:
                raise SizeLimitError("Part size=%s > limit=%s" % (size, size_limit))
            res[fname] = self.get_part_payload(parts[0])
        return res

    def get_nonfile_names(self):
        """Get a list of part names that are not originally files.
        It examines the filename attribute of the content-disposition header.
        Be aware that these fields may still be huge in size."""
        res = []
        for part in self.parts:
            filename = self.get_part_ct_param(part, "filename", None)
            if filename is None:
                name = self.get_part_name(part)
                if name:
                    res.append(name)
        return res

    def examine(self):
        """Debugging method for examining received data."""
        print("============= structure =============")
        for idx, part in enumerate(self.parts):
            print("PART #", idx)
            print("    HEADERS")
            for header in part["headers"]:
                print("        ", repr(header.get("name", "")), "=",
                      repr(header.get("value", "")))
                params = header.get("params", None)
                if params:
                    for pname in params:
                        print("            ", repr(pname), "=",
                              repr(params[pname]))
            print("    DATA")
            print("        SIZE", part["size"])
            print("        LOCATION", part["tmpfile"].name)
            if part["size"] < 80:
                print("        PAYLOAD:", repr(self.get_part_payload(part)))
            else:
                print("        PAYLOAD:", "<too long...>")
        print("========== non-file values ==========")
        print(self.get_values(self.get_nonfile_names()))

    def on_progress(self):
        """Override this to handle progress of receiving data."""
        pass  # Received <self.received> of <self.total>
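To see what parse_header() produces, here is a standalone sketch that replays the class's two regexes on a typical Content-Disposition line (the sample line is my own, not part of the original module):

```python
import re

# The same patterns PostDataStreamer uses to split a multipart header line.
PAT_HEADERVALUE = re.compile(r"""([^:]+):\s+([^\s;]+)(.*)""")
PAT_HEADERPARAMS = re.compile(r""";\s*([^=]+)=\"(.*?)\"(.*)""")

line = 'Content-Disposition: form-data; name="file1"; filename="a.txt"'

# First split into header name, main value, and the parameter tail...
name, value, tail = PAT_HEADERVALUE.match(line).groups()

# ...then peel the quoted parameters off the tail one by one.
params = {}
while True:
    m = PAT_HEADERPARAMS.match(tail)
    if not m:
        break
    pname, pvalue, tail = m.groups()
    params[pname] = pvalue

print(name, value, params)
# → Content-Disposition form-data {'name': 'file1', 'filename': 'a.txt'}
```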
It could be more efficient, but it is portable and does not load big content into memory. Here is how to use it. I have tested it with tornado web 4.0 (and with Firefox and pycurl as clients). After finish_receive() has been called, you can access the content-type headers of the parts with PostDataStreamer.get_part_ct_param(part, ...).
#!/usr/bin/env python3
from tornado.ioloop import IOLoop
from tornado.web import RequestHandler, Application, url, stream_request_body
from tornado.httpserver import HTTPServer
from post_streamer import PostDataStreamer
import sys


class MyPostDataStreamer(PostDataStreamer):
    percent = 0

    def on_progress(self):
        """Override this function to handle progress of receiving data."""
        if self.total:
            new_percent = self.received * 100 // self.total
            if new_percent != self.percent:
                self.percent = new_percent
                print("progress", new_percent)


@stream_request_body
class StreamHandler(RequestHandler):
    def get(self):
        self.write('''<html><body>
<form method="POST" action="/" enctype="multipart/form-data">
File #1: <input name="file1" type="file"><br>
File #2: <input name="file2" type="file"><br>
File #3: <input name="file3" type="file"><br>
Other field 1: <input name="other1" type="text"><br>
Other field 2: <input name="other2" type="text"><br>
Other field 3: <input name="other3" type="text"><br>
<input type="submit">
</form>
</body></html>''')

    def post(self):
        try:
            self.ps.finish_receive()
            # Use the parts here!
            self.set_header("Content-Type", "text/plain")
            oout = sys.stdout
            try:
                sys.stdout = self
                self.ps.examine()
            finally:
                sys.stdout = oout
        finally:
            # Don't forget to release the temporary files.
            self.ps.release_parts()

    def prepare(self):
        try:
            total = int(self.request.headers.get("Content-Length", "0"))
        except ValueError:
            total = 0
        self.ps = MyPostDataStreamer(total)  # ,tmpdir="/tmp"

    def data_received(self, chunk):
        self.ps.receive(chunk)


def main():
    application = Application([
        url(r"/", StreamHandler),
    ])
    max_buffer_size = 4 * 1024**3  # 4GB
    http_server = HTTPServer(application, max_buffer_size=max_buffer_size)
    http_server.listen(8888)
    IOLoop.instance().start()


main()
Start this server and point your browser at http://localhost:8888/.
UPDATE: max_buffer_size should not normally be increased, and neither should max_body_size; both should be kept at low values. Instead, inside the prepare() method of a handler decorated with stream_request_body, call self.request.connection.set_max_body_size() to set the maximum size that may be streamed for that request. For details see: https://groups.google.com/forum/#!topic/python-tornado/izEXQd71rQk
This is an undocumented part of tornado. I'm preparing a module that can be used to handle file uploads out of the box. When it is ready, I will post a link here.
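The pattern just described can be sketched as follows. FakeConnection and FakeRequest are stand-ins of my own so the sketch runs without tornado installed; in a real handler you would call the same set_max_body_size() on self.request.connection inside prepare(). MAX_STREAMED_SIZE is an illustrative name and value:

```python
MAX_STREAMED_SIZE = 1024**3  # 1 GiB per request, illustrative


class FakeConnection:
    # Stand-in for tornado's HTTP connection object.
    def set_max_body_size(self, n):
        self.max_body_size = n


class FakeRequest:
    # Stand-in for tornado's HTTPServerRequest.
    def __init__(self):
        self.connection = FakeConnection()
        self.headers = {"Content-Length": "1024"}


def prepare(request):
    # Raise the per-request limit instead of the server-wide buffer size.
    request.connection.set_max_body_size(MAX_STREAMED_SIZE)
    try:
        total = int(request.headers.get("Content-Length", "0"))
    except ValueError:
        total = 0
    return total


req = FakeRequest()
total = prepare(req)
print(req.connection.max_body_size, total)
# → 1073741824 1024
```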
Unfortunately not. The closest thing is email.contentmanager: https://docs.python.org/3/library/email.contentmanager.html but it loads the MIME message into memory. I wonder if anybody has already written a Python function that can extract files from raw POST data without using too much memory. – nagylzs
Maybe with the headersonly parameter you could process just the headers and get the content-type. But not the data... – nagylzs
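For what it's worth, the headersonly idea from this comment can be sketched with the standard library's parser: parsing stops after the header block, so the (possibly huge) body is never MIME-decoded, yet the content type and multipart boundary remain available (the raw message below is a made-up sample):

```python
from email import policy
from email.parser import BytesParser

# A minimal header block followed by a body we never want parsed.
raw = (b"Content-Type: multipart/form-data; boundary=xyz\r\n"
       b"Content-Length: 12\r\n"
       b"\r\n"
       b"body ignored")

# headersonly=True makes the parser stop after the headers.
msg = BytesParser(policy=policy.default).parsebytes(raw, headersonly=True)
content_type = msg.get_content_type()
boundary = msg.get_boundary()
print(content_type, boundary)
# → multipart/form-data xyz
```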