Scrapy error processing start URL

Hello. I'm new to Python and Scrapy, but I'm trying to code a spider, and I get an error while processing the start URL that I can't find a way to solve. I checked whether the problem was in the XPath or somewhere else; most of the threads I found talk about wrong indentation, which is not my case. Code:
import scrapy
from scrapy.exceptions import CloseSpider

from scrapy_crawls.items import Vino


class BodebocaSpider(scrapy.Spider):
    name = "Bodeboca"
    allowed_domains = ["bodeboca.com"]
    start_urls = (
        'http://www.bodeboca.com/vino/espana',
    )
    counter = 1
    next_url = ""
    vino = None

    def __init__(self):
        self.next_url = self.start_urls[0]

    def parse(self, response):
        for sel in response.xpath(
                '//div[@id="venta-main-wrapper"]/div[@id="venta-main"]/div/div/div/div/div/div/span'):
            #print sel
            # HREF
            a_href = sel.xpath('.//a/@href').extract()
            the_href = a_href[0]
            print the_href
            yield scrapy.Request(the_href, callback=self.parse_item,
                                 headers={'Referer': response.url.encode('utf-8'),
                                          'Accept-Language': 'es-ES,es;q=0.8,en-US;q=0.5,en;q=0.3'})

        # SIGUIENTE URL
        results = response.xpath(
            '//div[@id="wrapper"]/article/div[@id="article-inner"]/div[@id="default-filter-form-wrapper"]/div[@id="venta-main-wrapper"]/div[@class="bb-product-info-sort bb-sort-behavior-attached"]/div[@clsas="bb-product-info"]/span[@class="bb-product-info-count"]').extract()
        if not results:
            raise CloseSpider
        else:
            #self.next_url = self.next_url.replace(str(self.counter), str(self.counter + 1))
            #self.counter += 1
            self.next_url = response.xpath('//div[@id="venta-main-wrapper"]/div[@class="item-list"]/ul[@class="pager"]/li[@class="pager-next"]/a/@href').extract()[0]
            yield scrapy.Request(self.next_url, callback=self.parse,
                                 headers={'Referer': self.allowed_domains[0],
                                          'Accept-Language': 'es-ES,es;q=0.8,en-US;q=0.5,en;q=0.3'})
Error:
2017-03-28 12:29:08 [scrapy.core.engine] INFO: Spider opened
2017-03-28 12:29:08 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-03-28 12:29:08 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.bodeboca.com/robots.txt> (referer: None)
2017-03-28 12:29:08 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.bodeboca.com/vino/espana> (referer: None)
/vino/terra-cuques-2014
2017-03-28 12:29:08 [scrapy.core.scraper] ERROR: Spider error processing <GET http://www.bodeboca.com/vino/espana> (referer: None)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/scrapy/utils/defer.py", line 102, in iter_errback
yield next(it)
File "/usr/local/lib/python2.7/dist-packages/scrapy/spidermiddlewares/offsite.py", line 29, in process_spider_output
for x in result:
File "/usr/local/lib/python2.7/dist-packages/scrapy/spidermiddlewares/referer.py", line 22, in <genexpr>
return (_set_referer(r) for r in result or())
File "/usr/local/lib/python2.7/dist-packages/scrapy/spidermiddlewares/urllength.py", line 37, in <genexpr>
return (r for r in result or() if _filter(r))
File "/usr/local/lib/python2.7/dist-packages/scrapy/spidermiddlewares/depth.py", line 58, in <genexpr>
return (r for r in result or() if _filter(r))
File "/home/gerardo/proyectos/vinos-diferentes-crawl/scrapy_crawls/spiders/Bodeboca.py", line 36, in parse
'Accept-Language': 'es-ES,es;q=0.8,en-US;q=0.5,en;q=0.3'})
File "/usr/local/lib/python2.7/dist-packages/scrapy/http/request/__init__.py", line 25, in __init__
self._set_url(url)
File "/usr/local/lib/python2.7/dist-packages/scrapy/http/request/__init__.py", line 57, in _set_url
raise ValueError('Missing scheme in request url: %s' % self._url)
ValueError: Missing scheme in request url: /vino/terra-cuques-2014
2017-03-28 12:29:08 [scrapy.core.engine] INFO: Closing spider (finished)
2017-03-28 12:29:08 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 449,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 38558,
'downloader/response_count': 2,
'downloader/response_status_count/200': 2,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2017, 3, 28, 10, 29, 8, 951654),
'log_count/DEBUG': 2,
'log_count/ERROR': 1,
'log_count/INFO': 7,
'response_received_count': 2,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'spider_exceptions/ValueError': 1,
'start_time': datetime.datetime(2017, 3, 28, 10, 29, 8, 690948)}
2017-03-28 12:29:08 [scrapy.core.engine] INFO: Spider closed (finished)
You're passing 'the_href' to scrapy.Request, and it's only the path part of the URL, not the full URL. –
Also, here `start_urls = ('http://www.bodeboca.com/vino/espana', )` I think you have a stray comma –
Yes, the comma was the problem. There's another issue, but I'm trying to solve it on my own; if I can't, I'll post another question. Thanks again for your help – desconectad0
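Following the first comment, the error can be reproduced and fixed outside the spider: `ValueError: Missing scheme in request url` means `scrapy.Request` was given a relative path (`/vino/terra-cuques-2014`) instead of an absolute URL. A minimal sketch of the fix, joining the extracted href against the page URL with the standard library (inside a Scrapy ≥ 1.0 spider, `response.urljoin(the_href)` does the same thing):

```python
from urllib.parse import urljoin  # Python 3; in Python 2 use: from urlparse import urljoin

base = 'http://www.bodeboca.com/vino/espana'   # the page being parsed (response.url)
href = '/vino/terra-cuques-2014'               # relative href extracted by the XPath

# Join the relative path against the page URL to get a full,
# scheme-qualified URL that scrapy.Request will accept.
full_url = urljoin(base, href)
print(full_url)  # http://www.bodeboca.com/vino/terra-cuques-2014
```

In the spider's `parse` method the request would then become `yield scrapy.Request(response.urljoin(the_href), callback=self.parse_item, ...)`, and likewise for the pager link assigned to `self.next_url`.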