
I want to crawl and analyze poems with Scrapy, but the request callback is called fewer times than requests are made.

I fetch the analysis in the following steps:

  • poem list url
  • poem detail url
  • poem analyze url

However, I found that the callback is invoked fewer times than the number of requests: in my demo the request count is 10, but the callback is only called 8 times.

Here is the log:

      2016-10-26 16:15:54 [scrapy] INFO: Scrapy 1.2.0 started (bot: poem) 
      2016-10-26 16:15:54 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'poem.spiders', 'SPIDER_MODULES': ['poem.spiders'], 'ROBOTSTXT_OBEY': True, 'LOG_LEVEL': 'INFO', 'BOT_NAME': 'poem'} 
      2016-10-26 16:15:54 [scrapy] INFO: Enabled extensions: 
      ['scrapy.extensions.logstats.LogStats', 
      'scrapy.extensions.telnet.TelnetConsole', 
      'scrapy.extensions.corestats.CoreStats'] 
      2016-10-26 16:15:54 [scrapy] INFO: Enabled downloader middlewares: 
      ['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware', 
      'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 
      'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 
      'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 
      'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware', 
      'scrapy.downloadermiddlewares.retry.RetryMiddleware', 
      'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 
      'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 
      'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 
      'scrapy.downloadermiddlewares.cookies.CookiesMiddleware', 
      'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware', 
      'scrapy.downloadermiddlewares.stats.DownloaderStats'] 
      2016-10-26 16:15:54 [scrapy] INFO: Enabled spider middlewares: 
      ['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 
      'scrapy.spidermiddlewares.offsite.OffsiteMiddleware', 
      'scrapy.spidermiddlewares.referer.RefererMiddleware', 
      'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 
      'scrapy.spidermiddlewares.depth.DepthMiddleware'] 
      2016-10-26 16:15:54 [scrapy] INFO: Enabled item pipelines: 
      ['poem.pipelines.PoemPipeline'] 
      2016-10-26 16:15:54 [scrapy] INFO: Spider opened 
      2016-10-26 16:15:54 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 
      poem list count : 10 
      callback count : 1 
      item count : 1 
      callback count : 2 
      item count : 2 
      callback count : 3 
      item count : 3 
      callback count : 4 
      item count : 4 
      callback count : 5 
      item count : 5 
      callback count : 6 
      item count : 6 
      callback count : 7 
      item count : 7 
      callback count : 8 
      item count : 8 
      2016-10-26 16:15:55 [scrapy] INFO: Closing spider (finished) 
      2016-10-26 16:15:55 [scrapy] INFO: Dumping Scrapy stats: 
      {'downloader/request_bytes': 5385, 
      'downloader/request_count': 20, // (10 * 2) 
      'downloader/request_method_count/GET': 20, 
      'downloader/response_bytes': 139702, 
      'downloader/response_count': 20, 
      'downloader/response_status_count/200': 20, 
      'dupefilter/filtered': 2, 
      'finish_reason': 'finished', 
      'finish_time': datetime.datetime(2016, 10, 26, 8, 15, 55, 416028), 
      'item_scraped_count': 8, 
      'log_count/INFO': 7, 
      'request_depth_max': 2, 
      'response_received_count': 20, 
      'scheduler/dequeued': 19, 
      'scheduler/dequeued/memory': 19, 
      'scheduler/enqueued': 19, 
      'scheduler/enqueued/memory': 19, 
      'start_time': datetime.datetime(2016, 10, 26, 8, 15, 54, 887101)} 
      2016-10-26 16:15:55 [scrapy] INFO: Spider closed (finished) 
      

      IDE : Pycharm
      DEBUG :

    ############## spider.py ##############
    import scrapy
    from poem.items import PoemItem


    class PoemSpider(scrapy.Spider):

        name = 'poem'

        analyze_count = 0

        start_urls = ['http://so.gushiwen.org/type.aspx']

        def parse(self, response):
            # 1. get poem list url
            poems = response.xpath("//div[@class='typeleft']/div[@class='sons']")
            for poem in poems:
                # 2. get poem detail url
                poem_href = poem.xpath("./p[1]/a/@href").extract_first()
                poem_url = response.urljoin(poem_href)
                yield scrapy.Request(poem_url, callback=self.parse_poem)

        def parse_poem(self, response):
            # 3. get analyze url
            analyze_href = response.xpath("//u[text()='%s']/parent::*/@href" % (u'赏析')).extract_first()
            analyze_url = response.urljoin(analyze_href)
            yield scrapy.Request(analyze_url, callback=self.parse_poem_analyze)

        def parse_poem_analyze(self, response):
            # print analyze callback called count
            print "#####################################"
            PoemSpider.analyze_count = PoemSpider.analyze_count + 1
            print PoemSpider.analyze_count
            poem = PoemItem()
            yield poem

    ############## pipelines.py ##############
    class PoemPipeline(object):

        processcount = 0

        def process_item(self, item, spider):
            # print item count
            print ">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>"
            PoemPipeline.processcount = PoemPipeline.processcount + 1
            print PoemPipeline.processcount
            return item
      

Answer


The log is missing the stderr output, but you can still see what is going on from the stats dump at the end of the run:

    {'downloader/request_bytes': 5385, 
    'downloader/request_count': 20, // (10 * 2) 
    'downloader/request_method_count/GET': 20, 
    'downloader/response_bytes': 139702, 
    'downloader/response_count': 20, 
    'downloader/response_status_count/200': 20, 
    'dupefilter/filtered': 2, <--------------------------------------------- 
    'finish_reason': 'finished', 
    'finish_time': datetime.datetime(2016, 10, 26, 8, 15, 55, 416028), 
    'item_scraped_count': 8, <--------------------------------------------- 
    'log_count/INFO': 7, 
    'request_depth_max': 2, 
    'response_received_count': 20, 
    'scheduler/dequeued': 19, 
    'scheduler/dequeued/memory': 19, 
    'scheduler/enqueued': 19, 
    'scheduler/enqueued/memory': 19, 
    'start_time': datetime.datetime(2016, 10, 26, 8, 15, 54, 887101)} 
    

Two of your requests were filtered out by the dupefilter middleware, which ensures that all requests are unique (which usually means having unique URLs).
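For reference, the dupefilter can be bypassed on a per-request basis when re-visiting an already-seen URL is genuinely intended; that is not the right fix here, but as a minimal, hypothetical sketch of the option (not part of the original spider):

    # Only pass dont_filter=True when a repeat visit is intentional;
    # it tells Scrapy's scheduler to skip the duplicate-request filter.
    yield scrapy.Request(analyze_url,
                         callback=self.parse_poem_analyze,
                         dont_filter=True)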

The problem appears to be unsafe URL creation here: analyze_href can be empty, in which case analyze_url ends up equal to response.url:

    analyze_href = response.xpath("//u[text()='%s']/parent::*/@href"%(u'赏析')).extract_first() 
    analyze_url = response.urljoin(analyze_href) 
    

Since that URL has just been crawled, the new request is filtered out as a duplicate.
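This follows from how URL joining works: urljoin() returns the base URL unchanged when the second argument is empty or None, and Scrapy's Response.urljoin() essentially delegates to that same function. A minimal sketch of the behavior (Python 3 shown; in Python 2 the same function lives in urlparse; the detail URL below is hypothetical):

    from urllib.parse import urljoin

    base = "http://so.gushiwen.org/view_123.aspx"  # hypothetical poem detail URL
    # A missing href collapses to the page we are already on, so the spider
    # re-requests a URL the dupefilter has already seen.
    print(urljoin(base, None))  # http://so.gushiwen.org/view_123.aspx
    print(urljoin(base, ""))    # http://so.gushiwen.org/view_123.aspx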
To avoid that, only build the URL when analyze_href is not empty:

    analyze_href = response.xpath("//u[text()='%s']/parent::*/@href" % (u'赏析')).extract_first()
    if not analyze_href:
        # note: this needs `import logging` at the top of spider.py
        logging.error("failed to find analyze_href for {}".format(response.url))
        return
    analyze_url = response.urljoin(analyze_href)
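As a side note, inside a spider the same message could be emitted through the spider's built-in logger instead of the logging module, which avoids the extra import; a minimal variant of the check above:

    if not analyze_href:
        self.logger.error("failed to find analyze_href for %s", response.url)
        return
    analyze_url = response.urljoin(analyze_href)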
    

You're right! Thank you! – TomatoPeter