I'm running into problems while creating rules. How do I create rules in Scrapy (Python)?
<html><head>...</head><body>
<ul id="results-list">
<li class="result clearfix news">
<div class="summary">
<h3><a href="/sports/hockey/struggling-canucks-rely-on-schneider-to-snag-win-against-sens/article2243069/">Struggling Canucks rely on Schneider to snag win against Sens</a></h3>
<p class="summary">Nov 21, 2011– Eleventh place Canucks rely on goalie Cory Schneider to improve record to 10-9-1
</p>
<p class="meta"><a href="/sports/hockey/struggling-canucks-rely-on-schneider-to-snag-win-against-sens/article2243069/">http://www.example.com/sports/hockey/struggling-canucks-rely-on-schneider-to-snag-win-against-sens/article2243069/</a>
</p>
</div>
</li>
<li class="result clearfix news">
<div class="summary">
<h3><a href="/news/world/celebrities-set-to-testify-at-uk-media-ethics-inquiry/article2242840/">Celebrities set to testify at U.K. media ethics inquiry</a></h3>
<p class="summary">Nov 20, 2011– Hugh Grant and J.K. Rowling given opportunity to strike back against tabloids’ invasion of privacy
</p>
<p class="meta"><a href="/news/world/celebrities-set-to-testify-at-uk-media-ethics-inquiry/article2242840/">http://www.example.com/news/world/celebrities-set-to-testify-at-uk-media-ethics-inquiry/article2242840/</a>
</p>
</div>
</li>
...
</ul><!-- end of ul#results-list -->
<ul class="paginator">
<li class="selected"><a href="http://www.example.com/search/?q=news&start=0">1</a></li>
<li ><a href="http://www.example.com/search/?q=news&start=10">2</a></li>
<li ><a href="http://www.example.com/search/?q=news&start=20">3</a></li>
...
<li class="jump last"><a href="http://www.example.com/search/?q=news&start=90">Last</a></li>
</ul><!-- end of ul.paginator -->
</body></html>
Suppose my start URL is http://www.example.com/search?q=news. When I open that URL in a web browser, I get the source code shown above, and I now want to extract data from the links inside ul#results-list. I created a spider for this as follows:
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from thirdapp.items import ThirdappItem
class MySpider(CrawlSpider):
    name = 'example.com'
    allowed_domains = ['example.com']
    start_urls = [
        'http://www.example.com/search?q=news',
        'http://www.example.com/search?q=movies',
    ]
    rules = (
        # allow and restrict_xpaths are keyword arguments taking tuples of
        # regexes/XPaths, and the "keep following links" flag on Rule is
        # called follow, not allow
        Rule(SgmlLinkExtractor(allow=(r'\?q=news',),
                               restrict_xpaths=('//ul[@class="paginator"]',)),
             callback='parse_item', follow=True),
    )
    def parse_item(self, response):
        self.log('Hi, this is an item page! %s' % response.url)
        hxs = HtmlXPathSelector(response)
        items = hxs.select('//h3')
        scraped_items = []
        for item in items:
            scraped_item = ThirdappItem()
            scraped_item["title"] = item.select('a/text()').extract()
            scraped_items.append(scraped_item)
        # return the populated items, not the raw selector list
        return scraped_items
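As a sanity check of the extraction idea itself (h3 → a under ul#results-list), here is a stdlib-only sketch that does not need Scrapy. The markup is a simplified, well-formed stand-in for the page source above; the hrefs are shortened placeholders:

```python
from xml.etree import ElementTree

# Simplified, well-formed stand-in for the page source above (hrefs shortened).
html = """
<body>
<ul id="results-list">
  <li><div><h3><a href="/a">Struggling Canucks rely on Schneider to snag win against Sens</a></h3></div></li>
  <li><div><h3><a href="/b">Celebrities set to testify at U.K. media ethics inquiry</a></h3></div></li>
</ul>
</body>
"""

root = ElementTree.fromstring(html)
# Same idea as hxs.select('//h3') plus 'a/text()' in the spider:
# scope to the results list, then collect the anchor texts.
results = root.find('.//ul[@id="results-list"]')
titles = [a.text for a in results.findall('.//h3/a')]
print(titles)
```

Note that ElementTree only understands a small XPath subset and requires well-formed XML, so this is just a check of the selection logic, not a replacement for Scrapy's selectors on real-world HTML.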
So, will these rules work and give me the result I expect?
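One detail worth checking separately: the entries passed to allow are regular expressions, so a literal ? must be escaped or the pattern is not even a valid regex. A quick stdlib check against the pagination hrefs from the page above (no Scrapy needed):

```python
import re

# Pagination hrefs copied from ul.paginator in the page source above,
# plus one movies URL that should NOT match.
hrefs = [
    "http://www.example.com/search/?q=news&start=0",
    "http://www.example.com/search/?q=news&start=10",
    "http://www.example.com/search/?q=movies&start=10",
]

# Unescaped, '?' is a regex quantifier with nothing to repeat:
try:
    re.compile('?q=news')
except re.error as exc:
    print('invalid pattern:', exc)

# Escaped, it matches only the news pages:
pattern = re.compile(r'\?q=news')
print([h for h in hrefs if pattern.search(h)])
```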