Nutch 1.4 and Solr 3.4 - can't crawl URLs, "No URLs to fetch"

I followed a tutorial on web crawling with Nutch under cygwin, using Tomcat, Nutch 1.4 and Solr 3.4. I already managed to crawl a URL once, but somehow it no longer works, no matter which URL I try. My regex-urlfilter.txt in runtime/local/conf looks like this:
# skip file: ftp: and mailto: urls
-^(file|ftp|mailto):
# skip image and other suffixes we can't yet parse
# for a more extensive coverage use the urlfilter-suffix plugin
-\.(gif|GIF|jpg|JPG|png|PNG|ico|ICO|css|CSS|sit|SIT|eps|EPS|wmf|WMF|zip|ZIP|ppt|PPT|mpg|MPG|xls|XLS|gz|GZ|rpm|RPM|tgz|TGZ|mov|MOV|exe|EXE|jpeg|JPEG|bmp|BMP|js|JS)$
# skip URLs containing certain characters as probable queries, etc.
-[?*!@=]
# skip URLs with slash-delimited segment that repeats 3+ times, to break loops
-.*(/[^/]+)/[^/]+\1/[^/]+\1/
# accept anything else
+^http://([a-z0-9]*\.)*nutch.apache.org/
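As a sanity check on that last accept rule, the seed URL should match it. grep -E only approximates the Java regex syntax Nutch uses, but for this simple pattern they should agree; grep echoes the line back when it matches:

$ echo "http://nutch.apache.org/" | grep -E '^http://([a-z0-9]*\.)*nutch.apache.org/'
http://nutch.apache.org/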
The only URL in my seed.txt in runtime/local/bin/urls is http://nutch.apache.org/.
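For reference, this is everything the seed file contains (path as above):

$ cat runtime/local/bin/urls/seed.txt
http://nutch.apache.org/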
I start the crawl with this command:
$ ./nutch crawl urls -dir newCrawl3 -solr http://localhost:8080/solr/ -depth 2 -topN 3
The console output is:
cygpath: can't convert empty path
crawl started in: newCrawl3
rootUrlDir = urls
threads = 10
depth = 2
solrUrl=http://localhost:8080/solr/
topN = 3
Injector: starting at 2017-05-18 17:03:25
Injector: crawlDb: newCrawl3/crawldb
Injector: urlDir: urls
Injector: Converting injected urls to crawl db entries.
Injector: Merging injected urls into crawl db.
Injector: finished at 2017-05-18 17:03:28, elapsed: 00:00:02
Generator: starting at 2017-05-18 17:03:28
Generator: Selecting best-scoring urls due for fetch.
Generator: filtering: true
Generator: normalizing: true
Generator: topN: 3
Generator: jobtracker is 'local', generating exactly one partition.
Generator: 0 records selected for fetching, exiting ...
Stopping at depth=0 - no more URLs to fetch.
No URLs to fetch - check your seed list and URL filters.
crawl finished: newCrawl3
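In case it helps with diagnosis, the crawldb can be inspected with Nutch's readdb tool to confirm whether the seed URL was actually injected (crawldb path taken from the log above):

$ ./nutch readdb newCrawl3/crawldb -stats
$ ./nutch readdb newCrawl3/crawldb -url http://nutch.apache.org/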
I know there are a few similar questions out there, but most of them were never resolved. Can anyone help?
Thanks in advance.