Scrapy - Crawling
Description
To execute your spider, run the following command inside your first_scrapy directory:
scrapy crawl first
Here, first is the name that was given to the spider when it was created.
Once the spider starts crawling, you will see output similar to the following:
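The crawl command is the usual way to launch a spider, but Scrapy can also be driven from a Python script through its CrawlerProcess API. A minimal sketch, assuming the script lives in the project root so that the project settings (and the first spider) can be found; run_first.py is a hypothetical filename, not part of the generated project:

# run_first.py (hypothetical helper script)
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

# Load the project's settings.py so the spider can be looked up by name
process = CrawlerProcess(get_project_settings())
process.crawl("first")   # same spider name as used with 'scrapy crawl first'
process.start()          # blocks until the crawl is finished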
2016-08-09 18:13:07-0400 [scrapy] INFO: Scrapy started (bot: tutorial)
2016-08-09 18:13:07-0400 [scrapy] INFO: Optional features available: ...
2016-08-09 18:13:07-0400 [scrapy] INFO: Overridden settings: {}
2016-08-09 18:13:07-0400 [scrapy] INFO: Enabled extensions: ...
2016-08-09 18:13:07-0400 [scrapy] INFO: Enabled downloader middlewares: ...
2016-08-09 18:13:07-0400 [scrapy] INFO: Enabled spider middlewares: ...
2016-08-09 18:13:07-0400 [scrapy] INFO: Enabled item pipelines: ...
2016-08-09 18:13:07-0400 [scrapy] INFO: Spider opened
2016-08-09 18:13:08-0400 [scrapy] DEBUG: Crawled (200) <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/> (referer: None)
2016-08-09 18:13:09-0400 [scrapy] DEBUG: Crawled (200) <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Books/> (referer: None)
2016-08-09 18:13:09-0400 [scrapy] INFO: Closing spider (finished)
As you can see in the output, there is a log line for each URL, and (referer: None) indicates that these are start URLs with no referrer. Next, you should see two new files named Books.html and Resources.html created in your first_scrapy directory.
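For reference, those two files come from the spider's parse() method writing each response body to disk. A minimal sketch of such a spider follows; it mirrors the spider set up in the earlier chapters, so treat it as an illustration rather than the exact code:

import scrapy

class FirstSpider(scrapy.Spider):
    name = "first"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/",
    ]

    def parse(self, response):
        # The second-to-last URL segment ("Books" or "Resources") becomes
        # the filename, which is why Books.html and Resources.html appear.
        filename = response.url.split("/")[-2] + ".html"
        with open(filename, "wb") as f:
            f.write(response.body)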