V2EX  ›  Python

Scrapy keeps failing with "TCP connection timed out: 110" errors when crawling the xici site

al0ne · 2015-10-21 11:55:42 +08:00 · 14334 views
Error log:

    2015-10-21 11:39:10+0800 [xici] DEBUG: Retrying <GET http://www.xicidaili.com/nn/4> (failed 2 times): TCP connection timed out: 110: Connection timed out.
    2015-10-21 11:39:10+0800 [xici] DEBUG: Retrying <GET http://www.xicidaili.com/nn/5> (failed 2 times): TCP connection timed out: 110: Connection timed out.
    2015-10-21 11:39:11+0800 [xici] DEBUG: Retrying <GET http://www.xicidaili.com/nn/6> (failed 2 times): TCP connection timed out: 110: Connection timed out.
    2015-10-21 11:39:11+0800 [xici] DEBUG: Retrying <GET http://www.xicidaili.com/nn/7> (failed 2 times): TCP connection timed out: 110: Connection timed out.
    ^C2015-10-21 11:39:40+0800 [scrapy] INFO: Received SIGINT, shutting down gracefully. Send again to force
    2015-10-21 11:39:40+0800 [xici] INFO: Closing spider (shutdown)
    ^C2015-10-21 11:39:41+0800 [scrapy] INFO: Received SIGINT twice, forcing unclean shutdown
    2015-10-21 11:39:41+0800 [xici] DEBUG: Retrying <GET http://www.xicidaili.com/nn/8> (failed 2 times): An error occurred while connecting: [Failure instance: Traceback (failure with no frames): <class 'twisted.internet.error.ConnectionLost'>: Connection to the other side was lost in a non-clean fashion: Connection lost.
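Errno 110 (ETIMEDOUT) means the TCP handshake itself got no answer, which usually points at the network path or the server dropping your connections rather than at Scrapy. A quick standard-library probe (a sketch, not part of the original post; pick whatever host and port you want to test) can confirm reachability outside Scrapy:

```python
import socket

def tcp_reachable(host, port=80, timeout=5):
    """Return True if a TCP connection to host:port succeeds within timeout.
    A timeout at this level corresponds to the errno 110 in the log above."""
    try:
        # create_connection performs the full TCP handshake
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers timeout, refused, unreachable, DNS failure
        return False
```

If this returns False for www.xicidaili.com on port 80 from the same machine, the problem is connectivity or blocking, not the spider.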

settings.py configuration:

    BOT_NAME = 'xici'
    SPIDER_MODULES = ['xici.spiders']
    NEWSPIDER_MODULE = 'xici.spiders'
    DBKWARGS = {'db': 'python', 'user': 'root', 'passwd': '12344321',
                'host': 'localhost', 'use_unicode': True, 'charset': 'utf8'}

    ITEM_PIPELINES = {
        'xici.pipelines.XiciPipeline': 300,
    }
    DOWNLOAD_DELAY = 0.25
    USER_AGENT = 'Mozilla/5.0 (X11; Linux x86_64; rv:7.0.1) Gecko/20100101 Firefox/7.7'
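If the site is merely slow or rate-limiting, the relevant knobs are Scrapy's standard timeout and retry settings; the "(failed 2 times)" in the log matches Scrapy's default RETRY_TIMES of 2. A sketch of possible additions to settings.py (the setting names are standard Scrapy settings; the values are illustrative starting points, not recommendations):

```python
# settings.py -- possible additions (standard Scrapy setting names;
# values here are illustrative, not recommendations)
RETRY_ENABLED = True
RETRY_TIMES = 5                      # default is 2, matching "(failed 2 times)" in the log
DOWNLOAD_TIMEOUT = 60                # default is 180 s; lower it to fail and retry sooner
DOWNLOAD_DELAY = 2                   # 0.25 s between requests may be aggressive enough to get blocked
CONCURRENT_REQUESTS_PER_DOMAIN = 4   # default is 8; fewer parallel connections per host
```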
    1 reply · 2015-10-26 16:21:32 +08:00
    leavic · #1 · 2015-10-26 16:21:32 +08:00
    I occasionally hit this too when crawling SIS, and my guess is that it's a network problem; as you know, SIS has to be reached through a proxy.
    The Scrapy framework seems pretty smart about it, though: once the network connection recovers, it starts crawling again, and it won't re-crawl pages it has already fetched. It just backtracks from the interruption point to re-check a stretch, then continues from where it stopped.
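The behaviour described here is Scrapy's duplicate-request filter plus its retry middleware; resuming across full process restarts additionally needs Scrapy's JOBDIR job-persistence setting. As a rough illustration of the pattern only (plain Python, not Scrapy's actual implementation; `fetch`, the URL list, and the backoff policy are all made up for the sketch):

```python
import time

def crawl_with_resume(urls, fetch, seen=None, retries=3, backoff=1.0):
    """Sketch of resume-and-dedup: URLs already fetched are skipped, and
    transient connection failures are retried with a growing delay.
    (Illustrative only; Scrapy does this with its dupefilter and
    RetryMiddleware, not with this code.)"""
    seen = set() if seen is None else seen
    results = {}
    for url in urls:
        if url in seen:                 # dedup: don't re-crawl finished pages
            continue
        for attempt in range(retries):
            try:
                results[url] = fetch(url)
                seen.add(url)           # mark done only after a successful fetch
                break
            except ConnectionError:
                time.sleep(backoff * (attempt + 1))  # simple linear backoff
    return results, seen
```

Passing the `seen` set back in on a second call plays the role of persisted crawl state: only URLs that never succeeded get fetched again.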