I want to combine multiprocessing, gevent, and requests to scrape job listings for various programming languages from Lagou. The idea: one process per language. For example, Java has 30 pages of listings, so inside the Java process I spawn 30 gevent coroutines, one per page. But I found that once I open more processes, multiprocessing + coroutines is actually slower than plain multiprocessing, and the more processes I start, the more pages time out and fail. Plain multiprocessing, by contrast, stays around 34 s and fetches essentially every page. Here are my test timings:
1 process + coroutines: 12 s
1 process only: 34 s
2 processes + coroutines: 26 s
2 processes only: 34 s
3 processes + coroutines: 51 s
3 processes only: 34 s
Below is my main code. Am I using processes and coroutines the wrong way, or is something else going on? Why do the coroutines actually slow the crawler down once more processes are running? This is my first question here, so please bear with me. Thanks, everyone.
import csv
import json
import os
import random
import time
from multiprocessing import Process

from gevent import monkey
monkey.patch_all()  # without this, requests blocks and gevent gives no speedup

import gevent
import requests


class Spider:
    # __init__ (which sets self.base_url, self.base_referer, self.page_list
    # and self.user_agents) is not shown in the original post.

    def get_profession_jobs(self, professions):
        process = []
        for profession in professions:
            p = Process(target=self.get_all_pages, args=(profession, self.page_list))
            p.start()
            process.append(p)
        for p in process:
            p.join()

    def get_all_pages(self, profession, pages):
        print(os.getpid())
        jobs = [gevent.spawn(self.get_detail_page, profession, page) for page in pages]
        gevent.joinall(jobs)
        # for page in pages:
        #     self.get_detail_page(profession, page)

    def get_detail_page(self, profession, page):
        user_agent = self.user_agents[random.randint(0, 3)]
        header = {
            'Host': 'www.lagou.com',
            'Referer': self.base_referer + profession,
            'User-Agent': user_agent,
            'Origin': 'https://www.lagou.com',
            'Accept-Encoding': 'gzip, deflate, br',
            'Accept-Language': 'zh-CN,zh;q=0.9',
            'Accept': 'application/json, text/javascript, */*; q=0.01',
            'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
        }
        data = {'pn': page, 'kd': profession}
        print(profession + ' page ' + str(page) + ' start')
        response = requests.post(self.base_url, headers=header, data=data, timeout=20)
        self.clear_data(response.content, profession)
        print(profession + ' page ' + str(page) + ' finished')

    def clear_data(self, page, profession):
        results = json.loads(page.decode('utf-8'))['content']['positionResult']['result']
        for result in results:
            job_name = result['positionName']
            job_class = profession
            publish_date = result['createTime']
            money = result['salary']
            experience = result['workYear']
            education = result['education']
            location = result['city']
            with open('jobs.csv', 'a', newline='') as jobs:
                writer = csv.writer(jobs)
                writer.writerow([job_name, job_class, publish_date, money,
                                 experience, education, location])


if __name__ == '__main__':
    start_time = time.time()
    professions = ['PHP', 'Python', 'Go', 'Java']
    spider = Spider()
    spider.get_profession_jobs(professions)
    end_time = time.time()
    print('All finished')
    print('Used ' + str(end_time - start_time) + ' seconds')
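One likely cause of the timeouts: in the code above, every process fires all of its requests at once, so total concurrency grows with the process count until bandwidth (or the site's rate limit) saturates. Capping concurrency per process with `gevent.pool.Pool` is one way to test this. A minimal sketch, where `fetch` is a stand-in that sleeps instead of calling `requests.post`:

```python
import gevent
from gevent.pool import Pool

# Track how many "downloads" run at once, to show the cap working.
active = [0]
peak = [0]

def fetch(page):
    # Stand-in for the real requests.post call; gevent.sleep simulates I/O.
    active[0] += 1
    peak[0] = max(peak[0], active[0])
    gevent.sleep(0.01)
    active[0] -= 1
    return page

pool = Pool(5)  # at most 5 pages in flight in this process
results = list(pool.map(fetch, range(30)))
print('fetched %d pages, peak concurrency %d' % (len(results), peak[0]))
```

In the real spider, `pool.map(self.get_detail_page, ...)` would replace the `gevent.spawn` list in `get_all_pages`; tuning the pool size up from a small number shows where throughput stops improving.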
1
clearT OP  Hoping someone experienced can take a look. Thanks.
2
yulon 2018-07-10 19:06:10 +08:00
Open Task Manager and check whether your bandwidth is maxed out. If everything is blocked waiting on the network, more concurrency will naturally be slower.
3
lieh222 2018-07-11 11:15:33 +08:00
Print the download time of every page and you'll see where the time goes...
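The per-page timing suggested here can be added without touching the method bodies. A sketch using a small decorator (`timed` is a hypothetical helper, not part of the OP's code) that could wrap `get_detail_page`:

```python
import time
from functools import wraps

def timed(func):
    """Print how long each call took; wrap get_detail_page with this."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()
        try:
            return func(*args, **kwargs)
        finally:
            print('%s took %.3f s' % (func.__name__, time.time() - start))
    return wrapper

@timed
def fake_download(page):
    # Stand-in for a real page download.
    time.sleep(0.05)
    return page

fake_download(1)
```

With each page's time printed, it becomes obvious whether a few slow pages dominate or every page degrades as concurrency rises.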
4
chengxiao 2018-07-11 16:11:27 +08:00
The problem is your network speed.