
Scraping Job Listings from Liepin (猎聘网)


Tech stack: requests + BeautifulSoup + re

First, here is the program source code, which ran successfully on February 2.

Program source:

import requests
from bs4 import BeautifulSoup
import re

# Example listing URL: /c101280100/?query=python&page=2&ka=page-2

def getHTMLText(url, cookie):
    header = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:58.0) Gecko/20100101 Firefox/58.0',
        'Connection': 'close',
    }
    # Browser disguise: turn the raw cookie string copied from the browser into a dict.
    cookies = {}
    for line in cookie.split(';'):
        name, value = line.strip().split('=', 1)
        cookies[name] = value
    try:
        r = requests.get(url, headers=header, cookies=cookies, timeout=30)
        r.raise_for_status()
        r.encoding = 'utf-8'
        return r.text
    except:
        return "爬取失败!"  # "scraping failed!"

def HTMLParse(html, jlist):
    JobName = []
    JobArea = []
    JobPayment = []
    JobTime = []
    JobEducation = []
    CompanyName = []
    CompanyNature = []
    CompanyScale = []
    CompanyWelfare = []
    CompanyTag = []
    soup = BeautifulSoup(html, "html.parser")
    job_title = soup.find_all(name='div', attrs={'class': 'job-title'})
    job_limit = soup.find_all(name='div', attrs={'class': 'job-limit clearfix'})
    company_info = soup.find_all(name='div', attrs={'class': 'info-company'})
    append_info = soup.find_all(name='div', attrs={'class': 'info-append clearfix'})
    # Job title and location.
    for job in job_title:
        job_name = job.find_all(name='span', attrs={'class': 'job-name'})
        JobName.append(job_name[0].string)
        job_area = job.find_all(name='span', attrs={'class': 'job-area'})
        JobArea.append(job_area[0].string)
    # Salary, experience requirement, and education requirement.
    for job in job_limit:
        job_payment = job.find_all(name='span', attrs={'class': 'red'})
        JobPayment.append(job_payment[0].string)
        job_time = job.find_all(name='p')
        # The <p> looks like: <p>3-5年<em class="vline"></em>本科</p>;
        # the text before <em> is the experience, the text after </em> the education.
        job_time_partt = re.compile(r'.*?<p>(.*?)<em.*?', re.S)
        JobTime.append(re.findall(job_time_partt, str(job_time[0]))[0])
        job_education_partt = re.compile(r'.*</em>(.*?)</p>.*?', re.S)
        JobEducation.append(re.findall(job_education_partt, str(job_time[0]))[0])
    # Company name, type, and scale.
    for company in company_info:
        company_name = company.find_all(name='a', attrs={'target': '_blank'})[0].string
        CompanyName.append(company_name)
        company_nature_partt = re.compile(r'.*?<p>(.*?)<em.*?', re.S)
        company_nature = re.findall(company_nature_partt, str(company))
        CompanyNature.append(company_nature[0])
        company_scale_partt = re.compile(r'.*</em>(.*?)</p>.*?', re.S)
        company_scale = re.findall(company_scale_partt, str(company))
        CompanyScale.append(company_scale[0])
    # Benefits description and tag list.
    for append in append_info:
        companytags = append.find_all(name='span', attrs={'class': 'tag-item'})
        company_tags = []
        welfare = append.find_all(name='div', attrs={'class': 'info-desc'})[0].string
        CompanyWelfare.append(welfare)
        for companytag in companytags:
            company_tags.append(companytag.string)
        CompanyTag.append(company_tags)
    # Merge the parallel lists into one record per job.
    for i in range(len(JobName)):
        if CompanyWelfare[i] is None:
            CompanyWelfare[i] = '无'  # "none"
        jlist.append([JobName[i], CompanyName[i], JobArea[i], JobPayment[i],
                      JobTime[i], JobEducation[i], CompanyNature[i],
                      CompanyScale[i], CompanyWelfare[i], CompanyTag[i]])

def printList(jlist):
    # Columns: job title, company, location, salary, experience, education,
    # company type, company scale, benefits, tags. Argument {10} is chr(12288),
    # a full-width space used as the fill character so Chinese columns align.
    out = ("{0:{10}^10}\t{1:{10}^10}\t{2:{10}^10}\t{3:{10}^10}\t{4:{10}^10}\t"
           "{5:{10}^10}\t{6:{10}^10}\t{7:{10}^10}\t{8:{10}^10}\t{9:{10}^11}")
    print(out.format('岗位名称', '公司名称', '工作地区', '工作薪资', '工作经验',
                     '学历要求', '公司性质', '公司规模', '公司福利', '公司标签',
                     chr(12288)))
    for i in range(len(jlist)):
        print(out.format(jlist[i][0], jlist[i][1], jlist[i][2], jlist[i][3],
                         jlist[i][4], jlist[i][5], jlist[i][6], jlist[i][7],
                         jlist[i][8], str(jlist[i][9]), chr(12288)))

def printList2(jlist):
    # One record per block, one field per line.
    out = ("岗位名称:{0:{10}<10}\n公司名称:{1:{10}<10}\n工作地区:{2:{10}<10}\n"
           "工作薪资:{3:{10}<10}\n工作经验:{4:{10}<10}\n学历要求:{5:{10}<10}\n"
           "公司性质:{6:{10}<10}\n公司规模:{7:{10}<10}\n公司福利:{8:{10}<10}\n"
           "公司标签:{9:{10}<10}\n")
    for i in range(len(jlist)):
        print(out.format(jlist[i][0], jlist[i][1], jlist[i][2], jlist[i][3],
                         jlist[i][4], jlist[i][5], jlist[i][6], jlist[i][7],
                         jlist[i][8], str(jlist[i][9]), chr(12288)))

def writetxt(jlist):
    if jlist:
        filename = 'jobList.txt'
        out = ("岗位名称:{0:{10}<10}\n公司名称:{1:{10}<10}\n工作地区:{2:{10}<10}\n"
               "工作薪资:{3:{10}<10}\n工作经验:{4:{10}<10}\n学历要求:{5:{10}<10}\n"
               "公司性质:{6:{10}<10}\n公司规模:{7:{10}<10}\n公司福利:{8:{10}<10}\n"
               "公司标签:{9:{10}<10}\n")
        with open(filename, 'w', encoding='utf-8') as f:
            for i in range(len(jlist)):
                f.write(out.format(jlist[i][0], jlist[i][1], jlist[i][2],
                                   jlist[i][3], jlist[i][4], jlist[i][5],
                                   jlist[i][6], jlist[i][7], jlist[i][8],
                                   str(jlist[i][9]), chr(12288)))
                f.write('\n')
        print(filename, "数据写入完成!")  # "data written"
    else:
        print("未查询到信息!")  # "no records found"

def main():
    # Cookie copied from the browser; in my tests it only worked for one run.
    cookie = 'lastCity=101010100; t=9GE4oa4Fzh4HjMwh; wt=9GE4oa4Fzh4HjMwh; _bl_uid=egkmj6834Unr9t6191ggtLOi9O1p; sid=sem; __c=1580646913; __g=-; Hm_lvt_194df3105ad7148dcf2b98a91b5e727a=1580630919,1580634656,1580634731,1580646913; __l=l=https%3A%2F%%2Fs%3Fie%3Dutf-8%26src%3Dhao_360so_b%26shb%3D1%26hsid%3Df1e32f46a6b0a8b6%26q%3D%25E7%259B%25B4%25E8%2581%2598%25E7%25BD%2591&r=https%3A%2F%%2Fs%3Fie%3Dutf-8%26src%3Dhao_360so_b%26shb%3D1%26hsid%3Df1e32f46a6b0a8b6%26q%3D%25E7%259B%25B4%25E8%2581%2598%25E7%25BD%2591&friend_source=0&friend_source=0; Hm_lpvt_194df3105ad7148dcf2b98a91b5e727a=1580646929; __a=31002138.1580529728.1580634656.1580646913.89.4.5.80; __zp_stoken__=5010RSZ%2Fg0%2FD3VefcRhQPZS8fSN3IsIuM4YI6hyIx%2F4nmX74inlgvTtfmjOge7end0H8IbPc3IehrTvaRAePWCRZXd1RUiH%2BnYq0XOC%2BgC9MnQizZmccCXkjWlqsyxKLN1u4'
    # NOTE: the host part of this URL was lost when the article was reposted;
    # prepend the site's base URL before running.
    url = '/c100010000/?query={0}&page={1}'
    joblist = []
    keyword = 'python'
    for i in range(1, 11):  # pages 1 through 10
        spider_url = url.format(keyword, str(i))
        html = getHTMLText(spider_url, cookie)
        HTMLParse(html, joblist)
    print(joblist)
    print(len(joblist))
    printList(joblist)
    printList2(joblist)
    writetxt(joblist)

if __name__ == '__main__':
    main()

Since the techniques I have learned are still immature, many parts of the code were never optimized!

For fetching the HTML text, a general-purpose skeleton can be used:

def getHTMLText(url, cookie):
    header = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:58.0) Gecko/20100101 Firefox/58.0',
        'Connection': 'close',
    }
    # Browser disguise: turn the raw cookie string into a dict.
    cookies = {}
    for line in cookie.split(';'):
        name, value = line.strip().split('=', 1)
        cookies[name] = value
    try:
        r = requests.get(url, headers=header, cookies=cookies, timeout=30)
        r.raise_for_status()
        r.encoding = 'utf-8'
        return r.text
    except:
        return "爬取失败!"  # "scraping failed!"

At first I did not send a cookie and could not retrieve any listing data at all, so I then tried passing the cookie information along. Being able to scrape pages with nothing more than a cookie already counts as getting off easy; many sites now have very mature anti-scraping defenses, for example Taobao demands a slider captcha after repeated queries. The one real difficulty I hit here is that a cookie obtained from the browser only works once; the second request with the same cookie fails. I am not yet sure whether this is a problem with how I obtain the cookie!
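One possible explanation, which I have not verified, is that the site rotates an anti-bot token (such as the __zp_stoken__ value in the cookie above) on every response via Set-Cookie, so a copied cookie header goes stale after a single use. A requests.Session absorbs Set-Cookie updates automatically, so a session-based fetcher might survive across pages. A minimal sketch, assuming the same raw cookie string as above; make_session and fetch_page are illustrative names, not part of the original program:

import requests
from http.cookies import SimpleCookie

def make_session(raw_cookie):
    # Parse the raw "name=value; name=value" Cookie header with the stdlib,
    # which handles '=' inside values more robustly than a manual split.
    session = requests.Session()
    session.headers.update({
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:58.0) Gecko/20100101 Firefox/58.0',
    })
    parsed = SimpleCookie()
    parsed.load(raw_cookie)
    for name, morsel in parsed.items():
        session.cookies.set(name, morsel.value)
    return session

def fetch_page(session, url):
    # Any Set-Cookie headers in the response (e.g. a rotated anti-bot token)
    # are stored on the session automatically, so the next request sends the
    # refreshed values instead of the stale ones.
    r = session.get(url, timeout=30)
    r.raise_for_status()
    r.encoding = 'utf-8'
    return r.text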

For parsing the HTML pages I used BeautifulSoup plus re. Perhaps it could be done with re alone, but I only brought re in after bs4 failed to extract what I needed. There presumably is a bs4-only way as well, but I did not manage to find one.

The problem came up when extracting each posting's experience requirement and education requirement from markup like this:

<p>3-5年<em class="vline"></em>本科</p>

I could not pull the two pieces out separately, namely the experience requirement '3-5年' (3-5 years) and the education requirement '本科' (bachelor's degree).

You can grab the <p> tag and call p.get_text() to obtain all of its text, but the concatenated result ('3-5年本科') leaves no delimiter to split on.
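For what it's worth, bs4 can in fact separate the two fields without regex: get_text() accepts a separator argument, and the text nodes on either side of the <em> divider are reachable as siblings. A small sketch against the snippet above:

from bs4 import BeautifulSoup

html = '<p>3-5年<em class="vline"></em>本科</p>'
p = BeautifulSoup(html, 'html.parser').p

# Option 1: have get_text() insert a separator between text nodes, then split.
experience, education = p.get_text('|').split('|')
print(experience, education)  # 3-5年 本科

# Option 2: navigate around the <em> divider directly.
em = p.find('em')
print(em.previous_sibling, em.next_sibling)  # 3-5年 本科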

In this program, though, I went with regular expressions (re) to extract the text.

The re patterns are set to:

re.compile(r'.*</em>(.*?)</p>.*?', re.S)   # education: captures the text after </em>
re.compile(r'.*?<p>(.*?)<em.*?', re.S)     # experience: captures the text before <em>

With these two patterns, the relevant fields can be extracted successfully.
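Run against the sample <p> from earlier, the two patterns behave as expected; a quick check:

import re

snippet = '<p>3-5年<em class="vline"></em>本科</p>'
exp_pattern = re.compile(r'.*?<p>(.*?)<em.*?', re.S)   # text before <em>
edu_pattern = re.compile(r'.*</em>(.*?)</p>.*?', re.S)  # text after </em>

print(re.findall(exp_pattern, snippet))  # ['3-5年']
print(re.findall(edu_pattern, snippet))  # ['本科']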

On structuring the extracted text data:

First, each category of text is appended to its own list. The records are then merged by index, one job at a time (aligned with JobName), so that every job becomes a single list, and all of these per-job lists together make up the jlist.
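As an aside, this index-aligned merge (including the None-to-'无' fallback for missing benefits) can be written more compactly with zip; a sketch, where build_records is a name of my own choosing rather than part of the original program:

def build_records(JobName, CompanyName, JobArea, JobPayment, JobTime,
                  JobEducation, CompanyNature, CompanyScale,
                  CompanyWelfare, CompanyTag):
    # Replace a missing benefits entry with '无' ("none"), as the loop above does.
    CompanyWelfare = [w if w is not None else '无' for w in CompanyWelfare]
    # zip pairs the i-th element of every column list; all ten lists are
    # assumed to have exactly one entry per job card.
    columns = (JobName, CompanyName, JobArea, JobPayment, JobTime,
               JobEducation, CompanyNature, CompanyScale,
               CompanyWelfare, CompanyTag)
    return [list(record) for record in zip(*columns)]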

The last step is displaying and saving the results.

Since there are quite a few entries, they are saved to a txt file.
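CSV might be a friendlier target than plain txt here, since every record is already a flat list; a sketch using the standard csv module (writecsv is an illustrative name, not part of the original program):

import csv

def writecsv(jlist, filename='jobList.csv'):
    headers = ['岗位名称', '公司名称', '工作地区', '工作薪资', '工作经验',
               '学历要求', '公司性质', '公司规模', '公司福利', '公司标签']
    # utf-8-sig adds a BOM so spreadsheet software opens the Chinese headers correctly.
    with open(filename, 'w', newline='', encoding='utf-8-sig') as f:
        writer = csv.writer(f)
        writer.writerow(headers)
        for row in jlist:
            # The last column is a list of tags; join it into one cell.
            writer.writerow(row[:-1] + ['/'.join(row[-1])])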

The final result is shown in the screenshot below. (Screenshot not preserved in this copy.)

There are still plenty of problems in this program; pointers and corrections are welcome!
