
Python: a "targeted stock-data crawler" example

Published: 2023-06-13 09:12:58

Background: having previously covered the BeautifulSoup module and the Re library (see Extended Reading at the end of this article), we now build on them to fetch the names and trading data of all A-share stocks on the Shanghai and Shenzhen exchanges and save them to a file.

Technical route: requests-bs4-re

1 Choosing the data source

Selection principle: the stock data must be present statically in the HTML page, not generated by JavaScript.

Selection method: press F12 in the browser, inspect the page source, and so on.

Selection mindset: don't fixate on any single site; look for multiple sources of the same data.

(1) Getting the stock list:

炒股一点通: http://www.cgedt.com/stockcode/yilanbiao.asp

(2) Getting individual stock data:

股城网: https://hq.gucheng.com/HSinfo.html

Individual stock pages: https://hq.gucheng.com/SH600050/

https://hq.gucheng.com/SZ002276/
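The F12 check can also be done programmatically: fetch the page and search the raw HTML for a pattern you expect to find, such as a stock-code link. A minimal sketch (the helper name and the sample HTML fragments are illustrative, not from the original article):

```python
import re

def has_static_stock_links(html):
    """Return True if the raw HTML already contains stock-code links,
    i.e. the data is static rather than generated by JavaScript."""
    return re.search(r"/stock/\d{6}/", html) is not None

# A fragment of what the static listing page's source looks like
sample = '<a href="/stock/600050/">中国联通</a>'
print(has_static_stock_links(sample))                  # True
print(has_static_stock_links('<div id="app"></div>'))  # False
```

On a real page you would pass `requests.get(url).text` instead of the sample string; if the check fails, the data is likely rendered by JavaScript and a different source is needed.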

2 Program structure

  1. Get the stock list from the 炒股一点通 page: getStockList()
  2. For each stock in the list, fetch its data from 股城网 and save the results to a file: getStockInfo()

3 The code

import requests, re, traceback
from bs4 import BeautifulSoup

def getHTMLText(url, code="utf-8"):
    """Fetch a page and return its text, or "" on any failure."""
    try:
        r = requests.get(url, timeout=10)
        r.raise_for_status()      # raise on 4xx/5xx responses
        r.encoding = code
        return r.text
    except Exception as exc:
        print('\nThere was a problem: %s' % (exc))
        return ""
    
def getStockList(lst, stockURL):
    """Collect stock codes (e.g. "SH600050", "SZ002276") into lst."""
    html = getHTMLText(stockURL, "GB2312")   # the listing page is GB2312-encoded
    soup = BeautifulSoup(html, 'html.parser')
    for a in soup.find_all('a'):
        try:
            href = a.attrs['href']
            # Stock links look like "/stock/600050/"
            temp = re.findall(r"/stock/\d{6}/", href)[0]
            code = temp[7:13]                # the six-digit stock code
            # Codes starting with 6 are Shanghai (SH); the rest are Shenzhen (SZ)
            lst.append(("SH" if code[0] == "6" else "SZ") + code)
        except (KeyError, IndexError):
            continue                         # no href, or not a stock link

def getStockInfo(lst, stockURL, fpath):
    """Fetch each stock's page, parse its data, and append it to fpath."""
    count = 0
    for stock in lst:
        url = stockURL + stock
        html = getHTMLText(url)
        try:
            if html == "":
                count += 1
                print("\rProgress: {:.2f}%".format(count * 100 / len(lst)), end="")
                continue
            infoDict = {}
            soup = BeautifulSoup(html, 'html.parser')
            # The stock name sits in <header class="stock_title"><h1>...</h1>
            stockInfo1 = soup.find('header', attrs={'class': 'stock_title'})
            infoDict.update({'股票名称': stockInfo1.h1.string})

            # Trading data is laid out as <dt> (field name) / <dd> (value) pairs
            stockInfo2 = soup.find('section', attrs={'class': 'stock_price clearfix'})
            keyList = stockInfo2.find_all('dt')
            valueList = stockInfo2.find_all('dd')
            for key, val in zip(keyList, valueList):
                infoDict[key.text] = val.text

            with open(fpath, 'a', encoding='utf-8') as f:
                f.write(str(infoDict) + '\n')
            count += 1
            print("\rProgress: {:.2f}%".format(count * 100 / len(lst)), end="")
        except Exception:
            count += 1
            print("\rProgress: {:.2f}%".format(count * 100 / len(lst)), end="")
            traceback.print_exc()
            continue

def main():
    stock_list_url = 'http://www.cgedt.com/stockcode/yilanbiao.asp'
    stock_info_url = 'https://hq.gucheng.com/'
    output_file = 'E://python123//GuchengStockInfo.txt'

    slist = []
    getStockList(slist, stock_list_url)
    getStockInfo(slist, stock_info_url, output_file)

if __name__ == '__main__':
    main()
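The core parsing step in getStockList() can be checked on small samples without any network access: the regular expression pulls the six-digit code out of an href, and the first digit decides the exchange prefix. The standalone helper below is a rearrangement of that logic for illustration, not part of the original program:

```python
import re

def code_from_href(href):
    """Extract a stock code such as 'SH600050' from an href, or None."""
    match = re.findall(r"/stock/\d{6}/", href)
    if not match:
        return None
    digits = match[0][7:13]          # e.g. '600050'
    # Codes beginning with 6 trade in Shanghai; the rest in Shenzhen
    return ("SH" if digits[0] == "6" else "SZ") + digits

print(code_from_href("/stock/600050/"))  # SH600050
print(code_from_href("/stock/002276/"))  # SZ002276
print(code_from_href("/about/"))         # None
```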

Output: a progress percentage is printed to the console, and the parsed records are appended line by line to GuchengStockInfo.txt.

References:

[1] China University MOOC: Python Web Crawling and Information Extraction (https://www.icourse163.org/course/BIT-1001870001)

[2] The dictionary update() method (https://www.runoob.com/python/att-dictionary-update.html)

[3] Basic usage of Python's traceback module (https://www.cnblogs.com/ldy-miss/p/9857694.html)

Extended reading:

[1] Python: an introduction to the BeautifulSoup library

[2] Python: an introduction to the Re (regular expression) library

[3] Python: a "targeted Taobao price-comparison crawler" example