Scraping torrents with the urllib2 and re modules
Approach
1. Log in to the forum programmatically (only needed for boards that require a login; see the sketch after this list).
2. Open the target board.
3. Walk the threads (fetch a given board page, then collect the URLs of all threads on that page).
4. Visit each thread URL in turn and extract the torrent download address from the page source (with regular expressions or a third-party page-parsing library; an example of the latter follows the full script).
5. Visit the torrent page and download the .torrent file.
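Step 1 is not actually implemented in the script below, which imports cookielib but never uses it. For boards that do require a login, a minimal sketch of that step might look like the following; the login URL and form field names here are assumptions and will differ per forum:

import urllib
import urllib2
import cookielib

# keep cookies across requests so the logged-in session survives
cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
urllib2.install_opener(opener)  # later urllib2.urlopen calls reuse this session

# hypothetical login endpoint and form fields -- adjust to the target forum
loginData = urllib.urlencode({'username': 'xxx', 'password': 'yyy'})
opener.open('http://xxx.yyy.zzz/login.php', loginData)

Once the opener is installed, every urllib2.urlopen call in the script below sends the session cookie automatically.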
The full script is as follows:
import urllib
import urllib2
import cookielib
import re
import sys
import os

# site is the forum base address; fid is the board id
site = "http://xxx.yyy.zzz/"
source = "thread0806.php?fid=x&search=&page="

btSave = "./clyzwm/"
if os.path.isdir(btSave):
    print btSave + " exists"
else:
    os.mkdir(btSave)

logfile = "./clyzwm/down.log"
errorfile = "./clyzwm/error.log"
sucfile = "./clyzwm/success.log"

headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.77 Safari/537.36',
           'Referer': 'http://xxx.yyy.zzz/'}
def btDown(url, dirPath):
    logger(logfile, "download file : " + url)
    try:
        # pull the torrent reference id out of the link URL
        btStep1 = re.findall('http://[\w]+\.[\w]+\.[\w]{0,4}/[\w]{2,6}\.php\?[\w]{2,6}=([\w]+)', url, re.I)
        if len(btStep1) > 0:
            ref = btStep1[0]
            downsite = ""
            downData = {}
            if len(ref) > 20:
                # long ref: the download form is on the same site and carries a hidden "reff" token
                downsite = re.findall('http://www\.[\w]+\.[\w]+/', url)[0]
                downsite = downsite + "download.php"
                reff = re.findall('input\stype=\"hidden\"\sname=\"reff\"\svalue=\"([\w=]+)\"', urllib2.urlopen(url).read(), re.I)[0]
                downData = {'ref': ref, 'reff': reff, 'submit': 'download'}
            else:
                # short ref: post straight to the download gateway
                downsite = "http://www.downhh.com/download.php"
                downData = {'ref': ref, 'rulesubmit': 'download'}
            downData = urllib.urlencode(downData)
            downReq = urllib2.Request(downsite, downData)
            downReq.add_header('User-Agent', headers['User-Agent'])
            downPost = urllib2.urlopen(downReq)
            stream = downPost.read()
            downPost.close()
            # anything under ~1 KB is an error page, not a torrent
            if len(stream) > 1000:
                name = btStep1[0] + ".torrent"
                fw = open(dirPath + name, 'wb')  # torrents are binary data
                fw.write(stream)
                fw.close()
                logger(sucfile, url + "\n")
            else:
                logger(errorfile, url + "\n")
    except urllib2.URLError, e:
        print e.reason
def logger(logfile, msg):
    print msg
    fw = open(logfile, 'a')
    fw.write(msg)
    fw.close()
for i in range(1, 1000):
    logger(logfile, "\n\n\n@ page " + str(i) + " ...")
    part = site + source + str(i)
    content = urllib2.urlopen(urllib2.Request(part, headers=headers)).read()
    content = content.decode('gbk').encode('utf8')
    # collect the thread URLs listed on this board page
    pages = re.findall('<a\s+href=\"(htm_data/[\d]+/[\d]+/[\d]+\.html).*?<\/a>', content, re.I)
    for page in pages:
        page = site + page
        pageCode = urllib2.urlopen(page).read()
        # threads point at the torrent page through a viidii.info jump link
        zzJump = re.findall('http://www.viidii.info/\?http://[\w]+/[\w]+\?[\w]{2,6}=[\w]+', pageCode)
        if len(zzJump) > 0:
            zzJump = zzJump[0]
            # the direct link*.php torrent-page URL also appears in the thread source
            zzPage = re.findall('http://[\w]+\.[\w]+\.[\w]+/link[\w]?\.php\?[\w]{2,6}=[\w]+', pageCode)
            if len(zzPage) > 0:
                zzPage = zzPage[0]
                logger(logfile, "\n- zhongzi page -" + zzPage)
                btDown(zzPage, btSave)
            else:
                logger(logfile, "\n. NOT FOUND .")
        else:
            logger(logfile, "\n... NOT FOUND ...")
            # fallback: look for a direct link*.php?ref=... address instead
            zzPage = re.findall('http://[\w]+\.[\w]+\.[\w]+/link[\w]?\.php\?ref=[\w]+', pageCode)
            if len(zzPage) > 0:
                btDown(zzPage[0], btSave)
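Step 4 mentions a third-party page-parsing library as an alternative to the regular expressions. As an illustration only, here is a minimal sketch assuming the bs4 (BeautifulSoup) package is installed; it collects the same htm_data/.../....html thread links that the regex above matches:

from bs4 import BeautifulSoup
import re

def threadLinks(content):
    # parse the board page and pick out anchors whose href matches the thread pattern
    soup = BeautifulSoup(content, 'html.parser')
    return [a['href'] for a in
            soup.find_all('a', href=re.compile(r'htm_data/\d+/\d+/\d+\.html'))]

A parser is more tolerant of attribute-order and whitespace changes in the page markup than a hand-written regex, at the cost of an extra dependency.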