I needed to analyze some Nginx logs and didn't feel like digging into an open-source tool such as Logstash, so I simply wrote a script tailored to the requirement.

First, the log format. Ours differs from the usual one, so a custom parser was the only option:
12.195.166.35 [10/May/2015:14:38:09 +0800] "list.xxxx.com" "GET /new/10:00/9.html?cat=0,0&sort=price_asc HTTP/1.0" 200 42164 "http://list.linuxidc.com/new/10:00/8.html?cat=0,0&sort=price_asc" "Mozilla/5.0 (Linux; U; Android 4.4.2; zh-CN; H60-L02 Build/HDH60-L02) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 UCBrowser/10.4.0.558 U3/0.8.0 Mobile Safari/534.30"
That is my log format. The script follows:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Author: xiaoluo
# QQ: 942729042
# Date: 2015-05-12
import re
import sys

log = sys.argv[1]   # path to the access log, passed on the command line
# Each fragment below is wrapped in (...) when the full pattern is built,
# so "?P<ip>..." becomes the named group "(?P<ip>...)".
ip = r"?P<ip>[\d.]*"
date = r"?P<date>\d+"
month = r"?P<month>\w+"
year = r"?P<year>\d+"
log_time = r"?P<time>\S+"
timezone = r"""?P<timezone>
[^\"]*
"""
name = r"""?P<name>\"
[^\"]*\"
"""
method = r"?P<method>\S+"
request = r"?P<request>\S+"
protocol = r"?P<protocol>\S+"
status = r"?P<status>\d+"
bodyBytesSent = r"?P<bodyBytesSent>\d+"
refer = r"""?P<refer>\"
[^\"]*\"
"""
userAgent = r"""?P<userAgent>
.*
"""
p = re.compile(
    r"(%s)\ \[(%s)/(%s)/(%s)\:(%s)\ (%s)\ (%s)\ (%s)\ (%s)\ (%s)\ (%s)\ (%s)\ (%s)\ (%s)"
    % (ip, date, month, year, log_time, timezone, name, method, request,
       protocol, status, bodyBytesSent, refer, userAgent),
    re.VERBOSE)
def getcode():
    """Count occurrences of each HTTP status code."""
    codedic = {}
    with open(log, 'r') as f:
        for logline in f:
            matchs = p.match(logline)
            if matchs is not None:
                status = matchs.group('status')
                codedic[status] = codedic.get(status, 0) + 1
    return codedic
def getIP():
    """Return the 20 client IP addresses with the most requests."""
    IPdic = {}
    with open(log, 'r') as f:
        for logline in f:
            matchs = p.match(logline)
            if matchs is not None:
                IP = matchs.group('ip')
                IPdic[IP] = IPdic.get(IP, 0) + 1
    return sorted(IPdic.items(), key=lambda c: c[1], reverse=True)[:20]
def getURL():
    """Return the 20 most frequently requested site (virtual host) names."""
    URLdic = {}
    with open(log, 'r') as f:
        for logline in f:
            matchs = p.match(logline)
            if matchs is not None:
                urlname = matchs.group('name')
                URLdic[urlname] = URLdic.get(urlname, 0) + 1
    return sorted(URLdic.items(), key=lambda c: c[1], reverse=True)[:20]
def getpv():
    """Return the 20 busiest minutes (requests per minute)."""
    pvdic = {}
    with open(log, 'r') as f:
        for logline in f:
            matchs = p.match(logline)
            if matchs is not None:
                hh, mm, ss = matchs.group('time').split(':')   # e.g. "14:38:09"
                minute = hh + ":" + mm
                pvdic[minute] = pvdic.get(minute, 0) + 1
    return sorted(pvdic.items(), key=lambda c: c[1], reverse=True)[:20]
if __name__ == '__main__':
    print("Status-code distribution:")
    print(getcode())
    print("Top 20 IP addresses by request count:")
    print(getIP())
    print("Top 20 most-requested site names:")
    print(getURL())
    print("Top 20 busiest minutes:")
    print(getpv())
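To sanity-check the named-group approach against the sample log line above, here is a condensed, self-contained version of the pattern (simplified for illustration; it is not the script's exact pattern, and it only captures the fields up to the body byte count):

```python
import re

# Condensed named-group pattern for the sample line shown earlier.
pattern = re.compile(
    r'(?P<ip>[\d.]+) \[(?P<date>\d+)/(?P<month>\w+)/(?P<year>\d+)'
    r':(?P<time>\S+) (?P<timezone>[^"]*)"(?P<name>[^"]*)" '
    r'"(?P<method>\S+) (?P<request>\S+) (?P<protocol>[^"]+)" '
    r'(?P<status>\d+) (?P<bodyBytesSent>\d+)'
)

line = ('12.195.166.35 [10/May/2015:14:38:09 +0800] "list.xxxx.com" '
        '"GET /new/10:00/9.html?cat=0,0&sort=price_asc HTTP/1.0" 200 42164 '
        '"http://list.linuxidc.com/new/10:00/8.html" "Mozilla/5.0"')

m = pattern.match(line)
# Named groups make the field access self-documenting.
print(m.group('ip'), m.group('status'), m.group('name'))
# → 12.195.166.35 200 list.xxxx.com
```

Accessing fields by name (`m.group('status')`) is less fragile than positional indices like `allGroups[10]`, which silently break whenever a group is added or removed.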
One thing worth pointing out: I originally factored the regex matching into a single function, so that each of the functions above would not have to open and scan the file separately. But that function could only return its results as one big list, and the list exhausted my memory: the box has 32 GB of RAM and the log is 15 GB.
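The blow-up comes from materializing every match in one list. A generator avoids it: it yields one match at a time, so memory stays flat no matter how large the log is. A minimal sketch (the helper name `parse_matches` and the condensed pattern are illustrative, not from the original script):

```python
import re
from collections import Counter

# Condensed pattern: just the fields needed for a status-code count.
pattern = re.compile(r'(?P<ip>[\d.]+) .*?" (?P<status>\d+) ')

def parse_matches(path):
    """Yield one match object per log line; never holds the whole log."""
    with open(path) as f:
        for line in f:          # file iteration is itself lazy, line by line
            m = pattern.match(line)
            if m is not None:
                yield m

# One streaming pass, constant memory:
# codes = Counter(m.group('status') for m in parse_matches('access.log'))
```

Each consumer function could then take the generator as an argument, keeping the single-parse design without ever building the full list.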
The result: the last function reports the number of requests in each minute.
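The same per-minute bucketing can also be written with `collections.Counter`, whose `most_common()` replaces the manual sort-and-slice; `top_minutes` here is a hypothetical helper, not part of the script:

```python
from collections import Counter

def top_minutes(times, n=20):
    """Bucket HH:MM:SS timestamps down to the minute, return the busiest n."""
    # rsplit(':', 1)[0] drops the seconds: "14:38:09" -> "14:38"
    return Counter(t.rsplit(':', 1)[0] for t in times).most_common(n)

print(top_minutes(['14:38:09', '14:38:41', '14:39:00']))
# → [('14:38', 2), ('14:39', 1)]
```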