When epoll_wait() returns a ready UDT socket (uid) and we query its state, we expect BROKEN but get CLOSED. A CLOSED event cannot be handled the way a BROKEN event is, so the event is never removed; epoll_wait then keeps returning the same uid, and we end up in a busy loop. Let's trace into the lower-level code to find the cause.
int CUDTUnited::epoll_remove_usock(const int eid, const UDTSOCKET u)
{
   int ret = m_EPoll.remove_usock(eid, u);

   CUDTSocket* s = locate(u);
   if (NULL != s)
   {
      s->m_pUDT->removeEPoll(eid);
   }
   //else
   //{
   //   throw CUDTException(5, 4);
   //}

   return ret;
}
CUDTSocket* CUDTUnited::locate(const UDTSOCKET u)
{
   CGuard cg(m_ControlLock);

   map<UDTSOCKET, CUDTSocket*>::iterator i = m_Sockets.find(u);
   if ((i == m_Sockets.end()) || (i->second->m_Status == CLOSED))
      return NULL;

   return i->second;
}
void CUDT::removeEPoll(const int eid)
{
   // clear IO events notifications;
   // since this happens after the epoll ID has been removed, they cannot be set again
   set<int> remove;
   remove.insert(eid);
   s_UDTUnited.m_EPoll.update_events(m_SocketID, remove, UDT_EPOLL_IN | UDT_EPOLL_OUT, false);

   CGuard::enterCS(s_UDTUnited.m_EPoll.m_EPollLock);
   m_sPollID.erase(eid);
   CGuard::leaveCS(s_UDTUnited.m_EPoll.m_EPollLock);
}
Inside CUDTUnited::epoll_remove_usock, locate() is called first to look up the socket, but if the socket's status is already CLOSED, locate() returns NULL. epoll_remove_usock therefore never reaches the removeEPoll() call, and the epoll event is never removed.
But why does a CLOSED event occur at all? By the author's design, only a BROKEN event should ever occur, never a CLOSED one. Let's keep digging.
First, let's look at how the BROKEN event is generated.
After the peer appears to have been disconnected for more than ten seconds, CUDT::checkTimers() does the following:
......
m_bClosing = true;
m_bBroken = true;
m_iBrokenCounter = 30;

// update snd U list to remove this socket
m_pSndQueue->m_pSndUList->update(this);

releaseSynch();

// app can call any UDT API to learn the connection_broken error
s_UDTUnited.m_EPoll.update_events(m_SocketID, m_sPollID, UDT_EPOLL_IN | UDT_EPOLL_OUT | UDT_EPOLL_ERR, true);

CTimer::triggerEvent();
......
Here m_bBroken is set to true and the epoll event is raised.
However, before epoll_wait delivers the event, this may also happen:
#ifndef WIN32
   void* CUDTUnited::garbageCollect(void* p)
#else
   DWORD WINAPI CUDTUnited::garbageCollect(LPVOID p)
#endif
{
   CUDTUnited* self = (CUDTUnited*)p;

   CGuard gcguard(self->m_GCStopLock);

   while (!self->m_bClosing)
   {
      self->checkBrokenSockets();
      ......
......
void CUDTUnited::checkBrokenSockets()
{
   CGuard cg(m_ControlLock);

   // set of sockets To Be Closed and To Be Removed
   vector<UDTSOCKET> tbc;
   vector<UDTSOCKET> tbr;

   for (map<UDTSOCKET, CUDTSocket*>::iterator i = m_Sockets.begin(); i != m_Sockets.end(); ++ i)
   {
      // check broken connection
      if (i->second->m_pUDT->m_bBroken)
      {
         if (i->second->m_Status == LISTENING)
         {
            // for a listening socket, it should wait an extra 3 seconds in case a client is connecting
            if (CTimer::getTime() - i->second->m_TimeStamp < 3000000)
               continue;
         }
         else if ((i->second->m_pUDT->m_pRcvBuffer != NULL) && (i->second->m_pUDT->m_pRcvBuffer->getRcvDataSize() > 0) && (i->second->m_pUDT->m_iBrokenCounter -- > 0))
         {
            // if there is still data in the receiver buffer, wait longer
            continue;
         }

         // close broken connections and start removal timer
         i->second->m_Status = CLOSED;
         i->second->m_TimeStamp = CTimer::getTime();
         tbc.push_back(i->first);
         m_ClosedSockets[i->first] = i->second;
         ......
......
The GC thread is UDT's garbage collector; until the application calls UDT's cleanup(), it stays in a loop of calling checkBrokenSockets() and then blocking.
Inside checkBrokenSockets(), whenever a socket's m_bBroken is true, its m_Status is set to CLOSED.
So querying the socket state at that moment (e.g. via getsockstate) returns CLOSED: what was really a BROKEN event has been turned into a CLOSED event, and the subsequent removal of the epoll event fails.
So, the fix is as follows. Change
int CEPoll::remove_usock(const int eid, const UDTSOCKET& u)
{
   CGuard pg(m_EPollLock);

   map<int, CEPollDesc>::iterator p = m_mPolls.find(eid);
   if (p == m_mPolls.end())
      throw CUDTException(5, 13);

   p->second.m_sUDTSocksIn.erase(u);
   p->second.m_sUDTSocksOut.erase(u);
   p->second.m_sUDTSocksEx.erase(u);

   return 0;
}
to
int CEPoll::remove_usock2(const int eid, const UDTSOCKET& u)
{
   CGuard pg(m_EPollLock);

   map<int, CEPollDesc>::iterator p = m_mPolls.find(eid);
   if (p == m_mPolls.end())
      throw CUDTException(5, 13);

   p->second.m_sUDTSocksIn.erase(u);
   p->second.m_sUDTSocksOut.erase(u);
   p->second.m_sUDTSocksEx.erase(u);
   p->second.m_sUDTWrites.erase(u);
   p->second.m_sUDTReads.erase(u);
   p->second.m_sUDTExcepts.erase(u);

   return 0;
}
and drop the call to removeEPoll() in CUDTUnited::epoll_remove_usock().
This is a fairly simple and rather crude fix; there should be a cleaner approach.