Problem Description
Socket programming: Do some ISPs impose rate-limiting on FTP uploads?
I am currently trying to debug a customer issue with the FTP upload feature in one of our products. The feature allows customers to upload files (<1MB) to a central FTP server for further processing. The FTP client code is written in VB.NET.
The customer reports that they receive a "connection forcibly closed by the remote host" error when they try to upload files in the 300KB to 500KB range. However, we have tested this in-house with (relatively speaking) larger files (i.e. 3MB and above) and have never received this error. We upload to the same FTP server the customer connects to, using the same FTP login credentials; the only difference is that we do it from our office.
I know the TCP protocol has flow control built in, so it shouldn't matter how much data is sent in a single Send call, because the protocol will throttle itself accordingly to match the server's internal limits (if I remember correctly...).
So the only thing I can think of is that an intermediate host between the client and the server is artificially rate-limiting the client and dropping the connection (we send the file data in a loop, in 512-byte chunks).
This is the loop used to send the data (buffer is a byte array containing the file data):
' Send the file in 512-byte chunks; the final chunk may be shorter than 512 bytes.
For i = 0 To buffer.Length - 1 Step 512
    Dim chunkSize As Integer = Math.Min(512, buffer.Length - i)
    mDataSocket.Send(buffer, i, chunkSize, SocketFlags.None)
    OnTransferStatus(i, buffer.Length)
Next
Is it possible that the customer's ISP (or their own firewall) is imposing an artificial limit on how much data our client code can send in a given amount of time? If so, what is the best way to handle that situation? I suppose the obvious solution would be to introduce a delay into our send loop, unless there is a way to do this at the socket level.
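If throttling on our side turns out to be necessary, a minimal sketch of the delay approach could look like the following (it reuses mDataSocket, buffer, and OnTransferStatus from the loop above; the 50 ms pause is an arbitrary illustrative value, not a tuned number):

' Hypothetical throttled variant of the send loop: pause briefly between chunks
' so the outgoing data rate stays below any suspected cap.
For i = 0 To buffer.Length - 1 Step 512
    Dim chunkSize As Integer = Math.Min(512, buffer.Length - i)
    mDataSocket.Send(buffer, i, chunkSize, SocketFlags.None)
    OnTransferStatus(i, buffer.Length)
    System.Threading.Thread.Sleep(50) ' illustrative delay between chunks
Next

At the socket level, the closest equivalent I know of is shrinking Socket.SendBufferSize so less data gets queued by the stack at once, but that is not a true rate limit.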
It seems really strange to me that an ISP would handle rate-limit violations by killing the client's connection. Why wouldn't they just rely on TCP/IP's built-in flow-control/throttling mechanisms?
Reference Solutions
Approach 1:
Do a search for Comcast and BitTorrent. Here's one article.
Approach 2:
Try to isolate the issue:
- Let the customer upload the same file to a different server. Maybe the problem is with the client's ... FTP client.
- Get the file from the client and upload it yourself with your client and see if you can repro the issue.
In the end, even if a 3MB file works fine, a 500KB file isn't guaranteed to work, because the issue could be state-dependent and happen while ending the file transfer.
Approach 3:
Yes, ISPs can impose limits on packets as they see fit (although it is ethically questionable). My ISP, for example, has no problem cutting any P2P traffic its hardware manages to sniff out. It's called traffic shaping.
However, for FTP traffic this is highly unlikely, but you never know. The thing is, they never drop your sockets with traffic shaping; they only drop packets. The TCP protocol is handled on each peer's side, so you can drop all the packets in between and the socket stays alive. In some instances, if one of the computers crashes, the socket remains alive as long as you don't try to use it.
I think your best bet is a bad firewall/proxy configuration on the client side. Better explanations here.
Either that, or a faulty or badly configured router or cable at the client's installation.
Approach 4:
500KB is awfully small these days, so I'd be a little surprised if they throttle something that small.
I know you're already chunking your request, but can you determine if any data is transferred? Does the code always fail at the same loop point? Are you able to look at the ftp server logs? What about an entire stack trace? Have you tried contacting the ISP and asking them what policies they have?
That said, assuming that some data makes it through, one thought is that the ISP has traffic shaping and the rules engage after x bytes have been written. What could be happening is that, once the amount of data exceeds x, the socket timeout expires before the data is sent, throwing an exception.
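One way to test that timeout theory (just a sketch, assuming i and chunkSize come from the send loop in the question; the 30-second value is arbitrary) is to set an explicit send timeout and log the offset of the failing chunk:

' Hypothetical instrumentation: if a shaping rule engages after x bytes, the
' failure should show up at a consistent offset with a TimedOut error code.
mDataSocket.SendTimeout = 30000 ' milliseconds; 0 means block indefinitely
Try
    mDataSocket.Send(buffer, i, chunkSize, SocketFlags.None)
Catch ex As SocketException
    Console.WriteLine("Send failed at offset {0}: {1}", i, ex.SocketErrorCode)
    Throw
End Try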
Keep in mind FTP clients create another connection for the data transfer, but if the server detects that the control connection is closed, it will typically kill the data transfer connection. So another thing to check is to ensure the control connection is still alive.
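A rough way to check this (a sketch only; mControlSocket stands in for whatever socket holds the FTP control connection in your client) is to issue a NOOP on the control connection during a long transfer and confirm you still get a 200 reply:

' Hypothetical keep-alive probe on the FTP control connection.
Dim noop As Byte() = System.Text.Encoding.ASCII.GetBytes("NOOP" & vbCrLf)
mControlSocket.Send(noop)
Dim reply(511) As Byte
Dim bytesRead As Integer = mControlSocket.Receive(reply)
Console.WriteLine(System.Text.Encoding.ASCII.GetString(reply, 0, bytesRead)) ' expect a "200 ..." reply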
Lastly, FTP servers usually support resumable transfers, so if all other remedies fail, resuming the failed transfer might be the easiest solution.
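If you go that route, a resumed upload typically uses the REST command before STOR (a sketch only; SendFtpCommand, remoteFileName, and bytesAlreadyUploaded are hypothetical, the last one coming from a SIZE query against the partially uploaded file, and not every server supports REST for STOR):

' Hypothetical resume of a failed upload: restart at the server's byte count,
' then send only the remaining data on the data connection.
SendFtpCommand("REST " & bytesAlreadyUploaded.ToString()) ' expect a 350 reply
SendFtpCommand("STOR " & remoteFileName)                  ' expect a 150 reply
For i = bytesAlreadyUploaded To buffer.Length - 1 Step 512
    mDataSocket.Send(buffer, i, Math.Min(512, buffer.Length - i), SocketFlags.None)
Next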
Approach 5:
I don't think the ISP would try to kill a 500KB file transfer. I'm no expert on either sockets or ISPs... just giving my thoughts on the matter.
(by Mike Spross, Mark Ransom, xmjx, Caerbanog, Robert Paulson, Mostlyharmless)