Cache Hit flow control using PCQ
Uncontrolled HTTP and P2P cache hit traffic sometimes congests an ISP's distribution network. As a result, all other types of traffic, including HTTP browsing, suffer considerable delay.
This happens mostly when large files are delivered from the cache to the subscribers at unlimited speed. During such a download, an individual user may see speeds of 70 Mbps or more, and when several users download at that speed simultaneously, the ISP's distribution network may experience severe congestion. While congested, all other traffic, including HTTP browsing itself, suffers severely. HTTP browsing involves the quick retrieval of a large number of small objects from the internet to the subscribers' computers; if the distribution network is congested, delivery of those objects is considerably delayed, which results in slow browsing.
ISPs then start doubting the cache's HTTP caching performance. This leads to the misconception that while subscribers download torrents from the cache, HTTP delivery from the cache becomes slower. However, this is usually not a fault of the caching solution; rather, the distribution network is flooded with cache hit traffic.
Such problems can be avoided if the cache hit traffic flow from the cache to the subscribers is controlled. While sophisticated per-user bandwidth control solutions exist, a MikroTik router can control HTTP and P2P cache hit traffic per user, per download instance, in a very cost-effective manner.
This article is particularly useful for cache deployments where there is no bandwidth control device between the cache and the subscribers.
ISPs that use wireless links in the last mile in a point-to-multipoint setup may also find this article useful.
1. All users in the general category would get a maximum download speed (HTTP and/or P2P) of 8 Mbps from the cache.
2. All users in the 16M category would get a maximum download speed (HTTP and/or P2P) of 16 Mbps per file download instance from the cache.
3. All users in the 12M category would get a maximum download speed (HTTP and/or P2P) of 12 Mbps per file download instance from the cache.
4. Multiple simultaneous downloads by a single user would share that user's own speed quota (8M/12M/16M) among the download instances.
5. In this example, the individual user 192.168.100.1 and all users in network 192.168.220.0/24 would get 12 Mbps for cache hits (HTTP and/or P2P).
6. Likewise, the individual user 192.168.200.1 and all users in network 192.168.210.0/24 would get 16 Mbps for cache hits (HTTP and/or P2P).
7. All other users of the 192.168.100.0/24 and 192.168.200.0/24 networks fall into the general category, with an 8 Mbps quota for cache hits (HTTP and/or P2P).
1. IP address of the cache is 192.168.248.14.
2. The HTTP proxy in the cache runs in 'hidden' mode, meaning it spoofs the IP addresses of the subscribers while sending requests to the origin servers, so the servers do not detect the presence of the HTTP proxy.
3. HTTP cache hit traffic is marked with DSCP=4 by the cache, so all traffic through the MikroTik with DSCP mark 4 is considered an HTTP cache hit.
4. P2P cache hit traffic has the cache's IP as its source address, so any traffic through the MikroTik with source IP address 192.168.248.14 is considered a P2P cache hit.
Static simple queues alone won't work: if the ISP has a few thousand users, that many simple queues would have to be defined, which is fairly impractical. Static simple queues also cannot follow dynamic assignment of IP addresses to subscribers.
We need to use simple queues together with PCQ queue types. PCQ automatically creates thousands of dynamic sub-queues, one per subscriber.
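As a minimal sketch of the idea (the queue-type name pcq-per-user here is illustrative, not part of the actual setup below), a single PCQ queue type with a destination-address classifier makes RouterOS open one dynamic sub-queue per subscriber IP, each limited to pcq-rate:

```
# Illustration only: one PCQ type serves any number of subscribers.
# pcq-classifier=dst-address -> one dynamic sub-queue per destination IP,
# each capped at pcq-rate (8M here), created and removed automatically.
/queue type
add kind=pcq name=pcq-per-user pcq-rate=8M pcq-classifier=dst-address
```

One static simple queue referencing such a PCQ type thus replaces thousands of per-user static queues, and it keeps working when subscribers' IP addresses are assigned dynamically.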
[Internet]
    |
    |
[L-3 Switch]--------[MikroTik in bridge mode using ether2 and ether3]---------[HTTP & P2P Cache]
    |
    |
[Subscribers]

Subscribers' subnets in this example: 192.168.100.0/24; 192.168.200.0/24; 192.168.210.0/24; 192.168.220.0/24
1. Create a bridge named bridge1
/interface bridge add name="bridge1"
2. Add interfaces ether2 and ether3 to bridge1
/interface bridge port
add interface=ether2 bridge=bridge1
add interface=ether3 bridge=bridge1
3. Configure bridge1 to use ip firewall so that packet marking works.
/interface bridge settings set use-ip-firewall=yes
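To check the bridge configuration before proceeding, the following commands can be used (exact output fields vary by RouterOS version):

```
# ether2 and ether3 should both be listed as ports of bridge1
/interface bridge port print
# use-ip-firewall should read "yes", otherwise mangle will not see bridged traffic
/interface bridge settings print
```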
1. Add the IP address of the cache server to the address list Cache.
/ip firewall address-list add address=192.168.248.14 disabled=no list=Cache
2. Add host/network addresses to the address lists Client_12M or Client_16M for a 12 Mbps or 16 Mbps download speed quota. Note that address-list names are case sensitive, so Client_16M must be spelled consistently here and in the mangle rules.
/ip firewall address-list
add address=192.168.100.1 disabled=no list=Client_12M
add address=192.168.200.1 disabled=no list=Client_16M
add address=192.168.220.0/24 disabled=no list=Client_12M
add address=192.168.210.0/24 disabled=no list=Client_16M
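The resulting lists can be inspected to confirm each subscriber landed in the intended speed class, for example:

```
# Entries for the 12 Mbps class: 192.168.100.1 and 192.168.220.0/24
/ip firewall address-list print where list=Client_12M
# Entries for the 16 Mbps class: 192.168.200.1 and 192.168.210.0/24
/ip firewall address-list print where list=Client_16M
```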
1. Enable connection tracking, which is required for the mangle and queue rules to work. The TCP established timeout is reduced from the default of 1 day to 1 hour.
/ip firewall connection tracking set enabled=yes tcp-established-timeout=1h
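The setting can be verified afterwards, e.g.:

```
# Should show enabled=yes and tcp-established-timeout=1h
/ip firewall connection tracking print
```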
1. All packets with a source IP address equal to that of the P2P interface on CacheMARA, as well as packets carrying DSCP mark 4, need to be classified into the 12M, 16M, or General class by packet marking.
/ip firewall mangle
add action=mark-packet chain=prerouting comment="HTTP Cache Hit Packet Mark for 12M Clients" dst-address-list=Client_12M new-packet-mark=Cache_Hit_12M passthrough=no dscp=4
add action=mark-packet chain=prerouting comment="P2P Cache Hit Packet Mark for 12M Clients" dst-address-list=Client_12M new-packet-mark=Cache_Hit_12M passthrough=no src-address-list=Cache
add action=mark-packet chain=prerouting comment="HTTP Cache Hit Packet Mark for 16M Clients" dst-address-list=Client_16M new-packet-mark=Cache_Hit_16M passthrough=no dscp=4
add action=mark-packet chain=prerouting comment="P2P Cache Hit Packet Mark for 16M Clients" dst-address-list=Client_16M new-packet-mark=Cache_Hit_16M passthrough=no src-address-list=Cache
add action=mark-packet chain=prerouting comment="HTTP Cache Hit Packet Mark for All Clients" new-packet-mark=Cache_Hit_General passthrough=no dscp=4
add action=mark-packet chain=prerouting comment="P2P Cache Hit Packet Mark for All Clients" new-packet-mark=Cache_Hit_General passthrough=no src-address-list=Cache
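Since all rules use passthrough=no, the first matching rule wins, so the two General rules must stay last. To confirm the classification actually works, start a cache-hit download from a test client in each class and watch the rule counters grow:

```
# Bytes/packets counters per mangle rule; the rule for the test client's
# class should be the one incrementing during the download
/ip firewall mangle print stats
```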
Per Connection Queuing
1. Different PCQ queue types for the 8M, 12M, and 16M classes need to be created. Setting pcq-classifier=dst-address makes each PCQ open one dynamic sub-queue per subscriber (destination) address; without a classifier, all users of a class would share a single pcq-rate limit.
/queue type
add kind=pcq name=8Mbps pcq-rate=8M pcq-classifier=dst-address
add kind=pcq name=12Mbps pcq-rate=12M pcq-classifier=dst-address
add kind=pcq name=16Mbps pcq-rate=16M pcq-classifier=dst-address
Create simple queues for different speed limits.
/queue simple
add comment="HTTP P2P Cache Hit @ 16 Mbps" name=HTTP_P2P_16 packet-marks=Cache_Hit_16M queue=16Mbps/16Mbps
add comment="HTTP P2P Cache Hit @ 12 Mbps" name=HTTP_P2P_12 packet-marks=Cache_Hit_12M queue=12Mbps/12Mbps
add comment="HTTP P2P Cache Hit @ 8 Mbps" name=HTTP_P2P_8 packet-marks=Cache_Hit_General queue=8Mbps/8Mbps
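Once cache hit traffic flows, the per-class rates can be watched from the CLI, and Winbox's Queue List shows the dynamic PCQ sub-queues created per subscriber:

```
# Current rate and byte counters for the three class queues
/queue simple print stats
# Live rate of a single class queue (Ctrl-C to stop)
/queue simple monitor HTTP_P2P_8
```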
Author: Sudipta Kumar Pal