INTELLIGENT CACHE SYSTEM

WHAT IS INTELLIGENT CACHING?

As part of media optimization programmes, intelligent caching is used to improve the user experience of online video content over wireless networks.

According to the Cisco Visual Networking Index (VNI), internet traffic will increase fivefold between 2009 and 2013, with video content accounting for 90% of that increase. Telcos must adapt to this increased demand and be able to deliver consistent video to end users without buffering or stuttering. They can do so by expanding capacity, as well as by integrating intelligent caching and media optimization technologies into the network to detect and correct possible content delivery issues.

Many Internet services, such as FTP, Usenet, and the Web, can be modelled as a collection of files stored on a set of servers and accessed by a set of clients. Long response times are among the most common issues people face when using the Internet today.

This is caused by:

  • the load on the servers that hold the documents, as well as the strain on the communication links that carry them;
  • slow connections: the problem is particularly acute for sites that are connected to the rest of the Internet via a slow link;
  • bandwidth limits: many clients, including those of geographically isolated Internet service providers, small businesses, and dial-up users, have considerable bandwidth limits. Although high-speed links are becoming more widely available, the growing traffic generated by new applications keeps these links congested.

For clients accessing the Internet through a low-speed, heavily loaded link, our system tries to reduce overall file access latency.

The latency of such a system is made up of three parts (a simple additive model is sketched after the list):

  1. The delay between the client and the local end of the link.
  2. The delay between the two ends of the link, due to low link speed and link congestion.
  3. The delay between the remote end of the link and the server, due to the server’s response time and congestion elsewhere on the Internet.
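
As a rough illustration (the numbers and function are ours, not from the original system), the three components simply add up:

    # A minimal additive latency model (illustrative assumption only).
    # All values are in seconds and would be measured in a real deployment.
    def total_latency(client_to_link, link_delay, link_to_server):
        """Total file-access latency as the sum of the three delay components."""
        return client_to_link + link_delay + link_to_server

    # Example: a fast local hop, a slow congested link, and a distant server.
    print(total_latency(0.005, 1.2, 0.3))  # -> 1.505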

A key aspect of our approach is the use of a split gateway, with caches at both ends of the slow link. This enables us to provide services that would be impossible with a single cache.

The following methods are used (a minimal cache-and-prefetch sketch follows the list):

  • Recently accessed files are cached.
  • Files that are likely to be accessed soon are pre-fetched. (Files to pre-fetch are chosen based on the currently referenced file, using a generic model of file access patterns as well as the history of accesses to specific files.)
  • Loading of large and/or low-priority files is postponed.
  • Frequently visited pages are automatically refreshed during off-peak hours.
  • Files are compressed and differenced. (If an updated file is similar to its previous version, only the differences are transmitted.)
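
A minimal sketch of the first two methods above, assuming a fetch function and a hypothetical predict_next history model standing in for the access-pattern model described:

    from collections import OrderedDict

    class GatewayCache:
        """LRU cache of recently accessed files with a simple pre-fetch hook."""

        def __init__(self, capacity, fetch, predict_next):
            self.capacity = capacity          # max number of cached files
            self.fetch = fetch                # function: url -> file contents
            self.predict_next = predict_next  # hypothetical model: url -> [urls]
            self.store = OrderedDict()

        def get(self, url):
            if url in self.store:
                self.store.move_to_end(url)   # mark as recently used
                data = self.store[url]
            else:
                data = self.fetch(url)
                self._insert(url, data)
            # Pre-fetch files the history model expects to be requested next.
            for candidate in self.predict_next(url):
                if candidate not in self.store:
                    self._insert(candidate, self.fetch(candidate))
            return data

        def _insert(self, url, data):
            self.store[url] = data
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)  # evict least recently used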

When data is retrieved from a remote location, the concerns mentioned above, namely data availability and network latency, can affect the user query. Our proposed Intelligent Cache Management (ICM) system saves the data fetched by these remote requests on the assumption that it will be accessed again. If the same query is submitted again, users need not retrieve data from the remote location; instead, our system retrieves the records from the local cache.

Maintaining such a cache is difficult because the next query may not be exactly the same: a query may be satisfied partly from the local cache and partly by remote data access.
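
As a hedged sketch of partial satisfaction, assuming records are partitioned by region (all names here are illustrative, not the ICM system's actual interface):

    def answer_query(regions, cache, fetch_remote):
        """Satisfy a per-region query partly from cache, partly remotely.

        regions:      set of region names the query touches
        cache:        dict mapping region -> cached record set
        fetch_remote: function fetching records for a set of regions
        """
        cached_part = {r: cache[r] for r in regions if r in cache}
        missing = regions - cached_part.keys()
        remote_part = fetch_remote(missing) if missing else {}
        cache.update(remote_part)          # save remote results for reuse
        return {**cached_part, **remote_part}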

In the context of a national demographic database, we can confidently make the following assumptions.

1. Data is distributed regionally; each region has its own local database, in addition to cached and replicated data. All regions use the same distributed DBMS to manage their data.

2. A single schema for structured data is known in every operating region.

3. Data from local or nearby locations is accessed more frequently, while data from distant geographical locations is accessed less often. To reduce cache management overheads, the system may therefore keep a replica of neighbouring datasets.

4. Data is read far more frequently than it is updated.

5. Because the Data Grid is part of an intra-organizational arrangement, a unified policy for concerns such as security and data consistency can be applied.

The gateway-to-gateway relay protocol (GGRP)

The GGRP is used to send requests from the local gateway to the remote gateway, and data and status information in the reverse direction. The protocol is modelled as a finite state machine.

The GGRP message structure and operation

Messages are broken into blocks of up to 256 bytes. Blocks are multiplexed on the channel based on the request’s priority (which may change dynamically); small objects are also given precedence over large ones.

Because replies are sent in blocks, the reply message includes the information needed to reassemble the entire object at the destination. Flags identify the first and last blocks of a file, and each block carries the request ID and the byte offset from the beginning of the file so the object can be reassembled at the local gateway.
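
A minimal sketch of the block structure and reassembly just described; the field names are our own, and only the 256-byte limit, the request ID, the byte offset, and the first/last flags come from the text:

    from dataclasses import dataclass

    BLOCK_SIZE = 256  # maximum payload per block, per the description above

    @dataclass
    class Block:
        request_id: int   # identifies which request this block answers
        offset: int       # byte offset from the beginning of the file
        first: bool       # flag marking the first block of the file
        last: bool        # flag marking the last block of the file
        payload: bytes    # at most BLOCK_SIZE bytes

    def split(request_id, data):
        """Break a reply into blocks of up to BLOCK_SIZE bytes."""
        blocks = []
        for off in range(0, len(data), BLOCK_SIZE):
            chunk = data[off:off + BLOCK_SIZE]
            blocks.append(Block(request_id, off, off == 0,
                                off + BLOCK_SIZE >= len(data), chunk))
        return blocks

    def reassemble(blocks):
        """Rebuild the object at the local gateway using the byte offsets."""
        return b"".join(b.payload for b in sorted(blocks, key=lambda b: b.offset))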

The protocol’s user interface

The protocol provides the following primitives (a skeletal interface is sketched after the two lists).

At the local gateway:

  • Send request: adds a URL request to the local request queue.
  • Receive data: at least the first block of a file has arrived at the local gateway.
  • Change priority: alters the priority of a previous request. A message with the lowest priority can be used to cancel a previous request.
  • Receive status: indicates an error or other problem at the remote gateway.

At the remote gateway:

  • Receive request: the local gateway has requested a file.
  • Send data: starts transferring a document to the local gateway.
  • Receive priority change: indicates that a previous request has a new priority.
  • Send status: reports errors and other conditions to the local gateway.
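
A skeletal rendering of these primitives as interfaces; the method signatures are assumptions for illustration, and only the primitive names come from the protocol description:

    from abc import ABC, abstractmethod

    class LocalGateway(ABC):
        @abstractmethod
        def send_request(self, url, priority): ...            # queue a URL request
        @abstractmethod
        def receive_data(self, block): ...                    # a block has arrived
        @abstractmethod
        def change_priority(self, request_id, priority): ...  # lowest = cancel
        @abstractmethod
        def receive_status(self, request_id, error): ...      # remote-side problem

    class RemoteGateway(ABC):
        @abstractmethod
        def receive_request(self, url, priority): ...         # local gateway asked for a file
        @abstractmethod
        def send_data(self, request_id): ...                  # start transferring the document
        @abstractmethod
        def receive_priority_change(self, request_id, priority): ...
        @abstractmethod
        def send_status(self, request_id, error): ...         # report errors back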

How is intelligent caching different from other forms of caching?

Intelligent caching entails looking for patterns in data traffic and determining which videos are likely to be downloaded several times in a short period. The content is then downloaded from the internet in its original quality, compressed into various sizes, and stored in a local cache. What distinguishes intelligent caching is that material is optimised at several levels so the system can respond with the appropriate content based on network conditions, and the cached (and optimised) version is served only when the network is congested.
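
A minimal sketch of that serving decision, assuming the cache holds an original and an optimised variant per URL (the variant keys are illustrative):

    def choose_variant(cache, url, congested):
        """Serve the optimised cached copy only when the network is congested."""
        variants = cache.get(url)          # e.g. {"original": ..., "optimised": ...}
        if variants is None:
            return None                    # not cached; caller fetches from origin
        if congested and "optimised" in variants:
            return variants["optimised"]   # compressed copy under congestion
        return variants["original"]        # full quality when the network is healthy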

To ensure a pleasant user experience, content optimization began with 2G networks, compressing all content travelling over the network. “From an optimization aspect, the fundamental impetus for 2G networks was optimisation by sheer force,” explains Ram Venketaramani, head of product management and marketing at Openwave. “However, with the advent of HSPA+ and 4G networks, there is a greater demand for congestion-aware optimization. In other words, the optimiser should only intervene with optimised content if the user is experiencing congestion issues such as video stalls. Otherwise, it should just remain out of the way.” With the arrival of 3G and 4G networks, developers have realised that optimised cache delivery is only required when bandwidth is constrained. Intelligent caching serves as a safety net for networks, ensuring high-quality video content.

What are the advantages of intelligent caching for operators?

By design, mobile networks have more variable conditions and do not offer the stability of a fixed network. Thanks to technology incorporated into the network that can adjust the video bit rate to the network conditions, operators can have more confidence in the quality their consumers are receiving.
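
One plausible shape for such bit-rate adaptation (the ladder values and headroom factor are illustrative assumptions, not Openwave’s algorithm):

    # Hypothetical bit-rate ladder; a real deployment would derive these
    # rungs from the transcoded variants held in the cache.
    LADDER_KBPS = [250, 500, 1000, 2500]

    def pick_bitrate(measured_throughput_kbps, headroom=0.8):
        """Pick the highest rung that fits within the measured throughput,
        keeping some headroom so the player does not stall."""
        budget = measured_throughput_kbps * headroom
        fitting = [r for r in LADDER_KBPS if r <= budget]
        return max(fitting) if fitting else min(LADDER_KBPS)

    print(pick_bitrate(1400))  # -> 1000: highest rung within 80% of 1400 kbps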

Openwave Systems, for example, provides software that sits in mobile networks and monitors traffic flow. Venketaramani explains that, in addition to the user-experience benefits of delivering compressed data, the ability to pre-optimise content lowers the cost of transcoding: video content is five to ten times more expensive to transcode in terms of CPU cost. Another advantage is near-instantaneous delivery: when content is pre-processed, it can be delivered to the end user more quickly.

Overall, intelligent caching is intended to improve the customer’s ability to stream video on their mobile network without interruption.

In terms of net neutrality, how does intelligent caching work?

Because of the software’s capacity to dynamically optimise video traffic flows, intelligent caching is linked to net neutrality issues and disputes. Problems arise if a company is accused of optimising a content provider’s content without the provider’s explicit authorization. Carriers must also demonstrate that they treat all traffic flows fairly and do not discriminate between content providers. Intelligent caching supports this by delivering the optimised version only when the destination device is experiencing network problems.

Issues with sizing and performance

1. Pre-fetching

A key feature of our system, not found in traditional caching systems, is the pre-fetching of files in anticipation of user queries. Pre-fetching adds load to the link by sending potentially unneeded files, which may seem unwarranted if the link is already overburdened. It improves performance, however, for two reasons.

First, pre-fetch requests have lower priority than user requests, so they never delay the retrieval of files for user requests. We have observed that unless a link is extremely congested, there are intervals when it is idle; pre-fetched files can be transferred during these intervals.

Second, the remote gateway caches pre-fetched files, so when a user requests them they can be sent immediately, bypassing the server access latency. This greatly improves access to slow servers, notably for in-line graphics.
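
A minimal sketch of the priority rule just described, assuming two fixed priority levels (the class and names are ours):

    import heapq

    USER = 0      # highest priority: user requests are served first
    PREFETCH = 1  # pre-fetch traffic only uses otherwise-idle capacity

    class LinkScheduler:
        """Send user-requested blocks before pre-fetch blocks."""

        def __init__(self):
            self.queue = []
            self.counter = 0  # tie-breaker preserving FIFO order per priority

        def enqueue(self, priority, block):
            heapq.heappush(self.queue, (priority, self.counter, block))
            self.counter += 1

        def next_block(self):
            """Return the next block to transmit, or None if the link is idle."""
            return heapq.heappop(self.queue)[2] if self.queue else None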

2. Overhead for links

Communication between the gateways, particularly notifications about changes in the neighbourhood, places some strain on the link. However, because most of this traffic flows from the local to the remote gateway while most of the data flows in the reverse direction, the impact on performance is small. Requests are also aggregated to lessen the burden on the link.

3. Size of cache and history

Ideally, the local cache would store all files ever accessed by clients, and the remote cache all pre-fetched files. In practice, the cache may hold only a few minutes’ worth of pre-fetched files and several days’ worth of requested files.

The history list, on the other hand, needs many more entries; we recommend keeping at least six months’ worth. This lets us predict access patterns even for files that have been evicted from the cache. Keeping history information for 1 million URLs (about six months’ worth) would take only about 100 MB of disk space, which is manageable.
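
A quick back-of-the-envelope check of that figure:

    # 100 MB of history spread over 1 million URLs leaves roughly
    # 100 bytes per entry: enough for a short URL plus a few counters.
    urls = 1_000_000
    total_bytes = 100 * 1024 * 1024
    print(total_bytes / urls)  # -> ~105 bytes per URL entry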

What direction will this technology take in the future?

Today, intelligent caching takes place in the carrier network’s services layer (data centre). Companies like Openwave are looking for ways to reduce costs by moving it to the mobile network edge. “We’re looking into ways to push our technology onto the network edge by partnering with relevant network equipment vendors to realize increased savings for our carrier customers,” Venketaramani says.

CONCLUSION

In a data grid, an effective cache management system and environment can help cut the access time of remote accesses. We looked at a few common cases where replication and caching help retrieve a record set more quickly and efficiently. We are building a prototype of an intelligent data grid cache and will track its impact once it is integrated with existing OGSA services. We believe that Intelligent Cache Management, together with its integration with existing systems such as Globus, will open up new possibilities for the next generation of homogeneous, distributed, data- and computation-intensive, high-performance systems. For such a data grid setting, the Intelligent Cache Management component can be quite useful.
