AZURE REDIS CACHE

What is Azure Redis Cache?

Azure Redis Cache is based on the popular open-source Redis cache. It gives you access to a secure, dedicated Redis cache, managed by Microsoft, and accessible from anywhere in the world.

Azure Redis Cache is available in the following tiers:

  • Basic—Single node, multiple sizes, ideal for development/test and non-critical workloads. The basic tier has no SLA.
  • Standard—A replicated cache in a two node Primary/Secondary configuration managed by Microsoft, with a high availability SLA.
  • Premium—The Premium tier includes a high availability SLA and all the Standard-tier features and more, such as better performance than Basic or Standard-tier caches, support for bigger workloads, disaster recovery, and enhanced security. Additional features include:
    • Redis persistence allows you to persist data stored in the Redis cache. You can also take snapshots and back up the data, which you can load in case of a failure.
    • Redis cluster automatically shards data across multiple Redis nodes, so you can create workloads of bigger memory sizes (greater than 53 GB) and get better performance.
    • Azure Virtual Network (VNET) deployment provides enhanced security and isolation for your Azure Redis Cache, as well as subnets, access control policies and other features to further restrict access.

Basic and Standard caches are available in sizes up to 53 GB and Premium caches are available in sizes up to 530 GB with more on request.

What Redis Cache offering and size should I use?

Each Azure Redis Cache offering provides different levels of size, bandwidth, high availability, and SLA options.

The following are considerations for choosing a Cache offering.

  • Memory: The Basic and Standard tiers offer 250 MB – 53 GB. The Premium tier offers up to 530 GB with more available on request. For more information, see Azure Redis Cache Pricing.
  • Network Performance: If you have a workload that requires high throughput, the Premium tier offers more bandwidth compared to Standard or Basic. Also, within each tier, larger cache sizes have more bandwidth because of the underlying VM that hosts the cache. See the following table for more information.
  • Throughput: The Premium tier offers the maximum available throughput. If the cache server or client reaches the bandwidth limits, you may receive timeouts on the client side. For more information, see the following table.
  • High Availability/SLA: Azure Redis Cache guarantees that a Standard/Premium cache is available at least 99.9% of the time. To learn more about our SLA, see Azure Redis Cache Pricing. The SLA only covers connectivity to the Cache endpoints. The SLA does not cover protection from data loss. We recommend using the Redis data persistence feature in the Premium tier to increase resiliency against data loss.
  • Redis Data Persistence: The Premium tier allows you to persist the cache data in an Azure Storage account. In a Basic/Standard cache, all the data is stored only in memory, so underlying infrastructure issues can result in data loss. We recommend using the Redis data persistence feature in the Premium tier to increase resiliency against data loss. Azure Redis Cache offers RDB and AOF (coming soon) persistence options. For more information, see How to configure persistence for a Premium Azure Redis Cache.
  • Redis Cluster: To create caches larger than 53 GB, or to shard data across multiple Redis nodes, you can use Redis clustering, which is available in the Premium tier. Each node consists of a primary/replica cache pair for high availability. For more information, see How to configure clustering for a Premium Azure Redis Cache.
  • Enhanced security and network isolation: Azure Virtual Network (VNET) deployment provides enhanced security and isolation for your Azure Redis Cache, as well as subnets, access control policies, and other features to further restrict access. For more information, see How to configure Virtual Network support for a Premium Azure Redis Cache.
  • Configure Redis: In both the Standard and Premium tiers, you can configure Redis for Keyspace notifications.
  • Maximum number of client connections: The Premium tier offers the maximum number of clients that can connect to Redis, with a higher number of connections for larger sized caches. For more information, see Azure Redis Cache pricing.
  • Dedicated Core for Redis Server: In the Premium tier, all cache sizes have a dedicated core for Redis. In the Basic/Standard tiers, the C1 size and above have a dedicated core for the Redis server. Redis is single-threaded, so having more than two cores does not provide additional benefit over having just two cores, but larger VM sizes typically have more bandwidth than smaller sizes. If the cache server or client reaches the bandwidth limits, you receive timeouts on the client side.
  • Performance improvements: Caches in the Premium tier are deployed on hardware that has faster processors, giving better performance compared to the Basic or Standard tier. Premium tier Caches have higher throughput and lower latencies.

In what region should I locate my cache?

For best performance and lowest latency, locate your Azure Redis Cache in the same region as your cache client application.

How am I billed for Azure Redis Cache?

Azure Redis Cache pricing is listed on the Azure Redis Cache Pricing page as an hourly rate. Caches are billed on a per-minute basis from the time a cache is created until the time it is deleted. There is no option for stopping or pausing the billing of a cache.

What do the StackExchange.Redis configuration options do?

StackExchange.Redis has many options. This section talks about some of the common settings. For more detailed information about StackExchange.Redis options, see StackExchange.Redis configuration.

  • AbortOnConnectFail—When set to true, the connection does not reconnect after a network failure. Recommendation: set it to false and let StackExchange.Redis reconnect automatically.
  • ConnectRetry—The number of times to repeat connection attempts during the initial connect. Recommendation: see the notes that follow for guidance.
  • ConnectTimeout—Timeout in milliseconds for connect operations. Recommendation: see the notes that follow for guidance.

Usually the default values of the client are sufficient. You can fine-tune the options based on your workload.
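
As a minimal sketch, these options can be set in C# when creating the connection (the host name and access key are placeholders, and the specific retry and timeout values are illustrative, not prescriptive):

using StackExchange.Redis;

// Placeholders: substitute your cache host name and access key.
var options = new ConfigurationOptions
{
    EndPoints = { "contoso.redis.cache.windows.net:6380" },
    Password = "<access-key>",
    Ssl = true,
    AbortOnConnectFail = false, // let StackExchange.Redis reconnect automatically
    ConnectRetry = 3,           // illustrative: retry the initial connect up to 3 times
    ConnectTimeout = 5000       // illustrative: 5-second connect timeout
};
ConnectionMultiplexer connection = ConnectionMultiplexer.Connect(options);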

  • Retries
    • For ConnectRetry and ConnectTimeout, the general guidance is to fail fast and retry again. This guidance is based on your workload and how much time on average it takes for your client to issue a Redis command and receive a response.
    • Let StackExchange.Redis automatically reconnect instead of checking connection status and reconnecting yourself. Avoid using the ConnectionMultiplexer.IsConnected property.
    • Snowballing—sometimes you may run into an issue where you are retrying and the retries snowball and never recover. If snowballing occurs, consider using an exponential backoff retry algorithm, as described in Retry general guidance published by the Microsoft Patterns & Practices group.
  • Timeout values
    • Consider your workload and set the values accordingly. If you are storing large values, set the timeout to a higher value.
    • Set AbortOnConnectFail to false and let StackExchange.Redis reconnect for you.
    • Use a single ConnectionMultiplexer instance for the application. You can use a LazyConnection to create a single instance that is returned by a Connection property, as shown in Connect to the cache using the ConnectionMultiplexer class and in the sketch after this list.
    • Set the ConnectionMultiplexer.ClientName property to an app instance unique name for diagnostic purposes.
    • Use multiple ConnectionMultiplexer instances for custom workloads.
      • You can follow this model if you have varying load in your application. For example:
      • You can have one multiplexer for dealing with large keys.
      • You can have one multiplexer for dealing with small keys.
      • You can set different values for connection timeouts and retry logic for each ConnectionMultiplexer that you use.
      • Set the ClientName property on each multiplexer to help with diagnostics.
      • This guidance may lead to more streamlined latency per ConnectionMultiplexer.
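
As mentioned above, a single ConnectionMultiplexer per application is the common pattern. Below is a minimal sketch of the lazy-initialization approach (the connection string is a placeholder, and the ClientName value is an illustrative app instance name):

using System;
using StackExchange.Redis;

public static class RedisConnection
{
    // One shared multiplexer for the whole application, created on first use.
    private static readonly Lazy<ConnectionMultiplexer> LazyConnection =
        new Lazy<ConnectionMultiplexer>(() =>
        {
            var options = ConfigurationOptions.Parse(
                "contoso.redis.cache.windows.net:6380,password=<access-key>,ssl=True,abortConnect=False");
            options.ClientName = "OrderService-Instance01"; // illustrative, for diagnostics
            return ConnectionMultiplexer.Connect(options);
        });

    public static ConnectionMultiplexer Connection => LazyConnection.Value;
}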

What are Redis databases?

Redis Databases are just a logical separation of data within the same Redis instance. The cache memory is shared across all the databases, and the actual memory consumption of a given database depends on the keys/values stored in that database. For example, a C6 cache has 53 GB of memory. You can choose to put all 53 GB into one database, or you can split it across multiple databases.
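
With StackExchange.Redis, for example, a logical database is selected by zero-based index when you get a database handle. A minimal sketch, assuming an existing ConnectionMultiplexer named connection:

using StackExchange.Redis;

// Database 0 is the default; other databases are selected by index.
IDatabase db0 = connection.GetDatabase();
IDatabase db5 = connection.GetDatabase(5);

// The same key name is independent in each logical database.
db5.StringSet("greeting", "hello");
bool inDb0 = db0.KeyExists("greeting"); // false unless also set in database 0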

When should I enable the non-SSL port for connecting to Redis?

Redis server does not natively support SSL, but Azure Redis Cache does. If you are connecting to Azure Redis Cache and your client supports SSL, like StackExchange.Redis, then you should use SSL. Enable the non-SSL port only when your client library does not support SSL.
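
With StackExchange.Redis, for example, an SSL connection uses port 6380 (the host name and access key below are placeholders):

using StackExchange.Redis;

// ssl=True connects over the SSL port (6380); new Azure caches have the
// non-SSL port (6379) disabled by default.
ConnectionMultiplexer connection = ConnectionMultiplexer.Connect(
    "contoso.redis.cache.windows.net:6380,password=<access-key>,ssl=True,abortConnect=False");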

WHAT IS A WEBHOOK?

The concept of a WebHook is simple. A WebHook is an HTTP callback: an HTTP POST that occurs when something happens; a simple event-notification via HTTP POST.

A web application implementing WebHooks will POST a message to a URL when certain things happen. When a web application enables users to register their own URLs, the users can then extend, customize, and integrate that application with their own custom extensions or even with other applications around the web. For the user, WebHooks are a way to receive valuable information when it happens, rather than continually polling for that data and receiving nothing valuable most of the time. WebHooks have enormous potential and are limited only by your imagination! (No, it can’t wash the dishes. Yet.)

WebHooks are meant to do something. To get your imagination spinning with your own ideas, here are the three general ways in which WebHooks can be used to make your web more programmable:

Push: receiving data in real time

Push is the simplest of reasons to use WebHooks. As was just stated above, no more polling every couple of minutes to find out if there is new information. Just register a WebHook and receive the data at your doorstep as soon as it exists. It’s less work, less hassle, and you’ll probably even receive it sooner than if you were asking for it every couple of minutes.

Pipes: receiving data and passing it on

A Pipe happens when your WebHook not only receives real-time data, but goes on to do something new and meaningful with it, triggering actions unrelated to the original event. For example, you create a script, register its URL at a photo site, and have it email you when your mother posts a new photo. Or make a script that creates a Twitter message, and have it triggered by a WebHook whenever you add a new product on your commerce website.

Plugins: processing data and giving something in return

This is where the entire web becomes a programming platform. You can use this form of WebHooks to allow others to extend your application. Facebook’s Application Platform uses WebHooks in this way, and so does Google Wave’s robot integration. The general idea is that a web application sending out data via WebHooks will also use the response to modify its own data. At Facebook, when you access an app, Facebook sends a WebHook out to your application saying “Hey, someone’s accessing your application, what do I do?!” The application responds with, “Show the user this page…” Facebook does so, and the pattern continues in the same manner as you continue to use the application. At Google Wave, when you do something in a wave, any robot you’ve added as a participant is notified via a WebHook, and the robot has the ability to modify the wave in its HTTP response. Implement WebHooks in this way in your application if you want to allow others to truly extend and enhance the abilities of your application.

How do they work?

By letting the user specify a URL for various events, the application will POST data to those URLs when the events occur. With the cheap availability of PHP hosting and even easier simple app/script hosting like AppJet or Scriptlets, handling the POST data becomes fairly trivial (a minimal receiver is sketched after the list below). How you use it is up to you and whatever you want to accomplish. Among other things, you can:

  • create notifications to you or anybody via email, IRC, Jabber, …
  • put the data in another app (real-time data synchronization)
  • process the data and repost it using the app’s API
  • validate the data and potentially prevent it from being used by the app
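
As a minimal sketch of the receiving side, here is a hooklet written in C# with ASP.NET Core minimal APIs rather than PHP (the route and handling logic are illustrative assumptions):

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// The URL a user would register with the sending application.
app.MapPost("/hooks/new-photo", async (HttpRequest request) =>
{
    using var reader = new StreamReader(request.Body);
    string payload = await reader.ReadToEndAsync();

    // Do something with the event: notify, sync, repost, validate...
    Console.WriteLine($"WebHook received: {payload}");

    return Results.Ok(); // acknowledge receipt
});

app.Run();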

Why should I care?

As integrated as we perceive the web to be, most web applications today operate in silos. With the rise of APIs we’ve seen mashups and some degree of integration between applications. However, we have not seen the vision of the programmable web: a web where you as the user can “pipe” data between apps much like the Unix command line. Some say RSS is the answer. They are wrong. The heart is in the right place, but the implementation is wrong. RSS is still useful, but it is not going to bring us the true programmable web.

We just need a simple way to get data out in real time so the user can easily do whatever they want with it. That means no polling, no content constraints, and no XML parsing. That means no RSS. Using HTTP is simpler and easier. PHP is a very popular and accessible programming environment, so it’s likely to be used often for writing hooklets… getting data from a web POST in PHP is as simple as $_POST['something']. And making the request to the user script is as simple as making an HTTP request, something already built into most programming environments. In fact, web hooks are easier to implement than an API.

However it is implemented (although the easier it is, the more likely it will be adopted), having an output for the web will complement the input provided by the rising adoption of APIs. When you have both input and output, you have everything you need for apps to easily interact. This will encourage smaller, more focused apps that, together with hook-enabled heavier apps, will let amazing emergent creations happen!

How do I implement WebHooks?

Simply provide your users with the ability to submit their own URL, and POST to that URL when something happens. It’s that simple. There are no specs you have to follow.
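
On the sending side, a minimal sketch in C# (the event payload and stored subscriber URLs are hypothetical):

using System.Net.Http.Json;

// URLs previously submitted by users of your application.
var subscriberUrls = new List<string> { "https://example.com/hooks/new-photo" };

using var client = new HttpClient();
foreach (var url in subscriberUrls)
{
    // POST the event to each registered URL; real senders usually add
    // timeouts, retries, and failure handling around this call.
    await client.PostAsJsonAsync(url, new { @event = "photo.created", photoId = 42 });
}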

Temporal Tables are generally available in Azure SQL Database

Temporal Tables allow you to track the full history of data changes directly in Azure SQL Database, without the need for custom coding. With Temporal Tables you can see your data as of any point in time in the past and use a declarative cleanup policy to control retention of the historical data.

When to use Temporal Tables?

Quite often you may find yourself asking fundamental questions: How did important information look yesterday, a month ago, or a year ago? What changes have been made since the beginning of the year? What were the dominant trends during a specific period of time? Without proper support in the database, questions like these have never been easy to answer.

Temporal Tables are designed to improve your productivity when you develop applications that work with ever-changing data and when you want to derive important insights from the changes.

Use Temporal Tables to do the following (a query sketch follows the list):

  1. Support data auditing in your applications
  2. Analyze trends or detect anomalies over time
  3. Easily implement the slowly changing dimension pattern
  4. Perform fine-grained row repairs in case of accidental data errors made by humans or applications
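
For example, point-in-time analysis becomes a single query with the FOR SYSTEM_TIME clause. A minimal sketch in C# with Microsoft.Data.SqlClient, reusing the WebSiteClicks table from the example that follows (the connection string and timestamp are placeholders):

using Microsoft.Data.SqlClient;

var connectionString = "<azure-sql-connection-string>"; // placeholder
using var connection = new SqlConnection(connectionString);
await connection.OpenAsync();

// Return the rows exactly as they existed at the given UTC instant.
using var command = new SqlCommand(
    "SELECT * FROM [WebSiteClicks] FOR SYSTEM_TIME AS OF @asOf", connection);
command.Parameters.AddWithValue("@asOf",
    new DateTime(2017, 1, 1, 0, 0, 0, DateTimeKind.Utc));

using var reader = await command.ExecuteReaderAsync();
while (await reader.ReadAsync())
{
    Console.WriteLine(reader[0]);
}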

Manage historical data with an easy-to-use retention policy

Keeping a history of changes tends to increase database size, especially if historical data is retained for a longer period of time. Hence, a retention policy for historical data is an important aspect of planning and managing the lifecycle of every temporal table. Temporal Tables in Azure SQL Database come with an extremely easy-to-use retention mechanism: it requires users to set a single parameter during table creation or a table schema change, as shown in the following example.

ALTER TABLE [WebSiteClicks]
SET
(
    SYSTEM_VERSIONING = ON
    (
        HISTORY_TABLE = dbo.WebSiteClicks_History,
        HISTORY_RETENTION_PERIOD = 3 MONTHS
    )
);

You can alter retention policy at any moment and your change will be effective immediately.

Why should you consider Temporal Tables?

If you have requirements for tracking data changes, using Temporal Tables will give you multiple benefits over any custom solution. Temporal Tables will simplify every phase in the development lifecycle: object creation, schema evolution, data modification, point-in-time analysis and data aging.
