What is ngrok?

Ngrok is a multiplatform tunnelling, reverse-proxy tool that establishes secure tunnels from a public endpoint on the internet to a locally running network service, while capturing all traffic for detailed inspection and replay.

Before ngrok, when we needed to expose a localhost application to the web (internet), all we could do was deploy the application on a server in a DMZ, or relocate the host to the DMZ and configure NAT in the firewall. We also had to configure the external DNS where the domain is hosted. In general, a DMZ (De-Militarized Zone) is a computer host or small network inserted as a “neutral zone” between a company’s private network and the outside public network. It prevents outside users from getting direct access to a server that holds company data. The following are the issues we faced before deploying ngrok:

  • Unable to expose a localhost application directly to the internet without a DMZ and other network configuration
  • Unable to demonstrate an application to a client at short notice
  • Unable to share websites for testing purposes
  • Unable to develop services that consume WebHooks (HTTP callbacks)
  • Unable to temporarily share a website that runs only on a developer machine
  • Time-consuming network and DNS configuration
  • Unable to debug or inspect HTTP traffic in a precise manner
  • Unable to run networked services on machines that are firewalled off from the internet
  • Unable to expose applications behind an HTTP proxy
  • Unable to forward non-HTTP and non-local network services

Architecture before Ngrok deployment


Using ngrok, we can address all of the above requirements; above all, it serves our business needs in a faster, more secure, and easier manner.

ngrok is a small (about 9 MB) executable (.exe) tool that can be downloaded from the ngrok website. It is generally run as the ngrok command followed by the port number that has to be exposed, as follows:

ngrok http 8080

This gives a random subdomain on ngrok.com, and the tunnel is accessible over both HTTP and HTTPS (secure).


Now anyone can access the application running locally on your machine, from anywhere in the world, using the forwarding URLs provided by the ngrok tool.

Architecture after Ngrok deployment


Conclusion

All in all, this is an amazing, secure, and powerful tool that helps us meet our business needs on time.


Web Forms, ASP.NET MVC, and ASP.NET Web Pages

ASP.NET offers three frameworks for creating web applications: Web Forms, ASP.NET MVC, and ASP.NET Web Pages. All three frameworks are stable and mature, and you can create great web applications with any of them. No matter what framework you choose, you will get all the benefits and features of ASP.NET everywhere.

Each framework targets a different development style. The one you choose depends on a combination of your programming assets (knowledge, skills, and development experience), the type of application you’re creating, and the development approach you’re comfortable with.

Below is an overview of each of the frameworks and some ideas for how to choose between them.

Framework | If you have experience in | Development style | Expertise
Web Forms | Win Forms, WPF, .NET | Rapid development using a rich library of controls that encapsulate HTML markup | Mid-Level, Advanced RAD
MVC | Ruby on Rails, .NET | Full control over HTML markup, code and markup separated, and easy to write tests. The best choice for mobile and single-page applications (SPA). | Mid-Level, Advanced
Web Pages | Classic ASP, PHP | HTML markup and your code together in the same file | New, Mid-Level

Web Forms

With ASP.NET Web Forms, you can build dynamic websites using a familiar drag-and-drop, event-driven model. A design surface and hundreds of controls and components let you rapidly build sophisticated, powerful UI-driven sites with data access.

MVC

ASP.NET MVC gives you a powerful, patterns-based way to build dynamic websites that enables a clean separation of concerns and gives you full control over markup for enjoyable, agile development. ASP.NET MVC includes many features that enable fast, TDD-friendly development for creating sophisticated applications that use the latest web standards.

ASP.NET Web Pages

ASP.NET Web Pages and the Razor syntax provide a fast, approachable, and lightweight way to combine server code with HTML to create dynamic web content. Connect to databases, add video, link to social networking sites, and include many more features that help you create beautiful sites that conform to the latest web standards.
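For example, a minimal Razor page (a sketch; the greeting variable and markup are illustrative) keeps the server code and the HTML in one .cshtml file:

@{
    // Server code runs first; its result is embedded in the markup below.
    var greeting = "Hello, it is " + DateTime.Now.DayOfWeek + "!";
}
<!DOCTYPE html>
<html>
<body>
    <p>@greeting</p>
</body>
</html>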

Notes about Web Forms, MVC, and Web Pages

All three ASP.NET frameworks are based on the .NET Framework and share core functionality of .NET and of ASP.NET. For example, all three frameworks offer a login security model based around membership, and all three share the same facilities for managing requests, handling sessions, and so on that are part of the core ASP.NET functionality.

In addition, the three frameworks are not entirely independent, and choosing one does not preclude using another. Since the frameworks can coexist in the same web application, it’s not uncommon to see individual components of applications written using different frameworks. For example, customer-facing portions of an app might be developed in MVC to optimize the markup, while the data access and administrative portions are developed in Web Forms to take advantage of data controls and simple data access.


What is the default capacity of a List?

As .NET developers we use List very frequently. List represents a strongly typed list of objects that can be accessed by index, and it provides methods to search, sort, and manipulate lists.

Most of us assume the framework development team has made all List operations optimal, and we rarely stop to think whether we can do something to make them better. As developers, there are a few things we can do to help the .NET Framework use List optimally. One of them is setting its capacity when we create a list: if you know in advance how many items you are going to add, it is better to create the list with a predefined capacity.

If we don’t define a capacity, a list starts with a Capacity of 0. When you add the first element, the .NET Framework allocates a capacity of 4. After that, the capacity doubles whenever an expansion is needed.

This code snippet demonstrates the behavior:

List<int> list = new List<int>();
int capacity = list.Capacity;                  // 0 for a newly created list
Console.WriteLine("Capacity: " + capacity);

for (int i = 0; i < 100000; i++)
{
    list.Add(i);
    if (list.Capacity > capacity)
    {
        // The backing array has grown: 4, 8, 16, 32, ...
        capacity = list.Capacity;
        Console.WriteLine("Capacity: " + capacity);
    }
}

Capacity should be used if you know roughly how many items you want to store in a List (or in a Stack, or a Queue).

That way you will avoid memory copying. The copying happens because, under the hood, a List (like a Stack or a Queue) relies on an array to store its items. That array’s size is your capacity, which is not the same as the list’s size. When the list needs to grow beyond the size of the array, the List implementation allocates a bigger array and copies all items from the old array into the new one, along with the newly added items.

So, if you know that you may have, say, 50 to 60 items in your list, create a list with a capacity of 60: no reallocation will happen, and the Garbage Collector will not have to clean up old arrays.
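A minimal sketch of pre-sizing (the item counts are illustrative assumptions):

// The backing array is allocated once, with room for 60 items.
List<int> items = new List<int>(60);
for (int i = 0; i < 55; i++)
{
    items.Add(i);
}
Console.WriteLine(items.Capacity);   // still 60 - no array copies occurred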


AZURE REDIS CACHE

What is Azure Redis Cache?

Azure Redis Cache is based on the popular open-source Redis cache. It gives you access to a secure, dedicated Redis cache, managed by Microsoft and accessible from anywhere in the world.

Azure Redis Cache is available in the following tiers:

  • Basic—Single node, multiple sizes, ideal for development/test and non-critical workloads. The basic tier has no SLA.
  • Standard—A replicated cache in a two node Primary/Secondary configuration managed by Microsoft, with a high availability SLA.
  • Premium—The new Premium tier includes a high availability SLA and all the Standard-tier features and more, such as better performance over Basic or Standard-tier Caches, bigger workloads, disaster recovery and enhanced security. Additional features include:
    • Redis persistence allows you to persist data stored in Redis cache. You can also take snapshots and back up the data which you can load in case of a failure.
    • Redis cluster automatically shards data across multiple Redis nodes, so you can create workloads of bigger memory sizes (greater than 53 GB) and get better performance.
    • Azure Virtual Network (VNET) deployment provides enhanced security and isolation for your Azure Redis Cache, as well as subnets, access control policies and other features to further restrict access.

Basic and Standard caches are available in sizes up to 53 GB and Premium caches are available in sizes up to 530 GB with more on request.

What Redis Cache offering and size should I use?

Each Azure Redis Cache offering provides different levels of size, bandwidth, high availability, and SLA options.

The following are considerations for choosing a Cache offering.

  • Memory: The Basic and Standard tiers offer 250 MB – 53 GB. The Premium tier offers up to 530 GB with more available on request. For more information, see Azure Redis Cache Pricing.
  • Network Performance: If you have a workload that requires high throughput, the Premium tier offers more bandwidth compared to Standard or Basic. Also, within each tier, larger cache sizes have more bandwidth because of the underlying VM that hosts the cache. See the following table for more information.
  • Throughput: The Premium tier offers the maximum available throughput. If the cache server or client reaches the bandwidth limits, you may receive timeouts on the client side. For more information, see the following table.
  • High Availability/SLA: Azure Redis Cache guarantees that a Standard/Premium cache is available at least 99.9% of the time. To learn more about our SLA, see Azure Redis Cache Pricing. The SLA only covers connectivity to the Cache endpoints. The SLA does not cover protection from data loss. We recommend using the Redis data persistence feature in the Premium tier to increase resiliency against data loss.
  • Redis Data Persistence: The Premium tier allows you to persist the cache data in an Azure Storage account. In a Basic/Standard cache, all the data is stored only in memory. If there are underlying infrastructure issues there can be potential data loss. We recommend using the Redis data persistence feature in the Premium tier to increase resiliency against data loss. Azure Redis Cache offers RDB and AOF (coming soon) options in Redis persistence. For more information, see How to configure persistence for a Premium Azure Redis Cache.
  • Redis Cluster: To create caches larger than 53 GB, or to shard data across multiple Redis nodes, you can use Redis clustering, which is available in the Premium tier. Each node consists of a primary/replica cache pair for high availability. For more information, see How to configure clustering for a Premium Azure Redis Cache.
  • Enhanced security and network isolation: Azure Virtual Network (VNET) deployment provides enhanced security and isolation for your Azure Redis Cache, as well as subnets, access control policies, and other features to further restrict access. For more information, see How to configure Virtual Network support for a Premium Azure Redis Cache.
  • Configure Redis: In both the Standard and Premium tiers, you can configure Redis for Keyspace notifications.
  • Maximum number of client connections: The Premium tier offers the maximum number of clients that can connect to Redis, with a higher number of connections for larger sized caches. For more information, see Azure Redis Cache pricing.
  • Dedicated Core for Redis Server: In the Premium tier, all cache sizes have a dedicated core for Redis. In the Basic/Standard tiers, the C1 size and above have a dedicated core for Redis server.
  • Redis is single-threaded so having more than two cores does not provide additional benefit over having just two cores, but larger VM sizes typically have more bandwidth than smaller sizes. If the cache server or client reaches the bandwidth limits, then you receive timeouts on the client side.
  • Performance improvements: Caches in the Premium tier are deployed on hardware that has faster processors, giving better performance compared to the Basic or Standard tier. Premium tier Caches have higher throughput and lower latencies.

In what region should I locate my cache?

For best performance and lowest latency, locate your Azure Redis Cache in the same region as your cache client application.

How am I billed for Azure Redis Cache?

Azure Redis Cache pricing is listed on the Azure Redis Cache pricing page as an hourly rate. Caches are billed on a per-minute basis from the time a cache is created until the time it is deleted. There is no option for stopping or pausing the billing of a cache.

What do the StackExchange.Redis configuration options do?

StackExchange.Redis has many options. This section talks about some of the common settings. For more detailed information about StackExchange.Redis options, see StackExchange.Redis configuration.

ConfigurationOptions | Description | Recommendation
AbortOnConnectFail | When set to true, the connection will not reconnect after a network failure. | Set to false and let StackExchange.Redis reconnect automatically.
ConnectRetry | The number of times to repeat connection attempts during initial connect. | See the following notes for guidance.
ConnectTimeout | Timeout in ms for connect operations. | See the following notes for guidance.

Usually the default values of the client are sufficient. You can fine-tune the options based on your workload.

  • Retries
    • For ConnectRetry and ConnectTimeout, the general guidance is to fail fast and retry again. This guidance is based on your workload and how much time on average it takes for your client to issue a Redis command and receive a response.
    • Let StackExchange.Redis automatically reconnect instead of checking connection status and reconnecting yourself. Avoid using the ConnectionMultiplexer.IsConnected property.
    • Snowballing – sometimes you may run into an issue where retries snowball and never recover. If snowballing occurs, you should consider using an exponential backoff retry algorithm as described in Retry general guidance published by the Microsoft Patterns & Practices group.
  • Timeout values
    • Consider your workload and set the values accordingly. If you are storing large values, set the timeout to a higher value.
    • Set AbortOnConnectFail to false and let StackExchange.Redis reconnect for you.
    • Use a single ConnectionMultiplexer instance for the application. You can use a LazyConnection to create a single instance that is returned by a Connection property, as shown in Connect to the cache using the ConnectionMultiplexer class (a sketch of this pattern follows this list).
    • Set the ConnectionMultiplexer.ClientName property to an app instance unique name for diagnostic purposes.
    • Use multiple ConnectionMultiplexer instances for custom workloads.
      • You can follow this model if you have varying load in your application. For example:
      • You can have one multiplexer for dealing with large keys.
      • You can have one multiplexer for dealing with small keys.
      • You can set different values for connection timeouts and retry logic for each ConnectionMultiplexer that you use.
      • Set the ClientName property on each multiplexer to help with diagnostics.
      • This guidance may lead to more streamlined latency per ConnectionMultiplexer.
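Pulling those recommendations together, here is a minimal sketch of the shared-multiplexer pattern. The cache endpoint, access-key placeholder, and client name are illustrative assumptions:

using System;
using StackExchange.Redis;

public static class RedisConnection
{
    private static readonly Lazy<ConnectionMultiplexer> lazyConnection =
        new Lazy<ConnectionMultiplexer>(() =>
        {
            var options = ConfigurationOptions.Parse(
                "mycache.redis.cache.windows.net:6380,password=<access-key>,ssl=true");
            options.AbortOnConnectFail = false;     // let StackExchange.Redis reconnect automatically
            options.ConnectRetry = 3;               // initial connect attempts
            options.ConnectTimeout = 5000;          // fail fast: 5 seconds, in ms
            options.ClientName = "MyApp-Instance1"; // unique per app instance, for diagnostics
            return ConnectionMultiplexer.Connect(options);
        });

    // Single shared instance for the whole application.
    public static ConnectionMultiplexer Connection => lazyConnection.Value;
}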


What are Redis databases?

Redis Databases are just a logical separation of data within the same Redis instance. The cache memory is shared between all the databases, and the actual memory consumption of a given database depends on the keys/values stored in that database. For example, a C6 cache has 53 GB of memory. You can choose to put all 53 GB into one database or you can split it up between multiple databases.
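A minimal StackExchange.Redis sketch of selecting a logical database (the database number and key are illustrative; connection is a ConnectionMultiplexer such as the one in the earlier sketch):

// GetDatabase(5) selects logical database 5 within the same cache instance.
IDatabase db5 = connection.GetDatabase(5);
db5.StringSet("greeting", "hello from database 5");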

When should I enable the non-SSL port for connecting to Redis?

Redis server does not natively support SSL, but Azure Redis Cache does. If you are connecting to Azure Redis Cache and your client supports SSL, like StackExchange.Redis, then you should use SSL.


WHAT IS A WEBHOOK?

The concept of a WebHook is simple. A WebHook is an HTTP callback: an HTTP POST that occurs when something happens; a simple event-notification via HTTP POST.

A web application implementing WebHooks will POST a message to a URL when certain things happen. When a web application enables users to register their own URLs, the users can then extend, customize, and integrate that application with their own custom extensions or even with other applications around the web. For the user, WebHooks are a way to receive valuable information when it happens, rather than continually polling for that data and receiving nothing valuable most of the time. WebHooks have enormous potential and are limited only by your imagination! (No, it can’t wash the dishes. Yet.)

WebHooks are meant to do something. To get your imagination spinning with your own ideas, here are the three general ways in which WebHooks can be used to make your web more programmable:

Push: receiving data in real time

Push is the simplest of reasons to use WebHooks. As was just stated above, no more polling every couple of minutes to find out if there is new information. Just register a WebHook and receive the data at your doorstep as soon as it exists. It’s less work, less hassle, and you’ll probably even receive it sooner than if you were asking for it every couple of minutes.

Pipes: receiving data and passing it on

A Pipe happens when your WebHook not only receives real-time data, but goes on to do something new and meaningful with it, triggering actions unrelated to the original event. For example, you create a script, register its URL at a photo site, and have it email you when your mother posts a new photo. Or make a script that creates a Twitter message, and have it triggered by a WebHook whenever you add a new product on your commerce website.

Plugins: processing data and giving something in return

This is where the entire web becomes a programming platform. You can use this form of WebHooks to allow others to extend your application. Facebook’s Application Platform uses WebHooks in this way, and so does Google Wave’s robot integration. The general idea is that a web application sending out data via WebHooks will also use the response to modify its own data. At Facebook, when you access an app, Facebook sends a WebHook out to your application saying “Hey, someone’s accessing your application, what do I do?!” The application responds with, “Show the user this page…” Facebook does so, and the pattern continues in the same manner as you continue to use the application. At Google Wave, when you do something in a wave, any robot you’ve added as a participant is notified via a WebHook, and the robot has the ability to modify the wave in its http response. Implement WebHooks in this way in your application if you want to allow others to truly extend and enhance the abilities of your application.

How do they work?

By letting the user specify a URL for various events, the application will POST data to those URLs when the events occur. With the cheap availability of PHP hosting and even easier simple app/script hosting like AppJet or Scriptlets, handling the POST data becomes fairly trivial. How you use it is up to you and whatever you want to accomplish (a minimal receiver sketch follows the list below). Among other things, you can:

  • create notifications to you or anybody via email, IRC, Jabber, …
  • put the data in another app (real-time data synchronization)
  • process the data and repost it using the app’s API
  • validate the data and potentially prevent it from being used by the app
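Handling the POST is just as trivial in ASP.NET. A minimal Web API receiver sketch (the controller name, route, and payload handling are assumptions):

using System.Web.Http;
using Newtonsoft.Json.Linq;

public class WebhookController : ApiController
{
    // The URL of this action is what the user registers with the source application.
    [HttpPost]
    public IHttpActionResult Receive([FromBody] JObject payload)
    {
        // Do something with the event data: notify, synchronize, repost, validate...
        System.Diagnostics.Trace.TraceInformation("WebHook received: " + payload);
        return Ok();   // acknowledge the callback
    }
}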

Why should I care?

As integrated as we perceive the web to be, most web applications today operate in silos. With the rise of APIs we’ve seen mashups and some degree of integration between applications. However, we have not seen the vision of the programmable web: a web where you as the user can “pipe” data between apps much like the Unix command line. Some say RSS is the answer. They are wrong. The heart is in the right place, but the implementation is wrong. RSS is still useful, but it is not going to bring us the true programmable web.

We just need a simple way to get data out in real time and let the user easily do whatever they want with it. That means no polling, no content constraints, and no XML parsing. That means no RSS. Using HTTP is simpler and easier. PHP is a very popular and accessible programming environment, so it’s likely to be used often for writing hooklets: getting data from a web POST in PHP is as simple as $_POST['something']. And making the request to the user script is as simple as making an HTTP request, something already built into most programming environments. In fact, web hooks are easier to implement than an API.

However implemented (although the easier, the more likely it will be adopted), having an output for the web will complement the input provided by the rising adoption of APIs. When you have both input and output, you have everything you need for apps to easily interact. This will encourage smaller, more focused apps that, together with hook-enabled heavier apps, will let amazing emergent creations happen!

How do I implement WebHooks?

Simply provide your users with the ability to submit their own URL, and POST to that URL when something happens. It’s that simple. There are no specs you have to follow.
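On the sending side, a minimal sketch (the class, method, and parameter names are hypothetical) is just an HTTP POST to each registered URL:

using System.Collections.Generic;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public class WebhookNotifier
{
    private static readonly HttpClient client = new HttpClient();

    // POST the event payload to every URL the user has registered.
    public static async Task NotifyAsync(IEnumerable<string> registeredUrls, string jsonPayload)
    {
        foreach (var url in registeredUrls)
        {
            var content = new StringContent(jsonPayload, Encoding.UTF8, "application/json");
            await client.PostAsync(url, content);   // fire the callback
        }
    }
}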


Why isn’t the static constructor from my base class called?

Let’s say we have two classes:

public abstract class Foo
{
    static Foo()
    {
        Console.Write("4");
    }
}

public class Bar : Foo
{
    static Bar()
    {
        Console.Write("2");
    }

    public static void DoSomething()
    {
        /*...*/
    }
}

We expected that after calling Bar.DoSomething() (assuming this is the first time we access the Bar class) the order of events would be:

  1. Foo’s static constructor runs (again, assuming first access) > prints 4
  2. Bar’s static constructor runs > prints 2
  3. DoSomething executes

The bottom line: we expect 42 to be printed. After testing, it turns out that only 2 is printed.

Reason:

The static constructor for a class executes at most once in a given application domain. The execution of a static constructor is triggered by the first of the following events to occur within an application domain:

  1. An instance of the class is created.
  2. Any of the static members of the class are referenced.

Because you are not referencing any members of the base class, its static constructor is not executed.
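If you do need the base class to be initialized first, one option (a sketch, not the only approach) is to trigger its static constructor explicitly:

using System.Runtime.CompilerServices;

class Program
{
    static void Main()
    {
        // Force Foo's static constructor to run (a no-op if it has already run),
        // then use the derived class; this prints 4 followed by 2.
        RuntimeHelpers.RunClassConstructor(typeof(Foo).TypeHandle);
        Bar.DoSomething();
    }
}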


Temporal Tables are generally available in Azure SQL Database

Temporal Tables allow you to track the full history of data changes directly in Azure SQL Database, without the need for custom coding. With Temporal Tables you can see your data as of any point in time in the past and use a declarative cleanup policy to control retention of the historical data.

When to use Temporal Tables?

Quite often you may find yourself asking fundamental questions: How did important information look yesterday, a month ago, a year ago? What changes have been made since the beginning of the year? What were the dominant trends during a specific period of time? Without proper support in the database, however, questions like these have never been easy to answer.
Temporal Tables are designed to improve your productivity when you develop applications that work with ever-changing data and when you want to derive important insights from the changes.
Use Temporal Tables to:

  1. Support data auditing in your applications
  2. Analyze trends or detect anomalies over time
  3. Easily implement the slowly changing dimension pattern
  4. Perform fine-grained row repairs in case of accidental data errors made by humans or applications
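For example, a point-in-time query (a sketch assuming the WebSiteClicks temporal table used in the retention example below) uses the FOR SYSTEM_TIME clause:

-- How did the data look at a specific moment in the past?
SELECT *
FROM WebSiteClicks
FOR SYSTEM_TIME AS OF '2017-06-01T09:00:00';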

Manage historical data with easy-to-use retention policy

Keeping history of changes tends to increase database size, especially if historical data is retained for a long period of time. Hence, a retention policy for historical data is an important aspect of planning and managing the lifecycle of every temporal table. Temporal Tables in Azure SQL Database come with an extremely easy-to-use retention mechanism. Applying a retention policy is very simple: it requires setting a single parameter during table creation or a schema change, as shown in the following example.

ALTER TABLE [WebSiteClicks]
SET
(
    SYSTEM_VERSIONING = ON
    (
        HISTORY_TABLE = dbo.WebSiteClicks_History,
        HISTORY_RETENTION_PERIOD = 3 MONTHS
    )
);

You can alter retention policy at any moment and your change will be effective immediately.

Why should you consider Temporal Tables?

If you have requirements for tracking data changes, using Temporal Tables will give you multiple benefits over any custom solution. Temporal Tables will simplify every phase in the development lifecycle: object creation, schema evolution, data modification, point-in-time analysis and data aging.
