When to use a Content Delivery Network (CDN)

A CDN provides Adatum with features that ensure its services can scale globally to meet emergency services demand during monitored natural disasters. CDN helps Adatum meet its needs in the following ways:

CDN scales rapidly to handle instantaneous high loads. This is useful for Adatum because it ensures that emergency services can access imagery as a disaster unfolds, and it minimizes the chance that Adatum's services become unavailable during a disaster.

CDN resources can be deployed in locations close to new customers as those customers subscribe to the service. Instead of deploying new IaaS VMs to host catastrophe imagery around the world, Adatum can configure a new point of presence (POP) in a location close to the new customers.

CDN allows content to be delivered to POPs that are close to where customers need to access that data. Adatum can ensure that a POP in Australia is populated with imagery related to an unfolding catastrophe in Australia, while a separate POP in South America is populated with imagery related to events unfolding there. Traffic to each region's POP can also be geo-filtered to ensure that data at the Australian POP isn't accessible to clients outside the region, reducing the likelihood that demand will make the service unavailable.

When not to use a CDN

CDNs are typically best suited to workloads that serve many large static files, such as photographic imagery. They're most useful when you need to serve files to a large number of simultaneous users worldwide, which fits Adatum because natural disasters don't occur according to predictable patterns. If Adatum's content were more dynamic, such as a service where satellite video was streamed directly from Adatum's servers, a CDN wouldn't provide significant advantages. Real-time live streaming doesn't benefit substantially from being cached in locations around the world in the way that static files, including pre-recorded video files, do.

What is Orleans?

Building cloud-native applications can be a challenging and complicated task. Cloud-native apps must be designed to handle elasticity, resiliency, distributed infrastructure, and other concerns. Developers often rely on frameworks to provide abstractions over these internal implementation details, so they can focus on building business features. Orleans is a robust framework for building scalable, distributed, cloud-native applications. Orleans runs cross-platform anywhere that .NET is supported and integrates well with other platform features.

Orleans simplifies the process of building distributed, scalable applications. There are several key concepts to understand to work with Orleans effectively.

The virtual actor model

Orleans is built around the actor model, an established design pattern that has existed since the 1970s. The actor model stores pieces of state and corresponding behavior in lightweight, immutable objects called actors, which communicate with each other using asynchronous messages. Orleans manages and simplifies much of the parallel communication that distributed apps require. Orleans pioneered the virtual actor model, in which actors exist perpetually and are activated whenever they're needed. This architecture lends itself well to cloud-native applications, which require distributed, resilient state and parallel operations.

Grains

Grains are the most essential primitives and the building blocks of Orleans applications. They represent the actors of the actor model and define the state and behavior of the entities you model, such as a shopping cart or a product. Each grain is identified and tracked through a user-defined key and can be accessed by other grains and by clients.

Grains are stored in silos, which you explore later. Grains that are currently active and in use by your application will remain in memory, while inactive grains can be persisted to your desired storage. A grain becomes active again when it’s needed or requested by the app. Grains have a managed life cycle, which means the Orleans runtime handles activating, deactivating, storing, and locating grains as needed by the application. Developers don’t have to worry about managing these concerns themselves and can write code that assumes a grain is available in memory.

(Diagram: the grain life cycle. Active grains remain in memory; inactive grains are persisted to storage and re-activated when needed.)
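
To make this concrete, here's a minimal sketch of a grain in C#. It assumes the Microsoft.Orleans NuGet packages are referenced; the IShoppingCartGrain interface and its members are hypothetical examples for illustration, not part of Orleans itself:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Orleans;

// The grain interface defines the messages this actor understands.
// IGrainWithStringKey means each cart is identified by a string key.
public interface IShoppingCartGrain : IGrainWithStringKey
{
    Task AddItem(string productId, int quantity);
    Task<IReadOnlyDictionary<string, int>> GetItems();
}

// The grain class holds the state and behavior for a single shopping cart.
public class ShoppingCartGrain : Grain, IShoppingCartGrain
{
    // Kept in memory while the grain is active; the Orleans runtime
    // activates and deactivates the grain as the application needs it.
    private readonly Dictionary<string, int> _items = new();

    public Task AddItem(string productId, int quantity)
    {
        _items.TryGetValue(productId, out var existing);
        _items[productId] = existing + quantity;
        return Task.CompletedTask;
    }

    public Task<IReadOnlyDictionary<string, int>> GetItems() =>
        Task.FromResult<IReadOnlyDictionary<string, int>>(_items);
}
```

A caller never constructs a grain directly; it asks the runtime for a reference, for example grainFactory.GetGrain<IShoppingCartGrain>("cart-42"), and the runtime locates or activates the grain transparently.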

Silos

Silos are responsible for storing grains and are another core building block of Orleans. A silo can contain one or more grains; a group of silos is known as a cluster. The cluster coordinates work between silos, allowing communication with grains as though they were all available in a single process. You can organize your data by storing different types of grains in different silos. Your application can retrieve individual grains without having to worry about the details of how they’re managed within the cluster.

(Diagram: grains hosted across multiple silos; the silos together form a cluster.)
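
As a sketch of how a silo is hosted, the following assumes the Microsoft.Orleans.Server package on the .NET generic host (Orleans 7 or later). Localhost clustering is a development-only choice; a production cluster would configure a shared membership provider instead:

```csharp
using Microsoft.Extensions.Hosting;

// Build a generic host that runs a single Orleans silo.
var host = Host.CreateDefaultBuilder(args)
    .UseOrleans(siloBuilder =>
    {
        // Development-only clustering: one silo on the local machine.
        // Production deployments use a membership store (for example,
        // Azure Table Storage) so that multiple silos can form a cluster.
        siloBuilder.UseLocalhostClustering();
    })
    .Build();

// Grains defined in the application are hosted by this silo while it runs.
await host.RunAsync();
```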

Other features of Orleans

Orleans supports many other features for more specific or advanced scenarios:

Streams: Streams help developers process sets of data in near-real time. Like other features of Orleans, streams are managed by the runtime and available on demand. They're loosely coupled and can be backed by various queuing technologies, such as Azure Event Hubs.

Timers and Reminders: Orleans supports scheduling operations for grains. Reminders are persistent, so you can ensure that actions are completed at a given time even on inactive grains (see the sketch after this list).

Versioning: Grains can be versioned to account for changes in your application. Orleans will handle mapping and managing different implementations of your versioned grains across your silos and clusters.

ACID transactions: Grains can have transactional state and support ACID transaction features.
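
As one illustration, here's a rough sketch of a grain that uses a reminder. It assumes a reminder service is registered on the silo (for example, in-memory reminders for development); the grain name, reminder name, and schedule are hypothetical:

```csharp
using System;
using System.Threading.Tasks;
using Orleans;
using Orleans.Runtime;

public interface IDailyReportGrain : IGrainWithStringKey
{
    Task Start();
}

// IRemindable lets the runtime call back into the grain when the reminder
// fires, re-activating the grain first if it was inactive.
public class DailyReportGrain : Grain, IDailyReportGrain, IRemindable
{
    public async Task Start()
    {
        // Persist a reminder: first tick after one minute, then daily.
        await this.RegisterOrUpdateReminder(
            "daily-report",
            dueTime: TimeSpan.FromMinutes(1),
            period: TimeSpan.FromDays(1));
    }

    public Task ReceiveReminder(string reminderName, TickStatus status)
    {
        // Runs on schedule whether or not the grain stayed in memory.
        Console.WriteLine($"Reminder {reminderName} fired at {status.CurrentTickTime}.");
        return Task.CompletedTask;
    }
}
```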

These capabilities can be explored as you start to build more complex apps. Next, you'll explore the basics of how Orleans works.

Azure Monitor Overview

Azure Monitor helps you maximize the availability and performance of your applications and services.

Overview

The following diagram is a high-level view of Azure Monitor.

  • At the center of the diagram are the data stores for metrics, logs, and changes. These stores hold the fundamental types of data used by Azure Monitor.
  • On the left are the sources of monitoring data that populate the data stores.
  • On the right are the different functions that Azure Monitor performs with this collected data: analysis, alerting, and integration, such as streaming to external systems.

What data does Azure Monitor collect?

Metrics, logs, and distributed traces are commonly referred to as the three pillars of observability; Azure Monitor also collects changes as a fourth category of data.

Metrics: Numerical values that describe some aspect of a system at a particular point in time. They're collected at regular intervals and are identified with a timestamp, a name, a value, and one or more defining labels.

Logs: Events that occurred within the system. They can contain different kinds of data and may be structured or free-form text with a timestamp.

Distributed traces: Series of related events that follow a user request through a distributed system.

Changes: A series of events that occur in your Azure application and resources. Change Analysis tracks changes and is a subscription-level observability tool built on the power of Azure Resource Graph.
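
To show how this data surfaces to an application, here's a minimal sketch that queries the log pillar from C#. It assumes the Azure.Monitor.Query and Azure.Identity NuGet packages; the workspace ID and the Kusto query are placeholders:

```csharp
using System;
using Azure.Identity;
using Azure.Monitor.Query;

// Authenticate with whatever credential the environment provides
// (Azure CLI login, managed identity, and so on).
var client = new LogsQueryClient(new DefaultAzureCredential());

// Placeholder: substitute the ID of your Log Analytics workspace.
var workspaceId = "<workspace-id>";

// A simple Kusto query over log data: count heartbeat events
// per computer over the last day.
var response = await client.QueryWorkspaceAsync(
    workspaceId,
    "Heartbeat | summarize count() by Computer",
    new QueryTimeRange(TimeSpan.FromDays(1)));

foreach (var row in response.Value.Table.Rows)
{
    Console.WriteLine($"{row["Computer"]}: {row["count_"]}");
}
```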

Insights and curated visualizations

Some Azure resource providers have a "curated visualization," which provides a customized monitoring experience for that particular service or set of services.

Larger, scalable, curated visualizations are known as “insights” and marked with that name in the documentation and the Azure portal. Examples of insights include Application Insights, Container Insights, and VM Insights.