Channel: Christos S. – chsakell's Blog

Building serverless apps with Azure Functions


As the cloud has evolved over the years, application architectures have adapted as well, resulting in new, modern and more flexible patterns for how we build applications today. One of the hottest patterns nowadays is the serverless architecture, which is the evolution of Platform as a Service (PaaS). Starting from On-Premises, where we had to deal with the hardware itself, backups and OS updates, the cloud introduced IaaS, where at least hardware management was delegated to the cloud provider. Still, you had to manually install and run your software, so PaaS was introduced to take the cloud to the next step. PaaS was a major cloud upgrade where developers could finally start focusing on business needs rather than infrastructure. Who could have even imagined spinning up tens of VMs in a matter of minutes?

Cloud Evolution


So how is serverless architecture an evolution of PaaS? The answer is two words: Speed and Cost. With PaaS, when you need to scale you ask for a specific amount of computing power (disk capacity, memory, CPU etc..) despite the fact that you probably won't use it at its maximum scale. But you certainly pay for 100% of it, don't you? Take for example a website that starts receiving thousands of requests/sec and becomes too slow, so you decide to spin up a new VM and distribute the workload. First of all, spinning up the VM takes time, and secondly, you start paying for the new VM even though you may use less than half of its resources. What serverless says is: forget about servers, forget about disk capacity or memory; all this kind of stuff and much more will be handled automatically for you, per need. The computing power and the resources you may need at some point in order to scale are somewhere out there, ready to be allocated for you if needed, which means speed is not a problem anymore. The best part is that you pay only for what you use, and only when you use it.

In this post we are going to see what Azure Functions are and how they can help us build serverless applications. More specifically we are going to:

  • Define what Azure Functions are and what problems they can solve
  • Describe the different components of an Azure Function – Triggers and Bindings
  • Review the options to create and publish Azure Functions
  • Demo – Take a tour of a common application scenario where Azure Functions handle successful payments from the Stripe payment provider
  • Define what Durable Azure Functions are and how they differ from common Azure Functions
  • Describe Durable Azure Functions types
  • Demo – Take a look at how Durable Azure Functions can process reports

You can find the most important parts of this post in the following presentation.


Ready? Let’s start..

Azure Functions

Azure Functions is a serverless compute service; at its core, it is code running in the cloud, triggered by specific events.

With Azure Functions you simply publish small pieces of code and define when you want that code to be executed. Azure ensures that your code always has the required computation resources to run smoothly, even when demand gets too high. You can write your functions in a language of your choice and, as described in the Consumption plan, you only pay for the time Azure spends running your code. When your functions are idle, you stop being charged. Functions can be created either directly in the Azure Portal or in Visual Studio, and are executed when specific events named triggers occur. Each Azure Function must have exactly one trigger, which usually comes along with a payload to be consumed by the function's code. For example, in case the trigger is a QueueTrigger, which means the function is executed when a new message arrives in a queue, that message will be available in the function's code. Here are some of the available triggers and the events that make the code execute:

  • HTTPTrigger – New HTTP request
  • TimerTrigger – Timer schedule
  • BlobTrigger – New blob added to an azure storage container
  • QueueTrigger – New message arrived on an azure storage queue
  • ServiceBusTrigger – New message arrived on a Service Bus queue or topic
  • EventHubTrigger – New message delivered to an event hub
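For instance, a minimal TimerTrigger function looks like the following sketch (the function name and the CRON schedule are hypothetical, not part of the demo solution):

```csharp
// A minimal TimerTrigger sketch. "0 */5 * * * *" is a CRON expression
// that fires the function every five minutes.
[FunctionName("CleanupExpiredLicences")]
public static void Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer, ILogger log)
{
    log.LogInformation($"Timer trigger fired at: {DateTime.Now}");
}
```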

Another key aspect of Azure Functions is Bindings, which let you declaratively connect with data from other services instead of hard coding the connection details. Input bindings are used for making data available when the trigger fires, and Output bindings are used to push data back to other services. For example, if your function receives as an input a record from an Azure Table named payments, it could have a parameter as follows:

[Table("payments", "stripe", "{name}")] Payment payment

The {name} segment is the RowKey value of the Azure Table record to be retrieved, which means that when this function is triggered, Azure will search the payments table for a record with PartitionKey equal to “stripe” and RowKey equal to the {name} value. Similar to the input binding, your function could save a record to an Azure Table by declaring an output parameter as follows:

[Table("payments")] out Payment payment

When you populate the payment parameter, the function pushes a record to the Azure Table named payments. Default connection keys exist in a JSON configuration file. A Function may have multiple input and output bindings.
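To make this concrete, here is a hypothetical function that combines a queue trigger, a Table input binding and a Table output binding in a single signature (the audit-requests queue and the audit table are made up for illustration; Payment is the model from the demo solution):

```csharp
// Hypothetical example: the queued message is the RowKey of a payment
// record, which the input binding uses to load the record. Populating the
// out parameter writes a copy of the record to the audit table.
[FunctionName("AuditPayment")]
public static void Run(
    [QueueTrigger("audit-requests")] string rowKey,                    // trigger
    [Table("payments", "stripe", "{queueTrigger}")] Payment payment,   // input binding
    [Table("audit")] out Payment auditEntry,                           // output binding
    ILogger log)
{
    log.LogInformation($"Auditing payment {payment.RowKey}");
    auditEntry = payment;
}
```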

Creating Azure Functions

Azure Functions can be created and configured either directly in the Azure Portal as scripts or published precompiled. In the first case you simply create a Function App on Azure, add a new Azure Function and write your code. It just works. Though this way can be very useful for prototyping, you won't be able to build large scale serverless apps with it. The best way is through Visual Studio, where you write your functions in the same way you already write your web applications (sharing libraries, models, packages, etc..). As far as deployment is concerned, Azure Functions sit on top of Azure App Service, which means they support a large range of deployment options. You can deploy your code using Visual Studio, Kudu, FTP, ZIP or via popular continuous integration solutions, like GitHub, Azure DevOps, Dropbox, Bitbucket, and others.

Demo – Payments

Enough with the talk, let's see Azure Functions in action by implementing a real application scenario for processing payments. In this scenario, we have a website where we provide some subscription plans for our customers. The customer selects a subscription plan, enters payment details using various payment methods such as VISA or MasterCard and completes the payment. The charge goes through a well known payment provider named Stripe. Stripe lets you test payments using test cards and customers, which is what we are going to do for this post.

If you want to follow along, go ahead and create a new account. Next, visit the API keys page and create a Test API key. Save both the Publishable and the Secret keys for later use.

When the charge is completed we want to generate a licence file and email its download URL to the customer. In a traditional monolithic web application the flow would go like this:

  • Customer submits the checkout form to web app
  • Web app charges customer on the payment provider
  • Web app creates a licence file
  • Web app sends a confirmation email to the customer
  • Web app returns a success message to the client


The problem with this architecture is that the web app is responsible for too many things and the customer waits too long before seeing the success message. Since any of the activities could create a bottleneck, scaling doesn't work well either.

Serverless approach

In a serverless approach using Azure Functions, the customer would see the success message instantly after the successful charge. Everything else would be handled by different Azure Functions, each responsible for a different operation.

The first function uses an HTTPTrigger and pushes a message to a storage queue. HTTPTrigger means that the function can be called directly using an HTTP request. We will bind its URL to a Stripe webhook which fires when a successful charge occurs. This means that when the charge is completed, Stripe will invoke our Azure Function, passing a Charge object along with any metadata we provided when calling the provider's API. The output will be a payment message in an azure storage queue. The second azure function has a QueueTrigger attribute which triggers the function when a message is pushed to a storage queue. The message is received and deserialized, and the function outputs a licence file as a blob in an azure storage container. The third function is triggered when a new blob is pushed to the previous storage container. It has a SendGrid integration and sends the confirmation email, which also contains the licence's download URL.

SendGrid is an Email Delivery Service and you can use it for free. If you want to follow along, go ahead and create a new account. Next, visit the API keys page and create an API key. Save it for later use.

Demo application

Clone the repository associated with the post and open the solution in Visual Studio. The first thing you need to do is set up the Stripe configuration. Open the appsettings.json file in the eShop .NET Core Web application and set the values for the SecretKey and PublishableKey keys from the Stripe API keys you created before.

{
  "Logging": {
    "LogLevel": {
      "Default": "Warning"
    }
  },
  "AllowedHosts": "*",
  "Stripe": {
    "SecretKey": "",
    "PublishableKey": ""
  }
}

The eShop web app is the website where the customer may subscribe to one of the subscription plans.


Before moving to the first Azure Function, take a look at the models inside the ServerlessApp.Models class library project. These are classes used by all Azure Functions. Azure Functions live in an Azure Function App and each Azure Function App may have many Azure Functions. The solution contains two Azure Function Apps: the ePaymentsApp for processing successful payments as described, and the reportsApp, which is responsible for producing reports for payments and which we will review later on.

Visual Studio contains templates for Azure Function Apps when creating new projects. When you create one, you can add Azure Functions to it by selecting one of the available templates for Triggers and Bindings.

Azure Function available templates in VS

The first azure function in ePaymentsApp is OnSuccessCharge. Let's review the code and explain what it does.

[FunctionName("OnSuccessCharge")]
public static async Task Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)]
    HttpRequestMessage req,
    [Queue("success-charges", Connection = "AzureWebJobsStorage")]IAsyncCollector<Transaction> queue,
    ILogger log)
{
    log.LogInformation("OnSuccessCharge HTTP trigger function processed a request.");

    var jsonEvent = await req.Content.ReadAsStringAsync();

    var @event = EventUtility.ParseEvent(jsonEvent);

    var charge = @event.Data.Object as Charge;
    var card = charge.Source as Card;

    var transaction = new Transaction
    {
        Id = Guid.NewGuid().ToString(), 
        ChargeId = charge.Id,
        Amount = charge.Amount,
        Currency = charge.Currency,
        DateCreated = charge.Created,
        StripeCustomerId = charge.CustomerId,
        CustomerEmail = card.Name,
        CardType = card.Brand,
        CustomerId = int.Parse(charge.Metadata["id"]),
        CustomerName = charge.Metadata["name"],
        Product = charge.Metadata["product"]
    };

    await queue.AddAsync(transaction);
}

The HTTPTrigger attribute defines that the function can be triggered via an HTTP request. The attribute accepts several parameters, such as the AuthorizationLevel, which in this case is Anonymous, meaning no token is required for the function to be called. It also defines that the HTTP request can be either a GET or a POST method. If the Route parameter is null, the function's URL will be in the form application-host:port/api/method-name, so in this case it would be application-host:port/api/OnSuccessCharge. The req parameter is the HttpRequestMessage coming from the HTTP request that triggered the function. The code deserializes the request to a Stripe Charge object, since we know that we will bind the function's URL to Stripe's success charge webhook.
The Queue output binding defines the azure storage queue where the message will be pushed.

[Queue("success-charges", Connection = "AzureWebJobsStorage")]

The attribute says that the queue's name is success-charges and that the connection to the storage account comes from the key named AzureWebJobsStorage inside the local.settings.json file. This connection key is used by default, so you can simply remove the Connection parameter of the Queue attribute. IAsyncCollector<Transaction> can be used to asynchronously push multiple messages to the queue via the AddAsync method.
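For local development, that connection typically lives in local.settings.json; a minimal sketch (UseDevelopmentStorage=true points to the local storage emulator rather than a real Azure account):

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true"
  }
}
```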

await queue.AddAsync(transaction);

If you want to push just one message synchronously to the queue you can use the following format:

[Queue("success-charges")]out Transaction queue

Assigning a value to the queue parameter results in pushing the message to the queue. The flow for the first Azure Function is shown below:
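Putting the out-parameter format together, a hypothetical synchronous variant of the first function could look like this sketch (function name made up; only a couple of Transaction properties are filled for brevity):

```csharp
// Assigning the out parameter pushes exactly one message to the
// success-charges queue when the function returns.
[FunctionName("OnSuccessChargeSync")]
public static void Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = null)] HttpRequestMessage req,
    [Queue("success-charges")] out Transaction queue,
    ILogger log)
{
    var jsonEvent = req.Content.ReadAsStringAsync().Result; // synchronous read, demo only
    var @event = EventUtility.ParseEvent(jsonEvent);
    var charge = @event.Data.Object as Charge;

    queue = new Transaction { Id = Guid.NewGuid().ToString(), ChargeId = charge.Id };
}
```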

Notice that the code reads some kind of metadata from Stripe's webhook request:

var transaction = new Transaction
{
    Id = Guid.NewGuid().ToString(), 
    ChargeId = charge.Id,
    Amount = charge.Amount,
    Currency = charge.Currency,
    DateCreated = charge.Created,
    StripeCustomerId = charge.CustomerId,
    CustomerEmail = card.Name,
    CardType = card.Brand,
    CustomerId = int.Parse(charge.Metadata["id"]),
    CustomerName = charge.Metadata["name"],
    Product = charge.Metadata["product"]
};

Obviously, metadata is information we need in order to identify which customer triggered the entire process. But where does it come from? Switch to the HomeController.Charge method in the eShop website and take a look at how a Stripe charge is made:

Random r = new Random();
var customerService = new CustomerService();
var chargeService = new ChargeService();
var dbCustomerId = r.Next(0, 10);

var customer = await customerService.CreateAsync(new CustomerCreateOptions
{
    Email = stripeEmail,
    SourceToken = stripeToken
});

var charge = await chargeService.CreateAsync(new ChargeCreateOptions
{
    Amount = amountInCents,
    Description = "Azure Functions Payment",
    Currency = "usd",
    CustomerId = customer.Id,
    Metadata = new Dictionary<string, string> {
        { "id", dbCustomerId.ToString() },
        { "name", RandomNames[dbCustomerId] },
        { "product", productName }
    }
});

When calling the CreateAsync method of Stripe's ChargeService, you can fill a Metadata dictionary which will be available when the webhook fires back to the Azure Function. Here we pass the subscription plan the customer selected and some random Customer Id and Name, but in a real application these would be the logged-in customer's id and name respectively.

The second Azure Function is ProcessSuccessCharge and it is responsible for processing a message from the success-charges queue and outputting a licence file as a blob. It also saves a payment record to an azure storage table.

[FunctionName("ProcessSuccessCharge")]
public static void Run([QueueTrigger("success-charges", Connection = "")]Transaction transaction, 
IBinder binder, 
[Table("payments")] out Payment payment,
ILogger log)
{
    log.LogInformation($"ProcessSuccessCharge function processed: {transaction}");

    payment = new Payment
    {
        PartitionKey = "stripe",
        RowKey = transaction.Id,
        ChargeId = transaction.ChargeId,
        Amount = transaction.Amount,
        CardType = transaction.CardType,
        Currency = transaction.Currency,
        CustomerEmail = transaction.CustomerEmail,
        CustomerId = transaction.CustomerId,
        CustomerName = transaction.CustomerName,
        Product = transaction.Product,
        DateCreated = transaction.DateCreated
    };

    using (var licence = binder.Bind<TextWriter>(new BlobAttribute($"licences/{transaction.Id}.lic")))
    {
        licence.WriteLine($"Transaction ID: {transaction.ChargeId}");
        licence.WriteLine($"Email: {transaction.CustomerEmail}");
        licence.WriteLine($"Amount paid: {transaction.Amount}  {transaction.Currency}");
        licence.WriteLine($"Licence key: {transaction.Id}");
    }
}

Since we want the function to be triggered every time a new message arrives on the success-charges queue, we used a QueueTrigger.

[QueueTrigger("success-charges", Connection = "")]Transaction transaction

The trigger knows which queue to watch for messages and how to automatically deserialize them to Transaction instances. This function produces two outputs: first it writes a record to an azure storage table named payments using an output Table binding (these records will be used by the other Function App for reporting purposes), and secondly it writes the licence file to an azure storage blob container named licences using an IBinder. While it could just use a Blob output binding to create the blob, IBinder lets you define the blob's name the way you want to. In our case the name will be transaction-id.lic, which means that for each customer's transaction there will be a licence file. Visit Azure Blob storage bindings for Azure Functions to learn more about blob bindings.
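As a side note, declarative Blob output binding paths can also reference properties of the trigger payload, so a sketch like the following (hypothetical function name, simplified body) would produce the same blob name without IBinder, at the cost of less runtime flexibility:

```csharp
// {Id} in the blob path resolves to the Id property of the queued
// Transaction JSON, so the blob is still named <transaction-id>.lic.
[FunctionName("ProcessSuccessChargeDeclarative")]
public static void Run(
    [QueueTrigger("success-charges")] Transaction transaction,
    [Blob("licences/{Id}.lic")] out string licence,
    ILogger log)
{
    licence = $"Licence key: {transaction.Id}";
}
```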

The last Azure Function of the ePaymentsApp is SendEmail.

[FunctionName("SendEmail")]
public static void Run([BlobTrigger("licences/{name}.lic")]CloudBlockBlob licenceBlob, 
    string name, 
    [Table("payments", "stripe", "{name}")] Payment payment,
    [SendGrid] out SendGridMessage message,
    ILogger log)
{
    log.LogInformation($"SendEmail Blob trigger function processing blob: {name}");
    message = new SendGridMessage();
    message.AddTo(System.Environment.GetEnvironmentVariable("EmailRecipient", EnvironmentVariableTarget.Process));
    message.AddContent("text/html", $"Download your licence <a href='{licenceBlob.Uri.AbsoluteUri}' alt='Licence link'>here</a>");
    message.SetFrom(new EmailAddress("payments@chsakell.com"));
    message.SetSubject("Your payment has been completed");
}

The function is triggered when a new blob with the .lic extension is added to the licences container. It also integrates with a Table binding so that it automatically retrieves the payment record written to the payments azure storage table by the previous function. This is achieved by using the licence file's name as the RowKey search term in the payments table: the {name} segment will be the same for both the BlobTrigger and the Table binding. Notice that the previous function uses the same value for the licence's name and the table record's RowKey. This function manages to retrieve the right record from the table without hard coding any connection details. Last but not least is the SendGrid output binding, which automatically uses the configuration property AzureWebJobsSendGridApiKey from the local.settings.json file.
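Locally, the SendGrid key and the recipient address would sit next to the storage connection in local.settings.json; a sketch with placeholder values (substitute your own key and address):

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "AzureWebJobsSendGridApiKey": "<your-sendgrid-api-key>",
    "EmailRecipient": "you@example.com"
  }
}
```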

Running the ePaymentsApp

After cloning the repository, open a cmd and, at the root of the solution, restore the NuGet packages:

dotnet restore

Right click the ePaymentsApp and select Publish.. In the Create new App Service window select Create New and click Publish.

For the demo you can leave all the default values and click Create. Notice that VS knows that it is an Azure Function App, which is why it will also create the required Storage Account. Any queues or tables referenced by Azure Functions will be created automatically.

If asked to update the Azure Functions version click YES.

When the resources are created and the deployment finishes, open the Azure Portal and find the Azure Function App you created. Mind that the Azure Function App's type will be shown as App Service in the portal.

Navigate to that resource and find the Application Settings tab of the Azure Function App.

If you have worked with Azure App Services before, this view should look familiar. You have to add the SendGrid configuration so that the SendEmail Azure Function can send the confirmation emails. You also need to set an email address where the emails will be sent, so you don't spam anyone during the demo. Add the following two properties to the App Settings:

  1. AzureWebJobsSendGridApiKey: Set the API key you created in SendGrid
  2. EmailRecipient: Set your email address so all emails are sent to you


Click Save and then select the OnSuccessCharge Azure Function to get its URL. It should look like this:

You need to add this URL as a Webhook in Stripe. Click Add endpoint, paste the URL and select the charge.succeeded event.


Now back to Visual Studio, make sure you have set the Stripe settings in the appsettings.json file and fire the app.

{
  "Logging": {
    "LogLevel": {
      "Default": "Warning"
    }
  },
  "AllowedHosts": "*",
  "Stripe": {
    "SecretKey": "",
    "PublishableKey": ""
  }
}

Complete a subscription by entering one of Stripe's test cards (e.g. 4242 4242 4242 4242). It's OK to enter a random email, but the expiration date must be a future date.

If the charge was successful you should see a view similar to the following:

Open Azure Storage Explorer (install it if you haven't already) and confirm that there is a success-charges queue, a licences blob container and a payments table, in addition to those that Azure Functions need in order to operate.

If you have set the SendGrid’s configuration properly you should have received an email as well.

You won't be able to download the file until you set the Public Access Level on the licences container. Back in Azure Storage Explorer, right click the licences container and select Set Public Access Level..


Select Public read access for containers and blobs and click Apply. Now you should be able to download the licence file directly from your email link.

Azure Durable Functions

So far, the Azure Functions we have seen in action are quite isolated; they don't have any knowledge of what happens in other functions during the payment process flow. This is fine for our scenario, but there are times when you want to build a flow with dependencies and some kind of state shared between the functions. For example, the output of the first function is the input for the second, and so on. You may also want to be able to cancel the entire flow or handle exceptions at any point during the flow, something you cannot do with the common Azure Functions we have seen.

In common Azure Functions you usually handle exceptions by sending messages to the appropriate queues and letting other Functions start a retry, log the exception, etc..

Azure Durable Functions is an extension of Azure Functions and Azure WebJobs for writing stateful functions that lets you programmatically define workflows between different functions. Before explaining how it works, let's see two common patterns for Azure Durable Functions.

In Function Chaining you execute a sequence of functions in a particular order. The output of one function is used as an input to the next one.

The code for the previous pattern looks like this:

public static async Task<object> Run(DurableOrchestrationContext ctx)
{
    try
    {
        var x = await ctx.CallActivityAsync<object>("F1");
        var y = await ctx.CallActivityAsync<object>("F2", x);
        return await ctx.CallActivityAsync<object>("F3", y);
    }
    catch (Exception)
    {
        // error handling/compensation goes here
    }
}

In Fan-out/fan-in you execute multiple functions in parallel. When all the functions running in parallel finish, you can aggregate their results and use them as you wish.

Fan-out/fan-in Pattern


We will see an example of this pattern in the demo for producing reports.
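In the meantime, a generic fan-out/fan-in orchestrator can be sketched in the same style as the chaining example above (F1 and F2 are hypothetical activity names, as in the chaining snippet):

```csharp
public static async Task<int> Run(DurableOrchestrationContext ctx)
{
    // F1 returns the work items to process
    var items = await ctx.CallActivityAsync<string[]>("F1", null);

    // Fan out: start one F2 activity per item without awaiting each call
    var tasks = new List<Task<int>>();
    foreach (var item in items)
        tasks.Add(ctx.CallActivityAsync<int>("F2", item));

    // Fan in: wait for all parallel activities, then aggregate their results
    var results = await Task.WhenAll(tasks);
    return results.Sum();
}
```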

Azure Durable Functions Concepts

A flow with Azure Durable Functions consists of 3 types of Azure functions, Starter, Orchestrator and Activity functions.

  • Starter Function: A simple Azure Function that starts the orchestration by calling the Orchestrator function. It uses an OrchestrationClient binding
  • Orchestrator Function: Defines a stateful workflow in code and invokes the activity functions. It sleeps during activity invocations and replays when it wakes up. The code in an orchestrator function MUST be deterministic, because during the flow the code will be executed again and again until all activity functions finish. You declare a function as an orchestrator by using a DurableOrchestrationContext
  • Activity Functions: Simple Azure Functions that are part of the workflow and can receive or return data. An Activity function uses an ActivityTrigger so that it can be invoked by the orchestrator

Azure Durable Functions

Demo – Reports

The solution contains a reportsApp that uses Azure Durable Functions to create reports for the payments made through the eShop website. More specifically, recall that the ProcessSuccessCharge Azure Function of the ePaymentsApp writes a payment record to the azure storage table named payments. Each payment record also contains the payment type for that transaction.

Assuming that each payment method (VISA, MasterCard, etc..) needs to be processed differently, we want our reporting system to be fast and fully scalable. Every time the reporting process starts, for example at the end of each day, we want the final result to be an email containing links to the report created for each payment method: the URL of the report file generated for all VISA payments, the URL of the report file generated for all MasterCard payments, and so on. This scenario fits perfectly with the Fan-out/fan-in pattern we saw previously. Let's take a look at the flow:

Serverless Reports

  • The starter function reads the latest payments from an Azure Storage table
  • For each type of payment (e.g. visa, mastercard, paypal etc..) the orchestrator sends a group of payments to an activity function to generate the report
  • The report for each type is an azure blob. Reports for each card type are created in parallel
  • When all activities finish, the final result is a list of report URLs

Starter Function

The starter function is S_CreateReports and the only thing it does is read the payment records and fire up an orchestration by calling the orchestrator function.

[FunctionName("S_CreateReports")]
public static async Task<HttpResponseMessage> Run([HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)]
    HttpRequestMessage req,
    [OrchestrationClient] DurableOrchestrationClient starter,
    ILogger log)
{
    log.LogInformation($"Executing Starter function S_CreateReports at: {DateTime.Now}");

    var orders = await GetOrders();

    var orchestrationId = await starter.StartNewAsync("O_GenerateReports", orders);

    return starter.CreateCheckStatusResponse(req, orchestrationId);
}

The DurableOrchestrationClient.CreateCheckStatusResponse method returns management operation links to:

  1. Query current orchestration status
  2. Send event notifications
  3. Terminate orchestration
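The response body returned by CreateCheckStatusResponse looks roughly like the following sketch (the exact URLs depend on your Function App and runtime version, so they are abbreviated here):

```json
{
  "id": "203b69cc0c54440b929e06210f68f492",
  "statusQueryGetUri": "https://<function-app>.azurewebsites.net/...",
  "sendEventPostUri": "https://<function-app>.azurewebsites.net/...",
  "terminatePostUri": "https://<function-app>.azurewebsites.net/..."
}
```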

Orchestrator Function

The Orchestrator function defines the workflow in code and must be deterministic, since it is replayed multiple times. The reliability of the execution is ensured by saving the execution history in azure storage tables. In our example, for each payment type it generates a single report (blob) by calling an activity function; finally, it aggregates and returns the results.

[FunctionName("O_GenerateReports")]
public static async Task<List<Report>> GenerateReports(
    [OrchestrationTrigger] DurableOrchestrationContext ctx,
    ILogger log)
{
    log.LogInformation("Executing orchestrator function");
    var payments = ctx.GetInput<List<Payment>>();
    var reportTasks = new List<Task<Report>>();

    foreach (var paymentGroup in payments.GroupBy(p => p.CardType))
    {
        var task = ctx.CallActivityAsync<Report>("A_CreateReport", paymentGroup.ToList());
        reportTasks.Add(task);
    }

    var reports = await Task.WhenAll(reportTasks);

    return reports.ToList();
}

Notice that the payments have been passed as an input from the Starter function. That's because the Orchestrator cannot contain non-deterministic code, that is, code that may fetch different results during a replay of the function. Passing the payments from the starter function is good enough for this demo, but in real cases you will probably need other things too, such as configuration values. For any non-deterministic data that you need in your orchestrator function, you MUST call activity functions. You can run several activity functions in parallel by using the Task.WhenAll method. When all activities finish generating the reports for each payment method, the reports variable will contain a list of report objects coming from the activity functions that ran in parallel.

Activity Function

Activity functions are the units of work in durable orchestrations and they use an ActivityTrigger so that they can be called by an orchestrator function. In our example, the A_CreateReport activity function receives a list of payments for a specific payment method and generates the report as a blob file. It then returns the blob's URL for that report.

[FunctionName("A_CreateReport")]
public static async Task<Report> CreateReport(
    [ActivityTrigger] List<Payment> payments,
    IBinder binder, ILogger log)
{
    log.LogInformation($"Executing A_CreateReport");

    var cardType = payments.Select(p => p.CardType).First();
    var reportId = Guid.NewGuid().ToString();
    var reportResourceUri = $"reports/{cardType}/{reportId}.txt";

    using (var report = binder.Bind<TextWriter>(new BlobAttribute(reportResourceUri)))
    {
        report.WriteLine($"Total payments with {cardType}: {payments.Count}");
        report.WriteLine($"Total amount paid: ${payments.Sum(p => p.Amount)}");
    }

    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(Utils.GetEnvironmentVariable("AzureWebJobsStorage"));

    return new Report
    {
        CardType = cardType,
        Url = $"{storageAccount.BlobStorageUri.PrimaryUri.AbsoluteUri}{reportResourceUri}"
    };
}

Running the reportsApp

Right click the reportsApp and publish it to Azure, but this time make sure to select the same Storage Account that was created when you published the previous Azure Function App; otherwise the Starter Function will look in a payments table of a different azure storage account. (Actually, only the App name should be different in the Create App Service window.) When the app is deployed, go to that resource and find the S_CreateReports starter function. Since it's an HTTP triggered function, we can call it directly in the browser. Notice that the URL for this function contains a code token query string.

This is because we set the AuthorizationLevel for this function to AuthorizationLevel.Function.

[FunctionName("S_CreateReports")]
public static async Task<HttpResponseMessage> Run(
    [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)]
    HttpRequestMessage req,
    [OrchestrationClient] DurableOrchestrationClient starter,
    ILogger log)
    // code omitted

Now the interesting part. Paste the URL in the browser and you will get an instant response with information about the orchestration that started. This comes from the following code in the starter function.

var orchestrationId = await starter.StartNewAsync("O_GenerateReports", orders);
return starter.CreateCheckStatusResponse(req, orchestrationId);


Click the statusQueryGetUri to check the status of the orchestration. In my case when the orchestration finished the response was the following:

{
   "instanceId":"203b69cc0c54440b929e06210f68f492",
   "runtimeStatus":"Completed",
   "input":[
      {
         "$type":"ServerlessApp.Models.Payment, ServerlessApp.Models",
         "ChargeId":"ch_1DXqnOHIJGnF6W22L7aMiR3s",
         "CardType":"MasterCard",
         "Amount":12000,
         "Currency":"usd",
         "CustomerId":9,
         "CustomerName":"Teressa Suitt",
         "CustomerEmail":"hello@world.com",
         "Product":"STARTER PLAN",
         "DateCreated":"2018-11-18T13:58:42Z",
         "PartitionKey":"stripe",
         "RowKey":"1ba832b5-5920-49fa-9557-fb8bb4940909",
         "Timestamp":"2018-11-18T13:58:50.3242492+00:00",
         "ETag":"W/\"datetime'2018-11-18T13%3A58%3A50.3242492Z'\""
      },
      {
         "$type":"ServerlessApp.Models.Payment, ServerlessApp.Models",
         "ChargeId":"ch_1DXn9eHIJGnF6W22TicIFksV",
         "CardType":"Visa",
         "Amount":60000,
         "Currency":"usd",
         "CustomerId":9,
         "CustomerName":"Teressa Suitt",
         "CustomerEmail":"test@example.com",
         "Product":"DEV PLAN",
         "DateCreated":"2018-11-18T10:05:26Z",
         "PartitionKey":"stripe",
         "RowKey":"41fc7a80-d583-422c-a720-7b957196d6bb",
         "Timestamp":"2018-11-18T10:05:35.8105278+00:00",
         "ETag":"W/\"datetime'2018-11-18T10%3A05%3A35.8105278Z'\""
      },
      {
         "$type":"ServerlessApp.Models.Payment, ServerlessApp.Models",
         "ChargeId":"ch_1DXqoFHIJGnF6W22RKPldJpV",
         "CardType":"American Express",
         "Amount":99900,
         "Currency":"usd",
         "CustomerId":9,
         "CustomerName":"Teressa Suitt",
         "CustomerEmail":"john@doe.com",
         "Product":"PRO PLAN",
         "DateCreated":"2018-11-18T13:59:35Z",
         "PartitionKey":"stripe",
         "RowKey":"a2739770-74b4-49d7-84ec-c6fb314cd223",
         "Timestamp":"2018-11-18T13:59:43.002334+00:00",
         "ETag":"W/\"datetime'2018-11-18T13%3A59%3A43.002334Z'\""
      },
      {
         "$type":"ServerlessApp.Models.Payment, ServerlessApp.Models",
         "ChargeId":"ch_1DXqpSHIJGnF6W22KW9pzz8K",
         "CardType":"American Express",
         "Amount":6000,
         "Currency":"usd",
         "CustomerId":1,
         "CustomerName":"Errol Medeiros",
         "CustomerEmail":"mario@example.com",
         "Product":"FREE PLAN",
         "DateCreated":"2018-11-18T14:00:50Z",
         "PartitionKey":"stripe",
         "RowKey":"aff5311d-e2f5-4b57-a24a-84bfceacc2b5",
         "Timestamp":"2018-11-18T14:01:35.0534822+00:00",
         "ETag":"W/\"datetime'2018-11-18T14%3A01%3A35.0534822Z'\""
      },
      {
         "$type":"ServerlessApp.Models.Payment, ServerlessApp.Models",
         "ChargeId":"ch_1DXqqTHIJGnF6W22Slz4R0JH",
         "CardType":"MasterCard",
         "Amount":60000,
         "Currency":"usd",
         "CustomerId":4,
         "CustomerName":"Buster Turco",
         "CustomerEmail":"nick@example.com",
         "Product":"DEV PLAN",
         "DateCreated":"2018-11-18T14:01:53Z",
         "PartitionKey":"stripe",
         "RowKey":"b8ecfe78-ae72-4f0c-9312-63ad31a1b5f2",
         "Timestamp":"2018-11-18T14:02:02.5297923+00:00",
         "ETag":"W/\"datetime'2018-11-18T14%3A02%3A02.5297923Z'\""
      }
   ],
   "customStatus":null,
   "output":[
      {
         "CardType":"MasterCard",
         "Url":"https://epaymentsapp201811181133.blob.core.windows.net/reports/MasterCard/3b751f4d-015c-42bb-9bd0-f5a22d2e9750.txt"
      },
      {
         "CardType":"Visa",
         "Url":"https://epaymentsapp201811181133.blob.core.windows.net/reports/Visa/c313e6a1-2147-47ca-b20b-df11c02366b5.txt"
      },
      {
         "CardType":"American Express",
         "Url":"https://epaymentsapp201811181133.blob.core.windows.net/reports/American Express/c234d014-99d0-43ab-9e57-3b535c58d6c5.txt"
      }
   ],
   "createdTime":"2018-11-18T14:10:37Z",
   "lastUpdatedTime":"2018-11-18T14:11:01Z"
}

The response shows both the input that was passed to the orchestrator function and the final result. In my case there were several payments with 3 different card types, so the orchestration created 3 reports in parallel and returned their URLs. The code saves each report as a blob in a reports container, so don't forget to set the Public Access Level again, as we did with the licences container.
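If you prefer the command line over the portal, the container's access level can also be set with the Azure CLI. A sketch, assuming the storage account name matches the one in the report URLs above:

```shell
# Allow anonymous read access to blobs (but not container listing)
# in the reports container
az storage container set-permission \
    --name reports \
    --public-access blob \
    --account-name epaymentsapp201811181133
```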

Debugging Azure Functions

You can debug Azure Functions the same way you debug web applications. First of all, when you create an Azure Function App in Visual Studio, a local.settings.json file is generated that holds all the settings and default keys. The file looks like this:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "AzureWebJobsDashboard": "",
    "AzureWebJobsSendGridApiKey": ""
  }
}

You won't find it in the repository though, because Visual Studio adds it to the Git ignore list. Instead you will find a local.settings-template.json file that I added, which you can simply rename to local.settings.json. You can use the local Azure Storage Emulator as well. While developing the apps, the only thing that was tricky to reproduce was Stripe's webhook. For this I took a sample of what the request looks like from Stripe's website and sent the request locally to the OnSuccessCharge function, as simple as that. Another thing you have to do is install the latest Azure Functions Core Tools. You can use the following npm command:

npm i -g azure-functions-core-tools --unsafe-perm true
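As for replaying Stripe's webhook locally, it boils down to POSTing a Stripe-like JSON body to the function's local endpoint once the Functions host is running. A sketch, assuming the host's default port 7071 and that OnSuccessCharge is exposed at /api/OnSuccessCharge (the route and the trimmed-down event body below are assumptions; use the full sample event from Stripe's site for a realistic payload):

```shell
# Replay a charge.succeeded event against the locally running function
curl -X POST http://localhost:7071/api/OnSuccessCharge \
  -H "Content-Type: application/json" \
  -d '{
        "type": "charge.succeeded",
        "data": {
          "object": {
            "id": "ch_1DXqnOHIJGnF6W22L7aMiR3s",
            "amount": 12000,
            "currency": "usd"
          }
        }
      }'
```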

Next, for each Azure Function App you want to debug, you have to create a Profile with the following settings:

  • Launch: Executable
  • Executable: dotnet.exe
  • Application arguments: %userprofile%\AppData\Roaming\npm\node_modules\azure-functions-core-tools\bin\func.dll host start

You can create a profile by right clicking the project, go to Properties and click the Debug tab.
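Visual Studio stores these profiles in Properties/launchSettings.json, so the end result should look something like the following sketch (the profile name is arbitrary, and the func.dll path is the default global npm install location, which may differ on your machine):

```json
{
  "profiles": {
    "Azure Functions Host": {
      "commandName": "Executable",
      "executablePath": "dotnet.exe",
      "commandLineArgs": "%userprofile%\\AppData\\Roaming\\npm\\node_modules\\azure-functions-core-tools\\bin\\func.dll host start"
    }
  }
}
```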

Last but not least, check that you have the latest Azure Functions and Web Jobs Tools extension installed.

That's it, we're finished! We have seen how Azure Functions work and how they can help us build serverless applications. All the Azure Functions you created scale automatically, and Azure will ensure that they always have the required resources to run regardless of the load they receive. I recommend reading the Performance Considerations page on Microsoft Docs for Azure Functions best practices.

In case you find my blog’s content interesting, register your email to receive notifications of new posts and follow chsakell’s Blog on its Facebook or Twitter accounts.
