Tim Murphy's .NET Software Architecture Blog

Getting Up To Speed With Terraform For Azure Aug 07


I feel like I have gone back to the '90s, when every developer needed to know how everything worked in the infrastructure.  I guess it is true that everything goes in cycles.  DevOps has many faces, and lately I have been working with Terraform to implement infrastructure as code.  Below I will give a brief (maybe not so brief) overview of this tool and how to get started with it.  This post exists in part because I found both the documentation and the online tutorials lacking in even the simple explanations.


One of the first things I noticed about Terraform was the install process. It is simply a single executable with no actual installer. It isn’t horrible, but I can’t remember the last time I had to set the PATH environment variable so that I could get an application to run.

The product works by keeping track of the “state” of your environment and comparing it to the script that you are telling it to run.  On the surface this is just what you want, but you can end up fighting the implementation if you are not careful.

You also need to get up to speed with their reuse of terminology.  They call your main script file (.tf extension) a configuration file.  Given that we are configuring system resources it makes sense, but we already have the concept of configurations in development, and this overloading of the term can be confusing at first.

State is the term used to describe what Terraform believes is the current configuration of your infrastructure environment.  It refreshes that state each time you apply a new configuration and is the base of comparison for your plan.

The plan is a list of proposed changes based on your definition and the saved state of your infrastructure.  Anything in the plan marked with a plus sign (+) will be created.  Anything with a minus sign (-) will be destroyed.
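As a hedged illustration (the exact output varies by Terraform version), a plan for a single new resource group might look roughly like this:

```text
Terraform will perform the following actions:

  # azurerm_resource_group.rg will be created
  + resource "azurerm_resource_group" "rg" {
      + id       = (known after apply)
      + location = "eastus"
      + name     = "resourcegroupname"
    }

Plan: 1 to add, 0 to change, 0 to destroy.
```

The summary line at the bottom is a quick sanity check: if you expected to add one resource and the plan says it will destroy three, stop and investigate before applying.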

Writing Code

The scripting language is fairly simple. I was surprised to learn, after working with it for a while, that Terraform itself is implemented in Go.  Since Go is a language I have not used before, I will reserve judgment until I investigate it further.

Variables files contain the definitions of variables.  They have the same .tf extension as the configuration file, and seem to be pulled together simply by being in the same directory.  Each variable is defined by a name, a type and a description.

variable "variable_name" {
     type        = string
     description = "Some description"
}

Variable value files hold the testing values for your Terraform variables.  They are simply a list of assignment statements as shown below.

variable_name1 = "variable 1 value"
variable_name2 = "variable 2 value"
variable_name3 = "variable 3 value"
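These values can then be referenced in a configuration through the var prefix.  A hypothetical sketch reusing the names above (the resource type is just for illustration):

```hcl
resource "azurerm_resource_group" "example" {
  name     = var.variable_name1   # value supplied by the .tfvars file
  location = var.variable_name2
}
```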

Configurations use providers and resources for each platform and platform resource that you wish to instantiate.  For Azure the provider is azurerm.  Each resource type is prefixed with the provider designation, and you follow its name with an identifying string that can be used for the remainder of the configuration. In the example below "rg" is the identifier.  Below is the absolute minimum configuration for a new resource group.  There are other optional sections, like "locals" which sets up local variables, but that is for another post.

provider "azurerm" {
     version  = "=2.2.0"
     features {}
}

resource "azurerm_resource_group" "rg" {
     name     = "resourcegroupname"
     location = "azure region name"
}

Testing The Code

When it comes to running your Terraform configuration script I would suggest using PowerShell and the Azure CLI.  This will allow you to easily log into your Azure account for testing and specify which subscription you want to use, since this setup is assumed to already be in place in just about any Terraform example you will find.
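Logging in and selecting a subscription with the Azure CLI looks like the following (the subscription name is a placeholder):

```powershell
# Log into your Azure account interactively
az login

# List the subscriptions available to the account
az account list --output table

# Select the subscription Terraform should target
az account set --subscription "My Subscription Name"
```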

You then need to understand the lifecycle of Terraform: init, plan, apply, destroy.

First navigate to the directory that contains the configuration you want to test.  From this location execute the statement terraform init.  This will initialize the working directory and download the provider plugins your configuration needs into a hidden .terraform directory.

Next execute the terraform plan statement.  This will check your configuration for errors and compare it to the state of your subscription. As described above, you will get a list of each change that is calculated to be made.

In order to commit your configuration to the default subscription execute the terraform apply statement.  This will give you an output detailing each operation and a confirmation of the original plan.

Lastly, if you want to remove the resources created by your configuration execute the terraform destroy statement.
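Putting the whole lifecycle together from a PowerShell prompt, assuming the current directory holds your .tf files (the path is a placeholder):

```powershell
cd C:\source\terraform-poc   # placeholder path to your configuration
terraform init               # initialize the directory, download providers
terraform plan               # preview the changes that would be made
terraform apply              # commit the changes (prompts for confirmation)
terraform destroy            # remove the created resources (also prompts)
```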


This post covered only the most basic aspects of Terraform.  It should be enough to get you started and hopefully covers some of the explanations that most people leave out.  From here you should refer to the developer documentation for details on each of the resources that can be used and some of the optional sections not included here.

In a future post I will describe how to integrate Terraform with your Azure DevOps pipeline.

New Job Search World In 2020 May 22


This year I got the worst April Fools joke.  I was laid off.  For the first time in my life I was without a job.  I came to the rude awakening that things were much different than the last time I had searched for a new position, over 10 years ago.  COVID-19 was in full force.  It was shutting down many companies and causing hiring freezes.  On top of that, it seems the IT industry as a whole has decided it is too risky to hire people directly, and most opportunities are either contract or contract-to-hire. This was a whole new ballgame.  I figured I would create a post with what I learned.

One of the first things that I found is that almost no companies post directly to job sites anymore.  In the past I would have been talking mostly with a recruiter in the hiring company’s HR department.  Today, almost every posting is through a recruiting firm.  I had one company presented to me by at least 20 different recruiters.  It is insanity.  Most of these recruiters don’t even look at your resume before sending you an email that says you would be a great fit for their client.  The advice I would give here is to be patient.  It is a numbers game where you have to pull enough weeds to find good placement firms.  There are still ones that we used to call head hunters.  They listen to what you are looking for.  They know what the industry terms mean and what the client actually needs.  When you find one of these, keep their number for a rainy day.

The most frustrating thing I found is that most companies aren’t looking for good architects and developers.  They are looking for unicorns.  They want the employee that doesn’t actually exist.  You have to know every nuance of every technology and have used it just the way they do.  Don’t get me wrong, this has been a problem in the industry for the more than 25 years I have worked in it.  Now it has become the norm.  You have to prove to them that even if you don’t know a particular product or tool, you have used similar tools to solve the same kinds of problems.  Show them that problems aren’t unique and can be solved with multiple approaches.

Once you get an interview you need to do more than show them that you know their technology.  You need to connect with them.  Traditionally people tell you to find out about the company and what they do.  That is great, but find something in common with the interviewer through their LinkedIn profile.  You may have gone to the same school, worked at the same places, know the same people or even have the same interests.  This makes a much better connection than trying to tell them how you understand their business.

Lastly, you need to have a story. What makes you a strong candidate for their company?  What in your life and your career has given you the experience to handle whatever they throw your way?  If you can tell them what you learned from each project you have had and how you improved by it, then you are much more attractive as a candidate.

The market is tough right now.  There are so many people out of work that it is an employer’s market. You have to do whatever you can to stand out.  I hope that these few thoughts can help someone on their search.

Getting Back Into The Swing Aug 23

It has been forever since I have posted to this blog.  Since I have been spending all my time working on a revision of a .NET Windows Service and ASP.NET Web Forms application, I have been feeling the need to start learning something new and get back to writing.  As this project approaches the point of winding down, I am looking to get back to work on Azure and general cloud technologies.  Stay tuned …

Time To Reboot Apr 05


It has been nearly six months since I blogged last.  Thankfully this means I have been busy working on client projects.  It is a new year and just back from spring break so I think it is time to start digging into technical topics again.  This post will actually help to do some testing for an Azure Logic App POC that I am working on.  Watch for a future post on this and other Azure topics. Until then …

Starting An Umbraco Project Nov 10


As I have been documenting Umbraco development I realized that people need a starting point.  This post will cover how to start an Umbraco project using an approach suitable for ALM development processes.

The criteria I feel a maintainable solution must meet include a customizable development project which can easily be kept in source control, along with a robust and replicable database.  Of course this has to fall within the options available with Umbraco.  For me this means an ASP.NET web application and a SQL Server database.  Let’s take a look at the steps required to get started with this architecture.

Create The Database

I prefer a standard SQL Server database instance over SQL Server Express due to its manageability.  For each Umbraco instance we need to create an empty database and then a SQL Server login and a user with permissions to alter the database structure.  You will need the login credentials when you first start your site.

Create The Solution

This is the easiest part of an Umbraco project.  The base of each Umbraco solution I create starts with an empty ASP.NET Web Application.  Once that is created, open the NuGet package manager and install the UmbracoCms package.  After that it is simply a matter of building and executing the application.

Finish Installation

As the ASP.NET application starts it will present the installation settings.  The first prompt you will get is to create your admin credentials as shown below.  Fill these fields in but don’t press any buttons.


The key is to be sure to click the Customize button before the Install button, as the installer doesn’t verify whether you want to use an existing database before running the install.  It will simply create a SQL Server Express instance on its own.  Pressing the Customize button will show the configuration screen shown below.  Fill in your SQL Server connection information and click Continue.



Once you start the install sit back and relax.  In a few minutes you will have an environment that is ready for your Umbraco development.  This will be the starting point for other future posts.  Stay tuned.

Relating Umbraco Content With the Content Picker Nov 08


After addressing Umbraco team development in my previous post I want to explore maintaining relationships between pieces of content in Umbraco and accessing them programmatically here.

For those of us who have a natural tendency to think of data entities and their relationships, working within a CMS hierarchy can be challenging.  Add to that the fact that users don’t only want to query within that hierarchy and things get even more challenging.  Fortunately we will see here that by adding the Content Picker to your document type definition and a little bit of LINQ to your template, you can deliver on all these scenarios.

Content Picker


Adding the Content Picker to your document type definition is the easiest part of the process but make sure that you use the new version and not the one that is marked as obsolete.  You will then be presented with a content tree that allows you to navigate to and select any node in your site.

Querying Associated Content

The field in your content will return the ID of the content instance you associated using the Content Picker.  Unfortunately it actually returns it as an HtmlString, so you need to use the ToString method before utilizing it or you will get unexpected results and compile errors.

In the example below I am looking for the single piece of content selected in the Content Picker.  The LINQ query shows a more complicated approach, but it also gives you an idea of how you could get a list of all nodes of a certain content type and use a lambda expression to filter it.  It requires that you first back up to the ancestors of the content you are displaying and find the root.

The easier way is to use the Content or TypedContent methods of the UmbracoHelper.  In future posts I will show alternate methods for finding the root node as well.
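As a sketch of that simpler route, assuming the same hypothetical "contentField" property alias, the picked node can be fetched directly by ID from a Razor template:

```csharp
@{
    // "contentField" is the hypothetical Content Picker property alias.
    var contentId = Umbraco.Field("contentField");

    // TypedContent returns the picked node as strongly typed IPublishedContent,
    // avoiding the walk through Ancestors() entirely.
    var associatedContent = Umbraco.TypedContent(int.Parse(contentId.ToString()));
}

@if (associatedContent != null)
{
    <h2>@associatedContent.Name</h2>
}
```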

    var contentId = Umbraco.Field("contentField");

    var associatedContent = Model.Content.Ancestors().FirstOrDefault().Children<ContentModels.MyContentType>().Where(x => x.Id == int.Parse(contentId.ToString())).FirstOrDefault();


While the Umbraco team needs to create some better documentation for this feature it is extremely useful for building and using relationships between content in your Umbraco site.

How Did I Become An IT Consultant Curmudgeon? Nov 02


I have been accused of being a curmudgeon by more than one co-worker.  The short, pithy answers to the question of how I got to this point would be “experience” or “it comes with age”.  But what is the real reason, and does it have any benefit?

Firstly, I was raised in an Irish-German family which by default makes me surly and sarcastic.  At almost half a century of this habit I don’t see any change coming there.  I also find that most developers have similar traits along with a dry sense of humor.

The main thing that gained me the label of curmudgeon is the knack for identifying issues that could adversely affect a project.  Things like scope creep and knowledge voids that could push a project beyond its budget and deadlines.  This tendency has come from 20 years of being a technical lead and architect.  It is an attribute that has served me well.

The place where this becomes a thorn in some team members’ sides is that with the years I think I have become more blunt with my assessments.  I am still capable of tact, but probably need to employ it a little more liberally.

Ultimately, I think being a self-aware curmudgeon is a good thing.  As long as we continue to learn and strive to work with people, a little surliness is just the spice of life.

Umbraco Team Development Oct 27


The Umbraco CMS platform gives you the ability to create a content managed site with the familiar development process of ASP.NET MVC.  If you are the only developer things don’t get too complicated, but the moment you are sharing your solution with a team you gain a few wrinkles that have to be addressed.

Syncing Content and Document Types

Umbraco saves its content partially to the file system and partially to the database.  This complicates sharing document types, templates and content between developers.  While Courier allows you to sync these elements between your local machine and your stage and production servers, it doesn’t do well between two localhost instances.

We addressed this problem using the uSync package which is available in the Umbraco package page under the developer section of your site.  It allows you to automatically record changes every time you make a save in the back office.  They are saved to files that can then be transferred to another system and will automatically be imported when the application pool restarts.

Other Special Cases

Another area that you will have to address is the content indexing files and the location of the auto-generated classes.  Files like “all.dll.path” will need to be ignored in your source control, since they contain a fully qualified directory location which will cause problems as you pull source code to multiple machines.

Of course you will also need to manage your web.config files as you would with any ASP.NET based solution to make sure that you don’t step on each developer’s local settings.


If you follow these couple of guidelines you will overcome some of the more annoying aspects of developing an Umbraco solution.  Ultimately it will make the life of your developers much easier.

Azure Functions Visual Studio 2017 Development Aug 10


The development tools and processes for Azure Functions are ever changing.  We started out only being able to create a function through the portal, which I did a series on.  We then got a template in VS2015, but it really didn’t work very well.  They have since made it possible to create functions as Web Application libraries, and now we are close to the release of a VS2017 template.

This post will walk through the basics of using the VS2017 Preview with the Visual Studio Tools For Azure Functions which you can download here.

Create New Project

To create the initial solution open up the New Project dialog and find the Azure Function project type, name your project and click OK.


Create New Function

To add a function to your project, right-click the project and select New Item.  In the New Item dialog select Azure Function and provide a name for the class and click Add. 


The next dialog which will appear is the New Azure Function dialog.  Here you will select the function trigger type and its parameters.  In the example below a timer trigger has been selected and a Cron schedule definition is automatically defined to execute every 5 minutes.

Also in this dialog you can set the name of the function.  When you compile, a folder will be created with that name in your bin directory, which will be used later for deployment.


Add Bindings

With each generation of Azure Function development the way you initially define bindings changes (even if they stay the same behind the scenes).  Initially you had to use the portal Integrate page.  This had its advantages.  It would visually prompt you for the type of binding and the parameters for that binding.

With the Visual Studio template you have to add attributes to the Run method of your function class.  This requires that you know what the attribute names are and what parameters are available and their proper values.  You can find a list of the main binding attributes here.
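A hedged sketch of what those attributes look like on a Run method — the function name, queue name and schedule here are placeholder values, not from the original tooling walkthrough:

```csharp
// Sketch of attribute-based bindings in the VS2017 Azure Functions tooling.
// "TimerPoc" and "output-queue" are hypothetical names.
[FunctionName("TimerPoc")]
public static void Run(
    [TimerTrigger("0 */5 * * * *")] TimerInfo myTimer,  // trigger binding
    [Queue("output-queue")] out string outputMessage,   // output binding
    TraceWriter log)
{
    outputMessage = "example payload";
    log.Info("TimerPoc executed");
}
```

At compile time this attribute metadata is what gets translated into the function.json trigger and binding definitions.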

At compile time the attributes will be used to generate a function.json file with your trigger and bindings definition.

Add NuGet Packages

If you are building functions in the portal you have to create a project.json file that defines the packages you want to include.  This requires that you know the format of the file.  Thankfully, with the Visual Studio template you can use the normal NuGet Package Manager.


There are a couple of ways to deploy your solution.  In the end a Function App is a specialized App Service.  This means you have the same deployment options: Visual Studio, PowerShell or VSTS continuous deployment.  The main difference is that you don’t have a web.config file and have to manage your app settings and connection strings through the portal.  These can be reached by following the Application Settings link under the Configured Features section of the Function App Overview page.



While creating Azure Functions still isn’t a WYSIWYG, turnkey process, the latest incarnation gives us an ALM-capable solution.  I believe this is the development approach that will stabilize for the foreseeable future, and anyone who is creating Functions should invest in learning it.

Query Application Insights REST API To Create Custom Notifications Aug 04


Application Insights is one of those tools that has been around for a number of years now, but is finally getting understood as more companies move to Azure as a cloud solution.  It has become an amazing tool for monitoring the performance of your application, but it can also work as a general logging platform as I have posted before.

Now that you are capturing all this information how can you leverage it?  Going to the Azure portal whenever you want an answer is time consuming.  It would be great if you could automate this process.  Of course there are a number of metrics that you can create alerts for directly via the portal, but what if you want a non-standard metric or want to do something beside just send an alert?

Fortunately Microsoft has a REST API in beta for Application Insights.  It allows you to check standard metrics as well as run custom queries as you do in the Analytics portal.  Let’s explore how to use this API.

In this post I will show how to create a demo that implements an Azure Function which calls the Application Insights REST API and then sends the results out using SendGrid.  I created the examples with the VS2017 Preview and the new Azure Functions templates.

Generate Custom Events

First we need some data to work with.  The simplest way is to leverage the TrackEvent and TrackException methods of the Application Insights API.  In order to do this you first need to set up a TelemetryClient.  The code below I have as part of the class level variables.

        private static string appInsightsKey = System.Environment.GetEnvironmentVariable("AppInsightKey", EnvironmentVariableTarget.Process);
        private static TelemetryClient telemetry = new TelemetryClient();
        private static string key = TelemetryConfiguration.Active.InstrumentationKey = appInsightsKey; //System.Environment.GetEnvironmentVariable("AN:InsightKey", EnvironmentVariableTarget.Process);

After that it is simple to call the TrackEvent method on the TelemetryClient object to log an activity in your code (be aware it may take 5 minutes for an event to show up in Application Insights).

            telemetry.TrackEvent($"This is a POC event");

Create a VS2017 Function Application

I will have another post on the details in the future, but if you have Visual Studio 2017 Preview 15.3.0 installed you will be able to create an Azure Functions project.


Right click the project and select the New Item context menu option and select Azure Function as shown below.


On the New Azure Function dialog select TimerTrigger and leave the remaining options as default.


Call Application Insights REST API

Once there are events in the customEvents collection we can write a query and execute it against the Application Insights REST API.  To accomplish this the example uses a simple HttpClient call.  The API page for Application Insights can be found here and contains the URLs and formats for each call type.  We will be using the Query API scenario, which will be set up with a couple of variables.

        private const string URL = "https://api.applicationinsights.io/beta/apps/{0}/query?query={1}";
        private const string query = "customEvents | where timestamp >= ago(20m) and name contains \"This is a POC event\" | count";

The call to the service is a common pattern using the HttpClient as shown below.  Add this to the Run method of your new function.

            HttpClient client = new HttpClient();
            client.DefaultRequestHeaders.Accept.Add(
                new MediaTypeWithQualityHeaderValue("application/json"));
            client.DefaultRequestHeaders.Add("x-api-key", appInsightsApiKey);
            var req = string.Format(URL, appInsightsId, query);
            HttpResponseMessage response = client.GetAsync(req).Result;

Process Results

After we have a result we can deserialize the JSON using JSON.NET and send it to our support team via SendGrid.  You will have to add the NuGet package Microsoft.Azure.WebJobs.Extensions.SendGrid.

Modify the signature of your function’s Run method to match the code sample shown here.  In this example “message” is defined as an output variable for the Azure Function which is defined as a binding by using the SendGrid attribute. 

        public static void Run([TimerTrigger("0 */15 * * * *")]TimerInfo myTimer, TraceWriter log, [SendGrid(ApiKey = "SendGridApiKey")]out Mail message)

We will also need a structure to deserialize the returned JSON message into. If you look at the message itself it can appear rather daunting but it breaks down into the following class structure.  Create a new class file and replace the default class with this code.

    public class Column
    {
        public string ColumnName { get; set; }
        public string DataType { get; set; }
        public string ColumnType { get; set; }
    }

    public class Table
    {
        public string TableName { get; set; }
        public List<Column> Columns { get; set; }
        public List<List<object>> Rows { get; set; }
    }

    public class RootObject
    {
        public List<Table> Tables { get; set; }
    }
The last code example below performs the deserialization and creates the SendGrid email message.  Insert this to the Run method after the HttpClient call we previously added.

                string result = response.Content.ReadAsStringAsync().Result;

                RootObject aiResult = JsonConvert.DeserializeObject<RootObject>(result);

                string countString = aiResult.Tables[0].Rows[0][0].ToString();

                string recipientEmail = System.Environment.GetEnvironmentVariable($"recipient", EnvironmentVariableTarget.Process);
                string senderEmail = System.Environment.GetEnvironmentVariable($"sender", EnvironmentVariableTarget.Process);

                var messageContent = new Content("text/html", $"There were {countString} POC records found");

                message = new Mail(new Email(senderEmail), "App Insights POC", new Email(recipientEmail), messageContent);

Publish your solution to an Azure Function App by downloading the Function App’s profile and using the VS2017 project’s publish options.  You will also need to define the application settings referred to in the code so that they are appropriate for your environment.  At that point you will be able to observe the results of your efforts.


This post demonstrates how a small amount of code can give you the ability to leverage Application Insights for more than just out-of-the-box statistics alerts.  This approach is flexible enough to be used for reporting on types of errors and monitoring whether subsystems remain available.  Combining the features within Azure’s cloud offerings gives you capabilities that would cost much more in development time and resources if they were done on premises.

My only real problem with this approach is that I would prefer to be accessing values in the result by name rather than indexes because this makes the code less readable and more brittle to changes.

Try these examples out and see what other scenarios they apply to in your business.

Logging To Application Insights In Azure Functions Feb 16

In my last post I covered logging in Azure Functions using TraceWriter and log4net.  Both of these work, but Application Insights rolls all your monitoring into one solution, from metrics to tracking messages.  I have also heard a rumor that in the near future this will be an integrated part of Azure Functions.  Given these factors it seems wise to start giving it a closer look.

So how do you take advantage of it right now?  If you go to GitHub there is a sample written by Christopher Anderson, but let me boil it down.  First we need to create an Application Insights instance and grab the instrumentation key.

When I created my Application Insights instance I chose the General application type and the same resource group as my function app.


Once the instance has been allocated you will need to go into the properties blade.  There you will find a GUID for the Instrumentation Key.  Save this off so that we can use it later.

You then need to add the Microsoft.ApplicationInsights NuGet package by creating a project.json file in your function.  Insert the following code in the new file and save it.  If you have your log window open you will see the package being loaded.

  {
    "frameworks": {
      "net46": {
        "dependencies": {
          "Microsoft.ApplicationInsights": "2.1.0"
        }
      }
    }
  }
In the sample code readme it says that you need to add a specific app setting, but as long as your code reads from the appropriate setting, that is the most important part.  Take the Instrumentation Key that you saved earlier and place it in the app settings.  In my case I used one called InsightKey.

Next set up your TelemetryClient object like the code here, creating global static variables that can be used throughout your application.  After that we are ready to start tracking our function.

 private static TelemetryClient telemetry = new TelemetryClient();   
 private static string key = TelemetryConfiguration.Active.InstrumentationKey = System.Environment.GetEnvironmentVariable("InsightKey", EnvironmentVariableTarget.Process);  

To track an event or an exception simply call the appropriate method.  I prefer to encapsulate them in their own methods where I can standardize the usage.  I have added the function name, method name, and context ID from the function execution to make it easier to search for and associate entries.

 private static void TrackEvent(string desc, string methodName)
 {
    telemetry.TrackEvent($"{FunctionName} - {methodName} - {contextId}: {desc}");
 }

 private static void TrackException(Exception ex, string desc, string methodName)
 {
    Dictionary<string, string> properties = new Dictionary<string, string>() { { "Function", FunctionName }, { "Method", methodName }, { "Description", desc }, { "ContextId", contextId } };
    telemetry.TrackException(ex, properties);
 }


This isn’t an instant-answer type of event store.  At the very least there is a few minutes’ delay between your application logging an event or exception and when it is visible in the Analytics board.

Once you are logging and sending metrics to Application Insights you need to read the results.  From your Application Insights main blade click on the Analytics button at the top of the overview.  It will open a new page that resembles what you see below.


Click the new tab button at the top next to the Home Page tab.  This will open a query window. The query language has a similar structure to SQL, but that is about as far as it goes.

The table objects are listed in the left navigation, with the fields listed as you expand out each table.  Fortunately IntelliSense works pretty well in this tool.  You have what would normally be considered aggregate functions that make life easier.  As you can see below, you can use the contains syntax, which acts similar to a SQL LIKE comparison.  There are also date range functions, like the ago function used below.  I found that these two features can find most things you are looking for.
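For example, a count of recent custom events, along the lines of the query used in the REST API post earlier in this blog (the event name here is a placeholder):

```text
customEvents
| where timestamp >= ago(1h) and name contains "POC event"
| count
```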



This post didn’t cover a lot of the native functionality in Application Insights, but hopefully it gives you a starting point to instrument your Azure Functions.  The flexibility of this tool, along with the probability of it being built into Functions in the future, makes it a very attractive option.  Spend some time experimenting with it and I think you will find it pays dividends.

Implementing Logging In Azure Functions Feb 13


Logging is essential to the support of any piece of code.  In this post I will cover two approaches to logging in Azure Functions: TraceWriter and log4net.


The TraceWriter that is available out of the box with Azure Functions is a good starting point.  Unfortunately it is short lived: only 1000 messages are kept at a maximum, and at most they are held in file form for two days.  That being said, I would not skip using the TraceWriter.

Your function will have a TraceWriter object passed to it in the parameters of the Run method.  You can use the Debug, Error, Fatal, Info and Warn methods to write different types of messages to the log as shown below.

log.Info($"Queue item received: {myQueueItem}");

Once it is in the log you need to be able to find the messages.  The easiest way to find the log files is through Kudu.  You have to drill down from LogFiles –> Application –> Functions –> Function –> <your_function_name>.  At this location you will find a series of .log files if your function has been triggered recently.


The other way to look at your logs is through Table Storage via the Microsoft Azure Storage Explorer.  After attaching to your account open the storage account associated with your Function App.  Depending on how you organized your resource groups you can find the storage account by looking at the list of resources in the group that the function belongs to.

Once you drill down to that account look for the tables named AzureWebJobsHostLogsyyyymm as you see below.


Opening these tables will allow you to see the different types of log entries saved by the TraceWriter.  If you filter to the partition key “I” you will see the entries your functions posted.  You can further filter by name and date range to identify specific log entries.



If the default TraceWriter isn’t robust enough you can implement logging via a framework like log4net.  Unfortunately because of the architecture of Azure Functions this isn’t as easy as it would be with a normal desktop or web application.  The main stumbling block is the lack of ability to create custom configuration sections which these libraries rely on.  In this section I’ll outline a process for getting log4net to work inside your function.

The first thing that we need is the log4net library.  Add the log4net NuGet package by placing the following code in the project.json file.

  "frameworks": {
      "dependencies": {
        "log4net": "2.0.5"

To get around the lack of custom configuration sections we will bind a blob file with your log4net configuration.  Simply take the log4net configuration section and save it to a text file.  Upload that to a storage container and bind it to your function using the full storage path.


Add the references to the log4net library and configure the logger.  Once you have that simply call the appropriate method on the logger and off you go.  A basic sample of the code for configuring and using the logger is listed below.  In this case I am actually using a SQL Server appender.

using System;
using System.Xml;
using log4net;
using log4net.Config;

public static void Run(string input, TraceWriter log, string inputBlob)
{
    log.Info($"Log4NetPoc manually triggered function called with input: {input}");

    // Load the log4net configuration from the blob bound to inputBlob
    XmlDocument doc = new XmlDocument();
    doc.LoadXml(inputBlob);
    XmlElement element = doc.DocumentElement;
    XmlConfigurator.Configure(element);

    ILog logger = LogManager.GetLogger("AzureLogger");

    logger.Debug("Test log message from Azure Function", new Exception("This is a dummy exception"));
}


By no means does this post cover every aspect of these two logging approaches or all possible logging approaches for Azure Functions.  In future posts I will also cover Application Insights.  In any case it is always important to have logging for your application.  Find the tool that works for your team and implement it.

Building Azure Functions: Part 3 – Coding Concerns Feb 02

Image result for azure functions logo

In this third part of my series on Azure Function development I will cover a number of development concepts and concerns.  These are just some of the basics.  You can look for more posts coming in the future that will cover specific topics in more detail.

General Development

One of the first things you will have to get used to is developing in a very stateless manner.  Any other .NET application type has a class at its base.  Functions, on the other hand, are just what the name says: a method that runs within its own context.  Because of this you don’t have anything resembling a global or class level variable.  This means that if you need something like a logger in every method you have to pass it in.

[Update 2016-02-13] The above information is not completely correct.  You can implement function global variables by defining them as private static.

You may find that it makes sense to create classes within your function either as DTOs or to make the code more manageable.  Start by adding a .csx file in the files view pane of your function.  The same coding techniques and standards apply as in your Run.csx file; otherwise develop the class as you would any other .NET class.
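For example, a small DTO class might live in its own .csx file and be pulled into Run.csx with the #load directive.  The Order class and the file name here are hypothetical:

```csharp
// Order.csx - a simple DTO used by the function.
// In Run.csx reference it with: #load "Order.csx"
public class Order
{
    public string Id { get; set; }
    public decimal Total { get; set; }
}
```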


In the previous post I showed how to create App Settings.  If you took the time to create them you are going to want to be able to retrieve them.  The GetEnvironmentVariable method of the Environment class gives you the same capability as using AppSettings from ConfigurationManager in traditional .NET applications.
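A minimal sketch of reading a setting follows.  The setting name MyQueueConnection and its value are hypothetical; in a real Function App the value would come from the App Settings blade rather than being set in code:

```csharp
using System;

// App Settings and connection strings surface as environment variables
// inside the Function App process.
Environment.SetEnvironmentVariable("MyQueueConnection", "UseDevelopmentStorage=true"); // stand-in for the portal setting
string connectionString = Environment.GetEnvironmentVariable("MyQueueConnection");
```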


A critical coding practice for functions that use perishable resources such as queues is to make sure that if you catch and log an exception you rethrow it so that your function fails.  This will cause the queue message to remain on the queue instead of being dequeued.
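A sketch of the pattern, where the Handle method and the Action delegates are stand-ins for your function body and the TraceWriter (which are not available outside the Functions runtime):

```csharp
using System;

public static class QueueHandler
{
    public static void Handle(string message, Action<string> logError, Action<string> process)
    {
        try
        {
            process(message);
        }
        catch (Exception ex)
        {
            logError(ex.Message);
            throw; // rethrow so the function fails and the message is not dequeued
        }
    }
}
```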



It can be hard to read the log when the function is running at full speed since instances run in parallel but report to the same log.  I would suggest that you add the process ID to your TraceWriter logging messages so that you can correlate them.
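One way to do that, as a sketch (the message text is just an illustration):

```csharp
using System.Diagnostics;

// Prefix each message with the process ID so entries from parallel
// instances can be correlated in the shared log.
int pid = Process.GetCurrentProcess().Id;
string message = $"[{pid}] Queue item received";
```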

Even more powerful is the ability to remote debug functions from Visual Studio.  To do this open your Server Explorer and connect to your Azure subscription.  From there you can drill down to the Function App in App Services and then to the run.csx file in the individual function.  Once you have opened the code file and placed your breakpoints, right-click the function and select Attach Debugger.  From there it acts like any other Visual Studio debugging session.


Race Conditions

I wanted to place special attention on this subject.  As with any highly parallel/asynchronous processing environment you will have to make sure that you take into account any race conditions that may occur.  If at all possible keep the functionality that you create to non-related pieces of data.  If it is critical that items in a queue, blob container or table storage are processed in order then Azure Functions are probably not the right tool for your solution.


Azure Functions are one of the most powerful units of code available.  Hopefully this series gives you a starting point for your adventure into serverless applications and you can discover how they can benefit your business.

Building Azure Functions: Part 2–Settings And References Feb 01

Image result for azure functions logo

This is the second post in a series on building Azure Functions.  In this post I’ll continue by describing how to add settings to your function and reference different assemblies to give you more capabilities.



Functions do not have configuration files so you must add app settings and connection strings through the settings page.  The settings are maintained at the Function App level and not for individual functions.  While this allows you to share common configuration values, it means that if your functions need different configuration values, each will have to live in a separate Function App.

To get to them go to the Function App Settings link at the lower left of your Function App’s main page and then click the Configure App Settings button which will bring you to the blade shown below.  At that point it is the same as any .NET configuration file.


At some point I would like to see the capability of importing and exporting settings since maintaining them individually by hand leads to human error and less reliable application lifecycle management.

Another drawback to the Azure Functions development environment is that at the time of this post you don’t have the ability to leverage custom configuration sections.  The main place I have found this to cause heartburn is using logging libraries such as log4net where the most common scenario is to use a custom configuration section to define adapters and loggers.

Referencing Assemblies And Nuget

No .NET application is very useful if you can’t reference the .NET Framework as well as third party and your own custom assemblies.  There is no add references menu for Azure Functions, but there are multiple ways to add references.  Let’s take a look at each.

There are a number of .NET assemblies that are automatically referenced for your Function application.  There is a second group of assemblies that are available but need to be specifically referenced.  For a partial list consult the Azure Functions documentation here.  You can also load your own custom assemblies or bring in Nuget packages.

In order to load Nuget packages you need to create a project.json file.  Do this by clicking the View Files link in the upper right corner of the editor blade and then the Add link below the file list pane. 

project.json files require the same information that is contained in a packages.config file, but it is formatted as JSON as shown in the example below.  Once you save this file and reference the assembly in your Run.csx file Azure will load the designated packages.
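As an example, a project.json that pulls in a Nuget package looks like the following (the package and version are just illustrations):

```json
{
  "frameworks": {
    "net46": {
      "dependencies": {
        "Newtonsoft.Json": "10.0.3"
      }
    }
  }
}
```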


If you have custom libraries that you want to leverage you will need to add a bin folder to your function.  The easiest way I have found to do this is to open the App Service Editor from the Function App Settings page.  This will open up what is essentially Visual Studio Code in a browser.  Navigate the file tree to your function under wwwroot.  Right click your function name and select New Folder.  The folder must be named “bin”.  You can then right click the bin folder and upload your custom assemblies.

Once you have an assembly available you need to reference it using the “#r” directive as shown below.  You will notice that native assemblies and Nuget package loaded libraries do not need the dll extension specified, but it must be added for custom assemblies.

#r "System.Xml"
#r "System.Xml.Linq"
#r "System.Data.Entity"
#r "My.Custom.Data.dll"
#r "My.Custom.Domain.dll"
#r "Newtonsoft.Json"
#r "Microsoft.Azure.Documents.Client"
#r "Microsoft.WindowsAzure.Storage"

Now we are ready to declare our normal using statements and get down to the real business of functions.


After this post we have our trigger, bindings, settings and dependent assemblies.  This still isn’t enough for a useful function.  In the next post I will cover coding and debugging concerns to complete the story.

Building Azure Functions: Part 1–Creating and Binding Jan 31

Image result for azure functions logo

The latest buzz word is serverless applications.  Azure Functions are Microsoft’s offering in this space.  As with most products that are new on the cloud Azure Functions are still evolving and therefore can be challenging to develop.  Documentation is still being worked on at the time I am writing this so here are some things that I have learned while implementing them.

There is a lot to cover here so I am going to break this topic into a few posts:

  1. Creating and Binding
  2. Settings and References
  3. Coding Concerns

Creating A New Function

The first thing you are going to need to do is create a Function App.  This is an App Services product that serves as a container for your individual functions.  The easiest way I’ve found to start is to go to the main add (+) button on the Azure Portal and then do a search for Function App.


Click on Function App and then the Create button when the Function App blade comes up.  Fill in your app name remembering that this is a container and not your actual function.  As with other Azure features you need to supply a subscription, resource group and location.  Additionally for a Function App you need to supply a hosting plan and storage account.  If you want to take full benefit of Function App scaling and pricing leave the default Consumption Plan.  This way you only pay for what you use.  If you choose App Service Plan you will pay for it whether your function is actually processing or not.


Once you click Create the Function App will start to deploy.  At this point you can create your first function in the Function App.  Once you find your Function App in the list of App Services it will open the blade shown below.  It offers a quick start page, but I quickly found that it didn’t give me options I needed beyond a simple “Hello World” function.  Instead press the New Function link at the left.  You will be offered a list of trigger based templates which I will cover in the next section.




Triggers define the event source that will cause your function to be executed.  While there are many different triggers and there are more being added every day, the most common ones are included under the core scenarios.  In my experience the most useful are timer, queue, and blob triggered functions.

Queues and blobs require a connection to a storage account be defined.  Fortunately this is created with a couple of clicks and can be shared between triggers and bindings as well as between functions.  Once you have that you simply enter the name of the queue or blob container and you are off to the races.

When it comes to timer dependent functions, the main topic you will have to become familiar with is CRON scheduling expressions.  If you come from a Unix background or have been working with more recent timer based WebJobs this won’t be anything new.  Otherwise the simplest way to remember is that each time increment is defined by a division statement.
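For example, a timer trigger’s function.json uses a six-field expression with seconds first; this sketch (the binding name is arbitrary) fires every five minutes:

```json
{
  "bindings": [
    {
      "type": "timerTrigger",
      "name": "myTimer",
      "direction": "in",
      "schedule": "0 */5 * * * *"
    }
  ]
}
```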


In the case of queue triggers the parameter that is automatically added to the Run method signature will be the contents of the queue message as a string.  Similarly most trigger types have a parameter that passes values from the triggering event.
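The trigger definition behind such a function lives in function.json; a sketch where the queue name and connection setting are hypothetical:

```json
{
  "bindings": [
    {
      "type": "queueTrigger",
      "name": "myQueueItem",
      "direction": "in",
      "queueName": "incoming-items",
      "connection": "MyStorageConnection"
    }
  ]
}
```

The name value is what ties the binding to the myQueueItem parameter of the Run method.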

Input and Output Bindings


Some of the function templates include an output binding.  If none of these fit your needs or you just prefer to have full control you can add a binding via the Integration tab.  The input and output binding definitions end up in the same function.json file as the trigger bindings. 

The one gripe I have with these bindings is that they connect to a specific entity at the beginning of your function.  I would find it preferable to bind to the parent container of whatever source you are binding to and have a set of standard commands available for normal CRUD operations.

Let’s say that you want to load an external configuration file from blob storage when your function starts.  The path shown below specifies the container and the blob name.  The default format shows a variable “name” as the blob name.  This needs to be a variable that is available and populated when the function starts or an exception will be thrown.  As for your storage account, specify it by clicking the “new” link next to the dropdown and picking the storage account from those you have available.  If you specified a storage account while defining your trigger and it is the same as your binding it can be reused.
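The resulting binding in function.json looks something like this sketch (the container name and connection setting are hypothetical):

```json
{
  "bindings": [
    {
      "type": "blob",
      "name": "inputBlob",
      "direction": "in",
      "path": "config/{name}",
      "connection": "MyStorageConnection"
    }
  ]
}
```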


The convenient thing about blob bindings is that they are bound as strings and so for most scenarios you don’t have to do anything else to leverage them in your function.  You will have to add a string parameter to the function’s Run method that matches the name in the blob parameter name text box.


That should give you a starting point for getting the shell of your Azure Function created.  In the next two posts I will add settings, assembly references and some tips for coding your function.