Two ways to import an XML file with .Net Core or .Net Framework

It’s always the simple stuff you forget how to do. For years I’ve mainly been working with JSON files, so when faced with the task of reading an XML file my brain went “I can do that” followed by “actually, how did I used to do that?”.

So here are two different methods. They work on .Net Core and, in theory, .Net Framework (my project is .Net Core and I haven’t checked that they actually work on Framework).

My examples are using an XML in the following format:

<?xml version="1.0" encoding="utf-8"?>
<jobs>
    <job>
        <company>Construction Co</company>
        <sector>Construction</sector>
        <salary>£50,000 - £60,000</salary>
        <active>true</active>
        <title>Recruitment Consultant - Construction Management</title>
    </job>
    <job>
        <company>Medical Co</company>
        <sector>Healthcare</sector>
        <salary>£60,000 - £70,000</salary>
        <active>false</active>
        <title>Junior Doctor</title>
    </job>
</jobs>

Method 1: Reading an XML file as a dynamic object

The first method is to load the XML file into a dynamic object. This is cheating slightly by first using JsonConvert to convert the XML document into a JSON string and then deserializing that into a dynamic object.

using Newtonsoft.Json;
using System;
using System.Collections.Generic;
using System.Dynamic;
using System.IO;
using System.Text;
using System.Xml;
using System.Xml.Linq;

namespace XMLExportExample
{
    class Program
    {
        static void Main(string[] args)
        {
            string jobsxml = "<?xml version=\"1.0\" encoding=\"utf-8\"?><jobs>    <job><company>Construction Co</company><sector>Construction</sector><salary>£50,000 - £60,000</salary><active>true</active><title>Recruitment Consultant - Construction Management</title></job><job><company>Medical Co</company><sector>Healthcare</sector><salary>£60,000 - £70,000</salary><active>false</active><title>Junior Doctor</title></job></jobs>";

            byte[] byteArray = Encoding.UTF8.GetBytes(jobsxml);
            MemoryStream stream = new MemoryStream(byteArray);
            XDocument xdoc = XDocument.Load(stream);

            string jsonText = JsonConvert.SerializeXNode(xdoc);
            dynamic dyn = JsonConvert.DeserializeObject<ExpandoObject>(jsonText);

            foreach (dynamic job in dyn.jobs.job)
            {
                string company;
                if (IsPropertyExist(job, "company"))
                    company = job.company;

                string sector;
                if (IsPropertyExist(job, "sector"))
                    sector = job.sector;

                string salary;
                if (IsPropertyExist(job, "salary"))
                    salary = job.salary;

                string active;
                if (IsPropertyExist(job, "active"))
                    active = job.active;

                string title;
                if (IsPropertyExist(job, "title"))
                    title = job.title;

                // A property that doesn't exist
                string foop;
                if (IsPropertyExist(job, "foop"))
                    foop = job.foop;
            }

            Console.ReadLine();
        }

        public static bool IsPropertyExist(dynamic settings, string name)
        {
            if (settings is ExpandoObject)
                return ((IDictionary<string, object>)settings).ContainsKey(name);

            return settings.GetType().GetProperty(name) != null;
        }
    }
}

A foreach loop then goes through each of the jobs, and a helper function IsPropertyExist checks for the existence of a value before trying to read it.
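As a side note, the existence check and the read can be combined by treating each job as the IDictionary<string, object> that an ExpandoObject really is. Here’s a rough sketch of a helper along those lines (just a variation on the code above, not something the example needs):

        // Alternative to IsPropertyExist: read the value straight out of the
        // ExpandoObject's underlying dictionary, returning null if it isn't there.
        public static string GetValueOrNull(dynamic item, string name)
        {
            var dictionary = item as IDictionary<string, object>;
            if (dictionary != null && dictionary.TryGetValue(name, out object value))
                return value as string;

            return null;
        }

        // Usage inside the foreach loop:
        // string company = GetValueOrNull(job, "company");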

Method 2: Deserializing with XmlSerializer

My second approach is to turn the XML structure into classes and then deserialize the XML file into them.

This approach requires more code, but most of it can be auto-generated by Visual Studio for us, and we end up with strongly typed objects.

Creating the XML classes from XML

To create the classes for the XML structure:

1. Create a new class file and remove the class that gets created, i.e. you’re just left with this:

using System;
using System.Collections.Generic;
using System.Text;

namespace XMLExportExample
{

}

2. Copy the content of the XML file to your clipboard
3. Select the position in the file where you want the classes to go and then go to Edit > Paste Special > Paste XML as Classes

If you’re using my XML you will now have a class file that looks like this:

using System;
using System.Collections.Generic;
using System.Text;

namespace XMLExportExample
{

    // NOTE: Generated code may require at least .NET Framework 4.5 or .NET Core/Standard 2.0.
    /// <remarks/>
    [System.SerializableAttribute()]
    [System.ComponentModel.DesignerCategoryAttribute("code")]
    [System.Xml.Serialization.XmlTypeAttribute(AnonymousType = true)]
    [System.Xml.Serialization.XmlRootAttribute(Namespace = "", IsNullable = false)]
    public partial class jobs
    {

        private jobsJob[] jobField;

        /// <remarks/>
        [System.Xml.Serialization.XmlElementAttribute("job")]
        public jobsJob[] job
        {
            get
            {
                return this.jobField;
            }
            set
            {
                this.jobField = value;
            }
        }
    }

    /// <remarks/>
    [System.SerializableAttribute()]
    [System.ComponentModel.DesignerCategoryAttribute("code")]
    [System.Xml.Serialization.XmlTypeAttribute(AnonymousType = true)]
    public partial class jobsJob
    {

        private string companyField;

        private string sectorField;

        private string salaryField;

        private bool activeField;

        private string titleField;

        /// <remarks/>
        public string company
        {
            get
            {
                return this.companyField;
            }
            set
            {
                this.companyField = value;
            }
        }

        /// <remarks/>
        public string sector
        {
            get
            {
                return this.sectorField;
            }
            set
            {
                this.sectorField = value;
            }
        }

        /// <remarks/>
        public string salary
        {
            get
            {
                return this.salaryField;
            }
            set
            {
                this.salaryField = value;
            }
        }

        /// <remarks/>
        public bool active
        {
            get
            {
                return this.activeField;
            }
            set
            {
                this.activeField = value;
            }
        }

        /// <remarks/>
        public string title
        {
            get
            {
                return this.titleField;
            }
            set
            {
                this.titleField = value;
            }
        }
    }

}

Notice that the active field was even picked up as being a bool.

Doing the Deserialization

To do the deserialization, first create an instance of XmlSerializer for the type of the object we want to deserialize to. In my case this is jobs.

            var s = new System.Xml.Serialization.XmlSerializer(typeof(jobs));

Then call Deserialize, passing in an XmlReader. I’m creating an XmlReader on the stream I used in the dynamic example.

            jobs o = (jobs)s.Deserialize(XmlReader.Create(stream));

The complete file now looks like this:

using System;
using System.IO;
using System.Text;
using System.Xml;

namespace XMLExportExample
{
    class Program
    {
        static void Main(string[] args)
        {
            string jobsxml = "<?xml version=\"1.0\" encoding=\"utf-8\"?><jobs>    <job><company>Construction Co</company><sector>Construction</sector><salary>£50,000 - £60,000</salary><active>true</active><title>Recruitment Consultant - Construction Management</title></job><job><company>Medical Co</company><sector>Healthcare</sector><salary>£60,000 - £70,000</salary><active>false</active><title>Junior Doctor</title></job></jobs>";

            byte[] byteArray = Encoding.UTF8.GetBytes(jobsxml);
            MemoryStream stream = new MemoryStream(byteArray);

            var s = new System.Xml.Serialization.XmlSerializer(typeof(jobs));
            jobs o = (jobs)s.Deserialize(XmlReader.Create(stream));

            Console.ReadLine();
        }
    }
}
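
From here the deserialized object can be used like any other strongly typed class. As a quick illustration (assuming the generated jobs and jobsJob classes from earlier), looping over the results might look something like this:

            foreach (jobsJob job in o.job)
            {
                Console.WriteLine($"{job.title} at {job.company} ({job.salary})");

                // active is a real bool here, no string checks needed
                if (job.active)
                    Console.WriteLine("This role is still live");
            }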

And that’s it. Any missing nodes in your XML will just be left blank rather than causing an error.

How to create a shared access signature for a specific folder in Azure Storage

As we move into a serverless world where development work is done less on generic multipurpose servers that require patching, both to the OS and to the applications installed on them, things become easier and in some cases cheaper.

For instance I am currently working on a project that needs to send and receive XML files from different sources, and with no other requirements for a server, Azure storage seemed the perfect fit. As it’s just a storage service you pay for what you need and nothing more. All the patching and maintenance of APIs is provided by Microsoft and we have an instant place to store our files. Ideal!

However new technology always comes with new challenges. My first issue came with the plan for how we would be getting files in and out with partners. This isn’t the first time I’ve worked with Azure File Storage, but it is the first time I did it without an ops person setting things up. Previously we used FTP, which while dated works with a lot of applications. Like a headphone jack it’s not the most glamorous of connections, but it works, everyone knows how it works and everyone has things that work with it. However it transpires that despite Azure storage being 10 years old and receiving requests for FTP from the start, Microsoft have decided to go the same route as Apple with the headphone jack and not have it. Instead the only option for an integration is REST. As it transpires, the ops people I had worked with in the past, when faced with this issue, had just put a VM in front of it, which kind of defeats the point of using Azure storage in the first place!

So we’re going with REST, and Microsoft provide quite a straightforward REST API. All good so far, but how do we limit access? Well there’s a guide to Using the Azure Storage REST API which contains a section on creating an authorisation header. It’s long and overly complex, but it does point you in the direction that to do this you need a Shared Access Signature. The other option is an access key, but this is something you should never give away to a third party.

Shared Access Signature

After a bit more digging through the documentation (and just clicking the thing that sounded right in the portal) I found this documentation on creating an Account SAS, which sounded like what I wanted (it wasn’t, but it’s close).

With a shared access signature you can say what kind of service should be allowed, what permissions they should have, IP addresses, and start and end dates. All awesome things.

Once I had this I could then use the REST API, but there was a problem. I could access every folder in the storage account and there was no way to stop this! For integrating with two third parties, they would both be able to access each other’s stuff, and our own private stuff.

There is also no way to revoke the SAS once it’s been generated other than refreshing the access keys which would affect everyone.

Folder Level Shared Access Signature

After a bit more research I found what I was looking for: how to create a shared access signature at a folder or item level, and how to link it to a policy.

The first thing you need is Azure Storage Explorer. Once you’re set up with this you will be able to view all your storage accounts.

From here you are able to browse to the folder you want to share, right click it and choose Manage Access Policies.

This will open a dialog to manage the policies for this specific object rather than the account.

Here you can set all the same permissions as you could for a signature at an account level but now for a specific object and against a policy rather than an actual signature, meaning the policy can be updated in the future with no change to the signature.

Better still you can remove the policy which will then invalidate any signature using it.

For the actual signature key, right click the same folder and click Get Shared Access Signature.

Then in the dialog select the policy from the drop-down rather than specifying the individual permissions.

Click create and you can copy the keys.

You now have an access key that is limited to a specific folder rather than the entire account.

This is only possible to do through one of the code/scripting interfaces, e.g. PowerShell or Storage Explorer. The Azure portal will only let you get signatures at an account level.
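
For reference, here’s a rough sketch of what the scripted route can look like in C#, using the older Microsoft.WindowsAzure.Storage SDK. It creates a stored access policy and then generates a SAS token linked to that policy. It’s shown at blob container level, and the account, container and policy names are all just placeholders, so treat it as a starting point rather than a drop-in solution:

using System;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class SharedAccessSignatureExample
{
    static async Task Main()
    {
        // Placeholder connection string - swap in your own storage account
        var account = CloudStorageAccount.Parse("DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=...");
        var client = account.CreateCloudBlobClient();
        var container = client.GetContainerReference("partner-a");

        // Create (or replace) a stored access policy on the container
        BlobContainerPermissions permissions = await container.GetPermissionsAsync();
        permissions.SharedAccessPolicies["partner-a-policy"] = new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Read | SharedAccessBlobPermissions.Write,
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMonths(6)
        };
        await container.SetPermissionsAsync(permissions);

        // Generate a SAS token tied to the policy - removing the policy later invalidates the token
        string sasToken = container.GetSharedAccessSignature(null, "partner-a-policy");
        Console.WriteLine(sasToken);
    }
}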

Config file transforms with Azure Devops

For a long time now our primary CI setup has been based around Team City and Octopus Deploy, but as reliable as it is there are things I don’t like about it:

  1. It’s not a SaaS setup, meaning there’s a VM to occasionally think about and updates to install. While Octopus is now available as a SaaS option, Team City is not, and moving Octopus will only solve half the problem.
  2. That VM they both sit on every so often gets an issue with its hard disk being full.
  3. It’s complicated to recommend the same setup to clients. You end up having to go through multiple things they need to buy, which then require some installation and ongoing maintenance. Ideally we would have a setup that’s easy for them to replicate and own themselves with minimal maintenance.

So when we recently took over a site that, as is typical, came with no existing CI setup in place, I decided to take a look at using Azure Devops instead. You can use Azure Devops with Octopus Deploy, but as it claims to be able to manage releases as well as builds we went for doing the whole thing just in Azure Devops.

Getting a build set up was relatively straightforward so I’m going to skip past that bit, but in short we ended up with a build that will create a web deploy package and publish it as an artifact. Typical MSBuild type stuff.

File transforms and variable substitution

The first real tricky point came with replacing variables in config files during a release to each environment. We were using the IIS Web App Deploy task to deploy the application to IIS on a VM (no new Azure Web App Services in this setup 😦 as I said, we took over the site and this was just to get automated deploys of what they already have). A simple starting point with this is some built-in functionality for XML Variable Substitution in the IIS Web App Deploy task.

Quite simply, you can add all your variables to the variable list, set the scope for which environment you want each to apply to, and then during the deploy they are replaced in your config. Unlike some tag replacement tools I’ve used in the past, this one actually uses the name of the connection string or app setting you need to set, so if you need to set a connection string named web, the variable name will be web.

This is also where my problem started. The description of what XML variable substitution does is:

Variables defined in the build or release pipeline will be matched against the ‘key’ or ‘name’ entries in the appSettings, applicationSettings, and connectionStrings sections of any config file and parameters.xml

This was a Sitecore solution, and for Sitecore most of your config settings are in Sitecore’s own sitecore section of the config file. So in other words the connection string will get updated but the rest won’t.

Parameter and SetParameter XML files

My next issue was that finding a solution is actually quite hard. Searching for this problem either gave me a lot of results for setups using ARM templates (as I said, this was a solution we took over and that kind of change is not on the agenda), or you just get the easy bit above. Searching for Sitecore and Azure Devops also leads you to a lot of results on a cloud infrastructure setup (again not what we’re doing here, at least in the short term). Everything that was coming up felt far more complicated than the solution should be.

However the documentation on the XML variable substitution did have one interesting sentence:

If you are looking to substitute values outside of these elements you can use a (parameters.xml) file, however you will need to use a 3rd party pipeline task to handle the variable substitution.

A parameters.xml file isn’t something I’ve used before, which makes this sentence a bit cryptic. The first half says I can do what I want with an XML file, but the second half says I’ll need something else to actually do it.

After a bit of research this all comes back to web deploy. When you do a build that outputs a web deploy package, you get 5 files.

A zip file containing the actual site, a command file which has the script to do the deploy, and a set parameters file which is used to set config variables during the deploy. The others aren’t so important.

To have different config set on different environments you just need to edit the set parameters file. But first you need to have the parameter in the set parameters file so that you can actually change it, and this is where the parameters.xml file comes in.

Creating the parameter files

Add a file called parameters.xml to the root of your project and then add parameters as follows:

<?xml version="1.0" encoding="utf-8" ?>
<parameters>
  <parameter name="DataFolderLocation" defaultValue="#{dataFolder}">
    <parameterEntry kind="XmlFile" scope="App_Config\\Include\\Z.Project\\DataFolder\.config$" match="/configuration/sitecore/sc.variable[@name='dataFolder']/patch:attribute/text()" />
  </parameter>
</parameters>

Some important parts:

  • defaultValue: The value that the config setting will get set to
  • scope: The path to the file containing the setting
  • match: An XPath expression for finding the part of the config file to update

Once you have this the build will start producing a SetParameters.xml file containing the extra parameters.

<?xml version="1.0" encoding="utf-8"?>
<parameters>
  <setParameter name="IIS Web Application Name" value="Default Web Site/SiteCore.Website_deploy" />
  <setParameter name="DataFolderLocation" value="#{dataFolder}" />
</parameters>

Note: I’ve set the value to be something I intend to replace in the release process.

Replacing the tokens

With our SetParameters.xml file now containing all the config we need to update, we need a step in the release process that will replace all the tokens with the correct values.

To do this I used the Replace Tokens task: https://marketplace.visualstudio.com/items?itemName=qetza.replacetokens

Config options need to be set for:

  • Root Directory: The path to the folder containing the SetParameters.xml file
  • Target files: A list of files to have replacements done in. In our case this was SiteCore.Website.SetParameters.xml
  • Token prefix: The prefix on tokens to be searched for. Ours was #{
  • Token suffix: The suffix to denote the end of a token. Ours was }

Lastly, in the IIS Web App Deploy step the SetParameters file needed to be selected, and the new variables added to the variable list in Azure Devops. The variable names need to be the bit between your prefix and suffix, i.e. #{dataFolder} would be called dataFolder.

If you don’t set the variables then the logs will show a warning for each one it couldn’t find.

2019-09-24T17:17:21.6950466Z ##[section]Starting: Replace tokens in SiteCore.Website.SetParameters.xml
2019-09-24T17:17:23.9831695Z ==============================================================================
2019-09-24T17:17:23.9831783Z Task         : Replace Tokens
2019-09-24T17:17:23.9831816Z Description  : Replace tokens in files
2019-09-24T17:17:23.9831861Z Version      : 3.2.1
2019-09-24T17:17:23.9831891Z Author       : Guillaume Rouchon
2019-09-24T17:17:23.9831921Z Help         : v3.2.1 - [More Information](https://github.com/qetza/vsts-replacetokens-task#readme)
2019-09-24T17:17:23.9831952Z ==============================================================================
2019-09-24T17:17:27.2703037Z replacing tokens in: C:\azagent\A1\_work\r1\a\PublishBuildArtifacts\SiteCore.Website.SetParameters.xml
2019-09-24T17:17:27.3133832Z ##[warning]variable not found: dataFolder
2019-09-24T17:17:27.3179775Z ##[section]Finishing: Replace tokens in SiteCore.Website.SetParameters.xml

With all this set, our config has its variables configured within Azure Devops for each environment.

Setting up local https with IIS in 10 minutes

For very good reasons websites now nearly always run under https rather than http. As devs though, this gives us the complication of either removing any local redirect-to-https rules and “hoping” things work ok when we get to a server, or setting local IIS up to have an https binding.

Having https set up locally is obviously a lot more favourable, and what has traditionally been done is to create a self-signed certificate. However, while this works as far as IIS is concerned, it still leaves an annoying browser warning as the browser will recognise it as insecure. This can then create additional problems in client-side code when certain things hit errors when calling an API.

mkcert

The solution is to have a certificate added to your trusted root certificates rather than a self-signed one. Fortunately there is a tool called mkcert that makes the process a lot simpler.

https://github.com/FiloSottile/mkcert#windows

Create a local cert step by step

1. If you haven’t already, install Chocolatey (https://chocolatey.org/install). Chocolatey is a package manager for Windows which makes it super simple to install applications. The name is inspired by NuGet, i.e. Chocolatey NuGet.

2. Install mkcert. To do this, from an admin command window run

choco install mkcert

3. Create a local certificate authority (CA)

mkcert -install

4. Create a certificate

mkcert -pkcs12 example.com

Remember to change example.com to the domain you would like to create a certificate for.

5. The certificate will be created in the folder your command window is open at. Rename the .p12 file that was created to .pfx (this is what IIS requires).

You can now import the certificate into IIS as normal. When asked for a password, it has been set to changeit.

Redirecting to login page with AngularJs and .net WebAPI

So here’s the scenario: you have a web application which people log into, and some of the pages (like a dashboard) contain Ajax functionality. Inevitably the user’s session expires, they return to the window, change a filter and nothing happens. In the background, your JavaScript is making HTTP calls to the server which trigger an unauthorised response. The front end has no way to handle this and some errors appear in the JS console.

A few things are actually combining to make life hard for you here. Let’s take a look at each in more detail.

WebAPI and the 302 Response

To protect your APIs from public access a good solution is to use the Authorize attribute, i.e.

[Authorize]
public ActionResult GetDashboardData(int foo)
{
   // Your api logic here
            
}

However chances are your solution also has a login page configured in your web.config, so that your regular page controllers automatically trigger a 302 redirect to the login page.

    <authentication mode="Forms">
      <forms timeout="30" loginUrl="/account/sign-in/" />
    </authentication>

So now what happens is, instead of responding with a 401 Unauthorised response, what’s actually returned is a 302 redirect to the login page.

With an AJAX request from a browser you now hit a second issue. The browser is making an XMLHttpRequest. However, if that request returns a 302, rather than handing it back to your JavaScript code to deal with, the browser “helpfully” follows the redirect and returns the result of that to your JavaScript. Which means rather than receiving a 302 redirect status back, your code is getting a 200 OK.

So to summarise: your API was set up to return a 401 Unauthorised, that got turned into a 302 redirect, which was then followed and turned into a 200 OK before it got back to where it was requested from.

To fix this, the easiest method is to create our own version of the AuthorizeAttribute which returns a 403 Forbidden for Ajax requests and uses the regular logic for anything else.

using System;
using System.Web.Mvc;

namespace FooApp
{
    [AttributeUsage(AttributeTargets.Method)]
    public class CustomAuthorizeAttribute : AuthorizeAttribute
    {
        protected override void HandleUnauthorizedRequest(AuthorizationContext filterContext)
        {
            if (filterContext.HttpContext.Request.IsAjaxRequest())
            {
                filterContext.Result = new HttpStatusCodeResult(403, "Forbidden");
            }
            else
            {
                base.HandleUnauthorizedRequest(filterContext);
            }
        }
    }
}

Now for any Ajax request a 403 is returned; for everything else the 302 redirect to the login page is returned.
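
To use it, the custom attribute simply replaces the standard one on the API action from earlier, e.g.:

[CustomAuthorize]
public ActionResult GetDashboardData(int foo)
{
    // Your api logic here
}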

Redirect 403 Responses in AngularJs to the login page

As our Ajax request is being informed about the unauthorised response, it’s up to our JavaScript code to trigger the redirect in the browser to the login page. What would be really helpful would be to define the redirect logic in one place, rather than adding this logic to every API call in our code.

To do this we can add an interceptor onto the $http provider in AngularJS. The interceptor will inspect the response error coming back from the XMLHttpRequest and, if it has a status of 403, use window.location to redirect the user to the login page.

app.factory('httpForbiddenInterceptor', ['$q', 'loginUrl', function ($q, loginUrl) {
    return {
        'responseError': function (rejection) {
            if (rejection.status == 403) {
                window.location = loginUrl;
            }
            return $q.reject(rejection);
        }
    };
}]);

app.config(['$httpProvider', function ($httpProvider) {
    $httpProvider.defaults.headers.common['X-Requested-With'] = 'XMLHttpRequest';
    $httpProvider.interceptors.push('httpForbiddenInterceptor');
}]);

You’ll notice a line updating the headers. This is to make the IsAjaxRequest() method on the api recognise the request as being Ajax.

Finally you’ll also notice the loginUrl being passed into the interceptor. As it’s not a great idea to have strings like URLs littered around your code, this is using a value recipe to store the URL. The code to do this is as follows:

app.value('loginUrl', '/account/sign-in?returnurl=/dashboard/');

Running Gulp tasks in Visual Studio

Over the last decade, front end development has matured from a point where CSS and JavaScript were written in badly organised files containing the exact code a browser would interpret, to something that now more resembles back end development.

Pure CSS has become the byte code of the front end world with LESS and Sass becoming the languages of choice. Instead of MSBuild for running a compilation Gulp can be used to automate workflows to compile Sass files, minify them and do whatever else is needed to produce the code for deployment (not really an exact comparison, but you get the point).

To truly achieve a matured development setup though, the final step is to remove those “compiled” CSS and JS files from source control. After all, you wouldn’t check in a DLL with the source code for each build. If you’re using a build server then this is a relatively easy step to add to your workflow. However the bigger issue is not making life hard for your back end developers.

Without the final CSS and JS files in source control, your back end devs may now be faced with a site lacking all style and front end functionality when they hit F5. With LESS and Sass also not really being their thing, they’re now left not really knowing what to do, and also not overly happy about having to learn something about front end to continue with their back end work. To make matters worse, Visual Studio often isn’t the front end dev’s choice of tool, so copying the front enders’ setup isn’t an ideal solution either.

The Aim

The ideal solution would be for a back end dev to be able to check out a solution from source control (containing no compiled CSS or JS files), hit F5 in Visual Studio, have the final CSS and JS created as part of the build, and the site run. No opening a command prompt to run a gulp task, or any other process the front end devs may be using. It should be completely invisible to them.

Visual Studio Task Runner

With Visual Studio’s Task Runner, this is entirely possible.

The Task Runner is built into Visual Studio, so there’s no need for the back end dev to install any extra tools, and more importantly, tasks can be linked into a before build event so that the back end dev doesn’t need to do anything.

A bit of background on gulp and the task runner

When front end developers set up gulp, they will configure a set of gulp tasks within a file named gulpfile.js (in reality they may actually separate the tasks into multiple files referenced by gulpfile.js, but this file is the important bit). These tasks may look a bit like this:

gulp.task("min:js", () => {
  return gulp.src([paths.js, "!" + paths.minJs], { base: "." })
    .pipe(concat(paths.concatJsDest))
    .pipe(uglify())
    .pipe(gulp.dest("."));
});

gulp.task("min:css", () => {
  return gulp.src([paths.css, "!" + paths.minCss])
    .pipe(concat(paths.concatCssDest))
    .pipe(cssmin())
    .pipe(gulp.dest("."));
});

gulp.task("min", gulp.series(["min:js", "min:css"]));

In this example there are 3 tasks. The first minifies some JS while the second minifies some CSS. The third is defining a series which runs the first 2.

Visual Studio’s task runner will look for the file called gulpfile.js and pull out all the tasks within it. The task runner window may not be open by default, so to open it type task runner in the search box and select it from the results. Alternatively you can right click the gulpfile in the solution and select Task Runner from the context menu.

The task runner window will open at the bottom of the screen, and will list out all the gulp tasks found within the gulpfile.js file. If the front end devs have organised the tasks into separate files, as long as the gulpfile.js file in the root of the project has some way of referencing them, they will still show up.

If you don’t see any of the tasks, and have only just added the file or a task to the file, you may need to click the refresh button.

To run a task, simply right click it and then choose Run.

Automating on build

Being able to run a gulp task is handy, but what we’re really after is for it to be automated as part of the build.

For this we just need to right click the task to be run, and then from the bindings option, select before build.

This will add a comment to the top of the gulpfile.js which Visual Studio will look for to know what task should be run before a build.

/// <binding BeforeBuild='min' />

With this set, the back end devs no longer need to be concerned with not having any compiled css and js, and with a small amount of knowledge also have the ability to make minor changes to css and js where needed.

Some other tips

Depending on your front end devs’ setup, there could be some additional challenges to overcome.

Front end devs not using Visual Studio

Quite often Visual Studio isn’t the choice of dev environment for a front end dev. One issue this can lead to is files missing from the Visual Studio solution. However an easy fix for this is to use wildcards in the csproj file.

If the front end devs’ code can be grouped into specific folders, then use a wildcard to include all the files from that folder in the project.

<None Include="build\js\**\*.js" />

CSS/JS not in the web project

If the front end devs have a separate folder for their work, e.g. they might work with static HTML files, then the code may not be in the project that will be run, and therefore nothing will trigger the gulpfile to have its tasks run.

A simple solution for this is to create a Visual Studio project for the folder with their work in it, so that it has a build event to be attached to. Make sure your web project also references this project, to trigger it to be built when the web project is.

Links

For more info on using Gulp with Visual Studio, check out Microsoft’s guide on using Gulp with ASP.NET Core.

Cloud Hosting IaaS vs PaaS

A topic I hear from clients fairly regularly these days is a plan to move to “the cloud”, or we take over a site built by someone else that’s hosted in “the cloud”. However in virtually every case it’s an IaaS setup, and they don’t really know what the difference between IaaS and PaaS is.

What is IaaS?

So what is infrastructure as a service (IaaS for short)? To put it quite simply, IaaS in the cloud, be that Azure or AWS, takes away the burden of managing your own or rented servers.

Anyone who’s worked in a corporate environment can tell you getting some new servers can be a lengthy process. Someone needs to arrange for them to be purchased, placed in a physical location, software installed etc. Even when the management of this has been outsourced to another company, it still remains a lengthy process.

Equally anyone working for a small company can tell you it’s not much better. There may be a lot fewer approval processes to jump through, but you’re on your own trying to buy and set up the kit from somewhere.

IaaS solves this by providing a very, very quick service where you request a specific setup and a VM gets created for you, ready to go in a couple of minutes. You can pick from a range of locations around the world, and when you don’t need it any more you turn it off. There are no large commitments to keeping a server for 6 months; you pay by the minute, scaling up and down as needed.

IaaS makes it very simple to replace your current setup, as it’s essentially the same thing just with much better management control around it.

What is PaaS?

Platform as a service (PaaS) is what people really mean when they talk about the future and the cloud. If you go to a conference and hear Microsoft talk about Azure, you can be 99% certain it’s a PaaS service they’re talking about.

To understand it, think about the process of getting some office space. You could buy a plot of land, have a building placed on it and turn it into your office. But when the roof leaks you’ll need to arrange for that to be fixed, you’ll need to arrange regular fire alarm tests to make sure everyone remains safe, and when an issue is discovered in the building’s security you’ll need to get that fixed too. You didn’t really want to be a building manager, but that’s what’s happened.

The alternative is to rent some office space and leave the management to somebody else. All you have to do is follow some rules like don’t go on the roof and don’t go in that cupboard where all the fire alarm equipment is.

PaaS is a bit like this: we didn’t really want a box running Windows Server that we need to keep secure and up to date. We just want a SQL Server DB, and that traditionally comes with a need to have a server to run it on. Equally for hosting a website we really just want somewhere our site is going to run, in the same way that for office space we just want somewhere for our staff to sit. It’s unlikely that we’re ever going to use these servers for more than one purpose, so we don’t really need a generic system that allows us to install a multitude of things.

So with PaaS, rather than buying a server you’re buying a service, which could be a web application, a DB or many other things. As this is no longer just buying server space there are a number of restrictions. For example, with a web application saving anything on the file system is ruled out. Your application is going to be there, but part of what makes it possible for all the server updates to be done for you is that at any time your application could be moved to a brand new server; anything not in the package to set it up will be lost.

The importance of build numbers

If I were to make a prediction, I would say that build numbers are something that are rarely treated as being important in the agency world of web development. That’s not to say milestone releases aren’t given names like “Phase 2”, “August Release” or a major feature name, but for every build / release of a project in between, I’d guess build numbers are largely either ignored or never created.

It’s also easy to see why; after all, it’s not like we’re producing software that’s going out to the masses to be installed. The solution essentially ends up as one install on a set of servers. When a new version is built, it replaces everything that came before it, and if a bug is found we generally roll forward and fix the bug rather than ever reverting back.

Why use build numbers?

So when we’re constantly coding and improving applications in an agile world why should we care about and use build numbers?

To put it quite simply, it’s just an easy way to identify a snapshot of code that could actually have been built and then released to a server. This becomes hugely useful in scenarios such as:

  • A bug being reported by an end user
  • An issue being identified by some performance monitoring
  • An issue being picked up in some functionality further on from the site. e.g. in an integration

Without build numbers, the only way to react to these scenarios is to look at commit dates in source control, or manual release notes that may have been created, to try and work out where an issue may have been introduced and what changed at that time. If the issue had subsequently been fixed, you also can’t really say in which version it was fixed other than giving a rough date.

Other advantages of build numbers can include:

  • Being able to reference a specific version that has been pen tested
  • Referencing a version that’s been tested with integrations
  • Having approval to release a specific version rather than just the latest on master
  • Anywhere you want to have a conversation referencing releases

Build numbers for deploys

The first step to using build numbers, and with the rise in CI possibly the one thing most people are already doing, is to start creating build numbers via a build server. By using any type of build server you will end up with build numbers. This instantly gives you a way to know when a build was created and what commits were new within the build.

Start involving an automated deployment setup either using your build server or with other tools like Octopus Deploy and you will now start to get a record of when each build was deployed to each server.


Now you have an easy way not only to reference what build was on each environment and when, through the deploy history, but also a way to see what went into a build through the build server’s change log.

Tag builds in source control

Being able to see the changes that went into each build on your build server is all very good, but it’s still not an ideal situation for finding the exact code version a build relates to.

Thankfully, if you’re using Team City it’s really easy to set it up to create a tag in your source control with each build number. Simply go to the build features section of your project’s configuration and add a feature called “VCS Labeling”. This is a step that happens post build in the background and will create a tag in source control including the build number. It has lots of other configuration options, so if you need different tag formats for different branches it’s got you covered.

If you’re using GitHub, once this is turned on you will be able to see a list of all the tags in the releases section.

GitHub Releases Screen

Update Assembly info

Being able to identify a build in source control and view a history of what should have been on a server at a particular time is all very good, but it’s also a good idea to be able to easily identify a build from the published version of the code. That way, just by looking at the code on a server you can tell which build version it is, and not rely on your deployment tool to be correct.

If you’re using Team City this is also made super simple, through a build feature called “Assembly info patcher”. When using this the build number will automatically get patched in, without having to edit AssemblyInfo.cs.
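
Once the build number is baked into the assembly, surfacing it (in a page footer, an admin screen or a log entry, for example) only takes a line of reflection. A minimal sketch:

using System.Reflection;

// Reads the version patched into AssemblyInfo.cs at build time,
// e.g. "1.0.123.0" where 123 is the build number
string buildVersion = Assembly.GetExecutingAssembly().GetName().Version.ToString();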

Conclusion

By following these tips you will now be able to identify a version by looking at the published code, see a history of when each version was not only built but also released to each environment, and also have an easy way to find the exact source for that build.

The build number can then be used in any conversations around when a bug was introduced, and also be referenced in release notes so everyone can keep track of which versions included which fixes in a simple to understand format.

Update local Git branch list

Ever had the issue where the remote Git repo has a branch that your local copy hasn’t picked up on? This can often happen when a branch has just been created.

The solution is simple, just run:

$ git remote update origin

Setting IP restrictions in IIS

It’s a frequent scenario that a website you’re in the process of building needs to be accessible over the internet before it should actually be publicly available. This can come in the form of clients needing to review staging sites before they’re live, test sites needing to be accessible to testers who may not be in a location that can access private servers, or working jointly with other suppliers.

This scenario presents a lot of dangers: the URL of a site could get leaked early, ruining a marketing strategy, or the site could end up in Google, destroying the SEO value of the client’s current site and, even worse, actually getting real customers visiting it.

There are only 2 real methods of protecting test/staging sites. One is adding authentication to the site restricting access to people with a valid username and password. The other is IP white-listing so only people from a valid IP can access the site.

In the past I’ve seen people suggest using a robots.txt to tell search engines to ignore the site. This is guaranteed to fail; Google will index a site even with a robots file saying not to. Your robots file may say don’t crawl, but that auto-generated sitemap will be obeyed and the files indexed. There will also come a time when the robots file gets copied live, de-indexing the live site, or someone forgets the file on staging and the staging site is indexed.

Using IIS to set up IP restrictions

Using IIS to set up IP restrictions is quick and easy, and what’s best about it is you can set it at the server level and not worry about people forgetting to add it to new sites. Better still you can also easily add configuration at a website level to allow certain people to see certain sites rather than the whole box.

Installing the Feature

First you need to make sure you have the feature installed on IIS. To do this on Windows Server 2012:


  1. Go to Server Manager and click “Add roles and features”
  2. Click next to take you from the Before you begin page to Installation Type
  3. Leave Role-based selected and click next
  4. On the Server Selection screen the server you’re on should be auto-selected. Click next
  5. On the Server Roles screen scroll down to “Web Server (IIS)”. IP and Domain Restrictions is located under Web Server (IIS) > Web Server > Security
  6. Click the check box on IP and Domain Restrictions if it’s not already selected and complete the wizard to install the features.

Configuring IIS

To set up an IP restriction in IIS, do the following:

  1. Open IIS and select your server in the left hand treeview. Alternatively if you wanted to add the restrictions to an individual site, select that site.
  2. Within the IIS section you should have an item titled IP Address and Domain Restrictions.
  3. The configured IP addresses will be listed out. To add a new one click the “Add Allow Entry” action on the right.
  4. This screen allows you to set up allow and deny lists, but the restrictions don’t actually have an effect until you edit the feature settings.
  5. In the feature settings, set the access for unspecified clients to Deny. You can also specify a deny action type, which alters the status code between Unauthorized, Forbidden, Not Found and Abort.

What this doesn’t do

What this won’t do is block all traffic not in the allow list to your server. It only covers IIS, so if you have other services running on your box like SQL Server, Mongo, Apache etc, these will all still be publicly available.