Why is my Session ID changing on 3D Secure payments?

If you are implementing 3D Secure payments on your website, you may find that on receipt of the payment setup confirmation the user’s Session ID has changed with no apparent cause.

Let’s have a quick run through of the payment process in this scenario (this is roughly how SagePay and WorldPay both work):

  1. User completes payment details on your site (some wizardry normally happens at this point with an iFrame for the card number to maintain PCI compliance) and the form is submitted to your server
  2. Your server calls an API from the payment gateway to set up a 3D Secure payment, passing the user’s Session ID along with all the payment details.
  3. The payment gateway responds with a URL for you to redirect the user to by posting a form to it. You do this, likely in an iFrame, so that the page still looks like your website (otherwise it’s a very ugly page).
  4. The user may or may not be prompted for some sort of authentication by the bank. This could be something like receiving a text message with a code to enter.
  5. Once authenticated, the user is sent back to your website (in the iFrame) with a POST request.
  6. Your server picks out the form details from the request and then calls an API to complete the transaction. In this API call you send the Session ID so that the payment gateway can validate it is the same as the one at the start of the process.
  7. Confirmation shown to the user.

Introducing the Same Site Cookie Policy

In 2020 a change was made to how cookies function in browsers to defend against cross-site request forgery. Troy Hunt has a brilliant explanation of the issue with how cookies used to work and how this has changed here. I’m going to try a much shorter explanation:

When a request is made from a browser, all the cookie values for that domain are sent along with the request. This will include one for the Session ID. The theory here is that because the cookies are only being sent to the domain which set them in the first place, information is only being shared back with the place that set it to begin with, which is therefore safe.

However, the workings of the internet and what domain a button click might call aren’t overly obvious to most people. So what if clicking a link on one site causes the user to be redirected to another site? Answer: all the cookies are still sent. The same thing happens if a form is posted from one site to another site. The problem here is that if you are authenticated on the other website, then it’s possible for a cross-site request forgery attack to use your session via your browser without you even realising you were on the site again. This is why it’s always a good idea to log out of websites!

The introduction of the same site policy changes how cookies work with three options:

  1. None: which is what browsers used to do, i.e. send all the cookies with cross-origin requests
  2. Lax: some limits on sending cookies with cross-origin requests
  3. Strict: tight limits on sending cookies with cross-origin requests

None sends everything, and a Strict policy will basically stop cookies from being sent on any cross-origin request.

A Lax policy is slightly more interesting though, because it depends on whether the request is a GET or a POST. If it is a GET then the cookies will still be sent, which means that if you follow a link from another site or a search engine your cookies will still be sent to the site. However a POST (like the one the 3D Secure page is making) will no longer send the cookies.

If the policy isn’t set, then Lax is used as the default.

So why is my Session ID changing?

The problem is that POST request back to your site: unless the Session ID cookie had a policy of None, it won’t be sent. The server will then see that there is no Session ID, treat the user as if they are new to the site and start a new session. From this point on the user has lost the old session and you can’t complete the payment. Worse still, they’ve probably just been logged out and anything else using session data has also been lost.

In .Net 4.7.2 and up, Microsoft has implemented the ability to set the SameSite policy of the Session ID cookie. You can do this in your web.config file like this:

<configuration>
 <system.web>
  <anonymousIdentification cookieRequireSSL="false" /> <!-- No config attribute for SameSite -->
  <authentication>
   <forms cookieSameSite="None" requireSSL="false" />
  </authentication>
  <sessionState cookieSameSite="None" /> <!-- No config attribute for Secure -->
  <roleManager cookieRequireSSL="false" /> <!-- No config attribute for SameSite -->
 </system.web>
</configuration>

You can find more about this here. In anything older the policy won’t be set and will default to Lax.
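
If you set any cookies of your own in code, .NET 4.7.2 also added a SameSite property to HttpCookie, so the same policy can be applied there. Here’s a minimal sketch (the cookie name and value are made up for illustration):

using System.Web;

public static class CookieExample
{
    // Minimal sketch: a cookie that will still be sent on cross-origin POSTs.
    // SameSite=None should be paired with Secure, otherwise modern browsers will reject it.
    public static void SetCrossSiteCookie(HttpResponse response)
    {
        var cookie = new HttpCookie("PaymentFlow", "some-value") // hypothetical cookie
        {
            SameSite = SameSiteMode.None,
            Secure = true,
            HttpOnly = true
        };

        response.Cookies.Add(cookie);
    }
}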

However just because you can set the cookie policy to None, it doesn’t mean you should. After all, that just re-opens the vulnerability the browser was trying to protect against.

The solution I went with was to have a page that the user is redirected back to from the payment gateway, which does nothing other than re-submit the values from the payment gateway to the site again. This way I can be sure of what form data is being posted to the site, and when I re-post it, it is a same-site POST rather than a cross-origin one, which means the Session ID cookie will be sent.
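
As a rough illustration (not the exact code from my site), an MVC controller action along these lines could receive the gateway’s cross-site POST and render a tiny page that immediately re-posts the same values to a same-site URL. The PaymentController, GatewayCallback and /payment/complete names are all hypothetical:

using System.Net;
using System.Text;
using System.Web.Mvc;

public class PaymentController : Controller
{
    // The payment gateway POSTs here (cross-site, so the session cookie is not sent).
    // We respond with a page that instantly re-posts the same values to /payment/complete,
    // which is a same-site POST and therefore does carry the session cookie.
    [HttpPost]
    public ContentResult GatewayCallback(FormCollection form)
    {
        var fields = new StringBuilder();
        foreach (var key in form.AllKeys)
        {
            fields.AppendFormat("<input type=\"hidden\" name=\"{0}\" value=\"{1}\" />",
                WebUtility.HtmlEncode(key), WebUtility.HtmlEncode(form[key]));
        }

        var html = "<html><body onload=\"document.forms[0].submit()\">"
                 + "<form method=\"post\" action=\"/payment/complete\">" + fields + "</form>"
                 + "</body></html>";

        return Content(html, "text/html");
    }
}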

To stop this initial page setting a new session cookie (which it will try to do because it won’t receive one), I use the IIS URL Rewrite module to strip out the Set-Cookie response headers from the page.

You can do this with an outbound rule as follows:

<rewrite>
  <rules>
  </rules>
  <outboundRules>
    <rule name="Remove SetCookie Header" preCondition="Match Payment Page">
      <match serverVariable="RESPONSE_Set-Cookie" pattern=".*" />
      <action type="Rewrite" value="" />
    </rule>
    <preConditions>
      <preCondition name="Match Payment Page" logicalGrouping="MatchAny">
        <add input="{REQUEST_URI}" pattern="PAGE NAME GOES HERE" />
      </preCondition>
    </preConditions>
  </outboundRules>
</rewrite>

With this solution the cookies are still secure as the policy is set to Lax, and I can take payments using 3D Secure, which will soon become a requirement.

What are automated deployments and why do I need them?

When you’re working on a new website build, as either the marketer responsible for the site or the project manager overseeing its build, some of the most frustrating tasks using up your budget can originate from the IT team and come under the category of setup.

They can suck days, even weeks, from a project but offer no visible features or benefit to the end user. Worse still, they often need to come at the start of a project, which for people wanting to see a design turn into a webpage just feels like a delay and a lack of progress.

Manual Deployments

So, what are automated deployments and why are they worth it? Well first let’s examine what a deployment is to begin with.

When a change or new feature is ready to be built for a site, it will have been turned into a specification and handed over to a development team to build. The developers will then write some code and test it on their local machines; they may even demo it to someone else. At some point though, that work needs to get from their machine to the server(s) it runs on, so that other people like QA can see it without using the dev’s machine. Typically, there will be 3 server environments set up for a website:

  1. Dev / Test – An environment used by the developers and QA to build, test and refactor new features and fixes.
  2. UAT / Staging – An environment used by the website owners / stakeholders that the features have been deployed to for review and approval before they are deployed into production.
  3. Production – The live website that is accessible over the internet.

In a manual deployment setup, the developer is responsible for copying the website into each of the environments. This will generally involve:

  1. Getting the version of code to be deployed from source control.
  2. Publishing a build of the site locally (some languages need to be compiled into machine code, and most modern websites will run a task on the CSS and JavaScript to obfuscate and reduce the file size).
  3. Log in to databases and run any scripts to update the database.
  4. Log in to the web servers using something like a remote desktop client.
  5. Copy and paste the new files onto the server.
  6. Make some sort of record that the deployment has been done and what it included.

As you can probably guess, each of these steps is prone to human error. What if the developer gets the wrong version from source control? What if they overwrite the wrong folder? What if they miss the database changes? What if they don’t record the deployment, how can we be sure what version is on an environment?

Not only that, but it’s also a lengthy task for someone to do. Some parts are just waiting for a code publish to complete or a file copy to finish. For the odd production deploy with a decent checklist this might not be so bad, but for iterating work into QA it can make QA fails for minor things really costly to fix. It also encourages bundling more and more changes together to cut down the number of releases, which reduces the flow of work feeding into QA and slows the pace at which features can be developed and released.

A Better Way

Now we understand the issues, we certainly don’t want that as a cost throughout the lifetime of our site, so how do we avoid it?

Well, automated deployments quite simply take all these manual steps and automate them, with a few bonuses on top. By doing this we remove the elements of human error and are left with consistent results each time.

There are two parts to an automated pipeline: the first is build and the second is release.

The build part relates to the first two steps of our manual process. We need to get the code from source control and then publish it. Once we’ve done this, we are left with a build that we can assign a release number to. By automating it we’ve also ensured that the build happens the same way each time, rather than being subject to any differences between developers’ machines. At this point we can also start detecting build failures and include some automated testing to pick up on errors before even involving QA.

The release part relates to the remaining steps of the manual process. First of all, the process of copying files and running scripts now becomes one large script that won’t miss a step. In addition, now that we have build numbers we can see which build is on each environment and only promote the one we want to the next environment. Because we’re promoting builds rather than doing the whole build process again, we’re also ensuring that the build going to production is the exact same thing that was tested on QA and then reviewed on UAT. With some clever integrations with things like GitHub and Jira we can also automate release notes to say what was included in each release.

Degrees of Automation

It’s worth noting at this point that there are different degrees of automation, and this will greatly affect the cost of setup.

As well as automating the deployment of code, we could also automate the entire server setup through the use of ARM templates in Azure. Different levels of automated testing can be implemented, from very low code coverage to very high; a test could even be what the level of code coverage is.

Automation could also extend to include load balancers for green / blue deployments where zero downtime of the site is achieved by using multiple production servers and switching between them while files are being copied.

Is this the CI and CD thing I’ve heard about?

Two terms I deliberately haven’t used in this article are Continuous Integration (CI) and Continuous Delivery (CD), and that’s because although automated deployments form part of achieving them, they are not the same thing and it’s worth knowing how they differ.

Continuous Integration relates to the build part of the automated setup and aims to deal with issues arising from multiple developers working on a project. When more than one developer works on a copy of a website locally, both sets of changes drift in different ways from what is contained within source control. The longer this goes on for, the bigger the differences get and the harder it becomes to merge again. Continuous Integration advocates shortening the gaps between merges to as short a time as possible, potentially multiple times per day. By having automated tools in place to run a build whenever this happens, issues can be identified as early as possible.

Continuous Delivery relates more to the release aspect of the automated setup and is an aim to be constantly delivering / deploying updates to a solution as fast as possible. By doing this, deployments become trivial and smaller updates are delivered at a much faster pace rather than being queued for one big release. Automation helps make this a reality by removing many of the manual, time-consuming tasks that are repetitive.

Azure devops and custom NuGet feeds

If you’re setting up a CI pipeline on Azure Devops for a site which uses a NuGet feed from a source that isn’t on nuget.org, you may see the error:

“The nuget command failed with exit code(1) and error(Errors in packages.config projects Unable to find version…”

On your local dev machine you will have added an extra NuGet feed source through Visual Studio, which updates a global file on your machine. However, as Azure Pipelines runs on hosted build agents, you don’t have the same global file to update to include the sources.

Instead of this you need to add a NuGet.config file to the root of your repository.

Here is an example of one set up to include Sitecore’s NuGet package feed.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="nuget.org" value="https://www.nuget.org/api/v2/" />
    <add key="Sitecore NuGet v2 Feed" value="https://sitecore.myget.org/F/sc-packages/" />    
  </packageSources>
</configuration>

Next you will need to update your pipeline to tell the NuGet step to use this config file:

- task: NuGetCommand@2
  inputs:
    restoreSolution: '$(solution)'
    feedsToUse: 'config'
    nugetConfigPath: 'NuGet.config'

And that’s it. As long as all the sources are correct the NuGet command should now find your packages.

Bulk Inserting data using Entity Framework

Using tools like Entity Framework makes life far easier for a developer. Recently I blogged about how using them is what makes .Net Core one of the best platforms for prototype development, but the benefits don’t end there. They are also great from a security perspective, cutting a lot of the risk of SQL injection attacks by avoiding the easy mistakes you can make with regular ADO.NET.

However, they do have some downsides, a main one being that they are particularly slow when it comes to doing bulk inserts to a database.

For example, assume you have an application which regularly receives an XML import file consisting of 200,000 records, each of which needs to be either an insert or an update into the db. You’ll quickly learn that looping through the whole lot and then calling save changes results in a process taking an extremely long time to run; it may even just time out. You then decide to get rid of that one long save changes call by breaking the work up into blocks of 500 and calling save changes for each of those. That may solve the timeout issue, but it still results in a process potentially lasting around an hour.

The problem is that this is a scenario Entity Framework and EF Core just weren’t designed to handle. As a solution you could opt to drop Entity Framework altogether and revert to something like a native SQL bulk insert command, but what if you need to do some processing in code on each record before the import happens? What if you have one of those classic not-quite-valid XML files which would cause SQL’s bulk insert to fail?

The solution is to use an open source extension called EFCore.BulkExtensions.

EFCore.BulkExtensions

EFCore.BulkExtensions is a set of extension methods for Entity Framework that provide the functionality to do bulk inserts. You can add it to your project using NuGet and you’ll find the project on GitHub here: https://github.com/borisdj/EFCore.BulkExtensions

Usage is also very simple. Let’s assume you have some existing traditional EF code that loops through a collection and for each item creates a new db item and adds it to the db:

public async Task DoImport(List<foo> collection)
{
    foreach (var item in collection)
    {
        Jobs job = new Jobs();
        
        job.DateAdded = DateTime.UtcNow;
        job.Name = item.Name;
        job.Location = item.Location;

        await dbContext.Jobs.AddAsync(job);
    }

    await dbContext.SaveChangesAsync();
}

Rather than adding each item to the Entity Framework db context, you instead create a list of those objects and then call a BulkInsert function with them on your db context.

public async Task DoImport(List<foo> collection)
{
    List<Jobs> importJobs = new List<Jobs>();
    foreach (var item in collection)
    {
        Jobs job = new Jobs();
        
        job.DateAdded = DateTime.UtcNow;
        job.Name = item.Name;
        job.Location = item.Location;
        
        importJobs.Add(job);
    }

    await dbContext.BulkInsertAsync(importJobs);
}

It also works for updates, but rather than creating a new item, first retrieve it from the db and then at the end call BulkInsertOrUpdate with the list.

await dbContext.BulkInsertOrUpdateAsync(importJobs);
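
The extension methods also take an optional BulkConfig if you need to tune how the operation behaves. A hedged example (the values shown are just illustrative, not recommendations):

// Requires: using EFCore.BulkExtensions;
var config = new BulkConfig
{
    BatchSize = 2000,          // number of rows sent to SQL Server per batch
    SetOutputIdentity = true   // write generated identity values back onto the entities
};

await dbContext.BulkInsertOrUpdateAsync(importJobs, config);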

From my experience doing this took my import process that would run for over an hour down to something which would complete in a few minutes.

Top reasons .Net is amazing for prototyping

The other day someone told me .net was slow to get something built in, and to be fair to the person I can see why he would have thought that. Most of his interaction with .net projects has been on large, complex enterprise applications that often integrate with multiple other applications.

However, I would maintain that .net is a framework that is actually really fast to develop on, and that what he had perceived as being slow was the complexity of those projects rather than the actual coding time.

In fact, I would say it’s so fast to get something built in it, it actually becomes the fastest thing to develop a prototype in. Here are my top reasons why.

ASP.NET Core

ASP.NET Core inherits all the best bits from the ASP.NET Framework that came before it, giving the framework almost 20 years of refinement since its original release in 2002. The days of WebForms in the original ASP.NET are now long behind us and we now have the choice of building web applications with either MVC or Razor Pages.

Razor provides the perfect combination of a view language with helpers to render your HTML without limiting what can be done on the front end. How you code your HTML is still completely up to you; the helpers just provide features like binding that make it even faster to do.

Another great thing about .Net Core over what came before it is that it’s platform independent. Rather than being confined to just Windows, you can run it on Mac or Linux too.

Great starter templates

What kicks off a great prototype project is starting with great templates, and ASP.NET Core has a bunch.

As already mentioned, you can build a Web Application with either Razor Pages or MVC, but the templates also provide you with the base for building an API, an Angular app, React.js, or React.js and Redux, or you can simply create an empty application.

My preference is to go for MVC as it’s what I’m most familiar with, and a key thing for rapidly building a prototype is that you develop rapidly. The idea is to focus on creating something new and unique, not learning how to develop in a new framework.

The MVC Web Application template gives you a base site to work with and a few pages already set up; Bootstrap and jQuery are already included, so you start right at the point of working on your logic rather than spending time doing setup.

SQL Server and EF Core

I’ve always been a bit of a database guy. I’m not sure why, but it’s a topic that has always just made sense to me, and despite being a topic that can get quite complex, the reasons behind it being complex always feel logical.

When it comes to building a prototype though there are two aspects which make storage with .net core super simple.

Firstly, Entity Framework Core (EF Core) means you don’t really need to know any SQL or spend any time writing it. It helps if you do, but at a minimum all you need to do is create a model in your code and add a few lines for a DB context that tells EF Core which models are tables and how they relate. Then turn on migrations and you’re done. When you run the application, the DB gets created for you, and each time you change your model you just add another migration; the next time the application runs, the DB schema gets updated.

Querying your DB is done by writing LINQ queries against your Entity Framework model, allowing you to have next to no understanding of SQL and how the DB works. Everything is just done by magic for you.
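
To give a feel for how little code that involves, here’s a minimal sketch. The Job model, PrototypeContext and query are all made up for illustration:

using System.Linq;
using Microsoft.EntityFrameworkCore;

// A model that EF Core will turn into a Jobs table via a migration
public class Job
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string Location { get; set; }
}

// The DB context that tells EF Core which models are tables
public class PrototypeContext : DbContext
{
    public PrototypeContext(DbContextOptions<PrototypeContext> options) : base(options) { }

    public DbSet<Job> Jobs { get; set; }
}

public class JobQueries
{
    private readonly PrototypeContext db;

    public JobQueries(PrototypeContext db) => this.db = db;

    // Querying with LINQ rather than hand-written SQL
    public Job[] JobsIn(string location) =>
        db.Jobs.Where(j => j.Location == location)
               .OrderBy(j => j.Title)
               .ToArray();
}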

The second part is SQL Server and its different versions. Often when you think of SQL Server you think of the big DB engine with its many, many components that you install on a server or your local machine, but there are two others which are even more important: LocalDB and Azure SQL.

LocalDB is an option that can be installed as part of Visual Studio. No separate application is needed and no services have to run in the background. It is essentially the minimum required to start the DB engine for development purposes. In practical terms this means that when you start your application, EF Core can run off a LocalDB which didn’t require any setup, but as far as your application is concerned it is no different from working with any other version of SQL Server.

Azure SQL, as the name implies, is SQL Server on Azure. The only thing I really need to say about this is that you can swap LocalDB and Azure SQL with ease. They may be different, but as far as your prototype is concerned, they are the same.

Scaffolding

The only thing quicker than writing code is having someone else do it for you. So we’ve created our application from a template, added a model which generated our database, and now it’s time to create some pages. Well, the good news is it’s still not time to write much code, because Visual Studio can scaffold out pages based on our model for us!

Adding a controller to our project in Visual Studio gives us some options on what should be generated for us, one of which is MVC Controller with views, using Entity Framework. What that means is that given a model, it will create controllers and views for listing items, creating them, editing them and deleting them. No coding by us required!

Now it’s unlikely that this is exactly what you’re after, but it’s generally a good starting place, and deleting code you don’t need is far quicker than writing it.

Azure

Lastly there is Azure. You may have spotted a theme to all these points, and that is they all remove the effort required to do any setup and instead let you focus on building your own logic. This point is no different.

I remember a time when, if I wanted a server to put an application on, I had to request it and then wait a while. What I would get back would either be a server that already had resources running on it, or a blank server that would need applications installed on it, e.g. SQL Server or the .Net Framework. IIS wouldn’t have been configured and it would be a number of hours before my application would be running.

With Azure you don’t even really need to leave Visual Studio. From the publish dialog box you can create a new App Service and DB, and then publish. All the connection strings are sorted out for you, there are service plans which cost next to nothing, a domain is configured for you, and at the end of the publish the website opens and is working. The whole process takes less than 10 minutes.

Logging with .net core and Application Insights

When you start building serverless applications like Azure Functions or Azure web jobs, one of the first things you will need to contend with is logging.

Traditionally, logging was achieved simply by appending rows to a text file stored on the same server your application was running on. Tools like log4net made this simpler by bringing some structure to the process and providing functionality like automatic timestamps, log levels and the ability to configure which logs should actually get written out.

With a serverless application though, writing to the hard disk is a big no-no. You have no guarantee how long that server will exist for, and when your application moves, that data will be lost. In a world where you might want to scale up and down, having logs split between servers also makes them hard to retrieve when an error does happen.

.net core

The first bit of good news is that .net core supports a logging API. Here I am configuring it in a web job to output logs to the console and to Application Insights. This is part of the host builder config in the Program.cs file.

hostBuilder.ConfigureLogging((context, b) =>
{
    // Console logging is useful when running locally
    b.AddConsole();

    // When an instrumentation key exists in the app settings, also log to Application Insights.
    // This is the recommended monitoring option when the web job runs in Azure,
    // where you can't watch the console output.
    string instrumentationKey = context.Configuration["APPINSIGHTS_INSTRUMENTATIONKEY"];
    if (!string.IsNullOrEmpty(instrumentationKey))
    {
        b.AddApplicationInsights(o => o.InstrumentationKey = instrumentationKey);
    }
});

Microsoft’s documentation on logging in .NET Core and ASP.NET can be found here.

Creating a log in your code is then as simple as using dependency injection to inject an instance of ILogger into your class and then using its functions to create a log.

public class MyClass
{
    private readonly ILogger<MyClass> logger;

    public MyClass(ILogger<MyClass> logger)
    {
        this.logger = logger;
    }

    public void Foo()
    {
        try
        {
            // Logging information
            logger.LogInformation("Foo called");
        }
        catch (Exception ex)
        {
            // Logging an error
            logger.LogError(ex, "Something went wrong");
        }
    }
}

Application Insights

When your application is running in Azure, Application Insights is where all your logs will live.

What’s great about App Insights is it will give you the ability to write queries against all your logs.

So for instance if I wanted to find all the logs for an import function starting, I can write a filter for messages containing “Import function started”.

One of my favourite functions is ago(30m). It will output the time for a given timespan in the past. This is great when you’re running the same query frequently and are only interested in the last x amount of time, as you can simply write where timestamp > ago(30m) for the last 30 minutes of logs rather than trying to remember what date time format your string should be in.

Queries can also be saved or pinned to a dashboard if they are a query you need to run frequently.

All the regular logs your application makes end up in the traces table, and that’s what you query. What can be confusing with this though is the errors.

With the code above I had a try catch block, and in the catch block I called logger.LogError(ex, "Something went wrong"), so in my logs I expect to see the message, and as I passed an exception I also expect to see the exception details. But if we look at the entry in Application Insights, we see an error in the traces log but no stack trace or anything else from the exception.

LogError writes to both the traces and exceptions logs. If you want to view the exceptions, you have to look at the exceptions log, not the traces.

This is just the start of the functionality that Application Insights provides, but if you’re just starting out, hopefully this is a good indication not only of how easy it is to add logging to your application, but also how much added value App Insights can offer over just having text files.

Two ways to import an XML file with .Net Core or .Net Framework

It’s always the simple stuff you forget how to do. For years I’ve mainly been working with JSON files, so when faced with the task of reading an XML file my brain went “I can do that” followed by “actually, how did I used to do that?”.

So here are two different methods. They work on .Net Core and, in theory, .Net Framework (my project is .Net Core and I haven’t checked that they actually work on Framework).

My examples are using an XML in the following format:

<?xml version="1.0" encoding="utf-8"?>
<jobs>
    <job>
        <company>Construction Co</company>
        <sector>Construction</sector>
        <salary>£50,000 - £60,000</salary>
        <active>true</active>
        <title>Recruitment Consultant - Construction Management</title>
    </job>
    <job>
        <company>Medical Co</company>
        <sector>Healthcare</sector>
        <salary>£60,000 - £70,000</salary>
        <active>false</active>
        <title>Junior Doctor</title>
    </job>
</jobs>

Method 1: Reading an XML file as a dynamic object

The first method is to load the XML file into a dynamic object. This is cheating slightly by first using JsonConvert to convert the XML document into a JSON string and then deserializing that into a dynamic object.

using Newtonsoft.Json;
using System;
using System.Collections.Generic;
using System.Dynamic;
using System.IO;
using System.Text;
using System.Xml;
using System.Xml.Linq;

namespace XMLExportExample
{
    class Program
    {
        static void Main(string[] args)
        {
            string jobsxml = "<?xml version=\"1.0\" encoding=\"utf-8\"?><jobs>    <job><company>Construction Co</company><sector>Construction</sector><salary>£50,000 - £60,000</salary><active>true</active><title>Recruitment Consultant - Construction Management</title></job><job><company>Medical Co</company><sector>Healthcare</sector><salary>£60,000 - £70,000</salary><active>false</active><title>Junior Doctor</title></job></jobs>";

            byte[] byteArray = Encoding.UTF8.GetBytes(jobsxml);
            MemoryStream stream = new MemoryStream(byteArray);
            XDocument xdoc = XDocument.Load(stream);

            string jsonText = JsonConvert.SerializeXNode(xdoc);
            dynamic dyn = JsonConvert.DeserializeObject<ExpandoObject>(jsonText);

            foreach (dynamic job in dyn.jobs.job)
            {
                string company;
                if (IsPropertyExist(job, "company"))
                    company = job.company;

                string sector;
                if (IsPropertyExist(job, "sector"))
                    sector = job.sector;

                string salary;
                if (IsPropertyExist(job, "salary"))
                    salary = job.salary;

                string active;
                if (IsPropertyExist(job, "active"))
                    active = job.active;

                string title;
                if (IsPropertyExist(job, "title"))
                    title = job.title;

                // A property that doesn't exist
                string foop;
                if (IsPropertyExist(job, "foop"))
                    foop = job.foop;
            }

            Console.ReadLine();
        }

        public static bool IsPropertyExist(dynamic settings, string name)
        {
            if (settings is ExpandoObject)
                return ((IDictionary<string, object>)settings).ContainsKey(name);

            return settings.GetType().GetProperty(name) != null;
        }
    }
}

A foreach loop then goes through each of the jobs, and a helper function IsPropertyExist checks for the existence of a value before trying to read it.

Method 2: Deserializing with XmlSerializer

My second approach is to turn the XML file into classes and then deserialize the XML file into it.

This approach requires more code, but most of it can be auto-generated by Visual Studio for us, and we end up with strongly typed objects.

Creating the XML classes from XML

To create the classes for the XML structure:

1. Create a new class file and remove the class that gets created, i.e. you’re just left with this:

using System;
using System.Collections.Generic;
using System.Text;

namespace XMLExportExample
{

}

2. Copy the content of the XML file to your clipboard
3. Select the position in the file you want the classes to go and then go to Edit > Paste Special > Paste XML as Classes

If you’re using my XML, you will now have a class file that looks like this:

using System;
using System.Collections.Generic;
using System.Text;

namespace XMLExportExample
{

    // NOTE: Generated code may require at least .NET Framework 4.5 or .NET Core/Standard 2.0.
    /// <remarks/>
    [System.SerializableAttribute()]
    [System.ComponentModel.DesignerCategoryAttribute("code")]
    [System.Xml.Serialization.XmlTypeAttribute(AnonymousType = true)]
    [System.Xml.Serialization.XmlRootAttribute(Namespace = "", IsNullable = false)]
    public partial class jobs
    {

        private jobsJob[] jobField;

        /// <remarks/>
        [System.Xml.Serialization.XmlElementAttribute("job")]
        public jobsJob[] job
        {
            get
            {
                return this.jobField;
            }
            set
            {
                this.jobField = value;
            }
        }
    }

    /// <remarks/>
    [System.SerializableAttribute()]
    [System.ComponentModel.DesignerCategoryAttribute("code")]
    [System.Xml.Serialization.XmlTypeAttribute(AnonymousType = true)]
    public partial class jobsJob
    {

        private string companyField;

        private string sectorField;

        private string salaryField;

        private bool activeField;

        private string titleField;

        /// <remarks/>
        public string company
        {
            get
            {
                return this.companyField;
            }
            set
            {
                this.companyField = value;
            }
        }

        /// <remarks/>
        public string sector
        {
            get
            {
                return this.sectorField;
            }
            set
            {
                this.sectorField = value;
            }
        }

        /// <remarks/>
        public string salary
        {
            get
            {
                return this.salaryField;
            }
            set
            {
                this.salaryField = value;
            }
        }

        /// <remarks/>
        public bool active
        {
            get
            {
                return this.activeField;
            }
            set
            {
                this.activeField = value;
            }
        }

        /// <remarks/>
        public string title
        {
            get
            {
                return this.titleField;
            }
            set
            {
                this.titleField = value;
            }
        }
    }

}

Notice that the active field was even picked up as being a bool.

Doing the Deserialization

To do the deserialization, first create an instance of XmlSerializer for the type of the object we want to deserialize to. In my case this is jobs.

            var s = new System.Xml.Serialization.XmlSerializer(typeof(jobs));

Then call Deserialize, passing in an XmlReader. I’m creating an XmlReader on the stream I used in the dynamic example.

            jobs o = (jobs)s.Deserialize(XmlReader.Create(stream));

The complete file now looks like this:

using System;
using System.IO;
using System.Text;
using System.Xml;

namespace XMLExportExample
{
    class Program
    {
        static void Main(string[] args)
        {
            string jobsxml = "<?xml version=\"1.0\" encoding=\"utf-8\"?><jobs>    <job><company>Construction Co</company><sector>Construction</sector><salary>£50,000 - £60,000</salary><active>true</active><title>Recruitment Consultant - Construction Management</title></job><job><company>Medical Co</company><sector>Healthcare</sector><salary>£60,000 - £70,000</salary><active>false</active><title>Junior Doctor</title></job></jobs>";

            byte[] byteArray = Encoding.UTF8.GetBytes(jobsxml);
            MemoryStream stream = new MemoryStream(byteArray);

            var s = new System.Xml.Serialization.XmlSerializer(typeof(jobs));
            jobs o = (jobs)s.Deserialize(XmlReader.Create(stream));

            Console.ReadLine();
        }
    }
}

And that’s it. Any missing nodes in your XML will just be blank rather than causing an error.

How to create a shared access signature for a specific folder in Azure Storage

As we move into a serverless world where development work is done less on generic multipurpose servers that require patching, both to the OS and to the applications installed on them, things become easier and in some cases cheaper.

For instance, I am currently working on a project that needs to send and receive XML files from different sources, and with no other requirements for a server, Azure storage seemed the perfect fit. As it’s just a storage service you pay for what you need and nothing more. All the patching and maintenance of APIs is provided by Microsoft and we have an instant place to store our files. Ideal!

However, new technology always comes with new challenges. My first issue came with the plan for how we would be getting files in and out with partners. This isn’t the first time I’ve worked with Azure File Storage, but it is the first time I’ve done it without an ops person setting things up. Previously we used FTP, which, while dated, works with a lot of applications. Like a headphone jack it’s not the most glamorous of connections, but it works, everyone knows how it works and everyone has things that work with it. However, it transpires that despite Azure storage being 10 years old and receiving requests for FTP support from the start, Microsoft have decided to go the same route as Apple with the headphone jack and not have it. Instead, the only option for an integration is REST. As it turns out, the ops people I had worked with in the past, when faced with this issue, had just put a VM in front of it, which kind of defeats the point of using Azure storage in the first place!

So we’re going with REST, and Microsoft provide quite a straightforward REST API. All good so far, but how do we limit access? Well, there’s a guide to Using the Azure Storage REST API which contains a section on creating an authorisation header. It’s long and overly complex, but it does point you in the direction that to do this you need a Shared Access Signature. The other option is an access key, but this is something you should never give away to a third party.

Shared Access Signature

After a bit more digging through the documentation (and just clicking the thing that sounded right in the portal) I found the documentation on creating an Account SAS, which sounded like what I wanted (it wasn’t, but it’s close).

With a shared access signature you can say what kind of service should be allowed, what permissions they should have, IP addresses, and start and end dates. All awesome things.

Once I had this I could then use the REST API, but there was a problem. I could access every folder in the storage account and there was no way to stop this! For integrating with two third parties, they would both be able to access each other’s stuff, and our own private stuff.

There is also no way to revoke the SAS once it’s been generated, other than refreshing the access keys, which would affect everyone.

Folder Level Shared Access Signature

After a bit more research I found what I was looking for: how to create a shared access signature at a folder or item level and how to link it to a policy.

The first thing you need is Azure Storage Explorer. Once you’re set up with this you will be able to view all your storage accounts.

From here you are able to browse to the folder you want to share, right click it and choose Manage Access Policies.

This will open a dialog to manage the policies for this specific object rather than the account.

Here you can set all the same permissions as you could for a signature at an account level, but now for a specific object and against a policy rather than an actual signature, meaning the policy can be updated in the future with no change to the signature.

Better still you can remove the policy which will then invalidate any signature using it.

For the actual signature key, right click the same folder and click Get Shared Access Signature.

Then in the dialog select the policy from the dropdown rather than specifying the individual permissions.

Click create and you can copy the keys.

You now have an access key that is limited to a specific folder rather than the entire account.

This is only possible to do through one of the code/scripting interfaces, e.g. PowerShell or Storage Explorer. The Azure portal will only let you get signatures at an account level.
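
To give an idea of how a partner might then use that signature, here’s a hedged sketch using the Azure.Storage.Blobs SDK and assuming the shared folder is a blob container. The account, container and file names are placeholders:

using System;
using Azure.Storage.Blobs;

class PartnerUploadExample
{
    static void Main()
    {
        // The URL handed to the third party: the shared container address plus the SAS token
        var folderWithSas = new Uri(
            "https://myaccount.blob.core.windows.net/partner-a?sv=2019-12-12&si=partner-a-policy&sig=REPLACE");

        var container = new BlobContainerClient(folderWithSas);

        // The SAS only grants access to this container, so the upload can't touch anything else
        container.GetBlobClient("orders-20200101.xml")
                 .Upload("orders-20200101.xml", overwrite: true);
    }
}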

Config file transforms with Azure Devops

For a long time now our primary CI setup has been based around Team City and Octopus Deploy, but as reliable as it is, there are things I don’t like about it:

  1. It’s not a SaaS setup, meaning there’s a VM to occasionally think about and updates to install. While Octopus is now available as a SaaS option, Team City is not, and moving Octopus would only solve half the problem.
  2. That VM they both sit on every so often gets an issue with its hard disk being full.
  3. It’s complicated to recommend the same setup to clients. You end up having to go through multiple things they need to buy, which then require some installation and ongoing maintenance. Ideally we would have a setup that’s easy for them to replicate and own themselves with minimal maintenance.

So when we took over a site recently that, typically, came with no existing CI setup in place, I decided to take a look at using Azure Devops instead. You can use Azure Devops with Octopus Deploy, but as it claims to be able to manage releases as well as builds, we went for doing the whole thing just in Azure Devops.

Getting a build set up was relatively straightforward so I’m going to skip past that bit, but in short we ended up with a build that will create a web deploy package and publish it as an artifact. Typical msbuild type stuff.

File transforms and variable substitution

The first real tricky point came with replacing variables in config files during a release to each environment. We were using the IIS Web App Deploy task to deploy the application to IIS on a VM (no new Azure Web App Services in this setup 😦 as I said, we took over the site and this was just to get automated deploys of what they already have). A simple starting point with this is the built-in functionality for XML variable substitution in the IIS Web App Deploy task.

Quite simply, you can add all your variables to the variable list, set the scope for which environment you want them to apply to, and during the deploy they are replaced in your config. Unlike some tag replacement tools I’ve used in the past, this one actually uses the name of the connection string or app setting you need to set, so if you need to set a connection string named web, the variable name will be web.

This is also where my problem started. The description of what XML variable substitution does is:

Variables defined in the build or release pipeline will be matched against the ‘key’ or ‘name’ entries in the appSettings, applicationSettings, and connectionStrings sections of any config file and parameters.xml

This was a Sitecore solution, and for Sitecore most of your config settings are in Sitecore’s own sitecore section of the config file. So in other words the connection string will get updated but the rest won’t.

Parameter and SetParameter XML files

My next issue was that finding a solution is actually quite hard. Searching for this problem either gave me a lot of results for setups using ARM templates (as I said, this was a solution we took over and that kind of change is not on the agenda), or you just get the easy bit above. Searching for Sitecore and Azure Devops also leads you to a lot of results on a cloud infrastructure setup (again, not what we’re doing here, at least in the short term). Everything that was coming up felt far more complicated than the solution should be.

However, the documentation on the XML variable substitution did have one interesting sentence:

If you are looking to substitute values outside of these elements you can use a (parameters.xml) file, however you will need to use a 3rd party pipeline task to handle the variable substitution.

A parameters.xml file isn’t something I’ve used before, which makes this sentence a bit cryptic. The first half says I can do what I want with an XML file, but the second half says I’ll need something else to actually do it.

After a bit of research this all comes back to web deploy. When you do a build that outputs a web deploy package, you get 5 files.

A zip file containing the actual site, a command file which has the script to do the deploy, and a SetParameters file which is used to set config variables during the deploy. The others aren’t so important.

To have different config set on different environments you just need to edit the SetParameters file. But first you need to have the parameter in the SetParameters file so that you can actually change it, and this is where the parameters.xml file comes in.

Creating the parameter files

Add a file called parameters.xml to the root of your project and then add parameters as follows.

<?xml version="1.0" encoding="utf-8" ?>
<parameters>
  <parameter name="DataFolderLocation" defaultvalue="#{dataFolder}">
    <parameterEntry kind="XmlFile" scope="App_Config\\Include\\Z.Project\\DataFolder\.config$" match="/configuration/sitecore/sc.variable[@name='dataFolder']/patch:attribute/text()" />
  </parameter>
</parameters>

Some important parts:

defaultvalue - The value that the config setting will get set to
scope - The path to the file containing the setting
match - An XPath expression for finding the part of the config file to update

Once you have this, the build will start producing a SetParameters.xml file containing the extra parameters.

<?xml version="1.0" encoding="utf-8"?>
<parameters>
  <setParameter name="IIS Web Application Name" value="Default Web Site/SiteCore.Website_deploy" />
  <setParameter name="DataFolderLocation" value="#{dataFolder}" />
</parameters>

Note: I’ve set the value to be something I intend to replace in the release process.

Replacing the tokens

With our SetParameters.xml file now containing all the config we need to update, we need a step in the release process that will replace all the tokens with the correct values.

To do this I used the Replace Tokens task: https://marketplace.visualstudio.com/items?itemName=qetza.replacetokens

Config options need to be set for:

Root Directory - Path to the folder containing the SetParameters.xml file
Target files - A list of files to have replacements done in. In our case this was SiteCore.Website.SetParameters.xml
Token prefix - The prefix on tokens to be searched for. Ours was #{
Token suffix - The suffix to denote the end of a token. Ours was }

Lastly, in the IIS Web App Deploy step the SetParameters file needed to be selected, and the new variables added to the variable list in Azure Devops. The variable names need to be the bit between your prefix and suffix, i.e. #{datafolder} would be called datafolder.

If you don’t set the variables then the logs will show a warning for each one it couldn’t find.

2019-09-24T17:17:21.6950466Z ##[section]Starting: Replace tokens in SiteCore.Website.SetParameters.xml
2019-09-24T17:17:23.9831695Z ==============================================================================
2019-09-24T17:17:23.9831783Z Task         : Replace Tokens
2019-09-24T17:17:23.9831816Z Description  : Replace tokens in files
2019-09-24T17:17:23.9831861Z Version      : 3.2.1
2019-09-24T17:17:23.9831891Z Author       : Guillaume Rouchon
2019-09-24T17:17:23.9831921Z Help         : v3.2.1 - [More Information](https://github.com/qetza/vsts-replacetokens-task#readme)
2019-09-24T17:17:23.9831952Z ==============================================================================
2019-09-24T17:17:27.2703037Z replacing tokens in: C:\azagent\A1\_work\r1\a\PublishBuildArtifacts\SiteCore.Website.SetParameters.xml
2019-09-24T17:17:27.3133832Z ##[warning]variable not found: dataFolder
2019-09-24T17:17:27.3179775Z ##[section]Finishing: Replace tokens in SiteCore.Website.SetParameters.xml

With all this set, our config has its variables configured within Azure Devops for each environment.

Setting up local https with IIS in 10 minutes

For very good reasons, websites now nearly always run under https rather than http. As devs though, this gives us the complication of either removing any local redirect-to-https rules and “hoping” things work ok when we get to a server, or setting local IIS up to have an https binding.

Having https set up locally is obviously a lot more favourable, and what has traditionally been done is to create a self signed certificate. However, while this works as far as IIS is concerned, it still leaves an annoying browser warning, as the browser will recognise it as insecure. This can then create additional problems in client-side code when certain things hit that error when calling an API.

mkcert

The solution is to have a certificate added to your trusted root certificates rather than a self signed one. Fortunately there is a tool called mkcert that makes the process a lot simpler to do.

https://github.com/FiloSottile/mkcert#windows

Create a local cert step by step

1. If you haven’t already, install Chocolatey ( https://chocolatey.org/install ). Chocolatey is a package manager for Windows which makes it super simple to install applications. The name is inspired by NuGet, i.e. Chocolatey NuGet.

2. Install mkcert. To do this, run the following from an admin command window:

choco install mkcert

3. Create a local certificate authority (ca)

mkcert -install

4. Create a certificate

mkcert -pkcs12 example.com

Remember to change example.com to the domain you would like to create a certificate for.

5. Rename the .p12 file that was created to .pfx (this is what IIS requires). The certificate will have been created in the folder you had the command window open at.

You can now import the certificate into IIS as normal. When asked for a password, this has been set to changeit.