Why is my Session ID changing on 3D Secure payments?

If you have a website where you are implementing 3D Secure payments, you may find that on receipt of the payment setup confirmation the user's Session ID has changed with no apparent cause.

Let's have a quick run-through of the payment process in this scenario (this is roughly how SagePay and WorldPay both work):

  1. User completes payment details on your site (some wizardry normally happens at this point with an iFrame for the card number to maintain PCI compliance) and the form is submitted to your server
  2. Your server calls an API from the payment gateway to set up a 3D Secure payment, passing the user's Session ID along with all the payment details (sketched in code after this list).
  3. Payment gateway responds with a URL for you to redirect the user to by posting a form to it. You do this, likely in an iFrame, so that the page still looks like your website (otherwise it's a very ugly page).
  4. User may or may not get prompted for some sort of authentication by the bank. This could be something like receiving a text message with a code to enter.
  5. Once authenticated, the user is sent back to your website (in the iFrame) with a POST request.
  6. Your server picks the form details out of the request and then calls an API to complete the transaction. In this API call you send the Session ID so that the payment gateway can validate it is the same as the one at the start of the process.
  7. Confirmation shown to the user.
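Steps 2 and 6 are the ones your server owns. As a very rough sketch of where the Session ID fits in (the IPaymentGateway client and PaymentDetails model here are made-up stand-ins, not a real SagePay or WorldPay API):

using System.Web.Mvc;

// Hypothetical stand-in for the gateway's API client - the real shape
// depends on your payment provider
public interface IPaymentGateway
{
    string Setup3DSecure(string sessionId, PaymentDetails details); // returns the bank redirect URL
    bool Complete(string sessionId, FormCollection bankResponse);
}

public class PaymentDetails { /* card token, amount, etc. */ }

public class PaymentController : Controller
{
    private readonly IPaymentGateway _gateway;

    public PaymentController(IPaymentGateway gateway)
    {
        _gateway = gateway;
    }

    [HttpPost]
    public ActionResult Setup(PaymentDetails details)
    {
        // Step 2: the user's Session ID goes to the gateway with the payment details
        string redirectUrl = _gateway.Setup3DSecure(Session.SessionID, details);

        // Step 3: render a self-posting form that sends the user to the bank (usually in an iFrame)
        return View("RedirectToBank", (object)redirectUrl);
    }

    [HttpPost]
    public ActionResult Complete(FormCollection bankResponse)
    {
        // Step 6: this Session ID must match the one sent in Setup - it's the
        // value that goes missing when the SameSite policy blocks the cookie
        bool ok = _gateway.Complete(Session.SessionID, bankResponse);
        return View(ok ? "Confirmation" : "Failed");
    }
}

The point to note is that both actions read the same Session ID from the session cookie, which is exactly what the rest of this post is about.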

Introducing the Same Site Cookie Policy

In 2020 a change was made to how cookies function in browsers to defend against cross-site request forgery. Troy Hunt has a brilliant explanation of the issue with how cookies used to work and how this has changed here. I'm going to try a much shorter explanation:

When a request is made from a browser, all the cookie values for that domain are sent with the request, including the one for the Session ID. The theory here is that because the cookies are only ever sent back to the domain which set them in the first place, information is only being shared with the place it came from, which is therefore safe.

However, the workings of the internet and what domain a button click might call aren't overly obvious to most people. So what if clicking a link on one site causes the user to be redirected to another site? Answer: all the cookies for that other site are still sent. The same thing happens if a form is posted from one site to another. The problem here is that if you are authenticated on the other website, it's possible for a cross-site request forgery attack to use your session via your browser without you even realising you were on the site again. This is why it's always a good idea to log out of websites!

The introduction of the same site policy changes how cookies work with three options:

  1. None: which is what browsers used to do, i.e. send all the cookies with cross-origin requests
  2. Lax: some limits on sending cookies with cross-origin requests
  3. Strict: tight limits on sending cookies with cross-origin requests

None sends everything, and a Strict policy will basically stop cookies from being sent on any cross-origin request.

A Lax policy is slightly more interesting though, because it depends on whether the request is a GET or a POST. If it is a GET then the cookies will still be sent, which means that if you follow a link from another site or a search engine your cookies will still be sent to the site. However a POST (like the one the 3D Secure page is making) will no longer send the cookies.

If the policy isn’t set, then Lax is used as the default.

So why is my Session ID changing?

The problem is that POST request back to your site: unless the Session ID cookie has a policy of None, it won't be sent. The server will see that there is no Session ID, treat the user as new to the site and, as a result, start a new session. From this point on the user has lost the old session and you can't complete the payment. Worse still, they've probably just been logged out, and anything else using session data has also been lost.

In .Net 4.7.2 and up, Microsoft has implemented the ability to set the cookie policy of the session ID. You can do this in your web.config file like this:

<configuration>
 <system.web>
  <anonymousIdentification cookieRequireSSL="false" /> <!-- No config attribute for SameSite -->
  <authentication>
   <forms cookieSameSite="None" requireSSL="false" />
  </authentication>
  <sessionState cookieSameSite="None" /> <!-- No config attribute for Secure -->
  <roleManager cookieRequireSSL="false" /> <!-- No config attribute for SameSite -->
 </system.web>
</configuration>

You can find more about this here. In everything older the policy won't be set and will default to Lax.
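The same versions also expose a SameSite property on HttpCookie, so you can control the policy per cookie in code. A minimal sketch (the cookie name and values here are illustrative):

using System.Web;

public static class CookieExamples
{
    // .NET 4.7.2+ exposes SameSite per cookie in code as well as in config
    public static void AddCrossSiteCookie(HttpResponseBase response)
    {
        var cookie = new HttpCookie("example", "value")
        {
            SameSite = SameSiteMode.None, // None, Lax or Strict
            Secure = true                 // browsers reject SameSite=None without Secure
        };
        response.Cookies.Add(cookie);
    }
}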

However just because you can set the cookie policy to None, it doesn’t mean you should. After all, that just re-opens the vulnerability the browser was trying to protect against.

The solution I went with was to have a page that the user is redirected back to from the payment gateway, which does nothing other than re-submit the values from the payment gateway to the site again. This way I can be sure of what form data is being posted to the site, and when I re-post it, it is a same-site post rather than a cross-origin one, which means the Session ID cookie will be sent.
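A minimal sketch of that bounce page as an MVC action (the action name and the /payment/complete target are placeholders): it takes whatever the gateway posted and immediately re-posts it from our own origin.

using System.Text;
using System.Web;
using System.Web.Mvc;

public class PaymentCallbackController : Controller
{
    // The gateway posts here cross-origin, so no session cookie arrives with it
    [HttpPost]
    public ContentResult Bounce()
    {
        var html = new StringBuilder();
        html.Append("<html><body onload=\"document.forms[0].submit()\">");
        html.Append("<form method=\"post\" action=\"/payment/complete\">");

        // Copy every form value the gateway sent into hidden fields
        foreach (string key in Request.Form.AllKeys)
        {
            html.AppendFormat("<input type=\"hidden\" name=\"{0}\" value=\"{1}\" />",
                HttpUtility.HtmlAttributeEncode(key),
                HttpUtility.HtmlAttributeEncode(Request.Form[key]));
        }

        html.Append("</form></body></html>");

        // The page immediately re-posts to us from our own origin, so this
        // time the browser includes the Session ID cookie
        return Content(html.ToString(), "text/html");
    }
}

Because the generated form posts from our domain to our domain, the browser treats it as a same-site request and sends the cookies along.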

To stop my initial page setting a new session cookie (which it will try to do because it won't receive one), I use the IIS URL Rewrite module to strip out the Set-Cookie response headers from the page.

You can do this with an outbound rule as follows:

<rewrite>
  <rules>
  </rules>
  <outboundRules>
    <rule name="Remove SetCookie Header" preCondition="Match Payment Page">
      <match serverVariable="RESPONSE_Set-Cookie" pattern=".*" />
      <action type="Rewrite" value="" />
    </rule>
    <preConditions>
      <preCondition name="Match Payment Page" logicalGrouping="MatchAny">
        <add input="{REQUEST_URI}" pattern="PAGE NAME GOES HERE" />
      </preCondition>
    </preConditions>
  </outboundRules>
</rewrite>

With this solution the cookies remain secure as the policy is set to Lax, and I can take payments using 3D Secure, which will soon become a requirement.

What are automated deployments and why do I need them?

When you're working on a new website build, as either the marketer responsible for the site or the project manager overseeing its build, some of the most frustrating tasks using up your budget can originate from the IT team and come under the category of setup.

They can suck days, even weeks, from a project but offer no visible features or benefit to the end user. Worse still, they often need to come at the start of a project, which for people wanting to see a design turn into a webpage just feels like delay and a lack of progress.

Manual Deployments

So, what are automated deployments and why are they worth it? Well first let’s examine what a deployment is to begin with.

When a change or new feature is ready to be built for a site, it will have been turned into a specification and handed over to a development team to build. The developers will then write some code and test it on their local machines; they may even demo it to someone else. At some point though, that work needs to get from their machine to the server(s) it runs on so that other people like QA can see it without using the dev's machine. Typically, there will be 3 server environments set up for a website:

  1. Dev / Test – An environment used by the developers and QA to build, test and refactor new features and fixes.
  2. UAT / Staging – An environment used by the website owners / stakeholders that the features have been deployed to for review and approval before they are deployed into production.
  3. Production – The live website that is accessible over the internet.

In a manual deployment setup, the developer is responsible for copying the website into each of the environments. This will generally involve:

  1. Getting the version of code to be deployed from source control.
  2. Publishing a build of the site locally (some languages need to be compiled into machine code, and most modern websites will run a task on the CSS and JavaScript to obfuscate and reduce the file size).
  3. Log in to databases and run any scripts to update the database.
  4. Log in to the web servers using something like a remote desktop client.
  5. Copy and paste the new files onto the server.
  6. Make some sort of record that the deployment has been done and what it included.

As you can probably guess, each of these steps is prone to human error: what if the developer gets the wrong version from source control? What if they overwrite the wrong folder? What if they miss the database changes? And if they don't record the deployment, how can we be sure what version is on an environment?

Not only that, but it's also a lengthy task for someone to do; some parts are just waiting for a code publish to complete or a file copy to finish. For the odd production deploy with a decent checklist this might not be so bad, but for iterating work into QA it not only makes QA fails for minor things really costly to fix, it also results in bundling more and more changes together to cut down the number of releases, which then reduces the flow of work feeding into QA and slows the pace at which features can be developed and released.

A Better Way

Now we understand the issues, we certainly don’t want that as a cost throughout the lifetime of our site, so how do we avoid it?

Well, automated deployments quite simply take all these manual steps and automate them, with a few bonuses on top. By doing this we remove all the elements of human error and are left with consistent results each time.

There are two parts to an automated pipeline: the first is build and the second is release.

The build part relates to the first 2 steps of our manual process. We need to get the code from source control and then publish it. Once we've done this, we are left with a build that we can assign a release number to. By automating it we've also ensured that the build happens the same way each time, rather than being subject to any differences between developers' machines. At this point we can also start detecting build failures and include some automated testing to pick up on errors before even involving QA.

The release part relates to the remaining steps of the manual process. First of all, the process of copying files and running scripts now becomes one large script that won't miss a step. In addition, now that we have build numbers we can see which build is on each environment and only promote the one we want to the next environment. Because we're promoting builds rather than doing the whole build process again, we're also ensuring that the build going to production is the exact same thing that was tested on QA and then reviewed on UAT. With some clever integrations with things like GitHub and Jira we can also automate release notes to say what was included in each release.

Degrees of Automation

It's worth noting at this point that there are different degrees of automation, and this will greatly affect the cost of setup.

As well as automating the deploy of code, we could also automate the entire server setup through the use of ARM templates in Azure. Different levels of automated testing can be implemented, from very low code coverage to very high; a test could even be what the level of code coverage is.

Automation could also extend to include load balancers for blue/green deployments, where zero downtime of the site is achieved by using multiple production servers and switching between them while files are being copied.

Is this the CI and CD thing I’ve heard about?

Two terms I have deliberately not used in this article are Continuous Integration (CI) and Continuous Delivery (CD). That's because although automated deployments form part of achieving them, they are not the same thing, and it's worth knowing how they differ.

Continuous Integration relates to the build part of the automated setup and aims to deal with issues arising from multiple developers working on a project. When more than one developer works on a copy of a website locally, both sets of changes diverge from what is contained within source control. The longer this goes on, the bigger the differences get and the harder it becomes to merge again. Continuous Integration advocates shortening the gaps between merges to as short a time as possible, potentially multiple times per day. By having automated tools in place to run a build whenever this happens, issues can be identified as early as possible.

Continuous Delivery relates more to the release aspect of the automated setup and aims to be constantly delivering / deploying updates to a solution as fast as possible. By doing this, deployments become trivial and smaller updates are delivered at a much faster pace rather than being queued for one big release. Automation helps make this a reality by removing many of the repetitive, time-consuming manual tasks.

Do I need a Headless CMS?

What is a Headless CMS?

Headless CMSs have become the hot topic in recent years, along with preferences for microservices over monolithic architectures and for decoupling parts of applications where possible. So, you may be wondering: do I need one?

Let's start with what a Headless CMS actually is. A typical CMS does two jobs:

Firstly, it provides a place for marketers to log in and create pages of content that appear on the website. Some allow very basic content editing; others provide the capability to construct entire pages from pre-built components, preview what they look like, apply personalisation rules, run split tests and a whole host of other features.

Secondly, it allows developers to create templates or skins to render the pages to visitors. It also provides authentication, form data collection, tracking of visits and browsing habits etc.

Essentially the functionality splits into what the marketer sees and what the visitor sees.

A Headless CMS, on the other hand, chops off the second part and in its place leaves an API so that another application can pull content out of the CMS to display. The CMS then becomes a place solely responsible for managing content rather than managing a website.

Decoupling in this way offers a few advantages:

  1. The technology/platform is no longer determined by the CMS. You could have a .net based CMS and write the website in JavaScript, PHP, or any other language.
  2. The content can be provided to more than just a website. In the same way that digital asset management platforms specialise in serving digital assets to other systems (e.g. mobile app, web, print), a headless CMS can provide content to multiple other systems too.
  3. It's much easier for a headless CMS to become a SaaS offering. Most custom development when you install a CMS is there to tailor the visitor website experience rather than the admin experience. Once the visitor website is removed there is little reason left for it to be a product you install and maintain, making it much easier for vendors to supply as SaaS.

So, should I have one?

With all these advantages the answer should be yes, right? Well, it's not quite that simple; there are also some disadvantages, so let's answer some more questions to see if it's really suited to your situation:

Do you want to use a different website language?

One of the advantages of going headless is that you can pick the language you write the head in, but which language do you want to write it in? Most articles talk about writing the head in a JavaScript framework such as React, Vue.js or Angular. The reason for this is simple: if you want to write your application in one of these then you're a bit stuck for CMS choices, as no viable CMSs exist written in React, Vue.js or Angular, so using a headless CMS is the only option other than writing an entirely bespoke admin.

But what if you want to write the head in .Net or PHP? A headless CMS won't restrict you doing this in any way; in fact Umbraco Heartcore even provides a .Net client library to help. The only thing you must question is whether it's going to give you more advantages than disadvantages. By cutting off the head you now have to create it, so are you creating something different to what Umbraco's head originally did, or are you just recreating the same thing?

Is there a specific pre-built head you want to use?

An alternative to writing your own head is to buy one that is premade. The biggest downside I see to headless is that you have to build and support the head, so using one that is pre-built and compatible with your CMS is a big advantage.

However, this does also mean that you may end up paying a higher price in license fees. Unless your CMS costs decrease enough to cover the additional cost of the head, you're ultimately now licensing two products, which could add up to more than one.

How many heads are you going to have?

If the answer is one and it's a website, then it's highly likely that you don't need a headless CMS, as you'll effectively end up writing an inferior head to the one you're replacing.

CMSs like WordPress have almost 2 decades of work gone into advancing them at this point, getting them to a place where they are feature rich, reliable and, most importantly, secure. Creating your own head will require you to do the work they have already done.

Are you building a website?

If the answer to this is no, then a headless CMS makes a great option for providing an admin experience to make your content editable. They are pre-built, integrate with the popular enterprise authentication systems and offer great features around workflow and user permissions.

However, if the answer is yes, then you need to think carefully. Going truly headless has some quite major disadvantages for a website:

  1. You need to write and maintain the head, and if that head is doing the same thing the CMS one did then you're just reinventing the wheel.
  2. As the CMS is now only providing content to another application, less will be possible through the admin interface. E.g. no ability to preview pages, no drag and drop page creation.
  3. Creating things like campaign landing pages or any new page layout may no longer be possible without involving the IT team. Because the CMS is now only providing the content rather than the layout, unless the head provides an admin to manage this you could end up with very fixed page templates.

A CMS that can work both with a head and headless may be a better option. For instance, Sitecore can be set up like a traditional CMS where the custom rendering logic is added to the main application. However, it can also be set up as a decoupled CMS where the page rendering is achieved using Angular, Vue or React; note this is decoupled rather than headless. The visitor site can be hosted separately from the CMS, as with headless, but a lot of the logic is still provided by Sitecore, so things like personalisation, admin previews, page construction and Sitecore forms are all still possible. It can also work in a truly headless way by using the same API that powers the decoupled head to power a bespoke non-Sitecore head, like a mobile app.

Is your content reusable?

A news corporation writing articles that appear in newspapers, websites or mobile apps is obviously creating a lot of reusable content that can stay the same in each destination.

Alternatively, a typical brochure site (done well) is created with a journey in mind, where the whole page has been designed to achieve an outcome. Some content may just be three words combined with an image to evoke an emotional response. Other content may be specifically written to fit a certain gap so that it appears the same size as two other pieces of content next to it. None of which may actually translate onto mobile.

If this is the case, then rather than creating a content repository that can be reused, what you may actually end up creating is a repository where 90% of the content can't be reused and the other 10% is lost amongst it.

Top reasons .Net is amazing for prototyping

The other day someone told me .net was slow to get something built in, and to be fair to the person I can see why he would have thought that. Most of his interaction with .net projects has been with large, complex enterprise applications that often integrate with multiple other applications.

However, I would maintain that .net is a framework that is actually really fast to develop on and that what he had perceived as being slow was the complexities of a project rather than the actual coding time.

In fact, I would say it's so fast to get something built in that it's actually the fastest thing to develop a prototype in. Here are my top reasons why.

ASP.NET Core

ASP.NET Core inherits all the best bits from the ASP.NET Framework that came before it, giving the framework almost 20 years of refinement since its original release in 2002. The days of WebForms in the original ASP.NET are now long behind us and we now have the choice of building web applications with either MVC or Razor Pages.

Razor provides the perfect combination of a view language and helpers to render your HTML without limiting what can be done on the front end. How you code your HTML is still completely up to you; the helpers just provide features like binding that make it even faster to do.

Another great thing about .Net Core over what came before it is that it's platform independent. Rather than being confined to just Windows, you can run it on Mac or Linux too.

Great starter templates

What kicks off a great prototype project is starting with great templates, and ASP.NET Core has a bunch.

As already mentioned, you can build a Web Application with either Razor Pages or MVC, but the templates also provide you with the base for building an API, an Angular app, React.js, or React.js and Redux, or you can simply create an Empty application.

My preference is to go for MVC as it's what I'm most familiar with, and a key thing for rapidly building a prototype is that you develop rapidly. The idea is to focus on creating something new and unique, not learning how to develop in a new framework.

The MVC Web Application template gives you a base site to work with: a few pages already set up, and bootstrap and jQuery already included, so you start right at the point of working on your logic rather than spending time doing setup.

SQL Server and EF Core

I've always been a bit of a database guy. I'm not sure why, but it's a topic that has always just made sense to me, and despite being a topic that can get quite complex, the reasons behind the complexity always feel logical.

When it comes to building a prototype though there are two aspects which make storage with .net core super simple.

Firstly, Entity Framework Core (EF Core) means you don't really need to know any SQL or spend any time writing it. It helps if you do, but at a minimum all you need to do is create a model in your code and add a few lines for a DB context that tell EF Core which models are tables and how they relate. Then turn on migrations and you're done. When you run the application the DB gets created for you, and each time you change your model you just add another migration; the next time the application runs, the DB schema gets updated.
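As a sketch of how little code that is (the BlogPost model and AppDbContext names here are made up):

using System;
using Microsoft.EntityFrameworkCore;

// The model - EF Core turns this into a table, with Id as the key by convention
public class BlogPost
{
    public int Id { get; set; }
    public string Title { get; set; }
    public DateTime Published { get; set; }
}

// The DB context - each DbSet property maps a model to a table
public class AppDbContext : DbContext
{
    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }

    public DbSet<BlogPost> Posts { get; set; }
}

From there, dotnet ef migrations add Initial generates the first migration for you.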

Querying your DB is done by writing LINQ queries against your Entity Framework model, allowing you to get by with next to no understanding of SQL and how the DB works. Everything is just done by magic for you.
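For example, with the context sketched above, a query like this gets translated into SQL for you at runtime:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public static class PostQueries
{
    // EF Core translates this LINQ into a parameterised SQL query at runtime
    public static Task<List<BlogPost>> PublishedThisWeek(AppDbContext db) =>
        db.Posts
          .Where(p => p.Published > DateTime.UtcNow.AddDays(-7))
          .OrderByDescending(p => p.Published)
          .ToListAsync();
}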

The second part is SQL Server and its different versions. Often when you think of SQL Server you think of the big DB engine, with its many, many components, that you install on a server or your local machine, but there are two others which are even more important: LocalDB and Azure SQL.

LocalDB is an option that can be installed as part of Visual Studio. No separate application is needed, nor any services running in the background. It is essentially the minimum required to start the DB engine for development purposes. In practical terms this means when you start your application EF Core can run off a LocalDB which didn't require any setup, but as far as your application is concerned it is no different than working with any other version of SQL Server.

Azure SQL, as the name implies, is SQL Server on Azure. The only thing I really need to say about this is that you can swap between LocalDB and Azure SQL with ease. They may be different, but as far as your prototype is concerned, they are the same.
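In practice the swap is just the connection string. A sketch, reusing the hypothetical AppDbContext from earlier:

using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public static class DbSetup
{
    // LocalDB while prototyping...
    public static void UseLocalDb(IServiceCollection services) =>
        services.AddDbContext<AppDbContext>(options =>
            options.UseSqlServer(
                @"Server=(localdb)\mssqllocaldb;Database=Prototype;Trusted_Connection=True;"));

    // ...and Azure SQL later - only the connection string changes
    public static void UseAzureSql(IServiceCollection services, IConfiguration config) =>
        services.AddDbContext<AppDbContext>(options =>
            options.UseSqlServer(config.GetConnectionString("AzureSql")));
}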

Scaffolding

The only thing quicker than writing code is having someone else do it for you. So, we've created our application from a template and added a model which generated our database; now it's time to create some pages. Well, the good news is it's still not time to write much code, because Visual Studio can scaffold out pages based on our model for us!

Adding a controller to our project in Visual Studio gives us some options on what should be generated for us, one of which is MVC Controller with views, using Entity Framework. What that means is, given a model, it will create a controller and views for listing items, creating them, editing them and deleting them. No coding by us required!
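For a model like the BlogPost sketched earlier, the generated controller looks roughly like this (trimmed to two of the actions):

using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;

public class BlogPostsController : Controller
{
    private readonly AppDbContext _context;

    public BlogPostsController(AppDbContext context)
    {
        _context = context;
    }

    // GET: BlogPosts - the scaffolded listing page
    public async Task<IActionResult> Index()
    {
        return View(await _context.Posts.ToListAsync());
    }

    // GET: BlogPosts/Details/5 - Create, Edit and Delete come in the same style
    public async Task<IActionResult> Details(int? id)
    {
        if (id == null) return NotFound();

        var post = await _context.Posts.FirstOrDefaultAsync(m => m.Id == id);
        if (post == null) return NotFound();

        return View(post);
    }
}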

Now, it's unlikely that this is exactly what you're after, but it's generally a good starting place, and deleting code you don't need is far quicker than writing it.

Azure

Lastly there is Azure. You may have spotted a theme to all these points: they all remove the effort required to do setup and instead let you focus on building your own logic, and this point is no different.

I remember a time when, if I wanted a server to put an application on, I had to request it and then wait a while. What I would get back would either be a server that already had resources running on it, or a blank server that would need applications installed on it, e.g. SQL Server or the .Net Framework. IIS wouldn't have been configured, and it would be a number of hours before my application would be running.

With Azure you don’t even really need to leave Visual Studio. From the publish dialog box you can create a new App Service and DB, and then publish. All the connection strings are sorted out for you. There are service plans which cost next to nothing, a domain is configured for you and at the end of the publish the website opens and is working. The whole process has taken less than 10 minutes.

Config file transforms with Azure Devops

For a long time now our primary CI setup has been based around Team City and Octopus Deploy, but as reliable as it is, there are things I don't like about it:

  1. It's not a SaaS setup, meaning there's a VM to occasionally think about and updates to install. While Octopus is now available as a SaaS option, Team City is not, and moving Octopus would only solve half the problem.
  2. The VM they both sit on every so often gets an issue with its hard disk being full.
  3. It's complicated to recommend the same setup to clients. You end up having to go through multiple things they need to buy, which then require some installation and ongoing maintenance. Ideally we would have a setup that's easy for them to replicate and own themselves with minimal maintenance.

So when we recently took over a site that, as is typical, came with no existing CI setup in place, I decided to take a look at using Azure DevOps instead. You can use Azure DevOps with Octopus Deploy, but as it claims to be able to manage releases as well as builds, we went for doing the whole thing in Azure DevOps.

Getting a build set up was relatively straightforward, so I'm going to skip past that bit; in short, we ended up with a build that creates a web deploy package and publishes it as an artifact. Typical msbuild type stuff.

File transforms and variable substitution

The first real tricky point came with replacing variables in config files during a release to each environment. We were using the IIS Web App Deploy task to deploy the application to IIS on a VM (no new Azure Web App Services in this setup 😦 as I said, we took over the site and this was just to get automated deploys of what they already have). A simple starting point is the built-in functionality for XML variable substitution in the IIS Web App Deploy task.

Quite simply, you add all your variables to the variable list, set the scope for which environment you want each to apply to, and then during the deploy they are replaced in your config. Unlike some tag replacement tools I've used in the past, this one actually uses the name of the connection string or app setting you need to set, so if you need to set a connection string named web, the variable name will be web.

This is also where my problem started. The description of what XML variable substitution does is:

Variables defined in the build or release pipeline will be matched against the ‘key’ or ‘name’ entries in the appSettings, applicationSettings, and connectionStrings sections of any config file and parameters.xml

This was a Sitecore solution, and for Sitecore most of your config settings live in Sitecore's own section of the config file. In other words, the connection string will get updated but the rest won't.

Parameter and SetParameter XML files

My next issue was that finding a solution is actually quite hard. Searching for this problem either gave me a lot of results for setups using ARM templates (as I said, this was a solution we took over, and that kind of change is not on the agenda), or just the easy bit above. Searching for Sitecore and Azure DevOps also leads to a lot of results on a cloud infrastructure setup (again, not what we're doing here, at least in the short term). Everything that was coming up felt far more complicated than the solution should be.

However, the documentation on the XML variable substitution did have one interesting sentence.

If you are looking to substitute values outside of these elements you can use a (parameters.xml) file, however you will need to use a 3rd party pipeline task to handle the variable substitution.

A parameters.xml file isn't something I've used before, which makes this sentence a bit cryptic. The first half says I can do what I want with an XML file, but the second half says I'll need something else to actually do it.

After a bit of research this all comes back to web deploy. When you do a build that outputs a web deploy package, you get 5 files.

A zip file containing the actual site, a command file which has the script to do the deploy, and a set parameters file which is used to set config variables during the deploy. The others aren't so important.

To have different config on different environments you just need to edit the set parameters file. But first you need to have the parameter in the set parameters file so that you can actually change it, and this is where the parameters.xml file comes in.

Creating the parameter files

Add a file called parameters.xml to the root of your project and then add parameters as follows.

<?xml version="1.0" encoding="utf-8" ?>
<parameters>
  <parameter name="DataFolderLocation" defaultValue="#{dataFolder}">
    <parameterEntry kind="XmlFile" scope="App_Config\\Include\\Z.Project\\DataFolder\.config$" match="/configuration/sitecore/sc.variable[@name='dataFolder']/patch:attribute/text()" />
  </parameter>
</parameters>

Some important parts:

defaultValue - The value that the config setting will get set to
scope - The path to the file containing the setting
match - An XPath expression for finding the part of the config file to update

Once you have this the build will start producing a SetParameters.xml file containing the extra parameters.

<?xml version="1.0" encoding="utf-8"?>
<parameters>
  <setParameter name="IIS Web Application Name" value="Default Web Site/SiteCore.Website_deploy" />
  <setParameter name="DataFolderLocation" value="#{dataFolder}" />
</parameters>

Note: I’ve set the value to be something I intend to replace in the release process.

Replacing the tokens

With our SetParameters.xml file now containing all the config we need to update, we need a step in the release process that will replace all the tokens with the correct values.

To do this I used a replace tokens task: https://marketplace.visualstudio.com/items?itemName=qetza.replacetokens

Config options need to be set for:

Root Directory - Path to the folder containing the SetParameters.xml file
Target files - A list of files to have replacements done in. In our case this was SiteCore.Website.SetParameters.xml
Token prefix - The prefix on tokens to be searched for. Ours was #{
Token suffix - The suffix to denote the end of a token. Ours was }

Lastly, in the IIS Web App Deploy step the SetParameters file needs to be selected, and the new variables added to the variable list in Azure DevOps. The variable names need to be the bit between your prefix and suffix, i.e. #{dataFolder} would be called dataFolder.

If you don't set the variables, the logs will show a warning for each one it couldn't find.

2019-09-24T17:17:21.6950466Z ##[section]Starting: Replace tokens in SiteCore.Website.SetParameters.xml
2019-09-24T17:17:23.9831695Z ==============================================================================
2019-09-24T17:17:23.9831783Z Task         : Replace Tokens
2019-09-24T17:17:23.9831816Z Description  : Replace tokens in files
2019-09-24T17:17:23.9831861Z Version      : 3.2.1
2019-09-24T17:17:23.9831891Z Author       : Guillaume Rouchon
2019-09-24T17:17:23.9831921Z Help         : v3.2.1 - [More Information](https://github.com/qetza/vsts-replacetokens-task#readme)
2019-09-24T17:17:23.9831952Z ==============================================================================
2019-09-24T17:17:27.2703037Z replacing tokens in: C:\azagent\A1\_work\r1\a\PublishBuildArtifacts\SiteCore.Website.SetParameters.xml
2019-09-24T17:17:27.3133832Z ##[warning]variable not found: dataFolder
2019-09-24T17:17:27.3179775Z ##[section]Finishing: Replace tokens in SiteCore.Website.SetParameters.xml

With all this set, our config has its variables configured within Azure DevOps for each environment.

Setting up local https with IIS in 10 minutes

For very good reasons, websites now nearly always run under https rather than http. As devs though, this gives us a complication: either remove any local redirect-to-https rules and "hope" things work ok when we get to a server, or set local IIS up to have an https binding.

Having https set up locally is obviously a lot more favourable, and what has traditionally been done is to create a self-signed certificate. However, while this works as far as IIS is concerned, it still leaves an annoying browser warning, as the browser will recognise it as insecure. This can then create additional problems in client side code when certain things hit that error while calling an API.

mkcert

The solution is to have a certificate added to your trusted root certificates rather than a self-signed one. Fortunately there is a tool called mkcert that makes the process a lot simpler.

https://github.com/FiloSottile/mkcert#windows

Create a local cert step by step

1. If you haven't already, install chocolatey (https://chocolatey.org/install). Chocolatey is a package manager for Windows which makes it super simple to install applications. The name is inspired by NuGet, i.e. Chocolatey NuGet.

2. Install mkcert. To do this, from an admin command window run

choco install mkcert

3. Create a local certificate authority (CA)

mkcert -install

4. Create a certificate

mkcert -pkcs12 example.com

Remember to change example.com to the domain you would like to create a certificate for.

5. The certificate will be created in the folder your command window is open at. Rename the .p12 file that was created to .pfx (this is what IIS requires).

You can now import the certificate into IIS as normal. When asked for a password, this will have been set to changeit.

ASP.NET Core Platforms for a Blog

Like a lot of Sitecore developers, my blog (at the time of writing) is hosted on WordPress. The reason for it not being in Sitecore is simple: Sitecore is an enterprise level platform, which isn't really needed for a personal blog.

For a .net dev to have their blog on a php platform however just seems plain wrong, but again there's a logical reason. WordPress is actually really good as a blogging platform, and it doesn't cost me anything.

Despite this I would much rather take control of my site and use it to play with all the cool features in Azure. It would also be nice to have the ability to do something about the Google PageSpeed result which is currently sitting at 24%. So in aid of this I’ve started looking into .net core based platforms and thought I’d share what I’ve found.

Miniblog.core

https://github.com/madskristensen/Miniblog.Core

As the name suggests, Miniblog.core is both very small and based on .net core. Developed by Mads Kristensen, it's an extremely lightweight, bare bones implementation, which is ideal if you're after something you can build upon. The code is straightforward to understand and very simple to adapt. Additionally, if you're after a 100% page speed score, this achieves just that.

If, on the other hand, you're after a deluxe admin experience full of functionality, then this probably isn't for you.

Piranha CMS

http://piranhacms.org/

Piranha CMS is built as a lightweight CMS platform rather than specifically as a blog; however, it also contains a blog module, which for me puts it at a big advantage over the other CMS platforms I've listed below.

On the back end you get a choice of SQL Server, SQLite or MySQL. The documentation isn’t exactly complete, but on the day I tried it out, I found the team building it very responsive on GitHub. They even updated the documentation with one of my suggestions the very next day.

Another aspect I particularly liked about Piranha CMS was its block editor, which from the brief look I've had so far reminds me of the block editor Umbraco has, whereas the other platforms in this list are restricted to a large rich text field.

Orchard Core

https://github.com/OrchardCMS/OrchardCore

Orchard Core is the .net core version of the Orchard CMS. It's currently in beta, but I'm not sure that puts it at much of a disadvantage over the others on this list.

My initial impressions of Orchard Core, however, weren't as high as of Piranha CMS. The admin interface wasn't quite as nice and, as far as I could tell, it didn't have anything like Piranha's block editor. The solution itself also seemed far more complex, and I wasn't certain what I got for this. I expect Orchard Core is better in some ways that I have yet to discover, but for my needs as a blog this is probably not the case. It also didn't have a blog module out of the box.

Squidex

https://squidex.io/

I haven't had much of a chance to play with Squidex yet, but it does offer an interesting difference to the others mentioned so far.

For a start, Squidex is an entirely headless CMS, and is built around the concepts of CQRS and Event Sourcing. Unlike the others, it also uses MongoDB rather than a SQL based database.

Where MongoDB is concerned, I often get the impression people use it because as developers we tend to prefer using something new over something adequate. However, when it comes to Azure pricing, there is potentially a saving to be made by using Mongo rather than Azure SQL.

Redirecting to login page with AngularJs and .net WebAPI

So here's the scenario: you have a web application which people log into, and some of the pages (like a dashboard) contain ajax functionality. Inevitably the user's session expires, they return to the window, change a filter and nothing happens. In the background, your JavaScript is making http calls to the server which trigger an unauthorised response. The front end has no way to handle this, and some errors appear in the JS console.

A few things are actually combining to make life hard for you here. Let's take a look at each in more detail.

WebAPI and the 302 Response

To protect your APIs from public access, a good solution is to use the Authorize attribute. i.e.

[Authorize]
public ActionResult GetDashboardData(int foo)
{
    // Your api logic here
}

However, chances are your solution also has a login page configured in your web.config, so that your regular page controllers automatically trigger a 302 response to the login page.

<authentication mode="Forms">
  <forms timeout="30" loginUrl="/account/sign-in/" />
</authentication>

So now what happens is that instead of responding with a 401 Unauthorised, what's actually returned is a 302 to the login page.

With an AJAX request from a browser you now hit a second issue. The browser is making an XMLHttpRequest; however, if that request returns a redirect, rather than handing it to your JavaScript code to handle, the browser "helpfully" follows the redirect and returns the result to your JavaScript. Which means rather than receiving a 302 redirect status back, your code is getting a 200 Ok.

So to summarise: your API was set up to return a 401 Unauthorised, that got turned into a 302 Redirect, which was then followed and turned into a 200 Ok before it got back to where it was requested from.

The easiest way to fix this is to create our own version of the AuthorizeAttribute which returns a 403 Forbidden for Ajax requests and the regular logic for anything else.

using System;
using System.Web.Mvc;

namespace FooApp
{
    [AttributeUsage(AttributeTargets.Method)]
    public class CustomAuthorizeAttribute : AuthorizeAttribute
    {
        protected override void HandleUnauthorizedRequest(AuthorizationContext filterContext)
        {
            if (filterContext.HttpContext.Request.IsAjaxRequest())
            {
                // Ajax callers get an explicit 403 the JavaScript can act on,
                // rather than a login redirect the browser would silently follow
                filterContext.Result = new HttpStatusCodeResult(403, "Forbidden");
            }
            else
            {
                // Regular page requests keep the default redirect behaviour
                base.HandleUnauthorizedRequest(filterContext);
            }
        }
    }
}

Now for any Ajax request a 403 is returned; for everything else the 302 to the login page is returned.

Redirect 403 Responses in AngularJs to the login page

As our Ajax request is now being informed about the unauthorised response, it's up to our JavaScript code to trigger the redirect in the browser to the login page. What would be really helpful is to define the redirect logic in one place, rather than adding it to every api call in our code.

To do this we can add an interceptor onto the http provider in AngularJS. The interceptor will inspect the response error coming back from the XmlHttpRequest and, if it has a status of 403, use window.location to redirect the user to the login page.

app.factory('httpForbiddenInterceptor', ['$q', 'loginUrl', function ($q, loginUrl) {
    return {
        'responseError': function (rejection) {
            if (rejection.status == 403) {
                window.location = loginUrl;
            }
            return $q.reject(rejection);
        }
    };
}]);

app.config(['$httpProvider', function ($httpProvider) {
    $httpProvider.defaults.headers.common['X-Requested-With'] = 'XMLHttpRequest';
    $httpProvider.interceptors.push('httpForbiddenInterceptor');
}]);

You’ll notice a line updating the headers. This is to make the IsAjaxRequest() method on the api recognise the request as being Ajax.

Finally, you'll also notice the loginUrl being passed into the interceptor. As it's not a great idea to have strings like urls littered around your code, this uses a value recipe to store the url. The code to do this is as follows:

app.value('loginUrl', '/account/sign-in?returnurl=/dashboard/');

Running Gulp tasks in Visual Studio

Over the last decade, front end development has matured from a point where CSS and JavaScript were written in badly organised files containing the exact code a browser would interpret, to something that now more resembles back end development.

Pure CSS has become the byte code of the front end world, with LESS and Sass becoming the languages of choice. Instead of MSBuild for running a compilation, Gulp can be used to automate workflows to compile Sass files, minify them and do whatever else is needed to produce the code for deployment (not really an exact comparison, but you get the point).

To truly achieve a mature development setup though, the final step is to remove those "compiled" css and js files from source control. After all, you wouldn't check in a dll with the source code for each build. If you're using a build server then this is a relatively easy step to add to your workflow. However, the bigger issue is not making life hard for your back end developers.

Without the final CSS and JS files in source control, your back end devs may now be faced with a site lacking all style and front end functionality when they hit F5. With LESS and Sass also not really being their thing, they're left not really knowing what to do, and also not overly happy about having to learn something about the front end to continue with their back end work. To make matters worse, Visual Studio often isn't the front end dev's choice of tool, so copying the front ender's setup isn't an ideal solution either.

The Aim

The ideal solution would be for a back end dev to be able to check out a solution from source control (containing no compiled CSS or JS files), hit F5 in Visual Studio, have the final CSS and JS created as part of the build, and the site run. No opening a command prompt to run a gulp task, or any other process the front end devs may be using. It should be completely invisible to them.

Visual Studio Task Runner

With Visual Studio's Task Runner, this is entirely possible.

The Task Runner is built into Visual Studio, so there’s no need for the back end dev to install any extra tools, and more importantly, tasks can be linked into a before build event so that the back end dev doesn’t need to do anything.

A bit of background on gulp and the task runner

When front end developers set up gulp, they will configure a set of gulp tasks within a file named gulpfile.js (in reality they may actually separate the tasks into multiple files referenced by gulpfile.js, but this file is the important bit). These tasks may look a bit like this:

gulp.task("min:js", () => {
  return gulp.src([paths.js, "!" + paths.minJs], { base: "." })
    .pipe(concat(paths.concatJsDest))
    .pipe(uglify())
    .pipe(gulp.dest("."));
});

gulp.task("min:css", () => {
  return gulp.src([paths.css, "!" + paths.minCss])
    .pipe(concat(paths.concatCssDest))
    .pipe(cssmin())
    .pipe(gulp.dest("."));
});

gulp.task("min", gulp.series(["min:js", "min:css"]));

In this example there are 3 tasks. The first minifies some JS, while the second minifies some CSS. The third defines a series which runs the first 2.

Visual Studio's task runner will look for the file called gulpfile.js and pull out all the tasks within it. The task runner window may not be open by default, so to open it, type task runner in the search box and select it from the results. Alternatively, you can right click the gulpfile in the solution and select Task Runner from the context menu.

The task runner window will open at the bottom of the screen, and will list out all the gulp tasks found within the gulpfile.js file. If the front end devs have organised the tasks into separate files, as long as the gulpfile.js file in the root of the project has some way of referencing them, they will still show up.

If you don't see any of the tasks and have only just added the file (or a task to it), you may need to click the refresh button.

To run a task, simply right click it and then choose Run.

Automating on build

Being able to run a gulp task is handy, but what we’re really after is for it to be automated as part of the build.

For this we just need to right click the task to be run, and then from the bindings option, select before build.

This will add a comment to the top of the gulpfile.js which Visual Studio will look for to know what task should be run before a build.

/// <binding BeforeBuild='min' />

With this set, the back end devs no longer need to be concerned with not having any compiled css and js, and with a small amount of knowledge they also have the ability to make minor changes to the css and js where needed.

Some other tips

Depending on your front end devs setup there could be some additional challenges to overcome.

Front end devs not using Visual Studio

Quite often, Visual Studio isn't the choice of dev environment for a front end dev. One issue this can lead to is files missing from the Visual Studio solution. However, an easy fix for this is to use wild cards in the csproj file.

If the front end devs' code can be grouped into specific folders, then use a wild card to include all the files from that folder in the project.

<None Include="build\js\**\*.js" />

CSS/JS not in the web project

If the front end devs have a separate folder for their work (e.g. they might work with static html files), then the code may not be in the project that will be run, and therefore nothing will trigger the gulp file to have its tasks run.

A simple solution for this is to create a Visual Studio project for the folder with their work in, so that it has a build event to attach to. Make sure your web project also references this project so that it is built when the web project is.

Links

For more info on using Gulp with Visual Studio, check out Microsoft's guide on using Gulp with ASP.NET Core.

Force clients to refresh JS/CSS files

It's a common problem with an easy solution. You make some changes to a JavaScript or CSS file, but your users still report an issue due to the old version being cached.

You could wait for the browser's cache to expire, but that isn't a great solution. Worse, if they have the old version of one file and the new version of another, there could be compatibility issues.

The solution is simple: just add a querystring value so that it looks like a different path, and the browser downloads the new version.

Manually updating that path is a bit annoying though, so we can use the modified time of the actual file to add its number of ticks to the querystring.


DefaultLayout.cshtml
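In the layout, asset references go through the helper so the version querystring gets appended automatically; something like this (the paths here are illustrative):

@* Each reference picks up a ?v= value based on the file's last write time *@
<link rel="stylesheet" href="@Url.FingerprintedContent("~/css/site.css")" />
<script src="@Url.FingerprintedContent("~/js/site.js")"></script>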

using Utilities;
using UrlHelper = System.Web.Mvc.UrlHelper;

namespace Web.Mvc.Utils
{
    public static class UrlHelperExtensions
    {
        public static string FingerprintedContent(this UrlHelper helper, string contentPath)
        {
            return FileUtils.Fingerprint(helper.Content(contentPath));
        }
    }
}

UrlHelperExtensions.cs

using System;
using System.IO;
using System.Web;
using System.Web.Caching;
using System.Web.Hosting;

namespace Utilities
{
    public class FileUtils
    {
        public static string Fingerprint(string contentPath)
        {
            if (HttpRuntime.Cache[contentPath] == null)
            {
                string filePath = HostingEnvironment.MapPath(contentPath);

                DateTime date = File.GetLastWriteTime(filePath);

                // Append the file's last modified ticks as a version querystring
                string result = (contentPath + "?v=" + date.Ticks).TrimEnd('0');

                // Cache against the original path; the CacheDependency evicts
                // the entry when the file changes, producing a new fingerprint
                HttpRuntime.Cache.Insert(contentPath, result, new CacheDependency(filePath));
            }

            return HttpRuntime.Cache[contentPath] as string;
        }
    }
}

FileUtils.cs