Why is my Session ID changing on 3D Secure payments?

If you have a website where you are implementing 3D Secure payments, you may find that on receipt of the payment setup confirmation the user's Session ID has changed with no apparent cause.

Let's have a quick run through of the payment process in this scenario (this is roughly how SagePay and WorldPay both work); there's a rough sketch of the two server-side API calls after the list:

  1. User completes payment details on your site (some wizardry normally happens at this point with an iFrame for the card number to maintain PCI compliance) and the form is submitted to your server
  2. Your server calls an API from the payment gateway to set up a 3D Secure payment. It passes the user's Session ID along with all the payment details.
  3. Payment gateway responds with a URL for you to redirect the user to by posting a form to it. You do this, likely in an iFrame so that the page still looks like your website (otherwise it's a very ugly page).
  4. User may or may not get prompted by the bank for some sort of authentication. This could be something like receiving a text message with a code to enter.
  5. Once authenticated, the user is sent back to your website (in the iFrame) with a POST request.
  6. Your server picks out the form details from the request and then calls an API to complete the transaction. In this API call you send the Session ID so that the payment gateway can validate it is the same as the one at the start of the process.
  7. Confirmation shown to the user.
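
To make the flow a little more concrete, here is a purely hypothetical sketch of steps 2 and 6 in C#. None of these types or method names come from a real gateway SDK (SagePay and WorldPay each have their own APIs); the only point it illustrates is that the user's Session ID is sent at setup and again at completion, which is why losing it mid-flow breaks the payment.

using System.Collections.Specialized;
using System.Web.Mvc;

// Hypothetical gateway wrapper - replace with your gateway's actual SDK/API calls.
public interface IPaymentGatewayClient
{
    ThreeDSecureSetup SetupThreeDSecure(decimal amount, string cardToken, string sessionId);  // step 2
    PaymentResult CompleteThreeDSecure(NameValueCollection acsFormFields, string sessionId);  // step 6
}

public class ThreeDSecureSetup
{
    public string AcsUrl { get; set; }     // URL the user is posted to for authentication (step 3)
    public string PaRequest { get; set; }  // payload the gateway asks you to post along with it
}

public class PaymentResult
{
    public bool Success { get; set; }
}

public class PaymentController : Controller
{
    private readonly IPaymentGatewayClient _gateway;

    public PaymentController(IPaymentGatewayClient gateway)
    {
        _gateway = gateway;
    }

    [HttpPost]
    public ActionResult Pay(decimal amount, string cardToken)
    {
        // Step 2: the user's Session ID goes to the gateway along with the payment details.
        var setup = _gateway.SetupThreeDSecure(amount, cardToken, Session.SessionID);

        // Step 3: render a view that posts a form to setup.AcsUrl (usually inside an iFrame).
        return View("RedirectToAcs", setup);
    }

    [HttpPost]
    public ActionResult Complete()
    {
        // Step 6: the gateway validates this Session ID matches the one from step 2.
        // If the session cookie wasn't sent on the way back, this is a brand new session
        // and the completion call fails.
        var result = _gateway.CompleteThreeDSecure(Request.Form, Session.SessionID);
        return View("Confirmation", result);
    }
}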

Introducing the Same Site Cookie Policy

In 2020 a change was made to how cookies function in browsers to defend against cross-site request forgery. Troy Hunt has a brilliant explanation of the issue with how cookies used to work and how this has changed here. I'm going to try a much shorter explanation:

When a request is made from a browser, all the cookie values for that domain are sent with the request. This will include one for the Session ID. The theory here is that because the cookies are only being sent to the domain which set them in the first place, information is only being shared back with the place that set it to begin with, which is therefore safe.

However, the workings of the internet and what domain a button click might call isn't overly obvious to most people. So what if clicking a link on one site causes the user to be redirected to another site? Answer: all the cookies are still sent. The same thing happens if a form is posted from one site to another site. The problem here is that if you are authenticated on the other website, then it's possible for a cross-site request forgery attack to use your session via your browser without you even realising you were on the site again. This is why it's always a good idea to log out of websites!

The introduction of the same site policy changes how cookies work with three options:

  1. None: which is what browsers used to do, i.e. send all the cookies with cross-origin requests
  2. Lax: some limits on sending cookies with cross-origin requests
  3. Strict: tight limits on sending cookies with cross-origin requests

None sends everything, while a Strict policy will basically stop cookies from being sent on any cross-origin request.

A Lax policy is slightly more interesting though, because it depends on whether the request is a GET or a POST. If it is a GET then the cookies will still be sent, which means that if you follow a link from another site or a search engine your cookies will still be sent to the site. However, a POST (like the one the 3D Secure page makes) will no longer send the cookies.

If the policy isn’t set, then Lax is used as the default.
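
If you are setting your own cookies from code, the same attribute can be controlled there too. A minimal sketch, assuming .NET Framework 4.7.2 or later (where HttpCookie gained a SameSite property):

using System.Web;

public static class SameSiteExample
{
    // Minimal illustration of the three SameSite options on a cookie you create yourself.
    public static void AppendCookie(HttpResponse response)
    {
        var cookie = new HttpCookie("example", "value")
        {
            SameSite = SameSiteMode.Lax, // or SameSiteMode.Strict / SameSiteMode.None
            Secure = true                // browsers require Secure when SameSite is None
        };
        response.Cookies.Add(cookie);
    }
}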

So why is my Session ID changing?

The problem is the POST request back to your site: unless the Session ID cookie has a policy of None, it won't be sent. The server will then see that there is no Session ID, treat the user as new to the site and, as a result, start a new session. From this point on the user has lost the old session and you can't complete the payment. Worse still, they've probably just been logged out and anything else using session data has also been lost.

In .NET 4.7.2 and up, Microsoft has implemented the ability to set the SameSite policy of the session cookie. You can do this in your web.config file like this:

<configuration>
 <system.web>
  <anonymousIdentification cookieRequireSSL="false" /> <!-- No config attribute for SameSite -->
  <authentication>
   <forms cookieSameSite="None" requireSSL="false" />
  </authentication>
  <sessionState cookieSameSite="None" /> <!-- No config attribute for Secure -->
  <roleManager cookieRequireSSL="false" /> <!-- No config attribute for SameSite -->
 </system.web>
</configuration>

You can find more about this here. In anything older the policy won't be set and browsers will default to Lax.

However just because you can set the cookie policy to None, it doesn’t mean you should. After all, that just re-opens the vulnerability the browser was trying to protect against.

The solution I went with was to have a page that the user is redirected back to from the payment gateway that does nothing other than re-submit the values from the payment gateway to the site again. This way I can be sure of what form data is being posted to the site, and when I re-post it, it is a same-site post and not cross-origin, which means the Session ID cookie will be sent.
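
As a rough sketch of that re-post page (the controller name and completion URL here are made up for illustration), the idea is just to echo the gateway's form fields back into a self-submitting form pointed at your own site:

using System.Text;
using System.Web;
using System.Web.Mvc;

public class PaymentRelayController : Controller
{
    // The payment gateway POSTs here cross-site, so no session cookie arrives with the request.
    // We immediately re-POST the same fields to our own completion URL; that second request is
    // same-site, so the Session ID cookie is sent and the session survives.
    [HttpPost]
    public ContentResult Relay()
    {
        var html = new StringBuilder();
        html.Append("<html><body onload=\"document.forms[0].submit();\">");
        html.Append("<form method=\"post\" action=\"/payment/complete\">"); // hypothetical completion URL

        // Copy every field posted by the gateway into hidden inputs.
        foreach (var key in Request.Form.AllKeys)
        {
            html.AppendFormat("<input type=\"hidden\" name=\"{0}\" value=\"{1}\" />",
                HttpUtility.HtmlAttributeEncode(key),
                HttpUtility.HtmlAttributeEncode(Request.Form[key]));
        }

        html.Append("</form></body></html>");
        return Content(html.ToString(), "text/html");
    }
}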

To stop my initial page setting a new session cookie (which it will try to do because it won't receive one), I use the IIS URL Rewrite module to strip out the Set-Cookie response headers from the page.

You can do this with an outbound rule as follows:

<rewrite>
  <rules>
  </rules>
  <outboundRules>
    <rule name="Remove SetCookie Header" preCondition="Match Payment Page">
      <match serverVariable="RESPONSE_Set-Cookie" pattern=".*" />
      <action type="Rewrite" value="" />
    </rule>
    <preConditions>
      <preCondition name="Match Payment Page" logicalGrouping="MatchAny">
        <add input="{REQUEST_URI}" pattern="PAGE NAME GOES HERE" />
      </preCondition>
    </preConditions>
  </outboundRules>
</rewrite>

With this solution the cookies are still secure, as the policy is set to Lax, and I can take payments using 3D Secure, which will soon become a requirement.

What are automated deployments and why do I need them?

When you're working on a new website build, as either the marketer responsible for the site or the project manager overseeing its build, some of the most frustrating tasks to be using up your budget can originate from the IT team and come under the category of setup.

They can suck days, even weeks, from a project but offer no visible features or benefit to the end user. Worse still, they often need to come at the start of a project, which for people wanting to see a design turn into a webpage just feels like a delay and a lack of progress.

Manual Deployments

So, what are automated deployments and why are they worth it? Well first let’s examine what a deployment is to begin with.

When a change or new feature is ready to be built for a site, it will have been turned into a specification and handed over to a development team to build. The developers will then write some code and test it on their local machines; they may even demo it to someone else. At some point though, that work needs to get from their machine to the server(s) it runs on so that other people like QA can see it without using the dev's machine. Typically, there will be 3 server environments set up for a website:

  1. Dev / Test – An environment used by the developers and QA to build, test and refactor new features and fixes.
  2. UAT / Staging – An environment used by the website owners / stakeholders that the features have been deployed to for review and approval before they are deployed into production.
  3. Production – The live website that is accessible over the internet.

In a manual deployment setup, the developer is responsible for copying the website into each of the environments. This will generally involve:

  1. Getting the version of code to be deployed from source control.
  2. Publishing a build of the site locally (some languages need to be compiled into machine code, and most modern websites will run a task on the CSS and JavaScript to obfuscate and reduce the file size).
  3. Logging in to databases and running any scripts to update the database.
  4. Logging in to the web servers using something like a remote desktop client.
  5. Copying and pasting the new files onto the server.
  6. Making some sort of record that the deployment has been done and what it included.

As you can probably guess, each of these steps is prone to human error. What if the developer gets the wrong version from source control? What if they overwrite the wrong folder? What if they miss the database changes? What if they don't record the deployment; how can we be sure what version is on an environment?

Not only that, but it's also a lengthy task for someone to do. Some parts are just waiting for a code publish to complete or a file copy to finish. For the odd production deploy with a decent checklist this might not be so bad, but for iterating work into QA this can not only make QA failures for minor things really costly to fix, but also result in bundling more and more changes together to cut down the number of releases, which then reduces the flow of work feeding into QA and slows the pace at which features can be developed and released.

A Better Way

Now we understand the issues, we certainly don’t want that as a cost throughout the lifetime of our site, so how do we avoid it?

Well, automated deployments quite simply take all these manual steps and automate them, with a few bonuses on top. By doing this we remove the elements of human error and are left with consistent results each time.

There are two parts to an automated pipeline: the first is build and the second is release.

The build part relates to the first two steps of our manual process. We need to get the code from source control and then publish it. Once we've done this, we are left with a build that we can assign a release number to. By automating it we've also ensured that the build happens the same way each time, rather than varying between developers' machines. At this point we can also start detecting build failures and include some automated testing to pick up on errors before even involving QA.

The release part relates to the remaining steps of the manual process. First of all, the process of copying files and running scripts now becomes one large script that won't miss a step. In addition, now we have build numbers we can see which build is on each environment and only promote the one we want to the next environment. Because we're promoting builds rather than doing the whole build process again, we're also ensuring that the build going to production is the exact same thing that was tested on QA and then reviewed on UAT. With some clever integrations with things like GitHub and Jira we can also automate release notes to say what was included in each of the releases.

Degrees of Automation

It's worth noting at this point that there are different degrees of automation, and this will greatly affect the cost of setup.

As well as automating the deployment of code, we could also automate the entire server setup through the use of ARM templates in Azure. Different levels of automated testing can be implemented, from very low code coverage to very high; a test could even be what the level of code coverage is.

Automation could also extend to include load balancers for green / blue deployments where zero downtime of the site is achieved by using multiple production servers and switching between them while files are being copied.

Is this the CI and CD thing I’ve heard about?

Two terms I deliberately haven't used in this article are Continuous Integration (CI) and Continuous Delivery (CD), and that's because although automated deployments form part of achieving them, they are not the same thing and it's worth knowing how they differ.

Continuous Integration relates to the build part of the automated setup and aims to deal with issues arising from multiple developers working on a project. When more than one developer works on a copy of a website locally, both sets of code are changing in different ways to what is contained within source control. The longer this goes on for, the bigger the differences get and the harder it becomes to merge again. Continuous Integration advocates shortening the gaps between merges to as short a time as possible, potentially multiple times per day. By having automated tools in place to run a build whenever this happens, issues can be identified as early as possible.

Continuous Delivery relates more to the release aspect of the automated setup and is an aim to be constantly delivering / deploying updates to a solution as fast as possible. By doing this, deployments become trivial and smaller updates are delivered at a much faster pace rather than being queued for one big release. Automation helps make this a reality by removing many of the manual, time consuming tasks that are repetitive.

How to create a custom droplist in Sitecore

You could call this how to create a custom drop down, or select list, potentially even a picker, but if you've ended up here hopefully what you're after is a way to create a drop down list in the Sitecore admin populated with options that aren't from Sitecore.

Fortunately this is quite simple to do by creating a new field type. First we will create the code for our new custom field type, then add it to the list of field types in the core db and then use it in a template.

Custom Field Type Code

  1. Either create a new class library or decide on an existing one to put the code for your new field type in.
    I’ve imaginatively gone for one called HiMyNameIsTim.Sitecore.FieldTypes
  2. Add a reference to Sitecore.Kernel (you can get this from the Sitecore NuGet feed)
  3. Create a class for your custom field type
  4. Add the following code, replacing the options with your logic to get the values.
using Sitecore.Web.UI.HtmlControls;

namespace HiMyNameIsTim.Sitecore.FieldTypes
{
    public class SchemeDropList : Control
    {
        protected override void DoRender(System.Web.UI.HtmlTextWriter output)
        {
            // Render the select element with the standard Sitecore control attributes
            // and an empty option so the field can be left blank.
            output.Write("<select" + ControlAttributes + ">");
            output.Write("<option></option>");

            // For each option, mark it as selected if it matches the field's stored value.
            // Replace these hard-coded options with your own logic to get the values.
            string s;
            if (base.Value == "1")
                s = "selected";
            else
                s = "";
            output.Write("<option value=\"1\" " + s + ">First</option>");

            if (base.Value == "2")
                s = "selected";
            else
                s = "";
            output.Write("<option value=\"2\" " + s + ">Second</option>");

            output.Write("</select>");

            RenderChildren(output);
        }
    }
}

  5. Publish this into your site
  6. Login to the admin and switch to the core db
  7. Navigate to “/sitecore/system/Field types/List Types/” in the tree
  8. Create a new item of type “/sitecore/templates/System/Templates/Template field type”
  9. Populate the assembly and class options with the values for the class you just made (for me that's the HiMyNameIsTim.Sitecore.FieldTypes assembly and the SchemeDropList class above).
     At this point you could populate the control field instead, but that requires creating a config file, so unless you have a reason to do that you might as well just use assembly and class.
  10. Switch to the master DB and create a template using your new field type.
  11. Create a page and you should have a drop down list with the values from your class.

Equivalent of the Octopus Package Library for Azure DevOps

I've been using Team City and Octopus Deploy in our CI setups for several years, but over the last 6 months have slowly been moving over to Azure DevOps. This is mostly to move away from needing to keep an application like Team City updated by switching to a SaaS service. Octopus is available as a SaaS service, but as Azure DevOps is also capable of doing releases I opted to start with that rather than using Octopus again.

One of my first challenges was how to replace Octopus's package library. The solutions I'm working on (which are generally Sitecore based) consist of the application we write and store in source control, and all the other pre-built parts of Sitecore which we don't store in source control.

With Octopus deploy we would add these static parts to the Octopus package library and then release a combination of them and our application to a server, wiping what is there before we do. That then gives us the exact same deploy on every target.

With Azure DevOps however, things are a bit different.

Azure Artifacts

The closest equivalent of the package library is Azure Artifacts. However, rather than its primary purpose being somewhere to upload packages to be included in a deploy, the goal of Artifacts is more in line with the output of a pipeline being saved as an artifact which can then act as a package feed for things like NuGet or npm.

Unfortunately, there is also no UI to directly upload a NuGet package to an Artifact feed like there is with Octopus’s package library. However, it is possible to upload a pre-built NuGet package another way using NuGet.exe itself.

Manually uploading a NuGet package to Azure Artifacts

To start you need to create a feed for your package to be uploaded to.

Login to Azure DevOps and go to Artifacts. Click Create feed and give it a name.

Click Connect to feed and select NuGet.exe. Under the Project setup heading you will see a nuget.config file to add to your solution.

Create an empty solution in Visual Studio and add a nuget.config file to the root folder using the source from the website. It will look something like this:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <clear />
    <add key="MyFeed" value="https://pkgs.dev.azure.com/.../_packaging/MyFeed/nuget/v3/index.json" />
  </packageSources>
</configuration>

Copy the NuGet package you would like to publish into the root folder as well. I'm using a package we have created for a tool called Feydra. My folder now looks like this.

Before we can publish to our NuGet feed we need to set up a personal access token. This is done from within Azure DevOps.

In the top right corner click the person icon and then select profile. On the profile screen you will see a section called Personal access tokens under Security.

From here click New Token and give it the Read, write & manage permission for Packaging.

You will be given a token which you should save a copy of.

Now we are ready to publish our NuGet package to the Artifacts feed.

Open a command prompt at the folder containing your solution file and run the following command:

nuget push -Source <SourceName> -ApiKey az <PackagePath>

For me this is as follows:

nuget push -Source MyFeed -ApiKey az .\Feydra.Custom.1.0.0.30.nupkg

You will then be prompted for a username and password; use the personal access token for both.

Refresh the feed on the website and you should see your package added to it, ready to be included in a release pipeline.

Sitecore 10 with headless ASP.NET Core

Sitecore 10 is here and with it comes the new developer experience with what Sitecore are calling Sitecore Headless Development.

Now you may be thinking, "didn't Sitecore already have a headless setup in Sitecore 9?", and the answer would be yes; it still exists and is referred to as Sitecore JavaScript Services (JSS). What makes this different is that the rendering layer now uses ASP.NET Core rather than JavaScript libraries like Angular, Vue.js and React. This gives us the benefits of a headless setup without having to program in one language for the back end (C#) and another for the front (JS). It's now C# everywhere.

Before we get any further into what this new experience is, let's clear one thing up. Like Sitecore JSS, this isn't actually a true headless setup, it's decoupled. The subtle difference being that a headless CMS has no rendering engine and has a purpose to feed content to multiple heads that could be anything from a website or app to physical display boards. They generally lack the ability to do things like preview because they have no knowledge of how the content will be rendered. Decoupled on the other hand still has a rendering engine, but it's been split off from the back end. Headless however is far more of a buzz word right now, and alas Sitecore have called this headless.

So how does it work?

The traditional part of Sitecore is essentially the same as it is now. You still have Content Management and Content Delivery servers; xConnect, Identity and all the other roles exist just as they did in Sitecore 9. However, rather than creating View and Controller Renderings in Sitecore to return HTML, you will now create JSON renderings that return item data through an API.

Communicating with that API is a new Rendering Host layer written in ASP.NET Core.

So now when a visitor comes to your site they will be interacting with the ASP.NET Core site, which in turn will call the Headless Services API on your content delivery server; this will return JSON objects for the item data. The ASP.NET Core site then renders the page and returns it to the visitor.

This may sound like a bit more work, as you now have to set up a completely separate ASP.NET Core site and have that talk to an API, but there's good news. Sitecore have written a Sitecore ASP.NET Rendering SDK (included via NuGet) which will do most of the communication with the API for you. Most of what you will actually do is just a mapping of a View in the Rendering Host to a Layout Rendering item in Sitecore. The SDK will take care of the rest.
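
As a rough illustration of that mapping, the registration in the Rendering Host's Startup looks something like the sketch below. It's based on the getting started template's ContentBlock example, and the extension method names are written from memory, so treat them as assumptions and check the template for the exact code.

using Microsoft.Extensions.DependencyInjection;
using Sitecore.AspNet.RenderingEngine.Extensions;

// Placeholder view model - in the template this has properties decorated with binding
// attributes that map to the rendering's Sitecore fields.
public class ContentBlockModel
{
}

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddRouting();
        services.AddMvc();

        services.AddSitecoreRenderingEngine(options =>
        {
            // Map the Sitecore rendering named "ContentBlock" to a model-bound Razor view:
            // the SDK fetches the layout JSON from the Headless Services API, binds the
            // component's fields to ContentBlockModel and renders the matching view.
            options.AddModelBoundView<ContentBlockModel>("ContentBlock");
        });
    }
}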

What’s the benefit?

As a developer there are three massive benefits that I can see for this setup.

Installs and Upgrades should be easier

If I had one complaint about Sitecore, it's the amount of config needed to get the site running. With platforms like Umbraco and EPiServer, you just include a NuGet package in your project, run it from Visual Studio and you have a working CMS. Sitecore has to be installed in one of countless ways (SIF, GUI, serverless, containers) and that install process creates an ever increasing number of roles that all need to communicate with each other, and all need to be upgraded at some point.

Now this isn't quite a SaaS model, but it's getting closer. With your rendering host now separated from the Sitecore install there's less reason to ever touch what gets set up by the installer.

Notice I do say less and not no reason. You will still be making changes to the CM and CD instances. For any type of rendering where you would have written a controller, you will likely create what is called a contents resolver class that will need to go on the CM/CD to generate the object for your view.

You can run and debug directly from Visual Studio

Whenever I switch away from working on Sitecore and then go back, the first point of pain is always the realisation that I'm going back to a process of: write code in Visual Studio > publish to local site > look at local site > if there's a need to debug, ctrl+alt+p to attach to process > realise Visual Studio isn't in admin mode so restart Visual Studio, etc. With this setup it's: write code > press F5 > watch the lightweight front end instantly spin up and work.

Front End devs only need the ASP.NET Core project

Life is hard for a Front End developer working on Sitecore. Their skill set is in HTML and CSS, but they have to work with this beast of a CMS and get updates from the back end devs to work on their local environments. Tools like Feydra help improve the situation, but it's still not perfect.

In theory with a decoupled setup, with the Sitecore instance running on a server somewhere and a decent internet connection, all they now need is the ASP.NET Core Rendering Host project, which will run directly from source control. No need to install anything, and they can even work on a Mac!

How do you get setup?

Getting going is a relatively simple experience due to the fact Sitecore have provided a getting started template (https://doc.sitecore.com/developers/100/developer-tools/en/walkthrough–using-the-getting-started-template.html) and a guide for creating your first model-bound view.

The guide uses a Sitecore Container setup (also new in Sitecore 10) which makes it even easier to get started with (no more installing all those pesky pre-req’s like Solr with https and debugging SIF errors).

I ran into a few issues with the containers that came down to ports not being available (if you do get errors, check the documentation for containers; it lists some additional port numbers that need to be free), but once you have it set up, I would say you end up with more questions on the container side of things and the rendering host part just works.

What is missing?

This is a first release so obviously some things are missing right now. The biggest things I’ve come across so far are:

  • Information on how to debug. There's no guidance right now on how to work out whether an issue is with the API not returning data or with the rendering host not rendering it.
  • Sitecore Forms. A relatively important module for sites which won’t be available if you choose this setup.
  • Ability or at least instructions on how the Rendering Host should interact with the Sitecore DB or Search. For instance if you wanted to create an API to provide an autocomplete on a search box, logically you would now create the API in the rendering host, but the best practice way to retrieve the data from Sitecore is not yet clear.

Using Sitecore TDS with Azure Pipelines

Sitecore TDS allows developers to serialize Sitecore items into a file format which enables them to be stored in source control. These items can then be turned into a Sitecore update package to be deployed into a Sitecore solution.

With tools like Team City it was possible to install the TDS application on the server, and MSBuild would pick it up in the same way that Visual Studio does when you create a build locally. However, with build tools like Azure Pipelines that are a SaaS service, you do not have any access to install components on a server.

If your solution contains a TDS project and you run an Azure Pipeline, you will probably see an error saying that the target 'Build' does not exist in the project.

Fortunately, TDS can be added to a project as a NuGet package. The documentation I found on Hedgehog's site for TDS says the package is unlisted on NuGet.org but can be installed in the normal way if you know what it's called. When I tried this it didn't work, but the NuGet package is also available in the TDS download, so we can do it a different way.

Firstly, download TDS from Hedgehog's website: https://www.teamdevelopmentforsitecore.com/Download/TDS-Classic

Next create a folder in the root of your drive called LocalNuGet and copy the NuGet package from the download into this folder.

Within Visual Studio go to Tools > NuGet Package Manager > Package Manager Settings, and in the window that opens select Package Sources on the left.

Add a new package source and set it to your LocalNuGet folder.

From the package manager screen select your local NuGet source as the package source and then add the TDS package to the relevant projects.

When you commit to source control, make sure you add the Hedgehog package in your project's packages folder. Normally this will be excluded, as your build will try to restore the NuGet packages from a NuGet feed rather than having them contained in the repo.

If you run your Azure Pipeline now you should get an error about the TDS product key:

This is a great first step as we can now see TDS is being used and we’re getting an error from it.

If you're not getting this, then open the csproj file for your solution and check that there are no references to c:\program files (x86)\Hedgehog Development\. You should have something like this:

<Target Name="EnsureNuGetPackageBuildImports" BeforeTargets="PrepareForBuild">
    <PropertyGroup>
      <ErrorText>This project references NuGet package(s) that are missing on this computer. Use NuGet Package Restore to download them.  For more information, see http://go.microsoft.com/fwlink/?LinkID=322105. The missing file is {0}.</ErrorText>
    </PropertyGroup>
    <Error Condition="!Exists('..\packages\HedgehogDevelopment.TDS.6.0.0.10\build\HedgehogDevelopment.TDS.targets')" Text="$([System.String]::Format('$(ErrorText)', '..\packages\HedgehogDevelopment.TDS.6.0.0.10\build\HedgehogDevelopment.TDS.targets'))" />
  </Target>
  <Import Project="..\packages\HedgehogDevelopment.TDS.6.0.0.10\build\HedgehogDevelopment.TDS.targets" Condition="Exists('..\packages\HedgehogDevelopment.TDS.6.0.0.10\build\HedgehogDevelopment.TDS.targets')" />

To fix the product key error we need to add our license details.

Edit your pipeline and then click on the variables button in the top right. Add two new variables:

  • TDS_Key – Which should be set to your license key
  • TDS_Owner – Which should be set to the company name the key is linked to

Now run your pipeline again and the build should succeed.

Moving the Media Cache folder in Sitecore

One of the caches that Sitecore has is the Media Cache. Whenever you use an image from Sitecore's media library, Sitecore will retrieve the image from the database, scale it to the size you requested and then store it to disk in the media cache folder. On any subsequent request the image will be retrieved from the media cache rather than the database.

By default each content management and content delivery server will locate the media cache in /App_Data/MediaCache

This is a relatively logical place to store a cache for images and won't cause you many issues. However, if you have automated deployments set up then you are likely wiping the whole of your website folder each time you do a deploy, to ensure that your deploy remains the same on all environments. As the App_Data folder is in the website, your Media Cache will be deleted too.

Depending on your site, deleting the media cache potentially isn't much of an issue. After all it's a cache, so all that will happen is the images will get cached again the next time they are requested. But depending on how many images get retrieved at the same time, this could slow down performance, particularly if your content editors decided to put every image they ever uploaded into the same folder. Opening that folder in the admin will create a nice amount of load on your server, particularly if you also have some extra image optimization logic installed.

Like most things in Sitecore though, you can change the location through a config setting.

In Sitecore.config you will find a setting called Media.CacheFolder. Change this to somewhere outside of your website folder and Sitecore will start storing the Media Cache in this location, where it will be safe from all your deploys.

    <!--  MEDIA - CACHE FOLDER
            The folder under which media files are cached by the system.
            Default value: /App_Data/MediaCache
      -->
    <setting name="Media.CacheFolder" value="/App_Data/MediaCache"/>

Azure DevOps and custom NuGet feeds

If you're setting up a CI pipeline on Azure DevOps for a site which uses a NuGet feed from a source that isn't on nuget.org, you may see the error:

“The nuget command failed with exit code(1) and error(Errors in packages.config projects Unable to find version…”

On your local dev machine you will have added an extra NuGet feed source through Visual Studio, which updates a global file on your machine. However, as Azure Pipelines is a hosted service, you don't have the same global file to update to include the sources.

Instead of this you need to add a NuGet.config file to the root of your repository.

Here is an example of one set up to include Sitecore's NuGet package feed.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="nuget.org" value="https://www.nuget.org/api/v2/" />
    <add key="Sitecore NuGet v2 Feed" value="https://sitecore.myget.org/F/sc-packages/" />    
  </packageSources>
</configuration>

Next you will need to update your pipeline to tell the NuGet step to use this config file:

- task: NuGetCommand@2
  inputs:
    restoreSolution: '$(solution)'
    feedsToUse: 'config'
    nugetConfigPath: 'NuGet.config'

And that’s it. As long as all the sources are correct the NuGet command should now find your packages.

Do I need a Headless CMS?

What is a Headless CMS?

Headless CMSs have become the hot topic in recent years, along with preferences to move to microservices over monolithic architectures and to decouple parts of applications where possible. So, you may be wondering, do I need one?

Let's start with what a Headless CMS actually is. A typical CMS does two jobs:

Firstly, it provides a place for marketers to login and create pages of content that appear on the website. Some allow very basic content editing, others provide the capability to construct entire pages from pre-built components, preview what they look like, apply personalisation rules, run split tests and a whole host of other features.

Secondly, it allows developers to create templates or skins to render the pages to visitors. It also provides authentication, form data collection, tracking of visits, browsing habits etc.

Essentially the functionality splits into what the marketer sees and what the visitor sees.

A Headless CMS, on the other hand, chops off the second part and in its place leaves an API so that another application can pull content out of the CMS to display. The CMS then becomes a place solely responsible for managing content rather than managing a website.

Decoupling in this way offers a few advantages:

  1. The technology/platform is no longer determined by the CMS. You could have a .NET based CMS and write the website in JavaScript, PHP, or any other language.
  2. The content can be provided to more than just a website. In the same way that digital asset management platforms specialise in serving digital assets to other systems (e.g. mobile app, web, print etc.), a headless CMS can provide content to multiple other systems too.
  3. It's much easier for a headless CMS to become a SaaS offering. Most custom development when you install a CMS is to tailor the visitor website experience rather than the admin experience. Once the visitor website is removed, there is little reason for it to be a product you install and maintain, making it much easier for vendors to supply as SaaS.

So, should I have one?

With all these advantages the answer should be yes, right? Well, it's not quite that simple; there are also some disadvantages, so let's answer some more questions to see if it's really suited to your situation.

Do you want to use a different website language?

One of the advantages of going headless was that you could pick the language you write the head in, but which language do you want to write it in? Most articles will talk about writing the head in JavaScript frameworks such as React, Vue.js or Angular. The reason for this is simple: if you want to write your application in one of these then you're a bit stuck for CMS choices, as no viable CMSs exist written in React, Vue.js or Angular, so using a headless CMS is the only option other than writing an entirely bespoke admin.

But what if you want to write the head in .NET or PHP? A headless CMS won't restrict you doing this in any way; in fact Umbraco Heartcore even provides a .NET client library to help. The only thing you must question is whether it's going to give you more advantages than disadvantages. By cutting off the head you now have to create it, so are you creating something different to what Umbraco's head originally did, or are you just recreating the same thing?

Is there a specific pre-built head you want to use?

An alternative to writing your own head is to buy one that is premade. The biggest downside I see to headless is that you have to build and support the head, so using one that is pre-built and compatible with your CMS is a big advantage.

However, this does also mean that you may end up paying a higher price in license fees. Unless your CMS costs decrease enough to cover the additional cost of the head, you're ultimately now licensing two products, which could add up to more than one.

How many heads are you going to have?

If the answer is one and it's a website, then it's highly likely that you don't need a headless CMS, as you'll effectively end up writing an inferior head to the one you're replacing.

CMSs like WordPress have almost two decades of work gone into advancing them at this point, getting them to a place where they are feature rich, reliable and, most importantly, secure. Creating your own head will require you to do the work they have already done.

Are you building a website?

If the answer to this is no, then a headless CMS makes a great option to provide an admin experience for your content to become editable. They are pre-built, integrate with the popular enterprise authentication systems and offer great features around workflow and user permissions.

However, if the answer is yes, then you need to think carefully. Going truly headless has some quite major disadvantages for a website:

  1. You need to write and maintain the head, and if that head is doing the same thing the CMS one did then you're just reinventing the wheel.
  2. As the CMS is now only providing content to another application, less will be possible through the admin interface. E.g. no ability to preview pages, no drag and drop page creation.
  3. Creating things like campaign landing pages or any new page layout may no longer be possible without involving the IT team. Because the CMS is now only providing the content rather than the layout, unless the head provides an admin to manage this you could end up with very fixed page templates.

A CMS that can work both head on and headless may be a better option. For instance, Sitecore can be set up like a traditional CMS where the custom rendering logic is added to the main application. However, it can also be set up as a decoupled CMS where the page rendering is achieved using Angular, Vue or React, but note this is decoupled and not headless. The visitor site can be hosted separate from the CMS like with headless, but a lot of the logic is still provided by Sitecore so things like personalisation, admin previews, page construction, Sitecore forms etc are all still possible. It also can work in a truly headless way by using the same API that powers the decoupled head to power a bespoke non-Sitecore head, like a mobile app.

Is your content reusable?

A news corporation writing articles that appear in newspapers, websites or mobile apps is obviously creating a lot of reusable content that can stay the same on each destination.

Alternatively, a typical brochure site (done well) is created with a journey in mind, where the whole page has been designed to achieve an outcome. Some content may just be three words combined with an image to evoke an emotional response. Other content may be specifically written to fit a certain gap so that it appears the same size as two other pieces of content next to it. None of which may actually translate to mobile.

If this is the case, then rather than creating a content repository that can be reused, what you may actually end up creating is a repository where 90% of the content can't be reused and the other 10% is lost amongst it.

Managing Redirects in Sitecore

One of the most important aspects, if not the most important aspect, of managing a website is SEO. It doesn't matter how good your website is if nobody can find it. Creating good SEO is a lot about having pages match what users are searching for, which then results in a high ranking for those pages. However, a second part to this formula is what happens to a page when it no longer exists. For instance, one day this blog post may no longer be relevant to have on our site, but up until that point search engines will have crawled it, people will have posted links to it, and all of this adds to its value, which we would lose should it be removed. An even simpler scenario could be that we rename the blog section to something else and the page still exists under a different URL, but links to it no longer work.

Fortunately, the internet has a way of dealing with this and it's in the form of a 301 redirect. A 301 redirect is an instruction to anyone visiting the URL that the page has permanently moved to a different URL. When you set up a 301 redirect it achieves two things. Firstly, users following a link to your site end up on a relevant page rather than a generic page not found message. Secondly, search engines recrawling your site update their index with the new page and transfer all the associated value. So, to avoid losing any value associated with a page, whenever you move, rename or remove a page you should also set up a redirect.
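
If you've never seen one issued from code, here's a minimal ASP.NET MVC sketch of what returning a 301 looks like (the controller and URLs are made up for illustration; the module described below lets marketers manage this from within Sitecore instead):

using System.Web.Mvc;

public class LegacyRoutesController : Controller
{
    // Hypothetical example: an old blog URL now points permanently at a new one.
    public ActionResult OldBlogPost()
    {
        // Returns a 301 Moved Permanently, so browsers and search engines
        // update their links/index to the new URL and its value is transferred.
        return RedirectPermanent("/blog/managing-redirects-in-sitecore");
    }
}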

Sitecore

Given how important redirects are it’s then surprising that an enterprise CMS platform like Sitecore doesn’t come with a universal way to set redirects up out of the box.

Typically, when we’ve taken over a site one of the support requests we will eventually receive is a question surrounding setting up a redirect from a marketer who’s ended up on Sitecore’s documentation site and is confused why they can’t find any of the things in this article https://doc.sitecore.com/users/sxa/17/sitecore-experience-accelerator/en/map-a-url-redirect.html.

The article appears to suggest that creating redirects is a feature that is supported, however it’s only available to sites built using Sitecore Experience Accelerator rather than the traditional approach.

More often than not, the developers who created the site will have added 301 redirect functionality, but will have done it using the IIS Redirect module, which, while it serves the purpose of creating redirects, does not provide a Sitecore interface for doing so.

301 Redirect Module

Fortunately, there are some open source modules available which add the missing feature to Sitecore. The one we typically work with is neatly named 301 Redirect Module.

Once installed, the module can be found in the System > Modules folder. Clicking on the Redirects folder will give you the option to create another Redirect Folder (to help group your redirects into more manageable folders), a Redirect URL, a Redirect Pattern or a Redirect Rule.

Redirect URL

This is by far the most common redirect to be set up. You specify an exact URL that is to be redirected and set the destination as either another exact URL or a Sitecore item. There’s even an option to choose if it should be a permanent or temporary redirect.

Redirect Pattern

Redirect patterns are slightly more complex and require some knowledge of being able to write regular expressions. However, these are most useful if you ever want to move an entire section of your site. Rather than creating rules for each of the pages under a particular page, one rule can redirect all of them. Not only does this save time when creating the redirects, but it also makes your list of redirects far more manageable.

Auto Generated Rules

One of my favourite features of the 301 Redirect Module is that it can auto generate rules for any pages you move. For instance, if you decide a sub-page should become a top-level page moving it will result in a rule automatically being generated without you having to do a thing.

Summary

A marketer's ability to manage 301 redirects is as important as them being able to edit the content on the site, and while traditional Sitecore sites may be missing this functionality, it can easily be added using 3rd party modules. Best of all, they're free!