Azure DevOps and custom NuGet feeds

If you're setting up a CI pipeline on Azure DevOps for a site which uses a NuGet feed from a source that isn't nuget.org, you may see the error:

“The nuget command failed with exit code(1) and error(Errors in packages.config projects Unable to find version…”

On your local dev machine you will have added an extra NuGet feed source through Visual Studio, which updates a global config file on your machine. However, as Azure Pipelines builds run on hosted agents rather than a machine you control, you don't have the same global file to update to include the sources.

Instead, you need to add a NuGet.config file to the root of your repository.

Here is an example of one set up to include Sitecore's NuGet package feed.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="nuget.org" value="https://www.nuget.org/api/v2/" />
    <add key="Sitecore NuGet v2 Feed" value="https://sitecore.myget.org/F/sc-packages/" />    
  </packageSources>
</configuration>

Next, update your pipeline to tell the NuGet restore step to use this config file:

- task: NuGetCommand@2
  inputs:
    restoreSolution: '$(solution)'
    feedsToUse: 'config'
    nugetConfigPath: 'NuGet.config'

And that’s it. As long as all the sources are correct the NuGet command should now find your packages.

Do I need a Headless CMS?

What is a Headless CMS?

Headless CMSs have become a hot topic in recent years, alongside the preference for microservices over monolithic architectures and for decoupling parts of applications where possible. So, you may be wondering: do I need one?

Let's start with what a Headless CMS actually is. A typical CMS does two jobs.

Firstly, it provides a place for marketers to log in and create the pages of content that appear on the website. Some allow very basic content editing; others provide the capability to construct entire pages from pre-built components, preview what they look like, apply personalisation rules, run split tests and a whole host of other features.

Secondly, it allows developers to create templates or skins to render the pages to visitors. It also provides authentication, form data collection, visit tracking, browsing habits and so on.

Essentially the functionality splits into what the marketer sees and what the visitor sees.

A Headless CMS, on the other hand, chops off the second part and in its place leaves an API so that another application can pull content out of the CMS to display. The CMS then becomes solely responsible for managing content rather than managing a website.
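
To make the idea concrete, here's a minimal sketch of what "pulling content out of the CMS" might look like from the head's side. The endpoint URL and JSON shape are entirely hypothetical; every headless CMS exposes its own delivery API, and many ship client libraries so you don't write this by hand.

using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

public class PageContent
{
    public string Title { get; set; }
    public string Body { get; set; }
}

public class ContentClient
{
    private static readonly HttpClient http = new HttpClient();

    // Fetch a page from a hypothetical delivery API and deserialize the JSON
    // into a simple model that the head can then render however it likes.
    public async Task<PageContent> GetPageAsync(string slug)
    {
        string json = await http.GetStringAsync($"https://cms.example.com/api/content/pages/{slug}");
        return JsonSerializer.Deserialize<PageContent>(json);
    }
}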

Decoupling in this way offers a few advantages:

  1. The technology/platform is no longer determined by the CMS. You could have a .Net-based CMS and write the website in JavaScript, PHP, or any other language.
  2. The content can be provided to more than just a website. In the same way that digital asset management platforms specialise in serving digital assets to other systems (e.g. mobile app, web, print), a headless CMS can provide content to multiple other systems too.
  3. It's much easier for a headless CMS to become a SaaS offering. Most custom development when you install a CMS is to tailor the visitor website experience rather than the admin experience. Once the visitor website is removed there is little reason for it to be a product you install and maintain, making it much easier for vendors to supply it as SaaS.

So, should I have one?

With all these advantages the answer should be yes, right? Well, it's not quite that simple. There are also some disadvantages, so let's answer some more questions to see if it's really suited to your situation.

Do you want to use a different website language?

One of the advantages of going headless was that you could pick the language you write the head in, but which language do you want to write it in? Most articles will talk about writing the head in JavaScript frameworks such as React, Vue.js or Angular. The reason for this is simple: if you want to write your application in one of these, you're a bit stuck for CMS choices, as no viable CMS exists written in React, Vue.js or Angular, so using a headless CMS is the only option other than writing an entirely bespoke admin.

But what if you want to write the head in .Net or PHP? A headless CMS won't restrict you from doing this in any way; in fact, Umbraco Heartcore even provides a .Net client library to help. The only thing you must question is whether it's going to give you more advantages than disadvantages. By cutting off the head you now have to create it, so are you creating something different to what Umbraco's head originally did, or are you just recreating the same thing?

Is there a specific pre-built head you want to use?

An alternative to writing your own head is to buy one that is premade. The biggest downside I see to headless is that you have to build and support the head, so using one that is pre-built and compatible with your CMS is a big advantage.

However, this does also mean that you may end up paying a higher price in licence fees. Unless your CMS costs decrease enough to cover the additional cost of the head, you're ultimately now licensing two products, which could add up to more than one.

How many heads are you going to have?

If the answer is one and it's a website, then it's highly likely that you don't need a headless CMS, as you'll effectively end up writing an inferior head to the one you're replacing.

CMSs like WordPress have almost two decades of work gone into advancing them at this point, getting them to a place where they are feature rich, reliable and, most importantly, secure. Creating your own head will require you to do the work they have already done.

Are you building a website?

If the answer to this is no, then a headless CMS makes a great option for providing an admin experience that makes your content editable. They are pre-built, integrate with the popular enterprise authentication systems and offer great features around workflow and user permissions.

However, if the answer is yes, then you need to think carefully. Going truly headless has some quite major disadvantages for a website:

  1. You need to write and maintain the head, and if that head is doing the same thing the CMS one did then you're just reinventing the wheel.
  2. As the CMS is now only providing content to another application, less is possible through the admin interface, e.g. no ability to preview pages, no drag and drop page creation.
  3. Creating things like campaign landing pages or any new page layout may no longer be possible without involving the IT team. Because the CMS is now only providing the content rather than the layout, unless the head provides an admin to manage this, you could end up with very fixed page templates.

A CMS that can work both with a head on and headless may be a better option. For instance, Sitecore can be set up like a traditional CMS where the custom rendering logic is added to the main application. However, it can also be set up as a decoupled CMS where the page rendering is achieved using Angular, Vue or React, but note this is decoupled and not headless. The visitor site can be hosted separately from the CMS as with headless, but a lot of the logic is still provided by Sitecore, so things like personalisation, admin previews, page construction and Sitecore Forms are all still possible. It can also work in a truly headless way by using the same API that powers the decoupled head to power a bespoke non-Sitecore head, like a mobile app.

Is your content reusable?

A news corporation writing articles that appear in newspapers, on websites or in mobile apps is obviously creating a lot of reusable content that can stay the same in each destination.

Alternatively, a typical brochure site (done well) is created with a journey in mind, where the whole page has been designed to achieve an outcome. Some content may just be three words combined with an image to evoke an emotional response. Other content may be specifically written to fit a certain gap so that it appears the same size as two other pieces of content next to it. None of this may actually translate to mobile.

If this is the case, then rather than creating a content repository that can be reused, what you may actually end up creating is a repository where 90% of the content can't be reused and the other 10% is lost amongst it.

Managing Redirects in Sitecore

One of the most important aspects, if not the most important aspect, of managing a website is SEO. It doesn't matter how good your website is if nobody can find it. Good SEO is largely about having pages match what users are searching for, which then results in a high ranking for those pages. However, a second part to this formula is what happens to a page when it no longer exists. For instance, one day this blog post may no longer be relevant to have on our site, but up until that point search engines will have crawled it, people will have posted links to it, and all of this adds to its value, which we would lose should it be removed. An even simpler scenario could be that we rename the blog section to something else: the page still exists under a different URL, but links to it no longer work.

Fortunately, the internet has a way of dealing with this, and it's in the form of a 301 redirect. A 301 redirect is an instruction to anyone visiting the URL that the page has permanently moved to a different URL. When you set up a 301 redirect it achieves two things. Firstly, users following a link to your site end up on a relevant page rather than a generic page not found message. Secondly, search engines recrawling your site update their index with the new page and transfer all the associated value. So, to avoid losing any value associated with a page, whenever you move, rename or remove a page you should also set up a redirect.
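
As a rough illustration (and not specific to Sitecore or the module discussed below), this is all a 301 amounts to in an ASP.NET MVC controller; the old and new URLs are made up:

using System.Web.Mvc;

public class LegacyRoutesController : Controller
{
    // Anyone requesting the old URL is permanently redirected to the new one,
    // so both visitors and search engines get pointed at the replacement page.
    public ActionResult OldBlogPost()
    {
        return RedirectPermanent("/blog/managing-redirects-in-sitecore");
    }
}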

Sitecore

Given how important redirects are, it's surprising that an enterprise CMS platform like Sitecore doesn't come with a universal way to set redirects up out of the box.

Typically, when we've taken over a site, one of the support requests we will eventually receive is a question about setting up a redirect from a marketer who's ended up on Sitecore's documentation site and is confused as to why they can't find any of the things in this article: https://doc.sitecore.com/users/sxa/17/sitecore-experience-accelerator/en/map-a-url-redirect.html.

The article appears to suggest that creating redirects is a supported feature; however, it's only available to sites built using Sitecore Experience Accelerator rather than the traditional approach.

More often than not, the developers who created the site will have added 301 redirect functionality, but will have done it using the IIS redirect module, which, while it serves the purpose of creating redirects, does not provide a Sitecore interface for doing so.

301 Redirect Module

Fortunately, there are some open source modules available which add the missing feature to Sitecore. The one we typically work with is neatly named 301 Redirect Module.

Once installed, the module can be found in the System > Modules folder. Clicking on the Redirects folder will give you the option to create another Redirect Folder (to help group your redirects into more manageable folders), a Redirect URL, a Redirect Pattern or a Redirect Rule.

Redirect URL

This is by far the most common redirect to be set up. You specify an exact URL that is to be redirected and set the destination as either another exact URL or a Sitecore item. There’s even an option to choose if it should be a permanent or temporary redirect.

Redirect Pattern

Redirect patterns are slightly more complex and require some knowledge of writing regular expressions. However, they are most useful if you ever want to move an entire section of your site. Rather than creating rules for each of the pages under a particular page, one rule can redirect all of them. Not only does this save time when creating the redirects, but it also makes your list of redirects far more manageable.
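
To illustrate the idea (the exact pattern syntax the module expects may differ from this), a single regular expression can map a whole section of old URLs onto their new locations:

using System;
using System.Text.RegularExpressions;

class RedirectPatternExample
{
    static void Main()
    {
        // Map every URL under /news/ to the equivalent URL under /blog/,
        // keeping the rest of the path intact via the captured group.
        string oldUrl = "/news/2019/managing-redirects-in-sitecore";
        string newUrl = Regex.Replace(oldUrl, "^/news/(.*)$", "/blog/$1");

        Console.WriteLine(newUrl); // /blog/2019/managing-redirects-in-sitecore
    }
}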

Auto Generated Rules

One of my favourite features of the 301 Redirect Module is that it can auto generate rules for any pages you move. For instance, if you decide a sub-page should become a top-level page moving it will result in a rule automatically being generated without you having to do a thing.

Summary

A marketer's ability to manage 301 redirects is as important as their ability to edit the content on the site. While traditional Sitecore sites may be missing this functionality, it can easily be added using third-party modules, and best of all, they're free!

User Authentication across sub domains in Sitecore

The Scenario

In some scenarios you may have sub-domains set up on your site; this may be to have an API configured as a separate site, or a microsite set up on a sub-domain. Depending on the scenario, you may also want a user who is logged in on one of the sites to be logged in on the other.

When providing a login section on a site, after the visitor logs in their authentication is tracked using cookies. Cookies are linked to a domain, and as your sites only differ by sub-domain you may be thinking: great, it should just work, no need for a single sign-on solution. But you would be wrong.

The Solution

By default, cookies will be set against the specific domain, including the sub-domain, unless you tell it otherwise. Fortunately, this is quite easy to do.

In the web.config file, find the section for forms authentication and add a domain attribute set to the top-level domain. Your authentication cookie will now always be set on the top-level domain and work across all sub-domains.

<system.web>
  <authentication mode="None">
    <forms name=".ASPXAUTH" cookieless="UseCookies" domain="mydomain.com" timeout="30"/>
  </authentication>
</system.web>
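
If you issue the authentication cookie manually rather than relying on the config attribute, the same effect can be achieved in code by setting the cookie's Domain property before writing it. This is a sketch assuming classic ASP.NET forms authentication; the domain value is a placeholder:

using System.Web;
using System.Web.Security;

public static class AuthCookieHelper
{
    // Issue the forms authentication cookie against the top-level domain so
    // the browser sends it to every sub-domain, not just the one that set it.
    public static void SignIn(string username, HttpResponseBase response)
    {
        HttpCookie authCookie = FormsAuthentication.GetAuthCookie(username, false);
        authCookie.Domain = "mydomain.com";
        response.Cookies.Add(authCookie);
    }
}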

Bulk Inserting data using Entity Framework

Using tools like Entity Framework makes life far easier for a developer. Recently I blogged about how using them is part of what makes .Net Core one of the best platforms for prototype development, but the benefits don't end there. They are also great from a security perspective, cutting a lot of the risk around SQL injection attacks by avoiding the easy mistakes that can be made with regular ADO.NET.

However, they do have some downsides, a main one being that they are particularly slow when it comes to doing bulk inserts to a database.

For example, assume you have an application which regularly receives an XML import file consisting of 200,000 records, and each one needs to be either an insert or an update in the db. You'll quickly learn that looping through the whole lot and then calling save changes results in a process taking an extremely long time to run; it may even just time out. You then decide to get rid of that one long save changes call by breaking the work into blocks of 500 and calling save changes for each of those. That may solve the timeout issue, but it still results in a process potentially lasting around an hour.
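
For reference, that batched version looks something like the sketch below, reusing the hypothetical foo/Jobs types from the examples further down; it avoids the timeout but still issues an individual INSERT per row:

public async Task DoImportInBatches(List<foo> collection)
{
    int count = 0;

    foreach (var item in collection)
    {
        dbContext.Jobs.Add(new Jobs
        {
            DateAdded = DateTime.UtcNow,
            Name = item.Name,
            Location = item.Location
        });

        // Flush to the database every 500 records instead of once at the end.
        if (++count % 500 == 0)
        {
            await dbContext.SaveChangesAsync();
        }
    }

    // Save whatever is left in the final partial batch.
    await dbContext.SaveChangesAsync();
}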

The problem is that this is a scenario Entity Framework and EF Core just weren't designed to handle. As a solution you could opt to drop Entity Framework altogether and revert to something like a native SQL bulk insert command, but what if you need to do some processing in code on each record before the import happens? What if you have one of those classic not-quite-valid XML files which would cause SQL's bulk insert to fail?

The solution is to use an open source extension called EFCore.BulkExtensions.

EFCore.BulkExtensions

EFCore.BulkExtensions is a set of extension methods for Entity Framework that provide the functionality to do bulk inserts. You can add it to your project using NuGet, and you'll find the project on GitHub here: https://github.com/borisdj/EFCore.BulkExtensions

Usage is also very simple. Let's assume you have some existing traditional EF code that loops through a collection and, for each item, creates a new db entity and adds it to the db:

public async Task DoImport(List<foo> collection)
{
    foreach (var item in collection)
    {
        Jobs job = new Jobs();
        
        job.DateAdded = DateTime.UtcNow;
        job.Name = item.Name;
        job.Location = item.Location;

        await dbContext.Jobs.AddAsync(job);
    }

    await dbContext.SaveChangesAsync();
}

Rather than adding each item to the Entity Framework db context, you instead build up a list of those objects and then call a BulkInsert function on your db context, passing the list in.

public async Task DoImport(List<foo> collection)
{
    List<Jobs> importJobs = new List<Jobs>();
    foreach (var item in collection)
    {
        Jobs job = new Jobs();
        
        job.DateAdded = DateTime.UtcNow;
        job.Name = item.Name;
        job.Location = item.Location;
        
        importJobs.Add(job);
    }

    await dbContext.BulkInsertAsync(importJobs);
}

It also works for updates, but rather than creating a new item, first retrieve it from the db and then at the end call BulkInsertOrUpdate with the list.

await dbContext.BulkInsertOrUpdateAsync(importJobs);
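
A rough sketch of an insert-or-update import might therefore look like the following. Matching existing records on Name is purely illustrative (in practice you would probably load the existing records in one query up front); EFCore.BulkExtensions matches rows on the primary key, so the important part is that retrieved entities keep their keys:

public async Task DoImportOrUpdate(List<foo> collection)
{
    List<Jobs> importJobs = new List<Jobs>();

    foreach (var item in collection)
    {
        // Look for an existing record first and fall back to a new one.
        Jobs job = await dbContext.Jobs.FirstOrDefaultAsync(j => j.Name == item.Name)
                   ?? new Jobs { DateAdded = DateTime.UtcNow };

        job.Name = item.Name;
        job.Location = item.Location;

        importJobs.Add(job);
    }

    // Existing rows (matched on primary key) are updated, the rest are inserted.
    await dbContext.BulkInsertOrUpdateAsync(importJobs);
}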

From my experience, doing this took an import process that would run for over an hour down to something which would complete in a few minutes.

Top reasons .Net is amazing for prototyping

The other day someone told me .net was slow to get something built in, and to be fair to the person I can see why he thought that. Most of his interaction with .net projects has been with large, complex enterprise applications that often integrate multiple other applications.

However, I would maintain that .net is a framework that is actually really fast to develop on, and that what he had perceived as slow was the complexity of those projects rather than the actual coding time.

In fact, I would say it's so fast to get something built in that it's actually the fastest thing to develop a prototype in. Here are my top reasons why.

ASP.NET Core

ASP.NET Core inherits all the best bits from the ASP.NET Framework that came before it, giving the framework almost 20 years of refinement since its original release in 2002. The days of WebForms in the original ASP.NET are now long behind us and we now have the choice of building web applications with either MVC or Razor Pages.

Razor provides the perfect combination of a view language with helpers to render your HTML without limiting what can be done on the front end. How you code your HTML is still completely up to you; the helpers just provide features like binding that make it even faster to write.

Another great thing about .Net Core over what came before it is that it's platform independent. Rather than being confined to just Windows, you can run it on Mac or Linux too.

Great starter templates

What kicks off a great prototype project is starting with a great template, and ASP.NET Core has a bunch.

As already mentioned, you can build a web application with either Razor Pages or MVC, but the templates also provide you with the base for building an API, an Angular app, React.js, or React.js and Redux, or you can simply create an Empty application.

My preference is to go for MVC as it's what I'm most familiar with, and a key thing for rapidly building a prototype is that you develop rapidly. The idea is to focus on creating something new and unique, not on learning a new framework.

The MVC web application template gives you a base site to work with and a few pages already set up; Bootstrap and jQuery are already included, so you start right at the point of working on your logic rather than spending time on setup.

SQL Server and EF Core

I've always been a bit of a database guy. I'm not sure why, but it's a topic that has always just made sense to me, and despite being a topic that can get quite complex, the reasons behind the complexity always feel logical.

When it comes to building a prototype though there are two aspects which make storage with .net core super simple.

Firstly, Entity Framework Core (EF Core) means you don't really need to know any SQL or spend any time writing it. It helps if you do, but at a minimum all you need to do is create a model in your code and add a few lines for a DB context that tell EF Core which models are tables and how they relate. Then turn on migrations and you're done. When you run the application the DB gets created for you, and each time you change your model you just add another migration; the next time the application runs, the DB schema gets updated.
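
A minimal sketch of what that looks like, with a made-up Note model:

using Microsoft.EntityFrameworkCore;

public class Note
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string Body { get; set; }
}

public class AppDbContext : DbContext
{
    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }

    // Each DbSet tells EF Core that the model should become a table.
    public DbSet<Note> Notes { get; set; }
}

From there, adding a migration (dotnet ef migrations add InitialCreate, or Add-Migration in the Package Manager Console) generates the code that creates the table.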

Querying your DB is done by writing LINQ queries against your Entity Framework model, allowing you to have next to no understanding of SQL or how the DB works. Everything is just done by magic for you.
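
For example, given an AppDbContext instance called dbContext (from the sketch above), a query like this gets translated into SQL for you:

var recentNotes = await dbContext.Notes
    .Where(n => n.Title.Contains("prototype"))
    .OrderByDescending(n => n.Id)
    .Take(10)
    .ToListAsync();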

The second part is SQL Server and its different versions. Often when you think of SQL Server you think of the big DB engine with its many, many components that you install on a server or your local machine, but there are two others which are even more important: LocalDB and Azure SQL.

LocalDB is an option that can be installed as part of Visual Studio. No separate application is needed and no services need to run in the background. It is essentially the minimum required to start the DB engine for development purposes. In practical terms this means that when you start your application, EF Core can run off a LocalDB which didn't require any setup, but as far as your application is concerned it is no different from working with any other version of SQL Server.

Azure SQL, as the name implies, is SQL Server on Azure. The only thing I really need to say about this is that you can swap between LocalDB and Azure SQL with ease. They may be different, but as far as your prototype is concerned, they are the same.

Scaffolding

The only thing quicker than writing code is having someone else do it for you. So we've created our application from a template and added a model which generated our database, and now it's time to create some pages. Well, the good news is it's still not time to write much code, because Visual Studio can scaffold out pages based on our model for us!

Adding a controller to our project in Visual Studio gives us some options on what should be generated for us, one of which is MVC Controller with views, using Entity Framework. What that means is that, given a model, it will create controllers and views for listing items, creating them, editing them and deleting them. No coding by us required!

Now, it's unlikely that this is exactly what you're after, but it's generally a good starting place, and deleting code you don't need is far quicker than writing it.

Azure

Lastly there is Azure. You may have spotted a theme to all these points: they all remove the effort required for setup so you can focus on building your own logic, and this point is no different.

I remember a time when, if I wanted a server to put an application on, I had to request it and then wait a while. What I would get back would either be a server that already had other resources running on it, or a blank server that needed applications installed on it, e.g. SQL Server or the .Net Framework. IIS wouldn't have been configured, and it would be a number of hours before my application would be running.

With Azure you don't even really need to leave Visual Studio. From the publish dialog you can create a new App Service and DB, and then publish. All the connection strings are sorted out for you, there are service plans which cost next to nothing, a domain is configured for you, and at the end of the publish the website opens and is working. The whole process takes less than 10 minutes.

Logging with .net core and Application Insights

When you start building serverless applications like Azure Functions or Azure WebJobs, one of the first things you will need to contend with is logging.

Traditionally, logging was achieved simply by appending rows to a text file stored on the same server your application was running on. Tools like log4net made this simpler by bringing some structure to the process and providing functionality like automatic timestamps, log levels and the ability to configure which logs should actually get written out.

With a serverless application, though, writing to the hard disk is a big no-no. You have no guarantee how long that server will exist for, and when your application moves, that data will be lost. In a world where you might want to scale up and down, having logs split between servers also makes them hard to retrieve when an error does happen.

.net core

The first bit of good news is that .net core has a logging API built in. Here I am configuring it in a web job to output logs to the console and to Application Insights. This is part of the host builder config in the program.cs file.

hostBuilder.ConfigureLogging((context, b) =>
{
    // Console logging is useful when running locally.
    b.AddConsole();

    // When the project runs in Azure you can't monitor execution from the
    // console output, so if an Application Insights instrumentation key exists
    // in the configuration (e.g. appsettings.json or an app setting), send the
    // logs to Application Insights as well.
    string instrumentationKey = context.Configuration["APPINSIGHTS_INSTRUMENTATIONKEY"];
    if (!string.IsNullOrEmpty(instrumentationKey))
    {
        b.AddApplicationInsights(o => o.InstrumentationKey = instrumentationKey);
    }
});

Microsoft's documentation on logging in .NET Core and ASP.NET can be found here.

Creating a log in your code is then as simple as using dependency injection to inject an instance of ILogger into your class and then using its functions to create a log.

using System;
using Microsoft.Extensions.Logging;

public class MyClass
{
    private readonly ILogger<MyClass> logger;

    public MyClass(ILogger<MyClass> logger)
    {
        this.logger = logger;
    }

    public void Foo()
    {
        try
        {
            // Logging information
            logger.LogInformation("Foo called");
        }
        catch (Exception ex)
        {
            // Logging an error
            logger.LogError(ex, "Something went wrong");
        }
    }
}

Application Insights

When your application is running in Azure, Application Insights is where all your logs will live.

What's great about App Insights is that it gives you the ability to write queries against all your logs.

So for instance if I wanted to find all the logs for an import function starting, I can write a filter for messages containing “Import function started”.

One of my favourite functions to use in a where clause is ago(30m). It gives you the time for a given timespan in the past, which is great when you're running the same query frequently and are only interested in the last x amount of time: you can simply write where timestamp > ago(30m) for the last 30 minutes of logs rather than trying to remember what date time format your string should be in.

Queries can also be saved or pinned to a dashboard if they are a query you need to run frequently.

For all regular logs your application makes, you need to query the traces. What can be confusing with this, though, is the errors.

With the code above I had a try catch block, and in the catch block I called logger.LogError(ex, "Something went wrong"), so in my logs I expect to see the message, and as I passed an exception I also expect to see the exception. But if we look at this example from Application Insights, you will see an error in the traces log but no stack trace or anything else from the exception.

LogError will write to both the traces and exceptions log. If you want to view the exceptions you have to look at the exceptions log, not the traces.

This is just the start of the functionality that Application Insights provides, but if you're just starting out, hopefully this is a good indication not only of how easy it is to add logging to your application, but also of how much added value App Insights can offer over just having text files.

Connecting Sitecore to Salesforce

When it comes to creating CMS websites, the most common integration requirement is with a CRM system. In this article I'm going to explain the different methods of integrating Sitecore and Salesforce, along with the advantages and disadvantages of each.

Salesforce Web to Lead

The first option isn't Sitecore-specific at all; it's functionality provided and supported by Salesforce. Web to Lead allows you to generate an HTML form from within Salesforce that, when submitted, will create a Lead in Salesforce. Adding it to your Sitecore instance can be as simple as copying and pasting the generated HTML and setting up a confirmation page for Salesforce to redirect to once the form has been submitted.

The process for the form submission is then as follows: the visitor submits the form directly to Salesforce, Salesforce creates the Lead, and the visitor is redirected back to the confirmation page on your site.

With some extra technical input, as the HTML for the form is hosted on your site rather than in an iFrame, it is also possible to customise it with your own CSS and validation to fit in with the rest of your site.

Advantages

  • Very simple to set up
  • Very low in terms of complexity
  • Easy to maintain
  • No duplication of data in Sitecore (simpler for GDPR)
  • Potential for form to be CMS managed by pasting in HTML from Salesforce

Disadvantages

  • Vulnerable to spam
  • No data is fed back into Sitecore analytics
  • Limited to form submissions
  • Limited functionality

For more information on web to lead see: https://www.salesforce.com/products/guide/lead-gen/web-to-lead/

Salesforce Web to Lead plus some custom code

What’s good about web to lead is it provides a simple platform that can be built upon to fix some of its disadvantages.

With some custom code on the Sitecore side, we can change the process from the visitor submitting a form directly to Salesforce to a form submitting to Sitecore, which in turn does an HTTP POST to Salesforce. This gives us the ability to add some protection against spam using something like re-captcha, and to record some analytics information in Sitecore analytics.

The process for the form submission is as follows: the visitor submits the form to Sitecore, Sitecore validates the submission (e.g. checking the re-captcha) and records analytics, and then posts the data on to Salesforce to create the Lead.
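
A very rough sketch of that server-side post is below. The endpoint, the oid (organisation id) and the field names come from the form Salesforce generates for you, so treat the values here as placeholders:

using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public class WebToLeadClient
{
    private static readonly HttpClient http = new HttpClient();

    // Post the already-validated form values on to Salesforce Web to Lead.
    public async Task SubmitLeadAsync(string firstName, string lastName, string email)
    {
        var fields = new Dictionary<string, string>
        {
            { "oid", "YOUR-SALESFORCE-ORG-ID" }, // placeholder organisation id
            { "first_name", firstName },
            { "last_name", lastName },
            { "email", email }
        };

        await http.PostAsync(
            "https://webto.salesforce.com/servlet/servlet.WebToLead?encoding=UTF-8",
            new FormUrlEncodedContent(fields));
    }
}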

Advantages

  • Low complexity
  • Easy to maintain
  • No duplication of data in Sitecore (simpler for GDPR)
  • Protected against spam
  • Built on top of Salesforces web to lead functionality

Disadvantages

  • To make it fully CMS-manageable requires a larger bespoke implementation
  • Limited to form submissions

Sitecore Connect for Salesforce CRM

If you’re after something greater than simple lead creation from a form on the site, then another option is the Sitecore Connect for Salesforce CRM module from Sitecore. This is built on top of Sitecore’s data exchange framework and works by syncing data between the two platforms.

Out of the box it is set up with some default mappings for contact information, but this can be expanded to include additional facets of information against a contact.

Using this type of integration, as new contacts are created on the site they will be created in Salesforce too. Or a contact being in a Salesforce campaign could be synced to the Sitecore instance to customise the user's experience there.

For more information on Sitecore Connect for Salesforce CRM see: https://doc.sitecore.com/developers/salesforce-connect/20/sitecore-connect-for-salesforce-crm/en/sitecore-connect-for-salesforce-crm-configuration-guide.html

Advantages

  • Built and supported by Sitecore
  • Ability to build a more complex integration on a framework

Disadvantages

  • Much higher complexity
  • No quick wins for common scenarios such as creating a lead

Fuse IT S4S Connector

Since 2009, Fuse IT have been building a Sitecore to Salesforce connector targeting the biggest use cases for connecting the two platforms.

The connector is installed as packages into both Sitecore and Salesforce and provides out-of-the-box functionality such as:

  • A Sitecore Forms and Web Forms for Marketers field mapping wizard to create leads in Salesforce, which gives marketers complete freedom to add a form to the site that creates a lead in Salesforce
  • Sitecore analytics integration to sync behavioural data to Salesforce leads and contacts
  • Sitecore personalisation for a contact from Salesforce

The package also includes sample code to enable expanding the solution with bespoke development.

For more information on S4S see: https://www.fuseit.com/products/sitecore-salesforce-connector/

Advantages

  • Out-of-the-box functionality that takes advantage of Sitecore's features
  • Ability to build a more complex integration on a framework

Disadvantages

  • Additional license fee
  • Creates a dependency that may affect upgrades in the future

Fully Bespoke

Often when you suggest a fully bespoke solution, people's instant reaction is to fear it will be costly, hard to maintain or liable to break. Writing traditional code can get viewed as bad or hard, but the reality is that most alternative implementations still involve writing code, just with a nice interface at the start. Depending on the kind of integration you're trying to achieve, going fully bespoke might be the easiest solution. After all, code is essentially an easy tool for achieving a complex scenario, and it is generally the scenario which makes the problem hard, rather than a developer's ability to write code in any given language.

A bespoke solution could be something as simple as a web service provided by a Salesforce developer that Sitecore will post to, much in the same way as Web to Lead, but capable of some additional functionality on top.

Or it could be a middleware API layer that is used to connect multiple systems together, e.g. if your site needed to talk to Salesforce and a stock level system at the same time. Or it could be used to completely abstract the CRM system in order to allow Salesforce to be swapped out for something else in the future.

Advantages

  • Anything’s possible

Disadvantages

  • Could be too complex for simple scenarios
  • Everything must be built

Two ways to import an XML file with .Net Core or .Net Framework

It's always the simple stuff you forget how to do. For years I've mainly been working with JSON files, so when faced with the task of reading an XML file my brain went "I can do that", followed by "actually, how did I used to do that?".

So here are two different methods. They work on .Net Core and, theoretically, .Net Framework (my project is .Net Core and I haven't checked that they do actually work on Framework).

My examples are using an XML in the following format:

<?xml version="1.0" encoding="utf-8"?>
<jobs>
    <job>
        <company>Construction Co</company>
        <sector>Construction</sector>
        <salary>£50,000 - £60,000</salary>
        <active>true</active>
        <title>Recruitment Consultant - Construction Management</title>
    </job>
    <job>
        <company>Medical Co</company>
        <sector>Healthcare</sector>
        <salary>£60,000 - £70,000</salary>
        <active>false</active>
        <title>Junior Doctor</title>
    </job>
</jobs>

Method 1: Reading an XML file as a dynamic object

The first method is to load the XML file into a dynamic object. This is cheating slightly by first using Json Convert to convert the XML document into a JSON string and then deserializing that into a dynamic object.

using Newtonsoft.Json;
using System;
using System.Collections.Generic;
using System.Dynamic;
using System.IO;
using System.Text;
using System.Xml;
using System.Xml.Linq;

namespace XMLExportExample
{
    class Program
    {
        static void Main(string[] args)
        {
            string jobsxml = "<?xml version=\"1.0\" encoding=\"utf-8\"?><jobs>    <job><company>Construction Co</company><sector>Construction</sector><salary>£50,000 - £60,000</salary><active>true</active><title>Recruitment Consultant - Construction Management</title></job><job><company>Medical Co</company><sector>Healthcare</sector><salary>£60,000 - £70,000</salary><active>false</active><title>Junior Doctor</title></job></jobs>";

            byte[] byteArray = Encoding.UTF8.GetBytes(jobsxml);
            MemoryStream stream = new MemoryStream(byteArray);
            XDocument xdoc = XDocument.Load(stream);

            string jsonText = JsonConvert.SerializeXNode(xdoc);
            dynamic dyn = JsonConvert.DeserializeObject<ExpandoObject>(jsonText);

            foreach (dynamic job in dyn.jobs.job)
            {
                string company;
                if (IsPropertyExist(job, "company"))
                    company = job.company;

                string sector;
                if (IsPropertyExist(job, "sector"))
                    sector = job.sector;

                string salary;
                if (IsPropertyExist(job, "salary"))
                    salary = job.salary;

                string active;
                if (IsPropertyExist(job, "active"))
                    active = job.active;

                string title;
                if (IsPropertyExist(job, "title"))
                    title = job.title;

                // A property that doesn't exist
                string foop;
                if (IsPropertyExist(job, "foop"))
                    foop = job.foop;
            }

            Console.ReadLine();
        }

        public static bool IsPropertyExist(dynamic settings, string name)
        {
            if (settings is ExpandoObject)
                return ((IDictionary<string, object>)settings).ContainsKey(name);

            return settings.GetType().GetProperty(name) != null;
        }
    }
}

A foreach loop then goes through each of the jobs, and a helper function IsPropertyExist checks for the existence of a value before trying to read it.

Method 2: Deserializing with XmlSerializer

My second approach is to turn the XML structure into classes and then deserialize the XML file into them.

This approach requires more code, but most of it can be auto-generated by Visual Studio for us, and we end up with strongly typed objects.

Creating the XML classes from XML

To create the classes for the XML structure:

1. Create a new class file and remove the class that gets created, i.e. you're just left with this:

using System;
using System.Collections.Generic;
using System.Text;

namespace XMLExportExample
{

}

2. Copy the content of the XML file to your clipboard
3. Select the position in the file you want the classes to go, and then go to Edit > Paste Special > Paste XML as Classes

If you're using my XML, you will now have a class file that looks like this:

using System;
using System.Collections.Generic;
using System.Text;

namespace XMLExportExample
{

    // NOTE: Generated code may require at least .NET Framework 4.5 or .NET Core/Standard 2.0.
    /// <remarks/>
    [System.SerializableAttribute()]
    [System.ComponentModel.DesignerCategoryAttribute("code")]
    [System.Xml.Serialization.XmlTypeAttribute(AnonymousType = true)]
    [System.Xml.Serialization.XmlRootAttribute(Namespace = "", IsNullable = false)]
    public partial class jobs
    {

        private jobsJob[] jobField;

        /// <remarks/>
        [System.Xml.Serialization.XmlElementAttribute("job")]
        public jobsJob[] job
        {
            get
            {
                return this.jobField;
            }
            set
            {
                this.jobField = value;
            }
        }
    }

    /// <remarks/>
    [System.SerializableAttribute()]
    [System.ComponentModel.DesignerCategoryAttribute("code")]
    [System.Xml.Serialization.XmlTypeAttribute(AnonymousType = true)]
    public partial class jobsJob
    {

        private string companyField;

        private string sectorField;

        private string salaryField;

        private bool activeField;

        private string titleField;

        /// <remarks/>
        public string company
        {
            get
            {
                return this.companyField;
            }
            set
            {
                this.companyField = value;
            }
        }

        /// <remarks/>
        public string sector
        {
            get
            {
                return this.sectorField;
            }
            set
            {
                this.sectorField = value;
            }
        }

        /// <remarks/>
        public string salary
        {
            get
            {
                return this.salaryField;
            }
            set
            {
                this.salaryField = value;
            }
        }

        /// <remarks/>
        public bool active
        {
            get
            {
                return this.activeField;
            }
            set
            {
                this.activeField = value;
            }
        }

        /// <remarks/>
        public string title
        {
            get
            {
                return this.titleField;
            }
            set
            {
                this.titleField = value;
            }
        }
    }

}

Notice that the active field was even picked up as being a bool.

Doing the Deserialization

To do the deserialization, first create an instance of XmlSerializer for the type of object we want to deserialize to. In my case this is jobs.

            var s = new System.Xml.Serialization.XmlSerializer(typeof(jobs));

Then call Deserialize, passing in an XmlReader. I'm creating an XmlReader on the stream I used in the dynamic example.

            jobs o = (jobs)s.Deserialize(XmlReader.Create(stream));

The complete file now looks like this:

using System;
using System.IO;
using System.Text;
using System.Xml;

namespace XMLExportExample
{
    class Program
    {
        static void Main(string[] args)
        {
            string jobsxml = "<?xml version=\"1.0\" encoding=\"utf-8\"?><jobs>    <job><company>Construction Co</company><sector>Construction</sector><salary>£50,000 - £60,000</salary><active>true</active><title>Recruitment Consultant - Construction Management</title></job><job><company>Medical Co</company><sector>Healthcare</sector><salary>£60,000 - £70,000</salary><active>false</active><title>Junior Doctor</title></job></jobs>";

            byte[] byteArray = Encoding.UTF8.GetBytes(jobsxml);
            MemoryStream stream = new MemoryStream(byteArray);

            var s = new System.Xml.Serialization.XmlSerializer(typeof(jobs));
            jobs o = (jobs)s.Deserialize(XmlReader.Create(stream));

            Console.ReadLine();
        }
    }
}

And that's it. Any missing nodes in your XML will just be blank rather than causing an error.

How to create a shared access signature for a specific folder in Azure Storage

As we move into a serverless world, where development work is done less on generic multipurpose servers that require patching both to the OS and to the applications installed on them, things become easier and in some cases cheaper.

For instance, I am currently working on a project that needs to send and receive XML files from different sources, and with no other requirements for a server, Azure Storage seemed the perfect fit. As it's just a storage service you pay for what you need and nothing more. All the patching and maintenance of APIs is provided by Microsoft, and we have an instant place to store our files. Ideal!

However, new technology always comes with new challenges. My first issue came with the plan for how we would be getting files in and out with partners. This isn't the first time I've worked with Azure File Storage, but it is the first time I've done it without an ops person setting things up. Previously we used FTP, which while dated works with a lot of applications. Like a headphone jack it's not the most glamorous of connections, but it works, everyone knows how it works and everyone has things that work with it. However, it transpires that despite Azure Storage being 10 years old and receiving requests for FTP support from the start, Microsoft have decided to go the same route as Apple with the headphone jack and not include it. Instead, the only option for an integration is REST. As it transpires, the ops people I had worked with in the past, when faced with this issue, had just put a VM in front of it, which kind of defeats the point of using Azure Storage in the first place!

So we're going with REST, and Microsoft provide quite a straightforward REST API. All good so far, but how do we limit access? Well, there's a guide to using the Azure Storage REST API which contains a section on creating an authorisation header. It's long and overly complex, but it does point you in the direction that to do this you need a Shared Access Signature. The other option is an access key, but this is something you should never give away to a third party.

Shared Access Signature

After a bit more digging through the documentation (and just clicking the thing that sounded right in the portal) I found the documentation on creating an Account SAS, which sounded like what I wanted (it wasn't, but it's close).

With a shared access signature you can say what kind of service should be allowed, what permissions they should have, IP addresses, and start and end dates. All awesome things.

Once I had this I could use the REST API, but there was a problem: I could access every folder in the storage account and there was no way to stop this! For an integration with two third parties, they would both be able to access each other's stuff, as well as our own private stuff.

There is also no way to revoke a SAS once it's been generated, other than regenerating the access keys, which would affect everyone.

Folder Level Shared Access Signature

After a bit more research I found what I was looking for: how to create a shared access signature at a folder or item level, and how to link it to a policy.

The first thing you need is Azure Storage Explorer. Once you're set up with this you will be able to view all your storage accounts.

From here you are able to browse to the folder you want to share, right click it and choose Manage Access Policies.

This will open a dialog to manage the policies for this specific object rather than the account.

Here you can set all the same permissions as you could for a signature at an account level, but now for a specific object and against a policy rather than an actual signature, meaning the policy can be updated in the future with no change to the signature.

Better still, you can remove the policy, which will then invalidate any signature using it.

For the actual signature key, right click the same folder and click Get Shared Access Signature.

Then in the dialog, select the policy from the dropdown rather than specifying the individual permissions.

Click create and you can copy the keys.

You now have a shared access signature that is limited to a specific folder rather than the entire account.

This is only possible through one of the code/scripting interfaces, e.g. PowerShell or Storage Explorer; the Azure portal will only let you get signatures at an account level.
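
For completeness, here's a sketch of generating a signature in C# with the Azure.Storage.Blobs SDK rather than Storage Explorer. It assumes each partner has their own container and that a stored access policy named "partner-a-read" has already been created against it, as described above:

using System;
using Azure.Storage.Blobs;
using Azure.Storage.Sas;

class SasExample
{
    static void Main()
    {
        // The client needs the account key (e.g. via the connection string)
        // to be able to sign the SAS.
        var container = new BlobContainerClient(
            "<storage-account-connection-string>", "partner-a");

        var sasBuilder = new BlobSasBuilder
        {
            BlobContainerName = container.Name,
            Resource = "c", // "c" = the signature applies to the whole container
            // Reference the stored access policy instead of baking permissions
            // and an expiry date into the signature itself.
            Identifier = "partner-a-read"
        };

        Uri sasUri = container.GenerateSasUri(sasBuilder);
        Console.WriteLine(sasUri);
    }
}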