Azure DevOps and custom NuGet feeds

If you're setting up a CI pipeline on Azure DevOps for a site which uses a NuGet feed from a source that isn't on nuget.org, you may see the error:

“The nuget command failed with exit code(1) and error(Errors in packages.config projects Unable to find version…”

On your local dev machine you will have added an extra NuGet feed source through Visual Studio, which updates a global config file on your machine. However, as Azure Pipelines builds run on hosted agents, you don't have the same global file to update to include the extra sources.

Instead of this you need to add a NuGet.config file to the root of your repository.

Here is an example of one set up to include Sitecore's NuGet package feed.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="nuget.org" value="https://www.nuget.org/api/v2/" />
    <add key="Sitecore NuGet v2 Feed" value="https://sitecore.myget.org/F/sc-packages/" />    
  </packageSources>
</configuration>

Next you will need to update your pipeline to tell the NuGet step to use this config file:

- task: NuGetCommand@2
  inputs:
    restoreSolution: '$(solution)'
    feedsToUse: 'config'
    nugetConfigPath: 'NuGet.config'

And that’s it. As long as all the sources are correct the NuGet command should now find your packages.

Bulk Inserting data using Entity Framework

Using tools like Entity Framework makes life far easier for a developer. Recently I blogged about how using them is what makes .Net Core one of the best platforms for prototype development, but the benefits don't end there. They are also great from a security perspective, cutting out a lot of the SQL injection risk that comes from easy mistakes with regular ADO.NET.

However, they do have some downsides, a main one being that they are particularly slow when it comes to doing bulk inserts to a database.

For example, assume you have an application which regularly receives an XML import file consisting of 200,000 records, and each one needs to be either an insert or an update into the db. You'll quickly learn that looping through the whole lot and then calling save changes results in a process taking an extremely long time to run; it may even just time out. You then decide to get rid of that one long save changes call by breaking the work up into blocks of 500 and calling save changes for each of those. That may solve the timeout issue, but it still results in a process potentially lasting around an hour.
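As a rough sketch (using the same Jobs entity and dbContext as the examples further down), that batched version looks something like this:

public async Task DoBatchedImport(List<foo> collection)
{
    int count = 0;

    foreach (var item in collection)
    {
        Jobs job = new Jobs();

        job.DateAdded = DateTime.UtcNow;
        job.Name = item.Name;
        job.Location = item.Location;

        await dbContext.Jobs.AddAsync(job);

        // Save in blocks of 500 rather than in one enormous SaveChanges at the end
        if (++count % 500 == 0)
        {
            await dbContext.SaveChangesAsync();
        }
    }

    // Save anything left over from the final partial block
    await dbContext.SaveChangesAsync();
}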

The problem is that this is a scenario Entity Framework and EF Core just weren't designed to handle. As a solution you could opt to drop Entity Framework altogether and revert to something like a native SQL bulk insert command, but what if you need to do some processing in code on each record before the import happens? What if you have one of those classic not-quite-valid XML files which would cause SQL's BULK INSERT to fail?

The solution is to use an open source extension called EFCore.BulkExtensions.

EFCore.BulkExtensions

EFCore.BulkExtensions is a set of extension methods for Entity Framework Core that provide the functionality to do bulk inserts. You can add it to your project using NuGet, and you'll find the project on GitHub here: https://github.com/borisdj/EFCore.BulkExtensions

Usage is also very simple. Let's assume you have some existing traditional EF code that loops through a collection and, for each item, creates a new db object and adds it to the context:

public async Task DoImport(List<foo> collection)
{
    foreach (var item in collection)
    {
        Jobs job = new Jobs();

        job.DateAdded = DateTime.UtcNow;
        job.Name = item.Name;
        job.Location = item.Location;

        await dbContext.Jobs.AddAsync(job);
    }

    await dbContext.SaveChangesAsync();
}

Rather than adding each item to the Entity Framework db context, you instead create a list of those objects and then call a BulkInsert function with them on your db context.

public async Task DoImport(List<foo> collection)
{
    List<Jobs> importJobs = new List<Jobs>();

    foreach (var item in collection)
    {
        Jobs job = new Jobs();

        job.DateAdded = DateTime.UtcNow;
        job.Name = item.Name;
        job.Location = item.Location;

        importJobs.Add(job);
    }

    await dbContext.BulkInsertAsync(importJobs);
}

It also works for updates: rather than creating a new item, first retrieve it from the db, and then at the end call BulkInsertOrUpdateAsync with the list.

await dbContext.BulkInsertOrUpdateAsync(importJobs);
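If you need a bit more control, the extension methods also accept a BulkConfig object; as a sketch (check the library's documentation for the options available in your version):

var bulkConfig = new BulkConfig
{
    BatchSize = 2000,         // rows sent to SQL Server per batch
    SetOutputIdentity = true  // write generated identity values back onto the entities
};

await dbContext.BulkInsertOrUpdateAsync(importJobs, bulkConfig);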

From my experience, doing this took an import process that would run for over an hour down to something which completes in a few minutes.

Top reasons .Net is amazing for prototyping

The other day someone told me .net was slow to get something built in, and to be fair to the person I can see why he would have thought that. Most of his interaction with .net projects had been with large, complex enterprise applications that often integrate multiple other applications.

However, I would maintain that .net is a framework that is actually really fast to develop on, and that what he had perceived as being slow was the complexity of those projects rather than the actual coding time.

In fact, I would say it’s so fast to get something built in it, it actually becomes the fastest thing to develop a prototype in. Here are my top reasons why.

ASP.NET Core

ASP.NET Core inherits all the best bits from the ASP.NET Framework that came before it, giving the framework almost 20 years of refinement since its original release in 2002. The days of WebForms in the original ASP.NET are now long behind us and we now have the choice of building web applications with either MVC or Razor Pages.

Razor provides the perfect combination of a view language and helpers to render your HTML without limiting what can be done on the front end. How you code your HTML is still completely up to you; the helpers just provide features like binding that make it even faster to do.

Another great thing about .Net Core over what came before it is that it's platform independent. Rather than being confined to just Windows, you can run it on Mac or Linux too.

Great starter templates

What kicks off a great prototype project is starting with a great template, and ASP.NET Core has a bunch.

As already mentioned, you can build a Web Application with either Razor Pages or MVC, but the templates also provide you with the base for building an API, an Angular app, React.js, or React.js and Redux, or you can simply create an Empty application.

My preference is to go for MVC as it's what I'm most familiar with, and a key part of rapidly building a prototype is that you can actually develop rapidly. The idea is to focus on creating something new and unique, not learning how to develop in a new framework.

The MVC Web Application template gives you a base site to work with, a few pages already set up, and bootstrap and jQuery already included, so you start right at the point of working on your logic rather than spending time on setup.

SQL Server and EF Core

I've always been a bit of a database guy. I'm not sure why, but it's a topic that has always just made sense to me, and despite being one that can get quite complex, the reasons behind the complexity always feel logical.

When it comes to building a prototype though there are two aspects which make storage with .net core super simple.

Firstly, Entity Framework Core (EF Core) means you don't really need to know any SQL or spend any time writing it. It helps if you do, but at a minimum all you need to do is create a model in your code and add a few lines for a DB context that tell EF Core each model is a table and how they relate. Then turn on migrations and you're done. When you run the application, the DB gets created for you, and each time you change your model you just add another migration; the next time the application runs, the DB schema gets updated.
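As a rough sketch (the Job model and context name here are made up for illustration), that is about this much code:

using Microsoft.EntityFrameworkCore;

public class Job
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Location { get; set; }
}

public class PrototypeContext : DbContext
{
    public PrototypeContext(DbContextOptions<PrototypeContext> options)
        : base(options)
    {
    }

    // Each DbSet becomes a table when the migrations run
    public DbSet<Job> Jobs { get; set; }
}

From there, Add-Migration and Update-Database (or their dotnet ef command line equivalents) create and update the schema for you.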

Querying your DB is done by writing LINQ queries against your entity framework model, allowing you to have next to no understanding of SQL and how the DB works. Everything is just done by magic for you.
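For example, a query against the sketch context above might look like this:

// All jobs in London, newest first; EF Core translates the LINQ into SQL for you
var londonJobs = await dbContext.Jobs
    .Where(j => j.Location == "London")
    .OrderByDescending(j => j.Id)
    .ToListAsync();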

The second part is SQL Server and its different versions. Often when you think of SQL Server you think of the big DB engine, with its many components, that you install on a server or your local machine, but there are two others which are even more important: LocalDB and Azure SQL.

LocalDB is an option that can be installed as part of Visual Studio. No separate application is needed and no services run in the background. It is essentially the minimum required to start the DB engine for development purposes. In practical terms this means when you start your application, EF Core can run off a LocalDB which didn't require any setup, but as far as your application is concerned it is no different from working with any other version of SQL Server.
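Wiring EF Core up to LocalDB is just a connection string. As a sketch (the database name is made up, and this assumes the Microsoft.EntityFrameworkCore.SqlServer package), in ConfigureServices it looks something like this:

// LocalDB connection; no separate server install or setup required
services.AddDbContext<PrototypeContext>(options =>
    options.UseSqlServer(@"Server=(localdb)\mssqllocaldb;Database=PrototypeDb;Trusted_Connection=True;"));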

Azure SQL as the name implies is SQL Server on Azure. The only thing I really need to say about this is that you can swap LocalDB and Azure SQL with ease. They may be different but as far as your prototype is concerned, they are the same.

Scaffolding

The only thing quicker than writing code is having someone else write it for you. So we've created our application from a template, added a model which generated our database, and now it's time to create some pages. Well, the good news is it's still not time to write much code, because Visual Studio can scaffold out pages based on our model for us!

Adding a controller to our project in Visual Studio gives us some options on what should be generated for us, one of which is "MVC Controller with views, using Entity Framework". What that means is, given a model, it will create controllers and views for listing items, creating them, editing them and deleting them. No coding by us required!

Now it's unlikely that this is exactly what you're after, but it's generally a good starting place, and deleting code you don't need is far quicker than writing it.

Azure

Lastly there is Azure. You may have spotted a theme to all these points: they all remove the effort required for setup so you can focus on building your own logic, and this point is no different.

I remember a time when, if I wanted a server to put an application on, I had to request it and then wait a while. What I got back would either be a server that already had other things running on it, or a blank server that needed applications installed on it, e.g. SQL Server or the .Net Framework. IIS wouldn't have been configured, and it would be a number of hours before my application would be running.

With Azure you don't even really need to leave Visual Studio. From the publish dialog you can create a new App Service and DB, and then publish. All the connection strings are sorted out for you, there are service plans which cost next to nothing, a domain is configured for you, and at the end of the publish the website opens and is working. The whole process takes less than 10 minutes.

Logging with .net core and Application Insights

When you start building serverless applications like Azure Functions or Azure web jobs, one of the first things you will need to contend with is logging.

Traditionally, logging was achieved by appending rows to a text file stored on the same server your application was running on. Tools like log4net made this simpler by bringing some structure to the process and providing functionality like automatic time stamps, log levels and the ability to configure which logs should actually get written out.

With a serverless application though, writing to the local disk is a big no-no. You have no guarantee how long that server will exist for, and when your application moves, that data will be lost. In a world where you might want to scale up and down, logs split between servers are also hard to piece together when an error does happen.

.net core

The first bit of good news is that .net core comes with a logging API. Here I am configuring it in a web job to output logs to the console and to Application Insights. This is part of the host builder config in the program.cs file.

// Local: write logs to the console.
hostBuilder.ConfigureLogging((context, b) =>
{
    b.AddConsole();

    // When running in Azure you can't monitor execution by viewing console output,
    // and the recommended monitoring solution is Application Insights. If an
    // instrumentation key exists in config, log there as well.
    string instrumentationKey = context.Configuration["APPINSIGHTS_INSTRUMENTATIONKEY"];
    if (!string.IsNullOrEmpty(instrumentationKey))
    {
        b.AddApplicationInsights(o => o.InstrumentationKey = instrumentationKey);
    }
});

Microsoft's documentation on logging in .NET Core and ASP.NET can be found here.

Creating a log in your code is then as simple as using dependency injection to inject an instance of ILogger into your class, and then using its methods to create a log.

using System;
using Microsoft.Extensions.Logging;

public class MyClass
{
    private readonly ILogger<MyClass> logger;

    public MyClass(ILogger<MyClass> logger)
    {
        this.logger = logger;
    }

    public void Foo()
    {
        try
        {
            // Logging information
            logger.LogInformation("Foo called");
        }
        catch (Exception ex)
        {
            // Logging an error
            logger.LogError(ex, "Something went wrong");
        }
    }
}

Application Insights

When your application is running in Azure, Application Insights is where all your logs will live.

What’s great about App Insights is it will give you the ability to write queries against all your logs.

So for instance if I wanted to find all the logs for an import function starting, I can write a filter for messages containing “Import function started”.

One of my favourite functions is ago(30m). It will output the time for a given timespan in the past. This is great when you're running the same query frequently and are only interested in the last x amount of time, as you can simply write where timestamp > ago(30m) for the last 30 minutes of logs rather than trying to remember what date time format your string should be in.
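Putting those together, a query along these lines (written in the Kusto query language that Application Insights uses) pulls back the import traces from the last 30 minutes:

traces
| where message contains "Import function started"
| where timestamp > ago(30m)
| order by timestamp desc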

Queries can also be saved or pinned to a dashboard if they are a query you need to run frequently.

For all regular logs your application makes you need to query the traces. What can be confusing with this though is the errors.

With the code above I had a try catch block, and in the catch block I called logger.LogError(ex, "Something went wrong"); so in my logs I expect to see the message, and as I passed an exception I also expect to see the exception. But if we look at this example from Application Insights, you will see an error in the traces log but no stack trace or anything else from the exception.

LogError will write to both the traces and exceptions log. If you want to view the exceptions you have to look at the exceptions log, not the traces.

This is just the start of the functionality that Application Insights provides, but if you're just starting out, hopefully this is a good indication not only of how easy it is to add logging to your application, but also how much added value App Insights offers over plain text files.

Two ways to import an XML file with .Net Core or .Net Framework

It’s always the simple stuff you forget how you do. For years I’ve mainly been working with JSON files, so when faced with that task of reading an XML file my brain went “I can do that” followed by “actually how did I used to do that?”.

So here are two different methods. They work on .Net Core and, in theory, .Net Framework (my project is .Net Core and I haven't checked that they actually work on Framework).

My examples are using an XML in the following format:

<?xml version="1.0" encoding="utf-8"?>
<jobs>
    <job>
        <company>Construction Co</company>
        <sector>Construction</sector>
        <salary>£50,000 - £60,000</salary>
        <active>true</active>
        <title>Recruitment Consultant - Construction Management</title>
    </job>
    <job>
        <company>Medical Co</company>
        <sector>Healthcare</sector>
        <salary>£60,000 - £70,000</salary>
        <active>false</active>
        <title>Junior Doctor</title>
    </job>
</jobs>

Method 1: Reading an XML file as a dynamic object

The first method is to load the XML file into a dynamic object. This is cheating slightly by first using JsonConvert to convert the XML document into a JSON string and then deserializing that into a dynamic object.

using Newtonsoft.Json;
using System;
using System.Collections.Generic;
using System.Dynamic;
using System.IO;
using System.Text;
using System.Xml;
using System.Xml.Linq;

namespace XMLExportExample
{
    class Program
    {
        static void Main(string[] args)
        {
            string jobsxml = "<?xml version=\"1.0\" encoding=\"utf-8\"?><jobs>    <job><company>Construction Co</company><sector>Construction</sector><salary>£50,000 - £60,000</salary><active>true</active><title>Recruitment Consultant - Construction Management</title></job><job><company>Medical Co</company><sector>Healthcare</sector><salary>£60,000 - £70,000</salary><active>false</active><title>Junior Doctor</title></job></jobs>";

            byte[] byteArray = Encoding.UTF8.GetBytes(jobsxml);
            MemoryStream stream = new MemoryStream(byteArray);
            XDocument xdoc = XDocument.Load(stream);

            string jsonText = JsonConvert.SerializeXNode(xdoc);
            dynamic dyn = JsonConvert.DeserializeObject<ExpandoObject>(jsonText);

            foreach (dynamic job in dyn.jobs.job)
            {
                string company;
                if (IsPropertyExist(job, "company"))
                    company = job.company;

                string sector;
                if (IsPropertyExist(job, "sector"))
                    sector = job.sector;

                string salary;
                if (IsPropertyExist(job, "salary"))
                    salary = job.salary;

                string active;
                if (IsPropertyExist(job, "active"))
                    active = job.active;

                string title;
                if (IsPropertyExist(job, "title"))
                    title = job.title;

                // A property that doesn't exist
                string foop;
                if (IsPropertyExist(job, "foop"))
                    foop = job.foop;
            }

            Console.ReadLine();
        }

        public static bool IsPropertyExist(dynamic settings, string name)
        {
            if (settings is ExpandoObject)
                return ((IDictionary<string, object>)settings).ContainsKey(name);

            return settings.GetType().GetProperty(name) != null;
        }
    }
}

A foreach loop then goes through each of the jobs, and a helper function IsPropertyExist checks for the existence of a value before trying to read it.

Method 2: Deserializing with XmlSerializer

My second approach is to create classes matching the XML structure and then deserialize the XML file into them.

This approach requires more code, but most of it can be auto-generated by Visual Studio for us, and we end up with strongly typed objects.

Creating the XML classes from XML

To create the classes for the XML structure:

1. Create a new class file and remove the class that gets created, i.e. you're just left with this:

using System;
using System.Collections.Generic;
using System.Text;

namespace XMLExportExample
{

}

2. Copy the content of the XML file to your clipboard
3. Select the position in the file you want the classes to go, then go to Edit > Paste Special > Paste XML As Classes

If you're using my XML, you will now have a class file that looks like this:

using System;
using System.Collections.Generic;
using System.Text;

namespace XMLExportExample
{

    // NOTE: Generated code may require at least .NET Framework 4.5 or .NET Core/Standard 2.0.
    /// <remarks/>
    [System.SerializableAttribute()]
    [System.ComponentModel.DesignerCategoryAttribute("code")]
    [System.Xml.Serialization.XmlTypeAttribute(AnonymousType = true)]
    [System.Xml.Serialization.XmlRootAttribute(Namespace = "", IsNullable = false)]
    public partial class jobs
    {

        private jobsJob[] jobField;

        /// <remarks/>
        [System.Xml.Serialization.XmlElementAttribute("job")]
        public jobsJob[] job
        {
            get
            {
                return this.jobField;
            }
            set
            {
                this.jobField = value;
            }
        }
    }

    /// <remarks/>
    [System.SerializableAttribute()]
    [System.ComponentModel.DesignerCategoryAttribute("code")]
    [System.Xml.Serialization.XmlTypeAttribute(AnonymousType = true)]
    public partial class jobsJob
    {

        private string companyField;

        private string sectorField;

        private string salaryField;

        private bool activeField;

        private string titleField;

        /// <remarks/>
        public string company
        {
            get
            {
                return this.companyField;
            }
            set
            {
                this.companyField = value;
            }
        }

        /// <remarks/>
        public string sector
        {
            get
            {
                return this.sectorField;
            }
            set
            {
                this.sectorField = value;
            }
        }

        /// <remarks/>
        public string salary
        {
            get
            {
                return this.salaryField;
            }
            set
            {
                this.salaryField = value;
            }
        }

        /// <remarks/>
        public bool active
        {
            get
            {
                return this.activeField;
            }
            set
            {
                this.activeField = value;
            }
        }

        /// <remarks/>
        public string title
        {
            get
            {
                return this.titleField;
            }
            set
            {
                this.titleField = value;
            }
        }
    }

}

Notice that the active field was even picked up as being a bool.

Doing the Deserialization

To do the deserialization, first create an instance of XmlSerializer for the type of the object we want to deserialize to. In my case this is jobs.

            var s = new System.Xml.Serialization.XmlSerializer(typeof(jobs));

Then call Deserialize, passing in an XmlReader. I'm creating the XmlReader on the same stream I used in the dynamic example.

            jobs o = (jobs)s.Deserialize(XmlReader.Create(stream));

The complete file now looks like this:

using System;
using System.IO;
using System.Text;
using System.Xml;

namespace XMLExportExample
{
    class Program
    {
        static void Main(string[] args)
        {
            string jobsxml = "<?xml version=\"1.0\" encoding=\"utf-8\"?><jobs>    <job><company>Construction Co</company><sector>Construction</sector><salary>£50,000 - £60,000</salary><active>true</active><title>Recruitment Consultant - Construction Management</title></job><job><company>Medical Co</company><sector>Healthcare</sector><salary>£60,000 - £70,000</salary><active>false</active><title>Junior Doctor</title></job></jobs>";

            byte[] byteArray = Encoding.UTF8.GetBytes(jobsxml);
            MemoryStream stream = new MemoryStream(byteArray);

            var s = new System.Xml.Serialization.XmlSerializer(typeof(jobs));
            jobs o = (jobs)s.Deserialize(XmlReader.Create(stream));

            Console.ReadLine();
        }
    }
}

And that's it. Any missing nodes in your XML will just be left blank rather than causing an error.

ASP.NET Core Platforms for a Blog

Like a lot of Sitecore developers my blog (at time of writing) is hosted on WordPress. The reason for it not being in Sitecore is simple. Sitecore is an enterprise level platform, which isn’t really needed for a personal blog.

For a .net dev to have their blog on a PHP platform however just seems plain wrong, but again there's a logical reason. WordPress is actually really good as a blogging platform, and it doesn't cost me anything.

Despite this I would much rather take control of my site and use it to play with all the cool features in Azure. It would also be nice to have the ability to do something about the Google PageSpeed result which is currently sitting at 24%. So in aid of this I’ve started looking into .net core based platforms and thought I’d share what I’ve found.

Miniblog.core

https://github.com/madskristensen/Miniblog.Core

As the name suggests, Miniblog.core is both very small and based on .net core. Developed by Mads Kristensen, it's an extremely lightweight, bare-bones implementation, which is ideal if you're after something you can build upon. The code is straightforward to understand and very simple to adapt. Additionally, if you're after a 100% page speed score, then this achieves just that.

If, on the other hand, you're after a deluxe admin experience full of functionality, then this probably isn't for you.

Piranha CMS

http://piranhacms.org/

Piranha CMS is built as a lightweight CMS platform rather than specifically as a blog, however it also contains a blog module, which for me puts it at a big advantage over the other CMS platforms I've listed below.

On the back end you get a choice of SQL Server, SQLite or MySQL. The documentation isn’t exactly complete, but on the day I tried it out, I found the team building it very responsive on GitHub. They even updated the documentation with one of my suggestions the very next day.

Another aspect I particularly liked about Piranha CMS was its block editor, which from the brief look I've had so far reminds me of the block editor Umbraco has, whereas the other platforms in this list are restricted to a large rich text field.

Orchard Core

https://github.com/OrchardCMS/OrchardCore

Orchard Core is the .net core version of the Orchard CMS. It's currently in beta, but I'm not sure that puts it at much of a disadvantage over the others on this list.

My initial impressions of Orchard Core however weren’t as high as Piranha CMS. The admin interface wasn’t quite as nice and as far as I could tell, it didn’t have anything like Piranha’s block editor. The solution itself also seemed far more complex and I wasn’t certain what I got for this. I expect Orchard Core is likely better in some ways that I have yet to discover, but for my needs as a blog this is probably not the case. It also didn’t have a blog module out of the box.

Squidex

https://squidex.io/

I haven't had much of a chance to play with Squidex yet, but it does offer an interesting difference to the others mentioned so far.

For a start, Squidex is an entirely headless CMS, and is built around the concepts of CQRS and Event Sourcing. Unlike the others, it also uses MongoDB rather than a SQL-based database.

Where MongoDB is concerned, I often get the impression people are using it because as developers we tend to have a preference to using something new rather than something adequate. However when it comes to Azure pricing, there is potentially a saving to be made by using Mongo rather than Azure SQL.

Redirecting to login page with AngularJs and .net WebAPI

So here's the scenario: you have a web application which people log into, and some of the pages (like a dashboard) contain AJAX functionality. Inevitably the user's session expires, they return to the window, change a filter and nothing happens. In the background, your JavaScript is making HTTP calls to the server which trigger an unauthorised response. The front end has no way to handle this and some errors appear in the JS console.

A few things are actually combining to make life hard for you here. Let's take a look at each in more detail.

WebAPI and the 302 Response

To protect your APIs from public access, a good solution is to use the Authorize attribute. i.e.

[Authorize]
public ActionResult GetDashboardData(int foo)
{
   // Your api logic here
            
}

However, chances are your solution also has a login page configured in your web.config, so that your regular page controllers automatically trigger a redirect to the login page.

    <authentication mode="Forms">
      <forms timeout="30" loginUrl="/account/sign-in/" />
    </authentication>

So now what happens is, instead of responding with a 401 Unauthorised, what's actually returned is a 302 redirect to the login page.

With an AJAX request from a browser you now hit a second issue. The browser is making an XMLHttpRequest, but if that request returns a 302, rather than handing it back to your JavaScript code to handle, the browser "helpfully" follows the redirect and returns the result of that instead. Which means rather than receiving a redirect status back, your code gets a 200 OK for the login page.

So to summarise: your API was set up to return a 401 Unauthorised, that got turned into a 302 redirect, which was then followed and turned into a 200 OK before it got back to where it was requested from.

The easiest way to fix this is to create our own version of the AuthorizeAttribute which returns a 403 Forbidden for AJAX requests and the regular logic for anything else.

using System;
using System.Web.Mvc;

namespace FooApp
{
    [AttributeUsage(AttributeTargets.Method)]
    public class CustomAuthorizeAttribute : AuthorizeAttribute
    {
        protected override void HandleUnauthorizedRequest(AuthorizationContext filterContext)
        {
            if (filterContext.HttpContext.Request.IsAjaxRequest())
            {
                filterContext.Result = new HttpStatusCodeResult(403, "Forbidden");
            }
            else
            {
                base.HandleUnauthorizedRequest(filterContext);
            }
        }
    }
}

Now for any AJAX requests a 403 is returned, while for everything else the redirect to the login page is returned.
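The API action from earlier then just swaps the standard attribute for the custom one:

[CustomAuthorize]
public ActionResult GetDashboardData(int foo)
{
   // Your api logic here
}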

Redirect 403 Responses in AngularJs to the login page

As our AJAX request is now being informed about the unauthorised response, it's up to our JavaScript code to trigger the redirect in the browser to the login page. What would be really helpful would be to define the redirect logic in one place, rather than adding it to every api call in our code.

To do this we can add an interceptor onto the http provider in AngularJS. The interceptor will inspect the response error coming back from the XMLHttpRequest and, if it has a status of 403, use window.location to redirect the user to the login page.

app.factory('httpForbiddenInterceptor', ['$q', 'loginUrl', function ($q, loginUrl) {
    return {
        'responseError': function (rejection) {
            if (rejection.status == 403) {
                window.location = loginUrl;
            }
            return $q.reject(rejection);
        }
    };
}]);

app.config(['$httpProvider', function ($httpProvider) {
    $httpProvider.defaults.headers.common['X-Requested-With'] = 'XMLHttpRequest';
    $httpProvider.interceptors.push('httpForbiddenInterceptor');
}]);

You’ll notice a line updating the headers. This is to make the IsAjaxRequest() method on the api recognise the request as being Ajax.

Finally, you'll also notice the loginUrl being passed into the interceptor. As it's not a great idea to have strings like URLs littered around your code, this uses a value recipe to store the URL. The code to do this is as follows:

app.value('loginUrl', '/account/sign-in?returnurl=/dashboard/');

Running Gulp tasks in Visual Studio

Over the last decade, front end development has matured from a point where CSS and JavaScript were written in badly organised files containing the exact code a browser would interpret, to something that now more resembles back end development.

Pure CSS has become the byte code of the front end world, with LESS and Sass becoming the languages of choice. Instead of MSBuild for running a compilation, Gulp can be used to automate workflows to compile Sass files, minify them and do whatever else is needed to produce the code for deployment (not really an exact comparison, but you get the point).

To truly achieve a mature development setup though, the final step is to remove those "compiled" CSS and JS files from source control. After all, you wouldn't check in a dll along with the source code for each build. If you're using a build server then this is a relatively easy step to add to your workflow. However, the bigger issue is not making life hard for your back end developers.

Without the final CSS and JS files in source control, your back end devs may now be faced with a site lacking all style and front end functionality when they hit F5. With LESS and Sass also not really being their thing, they're left not really knowing what to do, and not overly happy about having to learn something about front end to continue with their back end work. To make matters worse, Visual Studio often isn't the front end dev's choice of tool, so copying the front enders' setup isn't an ideal solution either.

The Aim

The ideal solution would be for a back end dev to be able to check out a solution from source control (containing no compiled CSS or JS files), hit F5 in Visual Studio, have the final CSS and JS created as part of the build, and the site run. No opening a command prompt to run a gulp task, or any other process the front end devs may be using. It should be completely invisible to them.

Visual Studio Task Runner

With Visual Studio's Task Runner, this is entirely possible.

The Task Runner is built into Visual Studio, so there’s no need for the back end dev to install any extra tools, and more importantly, tasks can be linked into a before build event so that the back end dev doesn’t need to do anything.

A bit of background on gulp and the task runner

When front end developers set up gulp, they will configure a set of gulp tasks within a file named gulpfile.js (in reality they may actually separate the tasks into multiple files referenced by gulpfile.js, but this file is the important bit). These tasks may look a bit like this:

gulp.task("min:js", () => {
  return gulp.src([paths.js, "!" + paths.minJs], { base: "." })
    .pipe(concat(paths.concatJsDest))
    .pipe(uglify())
    .pipe(gulp.dest("."));
});

gulp.task("min:css", () => {
  return gulp.src([paths.css, "!" + paths.minCss])
    .pipe(concat(paths.concatCssDest))
    .pipe(cssmin())
    .pipe(gulp.dest("."));
});

gulp.task("min", gulp.series(["min:js", "min:css"]));

In this example there are three tasks. The first minifies some JS, the second minifies some CSS, and the third defines a series which runs the first two.

Visual Studio's Task Runner will look for the file called gulpfile.js and pull out all the tasks within it. The Task Runner window may not be open by default, so to open it type task runner in the search box and select it from the results. Alternatively you can right click the gulpfile in the solution and select Task Runner Explorer from the context menu.

The task runner window will open at the bottom of the screen, and will list out all the gulp tasks found within the gulpfile.js file. If the front end devs have organised the tasks into separate files, as long as the gulpfile.js file in the root of the project has some way of referencing them, they will still show up.

If you don't see any of the tasks and have only just added the file (or a task to the file), you may need to click the refresh button.

To run a task, simply right click it and then choose Run.

Automating on build

Being able to run a gulp task is handy, but what we’re really after is for it to be automated as part of the build.

For this we just need to right click the task to be run, and then from the bindings option, select before build.

This will add a comment to the top of the gulpfile.js which Visual Studio will look for to know what task should be run before a build.

/// <binding BeforeBuild='min' />

With this set, the back end devs no longer need to be concerned with not having any compiled css and js, and with a small amount of knowledge also have the ability to make minor changes to css and js where needed.

Some other tips

Depending on your front end devs setup there could be some additional challenges to overcome.

Front end devs not using Visual Studio

Quite often Visual Studio isn’t the choice of dev environment for a front end dev. One issue this can lead to is files missing from the Visual Studio solution. However an easy fix for this is to use wild cards in the csproj file.

If the front end devs code can be grouped into specific folders then use a wild card to include all the files from that folder in the project.

<None Include="build\js\**\*.js" />

CSS/JS not in the web project

If the front end devs have a separate folder for their work (e.g. they might work with static HTML files), then the code may not be in the project that will be run, and therefore nothing will trigger the gulpfile's tasks to run.

A simple solution for this is to create a Visual Studio project for the folder containing their work, so that it has a build event to attach to. Make sure your web project also references this project, so it gets built when the web project is.

Links

For more info on using Gulp with Visual Studio, check out Microsoft's guide on using Gulp with ASP.NET Core.

Force clients to refresh JS/CSS files

It's a common problem with an easy solution. You make some changes to a JavaScript or CSS file, but your users still report an issue due to the old version being cached.

You could wait for the browser's cache to expire, but that isn't a great solution. Worse, if they have the old version of one file and the new version of another, there could be compatibility issues.

The solution is simple, just add a querystring value so that it looks like a different path and the browser downloads the new version.

Manually updating that path is a bit annoying though, so instead we use the modified time of the actual file to add its ticks value to the querystring.
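In the layout, the paths are then rendered through the helper; something along these lines (a sketch: the file paths are examples, and the view needs the extension's namespace imported):

<link rel="stylesheet" href="@Url.FingerprintedContent("~/css/site.css")" />
<script src="@Url.FingerprintedContent("~/js/site.js")"></script>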


DefaultLayout.cshtml

using Utilities;
using UrlHelper = System.Web.Mvc.UrlHelper;

namespace Web.Mvc.Utils
{
    public static class UrlHelperExtensions
    {
        public static string FingerprintedContent(this UrlHelper helper, string contentPath)
        {
            return FileUtils.Fingerprint(helper.Content(contentPath));
        }
    }
}

UrlHelperExtensions.cs

using System;
using System.IO;
using System.Web;
using System.Web.Caching;
using System.Web.Hosting;

namespace Utilities
{
	public class FileUtils
    {
        public static string Fingerprint(string contentPath)
        {
            if (HttpRuntime.Cache[contentPath] == null)
            {
                string filePath = HostingEnvironment.MapPath(contentPath);

                DateTime date = File.GetLastWriteTime(filePath);

                // Cache the fingerprinted url against the original path and invalidate it when the file changes
                string result = contentPath + "?v=" + date.Ticks;
                HttpRuntime.Cache.Insert(contentPath, result, new CacheDependency(filePath));
            }

            return HttpRuntime.Cache[contentPath] as string;
        }
    }
}

FileUtils.cs

Using compile options for version compatibility

Here's the scenario: you're building a module and it needs to be compatible with different versions of a platform, e.g. Sitecore, and everything's great up until the day you need to call different methods in different versions of the platform. You'd rather not drop support for the old versions, and nor do you want to start maintaining two code bases. So what do you do?

C# Preprocessor Directives

Preprocessor directives provide a way to give the compiler instructions to follow while it's compiling a project. By using them we can give the compiler conditions under which to compile different code, thereby allowing us to maintain one codebase but produce compilations for different versions of the platform, e.g. one for Sitecore 8.0 and another for Sitecore 9.0.

#if, #else and #endif

When the compiler encounters an #if followed by an #endif, it will only compile the code between the two if the specified symbol has been defined.

#if DEBUG
    Console.WriteLine("Debug version");
#else
    Console.WriteLine("Non Debug version");
#endif

Defining a preprocessor symbol

For the if statement to work, you're going to need to define the symbol which is being evaluated.

This can be included in code as follows:

#define YOURSYMBOL

A more useful way of defining this, however, is to include it in your build call (this is particularly useful when using a build server), either via the C# compiler's -define option or MSBuild's DefineConstants property.

-define:name[;name2]

If you're compiling from Visual Studio, an easier solution is to set up a new build configuration with a conditional compilation symbol.

  1. Right click your solution item in Solution Explorer and select Properties
  2. Click Configuration Properties on the left and then Configuration Manager on the right
  3. In the pop up window click the Active solution configuration drop down and then click New
  4. Enter the name of the build config. In my example I have SC82 for Sitecore 8.2 and SC90 for Sitecore 9.0.
  5. Click Ok and close all the windows you just opened
  6. Right click the project that you're going to build and select Properties
  7. Select the Build tab
  8. Select your build configuration from the Configuration drop down at the top
  9. Enter the symbol you're using for the #if directives in the Conditional compilation symbols box

Reference different versions of an assembly

Adding conditions to our code is good, but for this to fully work we also need to reference different versions of the assemblies that are causing the issue in the first place.

There’s no way of doing this through Visual Studio but by editing the .csproj file manually we can update the hint path on a reference to include the configuration name as a variable.

    
    <Reference Include="Sitecore.Kernel">
      <HintPath>..\libraries\$(Configuration)\Sitecore.Kernel.dll</HintPath>
      <Private>False</Private>
    </Reference>

This example shows how different versions of the Sitecore Kernel can be referenced by keeping each version in a subfolder that corresponds with the build configuration name.

As well as different versions of assemblies, it may also be necessary to target different versions of the .net framework. This can be done in the .csproj file by including additional property groups that have a condition on the configuration name.

  
  <PropertyGroup Condition="'$(Configuration)' == 'SC82'">
    <OutputPath>bin\SC82\</OutputPath>
    <DefineConstants>TRACE;SC82</DefineConstants>
    <Optimize>true</Optimize>
    <TargetFrameworkVersion>v4.5.2</TargetFrameworkVersion>
  </PropertyGroup>
  <PropertyGroup Condition="'$(Configuration)' == 'SC90'">
    <OutputPath>bin\SC90\</OutputPath>
    <DefineConstants>TRACE;SC90</DefineConstants>
    <Optimize>true</Optimize>
    <TargetFrameworkVersion>v4.6.2</TargetFrameworkVersion>
  </PropertyGroup>
  

In this example I’m targeting .net 4.5.2 for my Sitecore 8.2 configuration and 4.6.2 for my Sitecore 9 configuration.

Useful Links

C# preprocessor directives
-define (C# Compiler Options)