Equivalent of the Octopus Package Library for Azure DevOps

I’ve been using TeamCity and Octopus Deploy in our CI setups for several years, but over the last six months I have slowly been moving over to Azure DevOps. This is mostly to move away from needing to keep an application like TeamCity updated, by switching to a SaaS service. Octopus is also available as a SaaS service, but as Azure DevOps is capable of doing releases too, I opted to start with that rather than using Octopus again.

One of my first challenges was how to replace Octopus’s package library. The solutions I work on (which are generally Sitecore based) consist of the application we write and store in source control, plus all the other pre-built parts of Sitecore which we don’t store in source control.

With Octopus Deploy we would add these static parts to the Octopus package library and then release a combination of them and our application to a server, wiping what is there before we do. That gives us exactly the same deploy on every target.

With Azure Devops however, things are a bit different.

Azure Artifacts

The closest equivalent of the package library is Azure Artifacts. However, its primary purpose is not to hold packages you upload to be included in a deploy. The goal of Artifacts is more in line with saving the output of a pipeline as an artifact, which can then act as a package feed for things like NuGet or npm.

Unfortunately, there is also no UI to directly upload a NuGet package to an Artifact feed like there is with Octopus’s package library. However, it is possible to upload a pre-built NuGet package another way using NuGet.exe itself.

Manually uploading a NuGet package to Azure Artifacts

To start, you need to create a feed for your package to be uploaded to.

Log in to Azure DevOps and go to Artifacts. Click Create feed and give it a name.

Click Connect to feed and select NuGet.exe. Under the heading Project setup you will see a nuget.config file to add to your solution.

Create an empty solution in Visual Studio and add a nuget.config file to the root folder using the source from the website. It will look something like this:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <clear />
    <add key="MyFeed" value="https://pkgs.dev.azure.com/.../_packaging/MyFeed/nuget/v3/index.json" />
  </packageSources>
</configuration>

Copy the NuGet package you would like to publish into the root folder as well. I’m using a package we have created for a tool called Feydra. My folder now looks like this.

Before we can publish to our NuGet feed, we need to set up a personal access token. This is done from within Azure DevOps.

In the top right corner click the person icon and then select profile. On the profile screen you will see a section called Personal access tokens under Security.

From here click New Token and give it the Read, write & manage permission for Packaging.

You will be given a token which you should save a copy of.

Now we are ready to publish our NuGet package to the Artifacts feed.

Open a command prompt at the folder containing your solution file and run the following command:

nuget push -Source <SourceName> -ApiKey az <PackagePath>

For me this is as follows:

nuget push -Source MyFeed -ApiKey az .\Feydra.Custom.1.0.0.30.nupkg

You will then be prompted for a username and password; use the personal access token for both.

Refresh the feed on the website and you should see your package added to it, ready to be included in a release pipeline.

Sitecore 10 with headless ASP.NET Core

Sitecore 10 is here, and with it comes a new developer experience: what Sitecore are calling Sitecore Headless Development.

Now you may be thinking, “didn’t Sitecore already have a headless setup in Sitecore 9?” and the answer would be yes. It still exists and is referred to as Sitecore JavaScript Services (JSS). What makes this different is that the rendering layer now uses ASP.NET Core rather than JavaScript libraries like Angular, Vue.js and React. This gives us the benefits of a headless setup without having to program in one language for the back end (C#) and another for the front end (JS). It’s now C# everywhere.

Before we get any further into what this new experience is, let’s clear one thing up. Like Sitecore JSS, this isn’t actually a true headless setup; it’s decoupled. The subtle difference is that a headless CMS has no rendering engine and its purpose is to feed content to multiple heads that could be anything from a website or app to physical display boards. Such systems generally lack the ability to do things like preview, because they have no knowledge of how the content will be rendered. Decoupled, on the other hand, still has a rendering engine, but it’s been split off from the back end. Headless, however, is far more of a buzzword right now, and alas Sitecore have called this headless.

So how does it work?

The traditional part of Sitecore is essentially the same as it is now. You still have Content Management and Content Delivery servers; xConnect, Identity and all the other roles exist just as they did in Sitecore 9. However, rather than creating View and Controller Renderings in Sitecore to return HTML, you will now create JSON renderings that return item data through an API.

Communicating with that API is a new Rendering Host layer written in ASP.NET Core.

So now when a visitor comes to your site, they interact with the ASP.NET Core site, which in turn calls the Headless Service API on your content delivery server; this returns JSON objects for the item data. The ASP.NET Core site then renders the page and returns it to the visitor.

This may sound like more work, as you now have to set up a completely separate ASP.NET Core site and have it talk to an API, but there’s good news. Sitecore have written a Sitecore ASP.NET Rendering SDK (included via NuGet) which does most of the communication with the API for you. Most of what you will actually do is map a view in the Rendering Host to a rendering item in Sitecore. The SDK takes care of the rest.
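To give a flavour of what that mapping looks like, here is a minimal sketch based on the getting-started template (the ContentBlock rendering and model names are illustrative, so check the SDK documentation for the exact API). You register a model-bound view against the Sitecore rendering name in Startup.cs, and the SDK binds the JSON field data onto your model:

public class ContentBlockModel
{
    // Field types come from the Sitecore Layout Service client library.
    public TextField Heading { get; set; }
    public RichTextField Content { get; set; }
}

// In Startup.ConfigureServices: map the Sitecore rendering named
// "ContentBlock" to the ContentBlock.cshtml view, bound to ContentBlockModel.
services.AddSitecoreRenderingEngine(options =>
{
    options.AddModelBoundView<ContentBlockModel>("ContentBlock");
});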

What’s the benefit?

As a developer, there are three massive benefits that I can see to this setup.

Installs and Upgrades should be easier

If I had one complaint about Sitecore, it’s the amount of config needed to get the site running. With platforms like Umbraco and Episerver, you just include a NuGet package in your project, run it from Visual Studio and you have a working CMS. Sitecore has to be installed in one of countless ways (SIF, GUI, serverless, containers) and that install process creates an ever-increasing number of roles that all need to communicate with each other, and all need to be upgraded at some point.

Now this isn’t quite a SaaS model, but it’s getting closer. With your rendering host now separated from the Sitecore install, there are fewer reasons to ever touch what gets set up by the installer.

Notice I say fewer and not none. You will still be making changes to the CM and CD instances. For any type of rendering where you would previously have written a controller, you will likely create what is called a contents resolver class, which will need to go on the CM/CD to generate the object for your view.

You can run and debug directly from Visual Studio

Whenever I switch away from working on Sitecore and then go back, the first point of pain is always the realisation that I’m returning to a process of: write code in Visual Studio > publish to the local site > look at the local site > if there’s a need to debug, Ctrl+Alt+P to attach to the process > realise Visual Studio isn’t in admin mode, so restart Visual Studio, and so on. With this setup it’s: write code > press F5 > watch the lightweight front end instantly spin up and work.

Front End devs only need the ASP.NET Core project

Life is hard for a front-end developer working on Sitecore. Their skill set is in HTML and CSS, but they have to work with this beast of a CMS and get updates from the back-end devs to work on their local environments. Tools like Feydra help improve the situation, but it’s still not perfect.

In theory, with a decoupled setup, the Sitecore instance running on a server somewhere and a decent internet connection, all they now need is the ASP.NET Core Rendering Host project, which will run straight from source control. No need to install anything, and they can even work on a Mac!

How do you get set up?

Getting going is a relatively simple experience, because Sitecore have provided a getting-started template (https://doc.sitecore.com/developers/100/developer-tools/en/walkthrough--using-the-getting-started-template.html) and a guide for creating your first model-bound view.

The guide uses a Sitecore container setup (also new in Sitecore 10), which makes it even easier to get started (no more installing all those pesky prerequisites like Solr with HTTPS, or debugging SIF errors).

I ran into a few issues with the containers that came down to ports not being available (if you do get errors, check the container documentation, which lists some additional port numbers that need to be free). Once you have it set up, though, any remaining questions tend to be on the container side of things; the rendering host part just works.

What is missing?

This is a first release so obviously some things are missing right now. The biggest things I’ve come across so far are:

  • Information on how to debug. If an issue is with the API not returning data, or with the rendering host not rendering it, there is currently no guidance on how to work out which.
  • Sitecore Forms. A relatively important module for many sites, and one which won’t be available if you choose this setup.
  • The ability for the Rendering Host to interact with the Sitecore DB or search, or at least instructions on how it should. For instance, if you wanted to create an API to provide autocomplete on a search box, logically you would now create the API in the rendering host, but the best-practice way to retrieve the data from Sitecore is not yet clear.

Using Sitecore TDS with Azure Pipelines

Sitecore TDS allows developers to serialize Sitecore items into a file format which enables them to be stored in source control. These items can then be turned into a Sitecore update package to be deployed into a Sitecore solution.

With tools like TeamCity it was possible to install the TDS application on the build server, and MSBuild would pick it up in the same way that Visual Studio does when you create a build locally. However, with build tools like Azure Pipelines that are a SaaS service, you do not have any access to install components on the server.

If your solution contains a TDS project and you run an Azure Pipeline, you will probably see an error saying that the target ‘Build’ does not exist in the project.

Fortunately, TDS can be added to a project as a NuGet package. The documentation I found on Hedgehog’s site says the package is unlisted on NuGet.org but can be installed in the normal way if you know what it’s called. When I tried this it didn’t work, but the NuGet package is also included in the TDS download, so we can do it a different way.

First, download TDS from Hedgehog’s website: https://www.teamdevelopmentforsitecore.com/Download/TDS-Classic

Next, create a folder in the root of your drive called LocalNuGet and copy the NuGet package from the download into this folder.

Within Visual Studio go to Tools > NuGet Package Manager > Package Manager Settings, and in the window that opens select Package Sources on the left.

Add a new package source and set it to your LocalNuGet folder.

From the Package Manager screen, select your local NuGet source as the package source and then add the TDS package to the relevant projects.

When you commit to source control, make sure you add the Hedgehog package in your project’s packages folder. Normally this folder is excluded, as your build will restore NuGet packages from a feed rather than having them in the repo, but as this package isn’t available from a feed it needs to be committed.

If you run your Azure Pipeline now, you should get a different error, this time complaining about the TDS product key.

This is a great first step as we can now see TDS is being used and we’re getting an error from it.

If you’re not getting this, then open the csproj file for your project and check that there are no references to c:\program files (x86)\Hedgehog Development\. You should have something like this:

<Target Name="EnsureNuGetPackageBuildImports" BeforeTargets="PrepareForBuild">
  <PropertyGroup>
    <ErrorText>This project references NuGet package(s) that are missing on this computer. Use NuGet Package Restore to download them.  For more information, see http://go.microsoft.com/fwlink/?LinkID=322105. The missing file is {0}.</ErrorText>
  </PropertyGroup>
  <Error Condition="!Exists('..\packages\HedgehogDevelopment.TDS.6.0.0.10\build\HedgehogDevelopment.TDS.targets')" Text="$([System.String]::Format('$(ErrorText)', '..\packages\HedgehogDevelopment.TDS.6.0.0.10\build\HedgehogDevelopment.TDS.targets'))" />
</Target>
<Import Project="..\packages\HedgehogDevelopment.TDS.6.0.0.10\build\HedgehogDevelopment.TDS.targets" Condition="Exists('..\packages\HedgehogDevelopment.TDS.6.0.0.10\build\HedgehogDevelopment.TDS.targets')" />

To fix the product key error we need to add our license details.

Edit your pipeline and then click on the variables button in the top right. Add two new variables:

  • TDS_Key – Which should be set to your license key
  • TDS_Owner – Which should be set to the company name the key is linked to

Now run your pipeline again and the build should succeed.

Moving the Media Cache folder in Sitecore

One of the caches that Sitecore has is the media cache. Whenever you use an image from Sitecore’s media library, Sitecore retrieves the image from the database, scales it to the size you requested and then stores it to disk in the media cache folder. On any subsequent request, the image is served from the media cache rather than the database.

By default, each content management and content delivery server will locate the media cache in /App_Data/MediaCache.

This is a relatively logical place to store a cache of images, and it won’t cause you many issues. However, if you have automated deployments set up, then you are likely wiping the whole of your website folder each time you deploy, to ensure that your deploy remains the same on all environments. As the App_Data folder is in the website, your media cache will be deleted too.

Depending on your site, deleting the media cache potentially isn’t much of an issue. After all, it’s a cache, so all that will happen is the images get cached again the next time they are requested. But depending on how many images get retrieved at the same time, this could slow down performance, particularly if your content editors decided to put every image they ever uploaded into the same folder. Opening that folder in the admin will create a fair amount of load on your server, particularly if you also have some extra image optimization logic installed.

Like most things in Sitecore though, you can change the location through a config setting.

In Sitecore.config you will find a setting called Media.CacheFolder. Change this to somewhere outside of your website folder and Sitecore will start storing the media cache in that location, where it will be safe from your deploys.

    <!--  MEDIA - CACHE FOLDER
            The folder under which media files are cached by the system.
            Default value: /App_Data/MediaCache
      -->
    <setting name="Media.CacheFolder" value="/App_Data/MediaCache"/>
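For example, to move it outside the website root (the path here is just an illustration; any location outside the folder you wipe on deploy will do):

    <setting name="Media.CacheFolder" value="D:\MediaCache"/>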

Azure DevOps and custom NuGet feeds

If you’re setting up a CI pipeline on Azure DevOps for a site which uses a NuGet feed from a source that isn’t on nuget.org, you may see the error:

“The nuget command failed with exit code(1) and error(Errors in packages.config projects Unable to find version…”

On your local dev machine you will have added an extra NuGet feed source through Visual Studio, which updates a global file on your machine. However, as Azure Pipelines is a serverless solution, you don’t have the same global file to update to include the sources.

Instead, you need to add a NuGet.config file to the root of your repository.

Here is an example of one set up to include Sitecore’s NuGet package feed:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="nuget.org" value="https://www.nuget.org/api/v2/" />
    <add key="Sitecore NuGet v2 Feed" value="https://sitecore.myget.org/F/sc-packages/" />    
  </packageSources>
</configuration>

Next, you will need to update your pipeline to tell the NuGet step to use this config file:

- task: NuGetCommand@2
  inputs:
    restoreSolution: '$(solution)'
    feedsToUse: 'config'
    nugetConfigPath: 'NuGet.config'

And that’s it. As long as all the sources are correct the NuGet command should now find your packages.

Do I need a Headless CMS?

What is a Headless CMS?

Headless CMSs have become the hot topic in recent years, along with preferences for microservices over monolithic architectures and for decoupling parts of applications where possible. So, you may be wondering: do I need one?

Let’s start with what a headless CMS actually is. A typical CMS does two jobs.

Firstly, it provides a place for marketers to log in and create the pages of content that appear on the website. Some allow very basic content editing; others provide the capability to construct entire pages from pre-built components, preview what they look like, apply personalisation rules, run split tests and a whole host of other features.

Secondly, it allows developers to create templates or skins to render the pages to visitors. It also provides authentication, form data collection, visit tracking, browsing habits and so on.

Essentially the functionality splits into what the marketer sees and what the visitor sees.

A headless CMS, on the other hand, chops off the second part and in its place leaves an API, so that another application can pull content out of the CMS to display. The CMS then becomes a place solely responsible for managing content rather than managing a website.

Decoupling in this way offers a few advantages:

  1. The technology/platform is no longer determined by the CMS. You could have a .NET-based CMS and write the website in JavaScript, PHP or any other language.
  2. The content can be provided to more than just a website. In the same way that digital asset management platforms specialise in serving digital assets to other systems (e.g. mobile app, web, print), a headless CMS can provide content to multiple other systems too.
  3. It’s much easier for a headless CMS to become a SaaS offering. Most custom development when you install a CMS is to tailor the visitor website experience rather than the admin experience. Once the visitor website is removed, there is little reason left for it to be a product you install and maintain, making it much easier for vendors to supply as SaaS.

So, should I have one?

With all these advantages the answer should be yes, right? Well, it’s not quite that simple; there are also some disadvantages. So let’s answer some more questions to see if it’s really suited to your situation.

Do you want to use a different website language?

One of the advantages of going headless is that you can pick the language you write the head in, but which language do you want to write it in? Most articles talk about writing the head in JavaScript frameworks such as React, Vue.js or Angular. The reason for this is simple: if you want to write your application in one of these, then you’re a bit stuck for CMS choices, as no viable CMSs exist written in React, Vue.js or Angular, so using a headless CMS is the only option other than writing an entirely bespoke admin.

But what if you want to write the head in .NET or PHP? A headless CMS won’t restrict you from doing this in any way; in fact, Umbraco Heartcore even provides a .NET client library to help. The only thing you must question is whether it’s going to give you more advantages than disadvantages. By cutting off the head you now have to create it, so are you creating something different to what Umbraco’s head originally did, or are you just recreating the same thing?

Is there a specific pre-built head you want to use?

An alternative to writing your own head is to buy one that is premade. The biggest downside I see to headless is that you have to build and support the head, so using one that is pre-built and compatible with your CMS is a big advantage.

However, this does also mean that you may end up paying a higher price in license fees. Unless your CMS costs decrease enough to cover the additional cost of the head, you’re ultimately now licensing two products, which could add up to more than one.

How many heads are you going to have?

If the answer is one and it’s a website, then it’s highly likely that you don’t need a headless CMS, as you’ll effectively end up writing an inferior head to the one you’re replacing.

CMSs like WordPress have almost two decades of work gone into advancing them at this point, getting them to a place where they are feature-rich, reliable and, most importantly, secure. Creating your own head will require you to do the work they have already done.

Are you building a website?

If the answer to this is no, then a headless CMS makes a great option for providing an admin experience that makes your content editable. They are pre-built, integrate with the popular enterprise authentication systems and offer great features around workflow and user permissions.

However, if the answer is yes, then you need to think carefully. Going truly headless has some quite major disadvantages for a website:

  1. You need to write and maintain the head, and if that head is doing the same thing the CMS one did, then you’re just reinventing the wheel.
  2. As the CMS is now only providing content to another application, less is possible through the admin interface: e.g. no ability to preview pages, no drag-and-drop page creation.
  3. Creating things like campaign landing pages, or any new page layout, may no longer be possible without involving the IT team. Because the CMS is now only providing the content rather than the layout, unless the head provides an admin to manage this, you could end up with very fixed page templates.

A CMS that can work both with its head on and headless may be a better option. For instance, Sitecore can be set up like a traditional CMS, where the custom rendering logic is added to the main application. However, it can also be set up as a decoupled CMS, where the page rendering is done using Angular, Vue or React; note that this is decoupled and not headless. The visitor site can be hosted separately from the CMS, as with headless, but a lot of the logic is still provided by Sitecore, so things like personalisation, admin previews, page construction and Sitecore Forms are all still possible. It can also work in a truly headless way, by using the same API that powers the decoupled head to power a bespoke non-Sitecore head, like a mobile app.

Is your content reusable?

A news corporation writing articles that appear in newspapers, websites or mobile apps is obviously creating a lot of reusable content that can stay the same at each destination.

Alternatively, a typical brochure site (done well) is created with a journey in mind, where the whole page has been designed to achieve an outcome. Some content may just be three words combined with an image to evoke an emotional response. Other content may be specifically written to fit a certain gap so that it appears the same size as two other pieces of content next to it. None of this may actually translate to mobile.

If this is the case, then rather than creating a content repository that can be reused, what you may actually end up creating is a repository where 90% can’t be reused and the other 10% is lost amongst it.

Managing Redirects in Sitecore

One of the most important aspects, if not the most important aspect, of managing a website is SEO. It doesn’t matter how good your website is if nobody can find it. Good SEO is largely about having pages match what users are searching for, which then results in a high ranking for those pages. However, a second part to this formula is what happens to a page when it no longer exists. For instance, one day this blog post may no longer be relevant to have on our site, but up until that point search engines will have crawled it, people will have posted links to it, and all of this adds to its value, which we would lose should it be removed. An even simpler scenario could be that we rename the blog section to something else: the page still exists under a different URL, but links to it no longer work.

Fortunately, the internet has a way of dealing with this, in the form of a 301 redirect. A 301 redirect is an instruction to anyone visiting the URL that the page has permanently moved to a different URL. When you set up a 301 redirect it achieves two things. Firstly, users following a link to your site end up on a relevant page rather than a generic page-not-found message. Secondly, search engines recrawling your site update their index with the new page and transfer all the associated value. So, to avoid losing any value associated with a page, whenever you move, rename or remove a page you should also set up a redirect.
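If you’ve never dealt with one in code, a 301 is just a status code plus a Location header. As a minimal illustration (plain ASP.NET Core MVC with made-up URLs, not how the Sitecore module below works), a controller can issue one with a single call:

public IActionResult OldBlogPost()
{
    // Responds with 301 Moved Permanently and a Location header
    // pointing at the new URL.
    return RedirectPermanent("/blog/new-post-url");
}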

Sitecore

Given how important redirects are, it’s surprising that an enterprise CMS platform like Sitecore doesn’t come with a universal way to set them up out of the box.

Typically, when we’ve taken over a site, one of the support requests we eventually receive is a question about setting up a redirect, from a marketer who’s ended up on Sitecore’s documentation site and is confused about why they can’t find any of the things in this article: https://doc.sitecore.com/users/sxa/17/sitecore-experience-accelerator/en/map-a-url-redirect.html.

The article appears to suggest that creating redirects is a supported feature; however, it’s only available to sites built using Sitecore Experience Accelerator rather than the traditional approach.

More often than not, the developers who created the site will have added 301 redirect functionality, but will have done it using the IIS redirect module, which, while it serves the purpose of creating redirects, does not provide a Sitecore interface for doing so.

301 Redirect Module

Fortunately, there are some open-source modules available which add the missing feature to Sitecore. The one we typically work with is aptly named the 301 Redirect Module.

Once installed, the module can be found in the System > Modules folder. Clicking on the Redirects folder will give you the option to create another Redirect Folder (to help group your redirects into more manageable folders), a Redirect URL, a Redirect Pattern or a Redirect Rule.

Redirect URL

This is by far the most common redirect to set up. You specify an exact URL that is to be redirected, and set the destination as either another exact URL or a Sitecore item. There’s even an option to choose whether it should be a permanent or temporary redirect.

Redirect Pattern

Redirect patterns are slightly more complex and require some knowledge of regular expressions. However, they are most useful if you ever want to move an entire section of your site. Rather than creating rules for each of the pages under a particular page, one rule can redirect all of them. Not only does this save time when creating the redirects, but it also makes your list of redirects far more manageable.
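Conceptually, a pattern redirect applies a regular expression to the incoming path and substitutes any captured groups into the target. The pattern and URLs below are purely illustrative (not the module’s own config format), but the idea in C# terms looks like this:

using System.Text.RegularExpressions;

// One pattern moves a whole section: /news/anything -> /blog/anything
var target = Regex.Replace("/news/2019/my-article", "^/news/(.*)$", "/blog/$1");
// target is now "/blog/2019/my-article"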

Auto Generated Rules

One of my favourite features of the 301 Redirect Module is that it can auto-generate rules for any pages you move. For instance, if you decide a sub-page should become a top-level page, moving it will result in a rule automatically being generated without you having to do a thing.

Summary

A marketer’s ability to manage 301 redirects is as important as their ability to edit the content on the site, and while traditional Sitecore sites may be missing this functionality, it can easily be added using third-party modules. Best of all, they’re free!

User Authentication across subdomains in Sitecore

The Scenario

In some scenarios you may have subdomains set up on your site; this may be to have an API configured as a separate site, or a microsite set up on a subdomain. Depending on the scenario, you may also want a user who is logged in on one of the sites to be logged in on the other.

When you provide a login section on a site, after the visitor logs in, their authentication is tracked using cookies. Cookies are linked to a domain, and as your sites only differ by subdomain, you may be thinking: great, it should just work, no need for a single sign-on solution. But you would be wrong.

The Solution

By default, cookies will be set for a specific domain, including the subdomain, unless you tell it otherwise. Fortunately, this is quite easy to do.

In the web.config file, find the section for forms authentication and add a domain attribute set to the top-level domain. Your authentication cookie will now always be set on the top-level domain and work across all subdomains.

<system.web>
  <authentication mode="None">
    <forms name=".ASPXAUTH" cookieless="UseCookies" domain="mydomain.com" timeout="30"/>
  </authentication>
</system.web>

Bulk Inserting data using Entity Framework

Using tools like Entity Framework makes life far easier for a developer. Recently I blogged about how using them is what makes .NET Core one of the best platforms for prototype development, but the benefits don’t end there. They are also great from a security perspective, cutting a lot of the risk around SQL injection attacks just by avoiding easy mistakes that are possible with regular ADO.NET.

However, they do have some downsides; a main one is that they are particularly slow when it comes to doing bulk inserts into a database.

For example, assume you have an application which regularly receives an XML import file consisting of 200,000 records, each of which needs to be either an insert or an update in the db. You’ll quickly learn that looping through the whole lot and then calling SaveChanges results in a process taking an extremely long time to run; it may even just time out. You then decide to get rid of that one long SaveChanges call by breaking the work up into blocks of 500 and calling SaveChanges for each of those. That may solve the timeout issue, but it still results in a process potentially lasting around an hour.

The problem is that this is a scenario Entity Framework and EF Core just weren’t designed to handle. As a solution you could drop Entity Framework altogether and revert to something like a native SQL bulk insert command, but what if you need to do some processing in code on each record before the import happens? What if you have one of those classic not-quite-always-valid XML files which would cause SQL’s bulk insert to fail?

The solution is to use an open source extension called EFCore.BulkExtensions.

EFCore.BulkExtensions

EFCore.BulkExtensions is a set of extension methods for Entity Framework that provide the functionality to do bulk inserts. You can add it to your project using NuGet, and you’ll find the project on GitHub here: https://github.com/borisdj/EFCore.BulkExtensions

Usage is also very simple. Let’s assume you have some existing traditional EF code that loops through a collection and, for each item, creates a new db entity and adds it to the db context:

public async Task DoImport(List<Foo> collection)
{
    foreach (var item in collection)
    {
        Jobs job = new Jobs();

        job.DateAdded = DateTime.UtcNow;
        job.Name = item.Name;
        job.Location = item.Location;

        await dbContext.Jobs.AddAsync(job);
    }

    await dbContext.SaveChangesAsync();
}

Rather than adding each item to the Entity Framework db context, you instead build up a list of the objects and then call a bulk insert method on your db context, passing the list.

public async Task DoImport(List<Foo> collection)
{
    var importJobs = new List<Jobs>();
    foreach (var item in collection)
    {
        Jobs job = new Jobs();

        job.DateAdded = DateTime.UtcNow;
        job.Name = item.Name;
        job.Location = item.Location;

        importJobs.Add(job);
    }

    await dbContext.BulkInsertAsync(importJobs);
}

It also works for updates: rather than creating a new item, first retrieve it from the db, and then at the end call BulkInsertOrUpdateAsync with the list.

await dbContext.BulkInsertOrUpdateAsync(importJobs);
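As a rough sketch of that insert-or-update pattern (using Name as the lookup key purely for illustration; use whatever natural key your import data actually has):

public async Task DoImportWithUpdates(List<Foo> collection)
{
    // Load the existing rows once and key them by name, rather than
    // querying inside the loop 200,000 times.
    var existing = await dbContext.Jobs.ToDictionaryAsync(j => j.Name);

    var importJobs = new List<Jobs>();
    foreach (var item in collection)
    {
        // Update the row if we already have it, otherwise create a new one.
        var job = existing.TryGetValue(item.Name, out var found)
            ? found
            : new Jobs { DateAdded = DateTime.UtcNow };

        job.Name = item.Name;
        job.Location = item.Location;
        importJobs.Add(job);
    }

    await dbContext.BulkInsertOrUpdateAsync(importJobs);
}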

In my experience, doing this took an import process that would run for over an hour down to something which completes in a few minutes.

Top reasons .NET is amazing for prototyping

The other day someone told me .NET was slow to get something built in, and to be fair to the person, I can see why he would think that. Most of his interaction with .NET projects has been with large, complex enterprise applications that often integrate with multiple other applications.

However, I would maintain that .NET is a framework that is actually really fast to develop on, and that what he had perceived as slowness was the complexity of the projects rather than the actual coding time.

In fact, I would say it’s so fast to get something built in that it is actually the fastest thing to develop a prototype in. Here are my top reasons why.

ASP.NET Core

ASP.NET Core inherits all the best bits from the ASP.NET Framework that came before it, giving the framework almost 20 years of refinement since its original release in 2002. The days of WebForms in the original ASP.NET are long behind us, and we now have the choice of building web applications with either MVC or Razor Pages.

Razor provides the perfect combination for a view language, offering helpers to render your HTML without limiting what can be done on the front end. How you write your HTML is still completely up to you; the helpers just provide features like model binding that make it even faster to do.
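For example, here is a small illustrative Razor fragment (the Jobs controller and Name property are placeholders): the tag helpers bind the input to your model and wire up validation, while the markup around them stays entirely yours.

<form asp-controller="Jobs" asp-action="Create" method="post">
    <label asp-for="Name"></label>
    <input asp-for="Name" class="form-control" />
    <span asp-validation-for="Name"></span>
    <button type="submit">Save</button>
</form>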

Another great thing about .NET Core over what came before it is that it’s platform-independent. Rather than being confined to Windows, you can run it on Mac or Linux too.

Great starter templates

What kicks off a great prototype project is starting with a great template, and ASP.NET Core has a bunch.

As already mentioned, you can build a web application with either Razor Pages or MVC, but the templates also provide you with the base for building an API, an Angular app, or React.js (with or without Redux), or you can simply create an empty application.

My preference is to go for MVC, as it’s what I’m most familiar with, and a key part of rapidly building a prototype is that you develop rapidly. The idea is to focus on creating something new and unique, not on learning how to develop in a new framework.

The MVC web application template gives you a base site to work with and a few pages already set up; Bootstrap and jQuery are already included, so you start right at the point of working on your logic rather than spending time on setup.

SQL Server and EF Core

I’ve always been a bit of a database guy. I’m not sure why, but it’s a topic that has always just made sense to me, and despite being a topic that can get quite complex, the reasons behind the complexity always feel logical.

When it comes to building a prototype, though, there are two aspects which make storage with .NET Core super simple.

Firstly, Entity Framework Core (EF Core) means you don’t really need to know any SQL or spend any time writing it. It helps if you do, but at a minimum all you need to do is create a model in your code and add a few lines for a DB context that tells EF Core each model is a table and how the tables relate. Then turn on migrations and you’re done. When you run the application, the DB gets created for you, and each time you change your model you just add another migration; the next time the application runs, the DB schema gets updated.

Querying your DB is done by writing LINQ queries against your Entity Framework model, allowing you to have next to no understanding of SQL and how the DB works. Everything is just done by magic for you.
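As a minimal sketch of how little code that involves (the Job model, PrototypeContext and the query below are all illustrative names, not a real schema):

public class Job
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Location { get; set; }
}

public class PrototypeContext : DbContext
{
    public PrototypeContext(DbContextOptions<PrototypeContext> options) : base(options) { }

    // Each DbSet becomes a table once you add a migration
    // (dotnet ef migrations add Initial) and run the application.
    public DbSet<Job> Jobs { get; set; }
}

// Queries are plain LINQ; EF Core translates them to SQL for you.
var londonJobs = await context.Jobs
    .Where(j => j.Location == "London")
    .OrderBy(j => j.Name)
    .ToListAsync();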

The second part is SQL Server and its different editions. Often when you think of SQL Server, you think of the big DB engine with its many, many components that you install on a server or your local machine, but there are two others which are even more important: LocalDB and Azure SQL.

LocalDB is an option that can be installed as part of Visual Studio. No separate application is needed, and no services run in the background. It is essentially the minimum required to start the DB engine for development purposes. In practical terms, this means that when you start your application, EF Core can run off a LocalDB which didn’t require any setup, but as far as your application is concerned it is no different from working with any other version of SQL Server.

Azure SQL, as the name implies, is SQL Server on Azure. The only thing I really need to say about it is that you can swap between LocalDB and Azure SQL with ease. They may be different, but as far as your prototype is concerned, they are the same.

Scaffolding

The only thing quicker than writing code is having someone else do it for you. So, we’ve created our application from a template and added a model which generated our database; now it’s time to create some pages. Well, the good news is it’s still not time to write much code, because Visual Studio can scaffold out pages based on our model for us!

Adding a controller to our project in Visual Studio gives us some options on what should be generated for us, one of which is ‘MVC Controller with views, using Entity Framework’. What that means is that, given a model, it will create a controller and views for listing items, creating them, editing them and deleting them. No coding by us required!

Now, it’s unlikely that this is exactly what you’re after, but it’s generally a good starting place, and deleting code you don’t need is far quicker than writing it.
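To give a feel for what you get, this is roughly the shape of a scaffolded controller, trimmed to two actions (the names follow the illustrative Job model from earlier, not output copied from Visual Studio):

public class JobsController : Controller
{
    private readonly PrototypeContext _context;

    public JobsController(PrototypeContext context)
    {
        _context = context;
    }

    // GET: Jobs - lists every job in the table.
    public async Task<IActionResult> Index()
    {
        return View(await _context.Jobs.ToListAsync());
    }

    // GET: Jobs/Details/5 - shows a single job or a 404.
    public async Task<IActionResult> Details(int? id)
    {
        if (id == null)
        {
            return NotFound();
        }

        var job = await _context.Jobs.FirstOrDefaultAsync(m => m.Id == id);
        if (job == null)
        {
            return NotFound();
        }

        return View(job);
    }
}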

Azure

Lastly, there is Azure. You may have spotted a theme to all these points: they all remove the effort of setup and instead let you focus on building your own logic. This point is no different.

I remember a time when, if I wanted a server to put an application on, I had to request it and then wait a while. What I got back would either be a server that already had resources running on it, or a blank server that needed applications installed on it (e.g. SQL Server or the .NET Framework). IIS wouldn’t have been configured, and it would be a number of hours before my application would be running.

With Azure you don’t even really need to leave Visual Studio. From the publish dialog box you can create a new App Service and DB, and then publish. All the connection strings are sorted out for you. There are service plans which cost next to nothing, a domain is configured for you, and at the end of the publish the website opens and is working. The whole process takes less than 10 minutes.