Running multiple projects from Visual Studio

We're probably all familiar with the debugging experience in Visual Studio. Press F5 or hit the green play button and your application will run in debug mode. If you have multiple projects in your solution which can run (i.e. they're not class libraries), then the one shown in bold in Solution Explorer will run.

To change which project starts by default, right-click the project and choose Set as Startup Project.

Selecting start up project in Visual Studio

You can also use the drop-down next to the play button to switch which project will run when you start debugging.

Selecting project to run in Visual Studio

Running multiple projects at the same time

However, switching between projects might not be what you're after. You may need multiple projects running that interact with each other. For example, you could have an API written as an Azure Function and a website which uses the API. The simple option is to just open two copies of Visual Studio and run one project in each, but there is a better way.

Right-click your solution in Solution Explorer and click Properties.

Selecting properties for solution in Visual Studio

Make sure you have Startup Project selected on the left. On the right, change the radio button to Multiple startup projects, and in the grid use the drop-down to change the action to either Start or Start without debugging for the projects you want to run.

Visual Studio selecting multiple startup projects

The drop-down next to the play button will now show that multiple startup projects have been selected.

Visual Studio multiple startup projects selected

Now when you start debugging, all the projects you have set to start will debug together for one seamless debugging experience. Just make sure they're not all configured to run on the same port number.
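As an aside, for ASP.NET Core projects the port usually comes from the applicationUrl in Properties/launchSettings.json, so that's the place to look for clashes. A minimal sketch (the profile name and port numbers here are made-up examples):

{
  "profiles": {
    "MyWebsite": {
      "commandName": "Project",
      "applicationUrl": "https://localhost:5001;http://localhost:5000"
    }
  }
}

Give each project its own ports and they'll happily run side by side.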

Deploying a SQL DB with Azure Pipelines

Normally when I work with SQL Azure I handle DB schema changes with Entity Framework migrations. However, if you're using Azure Functions rather than WebJobs, there seem to be a number of issues with this, and I could not find a decent guide which resulted in a working solution.

Migrations aren't the only way to release a DB change though. SQL Server Database projects have existed for a long time and are a perfectly good way of automating a DB change. My preference for EF Migrations really comes from a place of not wanting to have an EF model and a separate table schema when they're essentially duplicates of each other.

Trying to find out how to deploy this through Azure DevOps Pipelines, however, was far harder than I expected (my expectation was about 5 minutes). A lot of guides weren't very good, and virtually all of them start with "Click new pipeline, then select Use the classic editor". Wait, the classic editor, in an article written 3 months ago?!? Excuse me while I search for a solution slightly more up to date.

Creating a dacpac file

At a high level, the solution is to have a SQL Server Database project, use an Azure Pipeline to compile it to a dacpac file, and then use a release pipeline to deploy that to the SQL Azure DB.

I'm not going to go into any detail about how you create a SQL Server Database project; it's relatively straightforward. The one thing to be aware of is that the project needs to have a target platform of Microsoft Azure SQL Database, otherwise you'll get a compatibility error when you try to deploy.
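If you want to check this outside Visual Studio, the target platform is stored in the .sqlproj file as a DSP element. A rough sketch of what to look for (the file name MyProject.Database.sqlproj is hypothetical, and it's worth double-checking the exact provider value against a project you've set via the UI):

<!-- In MyProject.Database.sqlproj -->
<PropertyGroup>
  <DSP>Microsoft.Data.Tools.Schema.Sql.SqlAzureV12DatabaseSchemaProvider</DSP>
</PropertyGroup>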

Building a SQL Server Database project in Azure DevOps

To build a dacpac file, create a new pipeline in Azure DevOps (the YAML kind), select your repo and get yourself a blank configuration file. Also, at this point make sure your code is actually in the repo!

The configuration I used looks like this; I've included comments in the code to explain what's going on.

# The branch you want to trigger a build
trigger:
- master

pool:
  vmImage: "windows-latest"

variables:
  configuration: release
  platform: "any cpu"
  solutionPath: # Add the path to your Visual Studio solution file here

steps:
# Doing a Visual Studio build of your solution will trigger the dacpac file to be created.
# If you have more projects in your solution (which you probably will) you may get an error here,
# as we haven't restored any NuGet packages etc. For just a SQL DB project, this should work.
- task: VSBuild@1
  displayName: Build solution
  inputs:
    solution: $(solutionPath)
    platform: $(platform)
    configuration: $(configuration)
    clean: true

# When the dacpac is built it will be in the project's bin/<configuration> folder.
# To get it into an artifact (probably with some other things you want to publish, like an Azure Function)
# we need to move it somewhere else. This will move it to a folder called drop.
- task: CopyFiles@2
  displayName: Copy DACPAC
  inputs:
    SourceFolder: "$(Build.SourcesDirectory)/MyProject.Database/bin/$(configuration)"
    Contents: "*.dacpac"
    TargetFolder: "$(Build.ArtifactStagingDirectory)/drop"

# Publishes the contents of the drop folder into an artifact
- task: PublishBuildArtifacts@1
  displayName: "Publish artifact"
  inputs:
    PathtoPublish: "$(Build.ArtifactStagingDirectory)/drop"
    ArtifactName: # Artifact name goes here
    publishLocation: container

Releasing to SQL Azure

Once the pipeline has run you should have an artifact coming out of it that contains the dacpac file.

To deploy the dacpac to SQL Azure you need to create a release pipeline. You can do this within the build pipeline, but personally I think builds and releases are different things and should therefore be kept separate, particularly as releases should be promoted through environments.

Go to the Releases section in Azure DevOps and click New, then New release pipeline.

There is no template for this kind of release, so choose Empty job on the next screen that appears.

On the left you will be able to select the artifact getting built from your pipeline.

Then, from the Tasks drop-down, select Stage 1. Stages can represent the different environments your build will be deployed to, so you may want to rename this to something like Dev or Production.

On the Agent job, click the plus button to add a task to it. Search for dacpac and click the Add button on Azure SQL Database deployment.

Complete the fields to configure which DB it will be deployed to (as shown in the picture but with your details).

And that's it. You can now run the pipelines and your SQL Project will be deployed to SQL Azure.
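As an aside, if you'd rather keep everything in YAML instead of using the classic release editor, the same deployment can be done with the SqlAzureDacpacDeployment task in a multi-stage pipeline. A minimal sketch, assuming a service connection and variables you'd set up yourself (every name below is a placeholder):

# Hedged sketch of deploying the dacpac from a YAML pipeline
- task: SqlAzureDacpacDeployment@1
  displayName: Deploy dacpac
  inputs:
    azureSubscription: "my-service-connection" # placeholder service connection
    ServerName: "myserver.database.windows.net"
    DatabaseName: "MyDatabase"
    SqlUsername: "$(sqlUser)"
    SqlPassword: "$(sqlPassword)"
    DeployType: "DacpacTask"
    DeploymentAction: "Publish"
    DacpacFile: "$(Pipeline.Workspace)/**/*.dacpac"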

Some other tips

On the Azure SQL Database deployment task there is a property called Additional SqlPackage.exe Arguments. This can be used to specify things like whether loss of data should be allowed. You can find the list of these properties at https://docs.microsoft.com/en-us/sql/tools/sqlpackage/sqlpackage?view=sql-server-ver15#properties
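For example, to allow a deployment that could lose data and to drop objects that no longer exist in the project, something like this can go in that box (double-check the exact property names against the list above):

/p:BlockOnPossibleDataLoss=false /p:DropObjectsNotInSource=true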

If you are deploying to multiple environments you will want to use variables for the server details rather than having them on the actual task. This will make it easier to clone the stages and have all connection details configured in one place.

Azure Data Factory: Date formats when working with XML in Data Flows

If there's one thing you can guarantee will always go wrong when importing data, it's date formats. Some countries like to show dates as dd/MM/yyyy, others prefer MM/dd/yyyy, and despite formats like yyyy-MM-dd having been a thing in programming for a very long time, people seem to still create files with these ambiguous formats.

Finding out how to specify what format the date is in within a flat file was not obvious. If you go to the dataset item for an XML file, unlike other formats, it's missing the schema tab.

This wasn't overly surprising, as I've come to find that when using datasets in a Mapping Data Flow, half the time any schema definition seems to be ignored, and if you use wildcard paths, all the path settings on the dataset seem to be ignored too.

Within the Data Flow, the source has a projection tab which will import a schema from your XML file. If your data looks like a date, then this will hopefully set the data type to date. One thing I found was that having data in dd/MM/yyyy resulted in a data type of string rather than date. Annoyingly, unless you want to start manually editing the script the UI generates, there's no way of fixing the projection.

Assuming you do have date as the data type, this is a good start, but I then found that when I ran my data flow, which had dates in dd/MM/yyyy, the date field was blank! So it definitely knows it's a date and not a string, but it doesn't like the format, so it ignores the data.

Back on the projection tab of the source there is another button, "Define default format". This will open a side panel where you can set what format your dates, times, whole numbers and fractions will be in. Once I had set this, my dates started feeding through.
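Another option, if you'd rather not rely on the default format panel, is to bring the field in as a string and convert it yourself in a derived column using the Data Flow expression language's toDate function. A sketch, where OrderDate is a made-up column name:

toDate(OrderDate, 'dd/MM/yyyy')

The second argument tells it the format the incoming string is in, so ambiguous dates get parsed the way you intended.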