SonarQube for .NET Framework with GitHub Actions

If you haven't tried SonarQube or SonarCloud out then I suggest you do. The cloud version is quite straightforward to set up and, from my experience, the issues it finds can be quite insightful. Like all these tools, at times you'll disagree with what they say, but there's always the option to change the rules.

What I particularly like with SonarQube is the examples you get with each bug, which clearly explain why there's an issue and what you need to do in order to fix it.

What I didn't like, however, were the instructions for setting up a project using .NET Framework. There are instructions labelled .NET, but these heavily assume you're using .NET Core. That might be our general preference, but products like Sitecore can force your hand back to .NET Framework, and all those legacy projects didn't just go away.

How to set up SonarQube using GitHub Actions for .NET Framework

The GitHub setup instructions (https://docs.sonarqube.org/latest/analysis/github-integration/) will give you the following code to create your GitHub Action with. This is also the same code you will get if you follow the wizard in SonarQube.

name: Build
on:
  push:
    branches:
      - master # or the name of your main branch
  pull_request:
    types: [opened, synchronize, reopened]
jobs:
  build:
    name: Build
    runs-on: windows-latest
    steps:
      - name: Set up JDK 11
        uses: actions/setup-java@v1
        with:
          java-version: 1.11
      - uses: actions/checkout@v2
        with:
          fetch-depth: 0 # Shallow clones should be disabled for a better relevancy of analysis
      - name: Cache SonarQube packages
        uses: actions/cache@v1
        with:
          path: ~\sonar\cache
          key: ${{ runner.os }}-sonar
          restore-keys: ${{ runner.os }}-sonar
      - name: Cache SonarQube scanner
        id: cache-sonar-scanner
        uses: actions/cache@v1
        with:
          path: .\.sonar\scanner
          key: ${{ runner.os }}-sonar-scanner
          restore-keys: ${{ runner.os }}-sonar-scanner
      - name: Install SonarQube scanner
        if: steps.cache-sonar-scanner.outputs.cache-hit != 'true'
        shell: powershell
        run: |
          New-Item -Path .\.sonar\scanner -ItemType Directory
          dotnet tool update dotnet-sonarscanner --tool-path .\.sonar\scanner
      - name: Build and analyze
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # Needed to get PR information, if any
        shell: powershell
        run: |
          .\.sonar\scanner\dotnet-sonarscanner begin /k:"example" /d:sonar.login="${{ secrets.SONAR_TOKEN }}" /d:sonar.host.url="${{ secrets.SONAR_HOST_URL }}"
          dotnet build
          .\.sonar\scanner\dotnet-sonarscanner end /d:sonar.login="${{ secrets.SONAR_TOKEN }}"

There are two things to notice here. Firstly, the Build and analyze step runs the command dotnet build, which is fine if you're running .NET Core, but for .NET Framework it isn't going to work.

Secondly, it's highly likely your solution uses NuGet packages, and there's no step here to restore them.

To set up NuGet and restore the packages, add the following steps before the Build and analyze step. Be sure to put your own solution filename in the restore command.

      - name: Setup Nuget
        uses: Nuget/setup-nuget@v1.0.5

      - name: Restore nuget packages
        run: nuget restore MySolution.sln

To do a build that will compile your .NET Framework code you will need to use MSBuild rather than dotnet. However, if you just swap them over you'll get an invalid command error; first you need to add msbuild to the PATH. Change your build steps as follows.

      - name: Add msbuild to PATH
        uses: microsoft/setup-msbuild@v1.0.2

      - name: Build and analyze
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # Needed to get PR information, if any
        shell: powershell
        run: |
          .\.sonar\scanner\dotnet-sonarscanner begin /k:"example" /d:sonar.login="${{ secrets.SONAR_TOKEN }}" /d:sonar.host.url="${{ secrets.SONAR_HOST_URL }}"
          msbuild MySolution.sln # dotnet build swapped for msbuild; use your solution filename
          .\.sonar\scanner\dotnet-sonarscanner end /d:sonar.login="${{ secrets.SONAR_TOKEN }}"

With that in place you can now compile some .NET Framework code and have the results sent back to your SonarQube instance.

Exporting a Database from SQL Azure

If you need to copy your SQL Azure database locally then exporting it as a Bacpac file is the simplest route to go.

There are many ways you can do this, including PowerShell, the REST API, and even SQL Server Management Studio. However, I am going to show you how to do it through the Azure Portal.

Exporting a Bacpac file

First off find your DB in the Azure Portal.

In the top set of buttons when you view your DB (make sure you're on the actual DB, not the server), you will see an Export button third from the left.

Clicking Export takes you to a page where you need to configure the filename to export as, the storage account to export to, and the authentication to log into the DB. The storage account is required as a place for the export to be saved; you get the choice of which container it goes into, so I like to create one called backups.

Checking the status of the export

When you start the export you will get a confirmation that it has started, but knowing how to check its progress isn't obvious.

To check the progress, navigate to the DB server in the portal.

In the left nav, under Data management, select Import/Export history.

From here you can view the status of each of the exports.

Once they are complete you can download them through blob storage.
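If you'd rather script the export and the status check instead of clicking through the portal, the Az PowerShell module can do both. Here's a rough sketch using the Az.Sql cmdlets; the resource names, storage key, and credentials below are all placeholders to swap for your own:

# Kick off an export of the DB to a bacpac file in blob storage
$export = New-AzSqlDatabaseExport -ResourceGroupName "MyResourceGroup" `
    -ServerName "my-sql-server" `
    -DatabaseName "MyDatabase" `
    -StorageKeyType "StorageAccessKey" `
    -StorageKey "<storage-account-key>" `
    -StorageUri "https://mystorage.blob.core.windows.net/backups/MyDatabase.bacpac" `
    -AdministratorLogin "sqladmin" `
    -AdministratorLoginPassword (ConvertTo-SecureString "<password>" -AsPlainText -Force)

# Poll the status link to see how the export is getting on
Get-AzSqlDatabaseImportExportStatus -OperationStatusLink $export.OperationStatusLink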

Managing SQL Azure Users in the Portal

Managing users for a SQL Azure DB is something I have found to be more complex than you would expect. A lot of guides will also tell you it's something which can't be done through the admin portal and needs to be done using scripts in the DB.

This is true to some extent. If you want to give specific role permissions on a DB then you have to do it by assigning roles through SQL scripts. Likewise, if you want to set usernames and passwords at a DB level rather than using Active Directory, this needs to be done in the DB.
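For illustration, a DB-level username and password is a contained database user, created with a script along these lines (the username and password here are just examples):

-- Create a user that authenticates against this DB rather than Active Directory
CREATE USER [ReportingUser] WITH PASSWORD = 'UseAStr0ngPasswordHere!';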

However, if you want to give a bunch of Active Directory users admin access to all the DBs in a server, or give a group of people the same access, then this can be done through the Azure portal.

Admin Permissions For All

When you create your DB instance an admin user will get created, and for some teams you could just share the password. However, sharing passwords isn't great and there is a better way.

In the Azure Portal search for groups in the big search box at the top.

Create a security group with a sensible name, description and add all the members who you want to give admin permission to.

Go to your SQL server resource (this is the parent of the database), and go to the Azure Active Directory setting.

Click the top button to Set Admin, choose your new group and then click save. This will create the user with the correct permissions in the master DB of the server.
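If you prefer to script this step, the same assignment can be made with the Az PowerShell module; a sketch with placeholder resource names (the group is looked up by its display name):

# Set the Azure AD group as the Active Directory admin for the SQL server
Set-AzSqlServerActiveDirectoryAdministrator -ResourceGroupName "MyResourceGroup" `
    -ServerName "my-sql-server" `
    -DisplayName "SQL Admins"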

That's it. The members of the group will now be able to access any of the DBs on the server by logging in using Active Directory with Password through SSMS, or through the Azure portal using the Query Editor.

Query editor will actually give you a nice green tick if you have permission to log in.

To add or remove people's access to the DB, just add and remove them from the group.

If you can't log in, it could be due to a firewall rule blocking your IP rather than an actual login permission.
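A firewall rule for your IP can be added on the server's Firewalls and virtual networks page, or with a PowerShell sketch like this (the names and IP are placeholders):

# Allow a single client IP through the SQL server firewall
New-AzSqlServerFirewallRule -ResourceGroupName "MyResourceGroup" `
    -ServerName "my-sql-server" `
    -FirewallRuleName "MyClientIp" `
    -StartIpAddress "203.0.113.5" `
    -EndIpAddress "203.0.113.5"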

Permissions to Specific DBs

Giving everyone admin permission to every DB on the instance might not be what you're after. Fine for a dev instance, but probably not something you want for production.

Fortunately the same concept of using groups can make life a lot easier but you will need to do some SQL scripting.

Create your group as above and then make sure you're logged in as someone who is an Active Directory admin for the SQL Server. You can do this with the instructions above, or if you want to be the only admin then rather than setting a group to be the admin, just set yourself.

Next log into the DB either using SSMS or Query Editor. Personally I prefer to use Query Editor as I'm doing everything else through the portal.

Our first script is to create an external user in our DB. In our case the external user is the group we want to give permission to rather than a specific user.

CREATE USER [GROUP NAME]
FROM EXTERNAL PROVIDER
WITH DEFAULT_SCHEMA = dbo;

This is called adding a contained user to the DB.

Next we need to give the group some role permissions to do something.

ALTER ROLE db_datareader ADD MEMBER [GROUP NAME];
ALTER ROLE db_datawriter ADD MEMBER [GROUP NAME];

Repeat these steps for each DB you want to give the group access to.

The members of your new group should now have permissions to the individual DBs with reader and writer permissions.
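If you want to double-check the group made it in, you can query the DB's principals; Azure AD users and groups show up with an EXTERNAL type:

-- List the Azure AD users and groups that exist in this DB
SELECT name, type_desc, create_date
FROM sys.database_principals
WHERE type_desc IN ('EXTERNAL_USER', 'EXTERNAL_GROUP');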

If you want to give access to more people, just add them to the group.

Deploying a SQL DB with Azure Pipelines

Normally when I work with SQL Azure I handle DB schema changes with Entity Framework Migrations. However, if you're using Azure Functions rather than Web Jobs there seem to be a number of issues with this, and I could not find a decent guide that resulted in a working solution.

Migrations aren't the only way to release a DB change though. SQL Server Database projects have existed for a long time and are a perfectly good way of automating a DB change. My preference for EF Migrations really comes from not wanting to maintain an EF model and a separate table schema when they're essentially duplicates of each other.

Trying to find out how to deploy this through Azure DevOps Pipelines, however, was far harder than I expected (my expectation was about 5 minutes). A lot of guides weren't very good, and virtually all of them start with "Click new pipeline, then select Use the classic editor". WAIT. The Classic Editor, in an article written 3 months ago?!? Excuse me while I search for a solution slightly more up to date.

Creating a dacpac file

At a high level, the solution is to have a SQL Server Database project, use an Azure Pipeline to compile it to a dacpac file, and then use a release pipeline to deploy that to the SQL Azure DB.

I'm not going to go into any detail about how you create a SQL Server Database project; it's relatively straightforward. The one thing to be aware of is that the project needs to have a target platform of Microsoft Azure SQL Database, otherwise you'll get a compatibility error when you try to deploy.
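The target platform is set in the project properties in Visual Studio, but if you want to verify it in source control it lives in the .sqlproj file as the DSP element. It should look something like this (value based on the SSDT project format for Azure SQL):

<!-- Target platform: Microsoft Azure SQL Database -->
<DSP>Microsoft.Data.Tools.Schema.Sql.SqlAzureV12DatabaseSchemaProvider</DSP>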

Building a SQL Server Database project in Azure DevOps

To build a dacpac file, create a new pipeline in Azure DevOps (the YAML kind), select your repo, and get yourself a blank configuration file. Also, at this point make sure your code is actually in the repo!

The configuration I used looks like this; I've included notes in the code to explain what's going on.

# The branch you want to trigger a build
trigger:
- master

pool:
  vmImage: "windows-latest"

variables:
  configuration: release
  platform: "any cpu"
  solutionPath: # Add the path to your Visual Studio solution file here

steps:
  # Doing a Visual Studio build of your solution will trigger the dacpac file to be created.
  # If you have more projects in your solution (which you probably will) you may get an error here,
  # as we haven't restored any NuGet packages etc. For just a SQL DB project, this should work.
  - task: VSBuild@1
    displayName: Build solution
    inputs:
      solution: $(solutionPath)
      platform: $(platform)
      configuration: $(configuration)
      clean: true

  # When the dacpac is built it will be in the project's bin/configuration folder.
  # To get it into an artifact (probably with some other things you want to publish like an Azure Function)
  # we need to move it somewhere else. This will move it to a folder called drop.
  - task: CopyFiles@2
    displayName: Copy DACPAC
    inputs:
      SourceFolder: "$(Build.SourcesDirectory)/MyProject.Database/bin/$(configuration)"
      Contents: "*.dacpac"
      TargetFolder: "$(Build.ArtifactStagingDirectory)/drop"

  # Publishes the contents of the drop folder into an artifact
  - task: PublishBuildArtifacts@1
    displayName: "Publish artifact"
    inputs:
      PathtoPublish: "$(Build.ArtifactStagingDirectory)/drop"
      ArtifactName: # Artifact name goes here
      publishLocation: container

Releasing to SQL Azure

Once the pipeline has run you should have an artifact coming out of it that contains the dacpac file.

To deploy the dacpac to SQL Azure you need to create a release pipeline. You can do this within the build pipeline, but personally I think builds and releases are different things and should therefore be kept separate, particularly as releases should be promoted through environments.

Go to the Releases section in Azure DevOps and click New, then New release pipeline.

There is no template for this kind of release, so choose Empty job on the next screen that appears.

On the left you will be able to select the artifact getting built from your pipeline.

Then from the Tasks drop down, select Stage 1. Stages can represent the different environments your build will be deployed to, so you may want to rename this to something like Dev or Production.

On the Agent job, click the plus button to add a task to it. Search for dacpac and click the Add button on Azure SQL Database deployment.

Complete the fields to configure which DB it will be deployed to, using your own server, database, and login details.

And that's it. You can now run the pipelines and your SQL Project will be deployed to SQL Azure.

Some other tips

On the Azure SQL Database deployment task there is a property called Additional SqlPackage.exe Arguments. This can be used to specify things like whether loss of data is allowed. You can find the list of these at https://docs.microsoft.com/en-us/sql/tools/sqlpackage/sqlpackage?view=sql-server-ver15#properties
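For example, to make a deployment fail if it would lose data, and to stop objects missing from the source being dropped, you could pass something like this (both are standard SqlPackage publish properties):

/p:BlockOnPossibleDataLoss=true /p:DropObjectsNotInSource=false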

If you are deploying to multiple environments you will want to use variables for the server details rather than having them on the actual task. This will make it easier to clone the stages and have all connection details configured in one place.
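As a sketch, if you define stage-scoped release variables with names like SqlServerName and SqlDatabaseName (the names here are just examples), the task fields can then reference them like so:

Azure SQL Server: $(SqlServerName)
Database: $(SqlDatabaseName)
Login: $(SqlLogin)
Password: $(SqlPassword)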