Debugging deployed Azure Web Apps with VSTS Symbol Server

Obviously, debugging an already deployed application, whether in Azure or any other environment, is something we should only use as a last resort; I would always prefer to reproduce the situation with a local test and fix it there. But sometimes we need to debug an already deployed application, and it is never as easy as pressing F5 in Visual Studio. We must connect to remote processes, make sure we have the right version of the code and, more importantly, be able to match the binaries with the code we have, because surely you always build your applications in Release configuration to deploy them (and if not… run, run and do it).

Having the right version of the code is easy: we have branches, tags, and other tools that allow us to locate the right version. Attaching to remote processes is maybe a little more complex, but fortunately we have the Remote Debugging tools, and with Azure Web Apps we can even enable remote debugging directly from Visual Studio or the portal and connect automatically; we will see this later.

To match the binaries with the code we have symbols, but we need the right symbols for those binaries, which is something we can get via a Symbol Server. You can set one up yourself, but luckily we now have a Symbol Server included in VSTS. It is still in preview, but it is rather interesting to set up and well worth it. Let's start with how to set it up.

Disclaimer: I'm not digging into how to create a build definition or how to deploy an application to an Azure Web App using Release Management, so if you are not familiar with these tasks, get familiar with them first, or just leave me a comment if you would find them interesting for future blog posts.

Disclaimer II: You will need Visual Studio 2017 updated to the latest update for this.

Enabling the feature

As a preview feature, we must first turn it on for our account, or ask someone with the needed privileges to turn it on: click on your account icon at the top right, select Preview features, and enable it for the account.

image

But this is not a stand-alone feature: it requires the Package Management extension from the Visual Studio Marketplace, which, remember, is not free, but the symbol server added to Package Management is worth the price.

Publishing symbols

Once the account is set up, we can start publishing our symbols. Usually (if not… again… run) we have one build and n deployments, so this is something we do during the build. Let's edit the build definition used to generate the deployment artifacts and add the Publish Symbols task right after the build step.

image

By default the task is added as version 1.*, but we will select the 2.* (preview) version, and in the Symbol server type parameter we select Symbol server in this account… I will remind you again of the need for the Package Management extension.

image

This is the only change we need to make in our builds or releases, so let's go to the next step.

Configuring Visual Studio to consume VSTS Symbol Server

We must configure Visual Studio for a couple of things: instruct it to debug using symbol servers, and tell it which symbol servers to use. We will do both from the Tools / Options screen.

For the first one, go to the Debugging section and disable Enable Just My Code. Yes, disable it, so Visual Studio is instructed to debug external code.

image

Now, under Debugging / Symbols, click on the icon shown in the next picture, which will bring up the dialog to add a new Symbol Server from VSTS. You can leave the rest of the parameters at their default values.

image

On the next screen just select the VSTS account in which you set up the symbol server, and after that close the Options screen.

image

Debugging the Azure Web App

If we came this far, we have everything ready to start debugging. With the version of the code whose symbols we published and deployed open in Visual Studio, we will attach to the Web App process for debugging. Be sure to enable the breakpoints you need, and also notice that you will impact any user of the application, so better do it in a slot or in an environment with no real users.

In Visual Studio 2017 Server Explorer, make sure you are connected to your Azure subscription; in the App Service list, locate the resource group containing your app, locate the app (and the slot if you have one), right-click it, and select Attach debugger.

image

This can take a while, but once the attach finishes, Visual Studio will automatically connect to the Symbol Server and get the debug symbols, and you are ready to start debugging your web app with breakpoints and all the debugging features of Visual Studio and Azure.

If you receive an alert like this:

image

Remember to disable Enable Just My Code as mentioned earlier in this same post.

Keep it clean

After debugging, there is something I like to do: go to the Azure Web App Application Settings and disable the remote debugging switch. Next time you need to debug, following the previous steps, Visual Studio will re-enable it for you, but I just like to keep it off… just in case…

image
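If you prefer to script this cleanup, the same switch can be flipped with Azure PowerShell. This is just a minimal sketch, assuming the AzureRM module and an authenticated session, with hypothetical resource names; the portal setting shown above does exactly the same.

```powershell
# Minimal sketch, assuming Login-AzureRmAccount has already been run.
# Resource group and web app names are hypothetical placeholders.
$config = Get-AzureRmResource -ResourceGroupName "my-resource-group" `
    -ResourceType "Microsoft.Web/sites/config" `
    -ResourceName "my-web-app/web" `
    -ApiVersion "2016-08-01"

# Turn remote debugging off until Visual Studio re-enables it on the next attach
$config.Properties.remoteDebuggingEnabled = $false

$config | Set-AzureRmResource -ApiVersion "2016-08-01" -Force
```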

Hope you enjoyed this feature as much as I do.

Phased deployments with Release Management gates

When we enable continuous deployment in development teams, there are a lot of things we must take care of. First of all, continuous deployment is not about throwing new code or features at the users regardless of the quality or the value they give; it is about enabling a continuous flow of value from development to the users.

For this we must ensure the quality and the impact of any new code we are going to deploy. Apart from the usual automated tests during builds and deployments, there is something a lot of companies do, which is phased deployment: new changes are deployed to a particular subset of users until the new code has been "real life" tested enough to impact 100% of the users of your application. You have probably already experienced this with, for example, Windows Insiders; Twitter and Facebook deploy features to particular subsets of users; and even in VSTS you can opt in to new features before they become generally available to all users.

But then one of the most important questions is: how do you decide when to deploy to a broader set of users? And also, which mechanism are you going to implement to automate this?

These two questions can be resolved with the new (in preview at the moment of writing this) Release Management Gates feature. Gates are automated approvals we can set on any particular environment of a Release Management definition, evaluated automatically before or after the environment deployment. They are re-evaluated at a specific interval until they pass, or until they time out if every check fails.

Out of the box, gates can be set on a variety of things, and new ones can be created, like this example based on Twitter sentiment:

  • Azure Functions: the gate calls a particular Azure Function, sends it a pre-defined message (defined in the gate configuration), and waits for the response, even being able to parse the response to check everything went OK (a minimal sketch of this kind of check follows this list).
  • Invoke REST API: similar to the previous one, but calling any particular REST API.
  • Work Items Query: checks whether a particular work item query has grown its count of items. Think, for example, of a bugs query in which you decide the gate has failed if the bug count grows over a particular threshold.
  • Azure Monitor: the one I want to explain in this article. It checks one or more alerts defined in Azure to see whether they have been raised. Think, for example, of an alert on performance degradation or on the number of errors in a particular environment, checked before deploying to the broader set of users.
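To make the Azure Function / REST API gates a bit more concrete, this is a minimal sketch of the kind of check they perform: call an endpoint and evaluate the response. The URL, payload, and response fields are hypothetical; the gate configuration lets you express an equivalent condition on the parsed response.

```powershell
# Minimal sketch of the kind of check an Azure Function or REST API gate performs:
# call an endpoint and evaluate the JSON it returns.
# The URL, function key, payload and response fields are hypothetical.
$uri  = "https://my-gate-function.azurewebsites.net/api/ReleaseGate?code=<functionKey>"
$body = @{ release = "Release-42"; environment = "GeneralAvailability" } | ConvertTo-Json

$response = Invoke-RestMethod -Uri $uri -Method Post -Body $body -ContentType "application/json"

# The gate's success criteria would express an equivalent condition on the parsed response.
if ($response.status -eq "succeeded") {
    Write-Output "Gate would pass"
}
else {
    Write-Output "Gate would be re-evaluated at the next sampling interval"
}
```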
Show me the code boxes

First of all, we must enable gates in our preview features: in your Visual Studio Team Services account, click on your profile and select Preview features.

image

And in the preview features panel, enable Gates for the account (or just for you).

image

We start with a normal Release Management definition with two environments, one dependent on the other in sequence. Let's say the first one is for early adopters and the second one is general availability. I agree this is a great simplification of any real setup, but it is enough for this example.

image

Now let's inspect the GeneralAvailability environment pre-deployment conditions and enable Gates as a pre-deployment approval. Clicking on Add, we can see the different choices we have; for this example we will go with Azure Monitor.

image

There we also have the delay before evaluation starts, which is the time that has to pass between the previous environment deployment and the first check of the gate. Once we add the new gate, we have to fill in its information:

  • Display name for the gate.
  • Azure subscription connection; if we don't have one already, we need to set up an Azure connection via service endpoints.
  • The name of the resource group in which the resource exists in the Azure subscription.
  • The type of the Azure resource; we can choose between Application Insights, App Service, Storage Account, and Virtual Machines. In this demo we go with an Application Insights resource.
  • The name of the resource.
  • The alert or alerts we want to monitor. The alerts must already exist in the chosen Azure resource, but we will see this later in this post.
image

When we have this information filled in, if we keep scrolling down on that same screen, we can set several options that apply to all the gates.

image

First we have the timeout, which is the time after which the deployment for this environment is marked as failed if the gates have not passed, so we cannot go on with the environment. The sampling interval is the period of time between each check of the gate. By default these are 15 and 5 minutes respectively (so, with the defaults, the gate is evaluated roughly three times before timing out), but they can be longer, even days, so you have enough time to let the early adopters environment (for example) run before going ahead.

Also, in case there are manual approvals before the deployment, you can select between three different options (as seen in the image): manual approvals must be done before the gates start being checked, a manual approval is needed only after all gates have passed, or a manual approval is needed after each gate.

With this we would have the gate configured, but in case you are not familiar with Azure alerts, just one more thing. For this example we chose Azure Monitor against an Application Insights alert, so before going on with all of this I had my Application Insights resource created and configured for the selected app, in this case for the EarlyAdopters environment (if you are not familiar with Application Insights, check it here).

image

And if we click on Alerts, we can see the configured alerts and go to the specific alert configuration, the one we selected in the Azure Monitor gate.

image

In this case the alert will be raised if more than 2 errors have occurred in the application in the last 5 minutes.
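In case you prefer to script the creation of an alert like that one instead of using the portal, here is a rough sketch with the classic AzureRM alert cmdlet. Treat it as an assumption on my side: the resource names are placeholders, and the exact parameter names and metric name can vary with your AzureRM module version, so double-check them against the cmdlet help.

```powershell
# Rough sketch (classic AzureRM alert cmdlet): raise an alert when more than 2
# exceptions occur in a 5 minute window on the EarlyAdopters Application Insights resource.
# Resource names, metric name and parameter casing are assumptions; verify with Get-Help.
Add-AzureRmMetricAlertRule -Name "EarlyAdopters-too-many-errors" `
    -Location "East US" `
    -ResourceGroup "my-resource-group" `
    -TargetResourceId "/subscriptions/<subscriptionId>/resourceGroups/my-resource-group/providers/microsoft.insights/components/earlyadopters-ai" `
    -MetricName "exceptions/count" `
    -Operator GreaterThan `
    -Threshold 2 `
    -WindowSize 00:05:00 `
    -TimeAggregationOperator Total
```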

When we finish this configuration, we start deploying the application to the early adopters environment and hopefully users start using the new features or version of the application. After the configured delay, VSTS will check the alert: if the alert has not been raised, it will continue with the next environment deployment; if it has been raised, it will keep waiting for the next check of the gate, until the configured timeout occurs or the gate passes.

I hope you like gates as much as I do. As a conclusion: when working with phased continuous deployment it is important to establish which gates define how to move from one phase to another, and afterwards configure them as you need and wire up your phased deployment with VSTS Release Management.

Work Items bulk edit with templates on VSTS

There are some occasions in which you need to apply the same changes to multiple work items, not only once but several times during a project. Most of you probably already know the bulk edit feature: with several work items selected, just right-click and select Edit (sorry, I had to protect the innocent in the captures):

image  image

This allows you to edit all the selected work items and apply the changes to the fields you selected. The only issue is when you need to do it several times and always apply the same values to the same fields, as it becomes a little tedious.

  1. So we have work item templates. We start from the same point: select several work items (in this case all of them must be of the same type, but I will explain this later), right-click, and select Templates / Manage:
    image
  2. This brings us to the template management screen, where you will see that we can define as many templates as we want for each work item type; that is the reason I said all selected work items must be of the same type, as you apply a template for a particular type:
    image
  3. Clicking on the New template button opens a screen where you state the values for the different fields of this template:
    image
  4. When you save it and go back to the list of work items (you will need to refresh the browser window), select the work items you want, right-click, and now, under the Templates option, you will have the newly created template; once applied to the selected work items, it applies the values to the fields defined in the template:
    image

As you can see, templates can simplify our editing a lot when moving work across teams, organizing backlogs, bugs, etc., so go and check which templates you need and create them. Just remember they must be defined per work item type; maybe that is a small "con", since with plain bulk editing we can select different work item types, but for repetitive editing, templates are far more powerful.
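As an aside, if the same repetitive edit ever needs to be scripted rather than clicked, the underlying field updates can also be sent through the work item REST API. This is only a minimal sketch under assumptions: the account name, personal access token, work item IDs, and the field value are all hypothetical.

```powershell
# Minimal sketch: apply the same field value to several work items via the VSTS REST API.
# Account, PAT, work item IDs and the field value below are hypothetical.
$account = "myaccount"
$pat     = "<personal access token>"
$headers = @{
    Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))
}

# JSON Patch document setting the same iteration path on every selected work item
$patch = '[ { "op": "add", "path": "/fields/System.IterationPath", "value": "MyProject\\Sprint 3" } ]'

foreach ($id in 101, 102, 103) {
    Invoke-RestMethod -Uri "https://$account.visualstudio.com/DefaultCollection/_apis/wit/workitems/${id}?api-version=1.0" `
                      -Method Patch `
                      -Headers $headers `
                      -Body $patch `
                      -ContentType "application/json-patch+json"
}
```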

Test and feedback extension

Earlier in October Microsoft announced general availability of the Test and Feedback extension for Google Chrome (and no, there is no Edge version yet). This extension was previously called the Exploratory Testing extension, in case you already tried it before.

I have been using this extension with some customers since the early versions and, honestly, after seeing how it has evolved, I must send a lot of kudos to the team. It is a great way to share findings on bugs and tests, but also to gather feedback. Basically this extension allows teams (and stakeholders) to record sessions in web applications and share the results with the rest of the team as several work item types: bugs, tasks, or even test cases with the steps from the exploration.

I find this a great way for stakeholders who are not used to writing long descriptions or acceptance feedback to give quick feedback and communicate unexpected behaviors quickly to developers. It also allows developers to receive quick and actionable feedback directly within VSTS and TFS.

As you can see in the next image, when we choose to create a new work item after recording, it stores all the steps done during the exploration, with automatic image captures:

image

It is also remarkable that you can add additional information: notes, screenshots, and even video with voice recording!!! (and yes, my customers especially love these last two… they save a lot of writing time). It also has two working modes:

  • Standalone: you can use it without even a connection to TFS/VSTS, and after the session it produces an HTML report. I haven't used this mode much, as all of my customers are on TFS or VSTS, so I haven't found it very useful for my scenarios.
  • Connected mode: you connect to a team project on TFS or VSTS for the session, everything gets recorded in the exploratory testing sessions for TFS, and you can also create bugs, tasks, or test cases with data from the session. This is the mode I have been using almost all of the time.

And how does this work? Well, first install the Google Chrome extension, and then you will have its button on your Google Chrome toolbar; just click it and select the mode the first time (you can change it later):

image

Then just click on the play icon and start recording. As there are already very good documents on the Visual Studio ALM Blog, let me summarize them for you:

  • General announcement and overview: general data about the extension and the main GA announcement.
  • Capture information as screenshots, video, notes, page load data and more: interesting to discover how to use the different types of information you can capture in your sessions and add later to work items.
  • Artifact creation: how to create the different types of work items or reports (in standalone mode) after the sessions, including the additional information.
  • Team collaboration: information about how to use the two different modes with different access levels to TFS and VSTS, and how to consult the results afterwards in the form of reports, sessions, or captured artifacts. To fully understand this one it is important to read the two previous ones. Especially interesting is how to use it with stakeholders who have limited access.

So go ahead, install the extension, and start sharing findings between teams and stakeholders; I'm sure you will like it as much as I do.

Sample VSTS Build and Release Management task for Yarn package manager

This weekend I wanted to try the new package manager Facebook created: Yarn. One of its bigger claims is to be faster than plain npm, although it uses the same package repositories. As far as I know, this is achieved through improvements in how files are transferred, along with a local package cache, so you don't always have to go to npm to restore a package you already restored previously for another project.

As everything worked smoothly locally, I decided to create a new build task for VSTS so I could use it in my builds. First point: it is only intended to be used with your own agents, for several reasons. First, it has a demand which requires yarn to be installed (hosted agents do not have it… yet…). Second, as said previously, it uses a local cache to be faster, so at this moment it made no sense to me to prepare something for hosted agents; remember hosted agents are created on the fly, so unless you need to restore packages for several projects per build (the next scenario I will try to cover), Yarn will not add much (well, I agree it also improves download speed, but…).

So, for this first version, I just looked at the code of the current npm task and modified it to use Yarn; it is pretty simple and straightforward. The code is in my GitHub account: https://github.com/lfraile/YarnTask
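The actual task in the repository mirrors the built-in npm task, but just to illustrate the idea, here is a minimal PowerShell sketch of what such a task script boils down to; the parameter names are hypothetical and this is not the task's real code.

```powershell
# Minimal sketch of what a Yarn task script boils down to (not the real task code).
# Parameter names are hypothetical; the real task mirrors the built-in npm task.
param(
    [string]$WorkingFolder = ".",
    [string]$Arguments = "install"
)

# The task declares a "yarn" demand, so yarn is expected to be on the agent's PATH.
Push-Location $WorkingFolder
try {
    $yarnArgs = $Arguments -split " "
    & yarn $yarnArgs
    if ($LASTEXITCODE -ne 0) {
        throw "yarn failed with exit code $LASTEXITCODE"
    }
}
finally {
    Pop-Location
}
```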

Feel free to look at it, modify it, play with it, and install it; I will keep reviewing it for improvements.

PS: Just as I was writing this post, I noticed something I should have looked at before… there is already a Yarn task in the Marketplace, hehehe. As I mention in my talk about creating custom tasks: always check whether something is already available before building it… so, well, my fault… but I will still keep trying to improve mine.

Creating custom tasks for VSTS Team Build and Release Management slides

Recently I gave a talk about creating custom tasks for Team Build and Release Management for VSTS and also for Team Foundation Server. As I'm starting to write here again (with lots of article ideas in my mind), I thought it would also be a good idea to leave links to the content here.

It is in Spanish, but there is a bunch of useful links in the PPTX. It was mainly focused on creating tasks with JavaScript, but I also cover TypeScript (both of them are JavaScript executed with Node.js at the end of the day) and PowerShell.

So here are the slides:

 
And here is the source code used in the demos; it is a very basic set of tasks for .NET Core projects to restore packages, build, and publish:
 
Have fun.
 

Configure Work Item Field as team field in Team Foundation Server

Recently, working with a customer, due to their team and project structure and the reporting needs for that structure (a correct structure, BTW), we came to a situation in which dividing the teams by areas was not so useful and didn't help our work item and reporting strategy; as you have probably already observed, it is not easy to create per-team reports with complex structures, since areas are tree views.

So I came across this article, which helps in creating a work item field to define the teams. I found it very useful for this and other situations; being honest, I find it even more comfortable than using areas for this.

For what is coming in this blog post I assume you already know how to divide work between teams in the same team project and feel comfortable with TFS work item customizations. Also, this article is entirely based on Team Foundation Server on-premises.

Basically the procedure (go to the article for the details) is:

  1. Define a global list for your list of teams.
  2. Add a new field to Features, Epics (the article doesn't mention these first two, but we also added it to them), Product Backlog Items, Bugs, Tasks, and Test Plans; at the end of the day, any work item type which can be used to work in backlogs. It is important to define the field with the same name in all work item types, and also to make it reportable as a dimension if you plan to use it in reporting.
  3. Specify the Teams global list as the allowed values for this field.
  4. Use the witadmin command line tool to export the process configuration, and modify it to specify the new field as the one defining the team the work item belongs to (<TypeField refname="MyCompany.Team" type="Team" />); a rough sketch of this step follows the list.
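As a rough sketch of step 4 (the collection URL, project name, and field reference name are placeholders; check the witadmin help for your TFS version):

```powershell
# Export the process configuration, add the TypeField element, and import it back.
# Collection URL, project and file names are placeholders.
witadmin exportprocessconfig /collection:http://mytfs:8080/tfs/DefaultCollection /p:MyProject /f:ProcessConfiguration.xml

# Inside the exported file, under <TypeFields>, add:
#   <TypeField refname="MyCompany.Team" type="Team" />

witadmin importprocessconfig /collection:http://mytfs:8080/tfs/DefaultCollection /p:MyProject /f:ProcessConfiguration.xml
```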

When you go to administer your team project (all of this is done at project level) you will see the possibility to define the value of the new field for each team, so the backlogs and boards are filtered correctly. Make sure to specify this value for all of your teams; if you don't, you will receive alerts saying your team is not correctly configured. Also remember that a particular team in TFS can own several values of this field, which is particularly useful for product owner or management views.

image

There is also another point in the article which allows you to specify, from the product backlog view, the team you want the work item to belong to during its creation, something like the next image. But this configuration brought me a small problem: when someone from one team selects to create the work item for a different team, the backlog view throws an error as if it weren't able to save the work item, so I disabled this configuration.

image

As a final conclusion: well, I haven't had this solution in production for long, let's say, but at this moment I find it very useful, as it allows me to improve some reports and queries so I can clearly see the team a work item belongs to, without any trick to truncate the area path or similar to make the information easy to filter and more readable, especially in reports.

If you are going to need this or you will follow the article, please test it thoroughly before going live; work item customizations are always tricky, especially at this level where we are modifying the default behavior of TFS.

I also tested it in a TFS "15" preview environment and it worked as expected, so it should keep working going forward to the next version of TFS.

Code Search extension for VSTS and Team Foundation Server “15”

Recently Microsoft made generally available a very interesting extension in the Visual Studio Marketplace: the Code Search extension. Installing it on your VSTS is as simple as going to the previous link and clicking Install, then selecting the Visual Studio Team Services account in which you want to install it; of course, you need to be an administrator of the account to install it.

To install it on Team Foundation Server "15", it is just as simple: you select it during the installation phase of TFS.

InstallCSOnTFS

But what does this extension enable? Once you install it, in your team projects you will have a search box in the menu bar in which you can select to search code:

image

image

When you select code, you will be presented with some of the main options to search for code.

But there are even more options you can check in the help page.

The interesting thing about this Code Search extension is that it does not only look for text inside code files (that would be easy); it searches across all projects, or just the ones you want.

But it also allows you to apply filters, for example to look only for classes named like the term you are searching for, or for comments, references, and a lot more (something like class:Invoice AND comment:todo; check the help page for the full filter list). I'm really impressed by how rich it is. Of course, you can refine your queries with AND, OR, and NOT operators.

It also integrates with history, so when you find what you are looking for, you can see its history, compare with previous versions, and even see annotations within the code.

As a conclusion: a pretty nice extension you can install and start using on your VSTS to search for code, going far beyond the usual find-in-files functionality.

As for the technology behind it, if you look here you will see it uses:

  • Elasticsearch
  • Oracle Server JRE; yes, for TFS you will need to install it on the server, but you can install Code Search on a separate server. Of course, for VSTS you don't need to care about this.
  • MarkdownDeep.
  • Roslyn (hype increasing on this one)
  • ANTLR

As for the languages it supports, currently it covers C#, C++, C, and VB.NET, and they recently added support for Java as well; it is fair to expect they will keep updating the list of languages.

So go ahead, install it and try it; you can find more options and documentation here.

I’m back! Executing Entity Framework migrations from VSTS Release Management

Uff, it has been a long, long time since I last wrote here… but several people have asked me lately about my blog, so here I go, and rather than write a bla bla article about that gap and why I haven't written much, let's get technical.

In this article I assume you have basic knowledge of creating Team Build definitions and Release Management definitions. I'm not covering these topics here, as it would make the article too long to read. If you are not familiar with them, I recommend reading about it here: https://www.visualstudio.com/en-us/docs/release/overview

When we deploy applications with a database, there are several approaches we can take: usually we deploy differential scripts to update the DB, or use DACPACs and other technologies, but we can also use Entity Framework migrations, although I still have to think about whether it is the best way to do it…

Usually migrations are executed by Entity Framework initializers, but if we need to execute them before deployment, so we can be sure the DB is updated even before we deploy our application, there is a tool named migrate.exe included in the Entity Framework NuGet package.

There are a couple of steps we need to take to be able to execute this tool from Release Management:

  1. Build our migrations assembly.
  2. Copy the migrate.exe tool from the tools folder of the Entity Framework NuGet package to the same directory as the migrations assembly.
  3. Execute the migrations.

Let's go with the first two steps, which we will do in a Team Build definition that publishes the results as artifacts for the Release Management definition. The build step is easy: usually our migrations assembly will be included in the solution we are building to deploy our application; if not, include it in the solution or build it in a separate build step in the same build definition. No tricks in this one.

Copying the migrate.exe tool involves a couple more steps. First I copy the binaries resulting from building the migrations project to a separate folder we will publish as an artifact. This is done with a Copy Files task in the build steps, configured this way:

image

The parameters:

  • Source folder: we point to the output binaries of our migrations assembly; notice I have oversimplified the path with /MigrationAssembly/, so be sure to include the full path to it. I have used a couple of variables: $(build.sourcesdirectory), a system variable which points to the root of the sources downloaded by the agent, and a custom variable $(buildConfiguration), which holds the current build configuration (i.e. Debug, Release, or whatever you use).
  • Contents: ** so we copy all the build output.
  • Target folder: I'm copying to a new folder automatically created under the artifacts staging directory, configured with the system variable $(build.artifactstagingdirectory). You don't need to create a complex folder structure under it, but be sure to at least create a structure which allows you to separate the different results and artifacts.

Next step, copy the migrate.exe file; again we use a Copy Files task:

image

With the parameters:

  • Source folder: we point to the NuGet packages folder, which is usually at the same level as the solution we are building, and more precisely to the tools folder of the Entity Framework package (something like packages\EntityFramework.<version>\tools, where migrate.exe lives); be sure to check this path carefully, as it is probably one of the trickiest of this configuration.
  • Contents: migrate.exe; well, no comments…
  • Target folder: I'm copying to the same folder where we copied the output of the migrations assembly in the previous step. This is very important for all of this to work, so be sure to check it twice.

And the last step in the build is publishing the artifacts, usually as simple as this one, which publishes the whole folder structure we have in the artifacts directory as a server artifact:

image

The final build steps will look like this:

image

Once we have done this, we can queue the build definition and, when it finishes, just check that in the resulting artifacts you have the binaries of the migrations assembly along with the migrate.exe tool in the same folder.

For the release to execute migrate.exe it is just a simple matter of running a command line; of course, one gotcha is to link the build definition to the Release Management definition (again, I assume you are already familiar with this).

So within the desired environment of our release definition, we just add a Run on agent task of type Run script. One important point: remember this task runs on the agent, so you need to make sure your agent can communicate with your SQL Server or SQL Azure.

We configure this task this way, before the task that deploys the application:

image

The parameters we are using:

  • Path: here we configure the path to migrate.exe within the build artifacts we are using; you can take advantage of the "…" button to look for it. Again, remember: you must have linked your Release Management definition to your build definition for this to be available.
  • Arguments: there are different arguments you can use here; you can even just point to a *.config file with all the values (check the full documentation). In this case I just pointed to a custom variable containing the connection string (be sure to make it secret to protect it, hehe), and since I pointed to a connection string, it is mandatory to also set the connectionProviderName parameter, which in my case is just SQL Server. A sketch of the resulting command line follows this list.
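Just to make the arguments more concrete, this is roughly the command line the task ends up running on the agent; a minimal sketch assuming hypothetical artifact paths, assembly name, and connection string (in the real task the connection string comes from the secret release variable):

```powershell
# Minimal sketch of the command line the Run script task ends up executing on the agent.
# The drop path, assembly name and connection string are hypothetical placeholders.
$migrationsFolder = "$env:SYSTEM_DEFAULTWORKINGDIRECTORY\MyBuild\drop\migrations"

& "$migrationsFolder\migrate.exe" MyApp.Migrations.dll `
    /connectionString="Server=tcp:myserver.database.windows.net;Database=MyAppDb;UID=deploy;PWD=<secret>" `
    /connectionProviderName="System.Data.SqlClient"
```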

Some important gotchas here: be sure to test your migrations thoroughly, and be sure to have appropriate database backups for rollback scenarios. This is not easy, and you really have to take care of it, so have different Release Management environments to test all the deployments and migrations.

Once you have this, the next time you run this build and release definition, your database will (hopefully, if you have done everything correctly) be updated to the latest Entity Framework migration.

And hopefully, see you around here again with more articles.

[TFService] Where is the Kanban board?

Hi, I'm not even going to comment on how long it has been since I last wrote… straight to the point.

Since people started talking about the new Kanban board of Team Foundation Service, there is a question I have been asked several times already: where is the project template with the Kanban board? And the answer is: nowhere.

Indeed, the Kanban board is NOT a new template; it is a new board available in Team Foundation Service and in Team Foundation Server 2012 Update 1 (yes, you need Update 1 of TFS 2012), and getting to it is very simple. From the home page of a team project's web access we can, on the right side, under Activities, go to the Backlog:

Backlog TFSService

Once we are in the backlog, we can go to the Board, where we will find the Kanban board.

image

Once in the Kanban board, we can see that we can customize the WIP limit of the Active and Resolved columns, and that's it: we can start using our Kanban board, moving items between columns. One thing I miss, by the way, is being able to reorder them by priority… maybe we will see that in future updates. Also, in the top right corner, we can click and get the Cumulative Flow report, that is, the flow of work we have been doing and how much we have in each of the states.

image  image

So now you know where to find the Kanban board.