Password is expired when using Visual Studio Release Management

Today I was investigating an issue on Visual Studio Online Release Management: a deployment error related to the Azure credentials used for the deployment operation.

Password is Expired summary

Going into the log details, the error happens on a Resource Manager task. The logs show that the password of the user account used to connect to Azure has expired.

Password is Expired

And here is something worth highlighting, because the configuration of the service connection between VS Online and Azure has evolved in recent months to support both the Azure Classic and Resource Manager models. Also note that the tasks you can configure on Visual Studio Release Management can work with one of these models or with both. Let me show you with an example.

Configuring Azure service connection on VS Online Release Management

To set up the Azure connection on Release Management, you need to click on “Manage Project”:

ManageProject

Once there, go to the “Services” tab and, when clicking on “New Service Endpoint”, you will see two ways to connect to Azure: Azure Classic and the new Azure Resource Manager. A few months back there was only one option supporting both Classic and RM scenarios, but this changed later. The main difference now is the way each connection authenticates with Azure:

SNAGHTMLd1dcd0b

So a big difference here is that Azure Resource Manager connections use a service principal, while Azure Classic connections use a user principal. This is important for our case, because the “Password is expired” error message we got refers to a user principal, not to a service principal, where the “password” and “expiration” concepts are different.

Note that, depending on the task used on Release Management, you can use either connection, or only the Classic connection. For example:

Fixing the “password is expired” issue

Once we understand these concepts, we just have to fix the password expiration issue. The solution has two steps:

  1. Change the password of the user principal and then update the Azure Classic connections with the new password. Note: you should use a long and very strong password for these user principals because of the second step; service principals come into play so we can stop using user principals in the future.
  2. To prevent this from happening again, change the password expiration policy for this account so that the password never expires.

The first step can easily be done manually. For the second step, we need the help of the Azure AD PowerShell module.

Install Azure AD PowerShell module

This MSDN article contains all the information related to managing Azure AD via PowerShell: https://msdn.microsoft.com/en-us/library/jj151815.aspx. The Azure AD module is supported on the following Windows operating systems with the default version of the Microsoft .NET Framework and Windows PowerShell: Windows 8.1, Windows 8, Windows 7, Windows Server 2012 R2, Windows Server 2012, or Windows Server 2008 R2. You need to install two utilities:

Connecting to Azure AD

Once you have installed both utilities, just open a PowerShell command prompt to start using the Azure AD cmdlets and connect to the service:

# Prompt for the Azure AD admin credentials
$msolcred = get-credential
# Connect to Azure AD using the MSOnline module
connect-msolservice -credential $msolcred

Obviously, you need to enter admin credentials if you want to use the administrative cmdlets later, like the ones that change the password expiration policy.

Change the user principal password expiration policy

Once logged in, we can just change the password expiration policy for the user with this script:

# Gets the current policy value
Get-MsolUser -UserPrincipalName "releasemanagement@mytenant.onmicrosoft.com" | select PasswordNeverExpires

# Changes the policy to never expire
Get-MsolUser -UserPrincipalName "releasemanagement@mytenant.onmicrosoft.com" | Set-MsolUser -PasswordNeverExpires $true
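
If you want to apply the same policy to every user principal in the tenant instead of a single account (a variation of the script above, not something required for the Release Management scenario), you can pipe all users into Set-MsolUser:

# Sets the policy to never expire for every user in the tenant
Get-MsolUser -All | Set-MsolUser -PasswordNeverExpires $true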

There is a good blog post about this at https://azure.microsoft.com/en-us/documentation/articles/active-directory-passwords-set-expiration-policy/

What happens with service principals? Do their passwords never expire?

Service principals work in a different way. When you create a service principal, you can specify the StartDate and EndDate of the service principal credential (by default, StartDate = Now and EndDate = Now + 1 year). You can change the EndDate in a similar way (it is not a boolean flag; you need to set the EndDate).
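
As an illustration of how you would review or extend a service principal credential, here is a minimal sketch using the same MSOnline module; the service principal name, secret value and dates are placeholders that you would adjust to your own tenant:

# Lists the existing credentials (and their EndDate) for a service principal
Get-MsolServicePrincipalCredential -ServicePrincipalName "vsonline-deployment" -ReturnKeyValues $false

# Adds a new password credential valid for the next two years
New-MsolServicePrincipalCredential -ServicePrincipalName "vsonline-deployment" `
                                   -Type Password `
                                   -Value "a-long-and-very-strong-secret" `
                                   -StartDate (Get-Date) `
                                   -EndDate (Get-Date).AddYears(2)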

For more information, visit the MSDN article https://msdn.microsoft.com/en-us/library/dn194091.aspx

Azure Bootcamp 2016: wrap-up and see you next year!

After the hangover left by the Global Azure Bootcamp 2016, all that is left is to thank everyone who contributed their grain of sand to what has been the best edition of all the bootcamps we have run over the last few years. We started in 2014 with close to 140 attendees, last year we passed 200, and this year the figures have skyrocketed:
· 450 people registered one month before the event date, reaching the venue's capacity limit. In the end, around 390 attendees in total
· 3 session tracks with a total of 21 sessions chosen from a pre-selection of 60 talks. All the session resources are already linked from the agenda at http://azurebootcamp.es
· Live streaming through Channel 9, with the recordings published on the same channel: http://aka.ms/azurebootcampstreaming
· 13 sponsors, doubling the number of the last edition
· 1 global lab (RacingLab with its backend on Azure) with 365 simultaneous participants
· 187 locations in 62 countries around the world
We have received very positive feedback and have taken note of some issues to improve for the next edition; there is no stopping this now. We have also uploaded some photos of the event, already linked from the Azure Bootcamp website, including the photo finish of those who stayed past 19:00 for the final raffle (we have El Preguntón recorded on Channel 9!).
Once again, thank you all for your attendance, your collaboration and, above all, for making us have a great time. See you in 2017.
clip_image002

Azure Bootcamp 2016: one week until the big event

bootcamp2016-150x101
It is hard to believe, but a year has already gone by since the last time we spent a great Saturday learning about the latest advances and new features of Azure, a day that left us all with a good taste in our mouths after sharing a global experience, and eager to repeat it in 2016.
And that moment has finally arrived! Next Saturday, April 16, we meet again, and the numbers have skyrocketed.

A global event

As you already know, the Azure Bootcamp is an event created by and for the community, where volunteers around the globe offer to share knowledge through technical sessions, answering questions and covering the latest news about Microsoft's public cloud platform. This year it is being held in almost 180 cities around the globe, so if there is one near you, don't miss the opportunity. You can find more information about all the cities where the event is held at this link: http://global.azurebootcamp.net/locations/
EventoGlobal
The event we have prepared for Madrid will not be short on surprises, and following the same approach as last year we will have:

  • 3 tracks of technical sessions, ranging from beginner level to the most advanced.
  • An “Ask the Expert” area where you can ask those who know the platform best first-hand and get your questions answered
  • Racing Lab: see details below
  • New this year: hands-on labs with instant prizes for completing them
  • Raffles of licenses for Azure-related software to make your life easier
  • The presence of the companies currently most involved with the platform, so if you have a project you don't know how to tackle, or you are looking for a new professional career in the cloud world, this could be your big opportunity
  • Oh! There will also be coffee and sandwiches 🙂

All the buzz around an event that has done nothing but grow over the years meant that, one month before the date, we filled the capacity we had initially planned; even after getting more sponsors and being able to increase the number of participants (to double that of the last edition), we had to hang up the “sold out” sign. We have done everything in our hands to free up seats, which have been booked again almost immediately. But don't worry: we already have some ideas so you can take part in the experience in some way, both in the hands-on labs and in some of the prizes. Keep an eye on the @gwab_es Twitter account for more information.

Racing Lab

Those of you who attended previous editions will remember that a science lab was implemented; unfortunately, this year the global organization could not find an algorithm that met the expectations. But once again, the Global Azure Bootcamp Racing Game Lab will be available: a game designed to run on Azure so you can see the details of how it works and learn how millions of telemetry data points are processed and finally consumed directly from a simple web page with JavaScript.
The lab lets attendees from all over the world compete for the fastest laps in a 3D racing game. The game's back-end services are hosted on Azure and will process lap times, telemetry and achievements unlocked in the game.
Right now, organizers all over the world have started testing it so that it is ready for the day of the event, with more than 10,000 players around the globe and, for now, we are keeping the podium high 🙂
Highscores1
To learn more about the game, you can take a look at the following video by Alan Smith, who created and implemented it:

 

Local sponsors

Of course, holding this event with the leaders of the Microsoft Azure community in Spain, while keeping attendance free, would not be possible without the dedication and hard work of the speakers and organizers, and the financial contributions of other organizations that help fund its logistics.
patrocinadoreslocales

Global sponsors

We also want to thank the global sponsors for providing the icing on the cake with the software licenses that will make life easier for all of us who work with Azure.

And don't forget to get in touch with us at http://azurebootcamp.es or through the @gwab_es Twitter account with any questions you may have!

Troubleshooting DNN based websites with Application Insights Analytics

Hi folks! I am really excited to finally be able to talk about one of the areas we have been working on here at DNN Software over the last few months. If you haven't heard, some years back we started to deliver Evoq products (previously known as DotNetNuke Professional) as a service. This is a cloud-based Software as a Service solution called Evoq OnDemand, which runs on Microsoft Azure. We have been continuously evolving our solution, adapting to Azure platform changes and improvements such as the improvements to Azure App Service, SQL Database v12, Azure Active Directory or Redis Cache, to name a few. We give our customers an advantage by leveraging new tools as they appear.
image001
To share some numbers, we have already delivered more than 50,000 websites, including Evoq trials and production environments, and backed up around 400 terabytes of site information. While the log data size isn't huge, managing 160 GB of site logs per month is not easy from an operational point of view, especially when we need to troubleshoot performance issues on one of our customers' properties and try to find the root cause.
When an incident happens, our DevOps team needs to figure out in minutes what the cause is, and in a cloud-connected world the possible problem sources grow at the same rhythm that new service offerings appear: is it an underlying infrastructure issue? Is a recent DNN update the cause? Is it a 3rd party module? Is it a 3rd party connected service? We needed to add telemetry and instrumentation to every single part of our cloud infrastructure, and not only to customer properties but also to our backend automatic provisioning systems.
We have been covering our needs with NewRelic, which has successfully allowed us to dig into problems and solve operational issues while keeping an eye on the evolution of Microsoft Application Insights. Our monitoring needs kept growing, looking for aggregate views (i.e. how many websites are experiencing the issue we discovered in a customer log entry? How many websites are using this 3rd party module and experiencing performance issues?). So we kept trying other insight tools like NewRelic Insights and Splunk for more advanced scenarios. And during Q4 last year we saw a demo of what Microsoft was doing in this field to improve the existing Application Insights service. In that first demo we saw 70 terabytes of data filtered in almost real time, an advanced web tool for complex, lightning-fast queries, a desktop tool for multiple-account aggregate queries, and the ability to consume the queries from a Power BI dashboard. It sounded like the foundation of what we were looking for.

Preparing the Application Insights Analytics onboarding

As I mentioned before, our cloud infrastructure does not only serve customer websites and trials. Some years back I presented at Cloudburst our set of cloud services for tasks such as automated backups and restores, Evoq product updates, order processing and account management. These initial services have continued growing over the years, and we now have new services for page view calculation (Evoq OnDemand is available in page view tiers), IFilter index offloading for the Azure App Service environment, and others for background tasks. We also have continuous integration implemented for the nightly builds of Evoq and DNN Platform, which are deployed on Azure App Service. Having the ability to automatically send the Application Insights information to the Analytics store was the next requirement.
image003
The easiest path to having all the data available on Application Insights Analytics was to instrument each cloud service and website with Application Insights and start sending all the telemetry data. Once in Application Insights, all the information would be available for querying from Analytics using AQL (Analytics Query Language). So we finally worked on two areas:
1) Modifying all the worker roles (cloud services) to start sending the telemetry data to Application Insights. During the Connect(); event last year, the Azure Diagnostics integration with Application Insights was announced, available with the Azure SDK 2.8. This was really easy to implement just by following the steps mentioned in the blog post and deploying a new version of each worker. In just a few minutes we started to have all the telemetry available on the Azure Portal. Kudos to the team for making this so easy;
image005
2) Creating a new Application Insights monitoring provider to automate the Application Insights account provisioning and deployment on each website under our control. When we initially designed our backend monitoring services, we implemented a “monitoring provider” approach, starting with Pingdom and NewRelic implementations. A monitoring provider is just an integration point in our platform that supports methods like “install, uninstall, pause and resume monitoring”, helping us, for example, to pause all the alerts on a website during maintenance or update operations. Our internal Application Insights monitoring provider implements this interface, automatically provisioning the account and alerts as well as pushing a web deploy package using the Resource Manager API. We can also run these operations manually through our backend systems, through a web UI or by using our custom PowerShell cmdlets to provision and configure hundreds of Application Insights accounts with just a few lines of code:
image007
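To give an idea of what that provisioning looks like, here is a minimal sketch using the public AzureRM cmdlets rather than our internal cmdlets; the subscription, resource group, template file and parameter names are placeholders:
# Log in and select the target subscription (AzureRM module)
Login-AzureRmAccount
Select-AzureRmSubscription -SubscriptionName "Evoq OnDemand"

# Deploy an ARM template describing a Microsoft.Insights/components resource
# (appinsights.template.json is a hypothetical template exposing an "appName" parameter)
New-AzureRmResourceGroupDeployment -Name "ai-provisioning" `
                                   -ResourceGroupName "customer-site-rg" `
                                   -TemplateFile ".\appinsights.template.json" `
                                   -TemplateParameterObject @{ appName = "customer-site-ai" }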
We can then visit the Azure Portal and check what is going on with each website or service, finding performance issues and what is causing them: whether the problem is on the server side, in a dependency, or just a new skin the customer has applied to the website that is performing badly, as in the graph below, where the server response time was consistent but the page load time skewed upwards, indicating client-side problems.
image009
For every single web application, we are able to search not only by page views, requests, traces or exceptions; since we have implemented a custom logger for DNN, we can also search by DNN Event Log records or by the typical log4net data stored under /portals/_default/logs. We finally have one place where we can query all of it.

Advanced search using Application Insights Analytics

And once all the telemetry data is being sent to Application Insights, we can start running advanced queries using the new Analytics feature.
Application Insights Analytics is a powerful query engine for your Application Insights telemetry that uses a query language named AQL. Instead of nesting statements as in SQL, the language lets you pipe the data from one elementary operation to the next. We can filter all the raw telemetry data sent from each website by any field, including DNN Event Log records, execute statistical aggregations, and immediately show the results as raw text or with powerful visualizations.
As part of our automatic Application Insights provisioning, we create alerts for each resource being monitored. When we receive an alert, we use the tool to dig into the problem and find patterns with AQL. The UI allows us to save predefined queries and load them for later use.
image011
image013

Side benefits of Machine Learning running in the background: Proactive Detection

One thing that is amazing, and is getting better day by day, is Application Insights Proactive Detection. This feature notifies you about potential performance problems in your app by using “Near Real Time Proactive Diagnostics”. What you get are alerts on an abnormal rise in the failed request rate, and no configuration is required! It just works.
As an example, check this alert we received today. I was shocked by the information provided by the service and by how fast we got to the root of the problem.
In this case, a bot was requesting badly formatted URLs and causing an abnormal rise in the failed request rate. We detected the problem thanks to the stack trace provided in the alert, which arrived 15 minutes after the proactive analysis; we found the problem, created a patch, and the problem was gone.
Do you love it? Me too!
image015

Application Insights module for DNN Platform

If you also have a DNN-based website and want to get started with Application Insights and Analytics, I have published an open source module on GitHub that allows you to start sending all your website telemetry to Application Insights: page views, web requests, trace information (log4net log file contents), exceptions (including client-side browser exceptions) and DNN Event Log records.

Getting Started

The module is a DNN Platform extension that integrates Visual Studio Application Insights to monitor your DNN installation. To set up the module on your installation, follow these steps:
1. Provision a new Application Insights service following the guide at https://azure.microsoft.com/en-us/documentation/articles/app-insights-overview/. Ensure you choose "ASP.NET web application" for the "Application Type" parameter
2. Once provisioned, copy the "Instrumentation Key" available in the resource Essentials properties
image017
3. Now, from the Releases folder https://github.com/davidjrh/dnn.appinsights/tree/master/Releases, download the latest module package version ending in "…Install.zip" (the Source.zip package contains the source code, which is not needed for production websites).
4. Install the extension package in your DNN instance from the "Host>Extensions" menu like any other module
5. Once installed, a new menu under "Host (Advanced menu)>Application Insights" will allow you to paste the instrumentation key obtained in step 2. After applying the changes, you will start receiving data in Application Insights after a few minutes.
image019
Un saludo and happy coding!

Visual Studio AppInsights module for DNN

DNNLovesAppInsights
As a website developer or operator, I always need to know if my site goes down and get an alert, verify whether the site is performing well, or whether it is under attack. There are lots of tools today that give you insight into what is happening on your web deployment, and one I'm using more and more is Visual Studio Application Insights.

While AppInsights is still a “Preview” service (note that preview means it is not generally available, so no SLA is offered yet), you can start with a Free tier that probably fits the majority of small websites, and optionally start paying depending on the amount of telemetry data you send to the store. This gives you powerful insight and the tools to operate, diagnose and fix issues immediately.

“With great power comes…great number of alerts!” – David Rodriguez

I'm not going to start selling you all the benefits of using AppInsights versus another service such as NewRelic. I have personally been using both for a long time, and while NR has held the first position on my list of insight tools, I'm now getting very excited about what AppInsights offers today. Here are a few interesting highlights for DNN website developers and owners:

  • Monitor the usage and performance of live apps
  • Get immediate alerts on performance or availability issues
  • Get telemetry for existing web apps without redeploying
  • Use for a wide range of app types on devices, servers, or desktops
  • Monitor ASP.NET web apps hosted anywhere: on Azure, other cloud services, or on-premises servers
  • Search traces and exception logs for failure diagnoses (including DNN Event logs and log4net logs!!)
  • Track events, metrics, page views, users, crashes, dependencies, perf counters, and response times

AppInsights module for DNN

The only thing that is perhaps not easy, and not documented at all, is how to set up AppInsights on a DNN instance. While I have been doing the task manually for a while, I have finally created a simple module, available at https://github.com/davidjrh/dnn.appinsights, that in this initial version allows you to:

  • Easily set up AppInsights on a DNN Platform or Evoq installation as a Host user
  • Automatically send telemetry data to AppInsights:
    • Http requests information
    • Page views
    • Server and browser exceptions
    • Trace information including the log4net logs information
    • DNN event log entries with a new logging provider
    • Performance counters

There is still room for improvement, like adding a UI to specify which performance counters you want to add (currently you have to manually edit the ~/ApplicationInsights.config file for this task). Feedback is welcome! Pull requests are welcome!

And that is not all, since more features are continuously being added to AppInsights. Can you imagine a service that automatically learns how your site is being used and alerts you if an abnormal pattern is detected? If you would like to try it out, contact AppInsightsML@microsoft.com!

ProactiveDetection

Getting started

This module is a DNN Platform extension to integrate Visual Studio Application Insights to monitor your DNN installation. To set up the module on your installation, follow these steps:

  1. Provision a new AppInsights service following the guide at https://azure.microsoft.com/en-us/documentation/articles/app-insights-overview/. Ensure you choose "ASP.NET web application" for the "Application Type" parameter
    CreateAppInsights
  2. Once provisioned, copy the "Instrumentation Key" available in the resource Essentials properties
    InstrumentationKey
  3. Now, from the Releases folder https://github.com/davidjrh/dnn.appinsights/tree/master/Releases, download the latest module package version ending in "…Install.zip" (the Source.zip package contains the source code, which is not needed for production websites).
  4. Install the extension package in your DNN instance from the "Host>Extensions" menu like any other module
  5. Once installed, a new menu under "Host (Advanced menu)>Application Insights" will allow you to paste the instrumentation key obtained in step 2. After applying the changes, you will start receiving data in AppInsights after a few minutes.
    ModuleSetup

What changes are made to my site?

Some changes are made during the installation and others when enabling the Application Insights module. Note that, by default, until you enable the module and specify an instrumentation key, no AppInsights module or assembly is loaded and no telemetry data is sent.

During the install, the following assemblies will be added to the ~/bin folder:

  • DotNetNuke.Monitoring.AppInsights.dll
  • Microsoft.AI.Agent.Intercept.dll
  • Microsoft.AI.DependencyCollector.dll
  • Microsoft.AI.PerfCounterCollector.dll
  • Microsoft.AI.ServerTelemetryChannel.dll
  • Microsoft.AI.Web.dll
  • Microsoft.AI.WindowsServer.dll
  • Microsoft.ApplicationInsights.dll
  • Microsoft.ApplicationInsights.TraceListener.dll
  • Microsoft.Web.XmlTransform.dll

After enabling the module, the following configuration files are changed in order to send telemetry, log4net trace logs and DNN event logs to AppInsights:

  • /Web.config
  • /ApplicationInsights.config
  • /DotNetNuke.log4net.config
  • /DesktopModules/AppInsights/js/appinsights.js

If you disable or uninstall the module, all the previous changes are reverted.

The following image illustrates the DNN event log data being sent to AppInsights, where you can search or filter by content and, why not, create alerts based on any criteria.

EventLogProvider

 

Resources

If you missed Connect();, come to ReConnect(); with TenerifeDev

ReConnectTenerifeDev

On November 18 and 19 we were able to watch the Connect 2015 event live, where the present and future of Microsoft's tools and services for a new era of developers were presented from New York.

“The role of developers is changing dramatically in today's world, and Microsoft is changing too. A year ago we started a journey towards a new Microsoft for developers, by presenting the future of an open source .NET on Linux and Mac, and a free Visual Studio to target any device and operating system.”

That was the introduction to Scott Guthrie's keynote, a speech full of announcements and demos showing the next steps in Microsoft's transformation, with the tools and services that help developers succeed in this new era.

ReConnect();

TenerifeDev and other technical communities across Spain wanted to bring you a replay of the most interesting moments in a summary event called ReConnect();, following the idea of the previous edition. To make room for this great event, where we will see things like compiling native .NET for Linux from Visual Studio, we have moved the talk we had scheduled to January. What?? Yes, you heard right, these Swedes have gone crazy.

Event: ReConnect() 2015
Date: December 17, 2015
Venue: Salón de Grados of the Escuela Superior de Ingeniería y Tecnología (formerly ETSII)
Time: 16:30 to 19:30 (break included)
Speakers: Santiago Porras (@saintwukong, Windows Platform MVP), Cesar Abreu (@cesabreu, Azure MVP), David Rodriguez (@davidjrh, Azure MVP)

Registration: at the TenerifeDev MeetUp

Run, you fools!

Rebuilding SQL Database indexes using Azure Automation

Some days back I had a discussion with an Azure SQL Database engineer about the need to rebuild indexes on SQL Database. I thought that was a best practice, but the engineer told me that the task was accomplished automatically as part of the managed service, in the same way that you no longer have to execute any file-related maintenance tasks.

I remembered this today and found some interesting articles from Alexandre Brisebois that confirmed my initial thoughts: index management is your responsibility, and you need to pay attention to how fragmented your indexes are.

These are three good blog posts from Alexandre on the matter that I found interesting:

Running the following T-SQL on a SQL Database, you can get the index fragmentation (in percent) of a specific table:

SELECT name, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats (
       DB_ID(N'MyDatabaseName')
     , OBJECT_ID('MyTableName')
     , NULL
     , NULL
     , NULL) AS a
JOIN sys.indexes AS b
ON a.object_id = b.object_id AND a.index_id = b.index_id

If you want to get all the indexes in a database with more than 30% fragmentation, you can run this other one:

SELECT name, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats (
       DB_ID(N'MyDatabaseName')
     , NULL
     , NULL
     , NULL
     , NULL) AS a
JOIN sys.indexes AS b
ON a.object_id = b.object_id AND a.index_id = b.index_id
WHERE avg_fragmentation_in_percent > 30

image

And now, don't be scared when you see the results. The good news is that you are going to get better performance after rebuilding the indexes, without having to scale to another SQL Database tier. Alexandre himself commented something about the possibility of using Azure Automation to do the task.
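
If you just want to fix a concrete table right away before setting up any automation, you can run the rebuild from a regular PowerShell session. This is a minimal sketch, assuming the SQLPS module (installed with SQL Server Management Studio) and placeholder server, database, table and credential values:

# Import the SQL Server PowerShell module
Import-Module SQLPS -DisableNameChecking

# Rebuild all the indexes of a single table on the SQL Database
Invoke-Sqlcmd -ServerInstance "myserver.database.windows.net" `
              -Database "MyDatabaseName" `
              -Username "sqladmin" -Password "MyStrongPassword" `
              -Query "ALTER INDEX ALL ON [MyTableName] REBUILD" `
              -QueryTimeout 3600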

Reindexing using Azure Automation

I'm in love with Azure Automation and all the small things that can be automated to make our daily job easier, mostly by running a small script on a schedule. Curiously, there is a PowerShell Workflow runbook in the Automation Gallery that allows you to fully automate the SQL Database reindexing. Let's see how to configure it step by step:

1) Provision an Automation Account, if you don't have one, by going to https://portal.azure.com and selecting New > Management > Automation Account

image

2) After creating the Automation Account, open its details and click on Runbooks > Browse Gallery

image

3) Type the word “indexes” into the search box and the runbook “Indexes tables in an Azure database if they have a high fragmentation” appears:

image

4) Note that the author of the runbook is the SC Automation Product Team at Microsoft. Click on Import:

image

5) After importing the runbook, let's add the database credentials to the assets. Click on Assets > Credentials and then on the “Add a credential…” button.

image

6) Set a credential name (it will be used later in the runbook), plus the database user name and password:

image

7) Now click on Runbooks again, select “Update-SQLIndexRunbook” from the list, and click on the “Edit…” button. You will be able to see the PowerShell script that will be executed:

image

8) If you want to test the script, just click on the “Test Pane” button and the test window opens. Enter the required parameters and click on Start to execute the index rebuild. If any error occurs, it is logged in the results window. Note that, depending on the database and the other parameters, this can take a long time to complete:

image

9) Now go back to the editor and click on the “Publish” button to enable the runbook. If we click on “Start”, a window appears asking for the parameters. But as we want to schedule this task, we will click on the “Schedule” button instead:

image

10) Click on the Schedule link to create a new schedule for the runbook. I have specified once a week, but that will depend on your workload and on how your indexes increase their fragmentation over time. You will need to tweak the schedule based on your needs, running the initial fragmentation queries between executions:

image

11) Now enter the parameters and run settings:

image

NOTE: you can play with having different schedules with different settings, e.g. a specific schedule for a specific table.

With that, you are done. Remember to change the Logging settings as desired:

image

Conclusion

Cool? SUPERCOOL!!

Un saludo y happy coding!

Azure Web Apps: Kudu REST API is a box of surprises

Vitorinox
After working with Azure Web Apps (formerly Azure Websites) for a long time, I have noticed that every day I use some tips and tricks that surprise people in some way. Normally the tips are nothing spectacular, but when you use them your daily productivity is enhanced. I'm going to start posting some of these tips in order to have my own “notebook” to revisit when needed, but also to share them and make small tasks easier, like working with files, website management, getting alerts, etc.

So let’s start with a simple tip:

How to download the website contents of a site hosted on Azure Web Apps

There are many ways to answer this question that come to mind: FTP, MS Deploy, uploading the 7zip command line tool via Kudu… but is there something simpler than that, just for downloading the content?

Answer is YES.

Azure Web Apps ZIP API

After spending some time working with Azure Web Apps, you have probably noticed that “behind” your website there is a whole set of tools served by the SCM (Service Control Manager, also known as Kudu).

You can directly access this service by simply browsing to the “https://mywebsite.scm.azurewebsites.net” URL, where “mywebsite” is your site name, and then entering your Azure credentials (using an Azure AD or Microsoft account). You can also use Basic authentication by browsing to “https://mywebsite.scm.azurewebsites.net/basicauth” and then entering the deployment credentials you can get from your website settings in the Azure Management portal.

Kudu offers you a user interface full of tools to manage and diagnose your web app:

KuduUI

And if you dig into the Kudu documentation, you will notice that some REST APIs come out of the box. One of these REST APIs is the ZIP API, which allows downloading folders as zip files or expanding zip files into folders:

ZIPAPI

Download website contents using a simple URL in your browser

Enough! Just by entering this URL in your browser and typing your credentials, you can download your full website contents:

https://mywebsite.scm.azurewebsites.net/api/zip/site/wwwroot

If you want to download a subfolder of your website, you can use something like:

https://mywebsite.scm.azurewebsites.net/api/zip/site/wwwroot/subfolder1

Note that you can download anything inside the “D:\home” folder, to which “/api/zip” is relative, so if, for example, you want to download all your site log files, including the IIS log files, you can use the following URL:

https://mywebsite.scm.azurewebsites.net/api/zip/LogFiles

NOTE: an equivalent option would be to use the “dump” API:

https://mywebsite.scm.azurewebsites.net/api/dump

Adding some Azure PowerShell sauce

It is quite normal to download these log files to your PC and then run your favourite log parsing tool, like Log Parser or Log Parser Studio. It's easy to manually download them from Kudu, but it's not fun when you have to do the same task almost every day over some hundreds of websites.

So why not use PowerShell to automate the task?

After installing Azure PowerShell, you can run the following script to download files and folders using the Kudu ZIP REST API. You can tweak it a little by iterating over all your websites, and also over all your subscriptions, so you could download the IIS logs of all the websites you own with just a few lines of code.

In the following script, I have changed the folder to download to a specific one where DNN Platform stores its log4net daily logs, which, by the way, you can then review on-premises using Log4View.

NOTE: I used PowerShell 5.0, available on Windows 10, with “wget” and “Invoke-RestMethod” support, which simplifies the script.

# Input parameters
$subscriptionName = "MySubscriptionName"
$websiteName = "MyWebsitename"
$slotName = "Production"
$folderToDownload = "site/wwwroot/Portals/_default/logs/" # must end with / for folders or you will get a 401
$outputZipFile = "D:\Temp\LogFiles.zip"

# Ask for Azure credentials to obtain the publishing credentials
Add-AzureAccount
Select-AzureSubscription $subscriptionName

# Build the basic authentication header
$website = Get-AzureWebsite $websiteName -Slot $slotName
$publishingUsername = $website.PublishingUsername
$publishingPassword = $website.PublishingPassword
$base64AuthInfo = [System.Convert]::ToBase64String( `
                    [System.Text.Encoding]::ASCII.GetBytes(( `
                    "{0}:{1}" -f $publishingUsername, $publishingPassword)))

# Download the log files using wget or Invoke-RestMethod, available in Windows PowerShell 5.0 🙂
Invoke-RestMethod -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)} `
                  -Uri "https://$websiteName.scm.azurewebsites.net/api/zip/$folderToDownload" `
                  -OutFile $outputZipFile

 

And don’t forget other Kudu APIs available!

I have some other tips that I use on a daily basis and will be posting soon. Don't forget to take a look at the other available APIs, because they are a box of surprises:

  • api/vfs: allows you to execute file and folder operations
  • api/command: allows you to execute commands (one of my favourites when combined with the previous one; see the sketch after this list)
  • api/settings: allows you to change the site settings
  • api/dump, api/diagnostics, api/logs: for diagnostics, tracing, etc.
  • api/scm, api/deployments, api/sshkey: for repository and deployment management
  • api/siteextensions: enables or disables other site extensions, like Visual Studio Monaco. The available extensions grow on a monthly basis, so don't forget to revisit them from time to time
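
As an example of that combination, here is a minimal sketch that reuses the basic authentication header built in the previous script to run a command remotely through api/command; the command, working directory and site name are placeholders:

# Run "dir" inside site\wwwroot through the Kudu command API
$apiUrl = "https://$websiteName.scm.azurewebsites.net/api/command"
$body = @{ command = "dir"; dir = "site\wwwroot" } | ConvertTo-Json

Invoke-RestMethod -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)} `
                  -Uri $apiUrl `
                  -Method Post `
                  -ContentType "application/json" `
                  -Body $body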

Un saludo and Happy Coding!

Upcoming TenerifeDev events

After a sweltering August, and still recovering from the Tenerife LAN Party, we are back! And this time we are scheduling the events for the next four months in one go, covering a bit of everything: Xamarin, Machine Learning, Windows 10 IoT Core, Raspberry Pi, and even a session we found really interesting dedicated entirely to personal branding.

When? Where?

Here is the list of events and their dates, which in principle will take place in the Salón de Grados of the ETSII as usual. We recommend registering at http://www.meetup.com/es/tenerifedev/ to stay up to date with any changes and with the content of each session:

  • Thursday, September 17, 18:00 to 19:30 – Xamarin Forms
  • Thursday, October 15, 18:00 to 19:30 – Machine Learning
  • Thursday, November 19, 18:00 to 19:30 – Automating the office with Raspberry Pi, Windows 10 and Azure
  • Thursday, December 17, 18:00 to 19:30 – Personal branding: why and how to turn ourselves into a brand

Run, you fools!

[TenerifeDev] TLP Innova is coming, with lots of news

LogoTenerifeDev
There is one week left until the start of TLP Innova, the space dedicated to technological innovation where professionals and students can share knowledge and news in the form of technical talks and workshops, improving our skills so we can apply them in our work or research.

As we already announced, this year TenerifeDev is participating more actively than ever, helping to make sure Microsoft is present, with two members of the Microsoft DX team who will give you all the information you need and who will also be supporting the speakers talking about their technologies, now more open than ever since .NET has been declared open source, as have other projects.

As for our agenda, we will have a wide variety of technical sessions and workshops from which you can gain a lot of experience and knowledge, delivered by some of the most recognized professionals in Spain and even some of international stature, such as Bruno Capuano, Alberto Díaz and Josué Yeray, all Microsoft MVPs; Alejandro Campos Magencio, a member of Microsoft DX; and other well-known national speakers such as Javier Suárez Ruiz and Santiago Porras, also Microsoft MVPs, Vanessa Estorach, founder of e-Growing, and César Abreu and Javi Medina, Azure Advisors.

 

Agenda

Wednesday:

11:00 – 14:00 | Workshop: Microsoft Azure, the most powerful cloud within everyone's reach

Thursday

10:00 – 13:00 | Workshop: Xamarin.Forms, one codebase to rule them all!

16:00 – 19:00 | Workshop: Develop for all Microsoft devices with Windows 10

Friday

clip_image002

This year is packed with sessions that may be of interest to startups and cross-platform apps; besides, in a few days Windows 10 will be officially launched, coming with more new features than ever and, above all, open to new platforms such as Xbox, for which universal apps can be developed. This year, more than any other, you will be interested in what we have to tell you.

Needless to say, we need you to help us spread the word, so if you know someone who might be interested in attending, don't hesitate to tell them.