
XUnit: Run your Web Api integration tests in strict order with ITestCaseOrderer

In some scenarios, we might want to run strictly ordered integration tests to make sure a complex process involving several Web Api calls succeeds.
One example could be registering a fixture-created user, logging in with the account, updating its profile data and finally deleting it.

These four actions should be evaluated individually, but we may need to chain them in strict order because each test needs to be fed with the output of the previous execution. A good example is controller actions decorated with [Authorize] that need the authorization header to be set, so we must log in the recently registered user before proceeding.

Since version 2.0, XUnit provides us with interfaces and attributes that allow us to perform this test case ordering. In this post we are going to show how to use the ITestCaseOrderer interface together with the Owin Test Server.

We will also code some extension methods to write cleaner facts when using the Owin RequestBuilder fluent api.

Let’s start by creating three [Fact] methods in a class:

 

The first fact will create a register-user request to the Web Api and store the register output in static context (XUnit creates a new instance of the class whenever a new fact is executed, so we need to keep the result):

 

[Fact]
public void Register_User_Should_Return_Valid_UserId()
{
   var userRegisterRequest = CreateRegisterRequestFixture();

   var registerResponse = TestServerConfig.GetServer()
                          .HttpClient
                          .PostAsJsonAsync(ApiHelpers.RegisterUser, userRegisterRequest).Result;

   registerItem = JsonConvert.DeserializeObject<RegisterItem>(registerResponse.ResponseString());

   registerItem.UserId.Should().NotBe(Guid.Empty);
}

The second fact will update the user profile, which is an authorized action, so we have to log the user into the api and recover its authorization header to chain with the next api call. We also recover data from the registerItem variable stored in the previous test and set some user data to be updated:

 

[Fact]
public void Logged_User_Can_Change_Profile_Info()
{
   userDataRequest = new UserPersonalDataRequest()
   {
       Email = registerItem.Email,
       Name = fixture.Create<string>(),
       Surname1 = fixture.Create<string>()
   };

    var testServer = TestServerConfig.GetServer();
    var token = testServer.LoginUser(GetRegisterUserLoginRequest());

    var updateProfileResult = testServer
                             .CreateAuthenticatedRequest(ApiHelpers.UpdateProfile, token)
                             .PostAsJsonAsync(userDataRequest).ResponseString();

    Convert.ToBoolean(updateProfileResult).Should().BeTrue();
}
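The LoginUser and CreateAuthenticatedRequest helpers used in these facts are not part of the Owin TestServer API; they are small extensions of our own. A minimal sketch of how they could look (the ApiHelpers.Login endpoint name and the access_token response property are assumptions inferred from the usage above):

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Owin.Testing;
using Newtonsoft.Json.Linq;

public static class TestServerAuthExtensions
{
    // Hypothetical helper: posts the login request and extracts the bearer token.
    public static string LoginUser(this TestServer server, LoginRequest loginRequest)
    {
        var response = server.HttpClient
                             .PostAsJsonAsync(ApiHelpers.Login, loginRequest).Result;

        var content = response.Content.ReadAsStringAsync().Result;

        // Assumes the api returns a JSON payload with an access_token property.
        return JObject.Parse(content)["access_token"].ToString();
    }

    // Hypothetical helper: builds a request with the Authorization header already set.
    public static RequestBuilder CreateAuthenticatedRequest(this TestServer server, string uri, string token)
    {
        return server.CreateRequest(uri)
                     .AddHeader("Authorization", "Bearer " + token);
    }
}
```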

The third fact will recover the logged user's information from the backend, which is an authorized action as well, and it will use the userDataRequest variable from the previous fact.

 

[Fact]
public void Get_Logged_User_Profile_Should_Return_Valid_Information()
{
 var loginRequest = GetRegisterUserLoginRequest();
 var testServer = TestServerConfig.GetServer();
 var token = testServer.LoginUser(loginRequest);

 var userProfileResult = testServer
                        .CreateAuthenticatedRequest(ApiHelpers.UserGetProfile, token)
                        .GetAsync().ResponseString();

 var userProfile = JsonConvert.DeserializeObject<UserProfile>(userProfileResult);

 userProfile.Id.Should().NotBeEmpty();
 userProfile.Name.Should().Be(userDataRequest.Name);
}

 

 

RequestBuilder Fluent Api Extension

public static class OwinRequestBuilderExtensions
{
    public static async Task<HttpResponseMessage> PostAsJsonAsync<T>(this RequestBuilder requestBuilder, T value, Encoding encoding = null)
    {
        Encoding requestEncoding = encoding ?? Encoding.UTF8;
        return await requestBuilder
                     .And(configure =>
                     {
                         configure.Content = new StringContent(JsonConvert.SerializeObject(value), requestEncoding, "application/json");
                     })
                     .PostAsync();
    }
}
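The facts above also chain a ResponseString() helper that is not part of the framework either. A possible sketch, assuming it simply reads the response body synchronously (the name and overloads are inferred from how it is used in the facts):

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public static class HttpResponseMessageExtensions
{
    // Reads the response content as a string, blocking on the async call
    // (acceptable in test code).
    public static string ResponseString(this HttpResponseMessage response)
    {
        return response.Content.ReadAsStringAsync().Result;
    }

    // Overload so it can be chained directly on Task<HttpResponseMessage>,
    // as in PostAsJsonAsync(...).ResponseString().
    public static string ResponseString(this Task<HttpResponseMessage> responseTask)
    {
        return responseTask.Result.ResponseString();
    }
}
```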

 

Now it is time to create our own implementation of ITestCaseOrderer with our ordering logic.

First of all, we create a custom attribute that we will use to decorate our facts with the desired order:

 

public class TestPriorityAttribute : Attribute
{
    public int Priority { get; private set; }

    public TestPriorityAttribute(int priority)
    {
        Priority = priority;
    }
}

With the custom attribute in place, let's implement the test case orderer:


public class TestCollectionOrderer : ITestCaseOrderer
{
    public IEnumerable<TTestCase> OrderTestCases<TTestCase>(IEnumerable<TTestCase> testCases) where TTestCase : ITestCase
    {
        var sortedMethods = new SortedDictionary<int, TTestCase>();

        foreach (TTestCase testCase in testCases)
        {
            IAttributeInfo attribute = testCase.TestMethod.Method
                                               .GetCustomAttributes(typeof(TestPriorityAttribute).AssemblyQualifiedName)
                                               .FirstOrDefault();

            var priority = attribute.GetNamedArgument<int>("Priority");
            sortedMethods.Add(priority, testCase);
        }

        return sortedMethods.Values;
    }
}

 

The ITestCaseOrderer receives the test cases declared as [Fact] in the class decorated with the orderer implementation.
The ITestCase interface is heavily based on abstractions, so we should not use standard reflection; instead we use the XUnit.Abstractions.IMethodInfo interface to extract the priority value of the TestPriority attributes. We use a SortedDictionary to add each priority/test case pair and return the values sorted.
Now we just need to decorate our class with the orderer and decorate the methods with our TestPriority attribute and the desired order, so XUnit will run the facts in strict order.


[TestCaseOrderer(TestCollectionOrderer.TypeName, TestCollectionOrderer.AssemblyName)]
public class RegisterUser
{
   [Fact, TestPriority(1)]
   public void Register_User_Should_Return_Valid_UserId() {...}

   [Fact, TestPriority(2)]
   public void Logged_User_Can_Change_Profile_Info() {...}

   [Fact, TestPriority(3)]
   public void Get_Logged_User_Profile_Should_Return_Valid_Information() {...}
}
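The TestCaseOrderer attribute receives the orderer's fully qualified type name and assembly name as strings. The TypeName and AssemblyName constants used above are not shown in the snippet; they could simply be defined on the orderer class itself (the namespace and assembly names below are placeholders for your own test project):

```csharp
public class TestCollectionOrderer
{
    // Hypothetical constants consumed by [TestCaseOrderer(...)];
    // replace with the real namespace and assembly of your test project.
    public const string TypeName = "MyTests.Ordering.TestCollectionOrderer";
    public const string AssemblyName = "MyTests";

    // OrderTestCases implementation shown earlier...
}
```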

 

And that is all for today! Happy coding!

Asp.net Core with SignalR server and Autofac

SignalR server for asp.net core is currently at version 0.2.0. It has not been released to the official NuGet repositories yet, so we must configure our project to use the asp.net core dev feed to obtain the SignalR related packages.
Autofac for asp.net core still does not have the hub registration extensions we are used to from the full framework packages, but we will see how to write them from scratch.

First of all, we create a new asp.net core web project and add a NuGet.config file specifying the core development NuGet feed so we can restore the websockets and SignalR packages.

The NuGet.config file should look like this:
 


<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="AspNetCore" value="https://dotnet.myget.org/F/aspnetcore-ci-dev/api/v3/index.json" />
    <add key="NuGet" value="https://api.nuget.org/v3/index.json" />
  </packageSources>
</configuration>

 

Once the NuGet file is created, we should add the dependencies to project.json. The dependencies needed for SignalR and Autofac are shown below:

 "Microsoft.AspNetCore.SignalR.Server": "0.2.0-*",
 "Microsoft.AspNetCore.WebSockets": "0.2.0-*",
 "Autofac.Extensions.DependencyInjection": "4.0.0"

 

We are now set to start configuring our hubs! As an example, we are going to create a ChatHub that receives a service in its constructor to keep track of connections.

Let's first create the service interface and its implementation so we can register it with Autofac:
 

IHubTrackingService.cs


  public interface IHubTrackingService
  {
    bool TrackConnection(HubTrack hubTrack);
  }

 

HubTrackingService.cs

  public class HubTrackingService : IHubTrackingService
  {
    public bool TrackConnection(HubTrack hubTrack)
    {
      // Do some stuff with your repository to keep track of the connection
      return true;
    }
  }

 

The HubTrack class is a POCO class with the following properties, just as an example:
HubTrack.cs

  public class HubTrack
  {
    public string HubName { get; set; }
    public string UserId { get; set; }
    public string ConnectionId { get; set; }
    public DateTime ConnectionTime { get; set; }
  }

 

Now we are ready to create our ChatHub, injecting the HubTrackingService in the constructor so we can track clients in the OnConnected method:

 

ChatHub.cs
 

 public class ChatHub : Hub {

   private readonly IHubTrackingService hubTrackingService;

   public ChatHub(IHubTrackingService hubTrackingService)
   {
       this.hubTrackingService = hubTrackingService;
   }
   public override Task OnConnected()
   {
       RegisterConnection();
       return base.OnConnected();
   }

   private void RegisterConnection()
   {            
      this.hubTrackingService.TrackConnection(new HubTrack()
      {
       ConnectionId = Context.ConnectionId,
       HubName = nameof(ChatHub),
       UserId = Context.User?.Identity?.Name ?? string.Empty,
       ConnectionTime = DateTime.UtcNow
      });
   }  
}      

 

Now it is time to configure our Startup.cs to add the SignalR server and dependency injection.
In the full framework, we use the RegisterHubs Autofac extension to register our hub classes:

builder.RegisterHubs(Assembly.GetExecutingAssembly());

but it is not yet available in Autofac for asp.net core, so we will write a ContainerBuilder extension to do it for us. The code is shown below:
 
AutofacExtensions.cs
 

public static class AutoFacExtensions
{
    public static IRegistrationBuilder<object, ScanningActivatorData, DynamicRegistrationStyle>
                  RegisterHubs(this ContainerBuilder builder, params Assembly[] assemblies)
    {
        return builder.RegisterAssemblyTypes(assemblies)
                      .Where(t => typeof(IHub).IsAssignableFrom(t))
                      .ExternallyOwned();
    }
}

Note: We use ExternallyOwned() so that SignalR disposes the hubs instead of having them managed by Autofac.

Once our extension is added to the project, we set up the Configure and ConfigureServices methods in Startup.cs:
 
Startup.cs

public IServiceProvider ConfigureServices(IServiceCollection services)
{  
  services.AddSignalR();
  services.AddMvc();
            
  var builder = new ContainerBuilder();
  builder.RegisterType<HubTrackingService>().As<IHubTrackingService>();
  builder.RegisterHubs(typeof(Startup).GetTypeInfo().Assembly);  
  builder.Populate(services);
           
  return new AutofacServiceProvider(builder.Build());
}
       
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
 app.UseSignalR();
 app.UseMvc();
}

 

Notice the line:

builder.RegisterHubs(typeof(Startup).GetTypeInfo().Assembly); 

In asp.net core we don't have Assembly.GetExecutingAssembly(), so we get the assembly from the Startup type.

 
How do we use the hubs to notify clients when execution is inside an Mvc/Api controller?
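One option in this 0.2.0 preview, following the full framework pattern, is to resolve a hub context from IConnectionManager inside a controller. A sketch under that assumption (the controller, its route and the newMessage client method are made up for the example):

```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.SignalR;
using Microsoft.AspNetCore.SignalR.Infrastructure;

[Route("api/[controller]")]
public class NotificationsController : Controller
{
    private readonly IHubContext chatHubContext;

    // IConnectionManager is registered by services.AddSignalR() and lets us
    // resolve a hub context outside the hub itself.
    public NotificationsController(IConnectionManager connectionManager)
    {
        this.chatHubContext = connectionManager.GetHubContext<ChatHub>();
    }

    [HttpPost("notify")]
    public IActionResult Notify(string message)
    {
        // "newMessage" is a hypothetical client side method name.
        chatHubContext.Clients.All.newMessage(message);
        return Ok();
    }
}
```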


Asp.net core node services to execute your nodejs scripts

Nowadays it is very common to have pieces of code developed in different languages and technologies, and today more than ever in Javascript.

If you have a tested, battle-proven javascript module, maybe you don't want to rewrite it from scratch in C# to use it in asp.net; you would rather invoke it and fetch the results.

This post will show how to use Microsoft Asp Net Core Node Services in asp.net to execute our javascript commonjs modules.

In the following example, we will consume a free online json post service that returns lorem ipsum post information based on the requested page id. To achieve this, we have a really simple nodejs module that fetches the contents from the api using the axios library.

We are going to fetch this information through the Post api controller and return it to the client as Json.

Let's get started:

 

First of all we create a new asp.net core project:

(image: creating the asp.net core project)

 

Then we add the reference to Microsoft.AspNetCore.NodeServices in the project.json file:

 

(image: adding the Microsoft.AspNetCore.NodeServices package to project.json)

 

To finish the setup, we have to create an npm configuration file and add the axios reference that will be used by our node script.

To add the npm configuration file, use Add > New Item > Client-side > npm Configuration File, and then add the axios dependency as in the following image:

 

(image: npm configuration file with the axios dependency)

 

With this, the setup is complete. Let's jump to the api configuration: we want to add Node Services and quickly configure the JsonSerializer indentation settings in the ConfigureServices method of our Startup.cs class:

  public void ConfigureServices(IServiceCollection services)
  {
     services.AddNodeServices();

     services.AddMvc().
              AddJsonOptions(options =>
              {
                options.SerializerSettings.Formatting = Formatting.Indented;
              });
   }
 

 

Once our api is configured to use Node Services, we are going to create the Post controller and inject the node services in the constructor. The framework will provide the instance automatically, so no further configuration is needed. We want to deserialize the output as an ExpandoObject so we can access its properties dynamically if needed. For now we just send the output to the client in Json format.


using System;
using System.Dynamic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.NodeServices;

namespace Api.Core.Controllers
{
    [Route("api/[controller]")]
    public class PostController : Controller
    {
        private readonly INodeServices nodeServices;
        private const string NODE_POST_SCRIPT = "./Node/postClient";

        public PostController(INodeServices nodeServices)
        {
            this.nodeServices = nodeServices;
        }

        [HttpGet, Route("{page}")]
        public async Task<IActionResult> Get(int page)
        {
          try
          {
            dynamic postResult = await nodeServices.InvokeAsync<ExpandoObject>(NODE_POST_SCRIPT, page);
            return Json(postResult);
          }
          catch (Exception)
          {
            return BadRequest();
          }
        }
    }
}

The next step is creating a folder called Node in our root folder and adding the following nodejs script in a file named postClient.js.

This js module just creates a GET request to the api, requesting the post id and invoking the callback based on the network request result:

 


var root = 'https://jsonplaceholder.typicode.com';
var axios = require('axios');

module.exports = function (callback, page) {
    var commentsUrl = root + '/posts/' + page.toString();
    console.log("Calling Url :" + commentsUrl);
    axios.get(commentsUrl)
        .then(function (response) {
            callback(null, response.data);
        }).catch(function (err) {
            callback(err, null);
        });
};

NOTE: We must define a callback in the module parameters so Node Services can relay the output results.
If the script succeeds, we pass null as the error argument of the callback; otherwise we pass the error as the first callback parameter, so nodeServices.InvokeAsync throws an exception.

 

Final step: executing the controller action.

Everything is in place, so we use Fiddler to send requests to the post controller with different page ids:

 

(image: composing the request in Fiddler)

 

Once we execute the request, we can see the resulting output from the js script while debugging:

 

(image: debugging the post controller output)

 

And then we get the output in Fiddler as well:

 

(image: Fiddler response)

 

And that's all for today, I hope you enjoyed it.

Happy coding!

geeks.ms node notifier (don't miss any post) (node & js ES6)

Yesterday afternoon it was raining and I decided to check the new entries on the Geeks blogs, and I realized I don't always remember, or have the time, to do it regularly. I felt like writing some code, so I got down to work.

The project is a node application written in js (ES6). The script connects to the geeks blog, parses the latest published posts and notifies us with a desktop popup about the new posts arriving while the daemon is running.

 

(image: Geeks.ms node notifier popup)

 

The project uses the following node modules:

axios (promise-based Http client that works in the browser and in node)

cheerio (server-side implementation of jQuery)

eventEmitter (module to subscribe to and publish events)

fs (node module to access the file system)

node-notifier (desktop notifications compatible with every OS)

Parameters such as the store path, the temporary file and the crawl interval (in milliseconds) are configurable in the config/appConfig file.

module.exports = {
    GEEKMS_URL : "http://geeks.ms/blogs",
    DEFAULT_TOAST_TITLE : "GeeksMs Notifier",
    POST_STORE_PATH : "c:\\temp",
    POST_STORE_FILE : "poststore.json",
    CRAWL_INTERVAL : 20000,
    DIRECTORY_PERMISION: "0744"
}

To install the project we have to install the packages with npm and install forever. The forever package lets us launch our node script as a daemon, and it takes care of restarting it after failures or unexpected exits.

We run in the console:

npm install

npm install forever -g

forever start boot.js

Once forever launches the script, it keeps running in the background, and we just have to use the forever list command to list the daemons currently running. The command output shows the process uptime and the log file where events and console.log output are being written.

(image: forever list output)

 

And that's all! I hope those of you who decide to install it find it useful.

You can find the github repository at the following link:

https://github.com/CarlosLanderas/GeekMs-Node-Notifier

Edit: It seems to work 🙂

(image: notifier popup)

console.log(“Hello everyone”);

Hello everyone! My name is Carlos Landeras and I work as a Software Engineer at Plain Concepts Madrid. This first post is very special to me: I have spent several years following all the great people who contribute to this community, who have an overwhelming amount of knowledge and experience, and being able to contribute alongside them is a real honour.

The blog's topics will be varied, because although for years I have been a backend developer on the .net technology stack (Winforms, Webforms, MVC, WebApi, Wcf, SignalR…), lately I spend a lot of time, both at work and personally, working with Javascript, Typescript, NodeJs, ReactJs, AngularJs, Knockout and other frontend technologies.

I also want to dedicate some entries to hardware-oriented programming and a bit of python.

Best regards to everyone and happy coding!