The ASP.NET Core React Project

In the last post, I had a first look into a plain, clean and lightweight React setup. I'm still impressed by how easy the setup is and how fast the loading of a React app really is. Before trying to push this setup into an ASP.NET Core application, it makes sense to have a look at the ASP.NET Core React project.

Create the React project

You can either use the "File New Project ..." dialog in Visual Studio 2017 or the .NET CLI to create a new ASP.NET Core React project:

dotnet new react -n MyPrettyAwesomeReactApp

This creates a ready to go React project.

The first impression

At first glance I saw the webpack.config.js, which is cool. I really love Webpack: I love how it works, how it bundles the relevant files recursively and how much time it saves. A tsconfig.json is also available in the project, which means the React code will be written in TypeScript. Webpack compiles the TypeScript into JavaScript and bundles it into an output file called main.js.
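
To illustrate the idea, a heavily reduced webpack.config.js could look like the following sketch. The template's real config does a lot more (vendor bundling, hot module replacement, prod/dev switches), and the loader name is an assumption:

// webpack.config.js - reduced sketch, not the template's actual file
const path = require('path');

module.exports = {
    resolve: { extensions: ['.js', '.ts', '.tsx'] },
    // the bundle starts at the app's entry point...
    entry: { main: ['./ClientApp/boot.tsx'] },
    module: {
        rules: [
            // ...and every required *.ts/*.tsx file gets compiled to JavaScript
            { test: /\.tsx?$/, include: /ClientApp/, use: 'awesome-typescript-loader' }
        ]
    },
    // the result gets bundled into wwwroot/dist/main.js
    output: {
        path: path.join(__dirname, 'wwwroot', 'dist'),
        filename: '[name].js',
        publicPath: 'dist/'
    }
};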

Remember: In the last post the JavaScript code was written in ES6 and transpiled using Babel.

The TypeScript files are in the ClientApp folder and the transpiled and bundled Webpack output gets moved to the wwwroot/dist/ folder. This is nice. The build in VS2017 runs Webpack; this is hidden in MSBuild tasks inside the project file. To see more, you need to have a look into the project file by right-clicking the project and selecting Edit projectname.csproj.

You'll then find an ItemGroup that removes the ClientApp folder from publishing:

<ItemGroup>
  <!-- Files not to publish (note that the 'dist' subfolders are re-added below) -->
  <Content Remove="ClientApp\**" />
</ItemGroup>

And there are two targets that define what happens during the Debug and Publish builds:

<Target Name="DebugRunWebpack" BeforeTargets="Build" Condition=" '$(Configuration)' == 'Debug' And !Exists('wwwroot\dist') ">
  <!-- Ensure Node.js is installed -->
  <Exec Command="node --version" ContinueOnError="true">
    <Output TaskParameter="ExitCode" PropertyName="ErrorCode" />
  </Exec>
  <Error Condition="'$(ErrorCode)' != '0'" Text="Node.js is required to build and run this project. To continue, please install Node.js from https://nodejs.org/, and then restart your command prompt or IDE." />

  <!-- In development, the dist files won't exist on the first run or when cloning to
       a different machine, so rebuild them if not already present. -->
  <Message Importance="high" Text="Performing first-run Webpack build..." />
  <Exec Command="node node_modules/webpack/bin/webpack.js --config webpack.config.vendor.js" />
  <Exec Command="node node_modules/webpack/bin/webpack.js" />
</Target>

<Target Name="PublishRunWebpack" AfterTargets="ComputeFilesToPublish">
  <!-- As part of publishing, ensure the JS resources are freshly built in production mode -->
  <Exec Command="npm install" />
  <Exec Command="node node_modules/webpack/bin/webpack.js --config webpack.config.vendor.js --env.prod" />
  <Exec Command="node node_modules/webpack/bin/webpack.js --env.prod" />

  <!-- Include the newly-built files in the publish output -->
  <ItemGroup>
    <DistFiles Include="wwwroot\dist\**; ClientApp\dist\**" />
    <ResolvedFileToPublish Include="@(DistFiles->'%(FullPath)')" Exclude="@(ResolvedFileToPublish)">
      <RelativePath>%(DistFiles.Identity)</RelativePath>
      <CopyToPublishDirectory>PreserveNewest</CopyToPublishDirectory>
    </ResolvedFileToPublish>
  </ItemGroup>
</Target>

As you can see, it runs Webpack twice: once for the vendor dependencies like Bootstrap, jQuery, etc., and once for the React app in the ClientApp folder.

Take a look at the ClientApp

The first thing you'll see when you look into the ClientApp folder is that there are *.tsx files instead of *.ts files. These are TypeScript files that support JSX, the weird XML/HTML syntax inside JavaScript code. VS 2017 already knows about the JSX syntax and doesn't show any errors. That's awesome.
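
To get a feeling for the syntax, a minimal .tsx component could look like this (an illustration, not part of the template):

// Hello.tsx - illustration only, not part of the template
import * as React from 'react';

export interface HelloProps { name: string; }

export class Hello extends React.Component<HelloProps, {}> {
    public render() {
        // the XML-like markup is JSX; it compiles down to React.createElement() calls
        return <h1>Hello {this.props.name}!</h1>;
    }
}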

This client app is bootstrapped in the boot.tsx (in the last post's setup it was the index.js). The app supports routing via the react-router-dom package. The boot.tsx defines an AppContainer that primarily hosts the route definitions stored in the routes.tsx. The router then renders the different components, depending on the path in the browser's address bar. This routing concept is a little more intuitive to use than the Angular one: the routing is defined in the component that hosts the routed contents. In this case the Layout component contains the dynamic contents:

// routes.tsx
export const routes = <Layout>
    <Route exact path='/' component={ Home } />
    <Route path='/counter' component={ Counter } />
    <Route path='/fetchdata' component={ FetchData } />
</Layout>;

Inside the Layout.tsx you can see that the routed components get rendered in a specific div tag, via the children defined in the routes.tsx:

// Layout.tsx
export class Layout extends React.Component<LayoutProps, {}> {
  public render() {
    return <div className='container-fluid'>
      <div className='row'>
        <div className='col-sm-3'>
          <NavMenu />
        </div>
        <div className='col-sm-9'>
          { this.props.children }
        </div>
      </div>
    </div>;
  }
}

Using this approach, it should be possible to add sub-routes for specific small areas of the app - some kind of "nested routes".
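
A hypothetical sketch of such a nested route - the Admin component and its sub-components are made up for illustration and would need a matching <Route path='/admin' component={ Admin } /> entry in the routes.tsx:

// Admin.tsx - hypothetical nested routing, not part of the template
import * as React from 'react';
import { Route } from 'react-router-dom';
import { AdminUsers } from './admin/AdminUsers';       // assumed component
import { AdminSettings } from './admin/AdminSettings'; // assumed component

export class Admin extends React.Component<{}, {}> {
    public render() {
        // these routes only match below the /admin path of the parent route
        return <div>
            <Route path='/admin/users' component={ AdminUsers } />
            <Route path='/admin/settings' component={ AdminSettings } />
        </div>;
    }
}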

There's also an example of how to fetch data from a Web API. This sample uses isomorphic-fetch to fetch the data from the Web API:

constructor() {    
  super();
  this.state = { forecasts: [], loading: true };

  fetch('api/SampleData/WeatherForecasts')
    .then(response => response.json() as Promise<WeatherForecast[]>)
    .then(data => {
      this.setState({ forecasts: data, loading: false });
    });
}

Since React doesn't provide a library to load data via HTTP requests, you are free to use any library you want. Other libraries commonly used with React are axios, fetch and Superagent.

A short look into the ASP.NET Core parts

The Startup.cs is a little special. Not really much, but you'll find some differences in the Configure method. There is the WebpackDevMiddleware, which helps while debugging: it calls Webpack on every change in the used TypeScript files and reloads the scripts in the browser while debugging. Using this middleware, you don't need to recompile the whole application or restart debugging:

if (env.IsDevelopment())
{
  app.UseDeveloperExceptionPage();
  app.UseWebpackDevMiddleware(new WebpackDevMiddlewareOptions
  {
    HotModuleReplacement = true,
    ReactHotModuleReplacement = true
  });
}
else
{
  app.UseExceptionHandler("/Home/Error");
}

And the route configuration contains a fallback route that gets used if the requested path doesn't match any MVC route:

app.UseMvc(routes =>
{
  routes.MapRoute(
    name: "default",
    template: "{controller=Home}/{action=Index}/{id?}");

  routes.MapSpaFallbackRoute(
    name: "spa-fallback",
    defaults: new { controller = "Home", action = "Index" });
});

The integration in the views is interesting as well. In the _Layout.cshtml:

  • There is a base href set to the current base URL.
  • The vendor.css and the site.css are referenced in the head of the document.
  • The vendor.js is referenced at the bottom.
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>@ViewData["Title"] - ReactWebApp</title>
    <base href="~/" />
    <link rel="stylesheet" href="~/dist/vendor.css" asp-append-version="true" />
    <environment exclude="Development">
        <link rel="stylesheet" href="~/dist/site.css" asp-append-version="true" />
    </environment>
</head>
<body>
    @RenderBody()
    <script src="~/dist/vendor.js" asp-append-version="true"></script>
    @RenderSection("scripts", required: false)
</body>
</html>

The actual React app isn't referenced here, but in the Index.cshtml:

@{
    ViewData["Title"] = "Home Page";
}

<div id="react-app">Loading...</div>

@section scripts {
    <script src="~/dist/main.js" asp-append-version="true"></script>
}

This makes absolute sense. This way, you are able to create one React app per view. Routing probably doesn't work this way, because there is only one SpaFallbackRoute, but if you just want to make single views more dynamic, it would make sense to create multiple views, each hosting a specific React app.

This is exactly what I expect when using React. E.g. I have many old ASP.NET applications where I want to get rid of the old client script and modernize those applications step by step. In many cases a rewrite costs too much, and it would be easy to replace the old code with clean React apps.

The other changes in that project are not really related to React in general. They are just implementation details of this React demo application:

  • There is a simple API controller to serve the weather forecasts
  • The HomeController only contains the Index and the Error actions

Some concluding words

I didn't really expect such a clearly and transparently configured project template. If I had to put the setup of the last post into an ASP.NET Core project, I would do it almost the same way: using Webpack to transpile and bundle the files and saving them somewhere in the wwwroot folder.

From my perspective, I would use this project template as a starter for small to medium-sized projects (whatever that means). For medium to bigger-sized projects, I would - again - propose to divide the client app and the server part into two different projects, to host and develop them independently. Hosting independently also means scaling independently. Developing independently means both scaling the teams independently and focusing only on the technology and tools which are used for this part of the application.

To learn more about React and how it works with ASP.NET Core in Visual Studio 2017, I will create a chat app. I will also write a small series about it:

  1. React Chat Part 1: Requirements & Setup
  2. React Chat Part 2: Creating the UI & React Components
  3. React Chat Part 3: Adding Websockets using SignalR
  4. React Chat Part 4: Authentication & Storage
  5. React Chat Part 5: Deployment to Azure

I have also set up a GitHub repository where you can follow the project: https://github.com/JuergenGutsch/react-chat-demo


Another GraphQL library for ASP.NET Core

I recently read an interesting tweet by Glenn Block about a GraphQL app running on the Linux Subsystem for Windows:

It is impressive to run a .NET Core app on Linux on Windows without a virtual machine. I never had the chance to try that; I have only played a little bit with the Linux Subsystem for Windows. The second thought that came to mind was: "Wow, did he use my GraphQL middleware library or something else?"

He uses different libraries, as you can see in his repository on GitHub: https://github.com/glennblock/orders-graphql

  • GraphQL.Server.Transports.AspNetCore
  • GraphQL.Server.Transports.WebSockets

These libraries are built by the makers of graphql-dotnet. The project is hosted in the graphql-dotnet organization on GitHub: https://github.com/graphql-dotnet/server. They also provide middleware that can be used in ASP.NET Core projects. The cool thing about that project is the WebSocket endpoint for GraphQL.

What about the GraphQL middleware I wrote?

Because my GraphQL middleware is also based on graphql-dotnet, I'm not yet sure whether to continue maintaining it or to retire the project. I'll try the other implementation to find out more.

I'm pretty sure the contributors of the graphql-dotnet project know a lot more about GraphQL and their library than I do. Both projects should work the same way and return the same result - hopefully. The only difference is the API and the configuration. The only reasons to continue working on my project are to learn more about GraphQL and to maybe provide a better API ;-)

If I retire my project, I would try to contribute to the graphql-dotnet projects.

What do you think? Drop me a comment and tell me.

Creating a chat application using React and ASP.NET Core - Part 1

In this blog series, I'm going to create a small chat application using React and ASP.NET Core, to learn more about React and to learn how React behaves in an ASP.NET Core project during development and deployment. This series is divided into 5 parts, which should cover all relevant topics:

  1. React Chat Part 1: Requirements & Setup
  2. React Chat Part 2: Creating the UI & React Components
  3. React Chat Part 3: Adding Websockets using SignalR
  4. React Chat Part 4: Authentication & Storage
  5. React Chat Part 5: Deployment to Azure

I have also set up a GitHub repository where you can follow the project: https://github.com/JuergenGutsch/react-chat-demo. Feel free to share your ideas about the topic in the comments below or in issues on GitHub. Because I'm still learning React, please tell me about significant and conceptual errors by dropping a comment or creating an issue on GitHub. Thanks.

Requirements

I want to create a small chat application that uses React, SignalR and ASP.NET Core 2.0. The frontend will be created using React. The backend serves a Websocket endpoint using SignalR and some basic Web API endpoints to fetch initial data and lookup data, and to do the authentication (I'll use IdentityServer4 for that). The project setup is based on the Visual Studio React project I introduced in one of the last posts.

The UI should be clean and easy to use. It should be possible to use the chat without a mouse. So the focus is also on usability and a basic accessibility. We will have a large chat area to display the messages, with an input field for the messages below. The return key should be the primary method to send the message. There's one additional button to select emojis, using the mouse. But basic emojis should also be available using text symbols.

On the left side, I'll create a list of online users. Every new logged on user should be mentioned in the chat area. The user list should be auto updated after a user logs on. We will use SignalR here too.

  • User list using SignalR
    • small area on the left hand side
    • Initially fetching the logged on users using Web API
  • Chat area using SignalR
    • large area on the right hand side
    • Initially fetching the last 50 messages using Web API
  • Message field below the chat area
    • Enter key should send the message
    • Emojis using text symbols
  • Storing the chat history in a database (using Azure Table Storage)
  • Authentication using IdentityServer4

Project setup

The initial project setup is easy and was already described in one of the last posts. I'll just do a quick introduction here.

You can either use Visual Studio 2017 to create a new project

or the .NET CLI

dotnet new react -n react-chat-app

It takes some time to fetch the dependent packages. The NPM packages especially are a lot: the node_modules folder contains around 10k files and requires 85 MB on disk.

I also added the "@aspnet/signalr-client": "1.0.0-alpha2-final" package to the package.json.
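
Alternatively, NPM can add and pin that version for you:

npm install @aspnet/signalr-client@1.0.0-alpha2-final --save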

Don't be confused by the documentation. In the GitHub repository they wrote that the NPM package name signalr-client should no longer be used and that the new name is just signalr. But when I wrote these lines, the package with the new name was not yet available on NPM. So I'm still using the signalr-client package.

After adding that package, an optional dependency wasn't found and the NPM dependency node in Visual Studio displays a yellow exclamation mark. This is annoying and seems to be a critical error, but it works anyway.

The NuGet packages are fine. To use SignalR I added the Microsoft.AspNetCore.SignalR package with the version 1.0.0-alpha2-final.

The project compiles without errors. And after pressing F5, the app starts as expected.

A while ago I configured Edge as the start-up browser to run ASP.NET Core projects, because Chrome got very slow. Once IIS Express or Kestrel is running, you can easily use Chrome or any other browser to call and debug the web app. This makes sense, since the React developer tools are not yet available for Edge and IE.

That's all there is to set up and configure. All the preconfigured TypeScript and Webpack stuff is fine and runs as expected. If there's no critical issue, you don't really need to know about it - it just works. I would recommend learning about the TypeScript configuration and Webpack anyway, to be safe.

Closing words

Now the requirements are clear and the project is set up. In this series I will not set up an automated build using CAKE. I'll also not write about unit tests. The focus is on React, SignalR and ASP.NET Core only.

In the next chapter I'm going to build the React UI components and implement the basic client logic to get the UI working.

Creating a chat application using React and ASP.NET Core - Part 2

In this blog series, I'm going to create a small chat application using React and ASP.NET Core, to learn more about React and to learn how React behaves in an ASP.NET Core project during development and deployment. This series is divided into 5 parts, which should cover all relevant topics:

  1. React Chat Part 1: Requirements & Setup
  2. React Chat Part 2: Creating the UI & React Components
  3. React Chat Part 3: Adding Websockets using SignalR
  4. React Chat Part 4: Authentication & Storage
  5. React Chat Part 5: Deployment to Azure

I have also set up a GitHub repository where you can follow the project: https://github.com/JuergenGutsch/react-chat-demo. Feel free to share your ideas about the topic in the comments below or in issues on GitHub. Because I'm still learning React, please tell me about significant and conceptual errors by dropping a comment or creating an issue on GitHub. Thanks.

Basic Layout

First let's have a quick look into the hierarchy of the React components in the folder ClientApp.

The app gets bootstrapped within the boot.tsx file. This is the first sort of component, where the AppContainer gets created and the router is placed. This file also contains the call to render the React app in the relevant HTML element, which in this case is a div with the ID react-app - a div in the Views/Home/Index.cshtml.

This component also renders the content of the routes.tsx. This file contains the route definitions, wrapped inside a Layout element. The Layout element is defined in the Layout.tsx inside the components folder. The routes.tsx also references three more components out of the components folder: Home, Counter and FetchData. So the router renders the specific components, depending on the requested path, inside the Layout element:

// routes.tsx
import * as React from 'react';
import { Route } from 'react-router-dom';
import { Layout } from './components/Layout';
import { Home } from './components/Home';
import { FetchData } from './components/FetchData';
import { Counter } from './components/Counter';

export const routes = <Layout>
    <Route exact path='/' component={ Home } />
    <Route path='/counter' component={ Counter } />
    <Route path='/fetchdata' component={ FetchData } />
</Layout>;

As expected, the Layout component then defines the basic layout and renders the contents into a Bootstrap grid column element. I changed that a little bit: the contents now get rendered directly into the fluid container and the menu is outside the fluid container. This component now contains less code than before:

import * as React from 'react';
import { NavMenu } from './NavMenu';

export interface LayoutProps {
    children?: React.ReactNode;
}

export class Layout extends React.Component<LayoutProps, {}> {
    public render() {
        return <div>
            <NavMenu />
            <div className='container-fluid'>
                {this.props.children}
            </div>
        </div>;
    }
}

I also changed the NavMenu component to place the menu on top of the page using the typical Bootstrap styles. (Visit the repository for more details.)

My chat goes into the Home component, because this is the most important feature of my app ;-) That's why I removed all the contents of the Home component and placed the layout for the actual chat there:

import * as React from 'react';
import { RouteComponentProps } from 'react-router';

import { Chat } from './home/Chat';
import { Users } from './home/Users';

export class Home extends React.Component<RouteComponentProps<{}>, {}> {
    public render() {
        return <div className='row'>
            <div className='col-sm-3'>
                <Users />
            </div>
            <div className='col-sm-9'>
                <Chat />
            </div>
        </div>;
    }
}

This component uses two new components: Users to display the online users and Chat to add the main chat functionality. It seems to be a common pattern in React to store sub-components inside a subfolder with the same name as the parent component. So I created a Home folder inside the components folder and placed the Users component and the Chat component inside of that new folder.

The Users Component

Let's have a look at the simpler Users component first. This component doesn't have any interaction yet; it only fetches and displays the users online. To keep the first snippet simple, I removed the methods inside. This file imports everything from the module 'react' as the React object. Through this we are able to access the Component type we need to derive from:

// components/Home/Users.tsx
import * as React from 'react';

interface UsersState {
    users: User[];
}
interface User {
    id: number;
    name: string;
}

export class Users extends React.Component<{}, UsersState> {
    //
}

This base class also defines a state property. The type of that state is defined in the second generic argument of the React.Component base class (the first generic argument is not needed here). The state is a kind of container type that holds the data you want to store inside the component. In this case I just need a UsersState with a list of users inside. To display a user in the list, we only need an identifier and a name. A unique key or id is also required by React to create a list of items in the DOM.

I don't fetch the data from the server side yet. This post is only about the UI components, so I'm going to mock the data in the constructor:

constructor() {
    super();
    this.state = {
        users: [
            { id: 1, name: 'juergen' },
            { id: 3, name: 'marion' },
            { id: 2, name: 'peter' },
            { id: 4, name: 'mo' }]
    };
}

Now the list of users is available in the current state and I'm able to use this list to render the users:

public render() {
    return <div className='panel panel-default'>
        <div className='panel-body'>
            <h3>Users online:</h3>
            <ul className='chat-users'>
                {this.state.users.map(user =>
                    <li key={user.id}>{user.name}</li>
                )}
            </ul>
        </div>
    </div>;
}

JSX is a weird thing: HTML-like XML syntax, completely mixed with JavaScript (or TypeScript in this case), but it works. It reminds me a little bit of Razor. this.state.users.map iterates through the users and renders a list item per user.

The Chat Component

The Chat component is similar, but contains more details and some logic to interact with the user. Initially we have almost the same structure:

// components/Home/chat.tsx
import * as React from 'react';
import * as moment from 'moment';

interface ChatState {
    messages: ChatMessage[];
    currentMessage: string;
}
interface ChatMessage {
    id: number;
    date: Date;
    message: string;
    sender: string;
}

export class Chat extends React.Component<{}, ChatState> {
    //
}

I also imported the module moment, which is moment.js, installed using NPM:

npm install moment --save

moment.js is a pretty useful library to easily work with dates and times in JavaScript. It has a ton of features, like formatting dates, displaying times and creating relative time expressions, and it also provides proper localization of dates.
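
A few examples of what moment.js can do (not from the project, just to illustrate):

// moment.js usage examples - illustration only
moment('2018-01-20T14:30:00').format('HH:mm:ss'); // "14:30:00"
moment().add(7, 'days').fromNow();                // "in 7 days"
moment.locale('de');                              // switch the localization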

Now it makes sense to have a look into the render method first:

// components/Home/chat.tsx
public render() {
    return <div className='panel panel-default'>
        <div className='panel-body panel-chat' ref={this.handlePanelRef}>
            <ul>
                {this.state.messages.map(message =>
                    <li key={message.id}>
                        <strong>{message.sender} </strong>
                        ({moment(message.date).format('HH:mm:ss')})<br />
                        {message.message}
                    </li>
                )}
            </ul>
        </div>
        <div className='panel-footer'>
            <form className='form-inline' onSubmit={this.onSubmit}>
                <label className='sr-only' htmlFor='msg'>Message</label>
                <div className='input-group col-md-12'>
                    <button className='chat-button input-group-addon'>:-)</button>
                    <input type='text' value={this.state.currentMessage}
                        onChange={this.handleMessageChange}
                        className='form-control'
                        id='msg'
                        placeholder='Your message'
                        ref={this.handleMessageRef} />
                    <button className='chat-button input-group-addon'>Send</button>
                </div>
            </form>
        </div>
    </div>;
}

I defined a Bootstrap panel that has the chat area in the panel-body and the input fields in the panel-footer. In the chat area we also have an unordered list and the code to iterate through the messages. This is almost similar to the user list; we only display some more data here. You can see the usage of moment.js to easily format the message date.

The panel-footer contains the form to compose the message. I used an input group to add a button in front of the input field and another one after that field. The first button is used to select an emoji. The second one also sends the message (for people who cannot use the enter key to submit it).

The ref attributes are used for a cool feature: through them, you are able to get an instance of the element in the backing code. This is nice to work with element instances directly. We will see the usage later on. The ref attributes point to methods that get an instance of that element passed in:

msg: HTMLInputElement;
panel: HTMLDivElement;

// ...

handlePanelRef(div: HTMLDivElement) {
    this.panel = div;
}
handleMessageRef(input: HTMLInputElement) {
    this.msg = input;
}

I save the instances in fields of the class. One thing I didn't expect is the weird behavior of this. It is typical JavaScript behavior, but I expected it to be solved in TypeScript, and I also didn't see this problem in Angular. The keyword this is not set - it is nothing. If you want to access this in methods called by the DOM, you need to kind of 'inject' or 'bind' the current object instance to get this set. This is typical for JavaScript and makes absolute sense. The binding needs to be done in the constructor:

constructor() {
    super();
    this.state = { messages: [], currentMessage: '' };

    this.handlePanelRef = this.handlePanelRef.bind(this);
    this.handleMessageRef = this.handleMessageRef.bind(this);
    // ...
}

This is the current constructor, including the initialization of the state. As you can see, we bind the current instance to those methods. We need to do this for all methods that need to use the current instance.

To get the message text from the text field, we need to bind an onChange method. This method collects the value from the event target:

handleMessageChange(event: any) {
    this.setState({ currentMessage: event.target.value });
}

Don't forget to bind the current instance in the constructor:

this.handleMessageChange = this.handleMessageChange.bind(this);

With this code we get the current message into the state to use it later on. The current state is also bound to the value of that text field, just to clear this field after submitting that form.

The next important event is onSubmit in the form. This event gets triggered by pressing the send button or by pressing enter inside the text field:

onSubmit(event: any) {
    event.preventDefault();
    this.addMessage();
}

This method stops the default behavior of HTML forms, to avoid a reload of the entire page, and calls the method addMessage, which creates and adds the message to the current state's messages list:

addMessage() {
    let currentMessage = this.state.currentMessage;
    if (currentMessage.length === 0) {
        return;
    }
    let id = this.state.messages.length;
    let date = new Date();

    let messages = this.state.messages;
    messages.push({
        id: id,
        date: date,
        message: currentMessage,
        sender: 'juergen'
    });
    this.setState({
        messages: messages,
        currentMessage: ''
    });
    this.msg.focus();
    this.panel.scrollTop = this.panel.scrollHeight - this.panel.clientHeight;
}

Currently the id and the sender of the message are faked. Later on, in the next posts, we'll send the message to the server using Websockets and we'll get a message including a valid id back. We'll also have an authenticated user later on. As mentioned, the current post is just about getting the UI running.

We get the currentMessage and the messages list out of the current state. Then we add the new message to the current list and assign a new state with the updated list and an empty currentMessage. Setting the state triggers an event to update the UI; if I just changed the fields inside the state, the UI wouldn't get notified. It is also possible to update only a single property of the state.
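
For example, clearing only the message field looks like this; React merges the partial state object into the existing state:

// only currentMessage gets replaced; this.state.messages stays untouched
this.setState({ currentMessage: '' });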

After the state is updated, I need to focus the text field and scroll the panel down to the latest message. This is the only reason why I need the instances of the elements and why I used the ref methods.

That's it :-)

After pressing F5, I see the working chat UI in the browser.

Closing words

With this post, the basic UI is working. This was easier than expected. I only got stuck a little bit when accessing the HTML elements to focus the text field and scroll the chat area, and when I tried to access the current instance using this. React is heavily used and the React community is huge, which is why it is easy to get help pretty fast.

In the next post, I'm going to integrate SignalR to get the Websockets running. I'll also add two Web APIs to fetch the initial data: the currently logged-on users and the latest 50 chat messages don't need to be pushed through the Websocket. For this, I need to write the first functional components in React and inject them into the UI components of this post.

Creating a chat application using React and ASP.NET Core - Part 3

In this blog series, I'm going to create a small chat application using React and ASP.NET Core, to learn more about React and to learn how React behaves in an ASP.NET Core project during development and deployment. This series is divided into 5 parts, which should cover all relevant topics:

  1. React Chat Part 1: Requirements & Setup
  2. React Chat Part 2: Creating the UI & React Components
  3. React Chat Part 3: Adding Websockets using SignalR
  4. React Chat Part 4: Authentication & Storage
  5. React Chat Part 5: Deployment to Azure

I have also set up a GitHub repository where you can follow the project: https://github.com/JuergenGutsch/react-chat-demo. Feel free to share your ideas about the topic in the comments below or in issues on GitHub. Because I'm still learning React, please tell me about significant and conceptual errors by dropping a comment or creating an issue on GitHub. Thanks.

About SignalR

SignalR for ASP.NET Core is a framework to enable Websocket communication in ASP.NET Core applications. Modern browsers already support Websocket, which is part of the HTML5 standard; for older browsers SignalR provides a fallback based on standard HTTP/1.1. SignalR is basically a server-side implementation based on ASP.NET Core and Kestrel. It uses the same dependency injection mechanism and can be added to the application via a NuGet package. Additionally, SignalR provides various client libraries to consume Websockets in client applications. In this chat application, I use @aspnet/signalr-client loaded via NPM. The package also contains the TypeScript definitions, which makes it easy to use in a TypeScript application like this one.

I added the SignalR NuGet package in the first part of this blog series. To enable SignalR I need to add it to the ServiceCollection:

services.AddSignalR();

The server part

In C#, I created a ChatService that will later be used to connect to the data storage. For now it uses a dictionary to store the messages. I don't show this service here, because the implementation is not relevant and will change later on. But I use this service in the code I show here. It is mainly used in the ChatController, the Web API controller that loads some initial data, and in the ChatHub, which is the Websocket endpoint for this chat. The service gets injected via dependency injection that is configured in the Startup.cs:

services.AddSingleton<IChatService, ChatService>();

Web API

The ChatController is simple; it just contains GET methods. Do you remember the last posts? The initial data of the logged-on users and the first chat messages were defined in the React components. I moved this to the ChatController on the server side:

[Route("api/[controller]")]
public class ChatController : Controller
{
    private readonly IChatService _chatService;

    public ChatController(IChatService chatService)
    {
        _chatService = chatService;
    }
    // GET: api/<controller>
    [HttpGet("[action]")]
    public IEnumerable<UserDetails> LoggedOnUsers()
    {
        return new[]{
            new UserDetails { Id = 1, Name = "Joe" },
            new UserDetails { Id = 3, Name = "Mary" },
            new UserDetails { Id = 2, Name = "Pete" },
            new UserDetails { Id = 4, Name = "Mo" } };
    }

    [HttpGet("[action]")]
    public IEnumerable<ChatMessage> InitialMessages()
    {
        return _chatService.GetAllInitially();
    }
}

The method LoggedOnUsers simply creates the users list. I will change that once the authentication is done. The method InitialMessages loads the first 50 messages from the faked data storage.

SignalR

The Websocket endpoints are defined in so-called Hubs. One Hub defines one single Websocket endpoint. I created a ChatHub that is the endpoint for this application. The methods in the ChatHub are handler methods that handle incoming messages through a specific channel.

The ChatHub needs to be added to the SignalR middleware:

app.UseSignalR(routes =>
{
    routes.MapHub<ChatHub>("chat");
});

In SignalR, the methods of a Hub are the channel definitions and the handlers at the same time, while in NodeJS' socket.io you define channels and bind a handler to each channel.

The currently used data are still fake data and authentication is not yet implemented. This is why the user's name is still hard-coded:

using Microsoft.AspNetCore.SignalR;
using ReactChatDemo.Services;

namespace ReactChatDemo.Hubs
{
    public class ChatHub : Hub
    {
        private readonly IChatService _chatService;

        public ChatHub(IChatService chatService)
        {
            _chatService = chatService;
        }

        public void AddMessage(string message)
        {
            var chatMessage = _chatService.CreateNewMessage("Juergen", message);
            // Call the MessageAdded method to update clients.
            Clients.All.InvokeAsync("MessageAdded", chatMessage);
        }
    }
}

This Hub only contains a method AddMessage that gets the actual message as a string. Later on we will replace the hard-coded user name with the name of the logged-on user. Then a new message gets created and added to the data store via the ChatService. The new message is an object that contains a unique id, the name of the authenticated user, a create date and the actual message text.

Then the message gets sent to the clients through the Websocket channel "MessageAdded".

The client part

On the client side, I want to use the socket in two different components, but I want to avoid creating two different Websocket clients. The idea is to create a WebsocketService class that is used in both components. Usually I would create two instances of this WebsocketService, but that would create two different clients too. So I needed to think about dependency injection in React and a singleton instance of that service.

SignalR Client

While googling for dependency injection in React, I read a lot about the fact that DI is not needed in React. I was kind of confused. DI is everywhere in Angular, but it is not necessarily needed in React? There are packages that support DI, but I tried to find another way. And actually there is another way: in ES6 and in TypeScript it is possible to immediately create an instance of an object and to import this instance everywhere you need it.

import { HubConnection, TransportType, ConsoleLogger, LogLevel } from '@aspnet/signalr-client';

import { ChatMessage } from './Models/ChatMessage';

class ChatWebsocketService {
    private _connection: HubConnection;

    constructor() {
        var transport = TransportType.WebSockets;
        let logger = new ConsoleLogger(LogLevel.Information);

        // create Connection
        this._connection = new HubConnection(`http://${document.location.host}/chat`,
            { transport: transport, logging: logger });
        
        // start connection
        this._connection.start().catch(err => console.error(err, 'red'));
    }

    // more methods here ...
   
}

const WebsocketService = new ChatWebsocketService();

export default WebsocketService;

Inside this class the Websocket (HubConnection) client gets created and configured. The transport type needs to be WebSockets. Also, a ConsoleLogger gets added to the client to send log information to the browser's console. In the last line of the constructor, I start the connection and add an error handler that writes to the console. The instance of the connection is stored in a private variable inside the class. Right after the class, I create an instance and export it. This way the instance can be imported in any class:

import WebsocketService from './WebsocketService'

To keep the Chat component and the Users component clean, I created an additional service class for each of the components. These service classes encapsulate the calls to the Web API endpoints and the usage of the WebsocketService. Please have a look into the GitHub repository to see the complete services.

The WebsocketService contains three methods. One is to handle incoming messages when a user logs on to the chat:

registerUserLoggedOn(userLoggedOn: (id: number, name: string) => void) {
    // get new user from the server
    this._connection.on('UserLoggedOn', (id: number, name: string) => {
        userLoggedOn(id, name);
    });
}

This is not yet used. I need to add the authentication first.

The other two methods are to send a chat message to the server and to handle incoming chat messages:

registerMessageAdded(messageAdded: (msg: ChatMessage) => void) {
    // get a new chat message from the server
    this._connection.on('MessageAdded', (message: ChatMessage) => {
        messageAdded(message);
    });
}
sendMessage(message: string) {
    // send the chat message to the server
    this._connection.invoke('AddMessage', message);
}

In the Chat component I pass a handler method to the ChatService, and the service passes the handler on to the WebsocketService. The handler then gets called every time a message comes in:

//Chat.tsx
let that = this;
this._chatService = new ChatService((msg: ChatMessage) => {
    this.handleOnSocket(that, msg);
});

In this case the passed-in handler is only an anonymous method, a lambda expression, that calls the actual handler method defined in the component. I need to pass a local variable with the current instance of the chat component to the handleOnSocket method, because this is not available when the handler is called - it is called outside of the context where it is defined.

The handler then loads the existing messages from the component's state, adds the new message and updates the state:

//Chat.tsx
handleOnSocket(that: Chat, message: ChatMessage) {
    let messages = that.state.messages;
    messages.push(message);
    that.setState({
        messages: messages,
        currentMessage: ''
    });
    that.scrollDown(that);
    that.focusField(that);
}

At the end, I need to scroll to the latest message and to focus the text field again.

Web API client

The UsersService.ts and the ChatService.ts both contain a method to fetch the data from the Web API. As preconfigured in the ASP.NET Core React project, I am using isomorphic-fetch to call the Web API:

//ChatService.ts
public fetchInitialMessages(fetchInitialMessagesCallback: (msg: ChatMessage[]) => void) {
    fetch('api/Chat/InitialMessages')
        .then(response => response.json() as Promise<ChatMessage[]>)
        .then(data => {
            fetchInitialMessagesCallback(data);
        });
}

The method fetchLogedOnUsers in the UsersService looks almost the same. The method gets a callback method from the Chat component that gets the ChatMessages passed in. Inside the Chat component this method gets called like this:

this._chatService.fetchInitialMessages(this.handleOnInitialMessagesFetched);

The handler then updates the state with the new list of ChatMessages and scrolls the chat area down to the latest message:

handleOnInitialMessagesFetched(messages: ChatMessage[]) {
    this.setState({
        messages: messages
    });

    this.scrollDown(this);
}

Let's try it

Now it is time to try it out. F5 starts the application and opens the configured browser:

This is almost the same view as in the last post about the UI. To be sure SignalR is working, I had a look into the network tab of the browser developer tools:

Here it is. Here you can see the message history of the Websocket endpoint. The second line displays the message sent to the server and the third line is the answer from the server containing the ChatMessage object.

Closing words

This post was less easy than the ones before. Not the technical part, but I refactored the client part a little bit to keep the React components as simple as possible. For the functional components, I used regular TypeScript files and not TSX files. This worked great.

I'm still impressed by React.

In the next post I'm going to add authorization to get the logged-on user and to restrict the chat to logged-on users only. I'll also add a permanent storage for the chat messages.

Creating a chat application using React and ASP.NET Core - Part 4

In this blog series, I'm going to create a small chat application using React and ASP.NET Core, to learn more about React and to learn how React behaves in an ASP.NET Core project during development and deployment. This series is divided into 5 parts, which should cover all relevant topics:

  1. React Chat Part 1: Requirements & Setup
  2. React Chat Part 2: Creating the UI & React Components
  3. React Chat Part 3: Adding Websockets using SignalR
  4. React Chat Part 4: Authentication & Storage
  5. React Chat Part 5: Deployment to Azure

I have also set up a GitHub repository where you can follow the project: https://github.com/JuergenGutsch/react-chat-demo. Feel free to share your ideas about the topic in the comments below or in issues on GitHub. Because I'm still learning React, please tell me about significant and conceptual errors by dropping a comment or creating an issue on GitHub. Thanks.

Intro

My idea for this app is to split the storage between a storage for flexible objects and one for immutable objects. The flexible objects are the users and the users' metadata in this case. The immutable objects are the chat messages.

The messages are just stored one by one and will never change. Storing a message doesn't need to be super fast, but reading the messages needs to be as fast as possible. This is why I want to go with the Azure Table Storage: it is one of the fastest storages on Azure. In the past, at YooApps, we also used it as an event store for CQRS-based applications.

Handling the users doesn't need to be super fast either, because we only handle one user at a time. We don't read all of the users in one go and we don't do batch operations on them. So using a SQL storage with IdentityServer4 on e.g. an Azure SQL Database should be fine.

The users online will be stored in memory only, which is the third storage. The memory is safe in this case because, if the app shuts down, the users need to log on again anyway and the list of users online gets refilled. And it is not really critical if the list of users online is not in sync with the logged-on users.

This leads into three different storages:

  • Users: Azure SQL Database, handled by IdentityServer4
  • Users online: Memory, handled by the chat app
    • A singleton instance of a user tracker class
  • Messages: Azure Table Storage, handled by the chat app
    • Using the SimpleObjectStore and the Azure Table Storage provider

Setup IdentityServer4

To keep the samples easy, I do the logon of the users on the server side only. (I'll go through the SPA logon using React and IdentityServer4 in another blog post.) That means we are validating and using the sender's name on the server side only - in the MVC controller, the API controller and in the SignalR Hub.

It is recommended to set up IdentityServer4 in a separate web application, and we will do it the same way. So I followed the quickstart documentation on the IdentityServer4 web site, created a new empty ASP.NET Core project and added the IdentityServer4 NuGet packages, as well as the MVC package and the StaticFiles package. I first planned to use ASP.NET Core Identity with IdentityServer4 to store the identities, but I changed that to keep the samples simple. Now I only use the in-memory configuration you can see in the quickstart tutorials; I'm able to switch to ASP.NET Identity or any other custom SQL storage implementation later on. I also copied the IdentityServer4 UI code from the IdentityServer4.Quickstart.UI repository into that project.

The Startup.cs of the IdentityServer project looks pretty clean. It adds IdentityServer to the service collection and uses the IdentityServer middleware. While adding the services, I also add the configuration for the IdentityServer. As recommended and shown in the quickstart, the configuration is wrapped in the Config class that is used here:

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();

        // configure identity server with in-memory stores, keys, clients and scopes
        services.AddIdentityServer()
            .AddDeveloperSigningCredential()
            .AddInMemoryIdentityResources(Config.GetIdentityResources())
            .AddInMemoryApiResources(Config.GetApiResources())
            .AddInMemoryClients(Config.GetClients())
            .AddTestUsers(Config.GetUsers());
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }

        // use identity server
        app.UseIdentityServer();

        app.UseStaticFiles();
        app.UseMvcWithDefaultRoute();
    }
}

The next step is to configure the IdentityServer4. As you can see in the snippet above, this is done in a class called Config:

public class Config
{
    public static IEnumerable<Client> GetClients()
    {
        return new List<Client>
        {
            new Client
            {
                ClientId = "reactchat",
                ClientName = "React Chat Demo",

                AllowedGrantTypes = GrantTypes.Implicit,
                    
                RedirectUris = { "http://localhost:5001/signin-oidc" },
                PostLogoutRedirectUris = { "http://localhost:5001/signout-callback-oidc" },

                AllowedScopes =
                {
                    IdentityServerConstants.StandardScopes.OpenId,
                    IdentityServerConstants.StandardScopes.Profile
                }
            }
        };
    }

    internal static List<TestUser> GetUsers()
    {
        return new List<TestUser> {
            new TestUser
            {
                SubjectId = "1",
                Username = "juergen@gutsch-online.de",
                Claims = new []{ new Claim("name", "Juergen Gutsch") },
                Password ="Hello01!"
            }
        };
    }
    public static IEnumerable<ApiResource> GetApiResources()
    {
        return new List<ApiResource>
        {
            new ApiResource("reactchat", "React Chat Demo")
        };
    }

    public static IEnumerable<IdentityResource> GetIdentityResources()
    {
        return new List<IdentityResource>
        {
            new IdentityResources.OpenId(),
            new IdentityResources.Profile(),
        };
    }
}

The client id is called reactchat. I configured both projects, the chat application and the identity server application, to run with specific ports: the chat application runs on port 5001 and the identity server uses port 5002. So the redirect URIs in the client configuration point to port 5001.

Later on we are able to replace this configuration with a custom storage for the users and the clients.
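
By the way, the ports can be pinned in each project's launchSettings.json or in code. A sketch of the code variant for the chat application (the identity server would use port 5002 accordingly; BuildWebHost is the default ASP.NET Core 2.0 bootstrap method):

// Program.cs of the chat application - sketch, assuming the default 2.0 template
public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>()
        .UseUrls("http://localhost:5001") // the port the redirect URIs point to
        .Build();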

We also need to setup the client (the chat application) to use this identity server.

Adding authentication to the chat app

To add authentication, I need to add some configuration to the Startup.cs. The first thing is to add the authentication middleware to the Configure method. This does all the authentication magic and handles multiple kinds of authentication:

app.UseAuthentication();

Be sure to add this line before the usage of MVC and SignalR. I also put this line before the usage of the StaticFilesMiddleware.
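
The resulting order in the Configure method then looks something like this (a sketch, shortened to the relevant lines):

// Configure - order matters: authentication comes first
app.UseAuthentication();

app.UseStaticFiles();

app.UseSignalR(routes =>
{
    routes.MapHub<ChatHub>("chat");
});
app.UseMvc(routes =>
{
    // default and SPA fallback routes as shown in part 1
});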

Now I need to add and configure the needed services for this middleware:

services.AddAuthentication(options =>
    {
        options.DefaultScheme = "Cookies";
        options.DefaultChallengeScheme = "oidc";                    
    })
    .AddCookie("Cookies")
    .AddOpenIdConnect("oidc", options =>
    {
        options.SignInScheme = "Cookies";

        options.Authority = "http://localhost:5002";
        options.RequireHttpsMetadata = false;
        options.TokenValidationParameters.NameClaimType = "name";

        options.ClientId = "reactchat";
        options.SaveTokens = true;
    });

We add cookie authentication as well as OpenID Connect authentication. The cookie is used to temporarily store the user's information, to avoid an OIDC login on every request. To keep the samples simple, I switched off HTTPS.

I need to specify the NameClaimType, because IdentityServer4 provides the user's name within a simpler claim name, instead of the long default one.

That's it for the authentication part. We now need to secure the chat. This is done by adding the AuthorizeAttribute to the HomeController. Now the app will redirect to the identity server's login page if we try to access the view created by the secured controller.
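
A minimal sketch of that change (the namespace is assumed to match the project):

// HomeController.cs
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

namespace ReactChatDemo.Controllers
{
    [Authorize] // every action of this controller now requires a logged-on user
    public class HomeController : Controller
    {
        public IActionResult Index()
        {
            return View();
        }
    }
}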

After entering the credentials, we need to authorize the app to get the needed profile information from the identity server:

Once this is done, we can start using the user's name in the chat. To do this, we need to change the AddMessage method in the ChatHub a little bit:

public void AddMessage(string message)
{
    var username = Context.User.Identity.Name;
    var chatMessage =  _chatService.CreateNewMessage(username, message);
    // Call the MessageAdded method to update clients.
    Clients.All.InvokeAsync("MessageAdded", chatMessage);
}

I removed the magic string with my name in it and replaced it with the username I get from the current Context. Now the chat uses the logged-on user to add chat messages:

I'll not go into the user tracker here, to keep this post short. Please follow the GitHub repository to learn more about tracking the online state of the users.

Storing the messages

The idea is to store the messages permanently on the server. The current in-memory implementation doesn't survive a restart of the application: every time the app restarts, the memory gets cleared and the messages are gone. I want to use the Azure Table Storage here, because it is pretty simple to use and reading the storage is amazingly fast. We need to add another NuGet package to our app, which is the Azure Storage client (WindowsAzure.Storage).

To encapsulate the Azure Table Storage, I will create a ChatMessageRepository that contains the code to connect to the tables.

Let's quickly set up a new storage account on Azure. Log on to the Azure portal and go to the storage section. Create a new storage account and follow the wizard to complete the setup. After that, you need to copy the storage credentials ("Account Name" and "Account Key") from the portal. We need them to connect to the storage account later on.

Be careful with the secrets

Never ever store secret information in a configuration or settings file that is checked into the source code repository. You don't need to do this anymore, thanks to the user secrets and the Azure app settings.

All the secret information and the database connection strings should be stored in the user secrets during development. To set up new user secrets, just right-click the project that needs to use the secrets and choose the "Manage User Secrets" entry from the menu:

Visual Studio then opens a secrets.json file for that specific project, which is stored somewhere in the current user's AppData folder. You see the actual location if you hover over the tab in Visual Studio. Add your secret data there and save the file.
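
For this project, the secrets.json could look like the following sketch; the key names match what the repository code below reads, the values are placeholders:

{
    "accountName": "<your storage account name>",
    "accountKey": "<your storage account key>",
    "tableName": "<your table name>"
}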

The data then gets passed as configuration entries into the app:

// ChatMessageRepository.cs
private readonly string _tableName;
private readonly CloudTableClient _tableClient;
private readonly IConfiguration _configuration;

public ChatMessageRepository(IConfiguration configuration)
{
    _configuration = configuration;

    var accountName = configuration.GetValue<string>("accountName");
    var accountKey = configuration.GetValue<string>("accountKey");
    _tableName = _configuration.GetValue<string>("tableName");

    var storageCredentials = new StorageCredentials(accountName, accountKey);
    var storageAccount = new CloudStorageAccount(storageCredentials, true);
    _tableClient = storageAccount.CreateCloudTableClient();
}

On Azure, there is an app settings section in every Azure Web App. Configure the secrets there; these settings get passed as configuration items to the app as well. This is the most secure approach to store the secrets.

Using the table storage

You don't need to create the actual table using the Azure portal; I create it in code if it doesn't exist. To do this, I needed to create a table entity object first. This defines the available fields in that Azure Table Storage:

public class ChatMessageTableEntity : TableEntity
{
    public ChatMessageTableEntity(Guid key)
    {
        PartitionKey = "chatmessages";
        RowKey = key.ToString("X");
    }

    public ChatMessageTableEntity() { }

    public string Message { get; set; }

    public string Sender { get; set; }
}

The TableEntity has three default properties: a Timestamp, a unique RowKey as string and a PartitionKey as string. The RowKey needs to be unique; in a users table it could be the user's email address. In our case we don't have a unique value in the chat messages, so we'll use a Guid instead. The PartitionKey is not unique and bundles several items into something like a storage unit. Reading entries from a single partition is quite fast; data inside a partition never gets split across many storage locations, it is kept together. In the current phase of the project it doesn't make sense to use more than one partition. Later on it would make sense to use e.g. one partition key per chat room.
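
As a hypothetical sketch - not implemented in this project - a chat-room-aware entity could set the partition key like this:

// hypothetical constructor for a multi-room chat - illustration only
public ChatMessageTableEntity(Guid key, string roomName)
{
    // one partition per chat room keeps the messages of a room together
    PartitionKey = $"chatroom-{roomName}";
    RowKey = key.ToString("X");
}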

The ChatMessageTableEntity has one constructor we will use to create a new entity, and an empty constructor that is used by the TableClient to create the entity out of the table data. I also added two properties for the Message and the Sender. I will use the Timestamp property of the parent class for the time shown in the chat window.

Add a message to the Azure Table Storage

To add a new message to the Azure Table Storage, I added a new method to the repository:

// ChatMessageRepository.cs
public async Task<ChatMessageTableEntity> AddMessage(ChatMessage message)
{
    var table = _tableClient.GetTableReference(_tableName);

    // Create the table if it doesn't exist.
    await table.CreateIfNotExistsAsync();

    var chatMessage = new ChatMessageTableEntity(Guid.NewGuid())
    {
        Message = message.Message,
        Sender = message.Sender
    };

    // Create the TableOperation object that inserts the customer entity.
    TableOperation insertOperation = TableOperation.Insert(chatMessage);

    // Execute the insert operation.
    await table.ExecuteAsync(insertOperation);

    return chatMessage;
}

This method uses the TableClient created in the constructor.

Read messages from the Azure Table Storage

Reading the messages is done using the method ExecuteQuerySegmentedAsync. With this method it is possible to read the table entities in chunks from the Table Storage. This makes sense, because there is a limit of 1,000 table entities per request. In my case I don't want to load all the data anyway, but only the latest 100:

// ChatMessageRepository.cs
public async Task<IEnumerable<ChatMessage>> GetTopMessages(int number = 100)
{
    var table = _tableClient.GetTableReference(_tableName);

    // Create the table if it doesn't exist.
    await table.CreateIfNotExistsAsync();
    
    string filter = TableQuery.GenerateFilterCondition(
        "PartitionKey", 
        QueryComparisons.Equal, "chatmessages");
    var query = new TableQuery<ChatMessageTableEntity>()
        .Where(filter)
        .Take(number);

    var entities = await table.ExecuteQuerySegmentedAsync(query, null);

    var result = entities.Results.Select(entity =>
        new ChatMessage
        {
            Id = entity.RowKey,
            Date = entity.Timestamp,
            Message = entity.Message,
            Sender = entity.Sender
        });

    return result;
}
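
As a side note: one call to ExecuteQuerySegmentedAsync only returns a single segment. If you ever need to read more than the first segment, you would loop over the continuation token. A rough sketch, not part of the demo code:

// Sketch: read all entities segment by segment using the continuation token
var allEntities = new List<ChatMessageTableEntity>();
TableContinuationToken continuationToken = null;
do
{
    var segment = await table.ExecuteQuerySegmentedAsync(query, continuationToken);
    allEntities.AddRange(segment.Results);
    continuationToken = segment.ContinuationToken;
} while (continuationToken != null);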

Using the repository

In the Startup.cs I changed the registration of the ChatService from singleton to transient, because we don't need to store the messages in memory anymore. I also added a transient registration for the IChatMessageRepository:

services.AddTransient<IChatMessageRepository, ChatMessageRepository>();
services.AddTransient<IChatService, ChatService>();

The IChatMessageRepository gets injected into the ChatService. Since the repository is async, I also needed to change the signatures of the service methods a little bit to support the async calls. The service looks cleaner now:

public class ChatService : IChatService
{
    private readonly IChatMessageRepository _repository;

    public ChatService(IChatMessageRepository repository)
    {
        _repository = repository;
    }

    public async Task<ChatMessage> CreateNewMessage(string senderName, string message)
    {
        var chatMessage = new ChatMessage(Guid.NewGuid())
        {
            Sender = senderName,
            Message = message
        };
        await _repository.AddMessage(chatMessage);

        return chatMessage;
    }

    public async Task<IEnumerable<ChatMessage>> GetAllInitially()
    {
        return await _repository.GetTopMessages();
    }
}

The controller action and the Hub method need to change to support the async calls as well. It is just a matter of making the methods async, returning Tasks and awaiting the service methods.

// ChatController.cs
[HttpGet("[action]")]
public async Task<IEnumerable<ChatMessage>> InitialMessages()
{
    return await _chatService.GetAllInitially();
}
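
The Hub method ends up looking similar to this sketch; the method and client callback names are my assumptions, see the GitHub repo for the real code:

// ChatHub.cs (sketch; names may differ in the repository)
public class ChatHub : Hub
{
    private readonly IChatService _chatService;

    public ChatHub(IChatService chatService)
    {
        _chatService = chatService;
    }

    public async Task AddMessage(string name, string message)
    {
        var chatMessage = await _chatService.CreateNewMessage(name, message);
        // push the stored message to all connected clients
        await Clients.All.SendAsync("MessageAdded", chatMessage);
    }
}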

Almost done

The authentication and storing the messages are done now. What needs to be done in the last step is to add the logged-on user to the UserTracker and to push the new user to the client. I'll not cover that in this post, because it already has more than 410 lines and more than 2700 words. Please visit the GitHub repository during the next days to learn how I did this.

Closing words

Again, this post wasn't really about React. The authentication is only done server side, since this isn't really a single page application.

To finish this post I needed some more time to get the authentication using IdentityServer4 running. I got stuck on an "Invalid redirect URL" error. In the end it was just a small typo in the RedirectUris property of the client configuration of the IdentityServer, but it took some hours to find it.

In the next post I will come back a little bit to React and Webpack while writing about the deployment. I'm going to write about automated deployment to an Azure Web App using CAKE, running on AppVeyor.

I'm attending the MVP Summit next week, so the last post of this series will be written and published from Seattle, Bellevue or Redmond :-)

Creating a chat application using React and ASP.NET Core - Part 5

In this blog series, I'm going to create a small chat application using React and ASP.NET Core, to learn more about React and to learn how React behaves in an ASP.NET Core project during development and deployment. This series is divided into 5 parts, which should cover all relevant topics:

  1. React Chat Part 1: Requirements & Setup
  2. React Chat Part 2: Creating the UI & React Components
  3. React Chat Part 3: Adding Websockets using SignalR
  4. React Chat Part 4: Authentication & Storage
  5. React Chat Part 5: Deployment to Azure

I also set up a GitHub repository where you can follow the project: https://github.com/JuergenGutsch/react-chat-demo. Feel free to share your ideas about this topic in the comments below or in issues on GitHub. Because I'm still learning React, please tell me about significant and conceptual errors by dropping a comment or by creating an issue on GitHub. Thanks.

Intro

In this post I will write about the deployment of the app to Azure App Services. I will use CAKE to build, pack and deploy the apps, both the identity server and the actual app. I will run the build on AppVeyor, which is a free build server for open source projects and works great for projects hosted on GitHub.

I'll not go deep into the AppVeyor configuration; the important topics are CAKE, Azure and the app itself.

BTW: SignalR moved on to the next version during the last weeks. It is no longer alpha. The current version is 1.0.0-preview1-final. I updated the version in the package.json and in the ReactChatDemo.csproj. The NPM package name also changed from "@aspnet/signalr-client" to "@aspnet/signalr". I needed to update the import statement in the WebsocketService.ts file as well. After updating SignalR I got some small breaking changes, which were easily fixed. (Please see the GitHub repo to learn about the changes.)

Setup CAKE

CAKE is a build DSL built on top of Roslyn, which means you write your build scripts in C#. CAKE is open source and has a huge community, which creates a ton of add-ins for it. It also has a lot of built-in features.

Setting up CAKE is easily done. Just open PowerShell and cd to the solution folder. Now you need to download a PowerShell script that bootstraps the CAKE build and loads more dependencies if needed:

Invoke-WebRequest https://cakebuild.net/download/bootstrapper/windows -OutFile build.ps1

Later on, you run the build.ps1 to start your build script. Now the setup is complete and I can start to create the actual build script.

I created a new file called build.cake. To edit the file it makes sense to use Visual Studio Code, because VS Code also has IntelliSense for CAKE. In Visual Studio 2017 you only get syntax highlighting. Currently I don't know of an add-in for VS that enables IntelliSense.

My starting point for every new build script is the simple example from the quick start demo:

var target = Argument("target", "Default");

Task("Default")
  .Does(() =>
  {
    Information("Hello World!");
  });

RunTarget(target);

The script then gets started by calling the build.ps1 in PowerShell:

.\build.ps1

If this works, I'm able to start hacking the CAKE script in. The build steps I usually use look like this:

  • Cleaning the workspace
  • Restoring the packages
  • Building the solution
  • Running unit tests
  • Publishing the app
    • In the context of non-web applications this means packaging the app
  • Deploying the app

To deploy the app I use the CAKE Kudu client add-in, and I need to pass in some Azure App Service credentials. You get these credentials by downloading the publish profile from the Azure App Service and copying them out of the file. Be careful and don't save the secrets in the build script. I usually store them in environment variables and read them from there. Because I have two apps (the actual chat app and the identity server) I need to do it twice:

#addin nuget:?package=Cake.Kudu.Client

string  baseUriApp     = EnvironmentVariable("KUDU_CLIENT_BASEURI_APP"),
        userNameApp    = EnvironmentVariable("KUDU_CLIENT_USERNAME_APP"),
        passwordApp    = EnvironmentVariable("KUDU_CLIENT_PASSWORD_APP"),
        baseUriIdent   = EnvironmentVariable("KUDU_CLIENT_BASEURI_IDENT"),
        userNameIdent  = EnvironmentVariable("KUDU_CLIENT_USERNAME_IDENT"),
        passwordIdent  = EnvironmentVariable("KUDU_CLIENT_PASSWORD_IDENT");

var target = Argument("target", "Default");

Task("Clean")
    .Does(() =>
          {	
              DotNetCoreClean("./react-chat-demo.sln");
              CleanDirectory("./publish/");
          });

Task("Restore")
	.IsDependentOn("Clean")
	.Does(() => 
          {
              DotNetCoreRestore("./react-chat-demo.sln");
          });

Task("Build")
	.IsDependentOn("Restore")
	.Does(() => 
          {
              var settings = new DotNetCoreBuildSettings
              {
                  NoRestore = true,
                  Configuration = "Release"
              };
              DotNetCoreBuild("./react-chat-demo.sln", settings);
          });

Task("Test")
	.IsDependentOn("Build")
	.Does(() =>
          {
              var settings = new DotNetCoreTestSettings
              {
                  NoBuild = true,
                  Configuration = "Release",
                  NoRestore = true
              };
              var testProjects = GetFiles("./**/*.Tests.csproj");
              foreach(var project in testProjects)
              {
                  DotNetCoreTest(project.FullPath, settings);
              }
          });

Task("Publish")
	.IsDependentOn("Test")
	.Does(() => 
          {
              var settings = new DotNetCorePublishSettings
              {
                  Configuration = "Release",
                  OutputDirectory = "./publish/ReactChatDemo/",
                  NoRestore = true
              };
              DotNetCorePublish("./ReactChatDemo/ReactChatDemo.csproj", settings);
              settings.OutputDirectory = "./publish/ReactChatDemoIdentities/";
              DotNetCorePublish("./ReactChatDemoIdentities/ReactChatDemoIdentities.csproj", settings);
          });

Task("Deploy")
	.IsDependentOn("Publish")
	.Does(() => 
          {
              var kuduClient = KuduClient(
                  baseUriApp,
                  userNameApp,
                  passwordApp);
              var sourceDirectoryPath = "./publish/ReactChatDemo/";
              var remoteDirectoryPath = "/site/wwwroot/";

              kuduClient.ZipUploadDirectory(
                  sourceDirectoryPath,
                  remoteDirectoryPath);

              kuduClient = KuduClient(
                  baseUriIdent,
                  userNameIdent,
                  passwordIdent);
              sourceDirectoryPath ="./publish/ReactChatDemoIdentities/";
              remoteDirectoryPath = "/site/wwwroot/";

              kuduClient.ZipUploadDirectory(
                  sourceDirectoryPath,
                  remoteDirectoryPath);
          });

Task("Default")
    .IsDependentOn("Deploy")
    .Does(() =>
          {
              Information("Your build is done :-)");
          });

RunTarget(target);

To get this script running locally, you need to set each of the environment variables in the current PowerShell session:

$env:KUDU_CLIENT_PASSWORD_APP = "super secret password"
# and so on...

If you only want to test the compile and publish stuff, just set the dependency of the default target to "Publish" instead of "Deploy". This way the deploy part will not run, you don't deploy by accident, and you save some time while trying things out.
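
For example, while testing locally the default target from the script above could look like this:

Task("Default")
    .IsDependentOn("Publish") // switch back to "Deploy" for the real build
    .Does(() =>
          {
              Information("Your build is done :-)");
          });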

Use CAKE in AppVeyor

On AppVeyor the environment variables are set in the UI. Don't set them in the YAML configuration, because they are not stored securely there and everybody can see them.

The simplest appveyor.yml file looks like this:

version: 1.0.0-preview1-{build}
pull_requests:
  do_not_increment_build_number: true
branches:
  only:
  - master
skip_tags: true
image: Visual Studio 2017 Preview
build_script:
- ps: .\build.ps1
test: off
deploy: off
# this is needed to install the latest node version
environment:
  nodejs_version: "8.9.4"
install:
  - ps: Install-Product node $env:nodejs_version
  # write out version
  - node --version
  - npm --version

This configuration builds only the branches listed in the branches section, which here is just master; when using git flow, as I used to do, the develop branch would be listed as well. Otherwise change it to whatever branches you want to build. I skip tags and any other branches.

The image is Visual Studio 2017 (the preview version only if you want to try the latest features).

I can switch off tests, because they are run in the CAKE script. The good thing is that the XUnit test output, produced by the test runs in CAKE, gets published to the AppVeyor reports anyway. Deploy is also switched off, because it's done in CAKE too.

The last thing that needs to be done is to install the latest Node.js version. Otherwise the preinstalled, pretty outdated version is used. The current version is needed to download the React dependencies and to run Webpack to compile and bundle the React app.

You could also configure the CAKE script in a way that test, deploy and build call different targets inside CAKE. But this is not really needed and would make the build a little less readable.

If you now push the entire repository to GitHub, you need to go to AppVeyor and set up a new build project by selecting your GitHub repository. A new AppVeyor account is easily set up using an existing GitHub account. When the build project is created, there is nothing more to set up. Just start a new build and see what happens. Hopefully you'll also get a green build like this:

Closing words

This post was finished one day after the Global MVP Summit 2018, on a pretty sunny day in Seattle.

I spent the two nights before the summit started in downtown Seattle, and the two nights after as well. Both times it was unexpectedly sunny.

I finish this series with this fifth blog post, having learned a little bit about React and how it behaves in an ASP.NET Core project. And I really like it. I wouldn't build a complete single page application using React; that seems to be much easier and faster using Angular. But I will definitely use React in the future to create rich HTML UIs.

The React ASP.NET Core project works great in Visual Studio. It is great that Webpack is used here, because it saves a lot of time and avoids hacking around the VS environment.

Recap the MVP Global Summit 2018

Being an MVP has a lot of benefits. Getting free tools, software and Azure credits are just a few of them. The direct connection to the product group has a lot more value than all the software. Even more valuable is the fact of being part of an expert community with more than 3700 MVPs from around the world.

In fact, there are a lot more experts outside the MVP community who also contribute to the communities around the Microsoft related technologies and tools. Being an MVP also means finding those experts and nominating them to get the MVP award as well.

The biggest benefit of being an MVP is the yearly MVP Global Summit in Redmond. This year Microsoft again invited the MVPs to attend the summit. More than 2000 MVPs and Regional Directors were registered to attend.

I attended the summit this year as well. It was my third summit and the third chance to directly interact with the product groups and with other MVPs from all over the world.

The first days in Seattle

My journey to the summit started at Frankfurt airport, where a lot of German, Austrian and Swiss MVPs started their journey and where many more MVPs from Europe changed planes. The LH490 and LH491 flights around the summits are called the "MVP planes" because of this. It always feels like a huge yearly school trip.

The flight was great, sunny most of the time, and I had an impressive view over Greenland and Canada:

Greenland

After we arrived at SeaTac, some German MVP friends and I took the train to downtown Seattle. We checked in at the hotels and went for a beer and a burger. This year I decided to arrive one day earlier than in the last years and to stay in downtown Seattle for the first two nights and the last two nights. This was a great decision.

Pike Place Seattle

I spent the nights just a few steps away from Pike Place. I really love the special atmosphere at this place and in this area. There are a lot of small stores, small restaurants, the farmers market and the breweries. The very first Starbucks store is at this place as well. It's really a special place. Staying here also allows me to use the public transportation, which works great in Seattle.

There is a direct train from the airport to downtown Seattle and an express bus from downtown Seattle to the center of Bellevue, where the conference hotels are located. For those of you who don't want to spend 40 USD or more for an Uber, a taxi or a shuttle: the train to Seattle costs 3 USD and the express bus 2.70 USD. Both take around 30 minutes, plus maybe a few minutes of waiting in the underground station in Seattle.

The Summit days

After checking into my conference hotel on Sunday morning, I went to the registration, but it seemed I was pretty early:

Summit Registration

But that impression wasn't really right. Most of the MVPs were already in the queue to register for the conference and to get their swag.

Like in the last years, the summit days were amazing, even if we didn't really learn a lot of new things in my contribution area. Most of the stuff in my MVP category is open source and openly discussed on GitHub and Twitter and in blog posts written by Microsoft. Anyway, we learned about some cool ideas, which I unfortunately cannot write down here, because it is almost all NDA content.

So the most amazing things during the summit were the events and parties around the conference and meeting all the famous MVPs and Microsoft employees. I'm not really a selfie guy, but this time I really needed to take a picture with the amazing Phil "Mister ASP.NET MVC" Haack.

Phil Haack

I'm also glad to have met Steve Gordon, Andrew Lock, David Pine, Damien Bowden, Jon Galloway, Damien Edwards, David Fowler, Immo Landwerth, Glen Condron, and many, many more. And of course the German speaking MVP family from Germany (D), Austria (A) and Switzerland (CH) (aka DACH).

Special thanks to Alice, who manages all the MVPs in the DACH area.

I'm also pretty glad to have met the owner of millions of hats, Mr. Jeff Fritz, in person, who asked me to do a lightning talk in front of many program managers during the summit. Five MVPs were asked to tell the developer division program managers stories about the worst or the best things about the development tools. I was quite nervous, but it worked out well, mostly because Jeff was super cool. I told a worst-case story about the usage of Visual Studio 2015 and TFS by a customer with a huge amount of solutions and a lot more VS projects in them. It was pretty weird to tell Julia Liuson (Corporate Vice President of Visual Studio) about those problems, of all people. But she was really nice and asked the right questions.

BTW: The power bank (battery pack) we got from Jeff after the lightning talk is the best power bank I ever had. Thanks, Jeff.

On Thursday, the last summit day for the VS and dev tools MVPs, there was a hackathon. They provided different topics to work on. There was a table for working with Blazor, another one for some IoT things, F#, C#, and even VB.NET still seems to be a thing ;-)

My idea was to play around with Blazor, but I wanted to finalize a contribution to the ASP.NET documentation first. Unfortunately this took longer than expected, which is why I left the table and took a place at another one. I fixed an over-localization issue in the German ASP.NET documentation and took care of an issue in LightCore. In LightCore we currently have an open issue regarding some special registrations done by ASP.NET Core. We thought it was because of registrations added after the IServiceProvider was created, but David Fowler told me the provider is immutable and pointed me to the registrations of open generics. LightCore already supports open generics, but implemented the resolution in a wrong way. In case a registration of a list of open generics is not found, LightCore should return an empty list instead of null.

It was amazing how fast David Fowler pointed me to the right problem. Those guys are crazy smart. Just a few seconds after I showed him the missing registration, I got the right answer. Glen Condron told me right after how to isolate this issue and test it. Problem found, and I just need to fix it.

Thanks guys :-)

The last days in Seattle

I also spent the last two nights at the same location near Pike Place. Right after the hackathon, I grabbed my luggage at the conference hotel and took the express bus to Seattle again. I had a nice dinner together with André Krämer at the Pike Brewery. The next morning I had an amazingly yummy breakfast in a small restaurant at the Pike Place Market, with a pretty cool morning view of the waterfront. Together with Kostja Klein, I had a cool chat about this and that, the INETA Germany and JustCommunity.

The last day usually is also the time to buy some souvenirs for the kids, my lovely wife and the Mexican exchange student who lives in our house. I also finished the blog series about React and ASP.NET Core.

On the last morning in Seattle, I strolled down Pike Street into the Starbucks to have a small breakfast. It was pretty early at Pike Place:

Pike Place Seattle

Leaving the Seattle area and the summit feels a little bit like leaving a second home.

I'm really looking forward to the next summit :-)

BTW: Seattle isn't only about rainy and cloudy weather

Have I already told you that every time I visited Seattle, it was sunny and warm?

It's because of me, I think.

During the last summits it was sunny whenever I visited downtown Seattle. In summer 2012, I was in a pretty warm and sunny Seattle together with my family.

This time it was quite warm during the first days. It started to rain when I left Seattle to go to the summit locations in Bellevue and Redmond, and it was sunny and warm again when I moved back to downtown Seattle.

It's definitely because of me, I'm sure. ;-)

Or maybe the rainy cloudy Seattle is a different one ;-)

Topics I'll write about

Some of the topics I'm allowed to write about, and definitely will write about in the next posts, are the following:

  • News on ASP.NET Core 2.1
  • News on ASP.NET (yes, it is still alive)
  • New features in C# 7.x
  • Live Share
  • Blazor

Why I use paket now

I never really had any major problems using the NuGet client. Reading the Twitter timeline, it sometimes seems I am the only one without problems. But depending on what dev process you like to use, there can be a problem. This is not really NuGet's fault, but this process makes the usage of NuGet a little more complex than it should be.

As mentioned in previous posts, I really like to use Git Flow and the clear branching structure. I always have a production branch, which is the master. It contains the sources of the version which is currently in production.

In my projects I don't need to care about multiple versions installed on multiple customer machines. As a web developer, you usually only have one production version installed somewhere on a web server.

I also have a next version branch, which is the develop branch. It contains the version we are currently working on. Besides these, we can have feature branches, hotfix branches, release branches and so on. Read more about Git Flow in this pretty nice cheat sheet.

The master branch gets compiled in release mode and uses a semantic version like (breaking).(feature).(patch). The develop branch gets compiled in debug mode and has a version number that tells NuGet it is a preview version: (breaking).(feature).(patch)-preview(build), where build is the build number generated by the build server.

The actual problem

We use this versioning, build and release process for web projects and shared libraries. And with those shared libraries it starts to get complicated when using NuGet.

Some of the shared libraries are used in multiple solutions and shared via a private NuGet feed, which is a common way, I think.

Within the next version of a web project we also use the next versions of the shared libraries, to test them. In the current versions of the web projects we use the current versions of the shared libraries. Makes sense, right? If we do a new production release of a web project, we need to switch back to the production versions of the shared libraries.

The problem: in the solution's packages folder, NuGet creates a sub-folder per package containing the version number, and the projects reference the binaries from those folders. Changing the library versions requires using the UI or changing the packages.config AND the project files, because the reference paths contain the version information.

Maybe switching the versions back and forth doesn't make sense in most cases, but this is also the way I try new versions of the libraries. In this special case, we have to maintain multiple ASP.NET applications, which use multiple shared libraries, which in turn depend on different versions of external data sources. So a preview release of an application also goes to a preview environment with a preview version of a database, so it needs to use the preview versions of the needed libraries. While releasing new features or hotfixes, it might happen that we need to do a release without updating the production environments and the production databases. So we need to switch the dependencies back to the latest production versions of the libraries.

Paket solves it

Paket, instead, only supports one package version per solution, which makes a lot more sense. This means Paket doesn't store the packages in a sub-folder with a version number in its name. Changing the package versions is easily done in the paket.dependencies file. The reference paths in the project files don't change, and the projects immediately use the other versions after I change the version and restore the packages.
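
A paket.dependencies file for such a setup could look roughly like this; the feed URL and the shared library name are placeholders I made up:

source https://api.nuget.org/v3/index.json
source https://nuget.example.com/v3/index.json

nuget Newtonsoft.Json
nuget Our.Shared.Library 2.3.1-preview45

Switching back to the production version is then just a matter of changing that one version number and running a restore.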

Paket is an alternative NuGet client, developed by the amazing F# community.

Paket works well

Fortunately Paket works well with MSBuild and CAKE. Paket provides MSBuild targets to automatically restore packages before the build starts. In CAKE there is also an add-in to restore Paket dependencies. Because I don't commit the Paket binaries to the repository, I use the command line interface of Paket directly in CAKE:

Task("CleanDirectory")
	.Does(() =>
	{
		CleanDirectory("./Published/");
		CleanDirectory("./packages/");
	});

Task("LoadPaket")
	.IsDependentOn("CleanDirectory")
	.Does(() => {
		var exitCode = StartProcess(".paket/paket.bootstrapper.exe");
		Information("LoadPaket: Exit code: {0}", exitCode);
	});

Task("AssemblyInfo")
	.IsDependentOn("LoadPaket")
	.Does(() =>
	{
		var file = "./SolutionInfo.cs";		
		var settings = new AssemblyInfoSettings {
			Company = " YooApplications AG",
			Copyright = string.Format("Copyright (c) YooApplications AG {0}", DateTime.Now.Year),
			ComVisible = false,
			Version = version,
			FileVersion = version,
			InformationalVersion = version + build
		};
		CreateAssemblyInfo(file, settings);
	});

Task("PaketRestore")
	.IsDependentOn("AssemblyInfo")
	.Does(() => 
	{	
		var exitCode = StartProcess(".paket/paket.exe", "install");
		Information("PaketRestore: Exit code: {0}", exitCode);
	});

// ... and so on

Conclusion

No process is 100% perfect, not even this one. But it works pretty well in this case. We are able to do releases and hotfixes very fast. The setup of a new project using this process is fast and easy as well.

The whole process of releasing a new version, starting with the command git flow release start ... up to the deployed application on the web server, doesn't take more than 15 minutes, depending on the size of the application and the amount of tests to run on the build server.

I just recognized that this post is not about .NET Core or ASP.NET Core. The problem I described only happens with classic projects and solutions that store the NuGet packages in the solution's packages folder.

Any questions about that? Do you want to learn more about Git Flow, CAKE and continuous deployment? Just drop me a comment.

Running and Coding

I wasn't really sporty until two years ago, but I was active anyway. With three little kids and a sporty and lovely wife I was kind of forced to be. But a job where I mostly sit in a comfortable chair, great food and good southern German beers did their work as well. When I first met my wife, I weighed around 80 kg, which is fine for my height of 178 cm. But my weight increased up to 105 kg until Christmas 2015. That was way too much, I thought. Until then I had always tried to reduce it by doing some more cycling, more hiking and some gym, but it never really worked out.

Anyway, there is no more effective way to lose weight than running. It is, by the way, three times more effective than cycling. I tried it a lot in the past, but it pretty much hurts in the lower legs, and I stopped more than once.

Running the agile way

I tried it again around Easter 2016, in a slightly different way, and it worked. I tried to do it the same way as in a perfect software project:

I did it in an agile way, using pretty small goals to get as much success as possible.

I also bought a fitness watch to count steps, calories and levels and to measure the heart rate while running, to get some more challenges to work on. At the same time I changed my diet a lot.

It sounds weird and funny, but it worked really well. I have lost 20 kg since then!

I think it was important not to set too huge goals. I just wanted to lose 20 kg. I didn't set a time limit or anything like that.

I knew it would hurt in the lower legs, so I started to learn a lot about running and the different styles of running. I chose easy running, which works pretty well with natural running shoes and barefoot shoes. This worked well for me too.

Finding time to run

Finding time was the hardest part. In the past I always thought I was too busy to run. I discussed it a lot with the family and we figured out that the best time to run was during lunch time, because I need to walk the dog anyway, so this was also an option to run with the dog. It was a good thing for our huge dog as well.

Running at lunch time has another advantage: it clears my brain a little bit after four to five hours of work. (Yes, I usually start between 7 and 8 in the morning.) Running is great when you are working on software projects with a high level of complexity. Unfortunately, when I'm working in Basel I cannot go for a run, because there is no shower available. But I'm still able to run three to four times a week.

Starting to run

The first runs were a real pain. I chose a small lap of 2.5 km, because I needed to learn running as the first step. Because of the pain in the lower legs, I also chose to run shorter tracks uphill. Why uphill? Because it is more exhausting than running on level ground. So I had short uphill running phases and longer quick walking phases. Just a few runs later, the running phases started to get longer and longer.

This was the first success, just a few runs later. That was great. It was even greater when I completed my first full kilometer after one and a half months of running every second day. That was amazing.

On every run there was a success, and that really pushed me. But I didn't only succeed at running, I also started to lose weight, which pushed me even more. So the pain wasn't too bad and I continued running.

Some weeks later I ran the entire lap of 2.5 km, not really fast, but without a walking pause. Some more motivation.

I continued running just these 2.5 km for a few more weeks to collect some personal records on this lap.

Low carb

I mentioned the change in food. I changed to a low-carb diet, which in general is a way to reduce the consumption of sugar: every kind of sugar, which means bread, potatoes, pasta, rice and corn as well. In the first phase of three months I almost completely stopped eating carbs. After that phase I started to eat a few of them again. I also had one cheat day per week, when I was able to eat the normal way.

After six months of eating fewer carbs and running, I had lost around 10 kg, which was amazing, and I was absolutely happy with this progress.

Cycling as a compensation

As already mentioned, I run every second day. On the days in between I used my new mountain bike to climb the hills around the city where I live. It really was a kind of compensation, because cycling uses other parts of the legs (except when I run uphill).

Using my smart watch, I was able to measure that running burns on average three times more calories per hour than cycling in the same time. This measurement was done on me only and cannot be applied to any other person, but it actually makes sense to me.

Unfortunately, cycling during the winter was a different kind of pain. It hurts the face, the feet and the hands. It was too cold, so I stopped when the temperature was lower than 5 degrees.

Extending the lap

After a few weeks of running the entire 2.5 km, I increased the length to 4.5 km. This was more exhausting than expected. Two more kilometers need a completely new kind of training. I had to force myself not to run too fast at the beginning and to start managing my energy. Again I started slowly and used some walking pauses to get the whole lap done. During the next months the walking pauses decreased more and more, until I didn't need a walking pause on this lap anymore.

The first official run

Nine months later I wanted to challenge myself a little bit and attended my first public run. It was a New Year's Eve run. Pretty cold, but unexpectedly a lot of fun. I was running with my brother, which was a good idea. The atmosphere before and during the run was pretty special and I still like it a lot. I got three challenges done during this run: I reached the finish (1), I wasn't the last one who passed the finish line (2), and I got a new personal record on the 5 km (3).

That was one year and three months ago. I did exactly the same run again last New Year's Eve and got a new personal record, was faster than my brother and reached the finish. Amazing. More success to push myself.

The first 10km

During the last year I increased the number of kilometers and attended some more public runs. In September 2015 I finished my first public 10 km run. Even more success to push me forward.

I didn't increase the number of kilometers fast, just one kilometer at a time. I trained one to three months on a distance and then added some more kilometers. Last spring I started to do a longer run during the weekends, just because I had the time for it. On workdays it doesn't make sense to run more than 7 km, because that would also increase the time used for the lunch break. I try to use just one hour for the lunch run, including the shower and changing clothes.

Got it done

Last November I got it done: I had actually lost 20 kg since I started to run. This was really great. It was a great thing to see a weight of less than 85 kg.

Conclusion

How did running change my life? It changed it a lot. I cannot really go without running for more than two days. I get really nervous then.

Do I feel better since I started running? Because of the sports I am more tired than before, I have sore muscles and I also had two sports accidents. But I'm much more relaxed, I think. Physically it feels bad most of the time, but in a weirdly positive way, because I feel like I've accomplished something.

Even some annoying work gets done more easily. I'm really looking forward to the next lunch break, to run the six or seven kilometers with the dog, or to ride the bike up and down the hills and get my brain cleared.

I'm running in almost every weather, except when it is too slippery because of ice or snow. Fresh snow is fine, mud is fun, rain I don't feel anymore, sunny is even better and heat is challenging. Only the dog doesn't love warm weather.

Crazy? Yes, but I love it.

Do you want to follow me on Strava?

Creating Dummy Data Using GenFu

Two years ago I already wrote about playing around with GenFu, and I still use it now, as mentioned in that post. When I do a demo, or when I write blog posts and articles, I often need dummy data, and I use GenFu to create it. But every time I use it in a talk or a demo, somebody asks me a question about it.

Actually, I had really forgotten about that blog post and decided to write about it again this morning because of the questions I got. Almost accidentally I stumbled upon this "old" post.

I won't create a new one. No worries ;-) Because of the questions I just want to push this topic to the top again:

Playing around with GenFu

GenFu on GitHub

PM> Install-Package GenFu

Read about it, grab it and use it!
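
To give you a quick idea of how it works, here is a minimal sketch; the Person class is just an assumed demo type:

using GenFu;

public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Email { get; set; }
}

// a single instance filled with plausible dummy data
var person = A.New<Person>();

// or a whole list of 25 persons
var people = A.ListOf<Person>(25);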

It is one of the most time saving tools ever :)

A generic logger factory facade for classic ASP.NET

ASP.NET Core already has this feature: there is an ILoggerFactory to create a logger. You are able to inject the ILoggerFactory into your components (controllers, services, etc.) and to create a named logger out of it. During testing you are able to replace this factory with a mock, to not test the logger as well and to not have an additional dependency to set up.

Recently we had the same requirement in a classic ASP.NET project, where we use Ninject to enable dependency injection and log4net to log all the stuff we do, including all exceptions. One important requirement is a named logger per component.

Creating named loggers

Usually the log4net logger gets created inside the components as a private static instance:

private static readonly ILog _logger = LogManager.GetLogger(typeof(HomeController));

There already is a static factory method to create a named logger. Unfortunately this isn't really testable, so we needed a different solution.

We could create a bunch of named loggers in advance and register them with Ninject, which obviously is not the right solution. We needed something more generic. We figured out two different options:

// would work well
public MyComponent(ILoggerFactory loggerFactory)
{
    _loggerA = loggerFactory.GetLogger(typeof(MyComponent));
    _loggerB = loggerFactory.GetLogger("MyComponent");
    _loggerC = loggerFactory.GetLogger<MyComponent>();
}
// even more elegant
public MyComponent(
    ILoggerFactory<MyComponent> loggerFactoryA,
    ILoggerFactory<MyComponent> loggerFactoryB)
{
    _loggerA = loggerFactoryA.GetLogger();
    _loggerB = loggerFactoryB.GetLogger();
}

We decided to go with the second approach, which is the simpler solution. It needs a dependency injection container that supports open generics, like Ninject, Autofac or LightCore.

Implementing the LoggerFactory

Using Ninject the binding of open generics looks like this:

Bind(typeof(ILoggerFactory<>)).To(typeof(LoggerFactory<>)).InSingletonScope();

This binding creates an instance of LoggerFactory<T> using the requested generic argument. If I request an ILoggerFactory<HomeController>, Ninject creates an instance of LoggerFactory<HomeController>.

We register this as a singleton to reuse the ILog instances, as we would when creating the ILog instance in a private static variable the usual way.

The implementation of the LoggerFactory is pretty easy. We use the generic argument to create the log4net ILog instance:

public interface ILoggerFactory<T>
{
	ILog GetLogger();
}

public class LoggerFactory<T> : ILoggerFactory<T>
{
    private ILog _logger;

    public ILog GetLogger()
    {
        if (_logger == null)
        {
            _logger = LogManager.GetLogger(typeof(T));
        }
        return _logger;
    }
}

We ensure the logger is only created once per factory. Because Ninject creates a new instance of the LoggerFactory per generic argument, the LoggerFactory doesn't need to care about different loggers. It just stores a single specific logger.

Conclusion

Now we are able to create one or more named loggers per component.

What we cannot do with this approach is create individually named loggers using a specific string as the name. A type is needed that gets passed as the generic argument. So every time we need an individually named logger, we need to create a specific type. In our case this is not a big problem.

If you don't like to create types just to get individually named loggers, feel free to implement a non-generic LoggerFactory with a generic GetLogger method as well as a GetLogger method that accepts strings as logger names.
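
Such a non-generic factory could roughly look like this; this sketch is mine and not part of the project:

using System.Collections.Concurrent;
using log4net;

public interface ILoggerFactory
{
    ILog GetLogger<T>();
    ILog GetLogger(string name);
}

public class LoggerFactory : ILoggerFactory
{
    // cache the loggers per name, like the singleton binding does per type
    private readonly ConcurrentDictionary<string, ILog> _loggers =
        new ConcurrentDictionary<string, ILog>();

    public ILog GetLogger<T>()
    {
        return GetLogger(typeof(T).FullName);
    }

    public ILog GetLogger(string name)
    {
        return _loggers.GetOrAdd(name, n => LogManager.GetLogger(n));
    }
}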

Creating a signature pad using Canvas and ASP.​NET Core Razor Pages

In one of our projects, we needed to add the possibility to add signatures to PDF documents. A technician fills out a checklist online, and a responsible person and the technician need to sign the checklist afterwards. The signatures then get embedded into a generated PDF document together with the results of the checklist. The signatures must be created on a web UI running on an iPad Pro.

It was pretty clear that we needed to use the HTML5 canvas element and capture the pointer movements. Fortunately we stumbled upon a pretty cool library on GitHub, created by Szymon Nowak from Poland: the super awesome Signature Pad, written in TypeScript and available as an NPM and Yarn package. It is also possible to use a CDN to load Signature Pad.

Use Signature Pad

Using Signature Pad is really easy and works well without any configuration. Let me quickly show you how it works:

To play around with it, I created a new ASP.NET Core Razor Pages web application using the dotnet CLI:

dotnet new razor -n SignaturePad -o SignaturePad

I added a new Razor page called Signature and added it to the menu in the _Layout.cshtml. I created a simple form and placed some elements in it:

<form method="POST"><p><canvas width="500" height="400" id="signature" 
                style="border:1px solid black"></canvas><br><button type="button" id="accept" 
                class="btn btn-primary">Accept signature</button><button type="submit" id="save" 
                class="btn btn-primary">Save</button><br><img width="500" height="400" id="savetarget" 
             style="border:1px solid black"><br><input type="text" asp-for="@Model.SignatureDataUrl"> </p></form>

The form posts the content to the current URL, which is the same Razor page, but handled by a different HTTP method handler. We will have a look at it later on.

The canvas is the most important element. This is the area where the signature gets drawn. I added a border to make the pad boundaries visible on the screen. I added a button to accept the signature. This means we lock the canvas and write the image data to the input field, which is added as the last element. I also added a second button to submit the form. The image element is just there to validate the signature and is not really needed, but I was curious how it looks in an image tag.

This is not the nicest HTML code but works for a quick test.

Right after the form I added a scripts section to render the JavaScript at the end of the page. To get it running quickly, I use jQuery to access the HTML elements. I also copied the signature_pad.min.js into the project, instead of using the CDN version:

@section Scripts{
    <script src="~/js/signature_pad.min.js"></script>
    <script>
        $(function () {

            var canvas = document.querySelector('#signature');
            var pad = new SignaturePad(canvas);

            $('#accept').click(function(){

                var data = pad.toDataURL();

                $('#savetarget').attr('src', data);
                $('#SignatureDataUrl').val(data);
                pad.off();
            
            });
                    
        });
    </script>
}

As you can see, creating the Signature Pad is simply done by creating a new instance of SignaturePad and passing the canvas as an argument. On a click on the accept button, I start working with the pad. The function toDataURL() generates an image data URL that can be directly used as an image source, like I do in the next line. After that I store the result as the value of the input field to send it to the server. In production this should be a hidden field. At the end I switch the Signature Pad off to lock the canvas, so the user cannot manipulate the signature anymore.

Handling the image data URL with C#

The image data URL looks like this:

data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAfQAAAGQCAYA...

The part after the comma is the base64 encoded image. The data before the comma describes the image type and the encoding. I send the complete data URL to the server, where we need to decode the string.

public void OnPost()
{
    if (String.IsNullOrWhiteSpace(SignatureDataUrl)) return;

    var base64Signature = SignatureDataUrl.Split(",")[1];            
    var binarySignature = Convert.FromBase64String(base64Signature);

    System.IO.File.WriteAllBytes("Signature.png", binarySignature);
}

On the page model we need to create a new method OnPost() to handle the HTTP POST method. Inside, we first check whether the bound property has a value or not. Then we split the string by comma and convert the base64 part to a byte array.

With this byte array we can do whatever needs to be done. In the current project I store the image directly in the PDF; in this demo I just store the data in an image file on the hard drive.

Conclusion

As mentioned, this is just a quick demo with some ugly code. But the rough idea could be used to build it properly in Angular or React. To learn more about Signature Pad, visit the repository: https://github.com/szimek/signature_pad

This example also shows what is possible with HTML5 these days. I really like the possibilities of HTML5 and the HTML5 APIs used with JavaScript.

Hope this helps :-)

Four times in a row

One year later, on July 1st, I got the email from the Global MVP Administrator: I received the MVP award for the fourth time in a row :)

I'm pretty proud of and honored by that, and I'm really happy to be part of the great MVP community for one more year. I'm also looking forward to the Global MVP Summit next year, to meet all the other MVPs from around the world.

Still not really a fan-boy...!?

I'm also proud of being an MVP because I never called myself a Microsoft fan-boy. Sometimes I even criticize tools and platforms built by Microsoft (I feel like a bad boy). But I like most of the development tools built by Microsoft, I like to use those tools and frameworks, and I really like the new and open Microsoft and the way it now supports more than its own technologies and platforms. I like using VS Code, TypeScript and Webpack to create Node.js applications. I like VS Code and .NET Core on Linux to build applications on a different platform than Windows. I also like to play around with UWP apps on Windows for IoT on a Raspberry Pi.

There are many more possibilities, many more platforms and many more customers to reach using the current Microsoft development stack. And it is really fun to play with it, to use it in real projects, to write about it in .NET magazines and in this blog, and to talk about it in the user groups and at conferences.

In the last year of being an MVP, I also learned that it is kinda fun to contribute to Microsoft's open source projects, to be a part of those projects and to see my own work in them. If you like open source as well, contribute to the open source projects. Make the projects better, make the documentation better.

I also need to say Thanks

But I wouldn't have been honored again without such a great development community. I wouldn't continue to contribute to the community without that positive feedback and without those great people. This is why the biggest "Thank You" goes to the development community :)

And like last year, I also need to say "Thank You" to my great family (my lovely wife and my three kids), who support me in spending so much time contributing to the community. I also need to say thanks to YooApplications AG, my colleagues and my boss for supporting me and allowing me to use part of my working time to contribute to the community.

Configuring HTTPS in ASP.NET Core 2.1

Finally HTTPS gets into ASP.NET Core. It was there before, back in 1.1, but it was kinda tricky to configure. It was available in 2.0, but not configured by default. Now it is part of the default configuration and pretty much visible and present to the developers who create a new ASP.NET Core 2.1 project.

So the title of this blog post is pretty misleading, because you don't need to configure HTTPS; it already is configured. So let's have a look at how it is set up and how it can be customized. First, create a new ASP.NET Core 2.1 web application.

Did you already install the latest .NET Core SDK? If not, go to https://dot.net/ to download and install the latest version for your platform.

Open a console and cd to your favorite location to play around with new projects. It is C:\git\aspnet\ in my case.

mkdir HttpSecureWeb && cd HttpSecureWeb
dotnet new mvc -n HttpSecureWeb -o HttpSecureWeb
dotnet run

These commands create and run a new application called HttpSecureWeb. And you will see HTTPS for the first time in the console output of a newly created ASP.NET Core 2.1 application:

There are two different URLs Kestrel is listening on: https://localhost:5001 and http://localhost:5000.

If you go to the Configure method in the Startup.cs, there are some new middlewares used to prepare this web application to use HTTPS:

In the Production and Staging environment mode there is this middleware:

app.UseHsts();

This enables HSTS (HTTP Strict Transport Security), which helps to avoid man-in-the-middle attacks. It tells the browser to only talk HTTPS to this specific host for a specific time range. If something tries to downgrade the connection before the time range ends, something is wrong with the page. (More about HSTS)
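
The HSTS behavior can be tweaked via options in the ConfigureServices method. A small sketch, with values I chose just for illustration:

services.AddHsts(options =>
{
    // controls the Strict-Transport-Security header sent to the browser
    options.MaxAge = TimeSpan.FromDays(60);
    options.IncludeSubDomains = true;
});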

The next new middleware redirects all requests without HTTPS to use the HTTPS version:

app.UseHttpsRedirection();

If you call http://localhost:5000, you get redirected immediately to https://localhost:5001. This makes sense if you want to enforce HTTPS.
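
The redirection middleware is configurable as well, e.g. to set the redirect status code and the HTTPS port. A sketch based on the ports used in this post:

services.AddHttpsRedirection(options =>
{
    options.RedirectStatusCode = StatusCodes.Status307TemporaryRedirect;
    options.HttpsPort = 5001;
});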

So from the ASP.NET Core perspective everything is done to run the web application using HTTPS. Unfortunately the certificate is missing. For production mode you need to buy a valid trusted certificate and install it in the Windows certificate store. For development mode you are able to create a development certificate using Visual Studio 2017 or the .NET CLI. VS 2017 creates a certificate for you automatically.

Using the .NET CLI tool "dev-certs" you are able to manage your development certificates: exporting them, cleaning all development certificates, trusting the current one and so on. Just type the following command to get more detailed information:

dotnet dev-certs https --help

On my machine I trusted the development certificate, to not get the ugly error screen in the browser about an untrusted certificate and an insecure connection every time I want to debug an ASP.NET Core application. This works quite well:

dotnet dev-certs https --trust

This command trusts the development certificate by adding it to the certificate store, or to the keychain on Mac.

On Windows you should use the certificate store to register HTTPS certificates. This is the most secure way on Windows machines. But I also like the idea of storing a password protected certificate directly in the web folder or somewhere on the web server. This makes it pretty easy to deploy the application to different platforms, because Linux and Mac use different ways to store certificates. Fortunately there is a way in ASP.NET Core to create an HTTPS connection using a certificate file stored on the hard drive. ASP.NET Core is completely customizable; if you want to replace the default certificate handling, feel free to do it.

To change the default handling, open the Program.cs and take a quick look at the code, especially at the method CreateWebHostBuilder:

public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
    .UseStartup<Startup>();

This method creates the default WebHostBuilder. It has a lot of stuff preconfigured, which works great in most scenarios. But it is possible to override all of the default settings here and to replace them with custom configurations. We need to tell the Kestrel web server which hosts and ports it needs to listen on, and we are able to configure the ListenOptions for specific ports. In these ListenOptions we can use HTTPS and pass in the certificate file and a password for that file:

public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseKestrel(options =>
        {
            options.Listen(IPAddress.Loopback, 5000);
            options.Listen(IPAddress.Loopback, 5001, listenOptions =>
            {
                listenOptions.UseHttps("certificate.pfx", "topsecret");
            });
        })
        .UseStartup<Startup>();

Usually we would read these values from a configuration file or environment variables, instead of hardcoding them.
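
A sketch of how that could look, reading the path and the password from the configuration; the configuration keys are assumptions of mine and could e.g. be filled via user secrets or environment variables:

public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseKestrel((context, options) =>
        {
            // "Certificate:Path" and "Certificate:Password" are made-up keys
            var certPath = context.Configuration["Certificate:Path"];
            var certPassword = context.Configuration["Certificate:Password"];

            options.Listen(IPAddress.Loopback, 5000);
            options.Listen(IPAddress.Loopback, 5001, listenOptions =>
            {
                listenOptions.UseHttps(certPath, certPassword);
            });
        })
        .UseStartup<Startup>();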

Be sure the certificate is password protected using a long password or, even better, a passphrase. Also be sure not to store the password or the passphrase in a configuration file. In development mode you should use the user secrets to store such secret data, and in production mode the Azure Key Vault could be an option.

Conclusion

I hope this helps you to get a rough overview of the usage of HTTPS in ASP.NET Core. This is not really a deep dive, but it tries to explain what the new middlewares are good for and how to configure HTTPS for different platforms.

BTW: I just saw in the blog post about the HTTPS improvements and HSTS in ASP.NET Core that there is a way to store the HTTPS configuration in the launchSettings.json. This is an easy way to pass environment variables to the application on startup. The samples also show how to add the certificate password to this settings file. Please never ever do this! A file is easily shared to a source code repository or in some other way, so the password inside gets shared as well. Please use different mechanisms to set passwords in an application, like the already mentioned user secrets or the Azure Key Vault.


Live streaming ideas

With this post, I'd like to share some ideas about two live streaming shows with you. It would be cool to get some feedback, especially from the German speaking readers. The first idea is about a German speaking .NET Developer Community Standup, and the second one is about a live coding stream (English or German), both hosted on Google Hangouts.

A German speaking .NET Developer Community Standup

Since the beginning of the ASP.NET Community Standup, I have watched this show more or less regularly. I think I missed only two or three shows. Because of the different time zone it is almost impossible to watch the live stream. Anyway, I really like the format of that show.

For a few years now, the number of user group attendees has been decreasing. In my user group sometimes only two or three attendees show up, even if we have a lot more registrations via Meetup. We (Olivier Giss and I) have kinda fun hosting the user group, but it is also hard to put much effort into it for just a handful of loyal attendees. For a while now we have recorded the sessions using Skype for Business or Google Hangouts and pushed them to YouTube. This gives some more folks the chance to see the talks. We thought a lot about the reasons and tried to change some things to get more attendees, but that didn't really work.

This is the reason why I have been thinking out loud about a .NET Developer Community Standup for the German speaking region (Germany, Austria and Switzerland) for months.

I'd like to find two more people to join the team to host the show. It would be cool to have a person from Austria as well as one from Switzerland. Since I'm a Swiss MVP, I could also take over the Swiss part on their behalf ;-) In that case I would like to have another person from Germany. One host per country would be cool.

Three hosts is a nice number, and it wouldn't be necessary for all hosts to be available every time we do a live stream. Anyone interested in joining the team?

To keep it simple I'd also use Google Hangouts to stream the show, and it is not necessary to have high-end streaming equipment. A good headset and a good internet connection should be enough.

In the show I would like to go through some interesting community and technology news, talk about some random stuff, and also invite special guests who can show us things they did or who would like to talk about specific topics. This should be a relaxed show about interesting technology and community topics. I'd also like to give community leads the chance to talk about their work and their events.

What do you think about that? Are you interested?

If yes, I would set up a GitHub repo to collect ideas and topics to talk about.

Live Coding via Live Stream on Google Hangouts

Another idea is inspired by Jeff Fritz's live stream on Twitch called "Fritz and Friends". The recorded streams are published to YouTube afterwards. I really like this live stream, even if it's a completely different kind of video to watch. Jeff is permanently in discussion with the users in the chat while working on his projects. This is kinda weird and makes the show a little nervous, but it is also really interesting. The really cool thing is that he accepts pull requests from his audience and discusses their changes with the audience while working on his project.

I would do such a live stream as well; there are a few projects I would like to work on:

  • LightCore 2.0
    • An alternative DI container for .NET and .NET Core projects
    • Almost done, but needs to be finalized.
    • Maybe you folks want to add more features or some optimizations
  • Working on the GraphQL middleware for ASP.NET Core
  • Working on health checks for ASP.NET and ASP.NET Core
    • Including a health check application provided in the same way IdentityServer is provided to ASP.NET projects: mainly as a single but extendable library and an optional UI to visualize the health of the connected services.
  • Working on a developer community platform like the portal Microsoft planned to release last year?
    • Unfortunately Microsoft retired that project. It would make more sense anyway if this project were built and hosted by the community itself.
    • So this would be a great way to create such a developer community platform.

Maybe it also makes sense to invite a special guest to talk about specific topics while working on the project, e.g. inviting Dominick Baier to implement authentication in the developer community platform.

What if I do the same thing? Are you interested? And what would be the best language for that kind of live stream?

If you are interested, I would also set up a GitHub repo to collect ideas and topics to talk about, and I would set up additional repos per project.

What do you think?

Do you like these ideas? Do you have any other ideas? Please drop me a comment and share your thoughts :-)

New Blog Series: Customizing ASP.​NET Core


With this post I want to introduce a new blog series about things you can or maybe need to customize in ASP.NET Core. Initially this series will contain ten different topics. Maybe later I'll write some more posts about that.

The initial topics are based on my talk about Customizing ASP.NET Core. I did this talk several times in German and English. I did the talk on the .NET Conf 2018 as well.

Unfortunately on the .NET Conf the talk started with pretty bad audio for some reason. The first five minutes can be moved directly to the trash IMHO. I also could only show 7 out of 10 demos, even though I had managed to fit all the demos into 45 minutes the day before. I'm almost sure the audio problem wasn't on my side. Via the router I disconnected almost all devices from the internet during the hour I was presenting, and everything went well before the presentation when we did the latest tech check.

Anyway, after five minutes the audio got a lot better and the audience was able to follow the rest of the presentation.

For this series I'm going to follow the same order as in that presentation, which is the order from bottom to top: from the server configuration parts, over Web API, up to the MVC topics.

Initial series topics

Additional series topics

  • Customizing ASP.NET Core Part 11: Hosting
  • Customizing ASP.NET Core Part 12: InputFormatters
  • Customizing ASP.NET Core Part 13: ViewComponents

Do you want to see that talk?

If you are interested in this talk about Customizing ASP.NET Core, feel free to drop me a comment, a message via Twitter or an email. I'm able to do it remotely via Skype or Skype for Business, or on site if the travel costs are covered somehow. For free at community events, like meetups or user group meetings, and fairly paid at commercial events.

Discover more possible talks on Sessionize: https://sessionize.com/juergengutsch

Customizing ASP.​NET Core Part 01: Logging


In this first part of the new blog series about customizing ASP.NET Core, I will show you how to customize the logging. The default logging only writes to the console or to the debug window. This is quite good for most cases, but maybe you need to log to a sink like a file or a database, or maybe you want to extend the logger with additional information. In those cases you need to know how to change the default logging.

The series topics

Configure logging

In previous versions of ASP.NET Core (pre 2.0) the logging was configured in the Startup.cs. Since 2.0 the Startup.cs was simplified and a lot of configurations were moved to a default WebHostBuilder, which is called in the Program.cs. The logging was also moved to the default WebHostBuilder:

public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)                
            .UseStartup<Startup>();
}

In ASP.NET Core you are able to override and customize almost everything, including the logging. The IWebHostBuilder has a lot of extension methods to override the default behavior. To override the default settings for the logging we need to choose the ConfigureLogging method. The next snippet shows exactly the same logging as it is configured inside the CreateDefaultBuilder() method:

WebHost.CreateDefaultBuilder(args)	
    .ConfigureLogging((hostingContext, logging) =>
    {
        logging.AddConfiguration(hostingContext.Configuration.GetSection("Logging"));
        logging.AddConsole();
        logging.AddDebug();
    })                
    .UseStartup<Startup>();

This method needs a lambda that gets a WebHostBuilderContext containing the hosting environment and an ILoggingBuilder to configure the logging.

Create a custom logger

To demonstrate a custom logger, I created a small useless logger that is able to colorize log entries with a specific log level in the console. This so-called ColoredConsoleLogger will be added and created using a LoggerProvider we also need to write ourselves. To specify the color and the log level to colorize, we need to add a configuration class. In the next snippet all three parts (Logger, LoggerProvider and Configuration) are shown:

public class ColoredConsoleLoggerConfiguration
{
    public LogLevel LogLevel { get; set; } = LogLevel.Warning;
    public int EventId { get; set; } = 0;
    public ConsoleColor Color { get; set; } = ConsoleColor.Yellow;
}

public class ColoredConsoleLoggerProvider : ILoggerProvider
{
    private readonly ColoredConsoleLoggerConfiguration _config;
    private readonly ConcurrentDictionary<string, ColoredConsoleLogger> _loggers = new ConcurrentDictionary<string, ColoredConsoleLogger>();

    public ColoredConsoleLoggerProvider(ColoredConsoleLoggerConfiguration config)
    {
        _config = config;
    }

    public ILogger CreateLogger(string categoryName)
    {
        return _loggers.GetOrAdd(categoryName, name => new ColoredConsoleLogger(name, _config));
    }

    public void Dispose()
    {
        _loggers.Clear();
    }
}

public class ColoredConsoleLogger : ILogger
{
    private static readonly object _lock = new object();
    private readonly string _name;
    private readonly ColoredConsoleLoggerConfiguration _config;

    public ColoredConsoleLogger(string name, ColoredConsoleLoggerConfiguration config)
    {
        _name = name;
        _config = config;
    }

    public IDisposable BeginScope<TState>(TState state)
    {
        return null;
    }

    public bool IsEnabled(LogLevel logLevel)
    {
        return logLevel == _config.LogLevel;
    }

    public void Log<TState>(LogLevel logLevel, EventId eventId, TState state, Exception exception, Func<TState, Exception, string> formatter)
    {
        if (!IsEnabled(logLevel))
        {
            return;
        }

        lock (_lock)
        {
            if (_config.EventId == 0 || _config.EventId == eventId.Id)
            {
                var color = Console.ForegroundColor;
                Console.ForegroundColor = _config.Color;
                Console.WriteLine($"{logLevel.ToString()} - {eventId.Id} - {_name} - {formatter(state, exception)}");
                Console.ForegroundColor = color;
            }
        }
    }
}

We need to lock the actual console output, because otherwise we would get race conditions where log entries get colored with the wrong color, since the console itself is not thread-safe.

If this is done we can start to plug in the new logger to the configuration:

logging.ClearProviders();

var config = new ColoredConsoleLoggerConfiguration
{
    LogLevel = LogLevel.Information,
    Color = ConsoleColor.Red
};
logging.AddProvider(new ColoredConsoleLoggerProvider(config));

If needed, you are able to clear all the previously added logger providers first. Then we call AddProvider to add a new instance of our ColoredConsoleLoggerProvider with the specific settings. We could also add some more instances of the provider with different settings, as shown below.
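A sketch of what that could look like, reusing the types from above: the first provider instance colors warnings yellow, the second one colors errors red.

logging.ClearProviders();

logging.AddProvider(new ColoredConsoleLoggerProvider(
    new ColoredConsoleLoggerConfiguration
    {
        LogLevel = LogLevel.Warning,
        Color = ConsoleColor.Yellow
    }));

logging.AddProvider(new ColoredConsoleLoggerProvider(
    new ColoredConsoleLoggerConfiguration
    {
        LogLevel = LogLevel.Error,
        Color = ConsoleColor.Red
    }));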

This shows how to handle different log levels in different ways. You can use this to send emails on hard errors, to log debug messages to a different log sink than regular informational messages, and so on.

In many cases it doesn't make sense to write a custom logger because there are already many good third party loggers, like elmah, log4net and NLog. In the next section I'm going to show you how to use NLog in ASP.NET Core.

Plug-in an existing Third-Party logger provider

NLog was one of the very first loggers available as a .NET Standard library and usable in ASP.NET Core. NLog also already provides a logger provider to easily plug it into ASP.NET Core.

The next snippet shows a typical NLog.Config that defines two different sinks to log all messages in one log file and custom messages only into another file:

<?xml version="1.0" encoding="utf-8" ?><nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      autoReload="true"
      internalLogLevel="Warn"
      internalLogFile="C:\git\dotnetconf\001-logging\internal-nlog.txt"><!-- Load the ASP.NET Core plugin --><extensions><add assembly="NLog.Web.AspNetCore"/></extensions><!-- the targets to write to --><targets><!-- write logs to file --><target xsi:type="File" name="allfile" fileName="C:\git\dotnetconf\001-logging\nlog-all-${shortdate}.log"
                 layout="${longdate}|${event-properties:item=EventId.Id}|${logger}|${uppercase:${level}}|${message} ${exception}" /><!-- another file log, only own logs. Uses some ASP.NET core renderers --><target xsi:type="File" name="ownFile-web" fileName="C:\git\dotnetconf\001-logging\nlog-own-${shortdate}.log"
             layout="${longdate}|${event-properties:item=EventId.Id}|${logger}|${uppercase:${level}}|  ${message} ${exception}|url: ${aspnet-request-url}|action: ${aspnet-mvc-action}" /><!-- write to the void aka just remove --><target xsi:type="Null" name="blackhole" /></targets><!-- rules to map from logger name to target --><rules><!--All logs, including from Microsoft--><logger name="*" minlevel="Trace" writeTo="allfile" /><!--Skip Microsoft logs and so log only own logs--><logger name="Microsoft.*" minlevel="Trace" writeTo="blackhole" final="true" /><logger name="*" minlevel="Trace" writeTo="ownFile-web" /></rules></nlog>

We then need to add the NLog ASP.NET Core package from NuGet:

dotnet add package NLog.Web.AspNetCore

(Be sure you are in the project directory before you execute that command)

Now you only need to add NLog in the ConfigureLogging method in the Program.cs:

hostingContext.HostingEnvironment.ConfigureNLog("NLog.Config");
logging.AddProvider(new NLogLoggerProvider());

The first line configures NLog to use the previously created NLog.Config and the second line adds the NLogLoggerProvider to the list of logging providers. Here you can add as many logger providers as you need. The full wiring could look like the sketch below.
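Put together, the ConfigureLogging call in the Program.cs could look like this sketch (NLogLoggerProvider lives in the NLog.Extensions.Logging namespace, which the NLog.Web.AspNetCore package brings in):

WebHost.CreateDefaultBuilder(args)
    .ConfigureLogging((hostingContext, logging) =>
    {
        // configure NLog using the NLog.Config file and register its provider
        hostingContext.HostingEnvironment.ConfigureNLog("NLog.Config");
        logging.AddProvider(new NLogLoggerProvider());
    })
    .UseStartup<Startup>();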

Conclusion

The good thing about hiding the basic configuration is that it keeps the newly scaffolded projects clean and the actual start as simple as possible. The developer is able to focus on the actual features. But the more the application grows, the more important logging becomes. The default logging configuration is easy and works like a charm, but in production you need a persisted log to see errors from the past. So you need to add custom logging or a more flexible logger like NLog or log4net.

To learn more about ASP.NET Core configuration have a look into the next part of the series: Customizing ASP.NET Core Part 02: Configuration.

Customizing ASP.​NET Core Part 02: Configuration


This second part of the blog series about customizing ASP.NET Core is about the application configuration, how to use it and how to customize the configuration to use different ways to configure your app.

The series topics

Configure the configuration

Like the logging, since ASP.NET Core 2.0 the configuration is also hidden in the default configuration of the WebHostBuilder and is not part of the Startup.cs anymore. This is done for the same reason: to keep the Startup clean and simple:

public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)                
            .UseStartup<Startup>();
}

Fortunately you are also able to override the default settings to customize the configuration in the way you need it.

When you create a new ASP.NET Core project you already have an appsettings.json and an appsettings.Development.json configured. You can and should use these configuration files to configure your app. You should because this is the pre-configured way and most ASP.NET Core developers will look for an appsettings.json to configure the application. This is absolutely fine and works pretty well.

But maybe you already have an existing XML configuration or want to share a YAML configuration file across different kinds of applications. This could also make sense. Sometimes it also makes sense to read configuration values out of a database.

The next snippet shows the hidden default configuration that reads the appsettings.json files:

WebHost.CreateDefaultBuilder(args)	
    .ConfigureAppConfiguration((builderContext, config) =>
    {
        var env = builderContext.HostingEnvironment;

        config.SetBasePath(env.ContentRootPath);
        config.AddJsonFile("appsettings.json", optional: false, reloadOnChange: true);
        config.AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true, reloadOnChange: true);
        config.AddEnvironmentVariables();
    })
    .UseStartup<Startup>();

This configuration also sets the base path of the application and adds the configuration via environment variables. The method ConfigureAppConfiguration accepts a lambda that gets a WebHostBuilderContext and a ConfigurationBuilder passed in.

Whenever you customize the application configuration you should add the configuration via environment variables as the last step. The order of the configuration matters: later added configuration providers override the previously added configurations. Be sure the environment variables always override the file-based configurations. This way you ensure that settings configured on Azure Web Apps via the Application Settings UI, which get passed to the application as environment variables, take effect.

The IConfigurationBuilder has a lot of extension methods to add more configurations like XML or INI configuration files, in-memory configurations and so on. You can also find a lot more configuration providers provided by the community to read in YAML files, database values and a lot more. Some of the built-in sources are shown below. In this demo I'm going to show you how to read INI files in.
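Just to give an idea, two of the built-in sources could be added inside ConfigureAppConfiguration like this (a small sketch; the file name and the in-memory values are made up for this example):

config.AddXmlFile("appsettings.xml", optional: true, reloadOnChange: true);
config.AddInMemoryCollection(new Dictionary<string, string>
{
    ["AppSettings:Foo"] = "42"
});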

Typed configurations

Before trying to read the INI files it makes sense to show how to use typed configuration instead of reading the configuration via the IConfiguration key by key.
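For comparison, reading key by key could look like this (a sketch; Configuration is assumed to be an injected IConfiguration, and the "AppSettings" keys match the section used below):

var foo = Configuration.GetValue<int>("AppSettings:Foo");
var bar = Configuration["AppSettings:Bar"];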

To read a typed configuration you need to define the type to configure. I usually create a class called AppSettings like this:

public class AppSettings
{
    public int Foo { get; set; }
    public string Bar { get; set; }
}

This class can then be filled with a specific configuration section inside the method ConfigureServices in the Startup.cs:

services.Configure<AppSettings>(Configuration.GetSection("AppSettings"));

This way the typed configuration also gets registered as a service in the dependency injection container and can be used everywhere in the application. You are able to create different configuration types per configuration section. In most cases one section should be fine, but maybe it makes sense to divide the settings into different sections.

This configuration can then be used via dependency injection in every part of your application. The next snippet shows how to use the configuration in an MVC controller:

public class HomeController : Controller
{
    private readonly AppSettings _options;

    public HomeController(IOptions<AppSettings> options)
    {
        _options = options.Value;
    }
}

The IOptions<AppSettings> is a wrapper around our AppSettings type and the property Value contains the actual instance of the AppSettings including the values from the configuration file.

To try that out, the appsettings.json needs to have the AppSettings section configured, otherwise the values are null or not set:

{"Logging": {"LogLevel": {"Default": "Warning"
    }
  },"AllowedHosts": "*","AppSettings": {"Foo": 123,"Bar": "Bar"
  }
}

Configuration using INI files

To also use INI files to configure the application we need to add the INI configuration inside the method ConfigureAppConfiguration in the Program.cs:

config.AddIniFile("appsettings.ini", optional: false, reloadOnChange: true);
config.AddIniFile($"appsettings.{env.EnvironmentName}.ini", optional: true, reloadOnChange: true);

This code loads the INI files the same way as the JSON configuration files. The first line is a required configuration and the second one an optional configuration depending on the current runtime environment.

The INI file could look like this:

[AppSettings]
Bar="FooBar"

This file also contains a section called AppSettings and a property called Bar. Earlier I wrote that the order of the configuration matters. If you add the two lines to configure via INI files after the configuration via JSON files, the INI files will override the settings from the JSON files. The property Bar gets overridden with "FooBar" and the property Foo stays the same. The values out of the INI file will also be available via the previously created AppSettings class, as the short sketch below shows.
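To make the override behavior concrete, this is what the bound AppSettings instance would contain with both files loaded in that order (assuming the appsettings.json shown earlier):

var settings = options.Value; // options is the injected IOptions<AppSettings> from above
// settings.Foo == 123      -> from appsettings.json, untouched by the INI file
// settings.Bar == "FooBar" -> overridden by appsettings.ini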

Every other configuration provider will work the same way. Let's see what a configuration provider looks like.

Configuration Providers

A configuration provider is an implementation of an IConfigurationProvider that gets created by a configuration source, which is an implementation of an IConfigurationSource. The configuration provider then reads the data in from somewhere and provides it via a dictionary.

To add a custom or third-party configuration provider to ASP.NET Core you need to call the method Add on the ConfigurationBuilder and pass the configuration source in:

WebHost.CreateDefaultBuilder(args)	
    .ConfigureAppConfiguration((builderContext, config) =>
    {
        var env = builderContext.HostingEnvironment;

        config.SetBasePath(env.ContentRootPath);
        config.AddJsonFile("appsettings.json", optional: false, reloadOnChange: true);
        config.AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true, reloadOnChange: true);
        // add new configuration source
        config.Add(new MyCustomConfigurationSource
        {
            // SourceConfig = ... (configure whatever the source needs)
            Optional = false,
            ReloadOnChange = true
        });
        config.AddEnvironmentVariables();
    })
    .UseStartup<Startup>();

Usually you would create an extension method to add the configuration source more easily:

config.AddMyCustomSource("source", optional: false, reloadOnChange: true);
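Such an extension method is not shown in this post's demo code, but it could look roughly like this (a hypothetical sketch; the property names on MyCustomConfigurationSource are assumptions):

public static class MyCustomConfigurationSourceExtensions
{
    public static IConfigurationBuilder AddMyCustomSource(
        this IConfigurationBuilder builder,
        string source,                 // hypothetical source identifier
        bool optional = false,
        bool reloadOnChange = false)
    {
        return builder.Add(new MyCustomConfigurationSource
        {
            SourceConfig = source,     // assumed property of the custom source
            Optional = optional,
            ReloadOnChange = reloadOnChange
        });
    }
}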

A really detailed and concrete example of how to create a custom configuration provider was written by fellow MVP Andrew Lock.

Conclusion

In most cases it is not needed to add a different configuration provider or to create your own, but it's good to know how to change it in case you need it. Also, using typed configuration is a nice way to read the settings. In classic ASP.NET we used a manually created façade to read the application settings in a typed way. Now this is automatically done by just providing a class. This class gets automatically filled and provided via dependency injection.

To learn more about ASP.NET Core dependency injection have a look into the next part of the series: Customizing ASP.NET Core Part 03: Dependency Injection.

Customizing ASP.​NET Core Part 03: Dependency Injection


In the third part we'll take a look into the ASP.NET Core dependency injection and how to customize it to use a different dependency injection container if needed.

The series topics

Why use a different dependency injection container?

In most projects you don't really need to use a different dependency injection container. The DI implementation in ASP.NET Core supports the main basic features and works well and pretty fast. Anyway, some other DI containers support some interesting features you may want to use in your application.

  • Maybe you'd like to create an application that supports modules as lightweight dependencies.
    • E.g. modules you want to put into a specific directory so that they get automatically registered in your application.
    • This could be done with Ninject.
  • Maybe you want to configure the services in a configuration file outside the application, in an XML or JSON file instead of in C# only.
    • This is a common feature in various DI containers, but not yet supported in ASP.NET Core.
  • Maybe you don't want to have an immutable DI container, because you want to add services at runtime.
    • This is also a common feature in some DI containers.

A look at the ConfigureServices Method

Create a new ASP.NET Core project and open the Startup.cs, and you will find the method to configure the services, which looks like this:

// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
    services.Configure<CookiePolicyOptions>(options =>
    {
        // This lambda determines whether user consent for non-essential cookies is needed for a given request.
        options.CheckConsentNeeded = context => true;
        options.MinimumSameSitePolicy = SameSiteMode.None;
    });

    services.AddTransient<IService, MyService>();

    services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
}

This method gets the IServiceCollection, which is already filled with a bunch of services needed by ASP.NET Core. These services were added by the hosting services and parts of ASP.NET Core that got executed before the method ConfigureServices is called.

Inside the method some more services get added. First a configuration class that contains cookie policy options is added to the service collection. In this sample I also add a custom service called MyService that implements the IService interface. After that the method AddMvc() adds another bunch of services needed by the MVC framework. At this point we have around 140 services registered to the IServiceCollection. But the service collection isn't the actual dependency injection container.

The actual DI container is wrapped in the so-called service provider, which will be created out of the service collection. The IServiceCollection has an extension method registered to create an IServiceProvider out of the service collection:

IServiceProvider provider = services.BuildServiceProvider();

The ServiceProvider then contains the immutable container that cannot be changed at runtime. With the default method ConfigureServices the IServiceProvider gets created in the background after this method was called, but it is possible to change the method a little bit:

public IServiceProvider ConfigureServices(IServiceCollection services)
{
    services.Configure<CookiePolicyOptions>(options =>
    {
        // This lambda determines whether user consent for non-essential cookies is needed for a given request.
        options.CheckConsentNeeded = context => true;
        options.MinimumSameSitePolicy = SameSiteMode.None;
    });
    services.AddTransient<IService, MyService>(); // custom service
    services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
    return services.BuildServiceProvider();
}

I changed the return type to IServiceProvider and return the ServiceProvider created with the method BuildServiceProvider(). This change will still work in ASP.NET Core.

Use a different ServiceProvider

To change to a different or custom DI container you need to replace the default implementation of the IServiceProvider with a different one. Additionally you need to find a way to move the already registered services to the new container.

The next code sample uses Autofac as a third-party container. I use Autofac in this snippet because it makes it easy to see what is happening here:

public IServiceProvider ConfigureServices(IServiceCollection services)
{
    services.Configure<CookiePolicyOptions>(options =>
    {
        // This lambda determines whether user consent for non-essential cookies is needed for a given request.
        options.CheckConsentNeeded = context => true;
        options.MinimumSameSitePolicy = SameSiteMode.None;
    });

    //services.AddTransient<IService, MyService>();

    services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);

    // create a Autofac container builder
    var builder = new ContainerBuilder();

    // read service collection to Autofac
    builder.Populate(services);

    // use and configure Autofac
    builder.RegisterType<MyService>().As<IService>();

    // build the Autofac container
    ApplicationContainer = builder.Build();

    // creating the IServiceProvider out of the Autofac container
    return new AutofacServiceProvider(ApplicationContainer);
}

// IContainer instance in the Startup class 
public IContainer ApplicationContainer { get; private set; }

Autofac also works with a kind of service collection inside the ContainerBuilder, and it creates the actual container out of the ContainerBuilder. To get the registered services out of the IServiceCollection into the ContainerBuilder, Autofac uses the Populate() method. This copies all the existing services to the Autofac container.

Our custom service MyService now gets registered using the Autofac way.

After that, the container gets built and stored in a property of type IContainer. In the last line of the method ConfigureServices we create an AutofacServiceProvider and pass in the IContainer. This is the IServiceProvider we need to return to use Autofac within our application.
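Consuming a service doesn't change at all, no matter which container does the work underneath. A minimal sketch, reusing the IService from the snippet above:

public class HomeController : Controller
{
    private readonly IService _service;

    // IService is resolved through the AutofacServiceProvider at runtime
    public HomeController(IService service)
    {
        _service = service;
    }
}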

UPDATE: Introducing Scrutor

You don't always need to replace the existing .NET Core DI container to get nice features. In the beginning I mentioned the auto registration of services. This can also be done with a nice NuGet package called Scrutor by Kristian Hellang (https://kristian.hellang.com/). Scrutor extends the IServiceCollection to automatically register services to the .NET Core DI container.

"Assembly scanning and decoration extensions for Microsoft.Extensions.DependencyInjection" https://github.com/khellang/Scrutor

Andrew Lock published a pretty detailed blog post about Scrutor. It doesn't make sense to repeat that here. Read that awesome post and learn more about it: Using Scrutor to automatically register your services with the ASP.NET Core DI container
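Just to give a rough idea of what Scrutor's assembly scanning looks like, here is a small sketch based on the documented Scrutor API (the naming convention is an assumption for this example):

services.Scan(scan => scan
    // scan the assembly that contains the Startup class
    .FromAssemblyOf<Startup>()
    // pick all classes whose name ends with "Service" (assumed convention)
    .AddClasses(classes => classes.Where(type => type.Name.EndsWith("Service")))
    // register each class as all the interfaces it implements
    .AsImplementedInterfaces()
    .WithTransientLifetime());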

Conclusion

Using this approach you are able to use any .NET Standard compatible DI container to replace the existing one. If the container of your choice doesn't provide a ServiceProvider, create your own one that implements IServiceProvider and uses the DI container inside. If the container of your choice doesn't provide a method to populate the registered services into the container, create your own method: loop over the registered services and add them to the other container.

Actually that last step sounds easy, but it can be a hard task, because you need to translate all the possible IServiceCollection registrations into registrations of the other container. The complexity of that task depends on the implementation details of the other container. A rough sketch of the idea follows below.
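A very rough sketch of such a manual populate loop (myContainer and its Register method are hypothetical; a real implementation also has to handle factories, instances and open generics):

foreach (var descriptor in services)
{
    if (descriptor.ImplementationType != null)
    {
        // hypothetical container API: map service type, implementation type and lifetime
        myContainer.Register(
            descriptor.ServiceType,
            descriptor.ImplementationType,
            descriptor.Lifetime);
    }
    // descriptor.ImplementationFactory and descriptor.ImplementationInstance
    // would need separate translations
}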

Anyway, you have the choice to use any DI container which is compatible to the .NET Standard. You have the choice to change a lot of the default implementations in ASP.NET Core.

The same goes for the default HTTPS behavior on Windows. To learn more about that, please read the next post: Customizing ASP.NET Core Part 04: HTTPS.
