ProgressNEXT 2019 – A Developer’s Perspective

Last month (May 7th – 9th 2019), I had the opportunity to attend ProgressNEXT in Orlando, FL. The opportunity to attend was presented to me by my good friend Sam Basu (@samidip), Developer Advocate at Progress Software (@ProgressSW). For a long time, my experience with Progress Software has been focused on the user interface developer tools offered under the Telerik brand name. After some additional research, I learned that Progress Software has a broad range of tools and platforms that, from a developer perspective, present additional opportunities to deliver solutions to our end users and customers.

 

The team at Progress Software is awesome. I was fortunate to meet Courtney Ferrucci and Danielle Sutherby, who helped me with the logistics of getting to ProgressNEXT. Their attention to detail is impressive, especially given that they were key participants in planning ProgressNEXT for over 500 attendees. They truly made me feel welcome throughout the event and were wonderful hosts.

 

Upon arriving, I was greeted by the excellent team operating the registration desk. Registration was flawless. My registration was located, badge printed, and swag bag presented in what felt like under a minute. Once registration was completed, I strolled over to the evening reception where attendees were presented with a wonderful selection of food and drinks. There was also a live band playing great music which was perfect for the evening and a live alligator welcoming us to Florida.


The first day of ProgressNEXT began with a great opening session. Loren Jarrett (@LorenJarrett), Chief Marketing Officer, welcomed all of the attendees and built excitement for all of the value we were about to receive from the keynote speakers and conference sessions.


The next speaker was the CEO of Progress, Yogesh Gupta. He gave a wonderful presentation on Modern Application/Systems Architecture and very eloquently demonstrated how various tools from Progress can provide value when considering/designing these types of solutions.


Once the general session ended, it was time to get into the details of the various technologies that were either a part of or could be leveraged within the Progress ecosystem.

 

The first technical session for me was titled “Getting Started with NativeScript”. Given my background in web development, you would think that I would have naturally transitioned from Angular/TypeScript to NativeScript for mobile development, but that was not my chosen path. So I decided to attend this session to get a better understanding of what NativeScript is all about. Rob Lauer (@RobLauer) was the presenter, and he did a wonderful job sharing the basics of NativeScript and how it matches up with other similar frameworks. We also built a simple NativeScript app and learned how NativeScript fits into the Kinvey Platform.

 

So what is this Kinvey Platform? Well, I had heard Kinvey mentioned in the couple of sessions I had already attended and did not know anything about it, so I thought it would be a great idea to attend the “Getting Started with Kinvey” session. Tara Manicsic (@tzmanics) was the presenter, and she did a wonderful job introducing the Kinvey Platform and how, as developers, we can leverage features such as storage, authentication, and serverless functions. The platform provides the core functions that just about every modern application requires. It was pretty easy to use, and I will definitely try it out on some future projects to acquire some hands-on experience. Building on my initial introduction to Kinvey, I attended a session led by Ignacio Fuentes (@ignacioafuentes), Progress Sales Engineer, that covered how to improve a mobile app’s offline experience using Kinvey. It was a great session and demonstrated how to leverage Kinvey’s technology to provide offline data storage and synchronization.

 

If you know me, then you are aware that one of my many technology passions is Xamarin and Xamarin.Forms. ProgressNEXT hit a home run by having Sam Basu (@samidip), Progress Developer Advocate, deliver a presentation on Xamarin.Forms. He touched on the standard target platforms (iOS, Android, and UWP) but also covered other options for utilizing Xamarin.Forms: macOS, Tizen, and web. Sam, as always, did a great job covering the latest features and opportunities for leveraging Xamarin.Forms to create cross-platform applications.


 

Following the Xamarin.Forms presentation, I attended a session led by Carl Bergenhem (@carlbergenhem), Product Manager, Web Components @KendoUI and @KendoReact, that covered what’s coming in R2 2019 of Telerik and KendoUI. There are a lot of great things in this release, which is now available and which you can check out here: https://www.progress.com/kendo-ui. From my perspective, the most exciting updates were for UI for Xamarin (of course) and UI for Blazor. It is amazing how rapidly the Progress team is evolving the toolset, especially given that Blazor was not yet a generally available product (at the time this was published) while UI for Blazor already is.


 

After seeing all the “goodness” planned for KendoUI, I was fortunate to attend a session led by T.J. VanToll (@tjvantoll), Principal Developer Advocate at Progress. His session was titled “One Project, One Language, Three Apps.” In this session, he focused on NativeScript and React Native and how they both can be used to build web, native iOS, and native Android applications. The demos were great, and he also covered when and when not to use each tool.


 

The final ProgressNEXT technical session I attended was led by Carl Bergenhem (@carlbergenhem), Product Manager, Web Components @KendoUI and @KendoReact. In this session, Carl covered Blazor, the client-side .NET framework that runs in any browser. (Yes, C# executing in the browser!) Blazor utilizes the Mono .NET runtime, implemented in WebAssembly, to execute normal .NET assemblies in a browser. Carl did an awesome job introducing Blazor and showing how a .NET developer can leverage the technology when building applications.


 

I had a great time at my first ProgressNEXT conference. The Progress team did a wonderful job with all aspects of this event. The venue, food, entertainment, scheduling, general and technical sessions were excellent. As a developer who was only familiar with the UI/UX tools Progress creates, attending ProgressNEXT has greatly expanded my perspective and understanding of the Progress ecosystem. I highly recommend attending ProgressNEXT and hope to see you at ProgressNEXT20, June 14-17, 2020 in Boston, MA.


Creating a “Smarter” Application with Xamarin, Azure Cognitive Services, and ML.NET

Data science, machine learning, and artificial intelligence are hot topics in the software engineering industry today.  The products data scientists and software engineers are creating with these tools are very innovative and, in some verticals, create true competitive advantages in the industries in which they are applied.  This next generation of software products is changing how software is utilized to solve common problems in society and, with proper application, is adding value in areas that have been difficult to reach.

As a software engineer, I would consider myself somewhat late to the machine learning and artificial intelligence game.  So I thought it would be a great idea to explore these tools by utilizing them with some of the technology that I’m passionate about.  The idea for this exercise is pretty simple.  I will attempt to create a “smarter” cross-platform RSS reader using Xamarin, Azure Cognitive Services, and ML.NET.  Why those tools?  Well, I’m a .NET developer who is really into Xamarin.  Xamarin is not my technology “daily driver,” but I’m excited about the value it brings to the table for creating cross-platform applications.  I’m also an Azure fan, and since ML.NET is a machine learning framework for .NET, it seems these tools will give me the best shot at creating something useful.

Here is how I plan to utilize the tools:

 

  • Xamarin – Used to create a cross-platform application that runs on Android, iOS, and Windows desktop (UWP application)

  • Azure Cognitive Services – Used for searching the web for RSS feeds and applying topics (tags) to subscribed RSS feeds

  • ML.NET – Used to create/update models used to recommend new RSS feeds based on prior application usage

 

I’m sure as I continue to build out features there will be other tool and technology considerations.  The name of the reader is ViewPointReader; it is open source and tracked here: ViewPointReader.  I plan to create blog posts on the progress at each significant phase of the project.  Your comments on the project and code as it progresses are welcome and greatly appreciated.

-Richard

Upgrading an Existing Web API Project to ASP.NET Core 2.1 Preview 1

In late February 2018, .NET Core 2.1 Preview 1, ASP.NET Core 2.1 Preview 1, and Entity Framework Core 2.1 Preview 1 were announced and released to the public. According to the documentation, several themes are influencing the feature set of the final release:

· Faster Build Performance

· Close gaps in ASP.NET Core and EF Core

· Improve compatibility with .NET Framework

· GDPR and Security

· Microservices and Azure

· More capable Engineering System

All of the themes listed are important to the evolution, maturity, and adoption of the platform. Starting a new project with .NET/ASP.NET Core 2.1 Preview 1 is a simple process and follows the familiar workflow of creating a new project in Visual Studio. Upgrading an existing .NET/ASP.NET Core 1.x/2.0 project to .NET/ASP.NET Core 2.1 Preview 1 is a little more involved, and this walkthrough aims to assist you with the process by providing a breakdown of my experience.

When I present on the topic of web development, I typically use a reference application I created to demonstrate technologies, tools, and software engineering techniques to the attendees. As these topics evolve, so does the reference application. In preparation for the upcoming release of .NET/ASP.NET Core 2.1, I spent some time recently upgrading a Web API project that is part of the reference application.

The first step in upgrading the Web API project was to download and install the .NET Core 2.1 Preview 1 SDK. You can download it here: .NET Core 2.1 Preview 1 SDK. It is a simple and painless install, but there are two things to note:

  • The SDK installs side-by-side with other versions of the SDK, but your default SDK will be the latest version, which is the preview.  If you have problems with other projects and the preview SDK, you can force a project to use a specific version of the SDK using a global.json file as documented here.
  • The 64-bit installer version is 128MB. Not a huge download but make sure you have a decent connection to the internet.
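As a sketch of that first option, a global.json file placed at the root of a project pins it to a specific SDK version (the version number below is illustrative; substitute whichever SDK version your project needs):

```json
{
  "sdk": {
    "version": "2.0.3"
  }
}
```

With this file in place, running dotnet commands from that directory uses the pinned SDK instead of the latest installed version.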

Once I had the .NET Core 2.1 Preview 1 SDK installed, I began the process of updating the Web API project. This process is manual and involved editing several files to get everything updated. The first file I edited was the .csproj file. I changed the TargetFramework element to reference .NET Core 2.1:

Original element: <TargetFramework>netcoreapp2.0</TargetFramework>

Updated element: <TargetFramework>netcoreapp2.1</TargetFramework>

Next, I updated all Microsoft.AspNetCore and Microsoft.Extensions package reference elements to version “2.1.0-preview1-final”. Below is an example of one of the updated package reference elements:

Original element: <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.0.3" />

Updated element: <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="2.1.0-preview1-final" />

Since I was modifying the .csproj file, I thought it was a great opportunity to perform some cleanup. (This step is not required.) This Web API has existed since the .NET Core 1.0 RC days, and at that time the practice was to list each of the needed packages from Microsoft.AspNetCore and Microsoft.Extensions individually. The result in this project: a .csproj file with 15+ package reference elements. There is nothing technically wrong with this, but later versions of ASP.NET Core offer a way to reduce the number of package reference elements in your .csproj file.

ASP.NET Core 2.0 introduced the metapackage Microsoft.AspNetCore.All. This package includes all packages supported by the ASP.NET Core team, all Entity Framework Core packages, and any third-party package dependencies of ASP.NET Core and Entity Framework Core. All features of ASP.NET Core and Entity Framework Core are included. In addition, the default project template used this package. This approach takes advantage of the .NET Runtime Store, which contains all the runtime assets needed to run ASP.NET Core 2.x applications. Since all the assets are part of the .NET Runtime Store and are precompiled, no ASP.NET Core NuGet packages are deployed with the application (except in the case of self-contained deployments), and application startup time is reduced. If there is a concern about overall deployment size, a package trimming process can be used to remove any packages that are not used, so they are not deployed. More information on package trimming can be located here: Package Trimming

ASP.NET Core 2.1 introduces a new metapackage, Microsoft.AspNetCore.App. The concept is the same as Microsoft.AspNetCore.All, except the new metapackage reduces the number of dependencies on packages not owned/supported by the ASP.NET and .NET teams. Only the packages deemed necessary to ensure major framework features are included. More information on the new metapackage Microsoft.AspNetCore.App can be located here: Microsoft.AspNetCore.App

To reduce the number of package reference elements in the .csproj file, I removed all package reference elements for Microsoft.AspNetCore.* and Microsoft.Extensions.* and added a single package reference element for the new metapackage. Here is the new entry:

<PackageReference Include="Microsoft.AspNetCore.App" Version="2.1.0-preview1-final" />

Here is the final .csproj file:

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp2.1</TargetFramework>
    <PreserveCompilationContext>true</PreserveCompilationContext>
    <AssemblyName>sregister_webapi</AssemblyName>
    <OutputType>Exe</OutputType>
    <PackageId>sregister_webapi</PackageId>    
    <PackageTargetFallback>$(PackageTargetFallback);dotnet5.6;portable-net45+win8</PackageTargetFallback>
    <TrimUnusedDependencies>true</TrimUnusedDependencies>
  </PropertyGroup>

  <ItemGroup>
    <None Update="wwwroot\**\*">
      <CopyToPublishDirectory>PreserveNewest</CopyToPublishDirectory>
    </None>
  </ItemGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.App" Version="2.1.0-preview1-final" />    
    <PackageReference Include="IdentityServer4.AccessTokenValidation" Version="2.4.0" />
    <PackageReference Include="Dapper" Version="1.50.2" />
    <PackageReference Include="Microsoft.Packaging.Tools.Trimming" Version="1.1.0-preview1-25818-01" />
  </ItemGroup>

  <ItemGroup>
    <ProjectReference Include="..\sregister_core\sregister_core.csproj" />
    <ProjectReference Include="..\sregister_infrastructure\sregister_infrastructure.csproj" />
  </ItemGroup>

</Project>

The next set of changes were made in the program.cs file. Here is the before and after:

BEFORE:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Builder;

namespace sregister_webapi
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var host = new WebHostBuilder()
                .UseKestrel()
                .UseContentRoot(Directory.GetCurrentDirectory())
                .UseIISIntegration()
                .UseStartup<Startup>()
                .Build();

            host.Run();
        }
    }
}

AFTER:

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

namespace sregister_webapi
{
    public class Program
    {
        public static void Main(string[] args)
        {
            CreateWebHostBuilder(args).Build().Run();
        }
               
        private static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
            WebHost.CreateDefaultBuilder(args)
                .UseStartup<Startup>();
    }
}

As I stated earlier, this Web API project was originally created in the .NET/ASP.NET Core 1.0 RC days, and things are done differently now. As you can see, the recommended approach is to have Main call a method that returns an IWebHostBuilder configured to know which class to use during startup. Main calls the Build and Run methods (in that order) on the returned IWebHostBuilder, which starts the application.

The last step in upgrading the Web API to .NET/ASP.NET Core 2.1 Preview was to modify the Startup.cs file. The main changes were:

In the ConfigureServices method:

· Changed the call to add MVC services to services.AddMvcCore().SetCompatibilityVersion(CompatibilityVersion.Version_2_1)

In the Configure method:

· Added app.UseHsts() – More information on this can be found here: HSTS

· Added app.UseHttpsRedirection() – This redirects HTTP traffic to HTTPS

Both changes in the Configure method were made to take advantage of the features of ASP.NET Core 2.1 Preview to secure web applications using HTTPS during development and production. A detailed explanation of the features can be found here: ASP.NET Core 2.1.0-preview1: Improvements for using HTTPS

Here is the before and after of Startup.cs:

BEFORE

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using sregister_core.Interfaces;
using sregister_infrastructure.Repositorities;
using sregister_core.Models;

namespace sregister_webapi
{
    public class Startup
    {
        public Startup(IHostingEnvironment env)
        {
            var builder = new ConfigurationBuilder()
                .SetBasePath(env.ContentRootPath)
                .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
                .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
                .AddEnvironmentVariables();
            Configuration = builder.Build();
        }

        public IConfigurationRoot Configuration { get; }

        // This method gets called by the runtime. Use this method to add services to the container.
        public void ConfigureServices(IServiceCollection services)
        {
            // Add framework services.
            services.AddMvc();

            services.AddCors();
            services.AddOptions();
            services.Configure<SpeakerRegisterOptions>(Configuration.GetSection("Options"));

            // adding custom services
            services.AddTransient<ISpeakerRepository, SpeakerRepository>();
            services.AddTransient<IConferenceRepository, ConferenceRepository>();
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
        {
            loggerFactory.AddConsole(Configuration.GetSection("Logging"));
            loggerFactory.AddDebug();

            app.UseCors(builder => builder.AllowAnyOrigin().AllowAnyHeader().AllowAnyMethod());

            app.UseIdentityServerAuthentication(new IdentityServerAuthenticationOptions
            {
                Authority = "http://localhost:9440",
                AllowedScopes = { "sregisterAPI" },
                RequireHttpsMetadata = false
            });
            
            app.UseMvc();
        }
    }
}

AFTER

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using sregister_core.Interfaces;
using sregister_infrastructure.Repositorities;
using sregister_core.Models;
using System;

namespace sregister_webapi
{
    public class Startup
    {
        public Startup(IHostingEnvironment env)
        {
            var builder = new ConfigurationBuilder()
                .SetBasePath(env.ContentRootPath)
                .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
                .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
                .AddEnvironmentVariables();
            Configuration = builder.Build();
        }

        public IConfigurationRoot Configuration { get; }

        // This method gets called by the runtime. Use this method to add services to the container.
        public void ConfigureServices(IServiceCollection services)
        {
            // Add framework services.
            services.AddMvcCore()
                .SetCompatibilityVersion(CompatibilityVersion.Version_2_1)
                .AddAuthorization()
                .AddJsonFormatters()
                .AddCors();

            services.AddHsts(options => {
                options.MaxAge = TimeSpan.FromDays(90);
                options.IncludeSubDomains = false;
                options.Preload = false;
            });

            services.AddAuthentication("Bearer").AddIdentityServerAuthentication(options => {
                options.Authority = "http://localhost:9440";
                options.RequireHttpsMetadata = false;
                options.ApiName = "sregisterAPI";
                options.LegacyAudienceValidation = true;    //temporary until token service is updated
            });

            services.AddOptions();
            services.Configure<SpeakerRegisterOptions>(Configuration.GetSection("Options"));
            
            // adding custom services
            services.AddTransient<ISpeakerRepository, SpeakerRepository>();
            services.AddTransient<IConferenceRepository, ConferenceRepository>();
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
        {
            loggerFactory.AddConsole(Configuration.GetSection("Logging"));
            loggerFactory.AddDebug();

            app.UseCors(builder => builder.AllowAnyOrigin().AllowAnyHeader().AllowAnyMethod());                   

            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }
            else
            {
                app.UseHsts();
            }

            app.UseHttpsRedirection();
            app.UseAuthentication();
            app.UseMvc();
        }
    }
}

 

Finally, I was able to build and run the upgraded Web API that is now using .NET/ASP.NET Core 2.1 Preview 1. I haven’t explored any of the new features of .NET/ASP.NET Core 2.1 Preview 1 (except for the HTTPS enhancements) but I’m looking forward to leveraging what the new platform has to offer.

If you want to explore more about ASP.NET Core 2.1 Preview 1, here is information provided by the ASP.NET team regarding the announcement of the release, new features, and upgrade steps: ASP.NET Core 2.1 Preview 1 Now Available

Keep Right’in Code!

Richard

Cross Platform Application Development Fundamentals with Xamarin

Xamarin, founded in 2011 by Miguel de Icaza and now owned by Microsoft, is an open source platform that allows developers to create native iOS, Android, and Windows UWP applications using the .NET framework and C#.  With Xamarin, developers have a powerful tool that can be used to deliver cross-platform applications from a single codebase.  As with any tool, success depends on knowing how best to use it, and this applies to Xamarin.  Xamarin is very powerful, but it requires some fundamental knowledge to maximize the value of its design goals.

Xamarin utilizes a number of components to create cross-platform applications:

  • C# Language - Allows developers to use a modern language with advanced features

  • Mono .NET framework - A cross platform version of the .NET framework

  • Visual Studio for Windows and Mac - An advanced IDE that developers use to create, build, and deploy software

  • Compiler - Produces an executable for each of the target platforms

For developers currently utilizing the C# and the .NET framework, these items should be familiar.

To access platform-specific features, Xamarin exposes the native SDKs via a familiar C# syntax.  Here is a breakdown by platform:

  • iOS – Xamarin.iOS exposes the CocoaTouch SDK as namespaces that can be referenced from C#

  • Android – Xamarin.Android exposes the Android SDK as namespaces that can be referenced from C#

  • Windows – Windows UWP applications can only be built using Visual Studio for Windows by selecting the corresponding project type.  There are namespaces available that can be referenced from C#

 

Building cross-platform applications with Xamarin follows the familiar process of code, compile, and deploy.  During the compilation step, the C# code is converted into a native application for each target platform, but the output is quite different for each:

  • iOS – The C# code is ahead-of-time (AOT) compiled to ARM assembly language.  The .NET framework is included, minus any unused classes.

  • Android – The C# code is compiled to Intermediate Language (IL) and packaged with the Mono runtime, which uses Just-In-Time (JIT) compilation.  As with iOS, the .NET framework is included, minus any unused classes.  The Android application runs side-by-side with Java/ART (Android runtime) and interacts with native types using JNI (Java Native Interface).

  • Windows UWP – The C# code is compiled to Intermediate Language (IL) and executed by the built-in runtime.  Windows UWP applications do not use Xamarin directly.

Despite these differences, Xamarin provides a seamless experience for writing C# code that can be reused across all target platforms.

Visual Studio for Windows or Visual Studio for Mac can be used for Xamarin development.  Which IDE you choose will determine which platforms you can target with your application.  Here is a quick breakdown by IDE:

Visual Studio for Windows

  • Xamarin.iOS (requires a Mac)

  • Xamarin.Android

  • Xamarin.UWP

Visual Studio for Mac

  • Xamarin.iOS

  • Xamarin.Android

 

If your plan is to target iOS, Android, and UWP, Visual Studio for Windows is your choice for IDE but you must have access to a Mac running macOS for build and licensing. Here is a link for more information on system requirements for Xamarin.

In order to deliver cross-platform applications with Xamarin that inherently support code reuse, the application architecture is a key component of success.  Following object-oriented programming principles such as encapsulation, separation of concerns/responsibilities, and polymorphism contributes positively to code reuse.  Structuring the application in layers that focus on specific concerns (i.e., data layer, data access layer, business logic layer, user interface layer) and utilizing common design patterns such as Model-View-ViewModel (MVVM) and Model-View-Controller (MVC) also support code reuse.  These concepts/patterns, when used appropriately, minimize duplication and complexity, effectively organize the code, and position it to be leveraged for multiple platforms.
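As a rough illustration of how a pattern like MVVM supports reuse, a view model can live entirely in shared code while each platform supplies its own view.  This is only a minimal sketch; the class and property names here are hypothetical, not from any particular project:

```csharp
using System.ComponentModel;
using System.Runtime.CompilerServices;

// Hypothetical shared view model: it references no platform-specific
// types, so the same class can back iOS, Android, and UWP views alike.
public class SpeakerViewModel : INotifyPropertyChanged
{
    private string _name;

    public string Name
    {
        get => _name;
        set
        {
            if (_name == value) return;
            _name = value;
            OnPropertyChanged(); // notify any bound view of the change
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged([CallerMemberName] string propertyName = null) =>
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
}
```

The platform-specific user interface layer binds to Name; only the binding markup differs per platform, while the logic stays shared.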

Sharing code is also a key component of successfully using Xamarin to deliver cross-platform applications.  There are several options for sharing code between target platforms: Shared Projects, Portable Class Libraries (PCL), and .NET Standard Libraries.  Each has unique features, which are summarized below:

Shared Projects

  • Allow code to exist in a common location that can be shared between target platforms

  • Can contain platform-specific code/functionality; compiler directives are used to include/exclude platform-specific functionality for each platform target

  • During the build process, the code in the shared project is included in each of the platform target assemblies (there is no output assembly for the shared project itself)

  • More Shared Project information

Portable Class Libraries (PCL)

  • Allow code to exist in a common location that can be shared between target platforms

  • Cannot contain any platform-specific code

  • Have a profile that describes which features are supported (the broader the profile, the smaller the number of available features)

  • Are referenced by other projects, and there is an output assembly after the build

  • More PCL information

.NET Standard Libraries

  • Allow code to exist in a common location that can be shared between target platforms

  • Cannot contain any platform-specific code

  • Have a larger number of available features than PCLs

  • Have a uniform API for all .NET platforms supported by the version of the library

  • Are referenced by other projects, and there is an output assembly after the build

  • More .NET Standard information
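To illustrate the compiler-directive approach used in Shared Projects, a shared file might select platform-specific behavior at build time.  The __IOS__ and __ANDROID__ symbols are the conditional compilation symbols Xamarin projects typically define; the class itself is a hypothetical sketch:

```csharp
// Hypothetical shared-project file: the same source compiles into each
// platform's assembly, with directives picking the platform-specific path.
public static class PlatformInfo
{
    public static string Name
    {
#if __IOS__
        get { return "iOS"; }
#elif __ANDROID__
        get { return "Android"; }
#else
        get { return "UWP"; } // branch taken when neither mobile symbol is defined
#endif
    }
}
```

Because the selection happens at compile time, each platform assembly contains only its own branch; there is no runtime check.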

Microsoft released .NET Standard 2.0 in late 2017.  In this release, the number of supported APIs increased by over 20,000 from the previous version (1.6) of the .NET Standard.  This increase allows developers to place even more code into reusable libraries.  .NET Standard 2.0 is supported by the following .NET implementations:

  • .NET Core 2.0

  • .NET Framework 4.6.1

  • Mono 5.4

  • Xamarin.iOS 10.14

  • Xamarin.Mac 3.8

  • Xamarin.Android 8.0

  • Universal Windows Platform 10.0.16299

As you can see from the list, if your goal is maximum code reuse and your platform targets are supported by .NET Standard 2.0, a .NET Standard 2.0 library is where you should place all of your reusable code.  For Xamarin, making code reuse central to your software architecture design goals allows you to deliver your mobile application to multiple platforms faster.

When exploring Xamarin, make sure you spend adequate time learning and understanding these fundamental design and project organizational concepts. Understanding and using them are necessary to take full advantage of Xamarin as a tool to effectively and efficiently deliver cross platform applications.

Keep Right’in Code!

Richard

Why Unit Testing?

Throughout my career in the software development industry, unit testing has been a practice that I have known about and somewhat understood the benefits of, but never had a real opportunity to practice.  When you are part of a small development team looking for ways to deliver value to your customers more quickly, skipping unit testing “feels” like you are doing the right thing.  But if your project becomes more successful and more complicated, and your team grows, not having unit tests becomes risky and costly.

Unit testing is a software testing approach by which a unit of work within your source code is tested to determine if it functions properly.  You can think of a unit (unit of work) as the smallest testable part of your code.  A unit test is code a developer creates to test a unit.  It is basically code written to test code.  For example, let’s say some code has a function that accepts an array of numbers and returns the sum of the values.  Several unit tests (code) can be written to test the function by providing arrays of known values as input and comparing the result to the known sum of the values in the input array.  If the sum returned by the function matches the known sum, the unit test passes.  So big deal, the function can correctly calculate the sum of the values in an input array.  What’s the value in creating unit tests for that?
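As a sketch of the example above (the function and test names here are hypothetical), the unit under test and a couple of its unit tests might look like this:

```csharp
public static class MathHelpers
{
    // The unit under test: sums the values in the input array.
    public static int Sum(int[] values)
    {
        int total = 0;
        foreach (var value in values)
            total += value;
        return total;
    }
}

public static class MathHelpersTests
{
    // Each test provides an input with a known sum and verifies the result.
    public static void Sum_KnownValues_ReturnsKnownTotal()
    {
        if (MathHelpers.Sum(new[] { 1, 2, 3 }) != 6)
            throw new System.Exception("Sum_KnownValues_ReturnsKnownTotal failed");
    }

    public static void Sum_EmptyArray_ReturnsZero()
    {
        if (MathHelpers.Sum(new int[0]) != 0)
            throw new System.Exception("Sum_EmptyArray_ReturnsZero failed");
    }
}
```

In practice these tests would be written as [Fact] or [Test] methods under a framework such as xUnit or NUnit, so a test runner can discover and execute them automatically.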

Well, let’s expand our thinking for a moment.  You have an idea that might improve the efficiency of the function by 10x, and it requires that you refactor how the function sums the values of the input array.  Let’s make this a little more interesting and assume you are not the original author of the function and that the output of the function should be the same as before your refactor.  Without unit tests in place, how do you determine if the function is still working correctly after your refactor?  This is the scenario where unit testing adds tremendous value.

There are a couple of items in this scenario that contribute to increased risk and cost.  All non-trivial applications/libraries will have code refactored at some point, which introduces risk.  Without unit testing, the only way to ensure that code continues to function correctly after refactoring is to perform manual testing, and in some scenarios the code that was refactored can only be tested indirectly as a part of a larger operation.  This leads to increased cost because resources (developers/testers) have to be allocated to a manual testing effort, and human resources are the most expensive part of any business.  In addition, this manual testing effort requires time, which delays the delivery of value to the end users.

Returning to our scenario, let’s assume there were unit tests available and the tests were set up to execute as a part of the build process.  After the refactor, you rebuild the code, which kicks off the execution of the unit tests.  Almost instantaneously you will get feedback on your code changes.  Assuming that the unit tests are correctly testing the function (big assumption) and the unit tests are still passing after your refactor, you can be fairly confident that you have not introduced errors.  This does not eliminate the need for all manual testing, but it is safe to conclude that the manual testing efforts can be minimized and that the unit testing has reduced, not eliminated, risk and cost.

Based on my view of the software development world, I would say that most development shops do not actively use unit testing as a tool to mitigate risk and cost.  To the inexperienced, unit testing is overhead in the software development process that can be eliminated to deliver value to users faster.  But what happens when the “overhead” is eliminated for years, the project has become more complicated, and more software developers have been added or experienced developers have departed?  The result: a huge amount of risk to the project when changes are made, increased cost in time to test, and delayed delivery of value to the end users - a mountain of technical debt.  Unit testing, if used properly, is a tool to help teams of all sizes be more efficient and minimize technical debt.  It should not be viewed as “overhead” or a barrier to delivering value to the end users.

If you are developing software using the .NET (Microsoft) technology stack, there are several unit testing frameworks available.  The most popular are the Microsoft Unit Testing Framework for Managed Code (MSTest), NUnit, and xUnit.net.  All get the job done and warrant evaluation if you are considering adding unit testing to your project.

Unit testing is a tool, and like all tools, it can cause problems if not used properly.  But when used properly, unit testing is a powerful tool in the tool chest.  Whether you are starting a new project or supporting a legacy codebase, unit testing should be in your tool belt to help minimize risk and cost while delivering value to your end users.

Keep Right’in Code!
Richard

Why Choose Xamarin?

Today there are many options to choose from when building a mobile application.  What’s clear is that if you do not support both iOS and Android at a minimum, you run the risk of alienating potential users.  Many organizations have mitigated that risk by investing heavily in creating multiple teams that have different skills and use different tools to deliver applications targeted at the various mobile platforms.  For some, that investment may seem like a duplication of effort and potentially presents an opportunity to consolidate and reduce costs.  Again, there are several options to consider when evaluating this opportunity, but if your organization has a current investment in the .NET (Microsoft) framework using C#, Xamarin could be a solution that allows you to leverage that investment to extend the value your organization creates into the mobile space.

Xamarin, founded in 2011 by Miguel de Icaza and now owned by Microsoft, is an open source platform that allows developers to create native iOS, Android, and Windows applications using the .NET framework and C#.  For organizations/teams that currently develop using the .NET framework, this is very familiar territory because Xamarin is fully integrated into Visual Studio and Visual Studio for Mac, which are core development tools in this space.  So if you are looking to leverage existing .NET/C# skills, the Xamarin story starts to become compelling, but there is more....

Developers have a myriad of options when creating software to solve problems.  Without guidance and experience, they can get into trouble really quickly.  Of all the guidance available (and there is plenty), there are two things that I consider paramount for success as a developer: 1) keep things simple, and 2) don’t repeat yourself (DRY, which contributes to #1).  Code reuse should be a top priority when developing production software.

Because the Xamarin platform utilizes the .NET framework, there are some tools, cross-platform capabilities, and code reuse strategies available that enhance the Xamarin story.  For example, C# is used to create applications with the Xamarin platform, and the language provides features (e.g. class inheritance, generics) that can be leveraged with intentional design to achieve code reuse.  Also, the .NET framework supports Shared Projects, Portable Class Libraries (PCL), and .NET Standard Libraries.  Placing core business logic in those projects/libraries allows it to be used in solutions for all .NET framework target platforms, including those supported by Xamarin.  For example, one could have core business calculations or operations in a Shared Project, PCL, or .NET Standard Library and reuse that code for Windows, web, and Xamarin applications.  Here is a quick breakdown of the features/options for each project/library type:

Shared Projects

  • Allows code to exist in a common location that can be shared between target platforms
  • Compiler directives are used to include/exclude platform-specific functionality located in the code
  • During the build process, the code in the shared project is included in each target platform’s assembly (there is no output assembly for a shared project)
  • More Shared Project information

Portable Class Libraries (PCL)

  • Allows code to exist in a common location that can be shared between target platforms
  • PCLs cannot contain any platform-specific code
  • PCLs have a profile that describes which features are supported (the more platform targets selected, the smaller the number of available features)
  • PCLs are referenced by other projects and there is an output assembly after the build
  • More PCL information

.NET Standard Libraries

  • Allows code to exist in a common location that can be shared between target platforms
  • .NET Standard Libraries cannot contain any platform-specific code
  • .NET Standard Libraries have a larger number of available features than PCLs
  • .NET Standard Libraries have a uniform API for all .NET platforms supported by the version of the library
  • .NET Standard Libraries are referenced by other projects and there is an output assembly after the build
  • More .NET Standard information
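For example, the compiler-directive approach in a Shared Project might look like this (a minimal sketch; the DeviceInfo class is hypothetical, and __IOS__, __ANDROID__, and WINDOWS_UWP are the build symbols the respective platform projects define):

```csharp
public static class DeviceInfo
{
    // This single source file is compiled into each platform project;
    // compiler directives select the platform-specific branch at build time
    public static string GetPlatformName()
    {
#if __IOS__
        return "iOS";
#elif __ANDROID__
        return "Android";
#elif WINDOWS_UWP
        return "UWP";
#else
        return "Unknown";   // e.g. a build where no platform symbol is defined
#endif
    }
}
```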

 

Microsoft released .NET Standard 2.0 in late 2017.  In this release, the number of APIs supported increased by over 20,000 from the previous version (1.6) of the .NET Standard.  This increase allows developers to place even more code into reusable libraries.  .NET Standard 2.0 is supported by the following .NET implementations:

  • .NET Core 2.0

  • .NET Framework 4.6.1

  • Mono 5.4

  • Xamarin.iOS 10.14

  • Xamarin.Mac 3.8

  • Xamarin.Android 8.0

  • Universal Windows Platform 10.0.16299

As you can see from the list, if your goal is maximum code reuse and your platform targets are supported by .NET Standard 2.0, a .NET Standard 2.0 Library is where you should place all of your reusable code.  For Xamarin, making code reuse central to your software architecture design allows you to deliver your mobile application to multiple platforms faster, but there is more...

What if you could take code reuse to the user interface?  Well, you can with Xamarin.Forms.  Xamarin.Forms allows developers to build native user interfaces for iOS, Android, and Windows using C# and XAML.  Developers use abstractions of user interface controls to construct the user interface, and at runtime those abstractions are rendered as native user interface elements.  By connecting a Xamarin.Forms user interface with shared backend code, developers can build a fully native iOS, Android, and Windows application from a single code base and, depending on the application and technical design, can achieve over 96% code reuse.
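As an illustration, a single page defined once in shared code renders as native controls on every platform (a sketch only; WelcomePage is a hypothetical page, and this will not compile without the Xamarin.Forms package):

```csharp
using Xamarin.Forms;

public class WelcomePage : ContentPage
{
    public WelcomePage()
    {
        // These control abstractions are rendered as native controls:
        // UILabel/UIButton on iOS, TextView/Button on Android, etc.
        var label = new Label { Text = "Hello from shared UI code!" };
        var button = new Button { Text = "Tap me" };
        button.Clicked += (sender, args) => label.Text = "Tapped!";

        Content = new StackLayout
        {
            Padding = 20,
            Children = { label, button }
        };
    }
}
```

The same WelcomePage class is used by the iOS, Android, and Windows head projects; only the thin platform bootstrapping differs.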

Until now we have covered the advantages of using Xamarin for cross-platform mobile development, but what are the disadvantages?  I have been using Xamarin for about a year.  My largest hurdle has been learning about developing for the target platforms and their requirements.  This is not a disadvantage of Xamarin but an entry cost for anyone new to mobile development, and selecting any tool would require paying this cost.  So, I would say that Xamarin can be used for most mobile applications.  Unless your application requires use of specific platform features or special hardware, Xamarin is a really compelling option for delivering mobile applications on multiple platforms and a potential cost saver if your organization/team already has .NET framework and C# skills.

Keep Right’in Code!

Richard

2017: What a Fantastic Year

In many ways, personally and professionally, 2017 has been a fantastic year for me.  Here are just some of the highlights:

  • Started Charlotte Xamarin Developers meetup group.  As of today, we have 102 members.  I have big plans for this group in 2018
  • Continued leading Modern Devs Charlotte meetup group. As of today, we have 1,007 members.
  • Presented at 13 conferences/events on Angular, .NET/.NET Core, ASP.NET/ASP.NET Core, Xamarin, and Azure
  • Awarded the Telerik/Progress Developer Expert designation
  • Received the Microsoft Most Valuable Professional (MVP) award in the Visual Studio and Development Technologies category
  • Interviewed for the podcast Developer On Fire
  • Visited Alaska for the first time
  • Started a new position as Director of Engineering at SentryOne
  • My son graduated from college with a BS in Computer Science and will start fulltime in 2018 as a Jr. Software Engineer at SentryOne
  • and many more…

As I reflect on this past year, there is no doubt I have been extremely blessed and I am thankful.  I have had some incredible experiences and have met some incredible people.  Many have given me opportunities, helped me, and put their trust and faith in me. I could not have done it without your support.  To you I say “Thank You!”

Looking forward to 2018, I plan to do even more to give back.  The tech community is a real passion of mine.  I find it incredibly exciting to share what I have learned and see others use that knowledge to do great things.

So 2017, I bid you farewell, it has been fantastic.  To 2018, watch out, I am “going hard” to make it even more fantastic than 2017.

 

Keep Right’in Code!

Richard

Implementing a Successful Architecture for your Angular 2 Application Using Modules

The choices made when setting out to build an application can either contribute to the success or failure of a project.  Execution environment/platform, source control management, testing strategies, tools, languages, code organization, and software architecture are all key decisions at the beginning and throughout the development of a project. New frameworks like Angular 2 have little guidance or reference material to help you move down a successful path when setting out to build what would be considered a non-trivial application.

Currently, I am focused on acquiring an in-depth knowledge of Angular 2.  Having worked with Angular 1.x for the past couple of years, I am excited about the value that Angular 2 delivers.  I learn by doing, so I have set out to build a simple web application (Speaker Register) that allows conference speakers to create searchable profiles that are available to conference and meeting planners.  I have decided to use a SPA (Single-Page Application) architecture using Angular 2 and ASP.NET Web API hosted on ASP.NET Core 1.0 and .NET Core 1.0.  The application’s repo is located here: https://github.com/rightincode/speakerregister  Feel free to follow along as the application evolves over the next several weeks.

Below is a high-level diagram of the current software architecture of the Angular 2 code:

SpeakerRegisterArchitecture

Angular 2 applications are intended to be modular by organizing your code into modules.  Modules are blocks of code dedicated to a single purpose and inherently support re-use.  They consolidate components, directives, and pipes (angular concepts) into cohesive blocks of functionality focused on a feature area, workflow, or common collection of utilities (library).  Modules export something of value, for example, a class, function, value, service, and/or component.  The exported items can be imported by other modules. 

An Angular 2 application contains at least one module that is named the root module.  Angular 2 launches an application by bootstrapping the root module.  Additional modules, named feature modules or shared modules can be used to separate concerns and better organize an application.  An Angular 2 module is a class (a Typescript class in the case of the sample application) that is decorated with the @NgModule decorator function.  Let’s take a look at the modules in Speaker Register.

SpeakerRegisterArchitecture-Modules

import { NgModule }      from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { HTTP_PROVIDERS } from '@angular/http';

/* App Root */
import { AppComponent }  from './app.component';
import { PageNotFoundComponent } from './pagenotfound.component';
import './rxjs-operators';

/* Feature Modules */
import { SpeakerModule } from './speaker/speaker.module';
import { ConferenceModule } from './conference/conference.module';

/* Routing */
import { routing, appRouterProviders } from './app.routing';

@NgModule({
    imports: [BrowserModule, ConferenceModule, SpeakerModule, routing],
    declarations: [AppComponent, PageNotFoundComponent],
    providers: [appRouterProviders, HTTP_PROVIDERS],
    bootstrap: [AppComponent]
})
export class AppModule { }

The AppModule is the root module of the Speaker Register application.  The @NgModule decorator applied to the exported class above is where Angular is instructed that AppModule is a module.  The @NgModule decorator function accepts an object that contains metadata (configuration information) describing features of the module.

  • imports: a list of modules whose exported assets (directives, pipes, etc.) are available to templates in this module. In the code above, BrowserModule is imported from the Angular library.  ConferenceModule and SpeakerModule are two other modules (feature modules) that contain specific functionality for those areas of the Speaker Register application.  (routing is beyond the scope of this post but will be covered later)
  • declarations: a list of the components, directives, pipes, etc. that belong to this module. AppComponent (the container component for the application) and PageNotFoundComponent belong to the root module
  • providers: a list of injectable objects that are available in the injector of this module. (providers are beyond the scope of this post but will be covered later)
  • bootstrap: the list of components that should be bootstrapped when this module is bootstrapped.  In the code above, AppModule is our root module, and when it is bootstrapped, AppComponent (the container component for the application) is bootstrapped

When the application starts, AppModule gets bootstrapped (starts executing).  AppModule imports several modules from the Angular libraries, imports two components (AppComponent and PageNotFoundComponent – we will cover those in a later post) that are a part of AppModule, loads two feature modules (SpeakerModule and ConferenceModule), and sets up routing (we will cover routing in a later post).

Speaker Register also has two feature modules named SpeakerModule and ConferenceModule.  Both of these modules export classes that are decorated by the @NgModule function similar to the AppModule.  They contain only what is needed to implement the features of those parts of the application.  This allows for the separation of concerns between modules in the application. For example, the SpeakerModule is only concerned about the speaker functionality and exports an API to modules that import it.  It (SpeakerModule) can be updated independently without a negative impact to other modules as long as the API has not changed after the updates.  Take a look at the SpeakerModule and ConferenceModule in the source code: https://github.com/rightincode/speakerregister

The module in Angular 2 is a very powerful tool to help organize your code and build a codebase that is much easier to maintain.  In Speaker Register, we have made use of the module to create a software architecture designed to separate the concerns of specific areas within the application.  Add this tool (module) to your toolbox.  It will help you create a successful software architecture for your Angular 2 applications.

Keep Right’in Code!

Richard – @rightincode

Using Angular 2 RC5 with Visual Studio 2015 and ASP.NET Core 1.0

Lately, I have been spending time learning Angular 2.  During my study, I have seen many examples of setting up and getting started by using Visual Studio Code, WebStorm, and other excellent IDE’s.  I have also read several articles about setting up and using Angular 2 beta versions with Visual Studio 201x.  Since  I spend most of my time using Visual Studio 2015, this article will show you how to setup Angular 2 RC5 with Visual Studio 2015 and ASP.NET Core 1.0 on .NET Core 1.0.

Before you get started, make sure you have Update 3 for Visual Studio 2015 installed and Preview 2 tooling for Microsoft .NET Core 1.0.  Once you have confirmed your installation, fire up Visual Studio 2015 and select “New Project” from the Start Page.  You will be presented with the dialog below:

New Project

Make sure you have selected the “ASP.NET Core Web Application (.NET Core)” option. Name the project and the solution, and set the location to whatever you would like.  Click “Ok” to continue.  Next you are presented with selecting what type of project template you would like to begin with.  Select “Empty”.  Authentication should be set to “No Authentication” and the “Host in the cloud” option should not be selected.  See the screenshot below:

Empty Template

Click “Ok” to continue.  After VS (Visual Studio) completes the setup you will be presented with the project readme HTML file in the editor.  If you take a look at Solution Explorer, your solution structure should look like the screenshot below:

Solution Explorer - Start

The next step in setting up ASP.NET Core 1.0 to serve Angular 2 RC5 is to configure your application to serve static files.  First, right-click on your web application in solution explorer and select “Manage NuGet Packages”.  In the NuGet Package Manager, enter “Microsoft.AspNetCore.StaticFiles” in the search window.  You will be presented with the screenshot below:

NuGet Package Manager

Make sure you have the latest stable version selected and click “Install”.  You may be asked for permission to update your application as well as to accept licensing terms.  Confirm to complete the installation and close the NuGet Package Manager tab.  Finally, edit the Startup.cs file in your solution.  It should look like the code below:

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

namespace WebApplication1
{
    public class Startup
    {
        // This method gets called by the runtime. Use this method to add services to the container.
        // For more information on how to configure your application, visit http://go.microsoft.com/fwlink/?LinkID=398940
        public void ConfigureServices(IServiceCollection services)
        {
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app)
        {
            app.UseFileServer();
        }
    }
}

For a more in-depth understanding of serving static files with ASP.NET Core 1.0, please visit here.

Now we are ready to begin setting up our environment for Angular 2 RC5.  “Right-click” on your web application project and select “Add” then “New Item”.  You will be presented with the dialog below:

Package-JSON

Navigate to “Client-side” under .NET Core and then select “npm Configuration File” (package.json).  Click Add.  This file is used by NPM (Node Package Manager) to install required modules for our Angular 2 application.  You will be presented with the package.json file loaded in the editor.  Edit the package.json file to look like the code below:

{
  "version": "1.0.0",
  "name": "webapplication1",
  "scripts": {
    "postinstall": "typings install",
    "typings": "typings"
  },
  "dependencies": {
    "@angular/common": "2.0.0-rc.5",
    "@angular/compiler": "2.0.0-rc.5",
    "@angular/core": "2.0.0-rc.5",
    "@angular/forms": "0.3.0",
    "@angular/http": "2.0.0-rc.5",
    "@angular/platform-browser": "2.0.0-rc.5",
    "@angular/platform-browser-dynamic": "2.0.0-rc.5",
    "@angular/router": "3.0.0-rc.1",
    "@angular/router-deprecated": "2.0.0-rc.2",
    "@angular/upgrade": "2.0.0-rc.5",

    "systemjs": "0.19.27",
    "es6-shim": "^0.35.0",
    "reflect-metadata": "^0.1.3",
    "rxjs": "5.0.0-beta.6",
    "zone.js": "^0.6.12",

    "angular2-in-memory-web-api": "0.0.15",
    "jquery": "^3.1.0",
    "bootstrap": "^3.3.6"
  },
  "devDependencies": {
    "typescript": "^1.8.10",
    "gulp": "^3.9.1",
    "path": "^0.12.7",
    "gulp-clean": "^0.3.2",
    "fs": "^0.0.2",
    "gulp-concat": "^2.6.0",
    "gulp-typescript": "^2.13.1",
    "typings": "^0.8.1",
    "gulp-tsc": "^1.1.5"
  }
}

Immediately after saving the package.json file, Visual Studio will begin downloading all the dependencies listed to a folder named “node_modules” in the directory where your web application is located. (If you would like a detailed explanation of these settings, you can go here.)  In addition, you will probably receive an error message stating that npm was not able to resolve Typings dependencies due to a missing “typings.json” file.  Let’s create that file now.

“Right-click” on the web application and select “Add” then “New Item”.  Again select “Client-side” on the far left and then select “JavaScript File”.  Make sure you name the file “typings.json”.  Your screen should look like the screenshot below:

Typings-JSON

Click “Add” and you will be presented with the typings.json file in the editor.  Edit the typings.json file to look like the code below:

{
  "ambientDependencies": {
    "es6-shim": "registry:dt/es6-shim#0.31.2+20160317120654",
    "jasmine": "registry:dt/jasmine#2.2.0+20160412134438",
    "node": "registry:dt/node#4.0.0+20160509154515"
  }
}

Save the typings.json file.  If you would like an explanation of these settings, you can go here. Now if you switch to the package.json file and save it again, Visual Studio should complete the installation of the dependent modules without error.

Next we add a TypeScript JSON Configuration File (tsconfig.json).  “Right-click” on the web application in Solution Explorer and select “Add” and then “New Item”.  Select “Client-side” on the left and “TypeScript JSON Configuration File”.  Your screen should look like the screenshot below:

TypeScript-JSON

Select “Add” and the tsconfig.json file will be loaded into the editor.  Edit the tsconfig.json to look like the code below:

{
  "compileOnSave": true,
  "compilerOptions": {
    "target": "es5",
    "module": "commonjs",
    "moduleResolution": "node",
    "sourceMap": true,
    "emitDecoratorMetadata": true,
    "experimentalDecorators": true,
    "removeComments": false,
    "noEmitOnError": true,
    "noImplicitAny": false,
    "outDir": "./wwwroot/scripts"
  },
  "exclude": [
    "node_modules",
    "wwwroot",
    "typings/main",
    "typings/main.d.ts"
  ]
}

Save the file.  If you would like an explanation of these settings, you can go here.

We are going to use SystemJS to load our application and library modules.  To do so, we need to create a configuration file for SystemJS so that it can locate the code we need loaded.

Since ASP.NET Core 1.0 serves static files from the “wwwroot” folder, we are going to place the SystemJS configuration file in a folder named “scripts” under this folder.  “Right-click” on the wwwroot folder and select “Add” then “New Folder”.  Name the folder “scripts”.  Solution Explorer should look like the screenshot below:

Solution Explorer - scripts folder

Now add the SystemJS configuration file to the scripts folder by selecting “Right-click” on the script folder and selecting “Add” then “New Item”.  You will be presented with the dialog below:

SystemJS

Select “Client-side” on the left and “JavaScript File”.  Name the file “systemjs.config.js”.  Click “Add” and the “systemjs.config.js” file will be displayed in the editor.  Edit the “systemjs.config.js” file to look like the code below:

(function (global) {
    // map tells the System loader where to look for things
    var map = {
        'app': 'scripts',
        '@angular': 'libs/@angular',
        'angular2-in-memory-web-api': 'libs/angular2-in-memory-web-api',
        'rxjs': 'libs/rxjs'
    };
    // packages tells the System loader how to load when no filename and/or no extension
    var packages = {
        'app': { main: 'main.js', defaultExtension: 'js' },
        'rxjs': { defaultExtension: 'js' },
        'angular2-in-memory-web-api': { defaultExtension: 'js' }
    };
    var ngPackageNames = [
      'common',
      'compiler',
      'core',
      'forms',
      'http',
      'platform-browser',
      'platform-browser-dynamic',
      'router',
      'router-deprecated',
      'upgrade'
    ];
    // Add package entries for angular packages
    ngPackageNames.forEach(function (pkgName) {
        packages['@angular/' + pkgName] = { main: './bundles/' + pkgName + '.umd.js', defaultExtension: 'js' };
    });
    var config = {
        map: map,
        packages: packages
    }
    System.config(config);
})(this);

If you are interested in the settings in the “systemjs.config.js” file, you can go here.

There is one final configuration step to complete and then we are ready to code our Angular 2 application.  As stated before, ASP.NET Core 1.0 serves static files from the wwwroot folder by default.  As a result, we need to move required library files from the node_modules folder to a location under the wwwroot folder.  In addition, if we would like to perform any debugging of the TypeScript code in our browser development tools, we need to have the original TypeScript files served from the server.  In order to accomplish this, we are going to use a gulp script to handle copying the files to their needed location.

“Right-click” on the web application and select “Add” then “New Item”.  Select “Gulp Configuration File”.  The dialog should look like the screenshot below:

Gulp

Click “Add” and you will be presented with gulpfile.js in the editor.  Edit the gulpfile.js to look like the code below:

/// <binding AfterBuild='clearLibsDestinationFolder, clearAppDestinationFolder, moveToLibs' />
/*
This file is the main entry point for defining Gulp tasks and using Gulp plugins.
Click here to learn more. http://go.microsoft.com/fwlink/?LinkId=518007
*/

var gulp = require('gulp');
var clean = require('gulp-clean');

var libsDestPath = './wwwroot/libs/';
var appDestPath = './wwwroot/app/';

//clear destination folders
gulp.task('clearLibsDestinationFolder',
    function () {
        return gulp.src(libsDestPath)
            .pipe(clean());
    });

gulp.task('clearAppDestinationFolder',
    function () {
        return gulp.src(appDestPath)
            .pipe(clean());
    });

gulp.task('moveToLibs', function () {
    gulp.src([
      'node_modules/es6-shim/es6-shim.min.js',
      'node_modules/systemjs/dist/system-polyfills.js',
      'node_modules/systemjs/dist/system.src.js',
      'node_modules/reflect-metadata/Reflect.js',
      'node_modules/rxjs/bundles/Rx.js',
      'node_modules/zone.js/dist/zone.js',
      'node_modules/jquery/dist/jquery.*js',
      'node_modules/bootstrap/dist/js/bootstrap*.js',

      'node_modules/core-js/client/shim.min.js'

      //'node_modules/systemjs/dist/*.*',
    ]).pipe(gulp.dest('./wwwroot/libs/'));

    gulp.src(['node_modules/@angular/**/*'], { base: 'node_modules/@angular' })
        .pipe(gulp.dest('./wwwroot/libs/@angular'));
    gulp.src(['node_modules/angular2-in-memory-web-api/**/*'], { base: 'node_modules/angular2-in-memory-web-api' })
        .pipe(gulp.dest('./wwwroot/libs/angular2-in-memory-web-api'));
    gulp.src(['node_modules/rxjs/**/*'], { base: 'node_modules/rxjs' })
        .pipe(gulp.dest('./wwwroot/libs/rxjs'));

    gulp.src([
      'node_modules/bootstrap/dist/css/bootstrap.css'
    ]).pipe(gulp.dest('./wwwroot/libs/css'));

    //copy typescript files for debugging purposes - would not deploy to production environment
    gulp.src(['app/**/*']).pipe(gulp.dest('./wwwroot/app'));
});

The gulp file is configured to execute after a successful build of the solution once the Task Runner Explorer is set up.  From the VS menu, select “View”, “Other Windows”, and then “Task Runner Explorer”.  Click the “Refresh” button (top left, next to the application name) and Task Runner Explorer will read the gulp file.  You should see three (3) tasks: clearAppDestinationFolder, clearLibsDestinationFolder, and moveToLibs.  You should also see the number three (3) next to the “After Build” bindings.  Now the gulp script will execute after a successful build.

Okay, finally let’s code our Angular 2 application.

The first step is to add an “index.html” file.  “Right-click” on the wwwroot folder and select “Add” then “New Item” and add an HTML file named “index.html”.  Edit the file to look like the code below:

<!DOCTYPE html>
<html>
<head>
    <title>Angular 2/ASP.NET Core 1.0 QuickStart</title>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="stylesheet" href="libs/css/bootstrap.css">

    <!-- 1. Load libraries -->
    <script src="libs/jquery.min.js"></script>
    <script src="libs/bootstrap.min.js"></script>
    <script src="libs/zone.js"></script>
    <script src="libs/Reflect.js"></script>
    <script src="libs/system.src.js"></script>

    <!-- Polyfill(s) for older browsers -->
    <script src="libs/es6-shim.min.js"></script>
    
    <!-- 2. Configure SystemJS -->
    <script src="scripts/systemjs.config.js"></script>
    <script>
      System.import('app').catch(function(err){ console.error(err); });
    </script>
</head>
<body>
<h1>Hello world from ASP.NET Core 1.0 on .NET Core 1.0!</h1>
    <br/><br/>
    <my-app>Loading...</my-app>
</body>
</html>
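The “scripts/systemjs.config.js” file referenced in the markup above tells SystemJS where to find the Angular and RxJS modules that the gulp tasks copied into wwwroot/libs. A minimal sketch, assuming the folder layout produced by the gulp script (libs/@angular and libs/rxjs) — adjust the map entries if your gulp destinations differ:

```javascript
(function (global) {
    // Where SystemJS should look when a module name is requested
    var map = {
        'app': 'app',                    // our TypeScript (compiled to JS) application code
        '@angular': 'libs/@angular',     // Angular packages copied by the gulp moveToLibs task
        'rxjs': 'libs/rxjs'              // RxJS, also copied by the gulp task
    };

    // How to load a package when no file name is given
    var packages = {
        'app': { main: 'main.js', defaultExtension: 'js' },
        'rxjs': { defaultExtension: 'js' }
    };

    // Each Angular package ships a UMD bundle under its bundles folder
    var ngPackageNames = [
        'common', 'compiler', 'core', 'forms', 'http',
        'platform-browser', 'platform-browser-dynamic', 'router'
    ];
    ngPackageNames.forEach(function (pkgName) {
        packages['@angular/' + pkgName] = {
            main: 'bundles/' + pkgName + '.umd.js',
            defaultExtension: 'js'
        };
    });

    System.config({ map: map, packages: packages });
})(this);
```

With this in place, the `System.import('app')` call in index.html resolves to app/main.js, which bootstraps the application.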

Now let’s add a new folder named “app” that will contain all of the TypeScript code for our Angular application.  “Right-click” on the web application project and select “Add” then “New Folder”.  Name the folder “app”.

The first TypeScript file we will add to the project is “main.ts”.  This is where our Angular 2 application starts up.  “Right-click” on the “app” folder and select “Add” then “New Item”.  Select “Client-side” on the left and in the center select “TypeScript File”.  Name the file “main.ts” and select “Add”.  “main.ts” should now be loaded in the editor and your solution should look similar to the screenshot below:

Main.ts

Edit the main.ts file to look like the code below:

import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';

import { AppModule } from './app.module';

platformBrowserDynamic().bootstrapModule(AppModule);

Visual Studio will flag a few errors but that is okay for now.  They will be resolved when we add the remaining files.  Now let’s add a new TypeScript file to the app folder named “app.module.ts” and edit the file to contain the code below:

import { NgModule }      from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';

import { AppComponent }  from './app.component';

@NgModule({
    imports: [BrowserModule],
    declarations: [AppComponent],
    bootstrap: [AppComponent]
})
export class AppModule { }

Finally, let’s add a third TypeScript file to the app folder named “app.component.ts” and edit the file to contain the code below:

import { Component } from '@angular/core';

@Component({
    selector: 'my-app',
    template: '<h3>Angular 2 RC5 is running here!</h3>'
})

export class AppComponent { }

After adding the third TypeScript file, all of the Visual Studio errors should have been resolved.  Your solution should look similar to the screenshot below:

Final TypeScript

I will publish another blog post with more details about the three (3) TypeScript files we just added.  If you would like more information now, you can go here.

Okay, let’s build the application in Visual Studio.  The build should succeed.  Now launch the application and you should be greeted with the screenshot below:

AppWelcome

We have an Angular 2 RC5 application served up from ASP.NET Core 1.0 on .NET Core 1.0 and built using Visual Studio 2015!  You can find a Visual Studio solution here.

I hope this post is helpful with getting started with Angular 2 development using Visual Studio 2015 and ASP.NET Core 1.0 on .NET Core 1.0.  Let me know if you have any questions.  Keep Right’in Code!

Richard - @rightincode

The Value of a Full-Stack Developer

Several days ago, a colleague of mine gave an excellent talk about Full-Stack developers.  There were several points he made that I thought were interesting:

  • The definition of “Full-Stack” has evolved over the years
  • Based on his definition, maintaining the skills necessary to be a Full-Stack developer is extremely difficult, if not impossible
  • Given the maintenance difficulty, it is probably not worth the effort to do what is necessary to maintain/become a Full-Stack developer

For the most part, I agree with his assessment of the Full-Stack developer role.  As with all ideas and concepts, there are counterpoints, different opinions, and other perspectives.

We agree that the definition of the role of Full-Stack developer has evolved over the years.  Early in my career, “the stack” consisted of very few parts.  If you knew HTML, JavaScript, CSS, a server-side language (PHP/VB.NET/C#), and SQL, and could put it all together to build a web application, you would be considered a Full-Stack developer.  Today, using what is considered a modern approach to building a web application, you would add to the previous list of technologies several client-side JavaScript libraries, additional tools to manage those libraries and other assets (version management/minification/bundling/etc.), new data transport and security tools/techniques, and custom back-end APIs to respond to client-side requests.  If you are planning to support mobile, you just added even more complexity to the stack.  This evolution has significantly increased the complexity of successfully delivering a web application.

Ten years ago, keeping up with the technologies required to successfully build a web application was a fairly easy task.  The pace of change was something one could truly manage within a typical work week and an acceptable work/life balance.  Of course this is a personal preference, but the frame of reference for my statement is a 40-hour work week.  Fast forward to today.  The pace of change is incredible.  As an individual developer, keeping up with every change in the technologies that are a part of “the stack” is impossible.  My colleague and I agree on this point.  One cannot master all the technologies and still be productive.  If you tried, you would spend all your time learning, never building, and a reasonable work/life balance (again, a personal definition) could not be achieved.

So is it worth the effort to become (or attempt to become) and remain a Full-Stack developer?  This is where my colleague and I disagree.  It is my personal belief that striving to become, or to remain, a Full-Stack developer is definitely worth it.  I don’t believe that one can master all the parts of today’s modern technology “stack”, but I do believe the effort makes you much more valuable to an organization seeking solid contributors.  As a hiring manager, I am never impressed by how “deep” your knowledge of a specific technology or part of “the stack” is.  The tools and technologies change constantly.  What I’m looking for is your ability and enthusiasm to solve problems with the appropriate tools and technologies.  Your ability to explore the leading edge of our profession fearlessly and with an open mind is also, in my opinion, more valuable.

The Full-Stack developer may or may not exist today because it is 100% based on your definition.  But it is my genuine belief that, over the course of a career as a professional software developer, broader knowledge maximizes your value and minimizes your risk.  Define what a Full-Stack developer is to you, set your limits on work/life balance, and go capitalize on the incredible career of a software developer.


-Richard

@rightincode