ProgressNEXT 2019 – A Developer’s Perspective

Last month (May 7th – 9th 2019), I had the opportunity to attend ProgressNEXT in Orlando, FL. The opportunity to attend was presented to me by my good friend Sam Basu (@samidip), Developer Advocate at Progress Software (@ProgressSW). For a long time, my experience with Progress Software has been focused on the user interface developer tools offered under the Telerik brand name. After some additional research, I learned that Progress Software has a broad range of tools and platforms that, from a developer perspective, present additional opportunities to deliver solutions to our end users and customers.

 

The team at Progress Software is awesome. I was fortunate to meet Courtney Ferrucci and Danielle Sutherby, who helped me with the logistics of getting to ProgressNEXT. Their attention to detail was impressive, especially given that they were key participants in planning ProgressNEXT for over 500 attendees. They made me feel welcome throughout the event and were wonderful hosts.

 

Upon arriving, I was greeted by the excellent team operating the registration desk. Registration was flawless. My registration was located, badge printed, and swag bag presented in what felt like under a minute. Once registration was completed, I strolled over to the evening reception where attendees were presented with a wonderful selection of food and drinks. There was also a live band playing great music which was perfect for the evening and a live alligator welcoming us to Florida.


The first day of ProgressNEXT began with a great opening session. Loren Jarrett (@LorenJarrett), Chief Marketing Officer, welcomed all of the attendees and built excitement for the value we were about to receive from the keynote speakers and the rest of the conference sessions.


The next speaker was the CEO of Progress, Yogesh Gupta. He gave a wonderful presentation on Modern Application/Systems Architecture and very eloquently demonstrated how various tools from Progress can provide value when considering/designing these types of solutions.


Once the general session ended, it was time to get into the details of the various technologies that were either a part of or could be leveraged within the Progress ecosystem.

 

The first technical session for me was titled “Getting Started with NativeScript”. Given my background in web development, you would think I would have naturally transitioned from Angular/TypeScript to NativeScript for mobile development, but that was not my chosen path. So I decided to attend this session to get a better understanding of what NativeScript was all about. Rob Lauer (@RobLauer) was the presenter and he did a wonderful job sharing the basics of NativeScript and how it compares with other similar frameworks. We also built a simple NativeScript app and learned how NativeScript fits into the Kinvey Platform.

 

So what is this Kinvey Platform? Well, I had heard Kinvey mentioned in a couple of sessions already and did not know anything about it, so I thought it would be a great idea to attend the “Getting Started with Kinvey” session. Tara Manicsic (@tzmanics) was the presenter and she did a wonderful job introducing the Kinvey Platform and how, as developers, we can leverage features such as storage, authentication, and serverless functions. The platform provides those core functions that just about every modern application requires. It was pretty easy to use and I will definitely try it out on some future projects to acquire some hands-on experience. Building on my initial introduction to Kinvey, I attended a session led by Ignacio Fuentes (@ignacioafuentes), Progress Sales Engineer, that covered how to improve a mobile app’s offline experience using Kinvey. It was a great session and demonstrated how to leverage Kinvey’s technology to provide offline data storage and synchronization.

 

If you know me, then you are aware that one of my many technology passions is Xamarin and Xamarin.Forms. ProgressNEXT hit a home run by having Sam Basu (@samidip), Progress Developer Advocate, deliver a presentation on Xamarin.Forms. He touched on the standard target platforms: iOS, Android, and UWP, but also covered other options for utilizing Xamarin.Forms: macOS, Tizen, and the web. Sam, as always, did a great job covering the latest features and opportunities for leveraging Xamarin.Forms to create cross platform applications.


 

Following the Xamarin.Forms presentation, I attended a session led by Carl Bergenhem (@carlbergenhem), Product Manager, Web Components @KendoUI and @KendoReact, that covered what’s coming in R2 2019 of Telerik and Kendo UI. There are a lot of great things in this release, which is now available and which you can check out here: https://www.progress.com/kendo-ui. From my perspective, the most exciting updates were for UI for Xamarin (of course) and UI for Blazor. It is amazing how rapidly the Progress team is evolving the toolset, especially given that Blazor was not a generally available product (at the time this was published) yet UI for Blazor already was.


 

After seeing all the “goodness” planned for Kendo UI, I was fortunate to attend a session led by T.J. VanToll (@tjvantoll), Principal Developer Advocate at Progress. His session was titled “One Project, One Language, Three Apps.” In this session, he focused on NativeScript and React Native and how they both can be used to build web, native iOS, and native Android applications. The demos were great, and he also covered when and when not to use each tool.


 

The final ProgressNEXT technical session I attended was led by Carl Bergenhem (@carlbergenhem), Product Manager, Web Components @KendoUI and @KendoReact. In this session, Carl covered Blazor, the client-side .NET framework that runs in any browser. (Yes, C# executing in the browser!) Blazor utilizes the Mono .NET runtime, running on WebAssembly, to execute normal .NET assemblies in a browser. Carl did an awesome job introducing Blazor and showing how a .NET developer can leverage the technology when building applications.


 

I had a great time at my first ProgressNEXT conference. The Progress team did a wonderful job with all aspects of this event. The venue, food, entertainment, scheduling, general and technical sessions were excellent. As a developer who was only familiar with the UI/UX tools Progress creates, attending ProgressNEXT has greatly expanded my perspective and understanding of the Progress ecosystem. I highly recommend attending ProgressNEXT and hope to see you at ProgressNEXT20, June 14-17, 2020 in Boston, MA.


Cross Platform Application Development Fundamentals with Xamarin

Xamarin, founded in 2011 by Miguel de Icaza and now owned by Microsoft, is an open source platform that allows developers to create native iOS, Android, and Windows UWP applications using the .NET framework and C#.  With Xamarin, developers have a powerful tool that can be used to deliver cross platform applications from a single codebase.  As with any tool, success or failure depends on knowing how best to use it, and that applies to Xamarin.  Xamarin is very powerful, and some fundamental knowledge is required to maximize the value of its design goals.

Xamarin utilizes a number of items to create cross platform applications.

  • C# Language - Allows developers to use a modern language with advanced features

  • Mono .NET framework - A cross platform version of the .NET framework

  • Visual Studio for Windows and Mac - An advanced IDE that developers use to create, build, and deploy software

  • Compiler - Produces an executable for each of the target platforms

For developers currently utilizing the C# and the .NET framework, these items should be familiar.

To access platform-specific features, Xamarin exposes the platform SDKs via familiar C# syntax.  Here is a breakdown by platform:

  • iOS - Xamarin.iOS exposes the CocoaTouch SDK as namespaces that can be referenced from C#

  • Android - Xamarin.Android exposes the Android SDK as namespaces that can be referenced from C#

  • Windows - Windows UWP applications can only be built using Visual Studio for Windows with the corresponding project type, and the platform namespaces can be referenced from C#
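
To make that concrete, here is a small, hypothetical sketch (the class and label text are illustrative) of touching a native iOS API directly from C# with Xamarin.iOS:

using UIKit;          // CocoaTouch (UIKit, Foundation, etc.) exposed as .NET namespaces
using CoreGraphics;   // native graphics types such as CGRect

public class GreetingViewController : UIViewController
{
    public override void ViewDidLoad()
    {
        base.ViewDidLoad();

        // Native UIKit types are created and used directly from C#
        var label = new UILabel(new CGRect(20, 60, 280, 40))
        {
            Text = "Hello from Xamarin.iOS"
        };

        View.AddSubview(label);
    }
}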

 

Building cross platform applications with Xamarin follows the familiar process of code, compile, and deploy.  During the compilation step, the C# code is converted into a native application for each target platform, but the output is quite different.  For iOS, the C# code is ahead-of-time (AOT) compiled to ARM assembly language.  The .NET framework is included, minus any unused classes.  For Android, the C# code is compiled to Intermediate Language (IL) and packaged with the Mono runtime, which uses Just-In-Time (JIT) compilation.  As with the iOS scenario, the .NET framework is included minus any unused classes.  The Android application runs side-by-side with ART (the Android runtime) and interacts with native Java types using JNI (Java Native Interface).  For Windows UWP, the C# code is compiled to Intermediate Language (IL) and executed by the built-in runtime; Windows UWP applications do not use Xamarin directly.  Despite these differences, Xamarin provides a seamless experience for writing C# code that can be reused across all target platforms.

Visual Studio for Windows or Visual Studio for Mac can be used for Xamarin development.  Which IDE you choose will determine which platforms you can target with your application.  Here is a quick breakdown by IDE:

Visual Studio for Windows:

  • Xamarin.iOS (requires a Mac)

  • Xamarin.Android

  • Xamarin.UWP

Visual Studio for Mac:

  • Xamarin.iOS

  • Xamarin.Android

 

If your plan is to target iOS, Android, and UWP, Visual Studio for Windows is your choice of IDE, but you must have access to a Mac running macOS to build the iOS application. Here is a link for more information on system requirements for Xamarin.

In order to deliver cross platform applications with Xamarin that inherently support code reuse, the application architecture is a key component of success.  Following object oriented programming principles such as encapsulation, separation of concerns/responsibilities, and polymorphism contributes positively to code reuse.  Structuring the application in layers that focus on specific concerns (i.e. data layer, data access layer, business logic layer, user interface layer) and utilizing common design patterns such as Model-View-ViewModel (MVVM) and Model-View-Controller (MVC) also support code reuse.  These concepts and patterns, when used appropriately, minimize duplication and complexity, organize the code effectively, and position it to be leveraged across multiple platforms.
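
To make the MVVM point concrete, here is a minimal, hypothetical ViewModel sketch (the class and property names are illustrative).  Because it contains no platform-specific types, the same class can sit in shared code and be bound to the UI on every target platform:

using System.ComponentModel;
using System.Runtime.CompilerServices;

public class SpeakerViewModel : INotifyPropertyChanged
{
    private string _name;

    public string Name
    {
        get { return _name; }
        set
        {
            if (_name == value) return;
            _name = value;
            OnPropertyChanged();   // notify the bound UI, whichever platform it is on
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged([CallerMemberName] string propertyName = null)
    {
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
    }
}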

Sharing code is also a key component of successfully using Xamarin to deliver cross platform applications.  There are several options for sharing code between target platforms: Shared Projects, Portable Class Libraries (PCL), and .NET Standard Libraries.  Each has unique features, which are summarized below:

Shared Projects:

  • Allow code to exist in a common location that can be shared between target platforms

  • Can contain platform-specific code/functionality

  • Compiler directives are used to include/exclude platform-specific functionality for each platform target (see the sketch after this comparison)

  • During the build process, the code in the shared project is included in each of the platform target assemblies (there is no output assembly for the shared project)

  • More Shared Project information

Portable Class Libraries (PCL):

  • Allow code to exist in a common location that can be shared between target platforms

  • Cannot contain any platform-specific code

  • Have a profile that describes which features are supported (the broader the profile, the smaller the number of available features)

  • Are referenced by other projects, and there is an output assembly after the build

  • More PCL information

.NET Standard Libraries:

  • Allow code to exist in a common location that can be shared between target platforms

  • Cannot contain any platform-specific code

  • Have a larger number of available features than PCLs

  • Have a uniform API for all .NET platforms supported by the version of the library

  • Are referenced by other projects, and there is an output assembly after the build

  • More .NET Standard information
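
As referenced in the comparison above, here is a hypothetical sketch of the compiler-directive approach in a Shared Project.  The __IOS__ and __ANDROID__ symbols are defined by the Xamarin build for the respective platform heads, so the same shared file compiles differently into each platform assembly:

// A single file in a Shared Project - compiled into each platform target assembly
public static class DeviceInfo
{
    public static string PlatformName()
    {
#if __IOS__
        return "iOS";        // only included when compiled into the Xamarin.iOS project
#elif __ANDROID__
        return "Android";    // only included when compiled into the Xamarin.Android project
#else
        return "Windows";    // e.g. the UWP head project
#endif
    }
}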

Microsoft released .NET Standard 2.0 in late 2017.  In this release, the number of supported APIs increased by over 20,000 compared to the previous version (1.6) of the .NET Standard.  This increase allows developers to place even more code into reusable libraries.  .NET Standard 2.0 is supported by the following .NET implementations:

  • .NET Core 2.0

  • .NET Framework 4.6.1

  • Mono 5.4

  • Xamarin.iOS 10.14

  • Xamarin.Mac 3.8

  • Xamarin.Android 8.0

  • Universal Windows Platform 10.0.16299

As you can see from the list, if your goal is maximum code reuse and your platform targets are supported by .NET Standard 2.0, a .NET Standard 2.0 library is where you should place all of your reusable code.  For Xamarin, making code reuse central to your software architecture allows you to deliver your mobile application to multiple platforms faster.
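
As a simple, hypothetical sketch of that idea (the class name and calculation are illustrative), business logic placed in a .NET Standard 2.0 class library contains no platform-specific code and can be referenced, unchanged, by Xamarin.iOS, Xamarin.Android, UWP, and any other implementation in the list above:

using System.Collections.Generic;
using System.Linq;

// Compiled once into a .NET Standard 2.0 library (an ordinary assembly)
// and referenced by every platform head project.
public class InvoiceCalculator
{
    public decimal Total(IEnumerable<decimal> lineItems, decimal taxRate)
    {
        var subtotal = lineItems.Sum();
        return subtotal + (subtotal * taxRate);
    }
}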

When exploring Xamarin, make sure you spend adequate time learning and understanding these fundamental design and project organizational concepts. Understanding and using them are necessary to take full advantage of Xamarin as a tool to effectively and efficiently deliver cross platform applications.

Keep Right’in Code!

Richard

Why Unit Testing?

Throughout my career in the software development industry, unit testing has been a practice that I have known about, somewhat understood the benefits, but never had a real opportunity to practice.  When you are a part of a small development team and you are looking for ways to deliver value to your customers more quickly, skipping unit testing “feels” like you are doing the right thing.  But if your project becomes more successful, more complicated, and your team grows, not having unit tests becomes risky and costly.

Unit testing is a software testing approach by which a unit of work within your source code is tested to determine if it functions properly.  You can think of a unit (unit of work) as the smallest testable part of your code.  A unit test is code a developer creates to test a unit.  It is basically code written to test code.  For example, let’s say some code has a function that accepts an array of numbers and returns the sum of the values.  Several unit tests (code) can be written to test the function by providing arrays of known values as input and comparing the result to the known sum of the values in the input array.  If the sum returned by the function matches the known sum, the unit test passes.  So big deal, the function can correctly calculate the sum of the values in an input array.  What’s the value in creating unit tests for that?
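
To make this concrete, here is a hypothetical version of that function along with a couple of unit tests written with xUnit.net, one of the frameworks mentioned later in this post (the names are illustrative):

using System.Linq;
using Xunit;

public class Calculator
{
    // The "unit of work" under test
    public int Sum(int[] numbers)
    {
        return numbers.Sum();
    }
}

public class CalculatorTests
{
    [Fact]
    public void Sum_ReturnsTotalOfAllValues()
    {
        var calculator = new Calculator();

        var result = calculator.Sum(new[] { 1, 2, 3, 4 });

        // Compare the result to the known sum of the input values
        Assert.Equal(10, result);
    }

    [Fact]
    public void Sum_OfEmptyArray_ReturnsZero()
    {
        Assert.Equal(0, new Calculator().Sum(new int[0]));
    }
}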

Well, let’s expand our thinking for a moment.  You have an idea that might improve the efficiency of the function by 10x, and it requires that you refactor how the function sums the values of the input array.  Let’s make this a little more interesting and assume you are not the original author of the function and that the output of the function should be the same as before your refactor.  Without unit tests in place, how do you determine if the function is still working correctly after your refactor?  This is the scenario where unit testing adds tremendous value.

There are a couple of items in this scenario that contribute to increased risk and cost.  All non-trivial applications/libraries will have code refactored at some point which introduces risk.  Without unit testing, the only way to ensure that code continues to function correctly after refactoring is to perform manual testing and in some scenarios the code that was refactored can only be tested indirectly as a part of a larger operation.  This leads to increased cost because resources (developers/testers) have to be allocated to a manual testing effort and human resources are the most expensive part of any business.  In addition, this manual testing effort requires time which delays the delivery of value to the end users.

Returning to our scenario, let’s assume there were unit tests available and the tests were set up to execute as a part of the build process.  After the refactor, you rebuild the code, which kicks off the execution of the unit tests.  Almost instantaneously you will get feedback on your code changes.  Assuming that the unit tests are correctly testing the function (big assumption) and the unit tests are still passing after your refactor, you can be fairly confident that you have not introduced errors.  This does not eliminate the need for all manual testing, but it is safe to conclude that the manual testing effort can be minimized and that unit testing has reduced, not eliminated, risk and cost.

Based on my view of the software development world, I would say that most development shops do not actively use unit testing as a tool to mitigate risk and cost.  To the inexperienced, unit testing is overhead in the software development process that can be eliminated to deliver value to users faster.  But what happens when the “overhead” is eliminated for years, the project has become more complicated, and more software developers have been added or experienced developers have departed?  The result: a huge amount of risk to the project when changes are made, increased cost in time to test, and delayed delivery of value to the end users - a mountain of technical debt.  Unit testing, if used properly, is a tool that helps teams of all sizes be more efficient and minimize technical debt.  It should not be viewed as “overhead” or a barrier to delivering value to the end users.

If you are developing software using the .NET (Microsoft) technology stack, there are several unit testing frameworks available.  The most popular are Microsoft Unit Testing Framework for Managed Code (MSTest), NUnit, and xUnit.Net.  All get the job done and warrant evaluation if you are considering adding unit testing to your project.

Unit testing is a tool, and like all tools it can cause problems if not used properly.  But when used properly, unit testing is a powerful tool in the tool chest.  Whether you are starting a new project or supporting a legacy codebase, unit testing should be in your tool belt to help minimize risk and cost while delivering value to your end users.

Keep Right’in Code!
Richard

Why Choose Xamarin?

Today there are many options to choose from when building a mobile application.  What’s clear is that if you do not support both iOS and Android at a minimum, you run the risk of alienating potential users.  Many organizations have mitigated that risk by investing heavily in creating multiple teams that have different skills and use different tools to deliver applications targeted at the various mobile platforms.  For some, that investment may seem like a duplication of effort and potentially presents an opportunity to consolidate and reduce costs.  Again, there are several options to consider when evaluating this opportunity, but if your organization has a current investment in the .NET (Microsoft) framework using C#, Xamarin could be a solution that allows you to leverage that investment to extend the value your organization creates into the mobile space.

Xamarin, founded in 2011 by Miguel de Icaza and now owned by Microsoft, is an open source platform that allows developers to create native iOS, Android, and Windows applications using the .NET framework and C#.  For organizations/teams that currently develop using the .NET framework, this is very familiar territory because Xamarin is fully integrated into Visual Studio and Visual Studio for Mac which are core development tools in this space.  So if you are looking to leverage existing .NET/C# skills, the Xamarin story starts to become compelling; but there is more....

Developers have a myriad of options when creating software to solve problems.  Without guidance and experience, they can get into trouble really quickly.  Of all the guidance available (and there is plenty), there are two things that I consider paramount for success as a developer: 1 - keep things simple, and 2 - don’t repeat yourself (DRY, which contributes to #1).  Code reuse should be a top priority when developing production software.

Because the Xamarin platform utilizes the .NET framework, there are tools, cross platform capabilities, and code reuse strategies available that enhance the Xamarin story.  For example, C# is used to create applications with the Xamarin platform, and the language provides features that can be leveraged with intentional design to achieve code reuse. (i.e. class inheritance, generics, etc.)  Also, the .NET framework supports Shared Projects, Portable Class Libraries (PCL), and .NET Standard Libraries.  Placing core business logic in those projects/libraries allows it to be used in solutions for all .NET framework target platforms, including those supported by Xamarin.  For example, one could have core business calculations or operations in a Shared Project, PCL, or .NET Standard Library and reuse that code for Windows, web, and Xamarin applications.  Here is a quick breakdown of the features/options for the project/library types:

Shared Projects:

  • Allow code to exist in a common location that can be shared between target platforms

  • Compiler directives are used to include/exclude platform-specific functionality located in the code

  • During the build process, the code in the shared project is included in each platform target assembly (there is no output assembly for a shared project)

  • More Shared Project information

Portable Class Libraries (PCL):

  • Allow code to exist in a common location that can be shared between target platforms

  • Cannot contain any platform-specific code

  • Have a profile that describes which features are supported (the broader the profile, i.e. the more selected platform targets, the smaller the number of available features)

  • Are referenced by other projects, and there is an output assembly after the build

  • More PCL information

.NET Standard Libraries:

  • Allow code to exist in a common location that can be shared between target platforms

  • Cannot contain any platform-specific code

  • Have a larger number of available features than PCLs

  • Have a uniform API for all .NET platforms supported by the version of the library

  • Are referenced by other projects, and there is an output assembly after the build

  • More .NET Standard information

 

Microsoft released .NET Standard 2.0 in late 2017.  In this release, the number of supported APIs increased by over 20,000 compared to the previous version (1.6) of the .NET Standard.  This increase allows developers to place even more code into reusable libraries.  .NET Standard 2.0 is supported by the following .NET implementations:

  • .NET Core 2.0

  • .NET Framework 4.6.1

  • Mono 5.4

  • Xamarin.iOS 10.14

  • Xamarin.Mac 3.8

  • Xamarin.Android 8.0

  • Universal Windows Platform 10.0.16299

As you can see from the list, if your goal is maximum code reuse and your platform targets are supported by .NET Standard 2.0, a .NET Standard 2.0 library is where you should place all of your reusable code.  For Xamarin, making code reuse central to your software architecture allows you to deliver your mobile application to multiple platforms faster, but there is more...

What if you could take code reuse to the user interface?  Well, you can with Xamarin.Forms.  Xamarin.Forms allows developers to build native user interfaces for iOS, Android, and Windows using C# and XAML.  Developers use abstractions of user interface controls to construct the user interface, and at runtime those abstractions are rendered as native user interface elements.  By connecting a Xamarin.Forms user interface with shared backend code, developers can build fully native iOS, Android, and Windows applications from a single code base and, depending on the application and technical design, can achieve over 96% code reuse.
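
As a small, hypothetical sketch of that idea (the page and control names are illustrative), a single Xamarin.Forms page defined in shared C# code is rendered with native controls on each platform:

using Xamarin.Forms;

// Defined once in shared code; Label, Button, and StackLayout map to
// native controls on iOS, Android, and Windows at runtime.
public class WelcomePage : ContentPage
{
    public WelcomePage()
    {
        var message = new Label { Text = "Hello from shared UI code" };
        var button = new Button { Text = "Tap me" };

        button.Clicked += (sender, args) => message.Text = "Thanks for tapping!";

        Content = new StackLayout
        {
            Padding = 20,
            Children = { message, button }
        };
    }
}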

Until now, we have covered the advantages of using Xamarin for cross platform mobile development, but what are the disadvantages?  I have been using Xamarin for about a year.  My largest hurdle has been learning about developing for the target platforms and their requirements.  This is not a disadvantage of Xamarin but an entry cost for anyone new to mobile development, and selecting any tool would require paying this cost.  So, I would say that Xamarin can be used for most mobile applications.  Unless your application requires specific platform features or special hardware, Xamarin is a really compelling option for delivering mobile applications on multiple platforms and a potential cost saver if your organization/team already has .NET framework and C# skills.

Keep Right’in Code!

Richard

Implementing a Successful Architecture for your Angular 2 Application Using Modules

The choices made when setting out to build an application can either contribute to the success or failure of a project.  Execution environment/platform, source control management, testing strategies, tools, languages, code organization, and software architecture are all key decisions at the beginning and throughout the development of a project. New frameworks like Angular 2 have little guidance or reference material to help you move down a successful path when setting out to build what would be considered a non-trivial application.

Currently, I am focused on acquiring in-depth knowledge of Angular 2.  Having worked with Angular 1.x for the past couple of years, I am excited about the value that Angular 2 delivers.  I learn by doing, so I have set out to build a simple web application (Speaker Register) that allows conference speakers to create searchable profiles that are available to conference and meeting planners.  I have decided to use a SPA (Single-Page Application) architecture built with Angular 2 and ASP.NET Web API hosted on ASP.NET Core 1.0 and .NET Core 1.0.  The application’s repo is located here: https://github.com/rightincode/speakerregister.  Feel free to follow along as the application evolves over the next several weeks.

Below is a high-level diagram of the current software architecture of the Angular 2 code:

[Diagram: Speaker Register high-level architecture]

Angular 2 applications are intended to be modular by organizing your code into modules.  Modules are blocks of code dedicated to a single purpose and inherently support re-use.  They consolidate components, directives, and pipes (angular concepts) into cohesive blocks of functionality focused on a feature area, workflow, or common collection of utilities (library).  Modules export something of value, for example, a class, function, value, service, and/or component.  The exported items can be imported by other modules. 

An Angular 2 application contains at least one module that is named the root module.  Angular 2 launches an application by bootstrapping the root module.  Additional modules, named feature modules or shared modules can be used to separate concerns and better organize an application.  An Angular 2 module is a class (a Typescript class in the case of the sample application) that is decorated with the @NgModule decorator function.  Let’s take a look at the modules in Speaker Register.

[Diagram: Speaker Register modules]

import { NgModule }      from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { HTTP_PROVIDERS } from '@angular/http';

/* App Root */
import { AppComponent }  from './app.component';
import { PageNotFoundComponent } from './pagenotfound.component';
import './rxjs-operators';

/* Feature Modules */
import { SpeakerModule } from './speaker/speaker.module';
import { ConferenceModule } from './conference/conference.module';

/* Routing */
import { routing, appRouterProviders } from './app.routing';

@NgModule({
    imports: [BrowserModule, ConferenceModule, SpeakerModule, routing],
    declarations: [AppComponent, PageNotFoundComponent],
    providers: [appRouterProviders, HTTP_PROVIDERS],
    bootstrap: [AppComponent]
})
export class AppModule { }

The AppModule is the root module of the Speaker Register application.  The @NgModule block near the bottom of the listing above is where Angular is instructed that the exported class AppModule is a module.  The @NgModule decorator function accepts an object that contains metadata (configuration information) that describes features of the module.

  • imports: this is a list of modules whose exported assets (directives, pipes, etc.) are available to templates in this module. In the code above, BrowserModule is imported from the Angular library.  ConferenceModule and SpeakerModule are two other modules (feature modules) that contain specific functionality for those areas of the Speaker Register application.  (routing is beyond the scope of this post but will be covered later)
  • declarations: this is a list of the components, directives, pipes, etc. that belong to this module. AppComponent (the container component for the application) and PageNotFoundComponent belong to the root module
  • providers: this is a list of injectable objects that are available in the injector of this module. (providers are beyond the scope of this post but will be covered later)
  • bootstrap: the list of components that should be bootstrapped when this module is bootstrapped.  In the code above, AppModule is our root module and when it is bootstrapped, AppComponent (the container component for the application) is bootstrapped

When the application starts, AppModule gets bootstrapped. (starts executing)  AppModule imports several modules from the Angular libraries, imports two components (AppComponent and PageNotFoundComponent – we will cover those in a later post) that are a part of AppModule, loads two feature modules (SpeakerModule and ConferenceModule), and sets up routing. (we will cover routing in a later post)

Speaker Register also has two feature modules named SpeakerModule and ConferenceModule.  Both of these modules export classes that are decorated by the @NgModule function similar to the AppModule.  They contain only what is needed to implement the features of those parts of the application.  This allows for the separation of concerns between modules in the application. For example, the SpeakerModule is only concerned about the speaker functionality and exports an API to modules that import it.  It (SpeakerModule) can be updated independently without a negative impact to other modules as long as the API has not changed after the updates.  Take a look at the SpeakerModule and ConferenceModule in the source code: https://github.com/rightincode/speakerregister

The module in Angular 2 is a very powerful tool to help organize your code and build a codebase that is much easier to maintain.  In Speaker Register, we have made use of the module to create a software architecture designed to separate the concerns of specific areas within the application.  Add this tool (module) to your toolbox.  It will help you create a successful software architecture for your Angular 2 applications.

Keep Right’in Code!

Richard – @rightincode

Using Angular 2 RC5 with Visual Studio 2015 and ASP.NET Core 1.0

Lately, I have been spending time learning Angular 2.  During my study, I have seen many examples of setting up and getting started by using Visual Studio Code, WebStorm, and other excellent IDE’s.  I have also read several articles about setting up and using Angular 2 beta versions with Visual Studio 201x.  Since  I spend most of my time using Visual Studio 2015, this article will show you how to setup Angular 2 RC5 with Visual Studio 2015 and ASP.NET Core 1.0 on .NET Core 1.0.

Before you get started, make sure you have Update 3 for Visual Studio 2015 installed along with the Preview 2 tooling for Microsoft .NET Core 1.0.  Once you have confirmed your installation, fire up Visual Studio 2015 and select “New Project” from the Start Page.  You will be presented with the dialog below:

[Screenshot: New Project dialog]

Make sure you have selected the “ASP.NET Core Web Application (.NET Core)” option. Name the project and the solution, and set the location to whatever you would like.  Click “Ok” to continue.  Next you are presented with selecting what type of project template you would like to begin with.  Select “Empty”.  Authentication should be set to “No Authentication” and the “Host in the cloud” option should not be selected.  See the screenshot below:

[Screenshot: Empty ASP.NET Core template selection]

Click “Ok” to continue.  After VS (Visual Studio) completes the setup you will be presented with the project readme HTML file in the editor.  If you take a look at Solution Explorer, your solution structure should look like the screenshot below:

[Screenshot: Solution Explorer after project creation]

The next step in setting up ASP.NET Core 1.0 to serve Angular 2 RC5 is to configure your application to serve static files.  First, right-click on your web application in solution explorer and select “Manage NuGet Packages”.  In the NuGet Package Manager, enter “Microsoft.AspNetCore.StaticFiles” in the search window.  You will be presented with the screenshot below:

[Screenshot: NuGet Package Manager]

Make sure you have the latest stable version selected and click “Install”.  You may be asked for permission to update your application as well as to accept licensing terms.  Confirm to complete the installation and close the NuGet Package Manager tab.  Finally, edit the Startup.cs file in your solution.  It should look like the code below:

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

namespace WebApplication1
{
    public class Startup
    {
        // This method gets called by the runtime. Use this method to add services to the container.
        // For more information on how to configure your application, visit http://go.microsoft.com/fwlink/?LinkID=398940
        public void ConfigureServices(IServiceCollection services)
        {
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app)
        {
            app.UseFileServer();
        }
    }
}

For a more in depth understanding of serving static files with ASP.NET Core 1.0, please visit here.

Now we are ready to begin setting up our environment for Angular 2 RC5.  “Right-click” on your web application project and select “Add” then “New Item”.  You will be presented with the dialog below:

[Screenshot: Add New Item – npm Configuration File]

Navigate to “Client-side” under .NET Core and then select “npm Configuration File” (package.json).  Click Add.  This file is used by NPM (Node Package Manager) to install required modules for our Angular 2 application.  You will be presented with the package.json file loaded in the editor.  Edit the package.json file to look like the code below:

{
  "version": "1.0.0",
  "name": "webapplication1",
  "scripts": {
    "postinstall": "typings install",
    "typings": "typings"
  },
  "dependencies": {
    "@angular/common": "2.0.0-rc.5",
    "@angular/compiler": "2.0.0-rc.5",
    "@angular/core": "2.0.0-rc.5",
    "@angular/forms": "0.3.0",
    "@angular/http": "2.0.0-rc.5",
    "@angular/platform-browser": "2.0.0-rc.5",
    "@angular/platform-browser-dynamic": "2.0.0-rc.5",
    "@angular/router": "3.0.0-rc.1",
    "@angular/router-deprecated": "2.0.0-rc.2",
    "@angular/upgrade": "2.0.0-rc.5",

    "systemjs": "0.19.27",
    "es6-shim": "^0.35.0",
    "reflect-metadata": "^0.1.3",
    "rxjs": "5.0.0-beta.6",
    "zone.js": "^0.6.12",

    "angular2-in-memory-web-api": "0.0.15",
    "jquery": "^3.1.0",
    "bootstrap": "^3.3.6"
  },
  "devDependencies": {
    "typescript": "^1.8.10",
    "gulp": "^3.9.1",
    "path": "^0.12.7",
    "gulp-clean": "^0.3.2",
    "fs": "^0.0.2",
    "gulp-concat": "^2.6.0",
    "gulp-typescript": "^2.13.1",
    "typings": "^0.8.1",
    "gulp-tsc": "^1.1.5"
  }
}

Immediately after saving the package.json file, Visual Studio will begin downloading all the dependencies listed to a folder named “node_modules” in the directory where your web application is located. (If you would like a detailed explanation of these settings, you can go here.)  In addition, you will probably receive an error message stating that npm was not able to resolve Typings dependencies due to a missing “typings.json” file.  Let’s create that file now.

“Right-click” on the web application and select “Add” then “New Item”.  Again select “Client-side” on the far left and then select “JavaScript File”.  Make sure you name the file “typings.json”.  Your screen should look like the screenshot below:

[Screenshot: Add New Item – typings.json]

Click “Add” and you will be presented with the typings.json file in the editor.  Edit the typings.json file to look like the code below:

{
  "ambientDependencies": {
    "es6-shim": "registry:dt/es6-shim#0.31.2+20160317120654",
    "jasmine": "registry:dt/jasmine#2.2.0+20160412134438",
    "node": "registry:dt/node#4.0.0+20160509154515"
  }
}

Save the typings.json file.  If you would like an explanation of these settings, you can go here. Now if you switch back to the package.json file and save it again, Visual Studio should complete the installation of the dependent modules without error.

Next we add a TypeScript JSON Configuration File. (tsconfig.json)  “Right-click” on the web application in Solution Explorer and select “Add” and then “New Item”.  Select “Client-side” on the left and “TypeScript JSON Configuration File”.  Your screen should look like the screenshot below:

[Screenshot: Add New Item – TypeScript JSON Configuration File]

Select “Add” and the tsconfig.json file will be loaded into the editor.  Edit the tsconfig.json to look like the code below:

{
  "compileOnSave": true,
  "compilerOptions": {
    "target": "es5",
    "module": "commonjs",
    "moduleResolution": "node",
    "sourceMap": true,
    "emitDecoratorMetadata": true,
    "experimentalDecorators": true,
    "removeComments": false,
    "noEmitOnError": true,
    "noImplicitAny": false,
    "outDir": "./wwwroot/scripts"
  },
  "exclude": [
    "node_modules",
    "wwwroot",
    "typings/main",
    "typings/main.d.ts"
  ]
}

Save the file.  If you would like an explanation of these settings, you can go here.

We are going to use SystemJS to load our application and library modules.  To do so, we need to create a configuration file for SystemJS so that it can locate the code we need loaded.

Since ASP.NET Core 1.0 serves static files from the “wwwroot” folder, we are going to place the SystemJS configuration file in a folder named “scripts” under this folder.  “Right-click” on the wwwroot folder and select “Add” then “New Folder”.  Name the folder “scripts”.  Solution Explorer should look like the screenshot below:

[Screenshot: Solution Explorer – scripts folder]

Now add the SystemJS configuration file to the scripts folder by right-clicking on the scripts folder and selecting “Add” then “New Item”.  You will be presented with the dialog below:

[Screenshot: Add New Item – JavaScript File]

Select “Client-side” on the left and “JavaScript File”.  Name the file “systemjs.config.js”.  Click “Add” and the “systemjs.config.js” file will be displayed in the editor.  Edit the “systemjs.config.js” file to look like the code below:

(function (global) {
    // map tells the System loader where to look for things
    var map = {
        'app': 'scripts',
        '@angular': 'libs/@angular',
        'angular2-in-memory-web-api': 'libs/angular2-in-memory-web-api',
        'rxjs': 'libs/rxjs'
    };
    // packages tells the System loader how to load when no filename and/or no extension
    var packages = {
        'app': { main: 'main.js', defaultExtension: 'js' },
        'rxjs': { defaultExtension: 'js' },
        'angular2-in-memory-web-api': { defaultExtension: 'js' }
    };
    var ngPackageNames = [
      'common',
      'compiler',
      'core',
      'forms',
      'http',
      'platform-browser',
      'platform-browser-dynamic',
      'router',
      'router-deprecated',
      'upgrade'
    ];
    // Add package entries for angular packages
    ngPackageNames.forEach(function (pkgName) {
        packages['@angular/' + pkgName] = { main: './bundles/' + pkgName + '.umd.js', defaultExtension: 'js' };
    });
    var config = {
        map: map,
        packages: packages
    }
    System.config(config);
})(this);

If you are interested in the settings in the “systemjs.config.js” file, you can go here.

There is one final configuration step to complete and then we are ready to code our Angular 2 application.  As stated before, ASP.NET Core 1.0 serves static files from the wwwroot folder by default.  As a result, we need to move required library files from the node_modules folder to a location under the wwwroot folder.  In addition, if we would like to perform any debugging of the TypeScript code in our browser development tools, we need to have the original TypeScript files served from the server.  In order to accomplish this, we are going to use a gulp script to handle copying the files to their needed location.

“Right-click” on the web application and select “Add” then “New Item”.  Select “Gulp Configuration File”.  The dialog should look like the screenshot below:

[Screenshot: Add New Item – Gulp Configuration File]

Click “Add” and you will be presented with gulpfile.js in the editor.  Edit the gulpfile.js to look like the code below:

/// <binding AfterBuild='clearLibsDestinationFolder, clearAppDestinationFolder, moveToLibs' />
/*
This file in the main entry point for defining Gulp tasks and using Gulp plugins.
Click here to learn more. http://go.microsoft.com/fwlink/?LinkId=518007
*/

var gulp = require('gulp');
var clean = require('gulp-clean');

var libsDestPath = './wwwroot/libs/';
var appDestPath = './wwwroot/app/';

//clear destination folders
gulp.task('clearLibsDestinationFolder',
    function () {
        return gulp.src(libsDestPath)
            .pipe(clean());
    });

gulp.task('clearAppDestinationFolder',
    function () {
        return gulp.src(appDestPath)
            .pipe(clean());
    });

gulp.task('moveToLibs', function () {
    gulp.src([
      'node_modules/es6-shim/es6-shim.min.js',
      'node_modules/systemjs/dist/system-polyfills.js',
      'node_modules/systemjs/dist/system.src.js',
      'node_modules/reflect-metadata/Reflect.js',
      'node_modules/rxjs/bundles/Rx.js',
      'node_modules/zone.js/dist/zone.js',
      'node_modules/jquery/dist/jquery.*js',
      'node_modules/bootstrap/dist/js/bootstrap*.js',

      'node_modules/core-js/client/shim.min.js'

      //'node_modules/systemjs/dist/*.*',
    ]).pipe(gulp.dest('./wwwroot/libs/'));

    gulp.src(['node_modules/@angular/**/*'], { base: 'node_modules/@angular' })
        .pipe(gulp.dest('./wwwroot/libs/@angular'));
    gulp.src(['node_modules/angular2-in-memory-web-api/**/*'], { base: 'node_modules/angular2-in-memory-web-api' })
        .pipe(gulp.dest('./wwwroot/libs/angular2-in-memory-web-api'));
    gulp.src(['node_modules/rxjs/**/*'], { base: 'node_modules/rxjs' })
        .pipe(gulp.dest('./wwwroot/libs/rxjs'));

    gulp.src([
      'node_modules/bootstrap/dist/css/bootstrap.css'
    ]).pipe(gulp.dest('./wwwroot/libs/css'));

    //copy typescript files for debugging purposes - would not deploy to production environment
    gulp.src(['app/**/*']).pipe(gulp.dest('./wwwroot/app'));
});

The gulp file is configured to execute after a successful build of the solution once Task Runner Explorer is set up.  From the VS menu, select “View”, “Other Windows”, and then “Task Runner Explorer”.  Click the “Refresh” button (top left, next to the application name) and Task Runner Explorer will read the gulp file.  You should see three (3) tasks: clearAppDestinationFolder, clearLibsDestinationFolder, and moveToLibs.  You should also see the number three (3) next to the “After Build” bindings.  Now the gulp script will execute after a successful build.

Okay, finally let’s code our Angular 2 application.

The first step is to add an “index.html” file.  “Right-click” on the wwwroot folder, select “Add” then “New Item”, and add an HTML file named “index.html”.  Edit the file to look like the code below:

<!DOCTYPE html>
<html>
<head>
    <title>Angular 2/ASP.NET Core 1.0 QuickStart</title>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="stylesheet" href="libs/css/bootstrap.css">

    <!-- 1. Load libraries -->
    <script src="libs/jquery.min.js"></script>
    <script src="libs/bootstrap.min.js"></script>
    <script src="libs/zone.js"></script>
    <script src="libs/Reflect.js"></script>
    <script src="libs/system.src.js"></script>

    <!-- Polyfill(s) for older browsers -->
    <script src="libs/es6-shim.min.js"></script>
    
    <!-- 2. Configure SystemJS -->
    <script src="scripts/systemjs.config.js"></script>
    <script>
      System.import('app').catch(function(err){ console.error(err); });
    </script>
</head>
<body>
<h1>Hello world from ASP.NET Core 1.0 on .NET Core 1.0!</h1>
    <br/><br/>
    <my-app>Loading...</my-app>
</body>
</html>

Now let’s add a new folder named “app” that will contain all of the TypeScript code for our Angular application.  “Right-click” on the web application project and select “Add” then “New Folder”.  Name the folder “app”.

The first TypeScript file we will add to the project is “main.ts”.  This is where our Angular 2 application starts up.  “Right-click” on the “app” folder and select “Add” then “New Item”.  Select “Client-side” on the left and in the center select “TypeScript File”.  Name the file “main.ts” and select “Add”.  “main.ts” should now be loaded in the editor and your solution should look similar to the screenshot below:

[Screenshot: Solution Explorer with main.ts added]

Edit the main.ts file to look like the code below:

import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';

import { AppModule } from './app.module';

platformBrowserDynamic().bootstrapModule(AppModule);

Visual Studio will flag a few errors but that is okay for now.  They will be resolved when we add the remaining files.  Now let’s add a new TypeScript file to the app folder named “app.module.ts” and edit the file to contain the code below:

import { NgModule }      from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';

import { AppComponent }  from './app.component';

@NgModule({
    imports: [BrowserModule],
    declarations: [AppComponent],
    bootstrap: [AppComponent]
})
export class AppModule { }

Finally, let’s add a third TypeScript file to the app folder named “app.component.ts” and edit the file to contain the code below:

import { Component } from '@angular/core';

@Component({
    selector: 'my-app',
    template: '<h3>Angular 2 RC5 is running here!</h3>'
})

export class AppComponent { }

After adding the third TypeScript file, all of the Visual Studio errors should have been resolved.  Your solution should look similar to the screenshot below:

[Screenshot: Solution Explorer with all TypeScript files added]

I will publish another blog post with more details about the three(3) TypeScript files we just added.  If you would like more information now, you can go here.

Okay, let’s build the application in Visual Studio.  The application should build successfully.  Now let’s launch the application and you should be greeted with the screenshot below:

[Screenshot: The application running in the browser]

We have an Angular 2 RC5 application served up from ASP.NET Core 1.0 on .NET Core 1.0 and built using Visual Studio 2015!  You can find a Visual Studio solution here.

I hope this post is helpful with getting started with Angular 2 development using Visual Studio 2015 and ASP.NET Core 1.0 on .NET Core 1.0.  Let me know if you have any questions.  Keep Right’in Code!

Richard - @rightincode

The Value of a Full-Stack Developer

Several days ago, a colleague of mine gave an excellent talk about Full-Stack developers.  There were several points he made that I thought were interesting:

  • The definition of “Full-Stack” has evolved over the years
  • Based on his definition, to maintain the skills necessary to be a Full-Stack developer is extremely difficult or impossible
  • Given the maintenance difficulty, it is probably not worth the effort to do what is necessary to maintain/become a Full-Stack developer

For the most part, I agree with his assessment of the Full-Stack developer role.  As with all ideas and concepts, there are counterpoints, different opinions, and other perspectives.

We agree that the definition of the role of Full-Stack developer has evolved over the years.  Early in my career, “the stack” consisted of very few parts.  If you knew HTML, JavaScript, CSS, a server-side language (PHP/VB.NET/C#), and SQL and could put it all together to build a web application, you would be considered a Full-Stack developer.  Today, using what is considered a modern approach to building a web application, you would add to the previous list of technologies several client-side JavaScript libraries, additional tools to manage those libraries and other assets (version management/minification/bundling/etc.), new data transport and security tools/techniques, and custom back-end APIs to respond to client-side requests.  If you are planning to support mobile, you just added even more complexity to the stack.  This evolution has increased the complexity of successfully delivering a web application by many orders of magnitude.

10 years ago, keeping up with the technologies required to successfully build a web application was a fairly easy task.  The pace of change was something one could truly manage within a typical work week and an acceptable work/life balance.  Of course this is a personal preference, but the frame of reference for my statement is a 40-hour work week.  Fast forward to today.  The pace of change is incredible.  As an individual developer, keeping up with changes in all the technologies that are a part of “the stack” is impossible.  My colleague and I agree on this point.  One cannot master all the technologies and still be productive.  If you tried, you would spend all your time learning, never building, and a reasonable work/life balance (again, personal definition) could not be achieved.

So is it worth the effort to maintain/become (or attempt to become) a Full-Stack developer?  This is where my colleague and I disagree.  It is my personal belief that striving to become, or maintaining your role as, a Full-Stack developer is definitely worth it.  I don’t believe that one can master all the parts of today’s modern technology “stack”, but I do believe the effort makes you much more valuable to an organization seeking solid contributors.  As a hiring manager, I am never impressed by how “deep” your knowledge of a specific technology or part of “the stack” is.  The tools and technologies change constantly.  What I’m looking for is your ability and enthusiasm to solve problems with the appropriate tools and technologies.  Your ability to fearlessly and with an open mind explore the leading edge of our profession is also more valuable, in my opinion.

The Full-Stack developer may or may not exist today because it is 100% based on your definition.  But it is my genuine belief that, over the course of your career as a professional software developer, broader knowledge maximizes your value and minimizes your risk.  Define what a Full-Stack developer is to you, set your limits on work/life balance, and go capitalize on the incredible career of a software developer.

 

-Richard

@rightincode

CodeStock 2015 Recap

Once again, I had the privilege of attending CodeStock in Knoxville, TN. last weekend.  The CodeStock team really knows how to put on a great conference.  In 2014, there were about 450 attendees.  This year, 900 attendees!  Yes, doubled in one year!  That should give you a good indication of the popularity and the quality of the conference.  Also, the conference relocated to the Knoxville Convention Center and I must say, the facilities were great.

This year it was our pleasure as attendees to hear from keynote speaker Scott Hanselman, @shanselman. (hanselman.com)  If you haven’t had the pleasure of hearing Scott live, add it to your bucket list.  It was an absolute treat and he is hilarious.  I have a little something extra to say about Scott but I will save that for later.

CodeStock’s sessions were broken down into five categories: Design, Development, Entrepreneur, IT Pro, and Other.  I mainly work on web projects, so I focused on the Design and Development categories. (Although I did attend a couple of Entrepreneur sessions given by some folks I respect.)  Day one for me consisted of talks on Dependency Injection by James Bender, @JamesBender, AngularJS by Dave Baskin, @dfbaskin, ASP.NET vNext by Sam Basu, @samidip, and An Honest Look at a Successful Software Consultant by Jim Christopher, @beefarino.  All great talks by some of the best presenters out there. Day two was more of the same: Diving into Angular 2.0 by Josh Carroll, @jwcarroll, Deep Dive into ASP.NET 5 by Jeff Fritz, @csharpfritz, and Web Application Security by Steve Brownell.  Again, all great presentations!

CodeStock is a jewel of a conference.  Great talks, great attendees, great team, great value, and great price!  I have attended this conference for the past three years and it has only improved year after year.  The CodeStock team has earned my respect and support and I plan to continue to attend.  If you are looking for a great return on your conference dollars, CodeStock is a conference you should have on your list.

One last thing.  I was sitting in one of the conference rooms waiting for the next talk, actually looking down at my laptop, and Scott Hanselman stopped by and said hello.  Of course, I was quite surprised when he said “hey, we follow each other on twitter, right?”  I don’t send 100 tweets per day, barely 10 per week, but he recognized me out of his 130k+ followers.  I thought that was incredible given his popularity and just the number of people he comes in contact with.  He and I shared a 10-minute conversation.  During our chat, he asked me what I was working on and even offered some advice. (invaluable advice)  I’m writing this because I have “mad” (great) respect for someone of his caliber who is genuinely attentive to the community.  As I said earlier, if you haven’t heard Scott @shanselman (hanselman.com) speak live, add it to your bucket list.  It will be a real treat.

 

Richard

@rightincode