Automation with the TFS web hook and REST services

At Associated Electric, we leverage Microsoft’s Team Foundation Server (TFS) for work tracking and as a source code repository.  Over the years I have customized and built out work items to help me manage and stay on top of the work my team is doing.  I have also experimented with some of the forecasting and scheduling functions of TFS, i.e. burn downs, kanban boards, task boards, etc.  The challenge with many of these functions is that the data entry required to make them work can be intense.  A medium-sized project may consist of a single TFS work item with a dozen or so user stories, and those user stories may each have half a dozen tasks, all requiring a half dozen or so fields just to communicate the work, who’s doing it, and when.  Getting a burn down and accurate forecasting in TFS requires at least another half dozen fields, and if you don’t take the time to think about each of those fields, having them won’t provide you with any value.  For a project like this you may easily spend a day or two just keying work items into TFS, not a very exciting endeavor for an analyst or developer.  I settled for using story points on the user stories and not bothering with the task level, but I was always hungry for more out of TFS.

Recently I had to spend an extended amount of time away from managing my team, and when I returned, I discovered that things were not running as well as I would have hoped.  It is nothing against my team; they were just trying to execute the work the best they knew how.  It became obvious to me that I needed more insight into all the work going on in my team, and they needed more clarity on priorities.  Across my team there is a large range of work, everything from writing annual reviews to tier-two user support to implementing ASP.NET web applications.  The one thing all that work has in common is that it can be abstracted into tasks, tasks that can be scheduled and prioritized in TFS.

I also discovered that TFS 2015 has service hooks, including a generic web hook that can be called on work item creation and updates.  From there my creativity exploded.  All of my team’s project milestones and support requests are modeled as user stories, so using the TFS REST APIs, I created a web hook that automatically creates a child task for the implementation of the project milestone or the resolution of the support request.  I even plugged into the time tracking data we maintain to populate time estimates into these tasks based on the TFS area.  Doing that instantly enabled me to start leveraging the TFS task board with accurate time estimates and absolutely no additional data entry.  I also implemented a web hook for work item updates to automatically close the parent user story when all child tasks are closed.  I created a second update action to propagate changes from the parent user story down to all active children, i.e. if you carry a user story forward into the next iteration, the iteration will automatically update for the remaining child tasks.  By automating this data entry, my team is now using the task board, which gives me a single place to review all the work going on and the opportunity to easily guide and correct work priorities for team members.  All of this is happening without any additional data entry on my part.  I’ve included the models that you can use to deserialize the JSON calls from the TFS service hooks to help anyone else get started.

 

TfsWebHookModel.cs
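
As a taste of what those models look like, here is a trimmed-down sketch.  It assumes Json.NET and covers only a handful of properties; the property names follow the documented workitem.created / workitem.updated payloads, so verify them against what your TFS instance actually sends.

using System.Collections.Generic;
using Newtonsoft.Json;

// Trimmed-down sketch of a model for the TFS 2015 work item service hook payload.
public class TfsWebHookEvent
{
    // e.g. "workitem.created" or "workitem.updated"
    [JsonProperty("eventType")]
    public string EventType { get; set; }

    [JsonProperty("resource")]
    public TfsWorkItemResource Resource { get; set; }
}

public class TfsWorkItemResource
{
    [JsonProperty("id")]
    public int Id { get; set; }

    [JsonProperty("url")]
    public string Url { get; set; }

    // Work item fields arrive keyed by reference name,
    // e.g. "System.Title", "System.AreaPath", "System.IterationPath".
    [JsonProperty("fields")]
    public Dictionary<string, object> Fields { get; set; }
}

// Usage: var evt = JsonConvert.DeserializeObject<TfsWebHookEvent>(requestBody);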

User Authentication with PassportJS

My next adventure in learning Node centers around mastering some of the most common operations performed on web sites.  Recently I dove into user management and authentication.  The most common way that a web site secures itself is with a user name/password and then a cookie.  Web sites need that cookie because your authentication information has to persist between round trips to the server; the web server needs to remember who you are and the fact that you are logged in.  In recent years this story has changed, as you can now log in with a Facebook or Google account to websites that are not Facebook or Google.  My intent with Node is to just get down the basics, so I’m only focusing on a basic user name/password situation, but it would sure be nice if I could scale it out and swap in different login providers.  To handle user authentication I discovered the “PassportJS” middleware, which does just that.  Passport defines the core functionality and then relies on different “strategies” to swap out different providers.  In my experimentation I focused on the “local” strategy, but there are dozens of strategies that connect to different login and authentication providers.  The local strategy is basically a do-it-yourself option, perfect for learning.  Getting Passport working required three things:
1. Define your authentication “strategy”
2. Tell Passport how to serialize/deserialize users
3. Make sure you have the session enabled

My Strategy

To get going, I defined the implementation of my local strategy, which is simply to compare a login attempt to a hard-coded list of user names and passwords.  It goes without saying that this is terrible practice, but I’m practicing.  A real-life local strategy would call out to a database, hash the password, and check for a match between the user name and the hashed password; oh, and it would not surface the user’s password to the rest of the application.
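
Here is a minimal sketch of that local strategy.  The hard-coded user list and field names are purely illustrative, and all of the strategy’s other options are left at their defaults.

var passport = require('passport');
var LocalStrategy = require('passport-local').Strategy;

// Purely illustrative hard-coded "user store" -- terrible practice, as noted above.
var users = [
  { id: 1, username: 'alice', password: 'password1' },
  { id: 2, username: 'bob', password: 'password2' }
];

passport.use(new LocalStrategy(function(username, password, done) {
  // Compare the login attempt against the hard-coded list.
  var user = users.filter(function(u) {
    return u.username === username && u.password === password;
  })[0];
  if (!user) {
    return done(null, false, { message: 'Invalid user name or password' });
  }
  return done(null, user);
}));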

Serialization

The second thing is to tell Passport how to serialize and deserialize user records.  When a user visits a page in your application, Passport will hydrate the user record for you so that your application has access to details about the user visiting the website.  In my example, that means I have access to the user’s name, password, and id.  If this were true local authentication, deserializing a user would mean executing a database call with the session identifier, pulling out the user’s information, and extending the session; certainly not leaking out the user’s password.
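
In code, that looks roughly like the following; the lookup against the hard-coded list from the strategy above stands in for whatever your application would really do.

// Store only the user id in the session.
passport.serializeUser(function(user, done) {
  done(null, user.id);
});

// On each request, turn the stored id back into a full user record.
passport.deserializeUser(function(id, done) {
  // Illustrative lookup against the hard-coded list; a real app would hit a database here.
  var user = users.filter(function(u) { return u.id === id; })[0];
  done(null, user || false);
});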

Remembering who you are

Unless you cache something off, the web server will forget everything about your current request the next time you click a link in the application.  I understood that in concept, but the implementation in Node was lost on me.  Every blog post and example I read said that I only needed to define the above three functions and register them with Passport to have it start authenticating users.  However, every page load after that initial login was unauthenticated.  The missing secret was the Express session.  Why was I missing that to begin with?  All the examples and blogs were out of date.  ExpressJS no longer bundles add-ons like session in the core package; they need to be added and configured separately.  As soon as I added the “express-session” package and added a line to initialize it, everything started working.  Cookies were set on the browser and user logins persisted across page loads.  My app started remembering who I was between page views.
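
For reference, the wiring looks something like this; the secret value is a placeholder and the option values are simply the ones I would start with.

var express = require('express');
var session = require('express-session');
var passport = require('passport');

var app = express();

// The session middleware must be registered before passport.session().
app.use(session({
  secret: 'replace-with-a-real-secret',  // placeholder
  resave: false,
  saveUninitialized: false
}));
app.use(passport.initialize());
app.use(passport.session());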

Charting with LinqPad

Over the course of my career I have had to query, extract, and analyze data sets from all sorts of databases.  Microsoft Excel is probably the number one tool for this job, but there are limitations to Excel when consuming and analyzing normalized databases.  That is where someone like me comes in, to build a query that extracts and de-normalizes the data set to make it easy for end users to run their analysis.  The type of analysis I’m talking about here is quick, ad-hoc analysis: someone asked a question and we are going to answer it once.  Excel has some pretty slick tooling for analyzing normalized databases, but it generally isn’t worth setting that up just to answer a single question.

During my career I have been both the query writer and extractor, and the analyzer (working with a data set in Excel to answer a question).  My favorite tool for the first job is LinqPad.  With LinqPad, you are querying in C# and not SQL.  My favorite use case for LinqPad over any other SQL IDE is advanced joining and aggregation over multiple queries.  Producing a single data set to work with can often be completed quicker (IMO) by creating multiple small queries and combining the results rather than trying to write one large SQL query with several joins.  I also recently discovered that LinqPad can display WPF and WinForms controls in its output window.  I figured, why not build and display a .NET chart object?  It works!  So why stop with displaying a single chart that I build once, why not be able to display a chart of any IEnumerable?

I created an extension method for LinqPad called “ToChart” that converts any .NET IEnumerable (i.e. the results of a query in LinqPad) into a .NET chart object.  Using reflection, this proved to be pretty straightforward.  The objects you want to display as data points in the chart follow some naming conventions for their properties.  For example, you must have a property called “Key” to be the X-axis point.  Any numeric properties in the object are then made into the Y-axis points, or series.  You can also customize the properties of each series by appending the property name to a new property in your object, i.e. color could be controlled like so: Value_Color = Color.Red.  In the screenshot below you can see where I’m setting the Color and Tooltip of each point in the “Hours” series.

[Screenshot: LinqPad chart output with custom Color and Tooltip values on the “Hours” series]
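
To make the conventions concrete, a hypothetical call looks something like this.  ToChart is the extension method from the repo linked below; the Projects table, the property values, and the exact suffix names are made up for illustration.

// Hypothetical LinqPad query: the projected property names drive the chart.
// "Key" becomes the X axis, numeric properties become series, and
// "<Series>_Color" / "<Series>_Tooltip" style properties customize each point.
var data = Projects
    .Select(p => new
    {
        Key = p.Name,                                   // X-axis value
        Hours = p.HoursLogged,                          // numeric property -> "Hours" series
        Hours_Color = p.HoursLogged > 40 ? Color.Red : Color.Green,
        Hours_Tooltip = p.Name + ": " + p.HoursLogged + " hours"
    });

data.ToChart().Dump();                                  // render the chart in the LinqPad output window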

Why did I do this?  Number one, because it was fun and because I could.  Secondly, it allows me to stay out of Excel and closer to the data when trying to answer a question.  Finally, it helps me answer the question faster.  Excel is a great tool, but if I can remove one step from the question-answering process, that is great.  Like I said earlier, I use this for quick ad-hoc question answering, not for detailed or recurring reporting.  For more robust or recurring reporting I would look to SSRS or Excel to build a solution.

https://github.com/jcwmoore/blog/tree/master/Blog.ChartExtensions

http://www.linqpad.net/

Microsoft Dependency Injection

ASP.NET Core recently hit the 1.0 milestone, and one of the most interesting features in this version (or edition, I’m not sure of the future of the MVC 5.x series) is that dependency injection is baked into the core of the framework.  That means that our controllers are no longer created by simplistic construction logic and can have complex dependencies.  In order to facilitate my unit testing requirements, my team has (for years) overridden the default IDependencyResolver in MVC to use Unity (Microsoft’s IoC library) to resolve all dependencies (“services”) for MVC.  This approach is functional, and great for unit testing because you can inject mocked members into your controller, but it would sure be nice if that functionality were cooked into the framework so that my team didn’t have to bolt it on.  You can see the details about how to configure DI in MVC Core here.

Unfortunately, I am not ready to jump into MVC Core for my team’s production applications.  It’s not a me thing, it is a we thing.  Jumping into MVC Core is really a decision that all our development teams need to agree to, and we just have too many other things to focus on.  I really believe that MVC Core is where we will go in the future, so I did decide to jump into the DI stack that Microsoft is using in MVC Core.  Unfortunately, Microsoft’s new DI stack is not Unity.  I decided to go against our de facto standard of Unity because if we do go to MVC Core, there’s no reason to continue with Unity.  Unless the new DI stack turned out to be terribly complicated, it would become my team’s standard in the future.  Spoiler alert: IMHO this new DI stack is better and easier to use than Unity.

Unity (and Ninject, and Autofac, etc.) have a ton of awesomely cool features that I have never used in a professional project.  Ever since I understood the idea of dependency injection, I have wanted a simpler tool.  In fact, to learn how DI works and to create a simpler tool to use, I once created my own IoC container.  From my experience and experimentation, the new DI stack is extremely simple.  The root interface, IServiceProvider, has only a single method, GetService.  There are several helper methods that make it simpler to use, like the generic version of GetService.
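
Here is a minimal sketch of the new stack used outside of MVC, assuming the Microsoft.Extensions.DependencyInjection NuGet package; the greeting service and controller types are made up for illustration.

using System;
using Microsoft.Extensions.DependencyInjection;

public interface IGreetingService { string Greet(string name); }

public class GreetingService : IGreetingService
{
    public string Greet(string name) { return "Hello, " + name + "!"; }
}

public class HomeController
{
    private readonly IGreetingService _greetings;
    public HomeController(IGreetingService greetings) { _greetings = greetings; }
    public string Index() { return _greetings.Greet("world"); }
}

class Program
{
    static void Main()
    {
        // Register services, then build the IServiceProvider.
        var services = new ServiceCollection();
        services.AddTransient<IGreetingService, GreetingService>();
        services.AddTransient<HomeController>();

        IServiceProvider provider = services.BuildServiceProvider();

        // The generic helper wraps the single GetService(Type) method on IServiceProvider.
        var controller = provider.GetService<HomeController>();
        Console.WriteLine(controller.Index());
    }
}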

Dependency Injection Code

https://github.com/jcwmoore/blog

Automated builds with Travis

All developers should know the value of unit testing and automated builds.  They ensure quality is built and maintained in your software products.  You don’t want your interns to come in, add some cool functionality to your product, and deploy a broken product because they didn’t understand how their changes impacted existing functionality.  In my last post in this series I went over how to create unit tests for a Node.js application.  Unit tests by themselves are great, but you (and your intern) need to execute them for there to be any value in having them.  Chances are that intern doesn’t know about unit testing and won’t know they should, or even how to, run the unit tests before committing their changes.  Fortunately there are tools that automate testing and reporting on commits.  Travis is one such tool, and it is free to use.  Setting it up for my Node application was stupid easy.  There were three steps.  First, I needed to define a “default” gulp task like so:

 

gulp.task('default', ['run-tests']);

 

This step is necessary because I had only defined the “run-tests” task in gulp when setting up the unit tests; Travis runs the “default” gulp task by default, so it needs to exist.  Second, you need to create a “.travis.yml” file to define how Travis should run.
[Screenshot: the .travis.yml file]
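
For reference, a typical .travis.yml for a gulp-based Node project looks something like this; the Node version and the global gulp install here are assumptions, not necessarily what my file contains.

language: node_js
node_js:
  - "4"
before_install:
  - npm install -g gulp
script: gulp
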
Third, you need to log in to Travis with your GitHub account and select the repository to test.  That is it.  You are then building and testing automatically.  If all is set up right, you will see a screen like this shortly after pushing your changes to GitHub, complete with your “build passing” badge.  Awesome!
[Screenshot: the Travis build results page showing a passing build]