This is my first post in almost 18 months and the kick-off of a series on my role in leading my organization to the cloud, or, to borrow a buzz term, #DigitalTransformation! In the past I have been a diligent blogger, using my blog to document my explorations in technology and leadership; however, the last 18 months have been filled with many personal and professional highs and lows. As a result, I have not been keeping up with my blog.

To kick off this new series (and it will be a big one, 9 or 10 posts) I need to start with a little background. Prior to my stewardship of SharePoint, we created an experimental SharePoint Online tenant to support a project that involved an outside engineering firm. As of 2018, that tenant was defunct. Also, in early 2018, I created a second, "temporary" tenant. After quickly spinning it up and building a forum site, it was turned over to the requestors, where it has sat, unused, since. Finally, in late 2017, I made the decision to pull the plug on a failing SharePoint 2016 upgrade. That decision played into a series of events that resulted in me rebuilding a team that, by early 2018, was focused on fixing SharePoint.
When it came time to spin up our new SharePoint Online environment, I discovered that, in addition to the two known tenants, a third tenant already existed for our company. This tenant had our desired tenant name, but it existed in an "unmanaged" state. It turns out that if people sign up for online services with their work email, Microsoft will create a tenant based on their email address; about a dozen people had already signed up for the free version of Power BI. As a result, I needed to go through the admin takeover process. To complete it, we created a shared inbox called "Office365Admin," so the takeover account would not be tied to an on-prem user account that syncs to Azure AD. I signed up for an account on office.com, signed up for Power BI with that account, and then visited the admin center to start the admin takeover. The process is simple and involved adding a TXT record to our DNS. Once that was completed, our tenant setup was done and ready for licensing.
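Before clicking verify, it is worth confirming that the TXT record has actually propagated. A quick check from the command line looks something like this (contoso.com is a placeholder for your real domain, and the MS=… value shown is the general shape of the Microsoft-issued token, not a real one):

```
# Query the TXT records for your domain; look for the Microsoft
# verification value, which has the form "MS=msXXXXXXXX".
nslookup -type=TXT contoso.com
```

Once the record shows up in the output, the verification step in the admin center should succeed.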
Back in 2016, I led our company into our first Enterprise
Agreement with Microsoft. Our plan at
the time was to upgrade our traditional (on-prem) software platforms and start
experimenting with cloud hosted versions.
For that reason, we purchased Office ProPlus with E3 add-ons. This
license was allocated to one of our existing tenants that was now dormant. With a support request and some email
verification, our old tenant was scheduled for deletion and all the licensing
moved to our new home in the cloud. All
of this took less than one day to finish.
Between 2015 and 2018, SharePoint was one of many responsibilities for my team, but it was the one that was publicly failing and the one my company was losing faith in our ability to manage. Because of this, my role was refocused on fixing this problem. One of the main reasons I chose to make the shift to the cloud was this setup process: in one day, we had a production-ready SharePoint environment.
My next adventure in learning Node centers around mastering some of the most common operations performed on web sites. Recently I dove into user management and authentication. The most common way that a web site secures itself is with a user name/password and then a cookie. Web sites need to use this cookie because your authentication information needs to persist between round trips to the server; the web server needs to remember who you are and the fact that you are logged in. In recent years this story has changed, as you can now log in with a Facebook or Google account to websites that are not Facebook or Google. My intent with Node is just to get down the basics, so I'm only focusing on a basic user name/password situation, but it would sure be nice if I could scale it out and swap in different login providers.

To handle user authentication I discovered the "PassportJS" middleware, which does just that. Passport defines the core functionality and then relies on different "strategies" to swap out different providers. In my experimentation I focused on the "local" strategy, but there are dozens of strategies that connect to different login and authentication providers. The local strategy is basically a do-it-yourself option, perfect for learning. Getting Passport working required three things:
Define your authentication “Strategy”
Tell Passport how to serialize/deserialize users
Make sure you have the session enabled
To get going, I defined the implementation of my local strategy, which is simply to compare a login attempt against a hard-coded list of user names/passwords. It goes without saying that this is terrible practice, but I'm practicing. A real-life local strategy would call out to a database, hash the password, and check for a match between the user name and hashed password; oh, and not surface the user's password to the rest of the application.
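A sketch of that hard-coded approach; the user list and the `verify` callback below are invented for illustration, but the callback has the shape Passport's local strategy expects (`done(error, user)` on success, `done(null, false)` on failure):

```javascript
// Hard-coded user list -- fine for practice, terrible for production.
const users = [
  { id: 1, username: 'alice', password: 'wonderland' },
  { id: 2, username: 'bob', password: 'builder' }
];

// Verify callback: compare the login attempt against the list.
function verify(username, password, done) {
  const user = users.find(u => u.username === username && u.password === password);
  if (!user) {
    return done(null, false, { message: 'Bad username or password' });
  }
  return done(null, user);
}

// With Passport installed, this callback would be registered like so:
// passport.use(new LocalStrategy(verify));
```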
The second thing is to tell Passport how to serialize and deserialize user records. When a user visits a page on your application, Passport will hydrate the user record for you so that your application has access to details about the user visiting the website. In my example, that means I have access to the user's name, password, and id. If this were a true local implementation, I would deserialize a user by executing a database call with a session identifier, pulling out the user's information, and extending the session; certainly not leaking out the user's password.
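A sketch of those two functions under the same hard-coded-list assumption (the list and function bodies are illustrative, not the post's exact code):

```javascript
// Same hard-coded "database" as the strategy example.
const users = [
  { id: 1, username: 'alice', password: 'wonderland' },
  { id: 2, username: 'bob', password: 'builder' }
];

// Serialize: store only the user's id in the session cookie.
function serializeUser(user, done) {
  done(null, user.id);
}

// Deserialize: rebuild the user record from the stored id on each request.
function deserializeUser(id, done) {
  const user = users.find(u => u.id === id);
  if (!user) return done(new Error('Unknown user: ' + id));
  // Strip the password so it never leaks to the rest of the application.
  const { password, ...safeUser } = user;
  done(null, safeUser);
}

// With Passport installed these would be registered as:
// passport.serializeUser(serializeUser);
// passport.deserializeUser(deserializeUser);
```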
Remembering who you are
Unless you cache something off, the web server will forget everything about your current request the next time you click on a link in the application. I understood that in concept, but the implementation in Node was lost on me. Every blog post and example I read said that I only needed to define the above three functions and register them with Passport to have it start authenticating users. However, every page load after the initial login was unauthenticated. The missing secret was the Express session. Why was I missing that to begin with? All the examples and blogs were out of date: ExpressJS no longer bundles add-ons, like session, in the core package; they need to be added and configured separately. As soon as I added the "express-session" package and added a line to initialize it, everything started working. Cookies were set on the browser and user logins persisted across page loads. My app started remembering who I was between page views.
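The wiring looks roughly like this, assuming the express, express-session, and passport packages are installed; the secret value is a placeholder, and middleware order is the important part:

```
const express = require('express');
const session = require('express-session');
const passport = require('passport');

const app = express();

// express-session must be registered BEFORE passport.session(),
// because passport.session() reads the session this middleware creates.
app.use(session({
  secret: 'replace-with-a-real-secret', // placeholder
  resave: false,
  saveUninitialized: false
}));

app.use(passport.initialize());
app.use(passport.session());
```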
Over the course of my career I have had to query, extract, and analyze data sets from all sorts of databases. Microsoft Excel is probably the number one tool for this job, but there are limitations to Excel in consuming and analyzing normalized databases. That is where someone like me comes in, to build a query that extracts and de-normalizes the data set, making it easy for end users to run their analysis. The type of analysis I'm talking about here is the quick ad-hoc kind: someone asked a question and we are going to answer it once. Excel has some pretty slick tooling for analyzing normalized databases, but it generally isn't worth setting that up just to answer one question.
During my career I have been both the query writer and extractor, and the analyzer (working with a dataset in Excel to answer a question). My favorite tool for the first job is LinqPad. With LinqPad, you are querying in C# and not SQL. My favorite use case for LinqPad over any other SQL IDE is advanced joining and aggregation over multiple queries. Producing a single dataset to work with can often be completed quicker (IMO) by creating multiple small queries and combining the results, rather than trying to write one large SQL query with several joins. I also recently discovered that LinqPad can display WPF and WinForms controls in its output window. I figured, why not build and display a .NET chart object? It works! So why stop with a single chart that I build once; why not be able to display a chart of any IEnumerable?
I created an extension method for LinqPad called "ToChart" that converts any .NET IEnumerable (i.e., the results of a query in LinqPad) into a .NET chart object. Using reflection, this proved to be pretty straightforward. The objects you want to display as data points in the chart follow some naming conventions for their properties. For example, you must have a property called "Key" to be the X-axis point. Any numeric properties in the object are then made into the Y-axis points, or series. You can also customize the properties of each series by appending the property name to a new property in your object; for example, color could be controlled like so: Value_Color = Color.Red. In the below screenshot you can see where I'm setting the Color and Tooltip of each point in the "Hours" series.
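A sketch of how such an extension method could be structured, using the WinForms charting types and the "Key" convention described above; the reflection details here are illustrative, not the post's exact implementation:

```
using System;
using System.Collections.Generic;
using System.Linq;
using System.Windows.Forms.DataVisualization.Charting;

public static class ChartExtensions
{
    public static Chart ToChart<T>(this IEnumerable<T> source)
    {
        var chart = new Chart();
        chart.ChartAreas.Add(new ChartArea());

        var props = typeof(T).GetProperties();
        // Convention: the "Key" property supplies the X-axis values.
        var keyProp = props.First(p => p.Name == "Key");
        // Every other numeric property becomes its own Series (Y-axis).
        var numericProps = props.Where(p =>
            p.Name != "Key" &&
            (p.PropertyType == typeof(int) ||
             p.PropertyType == typeof(double) ||
             p.PropertyType == typeof(decimal)));

        foreach (var prop in numericProps)
        {
            var series = new Series(prop.Name);
            foreach (var item in source)
            {
                var x = keyProp.GetValue(item);
                var y = Convert.ToDouble(prop.GetValue(item));
                series.Points.AddXY(x, y);
            }
            chart.Series.Add(series);
        }
        return chart;
    }
}
```

Handling the `Value_Color`-style customization properties would be a second reflection pass over properties whose names end in a suffix like `_Color` or `_Tooltip`, applied to the matching series.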
Why did I do this? Number one, because it was fun and because I could. Second, it allowed me to stay out of Excel and keep closer to the data when trying to answer a question. Finally, it helps me answer the question faster. Excel is a great tool, but if I can remove one step from the question-answering process, then that is great. Like I said earlier, I use this for quick ad-hoc question answering, not for detailed or recurring reporting. For more robust or recurring reporting I would look to SSRS or Excel to build a solution.
ASP.NET Core recently hit the 1.0 milestone, and one of the most interesting features in this version (or edition; I'm not sure of the future of the MVC 5.x series) is that dependency injection is baked into the core of the framework. That means that our controllers are no longer created by simplistic construction logic and can have complex dependencies. In order to facilitate my unit testing requirements, my team has (for years) overridden the default IDependencyResolver in MVC to use Unity (Microsoft's IoC library) to resolve all dependencies ("services") for MVC. This approach is functional, and great for unit testing because you can inject mocked members into your controller, but it would sure be nice if that functionality were cooked into the framework so that my team didn't have to bolt it on. You can see the details about how to configure DI in MVC Core here.
Unfortunately, I am not ready to jump into MVC Core for my team's production applications. It's not a me thing, it is a we thing. Jumping into MVC Core is really a decision that all our development teams need to agree to, and we just have too many other things to focus on. I really believe that MVC Core is where we will go in the future, so I did decide to jump into the DI stack that Microsoft is using in MVC Core. That stack is not Unity. I decided to go against our de facto standard of Unity because if we do go to MVC Core, there's no reason to continue with it. Unless the new DI stack is terribly complicated, it will become my team's standard in the future. Spoiler alert: IMHO, this new DI stack is better and easier to use than Unity.
Unity (and Ninject, and AutoFac, etc.) have a ton of awesomely cool features that I have never used in a professional project. Ever since I understood the idea of dependency injection, I have wanted a simpler tool. In fact, to learn how DI works and to create a simpler tool to use, I once built my own IoC container. From my experience and experimentation, the new DI stack is extremely simple. The root interface, "IServiceProvider," has only a single method, GetService. There are several helper methods to make it simpler to use, like the generic version of that method.
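A minimal sketch of the new stack in action; the IGreeter service is an invented example, but ServiceCollection, BuildServiceProvider, and the generic GetService<T> helper are the real Microsoft.Extensions.DependencyInjection surface:

```
using System;
using Microsoft.Extensions.DependencyInjection;

public interface IGreeter { string Greet(string name); }

public class Greeter : IGreeter
{
    public string Greet(string name) => $"Hello, {name}";
}

public class Program
{
    public static void Main()
    {
        // Register services, then build the IServiceProvider.
        var provider = new ServiceCollection()
            .AddTransient<IGreeter, Greeter>()
            .BuildServiceProvider();

        // GetService<T> is a helper wrapping IServiceProvider.GetService.
        var greeter = provider.GetService<IGreeter>();
        Console.WriteLine(greeter.Greet("world"));
    }
}
```

Compare that to a typical Unity registration and resolve: the shape is nearly identical, which is part of why the switch feels low-risk.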
All developers should know the value of unit testing and automated builds. They ensure quality is built and maintained in your software products. You don't want your interns to come in, add some cool functionality to your product, and deploy a broken product because they didn't understand how their changes impacted existing functionality. In my last post in this series I went over how to create unit tests for a Node.js application. Unit tests by themselves are great, but you (and your intern) need to execute them for there to be any value in having them. Chances are that intern doesn't know about unit testing and won't know they should, or even how to, run the unit tests before committing their changes. Fortunately, there are tools that automate testing and reporting on commits. Travis is one such tool, and it is free to use. Setting it up for my Node application was stupid easy. There were three steps. First, I needed to define a "default" gulp task like so:
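A default task that simply delegates to the existing "run-test" task can be as short as this (gulp 3.x array-dependency syntax, matching the era of this series):

```
// gulpfile.js
var gulp = require('gulp');

// The "default" task does nothing itself; it just runs "run-test".
gulp.task('default', ['run-test']);
```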
This step is necessary because I only defined the "run-test" task in gulp when setting up the unit tests; Travis runs the "default" gulp task by default, so it needs to exist. Second, you need to create a ".travis.yml" file to define how Travis should run.
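A minimal .travis.yml for a Node project looks something like this; the Node version and the global gulp install are assumptions for illustration, not taken from the post:

```
language: node_js
node_js:
  - "6"
before_script:
  - npm install -g gulp
script: gulp   # runs the "default" gulp task defined above
```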
Third, you need to log in to Travis with your GitHub account and select the repository to test. That is it. You are then building and testing automatically. If all is set up right, you will see a screen like this shortly after pushing your changes to GitHub, complete with your "build passing" badge. Awesome!