Ignite 2017: Dynamics 365

Day two of Ignite, and this one was kind of a letdown. I was super stoked to learn more about containers, so I scheduled two classes related to containers, one focused on SQL Server and one focused on Visual Studio. The latter was a complete disappointment. It was a demonstration of new and upcoming Visual Studio features; they didn't even talk about containers, and worse, some of the stuff demoed I am already using and some of it I saw last year at Build 2016. The SQL Server container talk was very interesting; however, I didn't learn anything new. I already knew that containers are stateless. I already knew that being stateless creates a problem for any database. And I already knew that some type of attached storage would be required to make SQL Server work in containers. The session was interesting, but I didn't leave with any new ideas. My greatest inspiration today came from my first session of the day, which covered Dynamics 365 for Finance and Operations.

My tweet puts it best: Dynamics 365 could be a digital transformation for my organization. If I ask myself what causes me the most pain at work, the answer would be on-prem software upgrades. These "projects" suck the life out of my team; they add zero value to the organization (probably even negative value because of the time commitments required from the business to communicate, train, and test) and they block my team members from doing their primary job, delivering business solutions with software. Dynamics 365 can solve this problem. Going to a hosted solution will permanently get my team out of the software patching and upgrading business. With that added time, we can learn Power BI and Power Apps and extend and enhance these software platforms to add value beyond the core offering. The lesson I am leaving with today is that I must be dauntless in focusing my team on the work where they add the most value to our organization, and in getting the mundane, non-value-adding work out of my organization and into the hands of a trusted vendor. O365, SharePoint Online, and Dynamics 365, I am ready to start exploring what you offer.

Ignite 2017: Modern Office, Business Apps, and Quantum Computing?

Today at Ignite, Satya Nadella kicked off the day with the keynote. I took four things away from this talk. First, Microsoft is not divesting from developers, infrastructure, data, and AI; all of those topics came up in the keynote, but they were not the focus. Under Mr. Nadella, Microsoft set out to reinvent themselves, and from where I was sitting today, I can say mission accomplished. Today I am proud to call myself a Microsoft developer (even if I am getting into management). So proud, in fact, that I was able to tell the story of Microsoft's reinvention to a potential intern when they asked me about my team's usage and acceptance of open source technology and products. It is truly a great experience to tell my future employees that we embrace open source and are better because of it. Three years ago, Microsoft was focused on open source, Azure services, and AI. Today, they are leaders in those areas, so the focus for the future is changing. The three areas of focus I took from the keynote are "Modern Workspaces," "Business Applications" (aka Dynamics 365), and, surprisingly, quantum computing.

Modern Workspaces
The idea of a "Modern Workspace" is the evolution of the workspace for a mobile-first, cloud-first world: seamlessly working from anywhere, at any time. It is the evolution of Office 365 into something bigger than a software service or offering. The Microsoft Graph is building out connections across people, content, and documents, and Microsoft's acquisition of LinkedIn is also becoming part of this graph. One of the coolest announcements was Bing for Business. I sat in on a presentation on search this afternoon with Naomi Moneypenny and Kathrine Hammervold, and for the first time I truly understood why search in the enterprise is so hard. Microsoft and Google got search right in the consumer space, so it seems like it should just work in the enterprise, but it turns out the needs of enterprise search are vastly different from consumer search. Bing for Business will tie into the Microsoft Graph to make enterprise search better. Super exciting!

Business Applications
Dynamics 365 made a big appearance today, both in the sessions and in the keynote. There was a lot of talk about how this platform is leveraging the Microsoft Graph to improve business outcomes. I attended a Business Apps keynote after the main keynote, and they demoed uses of Dynamics 365, together with the graph and LinkedIn, to make recruiting and talent discovery easier. My observation is that Microsoft views Dynamics 365 as a mature offering and a natural extension of the core offerings of Office 365. I'm looking forward to exploring this product line in the future.

Quantum Computing
This was the biggest topic in the keynote, and for me it came out of left field. There was a full panel discussion on the advances being made in building a quantum computer. Microsoft announced that quantum computing is available today in simulators, both in Visual Studio (on your PC) and in Azure. I think this is a clear signal that true quantum computing is coming, and that it is coming at scale soon. This is not some research or pet project at Microsoft. It feels very similar to Microsoft's vision of AI three or four years ago, which today is very real. I can only imagine what quantum computing at scale will do for our world if it comes to fruition in the next four years.

Container Fest

This week I had the opportunity to attend Microsoft Ignite in Orlando, and I will be making my best effort to document and communicate my learnings each day. Today was day zero, and I was able to attend the pre-conference training "Container Fest." Overall, I was a little disappointed because the session was very lecture-heavy and light on demos and labs; my crew and I even got kicked out at 6 PM while we were working on the lab :(. Don't get me wrong, I learned a ton, and this class validated my assumptions about containers: they totally rock!

Pros
It is cliché, but containers are designed to "build once, run anywhere." My entire career, I have been chasing the dream where a line of code is written once and works anywhere I want it to work. Despite all my unit testing, automated deployments, and documentation, production applications still break because changes were made to the system, network, software, etc.

Infrastructure as code is a requirement with containers. Your server configuration, network setup, patch level, everything about your container, is defined in code. You get all the benefits associated with that, such as rapid deployments, scalability, and change management. Most importantly, everyone working on a product or solution is speaking the same language and using the same technology stack to solve the problem at hand.
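As a concrete illustration, everything below would live in source control right next to the application code. This is only a minimal sketch of a Dockerfile for a hypothetical Node.js web app; the base image, port, and start command are assumptions for the example:

# Illustrative Dockerfile sketch for a hypothetical Node.js web app.
# The OS layer, runtime version, dependencies, and start command are all declared here.

# Pin the base image (operating system layer plus Node.js version)
FROM node:8-alpine

# All subsequent commands run from /app inside the image
WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY package.json .
RUN npm install --production

# Copy in the application source
COPY . .

# Document the listening port and define the startup command
EXPOSE 3000
CMD ["node", "server.js"]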

Cons
There is still overhead to maintaining and operating these containers and their hosts. You still have operating systems that need to be patched and rebooted. Even though there is zero downtime when done correctly, your software stack will still be updated and you will still need to keep it up to date. If you set up your build and deploy process well, this overhead can be minimal, but it is still a consideration.

Conclusions
I see two situations where containers are a no-brainer. The first is if you value portability: containers run on-prem and in any public cloud, so if portability or diversification is part of your cloud strategy, containers enable it. The second is for applications where you do not have sufficient control over the build and release process, i.e. third-party software. I still have a lot to learn about how to make this work, but containers can make updates to systems seamless and automatic. More importantly, they can also make changes to the underlying software stack equally seamless and automatic. I cannot count the number of times an operating system update had some negative impact on a production system that I had to deal with the next morning. It is not that the people applying the updates are doing anything wrong; they just don't know everything about every system. Containers have the potential to automate and improve the delivery of changes to first- and third-party systems, and I intend to explore more and learn how.

ORM in Node.js

In my last post (a long time ago) in this series, I described how to solve one of the most basic problems in web development with Node: user authentication. In this post I'm going to talk about another common web development problem, database access. Specifically, using an ORM library to facilitate (and accelerate) database access. Since I first understood the concept of ORM (Object Relational Mapping) and used it in a professional setting, I have been a huge fan. My first exposure to ORM (where I truly understood what was going on) was with LINQ to SQL while building a WCF service in C#. It was super empowering when I realized that I could query the database by writing C# code! No more string concatenation to build queries (a major opening for SQL injection attacks) and no more dependencies on other teams to build stored procedures. As a developer, this was extremely liberating! Naturally, when exploring a new framework I want to learn what libraries and tools are available to provide the ORM. For Node, I have chosen Sequelize. I'm going to go over defining a model (a table) and querying the database.

The Model

In ORM, the model is the representation of your database table. It defines what columns there are, the data types they hold, relationships to other models (tables) and, in more advanced cases, things like indexes and constraints. In my experience, no ORM library gives you the full functionality of SQL when defining your tables, so the question is, why bother with defining models? The answer is portability. Sequelize (like any good ORM tool) can map to different SQL dialects; it supports MySQL, SQLite, and SQL Server. If you are running automated tests, no problem: Sequelize will use the in-memory SQLite database. If you are deploying to a Linux cloud or an on-prem Microsoft environment, again no problem, because the library will generate SQL for the targeted environment. An ORM can also generate the database for you and update the schema. This concept is called 'migrations'; it is a much more involved topic and I will probably devote an entire post to it in the future. Below is a very basic model that describes a database table called 'WeatherObservation.' I have defined a few columns to store some data from a Raspberry Pi and a DHT11 sensor.
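Here is a minimal sketch of what such a model can look like with Sequelize (the column names, connection settings, and file layout are illustrative assumptions, not a definitive design):

// models/weatherObservation.js
// A basic Sequelize model for the 'WeatherObservation' table.
const Sequelize = require('sequelize');

// SQLite keeps the example self-contained; swap the dialect for MySQL or SQL Server as needed.
const sequelize = new Sequelize('weather', null, null, {
  dialect: 'sqlite',
  storage: 'weather.sqlite'
});

const WeatherObservation = sequelize.define('WeatherObservation', {
  id: { type: Sequelize.INTEGER, primaryKey: true, autoIncrement: true },
  // Readings from the DHT11 sensor on the Raspberry Pi
  temperature: { type: Sequelize.FLOAT, allowNull: false },
  humidity: { type: Sequelize.FLOAT, allowNull: false },
  // When the observation was taken
  observedAt: { type: Sequelize.DATE, defaultValue: Sequelize.NOW }
});

// sequelize.sync() will create the table on startup; migrations are the more robust option mentioned above.
module.exports = { sequelize, WeatherObservation };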

Querying

Once you have the model defined, inserting, updating, and querying data in your database becomes native code; it is the same as all the other code you are writing for your application. Below are two examples. The first is a simple GET operation that returns all records in a database table. Notice that you don't have to write any SQL or parse fields out of the database query; the ORM library does all of that for you, and you get your data back in a format that is easy to manipulate and work with. You are also using the "all" method from Sequelize, so you are simply writing JavaScript. The second example is a POST operation that generates an INSERT statement using the "create" method from Sequelize.
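As a rough sketch (assuming an Express app with JSON body parsing already wired up; the route paths and file names are illustrative), the two operations might look like this:

// routes/observations.js
// GET and POST endpoints built on the WeatherObservation model defined above.
const express = require('express');
const { WeatherObservation } = require('../models/weatherObservation');

const router = express.Router();

// GET /observations: findAll (aliased as "all" in older Sequelize releases)
// issues the SELECT and maps each row to a plain object for us.
router.get('/observations', (req, res) => {
  WeatherObservation.findAll()
    .then(observations => res.json(observations))
    .catch(err => res.status(500).json({ error: err.message }));
});

// POST /observations: create generates the INSERT statement from the supplied values.
router.post('/observations', (req, res) => {
  WeatherObservation.create({
    temperature: req.body.temperature,
    humidity: req.body.humidity
  })
    .then(observation => res.status(201).json(observation))
    .catch(err => res.status(400).json({ error: err.message }));
});

module.exports = router;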

Automation with the TFS web hook and REST services

At Associated Electric, we leverage Microsoft's Team Foundation Server (TFS) for work tracking and as a source code repository. Over the years I have customized and built out work items to help me manage and stay on top of the work my team is doing. I have also experimented with some of the forecasting and scheduling functions of TFS, i.e. burn-downs, Kanban boards, task boards, etc. The challenge with many of these functions is that the data required to make them work can be intense. A medium-sized project may consist of a single TFS work item with a dozen or so user stories, and those user stories may each have half a dozen tasks, all requiring a half dozen or so fields just to communicate the work, who's doing it, and when. Getting a burn-down and accurate forecasting in TFS requires at least another half dozen fields, and if you don't take the time to think about each of those fields, having them won't provide you with any value. For a project like this, you may easily spend a day or two just keying work items into TFS, not a very exciting endeavor for an analyst or developer. I settled for using story points on the user stories and not bothering with the task level, but I was always hungry for more out of TFS.

Recently I had to spend an extended amount of time away from managing my team, and when I returned, I discovered that things were not running as well as I would have hoped. It is nothing against my team; they were just executing on the work the best they knew how. It became obvious to me that I needed more insight into all the work going on in my team, and they needed more clarity on priorities. Across my team there is a large range of work, everything from writing annual reviews to tier-two user support to implementing ASP.NET web applications. The one thing all of that work has in common is that it can be abstracted into tasks: tasks that can be scheduled and prioritized in TFS.

I also discovered that TFS 2015 has service hooks, including a generic web hook, that can be called on work item creation and updates. From there my creativity exploded. All of my team's project milestones and support requests are modeled as User Stories, and using the TFS REST APIs, I created a web hook handler that automatically creates a child task for the implementation of the project milestone or the resolution of the support request. I even plugged into the time tracking data we maintain to populate time estimates into these tasks based on the TFS area. Doing that instantly enabled me to start leveraging the TFS task board with accurate time estimates and absolutely no additional data entry. I also implemented a web hook for work item updates that automatically closes the parent user story when all child tasks are closed. I created a second update action to propagate changes from the parent user story down to all active children, i.e. if you carry a user story forward into the next iteration, the iteration will automatically update for the remaining child tasks. By automating this data entry, my team is now using the task board, which gives me a single place to review all the work going on and the opportunity to easily guide and correct work priorities for team members. All of this is happening without any additional data entry on my part. I've included the models you can use to deserialize the JSON calls from the TFS service hooks to help anyone else get started.

 

TfsWebHookModel.cs
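Below is a minimal sketch of what these deserialization classes can look like. The property names follow the standard eventType/resource/fields layout of the generic web hook payload and assume Newtonsoft.Json; trim or extend the fields to match the events you actually subscribe to:

// TfsWebHookModel.cs
// Classes for deserializing the JSON posted by TFS 2015 service hooks (generic web hook).
using System;
using System.Collections.Generic;
using Newtonsoft.Json;

public class TfsWebHookModel
{
    [JsonProperty("subscriptionId")]
    public string SubscriptionId { get; set; }

    // e.g. "workitem.created" or "workitem.updated"
    [JsonProperty("eventType")]
    public string EventType { get; set; }

    [JsonProperty("resource")]
    public TfsWorkItemResource Resource { get; set; }

    [JsonProperty("createdDate")]
    public DateTime CreatedDate { get; set; }
}

public class TfsWorkItemResource
{
    [JsonProperty("id")]
    public int Id { get; set; }

    [JsonProperty("rev")]
    public int Rev { get; set; }

    // Work item fields arrive keyed by reference name,
    // e.g. Fields["System.WorkItemType"] == "User Story".
    [JsonProperty("fields")]
    public Dictionary<string, object> Fields { get; set; }

    [JsonProperty("url")]
    public string Url { get; set; }
}

An ASP.NET Web API action that accepts a TfsWebHookModel parameter can then branch on EventType and call back into the TFS REST APIs to create or update the related work items.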